Mission Control Installation

Mission Control is the next-generation operations platform for DataStax Enterprise Hyper-Converged Database (HCD) 1.0. It simplifies management of operations across an array of hosting options, from cloud to hybrid to on-premises, in environments running on bare-metal or virtual machines (VMs). Powered by the advanced automation that runs DataStax Astra DB, Mission Control provides 24/7 automated operations of nodes, datacenters, and clusters.

Mission Control v1.3.0+ supports Hyper-Converged Database (HCD) 1.0, the new hyper-converged product featuring vector indexes. These new indexes enable application development using generative Artificial Intelligence (AI).

Easily deploy Hyper-Converged Database (HCD) 1.0 in a production-ready environment across multiple servers and immediately start testing. To run a single-node development instance, consider following the container-based instructions.

Run Hyper-Converged Database (HCD) 1.0 in Mission Control

Mission Control is the easiest way to deploy both self-managed and cloud-based installations across multiple nodes running either in the same datacenter or on multiple clouds. Mission Control builds on the rock-solid K8ssandra project to manage the configuration and life cycle of core services built around cloud-native technologies, including Kubernetes (K8s).

After meeting the Prerequisites, install Mission Control either on existing Kubernetes infrastructure or on bare-metal or virtual machines.

The Kind tool can create a local Kubernetes cluster with Docker container nodes on which to install Mission Control. Allocate at least 16GB of RAM and four cores to Docker.
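For example, a minimal Kind configuration with one control-plane node and three workers, plus the commands to create the cluster, might look like the following sketch. The cluster name mission-control and the file name kind-config.yaml are only examples.

    # kind-config.yaml -- one control-plane node and three worker nodes
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
      - role: control-plane
      - role: worker
      - role: worker
      - role: worker

    # Create the cluster and confirm that kubectl points at it
    kind create cluster --name mission-control --config kind-config.yaml
    kubectl cluster-info --context kind-mission-control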

Prerequisites

  • A downloaded Mission Control license file.

    Mission Control requires a license file to provide Kubernetes Off-The-Shelf (KOTS) with information required for installation. This information includes customer identifiers, software update channels, and entitlements.

    Are you exploring Mission Control as a solution for your organization? Fill out this registration form to request a community edition license.

    If you need a replacement license file or a non-community edition, or want to convert your Public Preview license to use a stable channel release version, please contact your account team.

  • kubectl: the Kubernetes command line tool, v1.22 or later, allows direct interactions with the Kubernetes cluster.

  • cert-manager: certificate management for Kubernetes. Follow the installation instructions if cert-manager is not yet installed in the cluster.

  • Kots CLI: Mission Control is packaged with Replicated, and is installed through the Kots plugin for kubectl. Follow the installation instructions.

  • An existing Kubernetes cluster with at least three worker nodes (each with a minimum of four cores and 8GB of RAM).

  • Kubeconfig file or context pointing to a Control Plane Kubernetes cluster. A quick verification sketch for these prerequisites follows this list.
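The following sketch is one way to verify these prerequisites from a terminal; the cert-manager namespace shown is the project default and may differ in your cluster.

    # kubectl v1.22 or later
    kubectl version --client

    # The current context points at the intended Control Plane cluster
    kubectl config current-context

    # At least three worker nodes reporting Ready
    kubectl get nodes

    # cert-manager is installed (assumes the default cert-manager namespace)
    kubectl get pods -n cert-manager

    # The Kots plugin for kubectl is available
    kubectl kots version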

Choose how to install Mission Control

Install Mission Control on existing Kubernetes infrastructure

Mission Control is compatible with cloud-managed Kubernetes services such as Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE), and Azure Kubernetes Service (AKS), as well as the on-premises deployment tools kubeadm, OpenShift, RKE, and Kind.

  1. Ensure that the Kubernetes context is targeting the right cluster, and then run the following command:

    kubectl kots install mission-control -n mission-control
    Results
    • Deploying Admin Console
        • Creating namespace ✓
        • Waiting for datastore to be ready ✓
    Enter a new password to be used for the Admin Console: ••••••••••
      • Waiting for Admin Console to be ready ✓
    
      • Press Ctrl+C to exit
      • Go to http://localhost:8800 to access the Admin Console

    The installer prompts you to set an administrative password for the Kots UI and then port-forwards the admin console to localhost. A sketch for reopening the console later follows these steps.

  2. Proceed to Finalize the Mission Control installation.
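If the port-forward is interrupted, the admin console can be reopened later with the Kots plugin. This sketch assumes the same mission-control namespace used in the install command above:

    # Re-establish the port-forward to the Kots admin UI at http://localhost:8800
    kubectl kots admin-console -n mission-control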

Install on bare-metal or VMs

The open-source K8s installer kURL is an embedded Kubernetes runtime that allows the installation of Mission Control on bare-metal or virtual machines without any pre-existing Kubernetes cluster.

The following example assumes a set of three VMs running on the same network in the chosen environment, with no network restrictions between them and with internet access.

  1. Run the following command on one of the VM instances:

    curl -sSL https://kurl.sh/mission-control-private-preview | sudo bash
    Results
            Installation
                Complete ✔
    
    Kotsadm: http://30.91.53.115:8800
    Login with password (will not be shown again): *********

    This is a default password. The recommendation is to change this password as follows:

    kubectl kots reset-password default
  2. Save the results! After the installation completes, the result provides two important pieces of information that must be saved for future reference:

    • The URL and admin password for the Kots admin UI that you use to proceed with the rest of the Mission Control installation.

    • The command to run on each and every other node that is part of the cluster.

  3. To add worker nodes to this installation, run the following script on each of the other nodes in the cluster:

    curl -fsSL https://kurl.sh/version/v2023.07.11-0/mission-control-private-preview/join.sh | \
      sudo bash -s kubernetes-master-address=175.32.21.207:6443 \
      kubeadm-token=519y9r.5rvob6osa35gq \
      kubeadm-token-ca-hash=sha256:e5f1923e8648372f632e3af251617612459a26ba51e3fef54b2639043788c \
      kubernetes-version=1.24.15 \
      ekco-address=172.31.24.207:31880 \
      ekco-auth-token=t93tD121B5WiGHD3glwGs0UEHuMJGHydbjpdjEDA9EWsUiz4SbbRqRHuaHHi4 \
      docker-registry-ip=11.90.12.92 \
      additional-no-proxy-addresses=11.90.0.0/22,11.30.0.0/20 \
      primary-host=175.32.21.207
  4. Run that script on each remaining node in the cluster before accessing the Kots admin UI.

  5. After finishing the installation on all nodes, ssh back into the Control Plane node and check the cluster state:

    kubectl get nodes
    Results
    NAME               STATUS   ROLES                  AGE   VERSION
    ip-175-32-24-217   Ready    control-plane,master   61m   v1.24.15
    ip-175-32-36-162   Ready    <none>                 49m   v1.24.15
    ip-175-32-7-69     Ready    <none>                 47m   v1.24.15

    The result should display three nodes in the Ready state.
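    Optionally, before opening the Kots admin UI, confirm that the Kots admin components themselves are running. The default namespace below matches the reset-password command shown earlier:

    # The Kots admin components run in the default namespace on a kURL install
    kubectl get pods -n default

    Now proceed to Finalize the Mission Control installation.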

Finalize the Mission Control installation

  1. Log in to the Kots admin UI and step through the certificate-related screens, ending with the selection of a Self Signed certificate for HTTPS.

  2. When prompted, upload your previously downloaded and saved license file. After the system accepts your license file, it displays a configuration window.

  3. In the configuration window, select the corresponding checkbox to enable the following settings:

    1. Control Plane - specifies whether Mission Control should be deployed in control plane versus data plane mode.

    2. Advanced Options: Allow monitoring components on DSE nodes - deploys monitoring components (such as Vector and Mimir) on DSE worker nodes. Enable this only for constrained environments.

    3. Enable Observability Stack - requires setting up an object storage connection to either S3 or GCS.

  4. After you click Continue, a preflight check runs. Green circles with checkmarks indicate success; click Deploy to finalize the installation.

  5. The deployment continues and displays Mission Control installation status with a Currently deployed version label.
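To confirm the deployment from the command line, check that the operator pods are running in the mission-control namespace. Pod names and counts vary by version, so this is only a quick sanity check:

    # The cass-operator, k8ssandra-operator, and mission-control controller pods
    # should all report Running
    kubectl get pods -n mission-control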

Mission Control is now operational! Continue with Deploy a HCD 1.0 cluster.

Deploy a HCD 1.0 cluster

Mission Control supports Hyper-Converged Database (HCD) and this is one way to set it up.

  1. On a local machine, create a manifest file named hcd1.0cluster.yaml to describe the cluster topology, and copy the following code into the file:

    apiVersion: missioncontrol.datastax.com/v1alpha1
    kind: HCDCluster
    metadata:
      name: test
      namespace: mission-control
    spec:
      serverVersion: "1.0"
      storageConfig:
        cassandraDataVolumeClaimSpec:
          storageClassName: default
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 10Gi
      config:
        jvmOptions:
          heapSize: 1G
      datacenters:
        - metadata:
            name: dc1
          size: 3
          resources:
            requests:
              memory: 2Gi
  2. Either change the storageClassName to a preferred value, matching the ones available in the installation, or leave the default value. To determine which storage classes are available in the environment, run:

    kubectl get sc
  3. When using VMs with Mission Control's embedded Kubernetes runtime, add the following networking section, at the same level as the config section, to the hcd1.0cluster.yaml file:

     ...
      networking:
        hostNetwork: true
      config:
        ...

    This enables the deployed services to be directly available on the network.

  4. Apply the manifest by running the following kubectl command from a machine console:

    kubectl apply -f hcd1.0cluster.yaml

    After a few seconds, check that the pods representing the nodes appear:

    $ kubectl get pods -n mission-control
    Results
    NAME                                                  READY   STATUS    RESTARTS   AGE
    cass-operator-controller-manager-6487b8fb6c-xkjjx     1/1     Running   0          41m
    k8ssandra-operator-55b44544d6-n8gs8                   1/1     Running   0          41m
    mission-control-controller-manager-54c64975cd-nvcm7   1/1     Running   0          41m
    test-dc1-default-sts-0                                0/2     Pending   0          7s
    test-dc1-default-sts-1                                0/2     Pending   0          7s
    test-dc1-default-sts-2                                0/2     Pending   0          7s

    Each node must go through the standard bootstrapping process, taking approximately 2 to 3 minutes. Upon completion, the nodes should display 2/2 under READY and Running under STATUS:

    NAME                                                  READY   STATUS    RESTARTS   AGE
    cass-operator-controller-manager-6487b8fb6c-xkjjx     1/1     Running   0          50m
    k8ssandra-operator-55b44544d6-n8gs8                   1/1     Running   0          50m
    mission-control-controller-manager-54c64975cd-nvcm7   1/1     Running   0          50m
    test-dc1-default-sts-0                                2/2     Running   0          9m6s
    test-dc1-default-sts-1                                2/2     Running   0          9m6s
    test-dc1-default-sts-2                                2/2     Running   0          9m6s

    Should any pods list their STATUS as Pending, there may be issues with resource availability. Check the pod information with this command:

    kubectl describe pod pod-name

    The HCD 1.0 cluster is operational when all of the nodes indicate 2/2 under READY and Running under STATUS.
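    Rather than polling manually, you can also ask Kubernetes to wait for the pods to become ready. The pod names below reuse the test cluster and dc1 datacenter names from the example manifest and are only illustrative:

    # Wait up to 10 minutes for the three HCD pods to pass their readiness checks
    kubectl wait --for=condition=Ready pod \
      test-dc1-default-sts-0 test-dc1-default-sts-1 test-dc1-default-sts-2 \
      -n mission-control --timeout=600s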

Connect to the cluster using CQLSH

Now that HCD 1.0 is up and running, connect to the cluster using the previously downloaded cqlsh binary with Vector index support.

Mission Control is secured by default and generates a unique superuser after disabling the default cassandra account.

  1. Discover the username of this generated superuser by accessing the <cluster-name>-superuser secret in the Kubernetes cluster in the mission-control namespace. Run the following command:

    $ kubectl get secret/test-superuser -n mission-control -o jsonpath='{.data.username}' | base64 -d; echo
    Results
    test-superuser
  2. Read the username’s password by running this command:

    $ kubectl get secret/test-superuser -n mission-control -o jsonpath='{.data.password}' | base64 -d; echo
    Sample result
    PaSsw0rdFORsup3ruser
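Both values can also be captured into shell variables and reused in the cqlsh commands that follow; the variable names are arbitrary:

    # Capture the generated superuser credentials for later use with cqlsh
    CQL_USERNAME=$(kubectl get secret/test-superuser -n mission-control -o jsonpath='{.data.username}' | base64 -d)
    CQL_PASSWORD=$(kubectl get secret/test-superuser -n mission-control -o jsonpath='{.data.password}' | base64 -d)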

Embedded Kubernetes cluster

Because host networking is enabled, connect to any of the nodes through its Internet Protocol (IP) address or hostname using cqlsh with the correct Superuser credentials (port 9042 must be accessible from cqlsh):

$ cqlsh --username test-superuser --password <PaSsw0rdFORsup3ruser> ip-175-32-24-217
Results
Connected to test at ip-175-32-24-217:9042
[cqlsh 6.0.0 | Cassandra 4.0.7-c556d537c707 | CQL spec 3.4.5 | Native protocol v5]
Use HELP for help.
test-superuser@cqlsh>

External Kubernetes cluster

  1. Port forward the service that exposes the CQL port for the cluster with:

    kubectl port-forward svc/test-dc1-service 9042:9042 -n mission-control
  2. Connect using cqlsh pointing at localhost:

    $ cqlsh --username test-superuser --password <PaSsw0rdFORsup3ruser> 127.0.0.1
    Results
    Connected to test at 127.0.0.1:9042.
    [cqlsh 6.0.0 | Cassandra 4.0.7-c556d537c707 | CQL spec 3.4.5 | Native protocol v5]
    Use HELP for help.
    test-superuser@cqlsh>

Start using HCD with Vector! Access HCD 1.0 through either a standalone container or a Mission Control deployment, and try the new vector indexes by following the vector search quickstart.
