Install DSE and Mission Control

DataStax Mission Control is the next-generation operations platform for DataStax Enterprise, starting with DSE version 6.8.26 and including 6.9. It simplifies management of DSE operations across an array of hosting options, from cloud to hybrid to on-premises. Install and run Mission Control in DSE environments on bare metal, on virtual machines (VMs), or on your existing Kubernetes (K8s) cluster. Powered by the advanced automation that runs DataStax Astra DB, Mission Control provides 24/7 automated operations of DSE nodes, datacenters, and clusters.

Starting with version 0.7.0, Mission Control supports DSE 6.9, the latest generation of DSE featuring vector indexes and the Data API. These new indexes enable application development using generative artificial intelligence (AI).

Easily deploy DSE 6.9 in a production-ready environment across multiple servers and immediately start testing. To run a single-node development and testing instance, consider following the container-based instructions instead.

Run DSE 6.9 in Mission Control

Mission Control is the easiest way to deploy DSE across multiple nodes running either in the same datacenter or on multiple clouds. Mission Control builds on the rock-solid K8ssandra project to manage the configuration and life cycle of DSE, with core services built on cloud-native technologies, including Kubernetes (K8s).

Meet the Prerequisites and then prepare your environment on either:

  • your existing Kubernetes infrastructure, or

  • VMs or bare-metal servers with the embedded Kubernetes runtime.

After you set up the environment, install Mission Control.

You can also use the Kind tool to create a local Kubernetes cluster with Docker container nodes on which to install Mission Control. Allocate at least 16 GB of RAM and four cores to Docker.
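For a quick local test, a minimal Kind setup might look like the following sketch. The cluster name and config file name are hypothetical; the three worker roles mirror the three-worker minimum in the Prerequisites:

    # Hypothetical local test cluster: one control plane plus three workers.
    cat > kind-config.yaml <<EOF
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
      - role: control-plane
      - role: worker
      - role: worker
      - role: worker
    EOF
    kind create cluster --name mission-control --config kind-config.yaml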

Prerequisites

  • A DSE license that includes a Mission Control license. Register.

    • Contact your account team to obtain a complimentary DataStax Enterprise (DSE) license if you don’t have one. A Mission Control license provides installation manifests for Mission Control and enables straightforward upgrades as new versions are released. Download the Mission Control license to use as an upload file during installation.

      To replace a Mission Control license file, to obtain a non-community edition, or to convert your Public Preview license to a stable channel release version, contact your account team.

  • kubectl: the Kubernetes command-line tool, v1.22 or later, which allows direct interaction with the Kubernetes cluster.

  • cert-manager: certificate management for Kubernetes. Follow the installation instructions if cert-manager is not yet installed in your cluster.

  • KOTS CLI: Mission Control is packaged with Replicated and is installed through the KOTS plugin for kubectl. Follow the installation instructions.

  • An existing cluster running with at least three worker nodes, each with a minimum of four cores and 8 GB of RAM.

  • Kubeconfig file or context pointing to a Control Plane Kubernetes cluster.
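Before continuing, you can sanity-check these prerequisites from your workstation. A quick sketch, assuming cert-manager runs in its default cert-manager namespace:

    kubectl version --client           # kubectl v1.22 or later
    kubectl config current-context     # must point at the Control Plane cluster
    kubectl get pods -n cert-manager   # cert-manager pods should be Running
    kubectl kots version               # confirms the KOTS plugin is installed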

Install Mission Control on your existing Kubernetes infrastructure

Mission Control is compatible with cloud-managed Kubernetes services such as Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE), and Azure Kubernetes Service (AKS), as well as with on-premises deployment tools such as kubeadm, OpenShift, RKE, and Kind.

  1. Ensure that your Kubernetes context is targeting the correct cluster, and then run:

    kubectl command
    kubectl kots install mission-control -n mission-control
    Sample result
    • Deploying Admin Console
        • Creating namespace ✓
        • Waiting for datastore to be ready ✓
    Enter a new password to be used for the Admin Console: ••••••••••
      • Waiting for Admin Console to be ready ✓
    
      • Press Ctrl+C to exit
      • Go to http://localhost:8800 to access the Admin Console

    The installer prompts you to set an administrative password for the KOTS web interface and then port-forwards the Admin Console to localhost.
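    If you exit the installer, you can reopen the Admin Console port-forward at any time with the KOTS plugin, assuming the mission-control namespace used above:

    kubectl kots admin-console --namespace mission-control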

  2. Proceed to Finalizing the Mission Control installation.

Install Mission Control with an embedded Kubernetes runtime

The open-source Kubernetes installer kURL lets you install Mission Control without a pre-existing Kubernetes cluster.

The following example assumes that a set of three (3) VMs run in the same network in the chosen environment, with no network restrictions between them and all with internet access.

  1. Choose one of the VM instances and run the DSE 6.9 Mission Control installer:

    curl command
    curl -sSL https://kurl.sh/dse-6.9.0 | sudo bash
    Sample result
            Installation Complete ✔
    
    Kotsadm: http://30.91.53.115:8800
    Login with password (will not be shown again): *********
    This password has been set for you by default. It is recommended that
    you change this password; this can be done with the following command:
    kubectl kots reset-password default
  2. After the installation completes, the output provides two important pieces of information that you must save for future reference:

    • The URL and admin password for the Kots admin UI, from which you can proceed with the rest of the Mission Control installation.

    • The command to run on each and every other node that is part of the cluster.

  3. To add worker nodes to this installation, run the join script from the installer output on each of the other nodes in the cluster:

     curl -fsSL https://kurl.sh/version/v2023.07.11-0/mission-control-private-preview/join.sh | sudo bash -s \
       kubernetes-master-address=175.32.21.207:6443 \
       kubeadm-token=519y9r.5rvob6osa35gq \
       kubeadm-token-ca-hash=sha256:e5f1923e8648372f632e3af251617612459a26ba51e3fef54b2639043788c \
       kubernetes-version=1.24.15 \
       ekco-address=172.31.24.207:31880 \
       ekco-auth-token=t93tD121B5WiGHD3glwGs0UEHuMJGHydbjpdjEDA9EWsUiz4SbbRqRHuaHHi4 \
       docker-registry-ip=11.90.12.92 \
       additional-no-proxy-addresses=11.90.0.0/22,11.30.0.0/20 \
       primary-host=175.32.21.207
  4. Wait for the join script to finish on every remaining node before continuing.

  5. After all nodes have finished their installation, SSH back into the control plane node and check the cluster state:

    kubectl command
    kubectl get nodes
    Sample result
    NAME               STATUS   ROLES                  AGE   VERSION
    ip-175-32-24-217   Ready    control-plane,master   61m   v1.24.15
    ip-175-32-36-162   Ready    <none>                 49m   v1.24.15
    ip-175-32-7-69     Ready    <none>                 47m   v1.24.15

    The result displays three nodes in the Ready state. Now proceed to Finalizing the Mission Control installation.

Finalizing the Mission Control installation

  1. Log in to the KOTS admin UI and step through the certificate-related screens until you can select a Self-Signed certificate for HTTPS.

  2. At the prompt, upload the license file that you previously downloaded. After the license file is accepted, a configuration window displays.

  3. In the configuration window, configure the settings for your environment. Leave the observability stack disabled (unchecked) unless you have set up an object storage connection to either S3 or GCS.

  4. After you press Continue, a preflight check runs. All green checks indicate success. If the checks pass, click Deploy to finalize the installation.

  5. The deployment continues and displays your Mission Control installation status.

    Mission Control is now operational. Proceed with Deploying a DSE 6.9 cluster.

Deploying a DSE 6.9 cluster

  1. On a local machine, create a manifest file named dse69cluster.yaml that describes the cluster topology. Copy the following code into the file:

    apiVersion: missioncontrol.datastax.com/v1beta2
    kind: MissionControlCluster
    metadata:
      name: test
      namespace: mission-control
    spec:
      ...
      dataApi:
        enabled: false
      ...
      k8ssandra:
        auth: true
        cassandra:
          ...
          serverImage: "
          serverType: dse
          serverVersion: 6.9.0
          storageConfig:
            cassandraDataVolumeClaimSpec:
              storageClassName: default
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 10Gi
          config:
            jvmOptions:
              heapSize: 1G
          datacenters:
            ...
            - metadata:
                name: dc1
              size: 3
              resources:
                requests:
                  memory: 2Gi
    ...
  2. Either change the storageClassName value to a storage class available in your installation, or keep the default value. To list the storage classes available in your environment, run:

    kubectl get sc
  3. When using VMs with the Mission Control embedded Kubernetes runtime, add the networking section at the same level as the config section in your dse69cluster.yaml file:

          ...
          networking:
            hostNetwork: true
          config:
            ...

    This makes the deployed services directly available on your network.
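    After the DSE pods are running, you can confirm from any of the VMs that the native CQL port is bound on the host network. A sketch, assuming the ss utility is available on the node:

    sudo ss -tlnp | grep 9042   # the CQL port should be listening on the host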

  4. Apply the manifest by running the following kubectl command from your machine:

    kubectl apply -f dse69cluster.yaml

    After a few seconds, check that the pods representing the DSE nodes appear:

    kubectl command
    kubectl get pods -n mission-control
    Sample result
    NAME                                                  READY   STATUS    RESTARTS   AGE
    cass-operator-controller-manager-6487b8fb6c-xkjjx     1/1     Running   0          41m
    k8ssandra-operator-55b44544d6-n8gs8                   1/1     Running   0          41m
    mission-control-controller-manager-54c64975cd-nvcm7   1/1     Running   0          41m
    test-dc1-default-sts-0                                0/2     Pending   0          7s
    test-dc1-default-sts-1                                0/2     Pending   0          7s
    test-dc1-default-sts-2                                0/2     Pending   0          7s

    Each DSE node must go through the standard bootstrapping process, which takes approximately two to three minutes. Upon completion, the DSE nodes display 2/2 under READY and Running under STATUS:

    NAME                                                  READY   STATUS    RESTARTS   AGE
    cass-operator-controller-manager-6487b8fb6c-xkjjx     1/1     Running   0          50m
    k8ssandra-operator-55b44544d6-n8gs8                   1/1     Running   0          50m
    mission-control-controller-manager-54c64975cd-nvcm7   1/1     Running   0          50m
    test-dc1-default-sts-0                                2/2     Running   0          9m6s
    test-dc1-default-sts-1                                2/2     Running   0          9m6s
    test-dc1-default-sts-2                                2/2     Running   0          9m6s

    A pod with a STATUS of Pending indicates potential issues with resource availability. Check the detailed pod information:

    kubectl describe pod <pod-name> -n mission-control

    Your DSE cluster is operational when all of its DSE nodes indicate 2/2 under READY and Running under STATUS.
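    To follow the bootstrap in real time, you can watch the pods or tail the DSE container logs. A sketch using the pod names from the sample output above, and assuming the DSE container is named cassandra, as in cass-operator-managed pods:

    kubectl get pods -n mission-control -w
    kubectl logs test-dc1-default-sts-0 -c cassandra -n mission-control --follow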

Connect to your cluster using cqlsh

Now that DSE 6.9 is up and running, connect to the cluster using the previously downloaded cqlsh binary with vector index support.

Mission Control is secure by default: it disables the default cassandra account and generates a unique superuser.

  1. Discover the username of the generated superuser by reading the <cluster-name>-superuser secret in the mission-control namespace of your Kubernetes cluster:

    kubectl command
    kubectl get secret/test-superuser -n mission-control -o jsonpath='{.data.username}' | base64 -d; echo
    Sample result
    test-superuser
  2. Read the superuser’s password:

    kubectl command
    kubectl get secret/test-superuser -n mission-control -o jsonpath='{.data.password}' | base64 -d; echo
    Sample result
    PaSsw0rdFORsup3ruser
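For convenience, you can capture both values in shell variables and pass them to cqlsh later. A sketch using the test-superuser secret from above; replace <node-address> with a reachable node address, or 127.0.0.1 when port-forwarding:

    CASS_USER=$(kubectl get secret/test-superuser -n mission-control -o jsonpath='{.data.username}' | base64 -d)
    CASS_PASS=$(kubectl get secret/test-superuser -n mission-control -o jsonpath='{.data.password}' | base64 -d)
    cqlsh --username "$CASS_USER" --password "$CASS_PASS" <node-address>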

Embedded Kubernetes cluster

Because host networking is enabled, connect to any of the nodes through its IP address or hostname using cqlsh with the correct superuser credentials (port 9042 must be accessible from the machine running cqlsh):

cqlsh command
cqlsh --username test-superuser --password <PaSsw0rdFORsup3ruser> ip-175-32-24-217
Sample result
Connected to test at ip-175-32-24-217:9042
[cqlsh 6.0.0 | Cassandra 4.0.7-c556d537c707 | CQL spec 3.4.5 | Native protocol v5]
Use HELP for help.
test-superuser@cqlsh>

External Kubernetes cluster

  1. Port forward the service that exposes the CQL port for your cluster with the following command:

    kubectl port-forward svc/test-dc1-service 9042:9042 -n mission-control
  2. Connect using cqlsh pointing at localhost:

    cqlsh command
    cqlsh --username test-superuser --password <PaSsw0rdFORsup3ruser> 127.0.0.1
    Sample result
    Connected to test at 127.0.0.1:9042.
    [cqlsh 6.0.0 | Cassandra 4.0.7-c556d537c707 | CQL spec 3.4.5 | Native protocol v5]
    Use HELP for help.
    test-superuser@cqlsh>

Start using vector search! Access DSE 6.9 through either a standalone container or a Mission Control deployment, and explore the new vector indexes by following the Astra DB quickstart guides.
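As a taste of what the quickstart guides cover, here is a minimal vector search sketch in CQL. The keyspace, table, and three-dimensional embeddings are hypothetical:

    cqlsh command
    CREATE KEYSPACE IF NOT EXISTS demo
      WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3};
    CREATE TABLE IF NOT EXISTS demo.items (id int PRIMARY KEY, embedding vector<float, 3>);
    CREATE CUSTOM INDEX IF NOT EXISTS ON demo.items (embedding) USING 'StorageAttachedIndex';
    INSERT INTO demo.items (id, embedding) VALUES (1, [0.12, 0.45, 0.78]);
    SELECT id FROM demo.items ORDER BY embedding ANN OF [0.1, 0.4, 0.8] LIMIT 1;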
