Create a database cluster

Creating a cluster in Mission Control is straightforward: provided your Data Plane clusters have either the compute capacity to host it or the ability to auto-scale, you define a MissionControlCluster YAML file and invoke kubectl to create a running DSE cluster.

Mission Control manages and reconciles MissionControlCluster manifests defined through either the User Interface (UI) or the Command Line Interface (CLI). When a MissionControlCluster resource is created, Mission Control automation picks up the change and reconciles it into running containers across the Data Planes.


Prerequisites

  • Because Mission Control organizes clusters by projects, an existing project is required. See create project.

  • A prepared environment.

  • For the UI, Mission Control serves its interface through the IP address of any node in the Control Plane cluster on port 30880. For example, from a web browser navigate to http://<node-ip>:30880, where <node-ip> is a qualifying node's IP address.

Define and create a cluster

Create a cluster by completing the following define and submit tasks. Review the automatic reconciliation workflow, and then monitor the reconciliation status with one kubectl command.

To define a new MissionControlCluster, start by creating a YAML file that defines the topology and configuration for your new cluster. This file is an instance of a MissionControlCluster Kubernetes Custom Resource (CR), and it describes the target end-state for the cluster. What follows is a minimal example of a MissionControlCluster instance that creates a 3-node DSE cluster running version 6.8.26. Each node has 5 GiB of storage available for data and requests 32 GiB of RAM. See the capacity planning documentation for system requirements.

Sample (partial) MissionControlCluster manifest:

apiVersion: missioncontrol.datastax.com/v1beta1
kind: MissionControlCluster
metadata:
  name: my-cluster
spec:
  k8ssandra:
    cassandra:
      serverVersion: 6.8.26
      serverType: dse
      storageConfig:
        cassandraDataVolumeClaimSpec:
          storageClassName: standard
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 5Gi
      networking:
        hostNetwork: true
      datacenters:
        - metadata:
            name: dc1
          k8sContext: east
          size: 3
          resources:
            requests:
              memory: 32Gi

During Mission Control installation, the user interface presents various configuration settings to review or set. Use the reference catalog to guide your configuration decisions.

  1. Specify the following parameters in the CR file.

    1. The apiVersion and kind parameters indicate what type of resource this file represents. In this example, kind is a MissionControlCluster resource with an apiVersion of v1beta1.

    2. The metadata section outlines metadata associated with this cluster. At a minimum, you must specify a name for your cluster. This value is used as the cluster_name parameter in cassandra.yaml.

      Each name must be unique within a Kubernetes namespace. Submitting two clusters with the same name results in the first cluster being overwritten by the second.

    3. Other fields that may be present in the metadata section include annotations and labels, which provide additional ancillary data. At this time Mission Control does not use these fields, but they may be leveraged by automation within your environment.

    4. After the metadata block, review the spec, or specification, section. spec declares the target end-state for the cluster. Instead of describing the steps to create a cluster, you define what you want the cluster to look like, and Mission Control handles reconciling existing or missing resources toward that end-state.

      See the MissionControlCluster reference for a list of options and their descriptions.

  2. Save the MissionControlCluster manifest to disk as my-cluster.missioncontrolcluster.yaml.

    Any filename is valid here. Using <resource_name>.<kind>.yaml makes it easy to differentiate multiple files in a given directory.

  3. Submit the MissionControlCluster YAML file to the Mission Control Control Plane Kubernetes cluster with kubectl.

    kubectl acts as a Kubernetes API client and handles calls to the Kubernetes API server. Advanced users may choose to leverage programmatic clients or GitOps tooling such as Flux instead of the imperative kubectl CLI.

    Submission of the object is handled with the kubectl apply sub-command.

    For example:

    kubectl apply -f my-cluster.missioncontrolcluster.yaml

    This reads the file specified with the -f flag and submits it to the Control Plane Kubernetes cluster. Should an object already exist within the Kubernetes API with the same namespace and name, it is updated to match the local file. When no such object exists, a new one is created. As the new MissionControlCluster object becomes available within the Kubernetes API, Mission Control detects the new resource and immediately begins reconciliation.
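After the apply command returns, a quick way to confirm that the object was accepted is to list it. The mccluster short name matches the describe command used later on this page; if it is not registered in your installation, use the full missioncontrolclusters resource name:

```shell
# List MissionControlCluster objects in the current namespace.
kubectl get mccluster

# Re-running apply is idempotent: it reports "unchanged" when the
# live object already matches the local file.
kubectl apply -f my-cluster.missioncontrolcluster.yaml
```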

Mission Control automatic reconciliation steps for MissionControlCluster resources

The following steps describe the automated process for informational purposes only; no user intervention is required.

  1. Cluster-level operators detect a new MissionControlCluster custom resource through the Kubernetes API within the Control Plane.

  2. Cluster-level operators identify which Data Plane clusters should receive datacenters defined within the MissionControlCluster. In this example the east Data Plane cluster is specified so datacenter-level resources are created and reconciled there.

  3. Datacenter-level operators within the Data Plane clusters detect the new datacenter-level custom resource (CR) via the Kubernetes API.

  4. Datacenter-level operators generate and submit rack-level resources (StatefulSets) to their local Kubernetes API.

  5. Built-in Kubernetes reconciliation loops detect the new rack-level resources and begin creating pods and storage resources representing the underlying DSE nodes.

  6. The status of resource creation is propagated up to operators at the datacenter and cluster levels.

  7. When all pods are up and running the cluster-level operator signals the datacenter-level operators to begin bootstrap operations of DSE within the created and running pods.

  8. As pods come online, their status is propagated upward, and operations continue until all 3 nodes are up and running with services discoverable via the Kubernetes API.
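The layers described above can be observed from the Data Plane side as they materialize. The resource kinds below come from cass-operator and core Kubernetes; the label selector is an assumption based on cass-operator's labeling conventions:

```shell
# Datacenter-level resources created by the cluster-level operator:
kubectl get cassandradatacenters

# Rack-level StatefulSets and the DSE pods they manage (the label key
# is an assumption; drop the selector to list everything):
kubectl get statefulsets,pods -l cassandra.datastax.com/cluster=my-cluster
```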

Monitor bootstrap progress

  1. Monitor the progress of the bootstrap to determine completion status or note any errors.

    After submission of the MissionControlCluster custom resource (CR), the operator modifies the resource within the Kubernetes API, adding a status field at the top level of the resource. This status field provides valuable insight into the health of the MissionControlCluster as one or more operators detect definition changes. status reports everything from the reconciliation phase to errors encountered while attempting to create storage. Run the following command to retrieve the descriptive status for the my-cluster MissionControlCluster object:

    % kubectl describe mccluster/my-cluster

    Sample results:

    Name:         my-cluster
    Namespace:    default
    Labels:       <none>
    Annotations:  <none>
    API Version:  missioncontrol.datastax.com/v1beta1
    Kind:         MissionControlCluster
    Metadata:
      Creation Timestamp:  2023-10-30T11:09:33Z
      Generation:          1
      Resource Version:    105388250
      UID:                 57e956f8-1f87-422f-a7f8-b9ec87b956c4
    Spec:
      Create Issuer:  true
      Encryption:
        Internode Encryption:
          Certs:
            Cert Template:
              Issuer Ref:
              Secret Name:
            Create Certs:   true
          Enabled:          true
      K8ssandra:
        Auth:  true
        Cassandra:
          Datacenters:
            Dse Workloads:
            Metadata:
              Name:  dc1
            Additional Service Config:
              Additional Seed Service:
              All Pods Service:
              Dc Service:
              Node Port Service:
              Seed Service:
            Per Node Config Init Container Image:  <name>/yq:4
            Per Node Config Map Ref:
            Racks:
              Name:   rack1
            Size:     3
            Stopped:  false
          Additional Service Config:
            Additional Seed Service:
            All Pods Service:
            Dc Service:
            Node Port Service:
            Seed Service:
          Per Node Config Init Container Image:  <name>/yq:4
          Resources:
            Requests:
              Memory:      32Gi
          Server Type:     dse
          Server Version:  6.8.26
          Storage Config:
            Cassandra Data Volume Claim Spec:
              Access Modes:
                ReadWriteOnce
              Resources:
                Requests:
                  Storage:         5Gi
              Storage Class Name:  standard
          Superuser Secret Ref:
            Name:          my-cluster-superuser
        Secrets Provider:  internal
    Status:
      Conditions:
        Last Transition Time:  2023-10-30T14:08:08Z
        Reason:                UpdatingIssuers
        Status:                False
        Type:                  UpdatingIssuers
        Last Transition Time:  2023-10-30T14:08:08Z
        Reason:                UpdatingCertificates
        Status:                False
        Type:                  UpdatingCertificates
        Last Transition Time:  2023-10-30T14:08:08Z
        Reason:                UpdatingReplicatedSecrets
        Status:                False
        Type:                  UpdatingReplicatedSecrets
        Last Transition Time:  2023-10-30T11:16:38Z
        Reason:                UpdatingCluster
        Status:                False
        Type:                  UpdatingCluster
        Last Transition Time:  2023-10-30T14:08:08Z
        Message:               Ready
        Reason:                Ready
        Status:                True
        Type:                  Ready
    Events:                    <none>
  2. Access operator logs to discover more detail:

    kubectl logs -n mission-control <pod-name>

    An example <pod-name> is mission-control-controller.

    The StatefulSet controller is one of the core Kubernetes controllers; it creates the pods for the cluster. There is one pod per StatefulSet.
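The describe output above ends with a Ready condition, so scripts can block on it instead of polling:

```shell
# Wait until the Ready condition is True, or give up after 30 minutes.
kubectl wait --for=condition=Ready mccluster/my-cluster --timeout=30m
```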



