Creating a DataStax Enterprise Cluster

DataStax Mission Control is currently in Private Preview. It is subject to the beta agreement executed between you and DataStax. DataStax Mission Control is not intended for production use, has not been certified for production workloads, and might contain bugs and other functional issues. There is no guarantee that DataStax Mission Control will ever become generally available. DataStax Mission Control is provided on an “AS IS” basis, without warranty or indemnity of any kind.

If you are interested in trying out DataStax Mission Control, please contact your DataStax account team.

Creating a DataStax Enterprise (DSE) cluster on DataStax Mission Control is a simple task. Provided that your Data Plane clusters have either the appropriate compute capacity or the ability to auto-scale, you can define the cluster in a simple YAML file and invoke kubectl to create a running DSECluster.

The DataStax Mission Control operator manages and reconciles DSECluster resources. When a DSECluster is created, the operator creates a K8ssandraCluster in the same namespace on the Control Plane cluster. The K8ssandraCluster has the same name as the DSECluster.
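One quick way to see this relationship, assuming your kubectl context points at the Control Plane cluster and the resources live in a hypothetical mission-control namespace:

    # List the K8ssandraCluster resources created by the operator; the
    # resource name mirrors the name of the originating DSECluster.
    kubectl get k8ssandraclusters -n mission-control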

Prerequisites

  • kubectl

  • Kubeconfig file or context pointing to a Control Plane Kubernetes cluster

Procedure

Create a DSE cluster by completing the following define and submit tasks. Review the automatic reconciliation workflow, and then monitor the reconciliation status with kubectl.

  1. To define a new DSECluster, start by creating a new YAML file that defines the topology and configuration for your new cluster. This file is an instance of a DSECluster Kubernetes Custom Resource (CR), and it describes the target end-state for the cluster. What follows is a minimal example of a DSECluster instance that creates a 3-node DSE cluster running version 6.8.26. Each node has 5 GB of storage available for data and requests 32 GB of RAM. See the capacity planning documentation for system requirements.

    Sample DSECluster manifest (object):

    apiVersion: missioncontrol.datastax.com/v1alpha1
    kind: DSECluster
    metadata:
      name: my-cluster
    spec:
      serverVersion: 6.8.26
      storageConfig:
        cassandraDataVolumeClaimSpec:
          storageClassName: standard
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 5Gi
      networking:
        hostNetwork: true
      datacenters:
        - metadata:
            name: dc1
          size: 3
          resources:
            requests:
              memory: 32Gi
  2. Note the following parameters in this CR file.

    1. The apiVersion and kind parameters indicate what type of resource this file represents. In this example, kind is DSECluster with an apiVersion of missioncontrol.datastax.com/v1alpha1.

    2. The metadata section contains metadata associated with this cluster. At a minimum, you must specify a name for your cluster. This value is used as the cluster_name parameter in cassandra.yaml.

      Each name must be unique within a Kubernetes namespace. Submitting two clusters with the same name results in the first cluster being overwritten by the second.

    3. Other fields that may be present in the metadata include annotations or labels, which provide additional ancillary data. At this time DataStax Mission Control does not use these fields, but they may be leveraged by automation within your environment.

    4. After the metadata block, review the spec (specification) section. spec is the declaration of your target end-state for the cluster. Instead of describing the various steps to create a cluster, you simply define what you want your cluster to look like, and DataStax Mission Control handles reconciling existing or missing resources toward that end-state.

      See the DSECluster reference for a list of options and their descriptions.

  3. Save the DSECluster definition to disk as my-cluster.dsecluster.yaml.

    Any filename is valid here. Using <resource_name>.<kind>.yaml allows you to easily differentiate multiple files in a given directory.

  4. Submit the DSECluster YAML file to the DataStax Mission Control Control Plane Kubernetes cluster for reconciliation with kubectl.

    kubectl acts as a Kubernetes API client and handles calls to the Kubernetes API server. Advanced users may choose to leverage programmatic clients or GitOps tooling, such as Flux, instead of the imperative kubectl CLI.

    Submission of the object is handled with the kubectl apply sub-command.

    For example:

    kubectl apply -f my-cluster.dsecluster.yaml

    This command reads the file specified with the -f flag and submits it to the Control Plane Kubernetes cluster. If an object with the same namespace and name already exists within the Kubernetes API, it is updated to match the local file. If no such object exists, a new one is created. As the new DSECluster object becomes available within the Kubernetes API, DataStax Mission Control operators detect the new resource and immediately begin reconciliation.
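    As a quick sanity check after applying the manifest, you can list the DSECluster objects known to the API server. The column layout shown here is illustrative and may differ between versions:

    # Confirm the API server accepted the object (output illustrative)
    kubectl get dseclusters
    # NAME         AGE
    # my-cluster   10s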

DataStax Mission Control automatic reconciliation steps for DSECluster resources

  1. Cluster-level operators detect a new DSECluster custom resource via the Kubernetes API.

  2. Cluster-level operators identify which Data Plane clusters should receive the datacenters defined within the DSECluster, and datacenter-level resources are created and reconciled on those clusters.

  3. Datacenter-level operators within the Data Plane clusters detect the new datacenter-level custom resources (CRs) via the Kubernetes API.

  4. Datacenter-level operators generate and submit rack-level resources (StatefulSets) to their local Kubernetes API.

  5. Built-in Kubernetes reconciliation loops detect the new rack-level resources and begin creating pods and storage resources representing the underlying DSE nodes.

  6. The status of resource creation propagates up to operators at the datacenter and cluster levels.

  7. When all pods are up and running, the cluster-level operator signals the datacenter-level operators to begin bootstrapping DSE within the created and running pods.

  8. As pods come online, their status is reported upward and operations continue until all 3 nodes are up and running with services discoverable via the Kubernetes API.
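The steps above can be watched in real time as pods appear. The following sketch assumes the cass-operator cluster label cassandra.datastax.com/cluster; adjust the label and namespace to your environment:

    # Stream pod status updates (-w) for pods belonging to my-cluster
    kubectl get pods -l cassandra.datastax.com/cluster=my-cluster -w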

Monitor bootstrap progress

  1. Monitor the progress of the bootstrap to determine completion status or note any errors.

    After submission of the DSECluster custom resource (CR), the operator modifies the resource within the Kubernetes API, adding a status field at the top level of the resource. This status field provides valuable insight into the health of the DSECluster as one or more operators detect definition changes. status indicates everything from the reconciliation phase to errors encountered while attempting to create storage. Run the following command to retrieve the high-level status for the my-cluster DSECluster object:

    kubectl get dseclusters my-cluster

    The output from this command is terse, making it difficult to discern detailed status. Add an output parameter such as -o yaml and run the command again to see more verbose output that lists the various status conditions.

    kubectl get dseclusters my-cluster -o yaml
  2. Access operator logs to discover more detail:

    kubectl logs -n mission-control <pod-name>

    An example <pod-name> is mission-control-controller.

    The StatefulSet controller is one of the core Kubernetes controllers; it creates the pods. There is one pod per StatefulSet:

    my-cluster-dc1-rack1-sts-0

    my-cluster-dc1-rack2-sts-0

    my-cluster-dc1-rack3-sts-0
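    The commands above can be narrowed down for scripting. The .status.conditions field path is an assumption based on the -o yaml output, and --follow streams logs instead of fetching a one-time snapshot:

    # Print only the status conditions of the DSECluster
    kubectl get dseclusters my-cluster -o jsonpath='{.status.conditions}'

    # Stream operator logs as reconciliation proceeds
    kubectl logs -n mission-control <pod-name> --follow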

What’s next

  • Explore the DSECluster reference documentation for a complete listing of all fields and values.

  • Terminate the newly created DSE cluster.
