
Add Kubernetes Nodes

DataStax Mission Control is currently in Private Preview. It is subject to the beta agreement executed between you and DataStax. DataStax Mission Control is not intended for production use, has not been certified for production workloads, and might contain bugs and other functional issues. There is no guarantee that DataStax Mission Control will ever become generally available. DataStax Mission Control is provided on an “AS IS” basis, without warranty or indemnity of any kind.

If you are interested in trying out DataStax Mission Control, please contact your DataStax account team.

To add one or more nodes to a datacenter in a Kubernetes cluster, modify the DSECluster manifest (object) specification and submit the change with the kubectl command.

Prerequisites

  • The kubectl CLI tool.

  • A kubeconfig file or context pointing to a Control Plane Kubernetes cluster, as verified in the sketch below.
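
To confirm that kubectl targets the Control Plane cluster, check the active context. The context name control-plane below is hypothetical; substitute the name from your kubeconfig:

    # Show the context kubectl currently uses
    kubectl config current-context

    # Switch to the Control Plane cluster context if needed (name is hypothetical)
    kubectl config use-context control-plane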

Example

This example assumes an existing DSECluster manifest specifying one datacenter with three DSE nodes distributed equally across three racks.

Procedure

  1. Start with the sample DSECluster manifest named demo-dse.yaml that was used to initially create the datacenter (dc1):

    apiVersion: missioncontrol.datastax.com/v1alpha1
    kind: DSECluster
    metadata:
      name: demo
    spec:
      serverVersion: 6.8.26
      storageConfig:
        cassandraDataVolumeClaimSpec:
          storageClassName: premium-rwo
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 5Gi
      datacenters:
        - metadata:
            name: dc1
          k8sContext: east
          size: 3
          racks:
            - name: rack1
              nodeAffinityLabels:
                topology.kubernetes.io/zone: us-east1-c
            - name: rack2
              nodeAffinityLabels:
                topology.kubernetes.io/zone: us-east1-b
            - name: rack3
              nodeAffinityLabels:
                topology.kubernetes.io/zone: us-east1-d
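
    If the original demo-dse.yaml file is no longer at hand, you can typically recover the live specification from the Control Plane cluster. This is a sketch; it assumes the DSECluster resource demo exists in your current namespace and that the lowercase singular resource name is accepted:

    # Fetch the live DSECluster specification from the Control Plane (sketch)
    kubectl get dsecluster demo -o yaml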
  2. Modify the datacenters.size specification from 3 (one node per rack) to 6 (two nodes per rack):

    apiVersion: missioncontrol.datastax.com/v1alpha1
    kind: DSECluster
    metadata:
      name: demo
    spec:
      ...
      datacenters:
        - metadata:
            name: dc1
          k8sContext: east
          size: 6
          racks:
    ...
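
    As an alternative to editing the file, the same change can likely be applied in place with a JSON patch. This is a sketch; it assumes dc1 is the first (index 0) entry in the datacenters list:

    # Patch only the size field of the first datacenter (illustrative)
    kubectl patch dsecluster demo --type json \
      -p '[{"op": "replace", "path": "/spec/datacenters/0/size", "value": 6}]'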
  3. Submit this change in the Control Plane cluster:

    kubectl apply -f demo-dse.yaml

    Three additional nodes (pods) deploy in parallel as the DSECluster increases in size from three to six nodes. Each node, however, starts serially, in the order specified by the rack definitions.

    At any given time, the number of started DSE nodes in a rack cannot differ from the number of started nodes in any other rack by more than one.
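
    To observe this rack-balanced startup as it happens, you can watch the pods with the same label selector used in the next step; the -w flag streams updates as pod states change:

    # Watch pod state changes during the scale-up
    kubectl get pods -l "cassandra.datastax.com/cluster"=demo -w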

  4. Monitor the status of the DSE nodes being created:

    kubectl get pods -l "cassandra.datastax.com/cluster"=demo

    Sample output:

    NAME                   READY   STATUS    RESTARTS   AGE
    demo-dc1-rack1-sts-0   2/2     Running   0          67m
    demo-dc1-rack1-sts-1   1/2     Running   0          110s
    demo-dc1-rack2-sts-0   2/2     Running   0          67m
    demo-dc1-rack2-sts-1   1/2     Running   0          110s
    demo-dc1-rack3-sts-0   2/2     Running   0          67m
    demo-dc1-rack3-sts-1   1/2     Running   0          110s

    The -l flag adds a label selector to filter the results. Every DSE pod has the cassandra.datastax.com/cluster label. There are six pods, but only the initial three are fully ready. This is expected because the results were captured mid-operation.
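
    In clusters that host more than one datacenter, the selector can likely be narrowed further. This is a sketch; it assumes the pods also carry a cassandra.datastax.com/datacenter label, which this guide does not show:

    # Filter pods to a single datacenter (label key is an assumption)
    kubectl get pods -l "cassandra.datastax.com/cluster=demo,cassandra.datastax.com/datacenter=dc1"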

  5. Monitor the status of the CassandraDatacenter with this command:

    kubectl get cassandradatacenter dc1 -o yaml

    Sample output:

    status:
      cassandraOperatorProgress: Updating
      conditions:
      - lastTransitionTime: "2022-10-19T20:24:40Z"
        message: ""
        reason: ""
        status: "True"
        type: Healthy
      - lastTransitionTime: "2022-10-19T20:24:41Z"
        message: ""
        reason: ""
        status: "False"
        type: Stopped
      - lastTransitionTime: "2022-10-19T20:24:41Z"
        message: ""
        reason: ""
        status: "False"
        type: ReplacingNodes
      - lastTransitionTime: "2022-10-19T20:24:41Z"
        message: ""
        reason: ""
        status: "False"
        type: Updating
      - lastTransitionTime: "2022-10-19T20:24:41Z"
        message: ""
        reason: ""
        status: "False"
        type: RollingRestart
      - lastTransitionTime: "2022-10-19T20:24:41Z"
        message: ""
        reason: ""
        status: "False"
        type: Resuming
      - lastTransitionTime: "2022-10-19T20:24:41Z"
        message: ""
        reason: ""
        status: "False"
        type: ScalingDown
      - lastTransitionTime: "2022-10-19T20:24:41Z"
        message: ""
        reason: ""
        status: "True"
        type: Valid
      - lastTransitionTime: "2022-10-19T20:24:41Z"
        message: ""
        reason: ""
        status: "True"
        type: Initialized
      - lastTransitionTime: "2022-10-19T20:24:41Z"
        message: ""
        reason: ""
        status: "True"
        type: Ready
      - lastTransitionTime: "2022-10-19T21:24:34Z"
        message: ""
        reason: ""
        status: "True"
        type: ScalingUp
      lastServerNodeStarted: "2022-10-19T21:28:51Z"
      nodeStatuses:
        demo-dc1-rack1-sts-0:
          hostID: 2025d318-3fcc-4753-990b-3f9c388ba18a
        demo-dc1-rack1-sts-1:
          hostID: 33a0fc01-5947-471f-97a2-61237767d583
        demo-dc1-rack2-sts-0:
          hostID: 50748fb8-da1f-4add-b635-e80e282dc09b
        demo-dc1-rack2-sts-1:
          hostID: eb899ffd-0726-4fb4-bea7-c9d84d555339
        demo-dc1-rack3-sts-0:
          hostID: db86cba7-b014-40a2-b3f2-6eea21919a25
      observedGeneration: 1
      quietPeriod: "2022-10-19T20:24:47Z"
      superUserUpserted: "2022-10-19T20:24:42Z"
      usersUpserted: "2022-10-19T20:24:42Z"

    The ScalingUp condition has status: "True", indicating that the scaling-up operation is in progress. Cass Operator updates it to "False" when the operation is complete.
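
    Rather than scanning the full YAML, you can extract just this condition with a jsonpath query; the field names match the status output above:

    # Print only the ScalingUp condition status ("True" while scaling)
    kubectl get cassandradatacenter dc1 -o jsonpath='{.status.conditions[?(@.type=="ScalingUp")].status}'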

After all DSE nodes reach the ready state, the DataStax Mission Control operators create a CassandraTask to run a cleanup operation across all nodes.

Upon completion of the cleanup operation, the ScalingUp condition status is set to "False".
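
To confirm that the cleanup task was created, you can list CassandraTask resources. This is a sketch; the lowercase plural resource name is assumed, and depending on your topology the command may need to run against the Data Plane context:

    # List CassandraTask resources; the generated cleanup task appears here
    kubectl get cassandratasks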
