
DataStax Project Mission Control


Replace a Node

DataStax Mission Control is currently in Private Preview. It is subject to the beta agreement executed between you and DataStax. DataStax Mission Control is not intended for production use, has not been certified for production workloads, and might contain bugs and other functional issues. There is no guarantee that DataStax Mission Control will ever become generally available. DataStax Mission Control is provided on an “AS IS” basis, without warranty or indemnity of any kind.

If you are interested in trying out DataStax Mission Control, please contact your DataStax account team.

Replacing a node destroys the node and its data, and brings up a clean, empty replacement in its place.

Run this operation when a node is defective and you need to create a new node that is identical to the node being replaced.

Performance Impact

This operation replaces a node with a new pod that contains NO DATA but owns the same token range as the node it replaces. The new node bootstraps, rebuilding its data from the remaining replicas in the cluster, which causes some disk pressure while the bootstrap runs.

Prerequisites

  • The kubectl CLI tool.

  • A kubeconfig file or context pointing to a Control Plane Kubernetes cluster.
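
    As a sketch, a kubeconfig context entry pointing at the Control Plane cluster might look like the following; the cluster and user names here are hypothetical placeholders, not values created by DataStax Mission Control:

    ```yaml
    # Hypothetical kubeconfig context; substitute your own cluster and user names.
    contexts:
    - context:
        cluster: control-plane-cluster
        user: mission-control-admin
      name: control-plane
    current-context: control-plane
    ```

    Select it with `kubectl config use-context control-plane` before running the commands below.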

Workflow of user and operators

  1. User defines a replacenode CassandraTask.

  2. User submits the replacenode CassandraTask to the Data Plane Kubernetes cluster where the datacenter is deployed.

  3. DC-operator detects the new task custom resource (CR).

  4. DC-operator iterates one rack at a time.

  5. DC-operator triggers and monitors replacement operations one pod at a time.

  6. DC-operator reports task progress and status.

  7. User requests a status report of the replacenode CassandraTask with the kubectl command and views the status response.
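
The user-side part of this workflow (submit, then poll until done) can be sketched as a small shell helper. This is illustrative only: `fake_task_status` below stands in for the real `kubectl get cassandratask <name> -o yaml` call so the sketch runs without a cluster.

```shell
#!/bin/sh
# wait_for_complete CMD...: poll CMD until its YAML output reports a
# "Complete" condition, then return. In practice CMD would be, e.g.,
#   kubectl get cassandratask replace-dc1 -o yaml
wait_for_complete() {
  until "$@" | grep -q 'type: Complete'; do
    sleep 10
  done
}

# Hypothetical stand-in for the kubectl call, so the sketch is self-contained.
fake_task_status() {
  printf 'conditions:\n- status: "True"\n  type: Complete\n'
}

wait_for_complete fake_task_status && echo "task complete"
```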

Procedure

  1. Modify the replace-node-task.cassandratask.yaml file to define a replacenode CassandraTask.

    Here is a sample:

    apiVersion: control.k8ssandra.io/v1alpha1
    kind: CassandraTask
    metadata:
      name: replace-dc1
    spec:
      datacenter:
        name: dc1
        namespace: demo
      jobs:
        - name: replace-dc1
          command: replacenode
          args:
            keyspace_name: my_keyspace

    Key options:

    • metadata.name: a unique identifier within the Kubernetes namespace where the task is submitted. While the name can be any value, consider including the cluster name to prevent collisions with other tasks.

    • spec.datacenter: a unique namespace and name combination used to determine which datacenter to target with this operation.

    • spec.jobs[0].command: MUST be replacenode for this operation.

    • Optional: spec.jobs[0].args.keyspace_name: restricts this operation to a particular keyspace. If this value is omitted, all keyspaces are rebuilt.
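
    A minimal variant that omits the optional keyspace restriction, and therefore rebuilds all keyspaces, would look like this sketch (same hypothetical dc1/demo names as the sample above):

    ```yaml
    apiVersion: control.k8ssandra.io/v1alpha1
    kind: CassandraTask
    metadata:
      name: replace-dc1
    spec:
      datacenter:
        name: dc1
        namespace: demo
      jobs:
        - name: replace-dc1
          command: replacenode
    ```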

  2. Submit the replacenode CassandraTask custom resource to the Data Plane Kubernetes cluster with this command:

    kubectl apply -f replace-node-task.cassandratask.yaml

    This submits the task to the Kubernetes cluster where the specified datacenter is deployed.

    DC-level operators manage this CassandraTask. They stop the DSE node if it is running and then delete the Persistent Volume(s) (PV). Next they delete the node (pod) in which DSE is running. A new node is deployed as its replacement, and is started normally, picking up the same token range as the previous node.

  3. Monitor the node replacement progress with this command:

    kubectl get cassandratask replace-dc1 -o yaml | yq .status

    Sample output:

    ...
    - lastTransitionTime: "2022-11-01T03:28:12Z"
      message: ""
      reason: ""
      status: "True"
      type: ReplacingNodes

    The status field of the ReplacingNodes condition changes to "False" when the replacenode operation completes.
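
    If yq is not available, the ReplacingNodes flag can be pulled out with plain awk. The heredoc below replays the sample status so the sketch is self-contained; in practice you would pipe `kubectl get cassandratask replace-dc1 -o yaml` into the helper instead.

    ```shell
    #!/bin/sh
    # condition_status TYPE: read task status YAML on stdin and print the
    # status ("True"/"False") of the condition whose type matches TYPE.
    condition_status() {
      awk -v want="$1" '
        /status:/ { gsub(/"/, "", $2); last = $2 }   # remember most recent status value
        /type:/   { if ($2 == want) print last }     # emit it when the type matches
      '
    }

    # Replay the sample output from above through the helper.
    condition_status ReplacingNodes <<'EOF'
    - lastTransitionTime: "2022-11-01T03:28:12Z"
      message: ""
      reason: ""
      status: "True"
      type: ReplacingNodes
    EOF
    ```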

  4. Monitor the progress and view the status of the CassandraTask object by issuing this command in the Control Plane cluster:

    kubectl get cassandratask replace-dc1 -o yaml

    Sample output:

    ...
    status:
      completionTime: "2022-11-01T03:28:33Z"
      conditions:
      - lastTransitionTime: "2022-11-01T03:28:12Z"
        status: "True"
        type: Running
      - lastTransitionTime: "2022-11-01T03:28:34Z"
        status: "True"
        type: Complete
      startTime: "2022-11-01T03:28:12Z"
      succeeded: 1

    The DC-level operators set the startTime field before starting the replacenode operation and update the completionTime field when the operation completes.
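
    As a quick sanity check, the elapsed time of the run above can be computed from those two fields. This sketch assumes GNU date (Linux coreutils); BSD/macOS date uses different flags:

    ```shell
    #!/bin/sh
    # Compute the task duration in seconds from the sample timestamps above.
    start=$(date -u -d "2022-11-01T03:28:12Z" +%s)
    end=$(date -u -d "2022-11-01T03:28:33Z" +%s)
    echo "replacenode took $((end - start)) seconds"
    ```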

