Add a datacenter to an existing database cluster

Database administrators can add one or more datacenters to an existing database cluster to:

  • Support additional workloads.

  • Solve latency issues.

  • Expand into new markets.

  • Add capacity so that their applications remain available.

  • Support new functionality.

Use Mission Control to add one or more datacenters to an existing cluster. Mission Control bootstraps one datacenter at a time. When adding multiple datacenters, sort them in ascending order by datacenter name and add each datacenter in that order until all are part of the cluster.
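
Before you modify anything, you can inspect the current cluster definition to see the datacenters it already contains. This is a minimal sketch; it assumes a MissionControlCluster named demo (as in the example below) and a kubeconfig pointing at the control plane cluster:

    # Show the current MissionControlCluster definition, including its datacenters
    kubectl get missioncontrolcluster demo -o yaml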

Example

This example demonstrates how to add a datacenter to an existing database cluster across multiple regions. The setup includes the following:

  • A control plane Kubernetes cluster that manages the overall system

  • An existing data plane Kubernetes cluster in the east region running a single datacenter

  • A new data plane Kubernetes cluster in the west region where the additional datacenter will be deployed
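
Before adding the new datacenter, confirm that your workstation can reach both data plane clusters. The context names east and west are assumptions that mirror the k8sContext values used later in this example:

    # List kubeconfig contexts; confirm that the east and west contexts are present
    kubectl config get-contexts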

Workflow for users and operators

  1. Submit a modified MissionControlCluster specifying a new datacenter in the west region to the control plane Kubernetes cluster.

  2. The control plane cluster-level operator picks up the modification and creates datacenter-level resources in the west region data plane Kubernetes cluster where the new nodes will be created.

  3. The west region data plane DC-level operator picks up the datacenter-level resources and creates native Kubernetes objects representing the database nodes.

  4. The west region data plane DC-level operator bootstraps one node at a time, balancing operations across racks and reporting progress.

  5. The control plane cluster-level operator updates keyspace replication settings on system keyspaces. The user updates keyspace replication settings on user keyspaces (see the CQL sketch after this list).

  6. The control plane cluster-level operator runs a rebuild operation on system keyspaces.
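
As a sketch of the user action in step 5, the following CQL statement alters a user keyspace so that it replicates into the new datacenter. The keyspace name my_keyspace and the replication factors are hypothetical; adjust them to your schema and node counts:

    -- Replicate my_keyspace into both datacenters (RF values are illustrative)
    ALTER KEYSPACE my_keyspace
      WITH replication = {
        'class': 'NetworkTopologyStrategy',
        'dc1': 3,
        'dc2': 3
      };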

Define and add a new datacenter to a cluster

  1. Define the new datacenter criteria, for example:

    • Target the region, Kubernetes cluster, and availability zones

    • Target the workload: core Cassandra (for DSE: core Cassandra, Search, or Graph)

  2. Modify the existing MissionControlCluster YAML file in the control plane cluster to add a new datacenter definition with three nodes in the west region:

    apiVersion: missioncontrol.datastax.com/v1beta2
    kind: MissionControlCluster
    metadata:
      name: demo
    spec:
    ...
        datacenters:
          - metadata:
              name: dc1
            k8sContext: east
            size: 3
    ...
          - metadata:
              name: dc2
            k8sContext: west
            size: 3
            racks:
              - name: rack1
                nodeAffinityLabels:
                  topology.kubernetes.io/zone: us-west1-c
              - name: rack2
                nodeAffinityLabels:
                  topology.kubernetes.io/zone: us-west1-b
              - name: rack3
                nodeAffinityLabels:
                  topology.kubernetes.io/zone: us-west1-a
  3. Submit the updated MissionControlCluster YAML file to Kubernetes.

    kubectl apply -f demo-dse.missioncontrolcluster.yaml

    The underlying K8ssandraCluster resource is updated. Datacenter-level operators then create a CassandraDatacenter named dc2 in the west cluster.
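
    To confirm that the datacenter-level resource exists in the west data plane cluster, query it directly. This sketch assumes a kubeconfig context named west, matching the k8sContext in the manifest, and that the resources live in your current namespace:

    # List CassandraDatacenter resources in the west data plane cluster
    kubectl --context west get cassandradatacenter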

  4. Monitor the progress of adding a datacenter with the following command:

    kubectl get k8ssandracluster demo -o yaml
    Result
    ...
    status:
      conditions:
      - lastTransitionTime: "2025-08-28T16:07:17Z"
        status: "True"
        type: CassandraInitialized
      datacenters:
        dc1:
          cassandra:
            cassandraOperatorProgress: Ready
            conditions:
            - lastTransitionTime: "2025-08-28T16:07:10Z"
              message: ""
              reason: ""
              status: "True"
              type: Healthy
            - lastTransitionTime: "2025-08-28T16:07:10Z"
              message: ""
              reason: ""
              status: "False"
              type: Stopped
            - lastTransitionTime: "2025-08-28T16:07:10Z"
              message: ""
              reason: ""
              status: "False"
              type: ReplacingNodes
            - lastTransitionTime: "2025-08-28T17:33:34Z"
              message: ""
              reason: ""
              status: "False"
              type: Updating
            - lastTransitionTime: "2025-08-28T16:07:10Z"
              message: ""
              reason: ""
              status: "False"
              type: RollingRestart
            - lastTransitionTime: "2025-08-28T16:07:10Z"
              message: ""
              reason: ""
              status: "False"
              type: Resuming
            - lastTransitionTime: "2025-08-28T16:07:10Z"
              message: ""
              reason: ""
              status: "False"
              type: ScalingDown
            - lastTransitionTime: "2025-08-28T16:07:10Z"
              message: ""
              reason: ""
              status: "True"
              type: Valid
            - lastTransitionTime: "2025-08-28T16:07:11Z"
              message: ""
              reason: ""
              status: "True"
              type: Initialized
            - lastTransitionTime: "2025-08-28T16:07:11Z"
              message: ""
              reason: ""
              status: "True"
              type: Ready
            lastServerNodeStarted: "2025-08-28T17:32:21Z"
            nodeStatuses:
              demo-dc1-rack1-sts-0:
                hostID: 772b67f5-ee00-4eab-ab84-61f430d376ea
              demo-dc1-rack2-sts-0:
                hostID: 9ecd5c6b-f062-454d-8411-7cb3a2e9283a
              demo-dc1-rack3-sts-0:
                hostID: cf3a4951-f554-43b2-9c42-290c0301d47d
            observedGeneration: 2
            quietPeriod: "2025-08-28T20:26:47Z"
            superUserUpserted: "2025-08-28T20:26:42Z"
            usersUpserted: "2025-08-28T20:26:42Z"
        dc2:
          cassandra:
            cassandraOperatorProgress: Updating
            lastServerNodeStarted: "2025-08-28T20:30:18Z"
            nodeStatuses:
              demo-dc2-rack1-sts-0:
                hostID: 53489fcd-7ac5-4e60-b231-e152efed736d
              demo-dc2-rack2-sts-0:
                hostID: 2ba9874a-3d7c-4033-8ab7-9653c48274df
      error: None
    ...

    The sample output indicates that two of the three dc2 nodes are online at this point in the monitoring. The new CassandraDatacenter dc2 is ready when both the Ready and Initialized conditions report status: "True".
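
    To follow the bootstrap pod by pod, you can also watch the database pods in the west cluster. This is a sketch: the west context name and the cassandra.datastax.com/datacenter pod label are assumptions based on common cass-operator labeling:

    # Watch dc2 pods start one at a time, balanced across racks
    kubectl --context west get pods -l cassandra.datastax.com/datacenter=dc2 -w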

Operators running on the cluster automatically modify system keyspaces to include the new datacenter. Replication of user-defined keyspaces remains unchanged.

The following keyspaces are updated:

  • system_traces

  • system_distributed

  • system_auth

  • dse_leases

  • dse_perf

  • dse_security
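
To verify that a system keyspace now replicates into dc2, query the schema tables from cqlsh on any node. A minimal check:

    -- Show the replication settings for system_auth
    SELECT keyspace_name, replication
    FROM system_schema.keyspaces
    WHERE keyspace_name = 'system_auth';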

Now users can run workloads across the east and west region datacenters.

Next steps

  1. Configure the replication factor of any user keyspaces to include the new datacenter, as in the CQL sketch after the workflow list above. In this example, RF=3; if a datacenter has fewer than three nodes, set the RF equal to the number of nodes. See Cleanup nodes in a datacenter and Changing keyspace replication strategy using CQLSH commands.

  2. Run a rebuild operation on all nodes in the newly created datacenter, using the original datacenter as the streaming source (see the sketch below). For more information, see Rebuild a datacenter’s replicas.
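
    A minimal sketch of such a rebuild task, assuming the cluster exposes the k8ssandra CassandraTask API (control.k8ssandra.io/v1alpha1) and that dc2 runs in a namespace named mission-control; adapt both assumptions to your deployment:

    apiVersion: control.k8ssandra.io/v1alpha1
    kind: CassandraTask
    metadata:
      name: rebuild-dc2
    spec:
      datacenter:
        name: dc2
        namespace: mission-control  # assumed namespace
      jobs:
        - name: rebuild-dc2
          command: rebuild
          args:
            source_datacenter: dc1  # stream data from the original datacenter

    Submit the task to the Kubernetes cluster that hosts dc2:

    kubectl --context west apply -f rebuild-dc2.cassandratask.yaml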
