Migrate existing Hyper-Converged Database (HCD), DataStax Enterprise (DSE), or Apache Cassandra® clusters to Mission Control using the add a new datacenter method
Migrate an existing HCD, DSE, or Cassandra cluster to Mission Control by adding a new datacenter that Mission Control manages. This approach provides a safe migration path without downtime because both the legacy and new datacenters run in parallel during the transition.
Prerequisites

- Mission Control installed and configured on an existing Kubernetes or OpenShift cluster
- An existing HCD, DSE, or Cassandra cluster running outside of Mission Control
- Network connectivity between the existing cluster and the new Mission Control-managed datacenter
- Sufficient resources in the target cluster to run the new datacenter alongside the existing one during the migration
- Client applications that can be reconfigured to connect to the new datacenter
Migration workflow

The migration process follows these steps:

1. Create a new datacenter: Deploy a new datacenter in Mission Control with a unique name and link it to the existing cluster.
2. Rebuild data: Use `nodetool rebuild` to stream data from the existing datacenter to the new one.
3. Migrate clients: Update client applications to connect to the new Mission Control-managed datacenter.
4. Decommission the old datacenter: Remove the legacy datacenter after you migrate all clients and verify data.
5. Verify: Confirm that all data and workloads function correctly on the new infrastructure.
This method ensures data consistency and allows for validation before fully committing to the new infrastructure.
Create a new datacenter in Mission Control
Create a new datacenter in Mission Control and link it to your existing cluster. Give the new datacenter a unique name that differs from any existing datacenter names.
- Prepare a `MissionControlCluster` manifest that defines the new datacenter. Include connection details for your existing cluster:

  ```yaml
  apiVersion: missioncontrol.datastax.com/v1beta2
  kind: MissionControlCluster
  metadata:
    name: NEW_CLUSTER_NAME
  spec:
    k8ssandra:
      externalDatacenters:
        - name: CURRENT_DC # Your existing datacenter not managed by Mission Control
      cassandra:
        serverType: "SERVER_TYPE" # "hcd" for HCD, "dse" for DSE, "cassandra" for Apache Cassandra
        serverVersion: "SERVER_VERSION" # Match your existing cluster version
        clusterName: "CURRENT_CLUSTER_NAME" # Must match the existing cluster name
        datacenters:
          - metadata:
              name: NEW_DC # New Mission Control-managed datacenter
            k8sContext: TARGET_CLUSTER
            size: 3
            racks:
              - name: rack1
                nodeAffinityLabels:
                  topology.kubernetes.io/zone: AVAILABILITY_ZONE_1
              - name: rack2
                nodeAffinityLabels:
                  topology.kubernetes.io/zone: AVAILABILITY_ZONE_2
              - name: rack3
                nodeAffinityLabels:
                  topology.kubernetes.io/zone: AVAILABILITY_ZONE_3
            config:
              cassandraYaml:
                additionalSeeds:
                  - "SEED_NODE_1"
                  - "SEED_NODE_2"
                  - "SEED_NODE_3"
                # Do not manually configure seed_provider.
                # Mission Control automatically manages seed nodes within the new datacenter.
  ```

  Replace the following:

  - `NEW_CLUSTER_NAME`: The name of your new cluster. This must match the existing cluster name. The only time it should differ from `spec.k8ssandra.cassandra.clusterName` is when the existing cluster name is not a valid Kubernetes object name.
  - `SERVER_TYPE`: The database type: `hcd`, `dse`, or `cassandra`.
  - `SERVER_VERSION`: Your database version. For example, `1.2.3` for HCD or `6.9.15` for DSE.
  - `CURRENT_CLUSTER_NAME`: Your existing cluster name.
  - `CURRENT_DC`: Your existing datacenter name.
  - `NEW_DC`: Your new datacenter name.
  - `TARGET_CLUSTER`: Your target local Kubernetes cluster context.
  - `AVAILABILITY_ZONE_1`: Your first availability zone. For example, `us-east-1a`.
  - `AVAILABILITY_ZONE_2`: Your second availability zone. For example, `us-east-1b`.
  - `AVAILABILITY_ZONE_3`: Your third availability zone. For example, `us-east-1c`.
  - `SEED_NODE_1`: Your first seed node IP address.
  - `SEED_NODE_2`: Your second seed node IP address.
  - `SEED_NODE_3`: Your third seed node IP address.

  Note the following:

  - The `clusterName` must match your existing cluster's name exactly.
  - Use `additionalSeeds` to specify seed nodes from your existing datacenter for cross-datacenter communication.
  - Do not manually configure `seed_provider`. Mission Control automatically manages seeds within the new datacenter, and manually configuring `seed_provider` with incorrect IPs prevents the cluster from starting.
  - Ensure network connectivity between the existing and new datacenters.
  - The new datacenter name must be unique.
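As the note above explains, the new cluster name must be a valid Kubernetes object name. As a quick preflight, a minimal sketch of that check, assuming the common RFC 1123 rules that apply to most Kubernetes object names (some resource types are stricter):

```python
import re

# RFC 1123 pattern: lowercase alphanumerics and hyphens,
# starting and ending with an alphanumeric character.
K8S_NAME = re.compile(r"^[a-z0-9]([a-z0-9-]*[a-z0-9])?$")

def is_valid_k8s_name(name: str, max_length: int = 253) -> bool:
    """Check whether a cluster name can be reused as a Kubernetes object name."""
    return len(name) <= max_length and bool(K8S_NAME.match(name))

print(is_valid_k8s_name("demo-cluster"))   # hypothetical name: valid
print(is_valid_k8s_name("Demo_Cluster"))   # uppercase and underscore: invalid
```

If the existing cluster name fails this check, give the `MissionControlCluster` object a compliant name and keep `clusterName` set to the original value.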
- Apply the manifest to create the new datacenter:

  ```shell
  kubectl apply -f CLUSTER_NAME.yaml
  ```

  Replace `CLUSTER_NAME` with your cluster name.

- Monitor datacenter creation:

  ```shell
  kubectl get cassandradatacenter DATACENTER_NAME -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}: {.status.cassandraOperatorProgress}'
  ```

  Replace `DATACENTER_NAME` with your datacenter name.

  Wait until the datacenter status shows `True: Ready` and all nodes are online.

  Use the following command to see the full status in YAML format:

  ```shell
  kubectl get cassandradatacenter DATACENTER_NAME -o yaml
  ```
Rebuild data using nodetool
After you create the new datacenter and all nodes are online, use `nodetool rebuild` to stream data from the existing datacenter to the new Mission Control-managed datacenter.
- Update keyspace replication to include the new datacenter. Connect to any node in the cluster using `cqlsh` and alter each keyspace:

  ```sql
  ALTER KEYSPACE KEYSPACE_NAME WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'EXISTING_DATACENTER': 3,
    'NEW_DATACENTER': 3
  };
  ```

  Replace the following:

  - `KEYSPACE_NAME`: Your keyspace name
  - `EXISTING_DATACENTER`: The name of your existing datacenter
  - `NEW_DATACENTER`: The name of your new datacenter

- Repeat the `ALTER KEYSPACE` command until you have updated replication for all user keyspaces.

  Mission Control operators automatically update database system keyspaces, such as `system_auth` and `system_schema`. You only need to update your application keyspaces manually.
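If you have many keyspaces, generating the `ALTER KEYSPACE` statements programmatically reduces typos. A minimal sketch, where the keyspace list and the datacenter names (`dc-legacy`, `dc-mc`) are hypothetical placeholders you would replace with your own:

```python
def alter_keyspace_stmt(keyspace: str, replication: dict[str, int]) -> str:
    """Build an ALTER KEYSPACE statement for NetworkTopologyStrategy."""
    dcs = ", ".join(f"'{dc}': {rf}" for dc, rf in replication.items())
    return (f"ALTER KEYSPACE {keyspace} WITH replication = "
            f"{{'class': 'NetworkTopologyStrategy', {dcs}}};")

# Example: add the new datacenter alongside the existing one.
for ks in ["app_data", "app_metrics"]:  # your application keyspaces
    print(alter_keyspace_stmt(ks, {"dc-legacy": 3, "dc-mc": 3}))
```

Pipe the generated statements into `cqlsh`, or paste them into an interactive session.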
- Create a `CassandraTask` to rebuild the new datacenter from your existing datacenter:

  ```yaml
  apiVersion: control.k8ssandra.io/v1alpha1
  kind: CassandraTask
  metadata:
    name: TASK_NAME
  spec:
    datacenter:
      name: DATACENTER_NAME
      namespace: NAMESPACE
    jobs:
      - name: rebuild-from-legacy
        command: rebuild
        args:
          source_datacenter: EXISTING_DATACENTER
  ```

  Replace the following:

  - `TASK_NAME`: The name of your task
  - `DATACENTER_NAME`: The name of your new datacenter
  - `NAMESPACE`: The namespace where your new datacenter is located
  - `EXISTING_DATACENTER`: The name of your existing datacenter
- Submit the rebuild task:

  ```shell
  kubectl apply -f TASK_NAME.yaml
  ```

  Replace `TASK_NAME` with the name of your task.

- Monitor rebuild progress:

  ```shell
  kubectl get cassandratask TASK_NAME -o jsonpath='{.status.conditions[?(@.type=="Complete")].status}: Succeeded={.status.succeeded}/{.status.active}'
  ```

  Replace `TASK_NAME` with the name of your task.

  The rebuild operation streams data from the source datacenter to all nodes in the new datacenter. This process might take considerable time depending on data volume.

  Use the following command to see the full status in YAML format:

  ```shell
  kubectl get cassandratask TASK_NAME -o yaml
  ```

- Verify rebuild completion by checking the task status:

  ```yaml
  status:
    completionTime: "2026-02-23T22:15:30Z"
    conditions:
      - lastTransitionTime: "2026-02-23T22:15:30Z"
        status: "True"
        type: Complete
    succeeded: 3
  ```

  To monitor streaming progress, run `nodetool netstats`:

  ```shell
  kubectl exec -it POD_NAME -n NAMESPACE -- nodetool netstats
  ```

  Replace the following:

  - `POD_NAME`: The name of your pod
  - `NAMESPACE`: The namespace where your new datacenter is located
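If you script the monitoring step, you can evaluate the task status from `kubectl get cassandratask TASK_NAME -o json`. A minimal sketch of the completion check, where the field names follow the status shown above and the expected node count is an assumption you would set to your datacenter size:

```python
import json

def rebuild_complete(status: dict, expected_nodes: int) -> bool:
    """True when the Complete condition is set and all node jobs succeeded."""
    conditions = {c.get("type"): c.get("status")
                  for c in status.get("conditions", [])}
    return (conditions.get("Complete") == "True"
            and status.get("succeeded", 0) >= expected_nodes)

# Sample status, matching the example shown earlier.
status = json.loads("""{
  "completionTime": "2026-02-23T22:15:30Z",
  "conditions": [{"status": "True", "type": "Complete"}],
  "succeeded": 3
}""")
print(rebuild_complete(status, expected_nodes=3))
```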
Migrate client applications
After you successfully rebuild data in the new datacenter, migrate your client applications to connect to the new Mission Control-managed datacenter.
For detailed connection configuration, including driver setup, security, and troubleshooting, see In-cluster application communication. For applications outside Kubernetes that need to use ingress, see the architecture diagrams in Application communication architecture.
- Identify the connection endpoints for the new datacenter:

  ```shell
  kubectl get service -n CLUSTER_NAMESPACE
  ```

  Replace `CLUSTER_NAMESPACE` with the namespace of your cluster.

  Look for services related to your new datacenter, for example, `dc-mc-service`.

- Update client application configurations to use the new datacenter endpoints. This typically involves:

  - Updating connection strings or contact points
  - Modifying load balancing policies to prefer the new datacenter
  - Updating any datacenter-aware routing logic

  For in-cluster applications, use the service FQDN format:

  ```
  CLUSTER_NAME-DATACENTER_NAME-service.PROJECT_SLUG.svc.cluster.local:9042
  ```

  For applications using the Data API, use:

  ```
  CLUSTER_NAME-DATACENTER_NAME-data-api-cip.PROJECT_SLUG.svc.cluster.local
  ```

  For external applications using ingress, configure your ingress controller to route to the new datacenter services. See Client database traffic for connection patterns.

- Deploy updated client applications gradually:

  - Start with non-production environments.
  - Use canary deployments or blue-green strategies.
  - Monitor application performance and error rates.
  - Gradually shift traffic to the new datacenter.

- Verify that clients successfully connect to and use the new datacenter:

  ```shell
  # Verify that new datacenter nodes are up and part of the cluster
  kubectl exec -it DATACENTER_NAME-RACK_NAME-sts-NODE_NUMBER -c cassandra -- nodetool status
  ```

  Replace the following:

  - `DATACENTER_NAME`: The name of your new datacenter
  - `RACK_NAME`: The rack name (for example, `rack1`)
  - `NODE_NUMBER`: The node number (for example, `0`)
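When wiring contact points into application configuration, it can help to derive the endpoint names from the same values used in the manifest. A minimal sketch of the two FQDN formats above, where `demo-cluster`, `dc-mc`, and `my-project` are hypothetical values:

```python
def cassandra_contact_point(cluster: str, dc: str, project_slug: str,
                            port: int = 9042) -> str:
    """In-cluster CQL service endpoint, per the FQDN format above."""
    return f"{cluster}-{dc}-service.{project_slug}.svc.cluster.local:{port}"

def data_api_endpoint(cluster: str, dc: str, project_slug: str) -> str:
    """In-cluster Data API service endpoint, per the FQDN format above."""
    return f"{cluster}-{dc}-data-api-cip.{project_slug}.svc.cluster.local"

print(cassandra_contact_point("demo-cluster", "dc-mc", "my-project"))
print(data_api_endpoint("demo-cluster", "dc-mc", "my-project"))
```

Generating endpoints this way keeps the cluster, datacenter, and project slug values in one place when you update client configuration during the migration.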
Decommission the old datacenter
After you successfully migrate all client applications and verify data consistency, you can decommission the old datacenter.
Ensure that you migrate all clients and verify data before you decommission the old datacenter. This operation is irreversible.
- Update keyspace replication to remove the old datacenter:

  ```sql
  ALTER KEYSPACE KEYSPACE_NAME WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'NEW_DATACENTER': 3
  };
  ```

  Replace `KEYSPACE_NAME` with the name of your keyspace and `NEW_DATACENTER` with the name of your new datacenter.

- Repeat the previous step for all keyspaces.

- Run cleanup on the new datacenter to remove data that no longer belongs to it:

  ```yaml
  apiVersion: control.k8ssandra.io/v1alpha1
  kind: CassandraTask
  metadata:
    name: TASK_NAME
  spec:
    datacenter:
      name: DATACENTER_NAME
      namespace: NAMESPACE
    jobs:
      - name: cleanup
        command: cleanup
  ```

  Replace the following:

  - `TASK_NAME`: The name of your task
  - `DATACENTER_NAME`: The name of your new datacenter
  - `NAMESPACE`: The namespace of your datacenter

  Apply the task:

  ```shell
  kubectl apply -f TASK_NAME.yaml
  ```

  For more information, see Clean up nodes.

- Decommission nodes in the old datacenter using standard HCD, DSE, or Cassandra procedures:

  ```shell
  # On each node in the old datacenter
  nodetool decommission
  ```

- Remove the old datacenter from your infrastructure after you decommission all nodes.
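As with the earlier replication change, the statement that drops the old datacenter can be generated per keyspace. A minimal sketch, where the keyspace and datacenter names (`dc-legacy`, `dc-mc`) are hypothetical placeholders:

```python
def drop_dc_stmt(keyspace: str, replication: dict[str, int], old_dc: str) -> str:
    """Build the ALTER KEYSPACE statement with old_dc removed."""
    remaining = {dc: rf for dc, rf in replication.items() if dc != old_dc}
    dcs = ", ".join(f"'{dc}': {rf}" for dc, rf in remaining.items())
    return (f"ALTER KEYSPACE {keyspace} WITH replication = "
            f"{{'class': 'NetworkTopologyStrategy', {dcs}}};")

print(drop_dc_stmt("app_data", {"dc-legacy": 3, "dc-mc": 3}, "dc-legacy"))
```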
Verify migration success
After you complete the migration, perform thorough verification:
- Verify cluster health:

  ```shell
  kubectl exec -it DATACENTER_NAME-RACK_NAME-sts-NODE_NUMBER -c cassandra -- nodetool status
  ```

  Replace the following:

  - `DATACENTER_NAME`: The name of your new datacenter
  - `RACK_NAME`: The rack name (for example, `rack1`)
  - `NODE_NUMBER`: The node number (for example, `0`)

  Confirm that all nodes in the new datacenter show `UN` (Up/Normal) status.

- Verify data consistency by running a repair on all nodes in the new datacenter.

  Mission Control includes Reaper, which is the recommended tool for running repairs. Reaper provides better repair management, progress tracking, and resource control than manual `nodetool repair` commands.

  To run a repair using Reaper:

  - Access the Reaper UI through Mission Control. For information about accessing Reaper, see Platform components and operations.
  - Create a new repair schedule or run an ad-hoc repair for your keyspaces in the new datacenter.
  - Monitor repair progress in the Reaper UI.

  Alternatively, if you need to run a manual repair using `nodetool`, run it on each node in the new datacenter:

  ```shell
  # Run on each node in the new datacenter
  kubectl exec -it DATACENTER_NAME-RACK_NAME-sts-NODE_NUMBER -c cassandra -- nodetool repair -full
  ```

  Replace the following:

  - `DATACENTER_NAME`: The name of your new datacenter
  - `RACK_NAME`: The rack name (for example, `rack1`, `rack2`, `rack3`)
  - `NODE_NUMBER`: The node number (for example, `0`, `1`, `2`)

  Note the following:

  - Reaper is the preferred repair method because it provides better coordination and prevents resource exhaustion.
  - If you use manual `nodetool repair`, run it sequentially on each node rather than in parallel to avoid overwhelming the cluster.
  - The `-full` flag ensures a complete repair of all data.

- Monitor application performance and error rates.

- Verify that all expected data is accessible through client applications.

- Check Mission Control metrics and logs for any issues:

  ```shell
  kubectl logs -n CLUSTER_NAMESPACE -l app=cassandra
  ```

  Replace `CLUSTER_NAMESPACE` with the namespace of your cluster.
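To automate the `UN` check across many nodes, the `nodetool status` output can be parsed. A minimal sketch, where the sample output is abbreviated and hypothetical:

```python
import re

def all_up_normal(nodetool_status: str) -> bool:
    """True if every node line in `nodetool status` output reports UN."""
    # Node lines start with a two-letter state: U/D (up/down) then
    # N/L/J/M (normal/leaving/joining/moving).
    states = re.findall(r"^([UD][NLJM])\s", nodetool_status, flags=re.MULTILINE)
    return bool(states) and all(s == "UN" for s in states)

sample = """Datacenter: dc-mc
==================
Status=Up/Down |/ State=Normal/Leaving/Joining/Moving
--  Address    Load      Tokens  Owns  Host ID  Rack
UN  10.0.0.1   1.2 GiB   16      ?     aaaa     rack1
UN  10.0.0.2   1.1 GiB   16      ?     bbbb     rack2
"""
print(all_up_normal(sample))
```

Feed this function the captured output of the `kubectl exec ... nodetool status` command above to gate later migration steps in a script.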
Troubleshooting
This section provides solutions to common issues you might encounter during the migration process.
Rebuild fails or times out
If the rebuild operation fails:
- Check network connectivity between datacenters.
- Verify that you can access the seed nodes.
- Check for sufficient disk space on target nodes.
- Review logs for specific error messages:

  ```shell
  kubectl logs -n CLUSTER_NAMESPACE DATACENTER_NAME-rack1-sts-0 -c cassandra
  ```

  Replace `CLUSTER_NAMESPACE` with the namespace of your cluster and `DATACENTER_NAME` with the name of your new datacenter.

- Restart the rebuild operation by resubmitting the `CassandraTask`.
Clients cannot connect to the new datacenter

If clients fail to connect:

- Verify that service endpoints are correct.
- Check network policies and firewall rules.
- Confirm that authentication credentials are valid.
- Verify that the datacenter has fully initialized and is ready.
- Validate connectivity manually using `cqlsh`:

  ```shell
  kubectl exec -it DATACENTER_NAME-RACK_NAME-sts-NODE_NUMBER -c cassandra -- cqlsh -e "DESCRIBE KEYSPACES"
  ```

  Replace the following:

  - `DATACENTER_NAME`: The name of your new datacenter
  - `RACK_NAME`: The rack name (for example, `rack1`)
  - `NODE_NUMBER`: The node number (for example, `0`)

  This confirms that the database is accessible and responding to CQL queries.
New datacenter nodes fail to start
If nodes in the new datacenter remain in a non-ready state (1/2 containers) and never fully start:
- Check whether you manually configured `seed_provider` in the `cassandraYaml` configuration. If so, this is the likely cause.

- Remove the manual `seed_provider` configuration and let Mission Control manage seeds automatically:

  ```shell
  kubectl patch missioncontrolcluster CLUSTER_NAME -n NAMESPACE --type=json \
    -p='[{"op": "remove", "path": "/spec/k8ssandra/cassandra/datacenters/0/config/cassandraYaml/seed_provider"}]'
  ```

  Replace the following:

  - `CLUSTER_NAME`: The name of your cluster
  - `NAMESPACE`: The namespace where your cluster is deployed

- The cluster automatically rolls out with the correct seed configuration. Monitor the rollout:

  ```shell
  kubectl get pods -n NAMESPACE -w
  ```

  Replace `NAMESPACE` with the namespace where your cluster is deployed.

- Verify that the cluster reaches 100% reconciliation:

  ```shell
  kubectl get missioncontrolcluster CLUSTER_NAME -n NAMESPACE -o jsonpath='{.status.progress}: {.status.progressMessage}'
  ```

  Replace the following:

  - `CLUSTER_NAME`: The name of your cluster
  - `NAMESPACE`: The namespace where your cluster is deployed
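A quick preflight over the manifest can catch a manual `seed_provider` before you deploy. A minimal sketch that walks a `MissionControlCluster` spec parsed from YAML into a dict, assuming the structure matches the manifest shown earlier:

```python
def has_manual_seed_provider(manifest: dict) -> bool:
    """True if any datacenter's cassandraYaml sets seed_provider."""
    dcs = (manifest.get("spec", {}).get("k8ssandra", {})
           .get("cassandra", {}).get("datacenters", []))
    return any("seed_provider" in (dc.get("config", {}) or {}).get("cassandraYaml", {})
               for dc in dcs)

# Hypothetical parsed manifest: additionalSeeds is set, seed_provider is not.
manifest = {"spec": {"k8ssandra": {"cassandra": {"datacenters": [
    {"config": {"cassandraYaml": {"additionalSeeds": ["10.0.0.1"]}}}]}}}}
print(has_manual_seed_provider(manifest))
```

Run this against the output of `yaml.safe_load` on your manifest file before applying it.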
Data inconsistencies after migration
If you discover data inconsistencies:
- Run a full repair on all nodes in the new datacenter using Reaper (recommended) or `nodetool repair`:

  - Using Reaper (recommended): Access the Reaper UI and run a repair for all keyspaces in the new datacenter. Reaper automatically coordinates repairs across all nodes.
  - Using nodetool: Run `nodetool repair -full` sequentially on each node:

    ```shell
    # Run on each node, one at a time
    kubectl exec -it DATACENTER_NAME-RACK_NAME-sts-NODE_NUMBER -c cassandra -- nodetool repair -full
    ```

    Replace the following:

    - `DATACENTER_NAME`: The name of your new datacenter
    - `RACK_NAME`: The rack name (for example, `rack1`, `rack2`, `rack3`)
    - `NODE_NUMBER`: The node number (for example, `0`, `1`, `2`)

    Repeat for all nodes in the datacenter.

- Verify that keyspace replication settings are correct for both datacenters.
- Check for any failed rebuild operations in the `CassandraTask` status.
- Review application logs for write failures during the migration.
- Verify that all nodes show `UN` (Up/Normal) status in `nodetool status`.
Limitations
- Mission Control doesn’t support in-place migration. You must create a new datacenter.
- Both datacenters must be able to communicate during the migration.
- The migration requires sufficient resources to run both datacenters simultaneously.
- Downtime might occur during client migration if you don’t carefully coordinate the process.