Add a datacenter to a cluster using a designated datacenter as a data source

Complete the following steps to add a datacenter to an existing cluster using a designated datacenter as a data source. In this procedure, a new datacenter, DC4, is added to an existing cluster with existing datacenters DC1, DC2, and DC3.

Prerequisites

Datacenter naming recommendations

This procedure requires an existing datacenter.

Ensure that your datacenter name is no more than 48 characters long and contains only alphanumeric characters, with no special characters or spaces. Using prohibited characters in a datacenter name causes server errors.
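The naming rules above can be checked before you configure the new datacenter. The following is an illustrative sketch, not part of the product; the validate_dc_name function name is our own.

```shell
#!/bin/sh
# Sketch: validate a proposed datacenter name against the rules above
# (48 characters or fewer, alphanumeric only -- no spaces or special characters).
validate_dc_name() {
  name="$1"
  if [ "${#name}" -gt 48 ]; then
    echo "invalid: longer than 48 characters"
    return 1
  fi
  case "$name" in
    *[!a-zA-Z0-9]*|"")
      echo "invalid: contains non-alphanumeric characters or is empty"
      return 1
      ;;
  esac
  echo "valid"
  return 0
}

validate_dc_name "DC4"     # prints: valid
validate_dc_name "DC 4"    # prints an invalid message (contains a space)
```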

Procedure

  1. In existing datacenters, if the SimpleStrategy replication strategy is in use, change it to the NetworkTopologyStrategy replication strategy.

    1. Use ALTER KEYSPACE to change the keyspace replication strategy to NetworkTopologyStrategy for the following keyspaces:

      ALTER KEYSPACE keyspace_name
      WITH REPLICATION = {
        'class' : 'NetworkTopologyStrategy',
        'DC1' : 3
      };
      • All application keyspaces

      • system_auth: stores authentication and authorization

      • system_distributed: stores repair history

      • (Optional) system_traces: stores trace information when CQL tracing is enabled

        Do not modify the replication strategy of any other system keyspaces.

    2. Use DESCRIBE SCHEMA to check the replication strategy of keyspaces in the cluster. Ensure that any existing keyspaces use the NetworkTopologyStrategy replication strategy.

      DESCRIBE SCHEMA ;
      CREATE KEYSPACE hcd_perf WITH replication =
      {'class': 'NetworkTopologyStrategy', 'DC1': '3'}  AND durable_writes = true;
      ...
      
      CREATE KEYSPACE hcd_leases WITH replication =
      {'class': 'NetworkTopologyStrategy', 'DC1': '3'}  AND durable_writes = true;
      ...
      
      CREATE KEYSPACE HCDFS WITH replication =
      {'class': 'NetworkTopologyStrategy', 'DC1': '3'}  AND durable_writes = true;
      ...
      
      CREATE KEYSPACE hcd_security WITH replication =
      {'class': 'NetworkTopologyStrategy', 'DC1': '3'}  AND durable_writes = true;
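To spot any keyspaces still using SimpleStrategy, you can scan a saved copy of the DESCRIBE SCHEMA output. The following sketch assumes you have saved the dump to a file first; the function name is our own.

```shell
# Sketch: list keyspaces in a saved DESCRIBE SCHEMA dump that still use
# SimpleStrategy. Assumes the dump was saved first, for example:
#   cqlsh -e "DESCRIBE SCHEMA" > schema.cql
# (This sketch matches single-line CREATE KEYSPACE statements.)
list_simple_strategy_keyspaces() {
  grep -i "SimpleStrategy" "$1" | sed -n "s/^CREATE KEYSPACE \([^ ]*\) .*/\1/p"
}

# Usage: list_simple_strategy_keyspaces schema.cql
```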
  2. Install HCD on each node in the new datacenter. Do not start the service or restart the node.

    Use the same version of HCD on all nodes in the cluster.

  3. Configure properties in cassandra.yaml on each new node, following the configuration of the other nodes in the cluster.

    1. Configure node properties:

      • -seeds: <internal_IP_address> of each seed node

    Include at least one seed node from each datacenter. DataStax recommends more than one seed node per datacenter, spread across more than one rack; three seed nodes per datacenter is the most common configuration. Do not make all nodes seed nodes.
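A seed_provider entry in cassandra.yaml might look like the following sketch, with one or more seed addresses from each datacenter. The IP addresses are placeholders taken from the sample output later in this procedure.

```yaml
# cassandra.yaml (excerpt) -- addresses are placeholders
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      # Include at least one seed from each datacenter, including the new DC4.
      - seeds: "10.200.175.11,10.200.175.113,10.200.175.111,10.200.175.114"
```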

  4. In the cassandra-rackdc.properties (GossipingPropertyFileSnitch) or cassandra-topology.properties (PropertyFileSnitch) file, assign datacenter and rack names to the IP addresses of each node, and assign a default datacenter name and rack name for unknown nodes.

    GossipingPropertyFileSnitch always loads cassandra-topology.properties when that file is present.

    The cassandra-topology.properties file should exist only if you are using PropertyFileSnitch. Otherwise, delete it from all nodes in all datacenters.

    # Transactional Node IP=Datacenter:Rack
    110.82.155.0=DC_Transactional:RAC1
    110.82.155.1=DC_Transactional:RAC1
    110.54.125.1=DC_Transactional:RAC2
    110.54.125.2=DC_Analytics:RAC1
    110.54.155.2=DC_Analytics:RAC2
    110.82.155.3=DC_Analytics:RAC1
    110.54.125.3=DC_Search:RAC1
    110.82.155.4=DC_Search:RAC2
    
    # default for unknown nodes
    default=dc_unknown:rac_unknown

    After making any changes in the configuration files, you must restart the node for the changes to take effect.
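With GossipingPropertyFileSnitch, each node instead carries its own cassandra-rackdc.properties file describing only that node. A node in the new datacenter might use a sketch like the following; the datacenter and rack values are examples.

```properties
# cassandra-rackdc.properties on a node in the new datacenter
dc=DC4
rack=RAC1
```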

  5. Make the following changes in the existing datacenters.

    1. On nodes in the existing datacenters, update the -seeds property in cassandra.yaml to include the seed nodes in the new datacenter.

    2. Add the new datacenter definition to the properties file for the type of snitch used in the cluster (cassandra-rackdc.properties or cassandra-topology.properties). If changing snitches, see Switching snitches.

  6. After you have installed and configured HCD on all nodes, start the nodes sequentially, beginning with the seed nodes. After starting each node, allow a delay of at least the value specified in ring_delay_ms before starting the next node, to prevent a cluster imbalance.

    Before starting a node, verify that the previous node is up and running by confirming that it has a nodetool status of UN. Failing to do so results in a cluster imbalance that cannot be fixed later. To check for imbalance, run nodetool status <keyspace_name> and examine the Owns column: a properly set up cluster reports ownership values that are similar to each other (±1%) for keyspaces where the replication factor per datacenter equals allocate_tokens_for_local_replication_factor.
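Checking for the UN status before starting the next node can be scripted against saved nodetool status output. The following is a sketch that assumes the output format shown in the next step; the function name is our own.

```shell
# Sketch: succeed only if the given address reports status "UN" in saved
# "nodetool status" output. The file layout follows nodetool's data rows:
# status/state in column 1, address in column 2.
node_is_un() {
  address="$1"
  status_file="$2"
  awk -v addr="$address" '$1 == "UN" && $2 == addr { found = 1 } END { exit !found }' "$status_file"
}

# Usage:
#   nodetool status > status.txt
#   node_is_un 10.200.175.11 status.txt && echo "safe to start the next node"
```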

  7. Run nodetool status to ensure that the new datacenter is up and running.

    nodetool status
    Datacenter: DC1
    ===============
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    -- Address       Load       Owns Host ID           Token                Rack
    UN 10.200.175.11 474.23 KiB ?    7297d21e-a04e-... -9223372036854775808 RAC1
    Datacenter: DC2
    ===============
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    -- Address        Load       Owns Host ID           Token                Rack
    UN 10.200.175.113 518.36 KiB ?    2ff7d46c-f084-... -9223372036854775798 RAC1
    Datacenter: DC3
    ===============
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    -- Address        Load       Owns Host ID           Token                Rack
    UN 10.200.175.111 961.56 KiB ?    ac43e602-ef09-... -9223372036854775788 RAC1
    Datacenter: DC4
    ===============
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    -- Address        Load       Owns Host ID           Token                Rack
    UN 10.200.175.114 361.56 KiB ?    ac43e602-ef09-... -9223372036854775688 RAC1
  8. After all nodes are running in the cluster and the client applications are datacenter aware, use cqlsh to alter the keyspaces to add the desired replication in the new datacenter.

    ALTER KEYSPACE keyspace_name
    WITH REPLICATION = {
      'class' : 'NetworkTopologyStrategy',
      'ExistingDC1' : 3,
      'NewDC2' : 2
    };
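Applying the same replication change to several keyspaces can be scripted. The following sketch only prints the ALTER statements; the keyspace names and replication factors are placeholders, and the output can be reviewed before piping it to cqlsh.

```shell
# Sketch: emit ALTER KEYSPACE statements that add the new datacenter DC4
# (RF 3) while keeping DC1's existing RF. Keyspaces and RFs are placeholders.
emit_alter_statements() {
  for ks in "$@"; do
    printf "ALTER KEYSPACE %s WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC4': 3};\n" "$ks"
  done
}

emit_alter_statements system_auth system_distributed
# Review the output, then apply it with, for example:
#   emit_alter_statements system_auth system_distributed | cqlsh
```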
  9. Run nodetool rebuild on each node in the new datacenter, specifying the source datacenter to stream data from.

    nodetool rebuild -dc <source_datacenter_name>

    To specify a rack name, use a colon (:) to separate the datacenter and rack names. For example:

    nodetool rebuild -dc DC1:RAC1

    To run a nodetool rebuild command and keep it running even after exiting the shell or terminal window, use the Unix nohup command:

    nohup nodetool rebuild -dc DC1:RAC1

    To run a nodetool rebuild command in the background and log the results, use:

    nohup nodetool rebuild -dc <source_datacenter_name> > rebuild.log 2>&1 &

    <source_datacenter_name>

    Replace <source_datacenter_name> with the name of the source datacenter to stream data from.

    > rebuild.log 2>&1 &

    Redirects both standard output and standard error (2>&1) of the command to a log file named rebuild.log. The final & runs the command in the background.

    The following commands replicate data from an existing rack in datacenter DC1 to the corresponding rack in the new datacenter DC2 on each DC2 node. This spreads the streaming overhead of the rebuild across more nodes. A rebuild per rack can increase the speed of the rebuild, but possibly at the cost of an increase in user latency. To decrease user latency, concentrate the streaming overhead of the rebuild on a smaller number of nodes. Rebuild each rack in the new datacenter from the same rack in the existing datacenter. The rack specifications correspond with the rack specifications in DC1:

    1. On RAC1 nodes in DC2 run:

      nodetool rebuild -dc DC1:RAC1
    2. On RAC2 nodes in DC2 run:

      nodetool rebuild -dc DC1:RAC2
    3. On RAC3 nodes in DC2 run:

      nodetool rebuild -dc DC1:RAC3

      Rebuilds can safely run in parallel, but doing so has potential performance tradeoffs. The nodes in the source datacenter stream data during a rebuild, which can impact the performance of applications that use that datacenter’s data. Run tests within the environment, and adjust the level of parallelism and streaming throttling to achieve the optimal balance of speed and performance.

    4. If the load on the source datacenter is your primary concern, run nodetool rebuild -dc on only one node at a time. This reduces the load on the source datacenter at the cost of slowing the rebuild process.

    5. If the speed of the rebuild is your primary concern, you can run the command on multiple nodes simultaneously. This requires that the cluster have the capacity to handle the extra I/O and network pressure.

  10. Monitor the rebuild progress for the new datacenter using nodetool netstats and examining the size of each node.

    The nodetool rebuild command issues a JMX call to the HCD node and waits for the rebuild to finish before returning to the command line. Once the JMX call is invoked, the rebuild process continues to run on the server even if the nodetool command stops. Typically there is not significant output from the nodetool rebuild command. Instead, monitor rebuild progress using nodetool netstats, as well as examining the data size of each node.

    The data load shown in nodetool status is updated only after a given source node is done streaming, and can appear to lag behind bytes reported on disk (e.g. du). Should any streaming errors occur, ERROR messages are logged to system.log and the rebuild stops. If a temporary failure occurs, you can run nodetool rebuild again and it will automatically skip any ranges that are already successfully streamed.

  11. Adjust stream throttling on the source datacenter as required to balance out network traffic. See nodetool setinterdcstreamthroughput.

    This setting is applied to the source nodes and throttles the bandwidth used for streaming. Running additional simultaneous rebuilds spreads the allocated bandwidth across more operations, slowing all of the simultaneous rebuilds.

  12. To confirm all rebuilds are successful, search for finished rebuild in the system.log of each node in the new datacenter.

    In rare cases the communication between two streaming nodes may hang, leaving the rebuild operation running but with no data streaming. Monitor streaming progress using nodetool netstats. If the streams are not making any progress, restart the node where nodetool rebuild was executed and run nodetool rebuild again using the original parameters specified.
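Confirming completion across nodes can be scripted against each node's system.log. A minimal sketch; the default log path is an assumption and should be adjusted for your installation.

```shell
# Sketch: succeed if the node's system.log records a finished rebuild.
# The default log path is an assumption; adjust it for your installation.
rebuild_finished() {
  log_file="${1:-/var/log/cassandra/system.log}"
  grep -q "finished rebuild" "$log_file"
}

# Usage: rebuild_finished /var/log/cassandra/system.log && echo "rebuild complete"
```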

  13. If you modified the inter-datacenter streaming throughput during the rebuild process, then return it to the original setting.

  14. Start the Mission Control Repair Service, if necessary.

© 2025 DataStax | Privacy policy | Terms of use | Manage Privacy Choices
