Adding a datacenter to a cluster using a designated datacenter as a data source

Complete the following steps to add a datacenter to an existing cluster using a designated datacenter as a data source. In this procedure, a new datacenter, DC4 is added to an existing cluster with existing datacenters DC1, DC2, and DC3.

Prerequisites

Complete the prerequisite tasks outlined in Initializing a DataStax Enterprise cluster to prepare the environment.

Datacenter naming recommendations

This procedure requires an existing datacenter.

Avoid using special characters when naming a datacenter. Using prohibited characters in a datacenter name causes server errors.

Ensure that your datacenter name is no more than 48 characters long, only uses lowercase alphanumeric characters, and does not contain special characters or spaces.

Procedure

  1. In existing datacenters, if the SimpleStrategy replication strategy is in use, change it to the NetworkTopologyStrategy replication strategy.

    1. Use ALTER KEYSPACE to change the keyspace replication strategy to NetworkTopologyStrategy for each keyspace that currently uses SimpleStrategy.

      ALTER KEYSPACE keyspace_name WITH REPLICATION =
      {'class' : 'NetworkTopologyStrategy', 'DC1' : 3};
    2. Use DESCRIBE SCHEMA to check the replication strategy of keyspaces in the cluster. Ensure that any existing keyspaces use the NetworkTopologyStrategy replication strategy.

      DESCRIBE SCHEMA ;
      CREATE KEYSPACE dse_perf WITH replication =
      {'class': 'NetworkTopologyStrategy', 'DC1': '3'}  AND durable_writes = true;
      ...
      
      CREATE KEYSPACE dse_leases WITH replication =
      {'class': 'NetworkTopologyStrategy', 'DC1': '3'}  AND durable_writes = true;
      ...
      
      CREATE KEYSPACE dsefs WITH replication =
      {'class': 'NetworkTopologyStrategy', 'DC1': '3'}  AND durable_writes = true;
      ...
      
      CREATE KEYSPACE dse_security WITH replication =
      {'class': 'NetworkTopologyStrategy', 'DC1': '3'}  AND durable_writes = true;
  2. Stop the OpsCenter Repair Service if it is running in the cluster. See Turning the Repair Service off.

  3. Install DSE on each node in the new datacenter. Do not start the service or restart the node.

    Use the same version of DSE on all nodes in the cluster.
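
    To confirm that the versions match, you can, for example, query the DSE version on each node; the hostnames below are placeholders for your own node addresses:

    for host in dc4-node1 dc4-node2 dc4-node3; do
      # Print the installed DSE version on each new node
      ssh "$host" 'dse -v'
    done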

  4. Configure properties in cassandra.yaml on each new node, following the configuration of the other nodes in the cluster.

    If you used Lifecycle Manager to provision the nodes, configuration is performed automatically.

    Use the yaml_diff tool to review and make appropriate changes to the cassandra.yaml and dse.yaml configuration files.

    1. Configure node properties in cassandra.yaml, following the values used on the existing nodes (see the example sketch after this list).

    2. Configure node architecture (all nodes in the datacenter must use the same type):

      Virtual node (vnode) allocation algorithm settings

      See Virtual node (vnode) configuration for more details.

      Single-token architecture settings
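
    The exact values depend on your deployment. The following is a minimal, illustrative cassandra.yaml sketch for a vnode-based node; the cluster name, addresses, and seed list are placeholders to replace with your own:

    # Must match the name of the existing cluster
    cluster_name: 'MyCluster'
    # This node's own IP address
    listen_address: 10.200.175.114
    endpoint_snitch: GossipingPropertyFileSnitch
    # Vnode settings; for the single-token architecture, set initial_token instead
    num_tokens: 8
    allocate_tokens_for_local_replication_factor: 3
    seed_provider:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              - seeds: "10.200.175.11,10.200.175.113"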

  5. In the cassandra-rackdc.properties (GossipingPropertyFileSnitch) or cassandra-topology.properties (PropertyFileSnitch) file, assign datacenter and rack names to the IP addresses of each node, and assign a default datacenter name and rack name for unknown nodes.

    Migration information: The GossipingPropertyFileSnitch always loads cassandra-topology.properties when the file is present. Remove the file from each node in any new datacenter, and from any datacenter migrated from the PropertyFileSnitch (see the removal example at the end of this step).

    # Transactional Node IP=Datacenter:Rack
    110.82.155.0=DC_Transactional:RAC1
    110.82.155.1=DC_Transactional:RAC1
    110.54.125.1=DC_Transactional:RAC2
    110.54.125.2=DC_Analytics:RAC1
    110.54.155.2=DC_Analytics:RAC2
    110.82.155.3=DC_Analytics:RAC1
    110.54.125.3=DC_Search:RAC1
    110.82.155.4=DC_Search:RAC2
    
    # default for unknown nodes
    default=DC1:RAC1

    After making any changes in the configuration files, you must restart the node for the changes to take effect.
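
    If you are using the GossipingPropertyFileSnitch, one way to remove the legacy topology file mentioned in the migration note is, for example (the path assumes a package installation and may differ in your environment):

    sudo rm /etc/dse/cassandra/cassandra-topology.properties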

  6. Make the following changes in the existing datacenters.

    1. On nodes in the existing datacenters, update the -seeds property in cassandra.yaml to include the seed nodes in the new datacenter, as shown in the example after this list.

    2. Add the new datacenter definition to the cassandra.yaml properties file for the type of snitch used in the cluster. If changing snitches, see Switching snitches.
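
    For example, the seed list in cassandra.yaml on existing nodes might be extended to include a seed node from the new datacenter (the addresses are illustrative only):

    seed_provider:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              # Existing seeds plus at least one node from the new DC4
              - seeds: "10.200.175.11,10.200.175.113,10.200.175.114"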

  7. After you have installed and configured DataStax Enterprise on all nodes, start the nodes sequentially, beginning with the seed nodes. After starting each node, allow a delay of at least the value specified in ring_delay_ms before starting the next node, to prevent a cluster imbalance.

    Before starting a node, ensure that the previous node is up and running by verifying that it has a nodetool status of UN. Failing to do so results in cluster imbalance that cannot be fixed later. Cluster imbalance can be seen by running nodetool status $keyspace and looking at the ownership column. A properly set up cluster reports ownership values similar to each other (±1%), for keyspaces whose replication factor per datacenter equals allocate_tokens_for_local_replication_factor.
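
    For example, a simple way to wait until the previously started node reports Up/Normal before starting the next one (the address is a placeholder):

    # Poll nodetool status until the given node shows state UN (Up/Normal)
    until nodetool status | grep -q '^UN.*10\.200\.175\.114'; do
      sleep 30
    done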

  8. Install and configure DataStax Agents on each node in the new datacenter if necessary: Installing DataStax Agents 6.8

  9. Run nodetool status to ensure that the new datacenter is up and running.

    nodetool status
    Datacenter: DC1
    ===============
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address         Load       Owns    Host ID                               Token                     Rack
    UN  10.200.175.11   474.23 KiB  ?       7297d21e-a04e-4bb1-91d9-8149b03fb60a  -9223372036854775808     rack1
    Datacenter: DC2
    ===============
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address         Load       Owns    Host ID                               Token                     Rack
    UN  10.200.175.113  518.36 KiB  ?       2ff7d46c-f084-477e-aa53-0f4791c71dbc  -9223372036854775798     rack1
    Datacenter: DC3
    ===============
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address         Load       Owns    Host ID                               Token                     Rack
    UN  10.200.175.111  961.56 KiB  ?       ac43e602-ef09-4d0d-a455-3311f444198c  -9223372036854775788     rack1
    Datacenter: DC4
    ===============
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address         Load       Owns    Host ID                               Token                     Rack
    UN  10.200.175.114  361.56 KiB  ?       ac43e602-ef09-4d0d-a455-3322f444198c  -9223372036854775688     rack1
  10. Disable nodesync on all nodes in the new datacenter to prevent repair work. You should also stop the OpsCenter Repair Service to prevent repair work for tables that do not have nodesync enabled. Run the following command on each node in the new datacenter:

    nodetool nodesyncservice disable
  11. After all nodes are running in the cluster and the client applications are datacenter aware, use cqlsh to alter the keyspaces to add the desired replication in the new datacenter.

    ALTER KEYSPACE keyspace_name WITH REPLICATION =
    {'class' : 'NetworkTopologyStrategy', 'ExistingDC1' : 3, 'NewDC2' : 2};

    If client applications, including DSE Search and DSE Analytics, are not properly configured, they might connect to the new datacenter before it is online. Incorrect configuration results in connection exceptions, timeouts, and/or inconsistent data.
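
    To confirm the change, you can inspect the keyspace definition and verify that the new datacenter appears with the expected replication factor. For example, in cqlsh (keyspace_name is a placeholder):

    DESCRIBE KEYSPACE keyspace_name ;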

  12. Run nodetool rebuild on each node in the new datacenter, specifying the source datacenter to stream data from.

    nodetool rebuild -dc <source_datacenter_name>

    To specify a rack name, use a colon (:) to separate the datacenter and rack names. For example:

    nodetool rebuild -dc DC1:RACK1

    To run a nodetool rebuild command and keep it running even after exiting the shell or terminal window, prefix it with nohup:

    nohup nodetool rebuild -dc DC1:RACK1

    To run a nodetool rebuild command in the background and log the results, use:

    nohup nodetool rebuild -dc <source_datacenter_name> > rebuild.log 2>&1 &

    <source_datacenter_name>

    Replace <source_datacenter_name> with the name of the existing datacenter to stream data from.

    > rebuild.log 2>&1 &

    Redirects the output of the command to a log file named rebuild.log. Both standard output and standard error are written to the same file (2>&1), and the trailing & runs the command in the background.

    The following commands replicate data from an existing rack in datacenter DC1 to the corresponding rack in the new datacenter DC2 on each DC2 node. This spreads the streaming overhead of the rebuild across more nodes. A rebuild per rack can increase the speed of the rebuild, but possibly at the cost of an increase in user latency. To decrease user latency, concentrate the streaming overhead of the rebuild on a smaller number of nodes. Rebuild each rack in the new datacenter from the same rack in the existing datacenter. The rack specifications correspond with the rack specifications in DC1:

    1. On DC2:RACK1 nodes run:

      nodetool rebuild -dc DC1:RACK1
    2. On DC2:RACK2 nodes run:

      nodetool rebuild -dc DC1:RACK2
    3. On DC2:RACK3 nodes run:

      nodetool rebuild -dc DC1:RACK3

      Rebuilds can safely run in parallel, but doing so has performance tradeoffs. The nodes in the source datacenter are streaming data, which can affect application performance involving that datacenter's data. Run tests within the environment, and adjust the level of parallelism and stream throttling to achieve the best balance of speed and performance.

    4. If the load on the source datacenter is your primary concern, run nodetool rebuild -dc on only one node at a time, as sketched after this list. This reduces the load on the source datacenter at the cost of slowing the rebuild process.

    5. If the speed of the rebuild is your primary concern, you can run the command on multiple nodes simultaneously. This requires that the cluster have the capacity to handle the extra I/O and network pressure.
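
    The following sketch illustrates the serial approach from item 4; the hostnames are placeholders, and because nodetool rebuild blocks until streaming completes, each node is rebuilt one at a time:

    for host in dc4-node1 dc4-node2 dc4-node3; do
      ssh "$host" 'nodetool rebuild -dc DC1'
    done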

  13. Monitor the rebuild progress for the new datacenter using nodetool netstats and examining the size of each node.

    The nodetool rebuild command issues a JMX call to the DSE node and waits for rebuild to finish before returning to the command line. Once the JMX call is invoked, the rebuild process continues to run on the server even if the nodetool command stops. Typically there is not significant output from the nodetool rebuild command. Instead, monitor rebuild progress using nodetool netstats, as well as examining the data size of each node.

    The data load shown in nodetool status is updated only after a given source node is done streaming, and can appear to lag behind bytes reported on disk (e.g. du). Should any streaming errors occur, ERROR messages are logged to system.log and the rebuild stops. If a temporary failure occurs, you can run nodetool rebuild again and skip any ranges that are already successfully streamed.
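
    For example, on a node in the new datacenter (the data directory shown is the default and may differ in your installation):

    # Show active streaming sessions on this node
    nodetool netstats
    # Check on-disk data size
    du -sh /var/lib/cassandra/data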

  14. Adjust stream throttling on the source datacenter as required to balance out network traffic. See nodetool setinterdcstreamthroughput.

    This setting is applied to the source nodes and throttles the bandwidth used for streaming. Running additional simultaneous rebuilds spreads the allocated bandwidth across more operations and slows all of them.
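
    For example, to cap inter-datacenter streaming on a source node and confirm the setting (the value is illustrative, in megabits per second):

    nodetool setinterdcstreamthroughput 100
    nodetool getinterdcstreamthroughput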

  15. To confirm that all rebuilds succeeded, search for "finished rebuild" in the system.log of each node in the new datacenter (see the example after this step).

    In rare cases the communication between two streaming nodes may hang, leaving the rebuild operation running but with no data streaming. Monitor streaming progress using nodetool netstats. If the streams are not making any progress, restart the node where nodetool rebuild was executed and run nodetool rebuild again using the original parameters specified.
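
    For example, one way to search each node's log (the path shown is the default for package installations and may differ in your environment):

    grep -i 'finished rebuild' /var/log/cassandra/system.log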

  16. If you modified the inter-datacenter streaming throughput during the rebuild process, then return it to the original setting.

  17. Re-enable nodesync on all nodes in the new datacenter. You should also re-enable the OpsCenter Repair Service.

    nodetool nodesyncservice enable
  18. Start the DataStax Agent on each node in the new datacenter, if necessary.

  19. Start the OpsCenter Repair Service, if necessary. See Turning the Repair Service on.
