Adding a datacenter to a cluster using a designated datacenter as a data source

Add a datacenter to an existing cluster using a designated datacenter as a data source.

Complete the following steps to add a datacenter to an existing cluster. In this procedure, a new datacenter, DC4, is added to an existing cluster with three datacenters: DC1, DC2, and DC3.

cassandra.yaml

The location of the cassandra.yaml file depends on the type of installation:
  • Package installations: /etc/dse/cassandra/cassandra.yaml
  • Tarball installations: installation_location/resources/cassandra/conf/cassandra.yaml

cassandra-rackdc.properties

The location of the cassandra-rackdc.properties file depends on the type of installation:
  • Package installations: /etc/dse/cassandra/cassandra-rackdc.properties
  • Tarball installations: installation_location/resources/cassandra/conf/cassandra-rackdc.properties

system.log

The location of the system.log file is:
  • /var/log/cassandra/system.log

cassandra-topology.properties

The location of the cassandra-topology.properties file depends on the type of installation:
  • Package installations: /etc/dse/cassandra/cassandra-topology.properties
  • Tarball installations: installation_location/resources/cassandra/conf/cassandra-topology.properties

dse.yaml

The location of the dse.yaml file depends on the type of installation:
  • Package installations: /etc/dse/dse.yaml
  • Tarball installations: installation_location/resources/dse/conf/dse.yaml

Prerequisites

Important: Complete the prerequisite tasks outlined in Initializing a DataStax Enterprise cluster to prepare the environment.

Procedure

  1. In existing datacenters, if the SimpleStrategy replication strategy is in use, change it to the NetworkTopologyStrategy replication strategy.
    1. Use ALTER KEYSPACE to change the keyspace replication strategy to NetworkTopologyStrategy for the following keyspaces.
      ALTER KEYSPACE keyspace_name WITH REPLICATION = 
      {'class' : 'NetworkTopologyStrategy', 'DC1' : 3};
      • DSE security: system_auth, dse_security
      • DSE performance: dse_perf
      • DSE analytics: dse_leases, dsefs
      • System resources: system_traces, system_distributed
      • OpsCenter (if installed)
      • All keyspaces created by users
    2. Use DESCRIBE SCHEMA to check the replication strategy of keyspaces in the cluster. Ensure that any existing keyspaces use the NetworkTopologyStrategy replication strategy.
      DESCRIBE SCHEMA;
      CREATE KEYSPACE dse_perf WITH replication =
      {'class': 'NetworkTopologyStrategy', 'DC1': '3'} AND durable_writes = true;
      ...

      CREATE KEYSPACE dse_leases WITH replication =
      {'class': 'NetworkTopologyStrategy', 'DC1': '3'} AND durable_writes = true;
      ...

      CREATE KEYSPACE dsefs WITH replication =
      {'class': 'NetworkTopologyStrategy', 'DC1': '3'} AND durable_writes = true;
      ...

      CREATE KEYSPACE dse_security WITH replication =
      {'class': 'NetworkTopologyStrategy', 'DC1': '3'} AND durable_writes = true;
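      To confirm that no keyspace still uses SimpleStrategy, you can also pipe the schema through grep (a convenience sketch; add host and authentication options to cqlsh as your environment requires):
      cqlsh -e "DESCRIBE SCHEMA" | grep SimpleStrategy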
  2. Stop the OpsCenter Repair Service if it is running in the cluster. See Turning the Repair Service off.
  3. On each node in the new datacenter, install DSE. Do not start the service or restart the node.
    Important: Use the same version of DSE on all nodes in the cluster.
  4. Configure properties in cassandra.yaml on each new node, following the configuration of the other nodes in the cluster.
    Tip: Use the yaml_diff tool to review and make appropriate changes to the cassandra.yaml and dse.yaml configuration files.
    1. Configure node properties (a sample cassandra.yaml excerpt appears at the end of this step):
      • -seeds: internal_IP_address of each seed node
        Important: Include at least one seed node from each datacenter. DataStax recommends more than one seed node per datacenter, spread across more than one rack; three seed nodes per datacenter is the most common configuration. Do not make all nodes seed nodes.
      • auto_bootstrap: true

        This setting has been removed from the default configuration, but, if present, should be set to true.

      • listen_address: empty

        If not set, DSE asks the system for the local address, which is associated with its host name. In some cases, DSE does not produce the correct address, which requires specifying the listen_address.

      • endpoint_snitch: snitch

        See endpoint_snitch and snitches.

        Important: Do not use the DseSimpleSnitch. The DseSimpleSnitch (default) is used only for single-datacenter deployments (or single-zone deployments in public clouds), and does not recognize datacenter or rack information.
        Snitch                           Configuration file
        GossipingPropertyFileSnitch      cassandra-rackdc.properties file
        Amazon EC2 single-region snitch  cassandra-rackdc.properties file
        Amazon EC2 multi-region snitch   cassandra-rackdc.properties file
        Google Cloud Platform snitch     cassandra-rackdc.properties file
        PropertyFileSnitch               cassandra-topology.properties file
      • If using a cassandra.yaml or dse.yaml file from a previous version, check the Upgrade Guide for removed settings.
    2. Configure node architecture (all nodes in the datacenter must use the same type):

      Virtual node (vnode) allocation algorithm settings

      Note: See Virtual node (vnode) configuration for more details.

      Single-token architecture settings
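    The properties from this step might look like the following minimal cassandra.yaml excerpt; the cluster name, seed addresses, and snitch shown are illustrative placeholders, not required values:
    cluster_name: 'MyCluster'    # must match the name of the existing cluster
    seed_provider:
      - class_name: org.apache.cassandra.locator.SimpleSeedProvider
        parameters:
          # include at least one seed node from each datacenter
          - seeds: "10.200.175.11,10.200.175.113,10.200.175.111"
    listen_address:              # leave empty so DSE asks the system for the local address
    endpoint_snitch: GossipingPropertyFileSnitch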

  5. In the cassandra-rackdc.properties (GossipingPropertyFileSnitch) or cassandra-topology.properties (PropertyFileSnitch) file, assign datacenter and rack names to the IP addresses of each node, and assign a default datacenter name and rack name for unknown nodes.
    Note: Migration information: The GossipingPropertyFileSnitch always loads cassandra-topology.properties when the file is present. Remove the file from each node in any new datacenter and from any datacenter migrated from the PropertyFileSnitch.
    # Transactional Node IP=Datacenter:Rack
    110.82.155.0=DC_Transactional:RAC1
    110.82.155.1=DC_Transactional:RAC1
    110.54.125.1=DC_Transactional:RAC2
    110.54.125.2=DC_Analytics:RAC1
    110.54.155.2=DC_Analytics:RAC2
    110.82.155.3=DC_Analytics:RAC1
    110.54.125.3=DC_Search:RAC1
    110.82.155.4=DC_Search:RAC2
    
    # default for unknown nodes
    default=DC1:RAC1
    Note: After making any changes in the configuration files, you must restart the node for the changes to take effect.
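    For example, on a package installation the stale topology file can be removed as follows (for tarball installations, use the path shown at the top of this page):
    sudo rm /etc/dse/cassandra/cassandra-topology.properties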
  6. Make the following changes in the existing datacenters.
    1. On nodes in the existing datacenters, update the -seeds property in cassandra.yaml to include the seed nodes in the new datacenter.
    2. Add the new datacenter definition to the cassandra.yaml properties file for the type of snitch used in the cluster. If changing snitches, see Switching snitches.
  7. After you have installed and configured DataStax Enterprise on all nodes, start the seed nodes one at a time, and then start the rest of the nodes.
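    For example, the start command depends on the installation type:
    Package installations:
    sudo service dse start
    Tarball installations:
    installation_location/bin/dse cassandra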
  8. If necessary, install and configure DataStax Agents on each node in the new datacenter.
  9. Run nodetool status to ensure that the new datacenter is up and running.
    nodetool status
    Datacenter: DC1
    ===============
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address         Load       Owns    Host ID                               Token                     Rack
    UN  10.200.175.11   474.23 KiB  ?       7297d21e-a04e-4bb1-91d9-8149b03fb60a  -9223372036854775808     rack1
    Datacenter: DC2
    ===============
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address         Load       Owns    Host ID                               Token                     Rack
    UN  10.200.175.113  518.36 KiB  ?       2ff7d46c-f084-477e-aa53-0f4791c71dbc  -9223372036854775798     rack1
    Datacenter: DC3
    ===============
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address         Load       Owns    Host ID                               Token                     Rack
    UN  10.200.175.111  961.56 KiB  ?       ac43e602-ef09-4d0d-a455-3311f444198c  -9223372036854775788     rack1
    Datacenter: DC4
    ===============
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address         Load       Owns    Host ID                               Token                     Rack
    UN  10.200.175.114  361.56 KiB  ?       ac43e602-ef09-4d0d-a455-3322f444198c  -9223372036854775688     rack1
  10. After all nodes are running in the cluster and the client applications are datacenter aware, use cqlsh to alter the keyspaces to add the desired replication in the new datacenter.
    ALTER KEYSPACE keyspace_name WITH REPLICATION = 
    {'class' : 'NetworkTopologyStrategy', 'ExistingDC1' : 3, 'NewDC2' : 2};
    Warning: If client applications, including DSE Search and DSE Analytics, are not properly configured, they might connect to the new datacenter before it is online. Incorrect configuration results in connection exceptions, timeouts, and/or inconsistent data.
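    For example, in this procedure's scenario, a keyspace replicated in DC1, DC2, and DC3 might be extended to the new DC4 as follows (the replication factors are illustrative):
    ALTER KEYSPACE keyspace_name WITH REPLICATION =
    {'class' : 'NetworkTopologyStrategy', 'DC1' : 3, 'DC2' : 3, 'DC3' : 3, 'DC4' : 2};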
  11. Run nodetool rebuild on each node in the new datacenter, specifying the corresponding datacenter/rack from the source datacenter.
    nodetool rebuild -dc source_datacenter_name:source_datacenter_rack_name
    The following commands replicate data from an existing datacenter DC1 to the new datacenter DC2 on each DC2 node. The rack specifications correspond with the rack specifications in DC1:
    On DC2:RACK1 nodes run:
    nodetool rebuild -dc DC1:RACK1
    On DC2:RACK2 nodes run:
    nodetool rebuild -dc DC1:RACK2
    On DC2:RACK3 nodes run:
    nodetool rebuild -dc DC1:RACK3
    1. Use nodetool rebuild -dc on one or more nodes at the same time. Running on one node at a time reduces the impact on the source datacenter.
    2. Alternatively, run the command on multiple nodes simultaneously when the cluster can handle the extra I/O and network pressure.
      Rebuild can safely run in parallel, but with potential performance tradeoffs: the nodes in the source datacenter stream data during the rebuild, so application performance involving that datacenter's data may be affected. Run tests within the environment, adjusting the level of parallelism and stream throttling, to strike the optimal balance of speed and performance. A serial approach is sketched below.
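      A minimal sketch of the serial approach, run from an administrative host (the host names are hypothetical placeholders):
      # rebuild the DC2:RACK1 nodes one at a time; replace the host names with your own
      for host in dc2-rack1-node1 dc2-rack1-node2; do
        ssh "$host" nodetool rebuild -dc DC1:RACK1
      done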
  12. Monitor the rebuild progress for the new datacenter using nodetool netstats and by examining the data size of each node.
    The nodetool rebuild command issues a JMX call to the DSE node and waits for the rebuild to finish before returning to the command line. Once the JMX call is invoked, the rebuild process continues on the server regardless of the state of the nodetool process (the rebuild keeps running even if nodetool dies). The nodetool rebuild command itself typically produces little output; instead, monitor progress with nodetool netstats and by examining the data size of each node.
    Note: The data load shown in nodetool status will only be updated after a given source node is done streaming, so it will appear to lag behind bytes reported on disk (e.g. du). If any streaming errors occur, ERROR messages will be logged to system.log and the rebuild will stop. In the event of temporary failure, nodetool rebuild can be re-run and skips any ranges that were already successfully streamed.
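    For example, the following commands show active streaming sessions and the on-disk data size (the data directory shown is the default location; adjust it if your installation differs):
    nodetool netstats
    du -sh /var/lib/cassandra/data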
  13. Adjust stream throttling on the source datacenter as required to balance out network traffic. See nodetool setstreamthroughput.
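    For example, to limit outbound streaming to 100 Mb/s on a source node (the value is illustrative; tune it for your network):
    nodetool setstreamthroughput 100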
  14. Confirm that all rebuilds are successful by searching for finished rebuild in the system.log of each node in the new datacenter.
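    For example, using the default log location:
    grep -i "finished rebuild" /var/log/cassandra/system.log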
    Note: In rare cases the communication between two streaming nodes may hang, leaving the rebuild operation alive but with no data streaming. Monitor streaming progress using nodetool netstats, and, if the streams are not making any progress, restart the node where nodetool rebuild was executed and re-run nodetool rebuild with the same parameters used originally.
  15. Start the DataStax Agent on each node in the new datacenter if necessary.
  16. Start the OpsCenter Repair Service if necessary. See Turning the Repair Service on.