Adding a datacenter to a cluster using a designated datacenter as a data source
Complete the following steps to add a datacenter to an existing cluster using a designated datacenter as a data source. In this procedure, a new datacenter, DC4, is added to an existing cluster with existing datacenters DC1, DC2, and DC3.
Where is the cassandra.yaml file?

The location of the cassandra.yaml file depends on the type of installation:

| Installation Type | Location |
|---|---|
| Package installations + Installer-Services installations | /etc/dse/cassandra/cassandra.yaml |
| Tarball installations + Installer-No Services installations | installation_location/resources/cassandra/conf/cassandra.yaml |
Where is the dse.yaml file?

The location of the dse.yaml file depends on the type of installation:

| Installation Type | Location |
|---|---|
| Package installations + Installer-Services installations | /etc/dse/dse.yaml |
| Tarball installations + Installer-No Services installations | installation_location/resources/dse/conf/dse.yaml |
Where is the system.log file?
The location of the system.log file is /var/log/cassandra/system.log
Where is the cassandra-topology.properties file?

The location of the cassandra-topology.properties file depends on the type of installation:

| Installation Type | Location |
|---|---|
| Package installations + Installer-Services installations | /etc/dse/cassandra/cassandra-topology.properties |
| Tarball installations + Installer-No Services installations | installation_location/resources/cassandra/conf/cassandra-topology.properties |
Where is the cassandra-rackdc.properties file?

The location of the cassandra-rackdc.properties file depends on the type of installation:

| Installation Type | Location |
|---|---|
| Package installations + Installer-Services installations | /etc/dse/cassandra/cassandra-rackdc.properties |
| Tarball installations + Installer-No Services installations | installation_location/resources/cassandra/conf/cassandra-rackdc.properties |
Prerequisites

Complete the prerequisite tasks outlined in Initializing a DataStax Enterprise cluster to prepare the environment.
Datacenter naming recommendations

This procedure requires an existing datacenter.

Avoid using special characters when naming a datacenter; prohibited characters in a datacenter name cause server errors. Ensure that the datacenter name is no more than 48 characters long and uses only lowercase alphanumeric characters, with no special characters or spaces.
Procedure
- Configure every keyspace that uses `SimpleStrategy` to use the `NetworkTopologyStrategy` replication strategy, including (but not restricted to) the following keyspaces. If `SimpleStrategy` was used previously, this step is required to configure `NetworkTopologyStrategy`.
  - Use `ALTER KEYSPACE` to change the keyspace replication strategy to `NetworkTopologyStrategy` for the following keyspaces:

    ```
    ALTER KEYSPACE keyspace_name WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'ExistingDC1' : 3};
    ```
    - DSE security: `system_auth`, `dse_security`
    - DSE performance: `dse_perf`
    - DSE analytics: `dse_leases`, `dsefs`
    - System resources: `system_traces`, `system_distributed`
    - OpsCenter (if installed)
    - All keyspaces created by users
  - Use `DESCRIBE SCHEMA` to check the replication strategy of keyspaces in the cluster. Ensure that any existing keyspaces use the `NetworkTopologyStrategy` replication strategy:

    ```
    DESCRIBE SCHEMA ;
    ```

    ```
    CREATE KEYSPACE dse_perf WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '3'} AND durable_writes = true;
    ...
    CREATE KEYSPACE dse_leases WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '3'} AND durable_writes = true;
    ...
    CREATE KEYSPACE dsefs WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '3'} AND durable_writes = true;
    ...
    CREATE KEYSPACE dse_security WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '3'} AND durable_writes = true;
    ```
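  The repeated `ALTER KEYSPACE` statements can be scripted. A minimal sketch in bash, assuming `cqlsh` can connect to a local node without extra options and that the existing datacenter is named `DC1` with a target replication factor of 3 (adjust both to your cluster):

  ```bash
  # Hypothetical keyspace list; include every keyspace that still uses SimpleStrategy.
  for ks in system_auth dse_security dse_perf system_traces system_distributed; do
    cqlsh -e "ALTER KEYSPACE $ks WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'DC1': 3};"
  done
  ```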
- Stop the OpsCenter Repair Service if it is running in the cluster. See Turning the Repair Service off.
- In the new datacenter, install DSE on each new node. Do not start the service or restart the node.

  Use the same version of DSE on all nodes in the cluster.
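  One way to confirm that every node runs the same DSE version is a quick remote check. A hedged sketch, with hypothetical node addresses:

  ```bash
  # `dse -v` prints the installed DSE version on each node.
  for host in 10.200.175.114 10.200.175.115 10.200.175.116; do
    echo -n "$host: "; ssh "$host" dse -v
  done
  ```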
- Configure properties in `cassandra.yaml` on each new node, following the configuration of the other nodes in the cluster.

  Use the `yaml_diff` tool to review and make appropriate changes to the `cassandra.yaml` and `dse.yaml` configuration files.

  - Configure node properties:
    - `-seeds`: internal_IP_address of each seed node

      Include at least one seed node from each datacenter. DataStax recommends more than one seed node per datacenter, in more than one rack. Do not make all nodes seed nodes.

    - `auto_bootstrap: true`

      This setting has been removed from the default configuration but, if present, should be set to `true`.

    - `listen_address`: empty

      If not set, DSE asks the system for the local address, which is associated with its host name. In some cases, DSE does not produce the correct address, which requires specifying the `listen_address`.

    - `endpoint_snitch`: snitch

      See endpoint_snitch and snitches.

      Do not use the `DseSimpleSnitch` (default); it is used only for single-datacenter deployments (or single-zone deployments in public clouds) and does not recognize datacenter or rack information.

      Snitch configuration files:

      | Snitch | Configuration file |
      |---|---|
      | GossipingPropertyFileSnitch | cassandra-rackdc.properties |
      | PropertyFileSnitch | cassandra-topology.properties |

  - If using a `cassandra.yaml` or `dse.yaml` file from a previous version, check the Upgrade Guide for removed settings.
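  After editing, it can help to spot-check the settings this step touches. A minimal sketch, assuming the package-installation path (tarball installations use installation_location/resources/cassandra/conf/cassandra.yaml):

  ```bash
  # Print the relevant settings, with line numbers.
  grep -nE '^(auto_bootstrap|listen_address|endpoint_snitch|num_tokens)|seeds:' \
    /etc/dse/cassandra/cassandra.yaml
  ```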
- Configure node architecture (all nodes in the datacenter must use the same type):

  Virtual node (vnode) allocation algorithm settings:

  - Set `num_tokens` to 8 (recommended).
  - Set `allocate_tokens_for_local_replication_factor` to the target replication factor for keyspaces in the new datacenter. If the keyspace RF varies, alternate the settings to use all the replication factors.
  - Comment out the `initial_token` property.

  DataStax recommends not using vnodes with DSE Search. However, if you decide to use vnodes with DSE Search, do not use more than 8 vnodes and ensure that the `allocate_tokens_for_local_replication_factor` option in `cassandra.yaml` is correctly configured for your environment. For more information, refer to Virtual node (vnode) configuration.

  Single-token architecture settings:

  - Generate the initial token for each node and set this value for the `initial_token` property; a token-generation sketch follows this step. See Adding or replacing single-token nodes for more information.
  - Comment out both `num_tokens` and `allocate_tokens_for_local_replication_factor`.
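  For the single-token architecture, evenly spaced tokens can be computed ahead of time. A sketch, assuming the default Murmur3Partitioner and a hypothetical three-node datacenter:

  ```bash
  # Tokens are spaced evenly across the Murmur3 range [-2^63, 2^63).
  N=3
  for i in $(seq 0 $((N - 1))); do
    echo "node $i initial_token: $(echo "($i * 2^64) / $N - 2^63" | bc)"
  done
  ```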
- In the `cassandra-rackdc.properties` (GossipingPropertyFileSnitch) or `cassandra-topology.properties` (PropertyFileSnitch) file, assign datacenter and rack names to the IP addresses of each node, and assign a default datacenter name and rack name for unknown nodes.

  Migration information: The `GossipingPropertyFileSnitch` always loads `cassandra-topology.properties` when the file is present. Remove the file from each node on any new cluster, or any cluster migrated from the `PropertyFileSnitch`.

  ```
  # Transactional Node IP=Datacenter:Rack
  110.82.155.0=DC_Transactional:RAC1
  110.82.155.1=DC_Transactional:RAC1
  110.54.125.1=DC_Transactional:RAC2
  110.54.125.2=DC_Analytics:RAC1
  110.54.155.2=DC_Analytics:RAC2
  110.82.155.3=DC_Analytics:RAC1
  110.54.125.3=DC_Search:RAC1
  110.82.155.4=DC_Search:RAC2

  # default for unknown nodes
  default=DC1:RAC1
  ```

  After making any changes in the configuration files, you must restart the node for the changes to take effect.
- Make the following changes in the existing datacenters:
  - On nodes in the existing datacenters, update the `-seeds` property in `cassandra.yaml` to include the seed nodes in the new datacenter.
  - Add the new datacenter definition to the properties file for the type of snitch used in the cluster. If changing snitches, see Switching snitches.
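  A hedged sketch for the seed-list update on an existing node; the IP list is hypothetical and should include at least one seed from each datacenter, including the new one:

  ```bash
  # Edit the seeds line in place, keeping a backup of the original file.
  sudo sed -i.bak 's/- seeds: ".*"/- seeds: "10.200.175.11,10.200.175.113,10.200.175.114"/' \
    /etc/dse/cassandra/cassandra.yaml
  ```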
- After you have installed and configured DSE on all nodes, start the seed nodes one at a time, and then start the rest of the nodes:
  - Package installations: Starting DataStax Enterprise as a service
  - Tarball installations: Starting DataStax Enterprise as a stand-alone process
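  As a reference sketch, the start commands differ by installation type:

  ```bash
  # Package installations run DSE as a service.
  sudo service dse start

  # Tarball installations start DSE as a stand-alone process,
  # run from the installation location:
  # bin/dse cassandra
  ```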
- Install and configure DataStax Agents on each node in the new datacenter if necessary: Installing DataStax Agents.
- Run `nodetool status` to ensure that the new datacenter is up and running:

  ```
  nodetool status
  ```

  ```
  Datacenter: DC1
  ===============
  Status=Up/Down
  |/ State=Normal/Leaving/Joining/Moving
  --  Address         Load        Owns  Host ID                               Token                 Rack
  UN  10.200.175.11   474.23 KiB  ?     7297d21e-a04e-4bb1-91d9-8149b03fb60a  -9223372036854775808  rack1
  Datacenter: DC2
  ===============
  Status=Up/Down
  |/ State=Normal/Leaving/Joining/Moving
  --  Address         Load        Owns  Host ID                               Token                 Rack
  UN  10.200.175.113  518.36 KiB  ?     2ff7d46c-f084-477e-aa53-0f4791c71dbc  -9223372036854775798  rack1
  Datacenter: DC3
  ===============
  Status=Up/Down
  |/ State=Normal/Leaving/Joining/Moving
  --  Address         Load        Owns  Host ID                               Token                 Rack
  UN  10.200.175.111  961.56 KiB  ?     ac43e602-ef09-4d0d-a455-3311f444198c  -9223372036854775788  rack1
  Datacenter: DC4
  ===============
  Status=Up/Down
  |/ State=Normal/Leaving/Joining/Moving
  --  Address         Load        Owns  Host ID                               Token                 Rack
  UN  10.200.175.114  361.56 KiB  ?     ac43e602-ef09-4d0d-a455-3322f444198c  -9223372036854775688  rack1
  ```
- After all nodes are running in the cluster and the client applications are datacenter aware, use `cqlsh` to alter the keyspaces to add the desired replication in the new datacenter:

  ```
  ALTER KEYSPACE keyspace_name WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'ExistingDC1' : 3, 'NewDC2' : 2};
  ```

  If client applications, including DSE Search and DSE Analytics, are not properly configured, they might connect to the new datacenter before it is online. Incorrect configuration results in connection exceptions, timeouts, and/or inconsistent data.
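  As in the earlier replication-strategy step, the per-keyspace statements can be scripted. A minimal sketch with hypothetical keyspace names and replication factors, assuming `DC1` is an existing datacenter and `DC4` the new one:

  ```bash
  for ks in system_auth dse_security dse_perf system_traces system_distributed my_app_keyspace; do
    cqlsh -e "ALTER KEYSPACE $ks WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC4': 3};"
  done
  ```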
- Run `nodetool rebuild` on each node in the new datacenter, specifying the corresponding datacenter and rack from the source datacenter:

  ```
  nodetool rebuild -dc source_datacenter_name:source_datacenter_rack_name
  ```

  The following commands replicate data from an existing datacenter DC1 to the new datacenter DC2 on each DC2 node. The rack specifications correspond with the rack specifications in DC1.

  On DC2:RACK1 nodes, run:

  ```
  nodetool rebuild -dc DC1:RACK1
  ```

  On DC2:RACK2 nodes, run:

  ```
  nodetool rebuild -dc DC1:RACK2
  ```

  On DC2:RACK3 nodes, run:

  ```
  nodetool rebuild -dc DC1:RACK3
  ```

  - Use `nodetool rebuild -dc` on one or more nodes at the same time. Running on one node at a time reduces the impact on the source datacenter.
  - Alternatively, run the command on multiple nodes simultaneously when the cluster can handle the extra I/O and network pressure.

  Rebuild can safely be run in parallel, but has potential performance tradeoffs. The nodes in the source datacenter are streaming data, so application performance involving that datacenter's data can be impacted. Run tests within the environment, adjusting the level of parallelism and stream throttling to strike the optimal balance of speed and performance.
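  A hedged sketch that drives the rebuilds serially from an operator host; the node-to-source pairs are hypothetical:

  ```bash
  # Run one rebuild at a time to limit load on the source datacenter.
  for pair in 10.200.175.114=DC1:RACK1 10.200.175.115=DC1:RACK2 10.200.175.116=DC1:RACK3; do
    node=${pair%%=*}; src=${pair#*=}
    echo "rebuilding $node from $src"
    ssh "$node" "nodetool rebuild -dc $src"
  done
  ```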
- Monitor the rebuild progress for the new datacenter using `nodetool netstats` and by examining the size of each node.

  The `nodetool rebuild` command issues a JMX call to the node and waits for the rebuild to finish before returning to the command line. Once the JMX call is invoked, the rebuild process continues on the server regardless of the `nodetool rebuild` process (the rebuild continues to run even if nodetool dies). There is typically not significant output from the `nodetool rebuild` command itself; instead, monitor rebuild progress via `nodetool netstats`, as well as by examining the data size of each node.

  The data load shown in `nodetool status` is only updated after a given source node is done streaming, so it appears to lag behind the bytes reported on disk. If any streaming errors occur, `ERROR` messages are logged to `system.log` and the rebuild stops. In the event of temporary failure, `nodetool rebuild` can be re-run and skips any ranges that were already successfully streamed.
Adjust stream throttling on the source datacenter as required to balance out network traffic. See
nodetool setstreamthroughput. -
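  For example, on source-datacenter nodes (the value is in Mb/s; 200 is an illustrative figure, and 0 disables throttling):

  ```bash
  nodetool setstreamthroughput 200
  ```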
- Confirm that all rebuilds are successful by searching for `finished rebuild` in the `system.log` of each node in the new datacenter.

  In rare cases, the communication between two streaming nodes may hang, leaving the rebuild operation alive but with no data streaming. Monitor streaming progress using `nodetool netstats`; if the streams are not making any progress, restart the node where `nodetool rebuild` was executed, and re-run `nodetool rebuild` with the same parameters that were used originally.
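  For example, on each new-datacenter node (the log path is the default noted above):

  ```bash
  grep -i 'finished rebuild' /var/log/cassandra/system.log
  ```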
- Start the DataStax Agent on each node in the new datacenter if necessary.
- Start the OpsCenter Repair Service if necessary. See Turning the Repair Service on.