Add a datacenter to a cluster using a designated datacenter as a data source
Complete the following steps to add a datacenter to an existing cluster, using a designated datacenter as a data source. In this procedure, a new datacenter, DC4, is added to an existing cluster with existing datacenters DC1, DC2, and DC3.
Prerequisites
-
Complete the prerequisite tasks outlined in Initialize a DataStax Enterprise (DSE) cluster to prepare the environment.
-
This procedure requires an existing datacenter.
Avoid special characters when naming a datacenter; prohibited characters in a datacenter name cause server errors. Ensure that the name is no more than 48 characters long and uses only lowercase alphanumeric characters, with no special characters or spaces.
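The naming rule above can be checked before you configure anything. The following is a minimal sketch; the helper name dc_name_ok is illustrative, not part of DSE:

```shell
# Sketch: validate a proposed datacenter name.
# Rule: 1-48 characters, lowercase alphanumeric only.
dc_name_ok() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9]{1,48}$'
}

dc_name_ok "dc4" && echo "dc4: valid"
dc_name_ok "DC 4!" || echo "DC 4!: invalid"
```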
Procedure
-
If your existing datacenters use the SimpleStrategy replication strategy, change it to the NetworkTopologyStrategy replication strategy:
-
Use ALTER KEYSPACE to change the keyspace replication strategy to NetworkTopologyStrategy for the following keyspaces:
-
DSE security: system_auth, dse_security
-
DSE performance: dse_perf
-
DSE analytics: dse_leases, dsefs
-
System resources: system_traces, system_distributed
-
OpsCenter keyspace (if installed)
-
All keyspaces created by users
For example:
ALTER KEYSPACE keyspace_name WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'DC1' : 3};
-
-
Use DESCRIBE SCHEMA to check the replication strategy of keyspaces in the cluster. Ensure that any existing keyspaces use the NetworkTopologyStrategy replication strategy.
DESCRIBE SCHEMA;
Result
CREATE KEYSPACE dse_perf WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '3'} AND durable_writes = true;
...
CREATE KEYSPACE dse_leases WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '3'} AND durable_writes = true;
...
CREATE KEYSPACE dsefs WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '3'} AND durable_writes = true;
...
CREATE KEYSPACE dse_security WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '3'} AND durable_writes = true;
-
-
Stop the OpsCenter Repair Service if it is running in the cluster.
-
Install DSE on each node in the new datacenter.
Don’t start the service or restart the node.
Use the same version of DataStax Enterprise (DSE) on all nodes in the cluster.
-
Configure properties in cassandra.yaml on each new node, following the configuration of the other nodes in the cluster.
If you used Lifecycle Manager to provision the nodes, configuration is performed automatically.
For manual configuration, use the yaml_diff tool to review and make appropriate changes to the cassandra.yaml and dse.yaml configuration files.
-
Configure node properties:
-
-seeds: The internal IP address of each seed node. Include at least one seed node from each datacenter. DataStax recommends more than one seed node per datacenter, in more than one rack.
Three is the most common number of seed nodes per datacenter. Do not make all nodes seed nodes.
-
auto_bootstrap: This setting has been removed from the default configuration but, if present, should be set to true.
-
cluster_name: On the new datacenter nodes, set the cluster_name key in the cassandra.yaml configuration file to the existing cluster's cluster_name. Without this setting, the new datacenter nodes cannot join the existing cluster.
-
listen_address: Typically, you can leave this empty (not set). If not set, DSE asks the system for the local address, which is associated with its host name. In some cases, DSE doesn't produce the correct address, which requires specifying the listen_address.
-
endpoint_snitch: Provide the snitch configuration.
Don't use the default DseSimpleSnitch. The DseSimpleSnitch is used only for single-datacenter deployments (or single-zone deployments in public clouds), and it doesn't recognize datacenter or rack information.
For the GossipingPropertyFileSnitch, Amazon EC2 single-region snitch, Amazon EC2 multi-region snitch, and Google Cloud Platform snitch, configure the datacenter and rack information in the cassandra-rackdc.properties file. For the PropertyFileSnitch, configure the datacenter and rack information in the cassandra-topology.properties file.
-
If using a cassandra.yaml or dse.yaml file from a previous version, check the upgrade guide for your previous and current versions for removed settings.
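The node properties above can be sketched as a cassandra.yaml fragment. This is illustrative only: the cluster name and seed IP addresses are example values and must match your own cluster, and the snitch choice is an assumption.

```yaml
# Illustrative cassandra.yaml fragment for a new DC4 node.
# cluster_name and seed IPs are example values; use your own.
cluster_name: 'MyExistingCluster'       # must match the existing cluster
listen_address:                          # typically left unset
endpoint_snitch: GossipingPropertyFileSnitch   # assumed snitch; not DseSimpleSnitch
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      # at least one seed per datacenter, including the new one
      - seeds: "10.200.175.11,10.200.175.113,10.200.175.111,10.200.175.114"
```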
-
-
Configure node architecture. All nodes in the datacenter must use the same type.
-
Virtual node (vnode) allocation algorithm settings
-
Single-token architecture settings
-
Set num_tokens to 8 (recommended).
-
Set allocate_tokens_for_local_replication_factor to the target replication factor for keyspaces in the new datacenter. If the keyspace replication factor varies, alternate the settings to use all the replication factors.
-
Comment out the initial_token property.
See Virtual node (vnode) configuration for more details.
-
Generate the initial token for each node, and then set that value in the initial_token property. See Adding or replacing single-token nodes for more information.
-
Comment out num_tokens and allocate_tokens_for_local_replication_factor.
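The two mutually exclusive token configurations above can be sketched as cassandra.yaml fragments. The values shown are examples; match the replication factor to your keyspaces:

```yaml
# Option A: vnode allocation algorithm (recommended)
num_tokens: 8
allocate_tokens_for_local_replication_factor: 3   # target RF in the new DC
# initial_token:                                  # leave commented out

# Option B: single-token architecture
# num_tokens:                                     # comment out
# allocate_tokens_for_local_replication_factor:   # comment out
# initial_token: -9223372036854775808             # generated per node
```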
-
-
-
Depending on your snitch type, edit the appropriate configuration file to assign datacenter and rack names to the IP addresses of each node, and assign a default datacenter name and rack name for unknown nodes.
# Transactional Node IP=Datacenter:Rack
110.82.155.0=DC_Transactional:RAC1
110.82.155.1=DC_Transactional:RAC1
110.54.125.1=DC_Transactional:RAC2
110.54.125.2=DC_Analytics:RAC1
110.54.155.2=DC_Analytics:RAC2
110.82.155.3=DC_Analytics:RAC1
110.54.125.3=DC_Search:RAC1
110.82.155.4=DC_Search:RAC2
# default for unknown nodes
default=DC1:RAC1
For the PropertyFileSnitch, these values are set in cassandra-topology.properties. For the GossipingPropertyFileSnitch, they are set in cassandra-rackdc.properties.
-
The GossipingPropertyFileSnitch always loads cassandra-topology.properties when the file is present. Remove the file from each node in any new datacenter and from any datacenter migrated from the PropertyFileSnitch.
-
After making any changes in the configuration files, you must restart the node for the changes to take effect.
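For the GossipingPropertyFileSnitch, the per-node assignment described above lives in cassandra-rackdc.properties on each node rather than in a cluster-wide file. A minimal sketch for one new node (names are examples):

```properties
# cassandra-rackdc.properties on a new DC4 node (GossipingPropertyFileSnitch)
# dc and rack names are examples; use your own naming scheme.
dc=DC4
rack=RAC1
```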
-
-
Make the following changes in the existing datacenters:
-
On nodes in the existing datacenters, update the -seeds property in cassandra.yaml to include the seed nodes in the new datacenter.
-
Add the new datacenter definition to the properties file for the type of snitch used in the cluster (for example, cassandra-rackdc.properties or cassandra-topology.properties). If changing snitches, see Switching snitches.
-
-
After you have installed and configured DataStax Enterprise (DSE) on all nodes, start the nodes sequentially, beginning with the seed nodes.
After starting each node, allow a delay of at least the duration of ring_delay_ms before starting the next node to prevent cluster imbalance.
Before starting a node, ensure that the previous node is up and running by verifying that nodetool status returns UN (Up and Normal). Failing to do so can result in cluster imbalance that cannot be fixed later.
Cluster imbalance can be visualized by running nodetool status KEYSPACE_NAME and checking the Owns column in the response. A properly configured cluster reports ownership values similar to each other, within 1 percent, for keyspaces where the replication factor per datacenter is equal to allocate_tokens_for_local_replication_factor.
-
Package installations: Start DataStax Enterprise as a service
-
Tarball installations: Start DataStax Enterprise as a standalone process
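The UN check described above can be scripted. The following is a sketch that parses nodetool status text, not an official tool; the helper name all_nodes_un is illustrative:

```shell
# Sketch: succeed only when every node line in 'nodetool status' output
# reports UN (Up/Normal). Reads the status text on stdin.
all_nodes_un() {
  # Node lines start with a two-letter status/state code such as UN, DN, UJ.
  # Succeed when no node line other than UN is present.
  ! grep -E '^[UD][NLJM] ' | grep -qv '^UN '
}

# Typical use on a node with nodetool on the PATH:
#   nodetool status | all_nodes_un && echo "safe to start the next node"
```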
-
-
Install and configure DataStax Agents on each node in the new datacenter if necessary.
-
Run nodetool status to ensure that the new datacenter is up and running:
nodetool status
Result
Datacenter: DC1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address         Load        Owns  Host ID                               Token                 Rack
UN  10.200.175.11   474.23 KiB  ?     7297d21e-a04e-4bb1-91d9-8149b03fb60a  -9223372036854775808  rack1
Datacenter: DC2
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address         Load        Owns  Host ID                               Token                 Rack
UN  10.200.175.113  518.36 KiB  ?     2ff7d46c-f084-477e-aa53-0f4791c71dbc  -9223372036854775798  rack1
Datacenter: DC3
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address         Load        Owns  Host ID                               Token                 Rack
UN  10.200.175.111  961.56 KiB  ?     ac43e602-ef09-4d0d-a455-3311f444198c  -9223372036854775788  rack1
Datacenter: DC4
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address         Load        Owns  Host ID                               Token                 Rack
UN  10.200.175.114  361.56 KiB  ?     ac43e602-ef09-4d0d-a455-3322f444198c  -9223372036854775688  rack1
-
Disable nodesync on all nodes in the new datacenter to prevent repair work. Also stop the OpsCenter Repair Service to prevent repair work for tables that do not have nodesync enabled.
nodetool nodesyncservice disable
-
After all nodes are running in the cluster and the client applications are datacenter-aware, use cqlsh to alter the keyspaces and set the desired replication factor in the new datacenter:
ALTER KEYSPACE keyspace_name WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'ExistingDC1' : 3, 'NewDC2' : 2};
-
Run nodetool rebuild on each node in the new datacenter, specifying the source datacenter from which to stream the data.
In the following examples, replace <source_datacenter_name> with the name of the datacenter for which you want to rebuild data.
nodetool rebuild -dc <source_datacenter_name>
To specify a rack name, use a colon to separate the datacenter and rack names. For example:
nodetool rebuild -dc DC1:RACK1
To run a nodetool rebuild command and keep it running even after exiting the shell or terminal window, use the nohup option:
nohup nodetool rebuild -dc DC1:RACK1
To run a nodetool rebuild command in the background and log the results, use the following syntax:
nohup nodetool rebuild -dc <source_datacenter_name> > rebuild.log 2>&1 &
The argument > rebuild.log 2>&1 & redirects the output of the command to a log file named rebuild.log, ensures that both standard output and standard error are redirected to the same log file (2>&1), and runs the command in the background (the final &).
The following commands replicate data from an existing rack in datacenter DC1 to the corresponding rack in the new datacenter DC2 on each DC2 node. This spreads the streaming overhead of the rebuild across more nodes. A rebuild per rack can increase the speed of the rebuild, but possibly at the cost of an increase in user latency. To decrease user latency, concentrate the streaming overhead of the rebuild on a smaller number of nodes. Rebuild each rack in the new datacenter from the same rack in the existing datacenter. The rack specifications correspond with the rack specifications in DC1.
-
On DC2:RACK1 nodes, run:
nodetool rebuild -dc DC1:RACK1
-
On DC2:RACK2 nodes, run:
nodetool rebuild -dc DC1:RACK2
-
On DC2:RACK3 nodes, run:
nodetool rebuild -dc DC1:RACK3
Rebuilds can be safely run in parallel, but this has potential performance tradeoffs. The nodes in the source datacenter are streaming data, and therefore potentially impacting application performance involving that datacenter’s data. Run tests within the environment, and adjust various levels of parallelism and streaming throttling to achieve the optimal balance of speed and performance.
If the load on the source datacenter is your primary concern, run nodetool rebuild -dc on only one node at a time. This reduces the load on the source datacenter at the cost of slowing the rebuild process.
If the speed of the rebuild is your primary concern, you can run the command on multiple nodes simultaneously. This requires that the cluster have the capacity to handle the extra I/O and network pressure.
-
-
Monitor the rebuild progress for the new datacenter using nodetool netstats and examining the size of each node.
The nodetool rebuild command issues a JMX call to the DSE node and waits for the rebuild to finish before returning to the command line. Once the JMX call is invoked, the rebuild process continues to run on the server even if the nodetool command stops. Typically there is not significant output from the nodetool rebuild command. Instead, monitor rebuild progress using nodetool netstats, as well as examining the data size of each node.
The data load shown in nodetool status is updated only after a given source node is done streaming, and can appear to lag behind the bytes reported on disk (for example, by du). Should any streaming errors occur, ERROR messages are logged to system.log and the rebuild stops. If a temporary failure occurs, you can run nodetool rebuild again; ranges that were already streamed successfully are skipped.
-
Adjust stream throttling on the source datacenter as required to balance network traffic. See nodetool setinterdcstreamthroughput.
This setting is applied to the source nodes and throttles the bandwidth used for streaming. Additional simultaneous rebuilds spread the allocated bandwidth across more operations and slow all simultaneous rebuilds.
-
To confirm that all rebuilds are successful, search for finished rebuild in the system.log of each node in the new datacenter.
In rare cases, the communication between two streaming nodes may hang, leaving the rebuild operation running but with no data streaming. Monitor streaming progress using nodetool netstats. If the streams are not making any progress, restart the node where nodetool rebuild was executed, and run nodetool rebuild again using the original parameters specified.
-
If you modified the inter-datacenter streaming throughput during the rebuild process, then return it to the original setting.
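The log check for rebuild completion can be scripted. This is a sketch: the helper name is illustrative, and the system.log path varies by installation (/var/log/cassandra/system.log is typical for package installs).

```shell
# Sketch: confirm a node's rebuild completed by searching its system.log.
rebuild_finished() {
  grep -q 'finished rebuild' "$1"
}

# Typical use (log path is installation-dependent):
#   rebuild_finished /var/log/cassandra/system.log && echo "rebuild done"
```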
-
Re-enable nodesync on all nodes in the new datacenter. Also re-enable the OpsCenter Repair Service, if you stopped it.
nodetool nodesyncservice enable
-
Start the DataStax Agent on each node in the new datacenter, if necessary.
-
Start the OpsCenter Repair Service, if necessary.