Adding a datacenter to a cluster
Complete the following steps to add a datacenter to an existing cluster.
cassandra-topology.properties
The location of the cassandra-topology.properties file depends on the type of installation:
- Package installations: /etc/dse/cassandra/cassandra-topology.properties
- Tarball installations: installation_location/resources/cassandra/conf/cassandra-topology.properties
dse.yaml
The location of the dse.yaml file depends on the type of installation:
- Package installations: /etc/dse/dse.yaml
- Tarball installations: installation_location/resources/dse/conf/dse.yaml
cassandra-rackdc.properties
The location of the cassandra-rackdc.properties file depends on the type of installation:
- Package installations: /etc/dse/cassandra/cassandra-rackdc.properties
- Tarball installations: installation_location/resources/cassandra/conf/cassandra-rackdc.properties
cassandra.yaml
The location of the cassandra.yaml file depends on the type of installation:
- Package installations: /etc/dse/cassandra/cassandra.yaml
- Tarball installations: installation_location/resources/cassandra/conf/cassandra.yaml
Prerequisites
Important: Complete the prerequisite tasks outlined in Initializing a DataStax Enterprise cluster to prepare the environment.
If the new datacenter uses existing nodes from another datacenter or cluster, complete the following steps to ensure that old data does not interfere with the new cluster:
- If the nodes are behind a firewall, open the required ports for internal and external communication.
- Decommission each node that will be added to the new datacenter.
- Clear the data from DataStax Enterprise (DSE) to completely remove application directories.
- Install DSE on each node.
Procedure
- Complete the following steps to prevent client applications from prematurely connecting to the new datacenter, and to ensure that the consistency level for reads or writes does not query the new datacenter:
Warning: If client applications, including DSE Search and DSE Analytics, are not properly configured, they might connect to the new datacenter before it is online. Incorrect configuration results in connection exceptions, timeouts, or inconsistent data.
  - Configure client applications to use the DCAwareRoundRobinPolicy.
  - Direct clients to an existing datacenter. Otherwise, clients might try to access the new datacenter, which might not have any data.
  - If using the QUORUM consistency level, change to LOCAL_QUORUM.
  - If using the ONE consistency level, set to LOCAL_ONE.
See the programming instructions for your driver.
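As a back-of-the-envelope check, the quorum arithmetic shows why LOCAL_QUORUM keeps requests out of the new datacenter. A minimal sketch in Python; the replication factors below are illustrative assumptions, not values from this procedure:

```python
# Illustrative sketch: quorum sizes for QUORUM vs LOCAL_QUORUM.
# The datacenter names and replication factors are example assumptions.
replication = {"ExistingDC1": 3, "NewDC2": 2}

def quorum(replicas):
    # A quorum is a majority of the counted replicas.
    return replicas // 2 + 1

# QUORUM counts replicas across every datacenter, so replicas in the new
# datacenter can be required to acknowledge a request:
print(quorum(sum(replication.values())))   # 3 of 5 replicas, cluster-wide

# LOCAL_QUORUM counts only the coordinator's local datacenter:
print(quorum(replication["ExistingDC1"]))  # 2 of 3 replicas, local only
```

With LOCAL_QUORUM, requests coordinated in the existing datacenter are satisfied entirely by its own replicas, so the new datacenter is never queried before it holds data.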
- Configure every keyspace that uses SimpleStrategy to use the NetworkTopologyStrategy replication strategy, including (but not restricted to) the following keyspaces. This step is required for any keyspace that previously used SimpleStrategy.
- In the new datacenter, install DSE on each new node. Do not start the service or restart the node.
Important: Use the same version of DSE on all nodes in the cluster.
- Configure properties in cassandra.yaml on each new node, following the configuration of the other nodes in the cluster.
Tip: Use the yaml_diff tool to review and make appropriate changes to the cassandra.yaml and dse.yaml configuration files.
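The yaml_diff tool ships with DSE; as a rough stand-in for the same review, a unified diff of two cassandra.yaml copies can be produced with the Python standard library. The file contents below are illustrative assumptions:

```python
import difflib

# Illustrative contents; in practice, read the real files from each node.
reference = """\
cluster_name: 'MyCluster'
num_tokens: 256
endpoint_snitch: GossipingPropertyFileSnitch
""".splitlines()

new_node = """\
cluster_name: 'MyCluster'
num_tokens: 64
endpoint_snitch: GossipingPropertyFileSnitch
""".splitlines()

# Lines starting with '-' or '+' are settings that differ between nodes.
for line in difflib.unified_diff(reference, new_node,
                                 fromfile="existing/cassandra.yaml",
                                 tofile="new/cassandra.yaml",
                                 lineterm=""):
    print(line)
```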
- In the cassandra-rackdc.properties (GossipingPropertyFileSnitch) or cassandra-topology.properties (PropertyFileSnitch) file, assign datacenter and rack names to the IP addresses of each node, and assign a default datacenter name and rack name for unknown nodes.
Note: Migration information: The GossipingPropertyFileSnitch always loads cassandra-topology.properties when the file is present. Remove the file from each node on any new cluster, or any cluster migrated from the PropertyFileSnitch.
# Transactional Node IP=Datacenter:Rack
110.82.155.0=DC_Transactional:RAC1
110.82.155.1=DC_Transactional:RAC1
110.54.125.1=DC_Transactional:RAC2
110.54.125.2=DC_Analytics:RAC1
110.54.155.2=DC_Analytics:RAC2
110.82.155.3=DC_Analytics:RAC1
110.54.125.3=DC_Search:RAC1
110.82.155.4=DC_Search:RAC2

# default for unknown nodes
default=DC1:RAC1
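To sanity-check the file before restarting nodes, the IP=Datacenter:Rack lines can be parsed into a node-to-placement map. A minimal sketch, assuming the same entry format as the example above:

```python
# Hedged sketch (not a DSE tool): parse cassandra-topology.properties-style
# lines into a node -> (datacenter, rack) mapping.
lines = """\
# Transactional Node IP=Datacenter:Rack
110.82.155.0=DC_Transactional:RAC1
110.54.125.2=DC_Analytics:RAC1
default=DC1:RAC1
""".splitlines()

topology = {}
for line in lines:
    line = line.strip()
    if not line or line.startswith("#"):
        continue  # skip blank lines and comments
    node, placement = line.split("=", 1)
    dc, rack = placement.split(":", 1)
    topology[node] = (dc, rack)

print(topology["110.82.155.0"])  # ('DC_Transactional', 'RAC1')
print(topology["default"])       # ('DC1', 'RAC1')
```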
Note: After making any changes in the configuration files, you must restart the node for the changes to take effect.
- Make the following changes in the existing datacenters:
  - On nodes in the existing datacenters, update the -seeds property in cassandra.yaml to include the seed nodes in the new datacenter.
  - Add the new datacenter definition to the cassandra.yaml properties file for the type of snitch used in the cluster. If changing snitches, see Switching snitches.
- After you have installed and configured DataStax Enterprise on all nodes, start the seed nodes one at a time, and then start the rest of the nodes:
- Package installations: Starting DataStax Enterprise as a service
- Tarball installations: Starting DataStax Enterprise as a stand-alone process
- Rotate starting DSE through the racks until all the nodes are up.
- After all nodes are running in the cluster and the client applications are datacenter aware, use cqlsh to alter the keyspaces to add the desired replication in the new datacenter.
ALTER KEYSPACE keyspace_name WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'ExistingDC1' : 3, 'NewDC2' : 2};
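When several keyspaces need the same change, the ALTER KEYSPACE statements can be generated rather than typed by hand. A minimal sketch; the keyspace names and replication factors are hypothetical:

```python
# Hedged sketch: build ALTER KEYSPACE statements for adding the new
# datacenter's replication. Names and factors below are assumptions.
keyspaces = ["my_app_ks", "another_ks"]        # hypothetical keyspaces
replication = {"ExistingDC1": 3, "NewDC2": 2}  # per-DC replication factors

def alter_statement(keyspace, replication):
    # Render the replication map as CQL options, preserving insertion order.
    opts = ", ".join(f"'{dc}': {rf}" for dc, rf in replication.items())
    return (f"ALTER KEYSPACE {keyspace} WITH REPLICATION = "
            f"{{'class': 'NetworkTopologyStrategy', {opts}}};")

for ks in keyspaces:
    print(alter_statement(ks, replication))
```

Each printed statement can then be run in cqlsh, as in the example above.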
- Run nodetool rebuild on each node in the new datacenter, specifying the datacenter to rebuild from. This step replicates the data to the new datacenter in the cluster.
nodetool rebuild -- datacenter_name
CAUTION: You must specify an existing datacenter on the command line, or the new nodes will appear to rebuild successfully but might not contain all anticipated data. Requests to the new datacenter with LOCAL_ONE or ONE consistency levels can fail if the existing datacenters are not completely in sync.
You can run nodetool rebuild on one or more nodes at the same time:
  - Run on one node at a time to reduce the impact on the existing cluster.
  - Alternatively, run the command on multiple nodes simultaneously when the cluster can handle the extra I/O and network pressure.
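For a serial rollout, the per-node rebuild commands can be generated up front. A minimal sketch; the host list and source datacenter name are illustrative assumptions:

```python
# Hedged sketch: build the `nodetool rebuild` command for each node in the
# new datacenter. Hosts and datacenter name below are hypothetical.
NEW_DC_NODES = ["110.54.125.3", "110.82.155.4"]  # nodes in the new DC
SOURCE_DC = "DC_Transactional"                   # existing DC to stream from

def rebuild_commands(nodes, source_dc):
    # `-h <host>` targets each node; `--` separates nodetool options from
    # the positional source-datacenter argument.
    return [f"nodetool -h {host} rebuild -- {source_dc}" for host in nodes]

for cmd in rebuild_commands(NEW_DC_NODES, SOURCE_DC):
    print(cmd)  # run one at a time to limit streaming load
```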
- Check that your cluster is up and running:
dsetool status
Note: If DSE has problems starting, see Starting DSE troubleshooting and other articles in the Support Knowledge Center.
Results
DC: Cassandra Workload: Cassandra Graph: no
==============================================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 110.82.155.0 21.33 KB 256 33.3% a9fa31c7-f3c0-... RAC1
UN 110.82.155.1 21.33 KB 256 33.3% f5bb416c-db51-... RAC1
UN 110.54.125.1 21.33 KB 256 16.7% b836748f-c94f-... RAC1
DC: Analytics
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Owns Host ID Tokens Rack
UN 110.54.125.2 28.44 KB 13.0% e2451cdf-f070- ... -922337.... RAC1
UN 110.82.155.2 44.47 KB 16.7% f9fa427c-a2c5- ... 30745512... RAC1
UN 110.82.155.3 54.33 KB 23.6% b9fc31c7-3bc0- ... 45674488... RAC1
DC: Solr
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Owns Host ID Tokens Rack
UN 110.54.125.3 15.44 KB 50.2% e2451cdf-f070- ... 9243578.... RAC1
UN 110.82.155.4 18.78 KB 49.8% e2451cdf-f070- ... 10000 RAC1