Initializing multiple datacenters per workload type
In this scenario, a mixed workload cluster has more than one datacenter for each type of workload. For example, the following ten-node cluster spans five datacenters, whereas a cluster with a single datacenter per workload type has only one datacenter for each node type.
-
DC1 = 2 DSE Analytics nodes
-
DC2 = 2 Transactional nodes
-
DC3 = 2 DSE Search nodes
-
DC4 = 2 DSE Analytics nodes
-
DC5 = 2 Transactional nodes
The ten-node cluster spans two racks across five datacenters.
Applications in each datacenter will use a default consistency level of LOCAL_QUORUM.
One node per rack will serve as a seed node.
Node | IP address | Type | Seed | Rack
---|---|---|---|---
node0 | 110.82.155.0 | Transactional | ✓ | RAC1
node1 | 110.82.155.1 | Transactional | | RAC1
node2 | 110.54.125.1 | Transactional | | RAC2
node3 | 110.55.120.1 | Transactional | | RAC1
node4 | 110.54.125.2 | Analytics | | RAC1
node5 | 110.54.155.2 | Analytics | ✓ | RAC2
node6 | 110.82.155.3 | Analytics | | RAC1
node7 | 110.55.120.2 | Analytics | | RAC1
node8 | 110.54.125.3 | Search | | RAC1
node9 | 110.82.155.4 | Search | | RAC2
Prerequisites
Complete the prerequisite tasks outlined in Initializing a DataStax Enterprise cluster to prepare the environment.
If the new datacenter uses existing nodes from another datacenter or cluster, complete the following steps to ensure that old data does not interfere with the new cluster; a command sketch follows these steps:
-
If the nodes are behind a firewall, open the required ports for internal/external communication.
-
Decommission each node that will be added to the new datacenter.
-
Clear the data from DataStax Enterprise (DSE) to completely remove application directories.
-
Install DSE on each node. Do not start the service.
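The following shell sketch shows how the decommission and data-clearing steps might look on one reused node. The service name and data directory paths are typical package-install defaults and are assumptions; adjust them for your environment.
# Run on each node that is being moved into the new datacenter.
nodetool decommission                    # remove the node from its old datacenter
sudo service dse stop                    # stop DSE before clearing old data
# Paths below are package-install defaults; adjust for your installation.
sudo rm -rf /var/lib/cassandra/data/* /var/lib/cassandra/commitlog/* /var/lib/cassandra/saved_caches/* /var/lib/cassandra/hints/*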
Procedure
-
Complete the following steps to prevent client applications from prematurely connecting to the new datacenter, and to ensure that the consistency level for reads or writes does not query the new datacenter:
If client applications, including DSE Search and DSE Analytics, are not properly configured, they might connect to the new datacenter before it is online. Incorrect configuration results in connection exceptions, timeouts, and/or inconsistent data.
-
Configure client applications to use the DCAwareRoundRobinPolicy.
-
Direct clients to an existing datacenter. Otherwise, clients might try to access the new datacenter, which might not have any data.
-
If using the QUORUM consistency level, change to LOCAL_QUORUM.
-
If using the ONE consistency level, set to LOCAL_ONE.
See the programming instructions for your driver.
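As a quick illustration, you can issue a test read at LOCAL_QUORUM from cqlsh against an existing node. The address below is node0 from the table above, and the query is only an example.
# Run a test read at LOCAL_QUORUM against an existing transactional node.
cqlsh 110.82.155.0 <<'EOF'
CONSISTENCY LOCAL_QUORUM;
SELECT release_version FROM system.local;
EOF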
-
-
In existing datacenters, if the SimpleStrategy replication strategy is in use, change it to the NetworkTopologyStrategy replication strategy.
-
Use ALTER KEYSPACE to change the keyspace replication strategy to NetworkTopologyStrategy for the following keyspaces; an example appears after the list.
ALTER KEYSPACE keyspace_name WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'DC1' : 3};
-
DSE security: system_auth, dse_security
-
DSE performance: dse_perf
-
DSE analytics: dse_leases, dsefs
-
System resources: system_traces, system_distributed
-
OpsCenter (if installed)
-
All keyspaces created by users
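As an illustration only, the commands for a few of these keyspaces might look like the following sketch, assuming an existing datacenter named DC1 with a replication factor of 3. Substitute your own keyspace list, datacenter names, and replication factors.
# Hypothetical example; adjust to your topology before running.
cqlsh 110.82.155.0 <<'EOF'
ALTER KEYSPACE system_auth WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'DC1': 3};
ALTER KEYSPACE dse_security WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'DC1': 3};
ALTER KEYSPACE dse_perf WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'DC1': 3};
EOF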
-
-
Use DESCRIBE SCHEMA to check the replication strategy of keyspaces in the cluster. Ensure that any existing keyspaces use the NetworkTopologyStrategy replication strategy.
DESCRIBE SCHEMA;
CREATE KEYSPACE dse_perf WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '3'} AND durable_writes = true;
...
CREATE KEYSPACE dse_leases WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '3'} AND durable_writes = true;
...
CREATE KEYSPACE dsefs WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '3'} AND durable_writes = true;
...
CREATE KEYSPACE dse_security WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '3'} AND durable_writes = true;
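To quickly spot any keyspace still using SimpleStrategy, you can filter the schema dump from the shell. The address is one of the existing nodes from the table above.
# List keyspaces whose replication class still references SimpleStrategy.
cqlsh 110.82.155.0 -e "DESCRIBE SCHEMA" | grep -i "SimpleStrategy"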
-
-
Install DSE on each node in the new datacenter. Do not start the service or restart the node.
Use the same version of DSE on all nodes in the cluster.
-
Configure properties in cassandra.yaml on each new node, following the configuration of the other nodes in the cluster.
If you used Lifecycle Manager to provision the nodes, configuration is performed automatically.
Use the yaml_diff tool to review and make appropriate changes to the cassandra.yaml and dse.yaml configuration files.
-
Configure node properties; a verification sketch follows this list:
-
-seeds: <internal_IP_address> of each seed node
Include at least one seed node from each datacenter. DataStax recommends more than one seed node per datacenter, in more than one rack. Three is the most common number of seed nodes per datacenter. Do not make all nodes seed nodes.
-
auto_bootstrap: <true>
This setting has been removed from the default configuration, but, if present, should be set to true.
-
listen_address: <empty>
If not set, DSE asks the system for the local address, which is associated with its host name. In some cases, DSE does not produce the correct address, which requires specifying the listen_address.
-
endpoint_snitch: <snitch>
See endpoint_snitch and snitches.
Do not use the DseSimpleSnitch. The DseSimpleSnitch (default) is used only for single-datacenter deployments (or single-zone deployments in public clouds), and does not recognize datacenter or rack information.
Snitch | Configuration file
---|---
GossipingPropertyFileSnitch | cassandra-rackdc.properties
PropertyFileSnitch | cassandra-topology.properties
-
If using a cassandra.yaml or dse.yaml file from a previous version, check the Upgrade Guide for removed settings.
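Before starting a new node, a quick grep can confirm these properties. The cassandra.yaml path below is a typical package-install location and is an assumption.
# Review the key node properties on a new node (path is an assumption).
grep -nE "seeds:|auto_bootstrap|^listen_address|^endpoint_snitch" /etc/dse/cassandra/cassandra.yaml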
-
-
Configure node architecture (all nodes in the datacenter must use the same type); a sketch of applying the vnode settings follows these options:
Virtual node (vnode) allocation algorithm settings
-
Set num_tokens to 8 (recommended).
-
Set allocate_tokens_for_local_replication_factor to the target replication factor for keyspaces in the new datacenter. If the keyspace RF varies, alternate the settings to use all the replication factors.
-
Comment out the initial_token property.
See Virtual node (vnode) configuration for more details.
Single-token architecture settings
-
Generate the initial token for each node and set this value for the initial_token property.
See Adding or replacing single-token nodes for more information.
-
Comment out both num_tokens and allocate_tokens_for_local_replication_factor.
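For the vnode case, one way to apply these settings is sketched below, assuming the same package-install path and a target replication factor of 3. Adjust the values, or edit the file by hand, to match your environment.
# Set vnode options in cassandra.yaml (assumed path and RF), then comment out initial_token.
sudo sed -i -E 's/^#? *num_tokens:.*/num_tokens: 8/' /etc/dse/cassandra/cassandra.yaml
sudo sed -i -E 's/^#? *allocate_tokens_for_local_replication_factor:.*/allocate_tokens_for_local_replication_factor: 3/' /etc/dse/cassandra/cassandra.yaml
sudo sed -i -E 's/^(initial_token:.*)/# \1/' /etc/dse/cassandra/cassandra.yaml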
-
-
-
In the cassandra-rackdc.properties (GossipingPropertyFileSnitch) or cassandra-topology.properties (PropertyFileSnitch) file, assign datacenter and rack names to the IP addresses of each node, and assign a default datacenter name and rack name for unknown nodes.
Migration information: The GossipingPropertyFileSnitch always loads cassandra-topology.properties when the file is present. Remove the file from each node on any new datacenter, or any datacenter migrated from the PropertyFileSnitch.
# Transactional Node IP=Datacenter:Rack
110.82.155.0=DC_Transactional:RAC1
110.82.155.1=DC_Transactional:RAC1
110.54.125.1=DC_Transactional:RAC2
110.54.125.2=DC_Analytics:RAC1
110.54.155.2=DC_Analytics:RAC2
110.82.155.3=DC_Analytics:RAC1
110.54.125.3=DC_Search:RAC1
110.82.155.4=DC_Search:RAC2
# default for unknown nodes
default=DC1:RAC1
After making any changes in the configuration files, you must restart the node for the changes to take effect.
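With the GossipingPropertyFileSnitch, each node instead defines only its own datacenter and rack in cassandra-rackdc.properties. A minimal sketch for one Analytics node in RAC1 follows; the file path assumes a package install.
# Run on an Analytics node in RAC1 (assumed path for package installs).
sudo tee /etc/dse/cassandra/cassandra-rackdc.properties <<'EOF'
dc=DC_Analytics
rack=RAC1
EOF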
-
Make the following changes in the existing datacenters.
-
On nodes in the existing datacenters, update the -seeds property in cassandra.yaml to include the seed nodes in the new datacenter.
-
Add the new datacenter definition to the properties file for the type of snitch used in the cluster (cassandra-rackdc.properties or cassandra-topology.properties). If changing snitches, see Switching snitches.
-
-
After you have installed and configured DataStax Enterprise on all nodes, start the nodes sequentially, beginning with the seed nodes. After starting each node, allow a delay of at least the value specified in ring_delay_ms before starting the next node, to prevent cluster imbalance.
Before starting a node, ensure that the previous node is up and running by verifying that it has a nodetool status of UN. Failing to do so will result in cluster imbalance that cannot be fixed later. Cluster imbalance can be visualized by running nodetool status $keyspace and looking at the ownership column. A properly set up cluster reports ownership values similar to each other (within about 1%) for keyspaces where the RF per datacenter is equal to allocate_tokens_for_local_replication_factor. See allocate_tokens_for_local_replication_factor for more information.
To start DSE on each node, see one of the following (a start-and-verify sketch follows):
-
Package installations: Starting DataStax Enterprise as a service
-
Tarball installations: Starting DataStax Enterprise as a stand-alone process
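A minimal start-and-verify sketch for a package install is shown below. The service name is the package-install default, and `hostname -i` is assumed to return the node's listen address; adapt both for tarball installs.
# On each node, in sequence, starting with the seed nodes:
sudo service dse start
# Confirm this node reports Up/Normal (UN) before starting the next one.
nodetool status | grep "$(hostname -i)"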
-
-
Continue starting DSE, rotating through the racks, until all of the nodes are up.
-
After all nodes are running in the cluster and the client applications are datacenter aware, use cqlsh to alter the keyspaces to add the desired replication in the new datacenter.
ALTER KEYSPACE keyspace_name WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'ExistingDC1' : 3, 'NewDC2' : 2};
If client applications, including DSE Search and DSE Analytics, are not properly configured, they might connect to the new datacenter before it is online. Incorrect configuration results in connection exceptions, timeouts, and/or inconsistent data.
-
Run nodetool rebuild on each node in the new datacenter, specifying the datacenter to rebuild from. This step replicates the data to the new datacenter in the cluster.
nodetool rebuild -- <datacenter_name>
You must specify an existing datacenter in the command line, or the new nodes will appear to rebuild successfully, but might not contain all anticipated data.
Requests to the new datacenter with LOCAL_ONE or ONE consistency levels can fail if the existing datacenters are not completely in sync.
-
You can run nodetool rebuild on one or more nodes at the same time. Running on one node at a time reduces the impact on the existing cluster (a rebuild sketch follows these options).
-
Alternatively, run the command on multiple nodes simultaneously when the cluster can handle the extra I/O and network pressure.
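As an illustration, rebuilding two new Analytics nodes one at a time from an existing datacenter named DC_Transactional might look like the following. The addresses and datacenter name are taken from the examples above and should be replaced with your own.
# Hypothetical sketch: rebuild new nodes serially from an existing datacenter.
for host in 110.82.155.3 110.55.120.2; do
  ssh "$host" 'nodetool rebuild -- DC_Transactional'
done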
-
-
Check that the new cluster is up and running:
dsetool status
If DSE has problems starting, look for the Starting DSE troubleshooting article and other articles in the Support Knowledge Center.
-
Complete steps 3 through 11 to add the remaining datacenters to the cluster.
Results
The datacenters in the cluster are now replicating with each other.
DC: Cassandra Workload: Cassandra Graph: no
==============================================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 110.82.155.0 21.33 KB 256 50.2% a9fa31c7-f3c0-... RAC1
UN 110.82.155.1 21.33 KB 256 49.8% f5bb416c-db51-... RAC1
DC: Analytics
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Owns Host ID Tokens Rack
UN 110.54.125.2 28.44 KB 50.2% e2451cdf-f070- ... -922337.... RAC1
UN 110.82.155.2 44.47 KB 49.8% f9fa427c-a2c5- ... 30745512... RAC2
DC: Solr
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Owns Host ID Tokens Rack
UN 110.54.125.3 15.44 KB 50.2% e2451cdf-f070- ... 9243578.... RAC1
UN 110.82.155.4 18.78 KB 49.8% e2451cdf-f070- ... 10000 RAC2
DC: Cassandra2 Workload: Cassandra Graph: no
==============================================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 110.54.125.1 21.33 KB 256 16.7% b836748f-c94f-... RAC2
UN 110.55.120.1 21.33 KB 256 16.7% b354798g-c94f-... RAC2
DC: Analytics2
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Owns Host ID Tokens Rack
UN 110.82.155.3 54.33 KB 50.2% b9fc31c7-3bc0- ... 45674488... RAC1
UN 110.55.120.2 54.33 KB 49.8% b8gd45e4-3bc0- ... 45674488... RAC2