In this deployment scenario, a mixed workload cluster has more than one data center for
each type of node. For example, if the cluster has 4 Hadoop nodes, 4 Cassandra nodes, and
2 Solr nodes, the cluster could have 5 data centers: 2 data centers for the Hadoop
nodes, 2 data centers for the Cassandra nodes, and 1 data center for the Solr nodes. By
contrast, a single data-center cluster has only one data center for each type of node.
In Cassandra, a data center can be a physical data center or a
virtual data center. Different workloads should use separate data centers,
either physical or virtual. Using separate data centers prevents Cassandra
transactions from being impacted by other workloads and keeps requests close to
each other for lower latency. Replication is set per data center, and depending on
the replication factor, data can be written to multiple data centers. However,
a data center should never span physical locations. Uses for multiple data center
deployments include:
- Isolating replicas from external infrastructure failures, such as networking
between data centers and power outages.
- Distributing data replication across multiple, geographically dispersed
nodes.
- Replicating between different locations in a physical data center.
- Replicating between public cloud providers and on-premise managed data centers.
- Using separate physical or virtual data centers to prevent real-time
analytics jobs from being slowed down by other analytics jobs on live data.
For more information about replication, see Data Replication.
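To illustrate per-data-center replication, a keyspace for a two data center cluster might be defined as follows (a sketch; the keyspace name and replication factors are hypothetical, and the data center names must match those assigned by the snitch):
CREATE KEYSPACE demo_keyspace WITH replication =
  {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};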
Prerequisites
To configure a multi-node cluster with multiple data centers:
- A good understanding of how Cassandra works. Be sure to read at least Understanding the architecture, Data Replication, and Cassandra's rack feature.
- DataStax Enterprise is installed on each node.
- Choose a name for the cluster.
- For a mixed-workload cluster, determine the purpose of
each node.
- Determine the snitch and replication strategy. The GossipingPropertyFileSnitch and NetworkTopologyStrategy are
recommended for production environments.
- Get the IP address of each node.
- Determine which nodes are seed nodes. Do not make all
nodes seed nodes. See Internode communications
(gossip).
- Develop a naming convention for each data center and rack, for example: DC1, DC2
or 100, 200 and RAC1, RAC2 or R101, R102.
- Other possible configuration settings are described in
the cassandra.yaml configuration file and
property files such as cassandra-rackdc.properties.
- Set virtual nodes correctly for the type of data
center. DataStax recommends using virtual nodes only on data centers running
Cassandra real-time workloads. See Virtual nodes.
Procedure
This configuration example describes installing a 6 node cluster spanning 2 data
centers. The default consistency level is QUORUM.
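To check or change the consistency level for a cqlsh session, you can use the CONSISTENCY command (an illustrative session; the setting applies only to that cqlsh session):
cqlsh> CONSISTENCY
cqlsh> CONSISTENCY QUORUM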
- Suppose you install DataStax Enterprise on these nodes:
- node0 10.168.66.41 (seed1)
- node1 10.176.43.66
- node2 10.168.247.41
- node3 10.176.170.59 (seed2)
- node4 10.169.61.170
- node5 10.169.30.138
- If the nodes are behind a firewall, open the required ports for
internal/external communication. See Configuring firewall port access.
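For example, on a Linux host using iptables, the default Cassandra ports might be opened as follows (a sketch; verify the port numbers against your cassandra.yaml before applying):
$ sudo iptables -A INPUT -p tcp --dport 7000 -j ACCEPT  ## internode communication
$ sudo iptables -A INPUT -p tcp --dport 7001 -j ACCEPT  ## SSL internode communication
$ sudo iptables -A INPUT -p tcp --dport 7199 -j ACCEPT  ## JMX monitoring
$ sudo iptables -A INPUT -p tcp --dport 9042 -j ACCEPT  ## CQL native transport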
- If DataStax Enterprise is running, stop the nodes and clear the data:
- Installer-Services and Package installations:
$ sudo service dse stop
$ sudo rm -rf /var/lib/cassandra/* ## Clears the data from the default directories
- Installer-No Services and Tarball installations:
From the install directory:
$ sudo bin/dse cassandra-stop
$ sudo rm -rf /var/lib/cassandra/* ## Clears the data from the default directories
- Set the properties in the cassandra.yaml file for each
node, located in:
- Installer-Services and Package installations:
/etc/dse/cassandra/cassandra.yaml
- Installer-No Services and Tarball installations:
install_location/resources/cassandra/conf/cassandra.yaml
Important: After making any changes
in the cassandra.yaml file, you must restart the node
for the changes to take effect.
Properties to set:
Note: If the nodes in the cluster are identical in
terms of disk layout, shared libraries, and so on, you can use the same copy
of the cassandra.yaml file on all of them.
- num_tokens: 256 for Cassandra
nodes
- num_tokens: 1 for Hadoop and
Solr nodes
- -seeds: internal_IP_address of
each seed node
- listen_address: empty
If not
set, Cassandra asks the system for the local address, the one associated
with its host name. In some cases Cassandra doesn't produce the correct
address and you must specify the listen_address.
- auto_bootstrap: false
Add the
bootstrap setting only when
initializing a fresh cluster with no data.
- endpoint_snitch: snitch
For
more information, see endpoint_snitch and About Snitches.
- If you are using a cassandra.yaml from a previous
version, remove the following options, as they are no longer supported by
DataStax Enterprise:
## Replication strategy to use for the auth keyspace.
auth_replication_strategy: org.apache.cassandra.locator.SimpleStrategy
auth_replication_options:
replication_factor: 1
Example:
You must include at least one seed node from each data center. It is a best
practice to have more than one seed node per data center.
cluster_name: 'MyDemoCluster'
num_tokens: 256
seed_provider:
- class_name: org.apache.cassandra.locator.SimpleSeedProvider
parameters:
- seeds: "10.168.66.41,10.176.170.59"
listen_address:
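The snitch is set in the same file. For the recommended GossipingPropertyFileSnitch, the entry would be:
endpoint_snitch: GossipingPropertyFileSnitch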
- In the cassandra-topology.properties or
cassandra-rackdc.properties file, use your naming
convention to assign data center and rack names to the IP addresses of each
node, and assign a default data center name and rack name for unknown
nodes. The file is located in:
- Installer-Services and Package installations:
/etc/dse/cassandra
- Installer-No Services and Tarball installations:
install_location/resources/cassandra/conf
Example:
# Cassandra Node IP=Data Center:Rack
10.168.66.41=DC1:RAC1
10.176.43.66=DC2:RAC1
10.168.247.41=DC1:RAC1
10.176.170.59=DC2:RAC1
10.169.61.170=DC1:RAC1
10.169.30.138=DC2:RAC1
# default for unknown nodes
default=DC1:RAC1
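The IP=DC:RACK format above applies to cassandra-topology.properties, which is read by the PropertyFileSnitch. With the recommended GossipingPropertyFileSnitch, each node instead declares only its own location in its local cassandra-rackdc.properties file, for example on node0:
dc=DC1
rack=RAC1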
- After you have installed and configured DataStax Enterprise on all nodes, start
the seed nodes one at a time, and then start the rest of the nodes (see the example
commands after the note below):
Note: If the node has restarted because of automatic restart, you must stop the
node and clear the data directories, as described above.
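For example, to start each node (a sketch; adjust for your installation type):
- Installer-Services and Package installations:
$ sudo service dse start
- Installer-No Services and Tarball installations:
$ install_location/bin/dse cassandra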
- Check that your cluster is up and running:
- Installer-Services and Package installations:
$ nodetool status
- Installer-No Services and Tarball installations:
$ install_location/bin/nodetool status
Results
Datacenter: DC1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 10.168.66.41 45.96 KB 256 27.4% c885aac7-f2c0-... RAC1
UN 10.168.247.41 66.34 KB 256 36.6% fa31416c-db22-... RAC1
UN 10.169.61.170 55.72 KB 256 33.0% f488367f-c14f-... RAC1
Datacenter: DC2
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 10.176.43.66 45.96 KB 256 27.4% f9fa31c7-f3c0-... RAC1
UN 10.176.170.59 66.34 KB 256 36.6% a5bb526c-db51-... RAC1
UN 10.169.30.138 55.72 KB 256 33.0% b836478f-c49f-... RAC1