In this deployment scenario, a mixed workload cluster has more than one data center for each
type of node. For example, if the cluster has 4 Hadoop nodes, 4 Cassandra nodes, and
2 Solr nodes, the cluster could have 5 data centers: 2 data centers for Hadoop
nodes, 2 data centers for Cassandra nodes, and 1 data center for Solr nodes. By
contrast, a single data center cluster has only 1 data center for each type of node.
In multiple data center deployments, data replication can be distributed across
multiple, geographically dispersed data centers; between different physical racks in
a data center; or between public cloud providers and on-premises managed data
centers. Data replicates across the data centers automatically and transparently; no
ETL work is necessary to move data between different systems or servers. You
configure the number of copies of the data in each data center, and Cassandra handles
the rest, replicating the data for you.
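The number of copies kept in each data center is part of each keyspace definition. The
following is a minimal sketch, not part of this procedure, assuming a hypothetical
keyspace named demo, a scratch file named create_keyspace.cql, and the DC1/DC2 data
center names used later on this page; adjust the replication factors to your own
requirements:
$ cat > create_keyspace.cql <<'EOF'
-- Keep 3 copies of the data in each data center (hypothetical values)
CREATE KEYSPACE demo
  WITH REPLICATION = { 'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3 };
EOF
$ cqlsh -f create_keyspace.cql   ## Run the statement against a running node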
To configure a single data center cluster, see Single data center deployment.
Prerequisites
To configure a multi-node cluster with multiple data centers:
- DataStax Enterprise is installed on each node.
- Choose a name for the cluster.
- For a mixed-workload cluster, determine the purpose of
each node.
- Get the IP address of each node (a lookup example follows this list).
- Determine which nodes are seed nodes. (Seed nodes provide
the means for all the nodes to find each other and learn the topology of the
ring.)
- Develop a naming convention for each data center and rack, for example: DC1, DC2
or 100, 200 and RAC1, RAC2 or R101, R102.
- Other possible configuration settings are described in
the cassandra.yaml configuration
file.
- Set virtual nodes correctly for the type of data
center. DataStax recommends using virtual nodes only on data centers running
Cassandra real-time workloads. See Virtual nodes.
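For example, to look up a node's IP address (a minimal sketch for Linux hosts; commands
and interface names vary by operating system):
$ hostname -i        ## Address associated with the node's hostname
$ ip addr show       ## All configured interfaces and their addresses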
Procedure
This configuration example describes installing a 6-node cluster spanning 2 data
centers. The default consistency level is QUORUM.
-
Suppose you install DataStax Enterprise on these nodes:
- node0 10.168.66.41 (seed1)
- node1 10.176.43.66
- node2 10.168.247.41
- node3 10.176.170.59 (seed2)
- node4 10.169.61.170
- node5 10.169.30.138
-
If the nodes are behind a firewall, open the required ports for
internal/external communication. See Configuring
firewall port access.
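For example, to spot-check that another node is reachable after opening the ports (a
minimal sketch; 7000 is the default internode port and 9042 the default native
transport port, but see Configuring firewall port access for the complete list your
workloads require):
$ nc -zv 10.176.170.59 7000   ## Internode (gossip/storage) traffic
$ nc -zv 10.176.170.59 9042   ## Native transport (CQL) clients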
-
If DataStax Enterprise is running, stop the nodes and clear the data:
- Packaged
installs:
$ sudo service dse stop
$ sudo rm -rf /var/lib/cassandra/* ## Clears the data from the default directories
- Tarball installs:
From the install
directory:
$ sudo bin/dse cassandra-stop
$ sudo rm -rf /var/lib/cassandra/* ## Clears the data from the default directories
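To confirm that the data directories are empty before continuing (a quick check,
assuming the default data directory):
$ ls /var/lib/cassandra/   ## Should list nothing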
-
Set the properties in the cassandra.yaml file for each
node.
Important: After making any changes
in the cassandra.yaml file, you must restart the node
for the changes to take effect.
Location:
- Packaged installs:
/etc/dse/cassandra/cassandra.yaml
- Tarball installs:
install_location/resources/cassandra/conf/cassandra.yaml
Properties to set:
Note: If the nodes in the cluster are identical in
terms of disk layout, shared libraries, and so on, you can use the same copy
of the cassandra.yaml file on all of them.
- num_tokens: 256 for Cassandra nodes
- num_tokens: 1 for Hadoop and Solr nodes
- -seeds: internal_IP_address of
each seed node
- listen_address: empty
If not
set, Cassandra asks the system for the local address, the one associated
with its hostname. In some cases Cassandra doesn't produce the correct
address and you must specify the listen_address.
- auto_bootstrap: false (Add this
setting only when initializing a fresh cluster with no data.)
- If you are using a cassandra.yaml from a previous
version, remove the following options, as they are no longer supported by
DataStax
Enterprise:
## Replication strategy to use for the auth keyspace.
auth_replication_strategy: org.apache.cassandra.locator.SimpleStrategy
auth_replication_options:
    replication_factor: 1
Example:
You must include at least one seed node from each data center. It is a best
practice to have more than one seed node per data center.
cluster_name: 'MyDemoCluster'
num_tokens: 256
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.168.66.41,10.176.170.59"
listen_address:
-
If necessary, change the dse.yaml file on each node to specify the
snitch to be delegated by the DseDelegateSnitch. For more information about
snitches, see About Snitches.
- Packaged installs: /etc/dse/dse.yaml
- Tarball installs: install_location/resources/dse/conf/dse.yaml
Example of specifying the PropertyFileSnitch:
delegated_snitch: org.apache.cassandra.locator.PropertyFileSnitch
-
In the cassandra-topology.properties file, use your naming
convention to assign data center and rack names to the IP addresses of each
node, and assign a default data center name and rack name for unknown
nodes.
- Packaged installs: /etc/dse/cassandra/cassandra-topology.properties
- Tarball installs: install_location/resources/cassandra/conf/cassandra-topology.properties
Example:
# Cassandra Node IP=Data Center:Rack
10.168.66.41=DC1:RAC1
10.176.43.66=DC2:RAC1
10.168.247.41=DC1:RAC1
10.176.170.59=DC2:RAC1
10.169.61.170=DC1:RAC1
10.169.30.138=DC2:RAC1
# default for unknown nodes
default=DC1:RAC1
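Because the PropertyFileSnitch reads this file on every node, keep an identical copy
of it on all nodes. A minimal sketch for pushing the file from one node to the rest
(assuming a packaged install path and SSH access between nodes):
$ for ip in 10.176.43.66 10.168.247.41 10.176.170.59 10.169.61.170 10.169.30.138; do
      scp /etc/dse/cassandra/cassandra-topology.properties "$ip":/etc/dse/cassandra/
  done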
-
After you have installed and configured DataStax Enterprise on all nodes, start
the seed nodes one at a time, and then start the rest of the nodes.
Note: If a node has restarted because of automatic restart, you must stop the
node and clear the data directories, as described above.
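For example, to start a node (a minimal sketch for a real-time Cassandra node; Hadoop
and Solr nodes take additional start options):
- Packaged installs:
$ sudo service dse start
- Tarball installs, from the install directory:
$ bin/dse cassandra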
-
Check that your cluster is up and running:
- Packaged installs: $ nodetool status
- Tarball installs: $ install_location/bin/nodetool status