Initializing a multiple node cluster (multiple datacenters)
A deployment scenario for a Cassandra cluster with multiple datacenters.
This topic describes how to deploy a Cassandra cluster that spans multiple datacenters.
This example describes installing a six-node cluster spanning two datacenters. Each node is configured to use the GossipingPropertyFileSnitch (multiple rack aware) and 256 virtual nodes (vnodes).
In Cassandra, the term datacenter refers to a grouping of nodes. A datacenter is synonymous with a replication group, that is, a grouping of nodes configured together for replication purposes.
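To illustrate the idea, once the cluster is running you could create a keyspace whose data is replicated to both datacenters with NetworkTopologyStrategy. This is only a sketch: the keyspace name and replication factors below are hypothetical, and the datacenter names must match the ones you assign later in cassandra-rackdc.properties.
# hypothetical keyspace; DC1 and DC2 are the datacenter names used in this example
cqlsh -e "CREATE KEYSPACE example_ks WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};"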
Prerequisites
- A good understanding of how Cassandra works. Be sure to read at least Understanding the architecture, Data replication, and Cassandra's rack feature.
- Install Cassandra on each node.
- Choose a name for the cluster.
- Get the IP address of each node.
- Determine which nodes will be seed nodes. Do not make all nodes seed nodes. Please read Internode communications (gossip).
- Determine the snitch and replication strategy. The GossipingPropertyFileSnitch and NetworkTopologyStrategy are recommended for production environments.
- If using multiple datacenters, determine a naming convention for each datacenter and rack, for example: DC1, DC2 or 100, 200 and RAC1, RAC2 or R101, R102. Choose the names carefully; renaming a datacenter is not possible.
- Other possible configuration settings are described in The cassandra.yaml configuration file and property files such as cassandra-rackdc.properties.
Procedure
-
Suppose you install Cassandra on these nodes:
node0 10.168.66.41 (seed1)
node1 10.176.43.66
node2 10.168.247.41
node3 10.176.170.59 (seed2)
node4 10.169.61.170
node5 10.169.30.138
Note: It is a best practice to have more than one seed node per datacenter.
-
If you have a firewall running in your cluster, you must open certain ports for communication between the nodes. See Configuring firewall port access.
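As a sketch only, on a Linux node using iptables you might open the ports Cassandra typically uses: 7000 (internode), 7001 (TLS internode), 7199 (JMX), and 9042 (CQL clients). Verify the exact ports against your own configuration and the firewall documentation.
# example iptables rules; port numbers assume the default Cassandra configuration
sudo iptables -A INPUT -p tcp -m tcp --dport 7000 -j ACCEPT
sudo iptables -A INPUT -p tcp -m tcp --dport 7001 -j ACCEPT
sudo iptables -A INPUT -p tcp -m tcp --dport 7199 -j ACCEPT
sudo iptables -A INPUT -p tcp -m tcp --dport 9042 -j ACCEPT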
-
If Cassandra is running, you must stop the server and clear the data:
Doing this removes the default cluster_name (Test Cluster) from the system table. All nodes must use the same cluster name.
Package installations:
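For example, assuming a service-based install and the default data directory /var/lib/cassandra:
sudo service cassandra stop
# path assumes the default data directory; adjust to your data_file_directories setting
sudo rm -rf /var/lib/cassandra/data/system/*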
Tarball installations:
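For example, find the Cassandra process, stop it, and clear the system keyspace data (again assuming the default data directory):
ps auwx | grep cassandra
# replace <pid> with the process ID reported above
sudo kill <pid>
sudo rm -rf /var/lib/cassandra/data/system/*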
-
Set the properties in the cassandra.yaml file for each node:
Note: After making any changes in the cassandra.yaml file, you must restart the node for the changes to take effect.
Properties to set:
- num_tokens: recommended value: 256
- -seeds: internal IP address of each seed node
Seed nodes do not bootstrap, which is the process of a new node joining an existing cluster. For new clusters, the bootstrap process on seed nodes is skipped.
- listen_address:
If not set, Cassandra asks the system for the local address, the one associated with its hostname. In some cases Cassandra doesn't produce the correct address and you must specify the listen_address.
- endpoint_snitch: GossipingPropertyFileSnitch (See endpoint_snitch.) If you are changing snitches, see Switching snitches.
Note: If the nodes in the cluster are identical in terms of disk layout, shared libraries, and so on, you can use the same copy of the cassandra.yaml file on all of them.
Example:
cluster_name: 'MyCassandraCluster'
num_tokens: 256
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.168.66.41,10.176.170.59"
listen_address:
endpoint_snitch: GossipingPropertyFileSnitch
Note: Include at least one node from each datacenter.
-
In the cassandra-rackdc.properties file, assign the datacenter and rack names you determined in the Prerequisites. For example:
Nodes 0 to 2
# Indicate the rack and dc for this node
dc=DC1
rack=RAC1
Nodes 3 to 5
# Indicate the rack and dc for this node
dc=DC2
rack=RAC1
-
After you have installed and configured Cassandra on all nodes, start the seed nodes one at a time, and then start the rest of the nodes.
Note: If a node has already restarted because of automatic restart, you must first stop the node and clear the data directories, as described above.
Package installations:
sudo service cassandra start
Tarball installations:
cd install_location
bin/cassandra
-
To check that the ring is up and running, run:
Package installations:
nodetool status
Tarball installations:
cd install_location
bin/nodetool status
Each node should be listed, and its status and state should be UN (Up Normal).
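The output resembles the following abridged sketch (exact columns and values vary by Cassandra version; the addresses are the example nodes above):
Datacenter: DC1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address         Load   Tokens  Owns   Host ID  Rack
UN  10.168.66.41    ...    256     ...    ...      RAC1
UN  10.176.43.66    ...    256     ...    ...      RAC1
UN  10.168.247.41   ...    256     ...    ...      RAC1
Datacenter: DC2
===============
...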
The location of the cassandra.yaml file depends on the type of installation:
Package installations: /etc/cassandra/cassandra.yaml
Tarball installations: install_location/resources/cassandra/conf/cassandra.yaml