Initializing a multiple node cluster (single datacenter)
A deployment scenario for an Apache Cassandra cluster with a single datacenter.
This topic describes how to deploy an Apache Cassandra™ cluster with a single datacenter. If you're new to Cassandra and haven't set up a cluster before, see Planning and testing cluster deployments.
Prerequisites
- A good understanding of how Cassandra works. At minimum, be sure to read Understanding the architecture, especially the Data replication section, and Cassandra's rack feature.
- Install Cassandra on each node.
- Choose a name for the cluster.
- Get the IP address of each node.
- Determine which nodes will be seed nodes. Do not make all nodes seed nodes; see Internode communications (gossip).
- Determine the snitch and replication strategy. The GossipingPropertyFileSnitch and NetworkTopologyStrategy are recommended for production environments.
- Determine a naming convention for each rack. For example, RAC1 and RAC2, or R101 and R102.
- The cassandra.yaml configuration file, and property files such as cassandra-rackdc.properties, give you more configuration options. See the Configuration section for more information.
This example describes installing a six-node cluster spanning two racks in a single datacenter. Each node is already configured to use the GossipingPropertyFileSnitch and 256 virtual nodes (vnodes).
In Cassandra, "datacenter" is synonymous with "replication group". Both terms refer to a set of nodes configured as a group for replication purposes.
Procedure
- Suppose you install Cassandra on these nodes:

      node0  110.82.155.0  (seed1)
      node1  110.82.155.1
      node2  110.82.155.2
      node3  110.82.156.3  (seed2)
      node4  110.82.156.4
      node5  110.82.156.5

  Note: It is a best practice to have more than one seed node per datacenter.

  If you have a firewall running in your cluster, you must open certain ports for communication between the nodes, as illustrated below. See Configuring firewall port access.
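  For example, with ufw (the port numbers are Cassandra's defaults; verify them against Configuring firewall port access for your version, and substitute your own firewall tool as needed):

      # Illustrative only: open the default Cassandra ports with ufw
      sudo ufw allow 7000/tcp   # internode communication
      sudo ufw allow 7001/tcp   # SSL internode communication
      sudo ufw allow 7199/tcp   # JMX monitoring
      sudo ufw allow 9042/tcp   # native transport (CQL clients)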
- If Cassandra is running, you must stop the server and clear the data. Doing this removes the default cluster_name (Test Cluster) from the system table; all nodes must use the same cluster name.
Package installations:
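      # Typical sequence; the data path assumes the default data_file_directories
      sudo service cassandra stop
      sudo rm -rf /var/lib/cassandra/data/system/*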
Tarball installations:
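      # Typical sequence: find and stop the Cassandra process, then clear the
      # system data (the path assumes the default data directory)
      ps auwx | grep cassandra
      sudo kill <pid>
      sudo rm -rf /var/lib/cassandra/data/system/*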
- Set the properties in the cassandra.yaml file for each node.

  Note: After making any changes in the cassandra.yaml file, you must restart the node for the changes to take effect.

  Properties to set:
  - cluster_name: name of the cluster
  - num_tokens: recommended value: 256
  - -seeds: internal IP address of each seed node. In new clusters, seed nodes don't bootstrap (bootstrapping is the process of a new node joining an existing cluster).
  - listen_address: if not set, Cassandra asks the system for the local address, the one associated with its hostname. In some cases Cassandra doesn't produce the correct address, and then you must specify the listen_address explicitly. If the node is a seed node, this address must match an IP address in the seeds list; otherwise, gossip communication fails because the node doesn't know that it is a seed.
  - rpc_address: listen address for client connections
  - endpoint_snitch: name of snitch (see endpoint_snitch). If you are changing snitches, see Switching snitches.
  Note: If the nodes in the cluster are identical in terms of disk layout, shared libraries, and so on, you can use the same cassandra.yaml file on all of them.

  Example:
      cluster_name: 'MyCassandraCluster'
      num_tokens: 256
      seed_provider:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
            - seeds: "110.82.155.0,110.82.156.3"
      listen_address:
      rpc_address: 0.0.0.0
      endpoint_snitch: GossipingPropertyFileSnitch
  If rpc_address is set to a wildcard address (0.0.0.0), then broadcast_rpc_address must be set, or the service won't start.
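  For example, on node1 you might pair the wildcard rpc_address with that node's own IP (the pairing below is illustrative, using an address from the node list above):

      rpc_address: 0.0.0.0
      broadcast_rpc_address: 110.82.155.1   # node1's own address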
- In the cassandra-rackdc.properties file, assign the datacenter and rack names you determined in the Prerequisites. For example:
      # indicate the rack and dc for this node
      dc=DC1
      rack=RAC1
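  Nodes in the second rack use the second rack name. For instance, if node3 through node5 sit in the second rack (an illustrative assignment; this topic doesn't prescribe one), their files would read:

      # indicate the rack and dc for this node
      dc=DC1
      rack=RAC2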
- The GossipingPropertyFileSnitch always loads cassandra-topology.properties when that file is present. Remove the file from each node on any new cluster, or any cluster migrated from the PropertyFileSnitch.
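  For example (the path shown assumes a package installation; adjust it for tarball installs):

      sudo rm /etc/cassandra/cassandra-topology.properties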
- After you have installed and configured Cassandra on all nodes, DataStax recommends starting the seed nodes one at a time, and then starting the rest of the nodes.

  Note: If the node has restarted because of automatic restart, you must first stop the node and clear the directories, as described above.

  Package installations:
      sudo service cassandra start   # starts Cassandra
  Tarball installations:

      cd install_location
      bin/cassandra
- To check that the ring is up and running, run:
Package installations:
      nodetool status
  Tarball installations:

      cd install_location
      bin/nodetool status
  The output should list each node and show its status as UN (Up Normal).
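  A healthy six-node, two-rack cluster looks roughly like this (the addresses follow the node list above; the loads, ownership percentages, and rack assignments are illustrative, and the Host ID column is omitted):

      Datacenter: DC1
      ===============
      Status=Up/Down |/ State=Normal/Leaving/Joining/Moving
      --  Address        Load      Tokens  Owns   Rack
      UN  110.82.155.0   52.2 KB   256     16.7%  RAC1
      UN  110.82.155.1   51.9 KB   256     16.6%  RAC1
      ...
      UN  110.82.156.5   53.1 KB   256     16.7%  RAC2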
  The location of the cassandra.yaml file depends on the type of installation:

      Cassandra package installations    /etc/cassandra/cassandra.yaml
      Cassandra tarball installations    install_location/cassandra/conf/cassandra.yaml