Initializing a multiple node cluster (multiple datacenters)

A deployment scenario for a Cassandra cluster with multiple datacenters.

This example describes installing a six-node cluster spanning two datacenters. Each node is configured to use the GossipingPropertyFileSnitch (which is rack and datacenter aware) and 256 virtual nodes (vnodes).

In Cassandra, the term datacenter refers to a grouping of nodes. A datacenter is synonymous with a replication group, that is, a grouping of nodes configured together for replication purposes.
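For example, once the cluster is running, a keyspace can replicate independently per datacenter using NetworkTopologyStrategy. This is a minimal sketch; myks is a hypothetical keyspace name, and DC1/DC2 are placeholder datacenter names:

```shell
# Hypothetical example: replicate a keyspace three times in each datacenter.
# The datacenter names must match the names your snitch reports.
CQL="CREATE KEYSPACE myks WITH replication =
  {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};"
# Run it against any node once the cluster is up, e.g.:
#   cqlsh node0 -e "$CQL"
echo "$CQL"
```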


Each node must be correctly configured before starting the cluster. Determine or perform the following before you start the cluster:


  1. Suppose you install Cassandra on these six nodes:
    node0 (seed1)
    node1
    node2
    node3 (seed2)
    node4
    node5
    Note: It is a best practice to have more than one seed node per datacenter.
  2. If you have a firewall running in your cluster, you must open certain ports for communication between the nodes. See Configuring firewall port access.
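Cassandra's default port numbers are well known; as a sketch, the inter-node and client ports could be opened with iptables rules like the following (printed here rather than applied):

```shell
# Standard Cassandra default ports:
#   7000  inter-node cluster communication
#   7001  SSL inter-node communication
#   7199  JMX monitoring
#   9042  CQL native transport
#   9160  Thrift client API (legacy)
PORTS="7000 7001 7199 9042 9160"
for p in $PORTS; do
  # Rules are echoed for review; remove 'echo' to install them.
  echo sudo iptables -A INPUT -p tcp --dport "$p" -j ACCEPT
done
```

Your environment may use a different firewall tool or non-default ports; see Configuring firewall port access for the authoritative list.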
  3. If Cassandra is running, you must stop the server and clear the data:

    Doing this removes the default cluster_name (Test Cluster) from the system table. All nodes must use the same cluster name.

    Package installations:

    1. Stop Cassandra:
      sudo service cassandra stop
    2. Clear the data:
      sudo rm -rf /var/lib/cassandra/*

    Tarball installations:

    1. Stop Cassandra (find the process ID, then kill it):
      ps auwx | grep cassandra
      sudo kill <pid>
    2. Clear the data:
      sudo rm -rf install_location/data/*
  4. Set the properties in the cassandra.yaml file for each node:
    Note: After making any changes in the cassandra.yaml file, you must restart the node for the changes to take effect.
    Properties to set:
    • num_tokens: recommended value: 256
    • -seeds: internal IP address of each seed node

      Seed nodes do not bootstrap, which is the process of a new node joining an existing cluster. For new clusters, the bootstrap process on seed nodes is skipped.

    • listen_address:

      If not set, Cassandra asks the system for the local address, the one associated with its hostname. In some cases Cassandra doesn't produce the correct address and you must specify the listen_address.

    • endpoint_snitch: GossipingPropertyFileSnitch (See endpoint_snitch.) If you are changing snitches, see Switching snitches.
    Note: If the nodes in the cluster are identical in terms of disk layout, shared libraries, and so on, you can use the same copy of the cassandra.yaml file on all of them.


    Example:

    cluster_name: 'MyCassandraCluster'
    num_tokens: 256
    seed_provider:
      - class_name: org.apache.cassandra.locator.SimpleSeedProvider
        parameters:
             - seeds: "<seed1_address>,<seed2_address>"
    endpoint_snitch: GossipingPropertyFileSnitch
    Note: Include at least one node from each datacenter.
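Before starting a node, the settings above can be sanity-checked with a small shell helper. This is a sketch; check_conf is a hypothetical function, and the file path must point at your node's cassandra.yaml:

```shell
# Verify that the per-node properties from this step are present in a
# cassandra.yaml (it does not validate their values).
check_conf() {
  file="$1"; missing=0
  for key in cluster_name num_tokens endpoint_snitch; do
    grep -q "^${key}:" "$file" || { echo "missing: $key"; missing=1; }
  done
  return $missing
}
# Example: check_conf /etc/cassandra/cassandra.yaml
```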
  5. In the cassandra-rackdc.properties file, assign the datacenter and rack names you determined in the Prerequisites. For example:

    Nodes 0 to 2

    ## Indicate the rack and dc for this node
    dc=DC1
    rack=RAC1

    Nodes 3 to 5

    ## Indicate the rack and dc for this node
    dc=DC2
    rack=RAC1
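The assignment in this step can be sketched as a small helper that maps a node index to its datacenter and rack. node_rackdc is hypothetical, and the DC1/DC2/RAC1 names are the example values used in this topic:

```shell
# Hypothetical helper: nodes 0-2 belong to DC1, nodes 3-5 to DC2,
# all in rack RAC1, matching the layout above.
node_rackdc() {
  if [ "$1" -le 2 ]; then dc=DC1; else dc=DC2; fi
  printf 'dc=%s\nrack=RAC1\n' "$dc"
}
# Example: node_rackdc 4 > /etc/cassandra/cassandra-rackdc.properties  (on node4)
```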
  6. After you have installed and configured Cassandra on all nodes, start the seed nodes one at a time, and then start the rest of the nodes.
    Note: If the node has restarted because of automatic restart, you must first stop the node and clear the directories, as described above.

    Package installations:

    sudo service cassandra start

    Tarball installations:

    cd install_location
    bin/cassandra
  7. To check that the ring is up and running, run:

    Package installations:

    nodetool status

    Tarball installations:

    cd install_location
    bin/nodetool status

    Each node should be listed, and its status and state should be UN (Up Normal).
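For a healthy six-node cluster, the output resembles the skeleton below (addresses, load figures, and host IDs are omitted here and will differ in your environment):

```
Datacenter: DC1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address  Load  Tokens  Owns  Host ID  Rack
UN  ...      ...   256     ...   ...      RAC1
UN  ...      ...   256     ...   ...      RAC1
UN  ...      ...   256     ...   ...      RAC1
Datacenter: DC2
===============
UN  ...      ...   256     ...   ...      RAC1
UN  ...      ...   256     ...   ...      RAC1
UN  ...      ...   256     ...   ...      RAC1
```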

    The location of the cassandra.yaml file depends on the type of installation:
    Package installations: /etc/cassandra/cassandra.yaml
    Tarball installations: install_location/resources/cassandra/conf/cassandra.yaml
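The two locations above can be resolved with a small shell sketch; INSTALL_LOCATION is an assumed variable standing in for your tarball install directory:

```shell
# Pick the cassandra.yaml path based on installation type: prefer the
# package location if it exists, otherwise fall back to the tarball layout.
if [ -f /etc/cassandra/cassandra.yaml ]; then
  CONF=/etc/cassandra/cassandra.yaml                                      # package
else
  CONF="${INSTALL_LOCATION:-.}/resources/cassandra/conf/cassandra.yaml"  # tarball
fi
echo "$CONF"
```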