Steps for adding or replacing nodes in single-token architecture clusters.
This topic applies only to clusters using single-token architecture, not vnodes.
Cassandra allows you to add capacity to a cluster either by introducing new nodes in stages or by adding an entire datacenter. When a new node joins an existing cluster, it needs to know:
- Its position in the ring and the range of data it is responsible for, which are determined by the initial_token and the partitioner.
- The seed nodes to contact for learning about the cluster and establishing the gossip process.
- The name of the cluster it is joining and how the node should be addressed within the cluster.
- Any other non-default settings made to cassandra.yaml on your existing cluster.
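A minimal cassandra.yaml fragment illustrating these settings (the token, addresses, and cluster name below are placeholders, not values from your cluster):

```yaml
cluster_name: 'MyCluster'   # must match the name of the cluster being joined
initial_token: 28356863910078205288614550619314017621   # position in the ring
listen_address: 10.0.0.5    # how the node is addressed within the cluster
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "10.0.0.1,10.0.0.2"   # contact points for gossip
```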
When you add one or more nodes to a cluster, you must calculate the tokens for the new nodes. Use one of the following approaches:
- Add capacity by doubling the cluster size
- Adding capacity by doubling (or tripling or quadrupling) the number of nodes is less complicated when assigning tokens. Existing nodes keep their existing token assignments, and new nodes are assigned tokens that bisect (or trisect) the existing token ranges. For example, when you generate tokens for six nodes, three of the generated token values will be the same as if you had generated tokens for three nodes. In other words, first identify the token values that are already in use, and then assign the newly calculated token values to the newly added nodes.
- Recalculate new tokens for all nodes and move nodes around the ring
- When increasing capacity by a non-uniform number of nodes, you must recalculate tokens for the entire cluster, and then use nodetool move to assign the new tokens to the existing nodes. After all nodes are restarted with their new token assignments, run nodetool cleanup to remove unused keys on all nodes. These operations are resource intensive and should be done during low-usage times.
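As a sketch of why non-uniform growth is more disruptive, the following Python snippet recalculates tokens for a cluster growing from three to five nodes, assuming the RandomPartitioner, whose evenly spaced tokens are i * 2**127 / N for an N-node ring. Every existing token except 0 changes, so each of those nodes needs a nodetool move:

```python
def tokens(n):
    """Evenly spaced RandomPartitioner tokens for an n-node ring."""
    return [i * 2**127 // n for i in range(n)]

old = tokens(3)  # tokens held by the three existing nodes
new = tokens(5)  # recalculated tokens for the five-node ring

# Every existing node whose token is not in the new layout must be
# reassigned with nodetool move.
moves = [t for t in old if t not in new]
print(len(moves))  # 2 of the 3 existing nodes must move
```

By contrast, doubling from three to six nodes reproduces all three existing token values, which is why the doubling approach avoids nodetool move entirely.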
When configuring the new node in Cassandra 2.1 or later, you must supply a value for the initial_token property. If you leave this property empty, Cassandra assigns the node a random token range, and adding nodes with random token range assignments results in a badly unbalanced ring.
- Install Cassandra on the new nodes, but do not start them.
- Calculate the tokens for the nodes based on your expansion strategy, using the Token Generating Tool.
- Configure the cassandra.yaml file for the new nodes.
- Set the initial_token according to your token calculations.
- Start Cassandra on each new node. Allow two minutes between node initializations. You can monitor the startup and data streaming process using nodetool netstats.
- After the new nodes are fully bootstrapped, assign the new initial_token property value to the nodes that required new tokens, and then run nodetool move new_token, one node at a time.
- After all nodes have their new tokens assigned, run nodetool cleanup, one node at a time, waiting for cleanup to complete on each node before moving to the next. This step removes the keys that no longer belong to the previously existing nodes. Note: Cleanup may be safely postponed to low-usage hours.
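The token calculation for the doubling strategy above can be checked numerically. A sketch, again assuming the RandomPartitioner's evenly spaced tokens: generating tokens for six nodes reproduces the three existing tokens, and the new nodes take the tokens that bisect the existing ranges:

```python
def tokens(n):
    """Evenly spaced RandomPartitioner tokens for an n-node ring."""
    return [i * 2**127 // n for i in range(n)]

existing = tokens(3)
doubled = tokens(6)

# Every other token in the doubled ring is already in use; existing
# nodes keep those tokens, and the interleaved tokens go to new nodes.
assert doubled[0::2] == existing
new_node_tokens = doubled[1::2]
print(new_node_tokens)
```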
Before starting this procedure, read the guidelines in Adding Capacity to an Existing Cluster above.
- Ensure that you are using NetworkTopologyStrategy for all of your keyspaces.
- For each new node, edit the configuration properties in the cassandra.yaml file:
- Set the initial_token. Be sure to offset the tokens in the new datacenter; see Generating tokens.
- Set the cluster name.
- Set any other non-default settings.
- Set the seed lists. Every node in the cluster must have the same list of seeds and include at least one node from each datacenter. Typically one to three seeds are used per datacenter.
- Update either the cassandra-topology.properties (PropertyFileSnitch) or cassandra-rackdc.properties (GossipingPropertyFileSnitch) file on all servers to include the new nodes. You do not need to restart. The location of the conf directory depends on the type of installation:
- Package installations: /etc/cassandra/conf
- Tarball installations: install_location/conf
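For example, with GossipingPropertyFileSnitch, each new node declares its own datacenter and rack in conf/cassandra-rackdc.properties (the names below are placeholders):

```
dc=DC2
rack=RAC1
```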
- Ensure that your client does not auto-detect the new nodes, so that they aren't contacted by the client until explicitly directed. For example, in Hector, set autoDiscoverHosts to false.
- If you use a QUORUM consistency level for reads or writes, consider whether LOCAL_QUORUM or EACH_QUORUM better meets the requirements for multiple datacenters.
- Start the new nodes.
- After all nodes are running in the cluster, update the replication options for your keyspaces so that replicas are placed in the new datacenter.
To replace a dead node:

- Confirm that the node is dead by running the nodetool ring command on any live node in the cluster. The nodetool ring output shows a Down status for the token value of the dead node.
- Install Cassandra on the replacement node.
- Remove any preexisting Cassandra data on the replacement node:
$ sudo rm -rf /var/lib/cassandra/*
- Set auto_bootstrap: true in the cassandra.yaml file. (If auto_bootstrap is not in the cassandra.yaml file, it defaults to true.)
- Set the initial_token in the cassandra.yaml file to the value of the dead node's token minus 1.
- Configure any non-default settings in the node's cassandra.yaml to match your existing cluster.
- Start the new node.
- After the new node has finished bootstrapping, check that it is marked Up using the nodetool ring command.
- Run nodetool repair on each keyspace to ensure the node is fully consistent. For example:
$ nodetool repair -h 10.46.123.12 keyspace_name
- Remove the dead node.
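The initial_token arithmetic in the steps above is simple, but 39-digit tokens are easy to mistype; a quick sketch (the dead node's token below is a placeholder, not a value from your cluster):

```python
# Token of the dead node, as reported by nodetool ring (placeholder value).
dead_token = 28356863910078205288614550619314017621

# The replacement node takes the dead node's token minus 1, which places
# it immediately before the dead node in the ring.
replacement_token = dead_token - 1
print(replacement_token)  # 28356863910078205288614550619314017620
```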