Replace a dead node in a single-token architecture cluster
Steps for replacing nodes in single-token architecture clusters, not vnodes.
Warning: Only add new nodes to the cluster. A new node is a system that HCD has never started. The node must have absolutely NO PREVIOUS DATA. Adding nodes previously used for testing, or that have been removed from another cluster, merges the older data and its incompatible schema into the cluster and may cause data loss or corruption.
The output of the nodetool status command provides a two-letter code for each node, indicating the status and the state of the node. For example, UN means a node that is Up (its status) and in a Normal state. Different releases of HCD provide different information in the state field when the status is D (Down).
Let’s first clarify what to expect when a node is stopped. A node is in a stopped state if the nodetool drain command has been issued on the node itself, or if the disk policy was set to disk_failure_policy: stop and the policy has been triggered by disk issues. A stopped state means that the HCD process is still running and still responds to JMX commands, but gossip (port 7000) and client connections (port 9042) are stopped.
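For reference, here is a minimal sketch of nodetool status output for a hypothetical two-node, single-token cluster in which 10.0.0.2 is dead. All addresses, loads, and host IDs are illustrative, and the exact columns vary by release:

nodetool status

Datacenter: dc1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address   Load        Tokens  Owns (effective)  Host ID                               Rack
UN  10.0.0.1  120.51 KiB  1       50.0%             11111111-2222-3333-4444-555555555555  rack1
DN  10.0.0.2  118.32 KiB  1       50.0%             66666666-7777-8888-9999-000000000000  rack1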
Replace a dead node in a single-token cluster
- Run nodetool status to verify the node’s status and state. In particular, for the node to be replaced:
  - HCD must not be running on the node; that is, the HCD Java process is stopped or the host itself is offline (a quick check is sketched after this step).
  - The node should be seen in a normal (N) state from other nodes. It should not be marked as joining (J) or leaving (L) the cluster.

  If a node’s status is D (Down), the state can be one of:
  - N - Normal
  - L - Leaving
  - J - Joining
  - M - Moving
  - S - Stopped

  If a node enters a stopped state, the state+status of the node is shown as DS on the node itself and DN from all the other nodes.
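One way to confirm that the HCD Java process is not running on the node to be replaced (the process name to match depends on how HCD was installed; cassandra is an assumption here):

pgrep -fl cassandra
# No output means no matching Java process is running.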
- Record the existing initial_token setting from the dead node’s cassandra.yaml.
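A minimal way to pull that value, assuming the configuration file lives at a path like <installation_location>/resources/cassandra/conf/cassandra.yaml (adjust for your installation):

grep initial_token <installation_location>/resources/cassandra/conf/cassandra.yaml
# e.g. initial_token: -9223372036854775808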
- If the dead node was a seed node, change the cluster’s seed node configuration on each node:
  - In the cassandra.yaml file for each node, remove the IP address of the dead node from the - seeds list in the seed_provider property (see the sketch after this step).
  - If the cluster needs a new seed node to replace the dead node, add the new node’s IP address to the - seeds list of the other nodes.

  Making every node a seed node is not recommended because of increased maintenance and reduced gossip performance. Gossip optimization is not critical, but it is recommended to use a small seed list (approximately three nodes per datacenter).
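A sketch of the relevant cassandra.yaml fragment, with illustrative addresses (SimpleSeedProvider is the stock Cassandra seed provider; your configuration may differ):

seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.0.0.1,10.0.0.3,10.0.0.4"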
- On an existing node, gather setting information for the new node from the cassandra.yaml file:
  - cluster_name
  - endpoint_snitch
  - Other non-default settings: use a diff tool to compare current settings with default settings, as sketched after this step.
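One way to spot non-default settings, assuming a pristine copy of the default configuration has been saved as cassandra.yaml.default (a hypothetical filename for illustration):

diff cassandra.yaml.default <installation_location>/resources/cassandra/conf/cassandra.yaml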
- Gather rack and datacenter information:
  - If the cluster uses the PropertyFileSnitch, record the rack and datacenter assignments listed in the cassandra-topology.properties file, or copy the file to the new node.
  - If the cluster uses the GossipingPropertyFileSnitch, the Amazon EC2 single-region snitch, the Amazon EC2 multi-region snitch, or the Google Cloud Platform snitch, record the rack and datacenter assignments in the dead node’s cassandra-rackdc.properties file (the file format is shown after this step).
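The cassandra-rackdc.properties format is a pair of key-value assignments; for example, a dead node assigned to datacenter DC1 and rack RAC1 would have (values are illustrative):

dc=DC1
rack=RAC1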
- Add values to the following properties in the cassandra.yaml file from the information gathered earlier (a combined sketch follows this step):
  - auto_bootstrap: if this setting exists and is set to false, set it to true. (This setting is not included in the default cassandra.yaml configuration file.)
  - If the new node is a seed node, make sure it is not listed in its own - seeds list.
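Putting the gathered values together, the new node’s cassandra.yaml might contain entries like these (the cluster name and snitch are illustrative):

cluster_name: 'MyCluster'
endpoint_snitch: GossipingPropertyFileSnitch
auto_bootstrap: true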
- Add the rack and datacenter configuration:
  - If the cluster uses the GossipingPropertyFileSnitch, the Amazon EC2 single-region snitch, the Amazon EC2 multi-region snitch, or the Google Cloud Platform snitch:
    - Add the dead node’s rack and datacenter assignments to the cassandra-rackdc.properties file on the replacement node. Do not remove the entry for the dead node’s IP address yet.
    - Delete the cassandra-topology.properties file.
  - If the cluster uses the PropertyFileSnitch:
    - Copy the cassandra-topology.properties file from an existing node, or add the settings to the local copy.
    - Edit the file to add an entry with the new node’s IP address and the dead node’s rack and datacenter assignments, as in the sketch after this step.
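A sketch of the cassandra-topology.properties entry, where 10.0.0.5 stands in for the replacement node’s IP address and DC1:RAC1 for the dead node’s datacenter and rack assignment (all values illustrative):

# <IP address>=<datacenter>:<rack>
10.0.0.5=DC1:RAC1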
- Start the new node with the required options.

  Package installations:
  - Add the following option to jvm-server.options (one way to append it is sketched after this step):

    -Dcassandra.replace_address_first_boot=<address_of_dead_node>

  - After the node bootstraps, remove replace_address_first_boot (if specified) from jvm-server.options.

  Tarball installations:
  - Add the following parameter to the startup command line:

    sudo bin/hcd cassandra -Dcassandra.replace_address_first_boot=<address_of_dead_node>
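For a package installation, the option must appear on its own line in jvm-server.options. One way to append it, assuming the file lives at a path like <installation_location>/resources/cassandra/conf/jvm-server.options (adjust for your installation):

echo '-Dcassandra.replace_address_first_boot=<address_of_dead_node>' | sudo tee -a <installation_location>/resources/cassandra/conf/jvm-server.options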
- Run nodetool status to verify that the new node has bootstrapped successfully.

  Tarball path:

  <installation_location>/resources/cassandra/bin
- In environments that use the PropertyFileSnitch, wait at least 72 hours, and then, on each node, remove the old node’s IP address from the cassandra-topology.properties file.

  This ensures that the old node’s information is removed from gossip. If it is removed from the property file too soon, problems may result. Use nodetool gossipinfo to check the gossip status; the node is still in gossip until the LEFT status disappears (an illustrative check follows).
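A sketch of what to look for in nodetool gossipinfo output for the old node. The address and field values are illustrative, and the exact payload varies by release:

nodetool gossipinfo
/10.0.0.2
  generation:1700000000
  heartbeat:2345
  STATUS:56:LEFT,<token>,<expiry_timestamp>

Once the old node’s entry with a LEFT status no longer appears, it is safe to remove its IP address from the file.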
The cassandra-rackdc.properties file does not contain IP information; therefore, this step is not required when using other snitches, such as GossipingPropertyFileSnitch.