Clean up nodes
The cleanup operation runs nodetool cleanup for either all or specific keyspaces on all nodes in the specified datacenter.
Create the CassandraTask custom resource that defines a cleanup operation in the Data Plane Kubernetes cluster where the target CassandraDatacenter resource is deployed.
Mission Control detects the cleanup CassandraTask custom resource, iterates one rack at a time, and triggers and monitors cleanup operations one pod at a time. Mission Control reports task progress and status.
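Before you begin, you can confirm that the task API is available on the Data Plane cluster. This check assumes the standard CRD name derived from the control.k8ssandra.io API group used later in this guide:

```bash
# Returns the CRD if the CassandraTask API is installed on this cluster
kubectl get crd cassandratasks.control.k8ssandra.io
```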
DataStax Enterprise (DSE) does not automatically remove data from nodes that lose part of their partition range to a newly added node.
After adding a node, run nodetool cleanup on the source node and on neighboring nodes that shared the same sub-range. Otherwise, the database still includes the old data when it rebalances the load on that node.
nodetool cleanup temporarily increases disk space use, proportional to the size of the largest SSTable, and triggers disk I/O.
Failure to run nodetool cleanup after adding a node can result in data inconsistencies, including the resurrection of previously deleted data.
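For reference, the cleanup task runs the equivalent of the following nodetool command on each node; it is shown here in case you need to run it manually (my_keyspace is an illustrative keyspace name):

```bash
# Removes data for partition ranges that no longer belong to this node;
# omit the keyspace argument to clean up all keyspaces
nodetool cleanup my_keyspace
```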
Performance impact
This nodetool cleanup operation forces compaction of all SSTables on a node, which removes data that is no longer replicated to that node. As with all compactions, this increases disk operations and can add latency. Depending on the amount of data present on the node and the query workload, you may want to schedule this cleanup operation during off-peak hours.
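Because cleanup runs as a compaction, one way to gauge its impact on a node is to watch the compaction queue with nodetool (a standard command, not specific to Mission Control):

```bash
# Lists active and pending compactions, including cleanup compactions
nodetool compactionstats
```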
Prerequisites
- A prepared environment on either bare-metal/VM or an existing Kubernetes cluster.
Clean up nodes in a cluster
Decide to clean up one, multiple, or all nodes.
Choose User Interface (UI) or Command Line Interface (CLI) steps.
- UI - one node
- UI - all nodes
- UI - multiple nodes
- CLI
- In the Home Clusters dialog, click the target cluster namespace.
- In the Nodes section of the Overview tab, click the overflow menu icon (3 dots) on the datacenter row of your target node.
- Click Cleanup from the list.
To review cleanup operation notifications, see Monitor cleanup activity status.
- In the Home Clusters dialog, click the target cluster namespace.
- In the Nodes section of the Overview tab, click the Name checkbox. This selects all node row checkboxes.
- Click Bulk Actions.
- In the Bulk Actions dialog:
  - Make sure that the Action type is Cleanup.
  - Make sure that your target datacenter is selected.
  - Update the Rack field as needed.
- Click Run.
- To review cleanup operation notifications, see Monitor cleanup activity status.
- In the Nodes section of the Overview tab, click the row checkbox for each of your target nodes.
- Click Bulk Actions.
- In the Bulk Actions dialog:
  - Make sure that the Action type is Cleanup.
  - Make sure that your target datacenter is selected.
  - Update the Rack field as needed.
- Click Run.
- To review cleanup operation notifications, see Monitor cleanup activity status.
With an existing Kubernetes cluster, this example shows how to clean up one datacenter that has nine (9) nodes (pods) distributed across three (3) racks.
- Modify the cleanup-dc1.cassandratask.yaml file, using this sample as a guide:

```yaml
apiVersion: control.k8ssandra.io/v1alpha1
kind: CassandraTask
metadata:
  name: cleanup-dc1
spec:
  datacenter:
    name: dc1
    namespace: demo
  jobs:
    - name: cleanup-dc1
      command: cleanup
      args:
        keyspace_name: my_keyspace
```
Key options:
- metadata.name: a unique identifier within the Kubernetes namespace where the task is submitted. While the name can be any value, consider including the cluster name to prevent collisions with other operations.
- spec.datacenter: a unique namespace and name combination used to determine which datacenter to target with this operation.
- spec.jobs[0].command: MUST be cleanup for this operation.
- Optional: spec.jobs[0].args.keyspace_name: restricts this operation to a particular keyspace. Omitting this value results in ALL keyspaces being cleaned up; see the variant after this list.

Although the jobs parameter is an array, only one entry is permitted. Specifying more than one job results in the task automatically failing.
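For example, here is a minimal variant of the sample task that omits keyspace_name, so cleanup runs on all keyspaces. The task name is hypothetical; the datacenter values are reused from the sample above:

```yaml
apiVersion: control.k8ssandra.io/v1alpha1
kind: CassandraTask
metadata:
  name: cleanup-dc1-all-keyspaces
spec:
  datacenter:
    name: dc1
    namespace: demo
  jobs:
    - name: cleanup-dc1-all-keyspaces
      command: cleanup
```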
- Submit the cleanup CassandraTask custom resource on the Kubernetes cluster where the specified datacenter is deployed:

```bash
kubectl apply -f cleanup-dc1.cassandratask.yaml
```
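To confirm that the task was created, you can list it (assuming it was submitted to your current namespace):

```bash
# The task object appears immediately; status fields fill in as it runs
kubectl get cassandratask cleanup-dc1
```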
The DC-level operators perform a rolling cleanup operation, one node at a time. The order is determined lexicographically (also known as dictionary order), starting with rack names and then continuing with node (pod) names.
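For example, assuming the cass-operator pod naming convention <clusterName>-<dcName>-<rackName>-sts-<ordinal> and a hypothetical cluster named demo, the nine pods would be processed in this order:

```
demo-dc1-rack1-sts-0, demo-dc1-rack1-sts-1, demo-dc1-rack1-sts-2,
demo-dc1-rack2-sts-0, demo-dc1-rack2-sts-1, demo-dc1-rack2-sts-2,
demo-dc1-rack3-sts-0, demo-dc1-rack3-sts-1, demo-dc1-rack3-sts-2
```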
If the cleanup operation begins while a node is being terminated and recreated, for any reason, the operation fails. In that event, the DC-level operators retry the cleanup operation.
To review cleanup operation notifications, see Monitor cleanup activity status.
Monitor cleanup activity status
- UI
- CLI
- In the main navigation, click Activities.
- See Status notifications regarding the progress of the cleanup activity.
A status of SUCCESS indicates that the cleanup operation completed without issue. Timestamps are recorded for the Start and End of the cleanup activity.
- Monitor the cleanup operation progress with this kubectl command:

```bash
kubectl get cassandratask cleanup-dc1 -o yaml | yq .status
```

Sample results:

```yaml
...
status:
  completionTime: "2022-10-13T21:06:55Z"
  conditions:
  - lastTransitionTime: "2022-10-13T21:05:23Z"
    status: "True"
    type: Running
  - lastTransitionTime: "2022-10-13T21:06:55Z"
    status: "True"
    type: Complete
  startTime: "2022-10-13T21:05:23Z"
  succeeded: 9
```
The DC-level operators set the startTime field prior to starting the cleanup operation. They update the completionTime field when the cleanup operation is completed.
The sample output indicates that the task is completed, with the type: Complete status condition set to True. The succeeded: 9 field indicates that nine (9) nodes (or pods) completed the requested task successfully. A failed field tracks a running count of pods that failed the cleanup operation.
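If you only need a single field from the status, kubectl's built-in JSONPath output avoids the yq dependency:

```bash
# Prints the number of pods that completed the cleanup successfully
kubectl get cassandratask cleanup-dc1 -o jsonpath='{.status.succeeded}'
```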