Clean up nodes
The cleanup operation runs nodetool cleanup for either all or specific keyspaces on all nodes in the specified datacenter.
Create the CassandraTask custom resource that defines a cleanup operation in the data plane Kubernetes cluster where the target CassandraDatacenter resource is deployed.
Mission Control detects the cleanup CassandraTask custom resource, iterates one rack at a time, and triggers and monitors cleanup operations one pod at a time. Mission Control reports task progress and status.
DataStax Enterprise (DSE) does not automatically remove data from nodes that lose part of their partition range to a newly added node.
After adding a node, run nodetool cleanup on the source node and on neighboring nodes that shared the same sub-range.
Otherwise, the old data still counts against the load on that node.
nodetool cleanup temporarily increases disk space use proportional to the size of the largest SSTable and triggers disk I/O.
Failure to run nodetool cleanup after adding a node can result in data inconsistencies, including the resurrection of previously deleted data.
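For orientation, this is roughly what the underlying per-node operation looks like when run by hand; the data directory path, keyspace name, and -j job count are illustrative assumptions, not values from this guide:

```bash
# Confirm free space exceeds the size of the largest SSTable before cleanup.
# The data path is the conventional default; adjust for your installation.
du -h /var/lib/cassandra/data/*/*/*-Data.db | sort -h | tail -n 5
df -h /var/lib/cassandra/data

# Direct, per-node form of the operation that Mission Control automates.
# -j caps the number of parallel cleanup jobs; my_keyspace is a placeholder.
nodetool cleanup -j 2 my_keyspace
```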
Performance impact
The nodetool cleanup operation compacts all SSTables on a node, removing data that is no longer replicated to that node.
As with all compactions, this increases disk operations and can introduce latency.
Depending on the amount of data present on the node and the query workload, you might want to schedule this cleanup operation during off-peak hours.
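If off-peak scheduling is not practical, you can soften the impact by temporarily capping compaction throughput on the node. A minimal sketch using standard nodetool commands; the MB/s values are illustrative assumptions, not recommendations:

```bash
# Record the current cap, then lower it for the duration of the cleanup.
nodetool getcompactionthroughput
nodetool setcompactionthroughput 16

# After the cleanup completes, restore the previous value (for example, 64).
nodetool setcompactionthroughput 64
```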
Prerequisites
- A prepared environment on either bare-metal/VM or an existing Kubernetes cluster.
- If your cluster has client-to-node encryption enabled, configure nodetool to use TLS before running cleanup operations. For more information, see Configure and use nodetool with TLS.
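As a rough sketch of what that looks like in practice (the properties-file path is standard Cassandra tooling behavior, not something this guide specifies), nodetool commands then carry the --ssl flag:

```bash
# With JMX TLS parameters in place (nodetool reads
# ~/.cassandra/nodetool-ssl.properties when --ssl is passed),
# verify connectivity before running any cleanup:
nodetool --ssl status
```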
Clean up nodes in a cluster
Decide whether to clean up one, multiple, or all nodes, and then choose the UI or CLI steps.
- UI - one node
- UI - all nodes
- UI - multiple nodes
- CLI
- In the Home Clusters dialog, click the target cluster namespace.
- In the Nodes section of the Overview tab, click More Options on the row of your target node.
- Select Cleanup from the list.
To review cleanup operation notifications, see Monitor cleanup activity status.
- In the Home Clusters dialog, click the target cluster namespace.
- In the Nodes section of the Overview tab, click the Name checkbox. This selects all node row checkboxes.
- Click Bulk Actions.
- In the Bulk Actions dialog:
  - Make sure that the Action type is Cleanup.
  - Make sure that your target datacenter is selected.
  - Update the Rack field as needed.
  - Click Run.
- To review cleanup operation notifications, see Monitor cleanup activity status.
- In the Nodes section of the Overview tab, click the row checkbox for each of your target nodes.
- Click Bulk Actions.
- In the Bulk Actions dialog:
  - Make sure that the Action type is Cleanup.
  - Make sure that your target datacenter is selected.
  - Update the Rack field as needed.
  - Click Run.
- To review cleanup operation notifications, see Monitor cleanup activity status.
With an existing Kubernetes cluster, this example shows how to clean up one datacenter that has nine nodes (pods) distributed across three racks.
- Modify the cleanup-dc1.cassandratask.yaml file, using this sample as a guide:

```yaml
apiVersion: control.k8ssandra.io/v1alpha1
kind: CassandraTask
metadata:
  name: cleanup-dc1
spec:
  datacenter:
    name: dc1
    namespace: demo
  jobs:
    - name: cleanup-dc1
      command: cleanup
      args:
        keyspace_name: my_keyspace
```

Key options:
- metadata.name: a unique identifier within the Kubernetes namespace where the task is submitted. While the name can be any value, consider including the cluster name to prevent collisions with other tasks.
- spec.datacenter: a unique namespace and name combination that determines which datacenter this operation targets.
- spec.jobs[0].command: must be cleanup for this operation.
- Optional: spec.jobs[0].args.keyspace_name: restricts this operation to a particular keyspace. If you omit this value, all keyspaces are cleaned up.

Although the jobs parameter is an array, only one entry is permitted. Specifying more than one job causes the task to fail; to clean up multiple specific keyspaces, submit separate tasks, as sketched below.
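Because each task accepts only one job, cleaning up several specific keyspaces means submitting one task per keyspace. A minimal sketch that reuses the fields above; the task and keyspace names are hypothetical:

```yaml
# Hypothetical second task: same schema, but a unique metadata.name
# and a different target keyspace.
apiVersion: control.k8ssandra.io/v1alpha1
kind: CassandraTask
metadata:
  name: cleanup-dc1-other-keyspace
spec:
  datacenter:
    name: dc1
    namespace: demo
  jobs:
    - name: cleanup-dc1-other-keyspace
      command: cleanup
      args:
        keyspace_name: other_keyspace
```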
- Submit the cleanup CassandraTask custom resource on the Kubernetes cluster where the specified datacenter is deployed:

```bash
kubectl apply -f cleanup-dc1.cassandratask.yaml
```

The DC-level operators perform a rolling cleanup operation, one node at a time. The order is determined lexicographically (also known as dictionary order), starting with rack names and then continuing with node (pod) names.
If the cleanup operation discovers a node that is being terminated and recreated, regardless of the reason, the operation fails. In that event, the DC-level operators retry the cleanup operation.
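To follow the rolling operation as it moves through the racks, one option is to watch the datacenter's pods. This sketch assumes the demo namespace from the example manifest and the datacenter label that cass-operator typically applies to pods, which may differ in your deployment:

```bash
# Watch the datacenter's pods while the task rolls through them one at a time.
kubectl get pods -n demo -l cassandra.datastax.com/datacenter=dc1 --watch
```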
To review cleanup operation notifications, see Monitor cleanup activity status.
Monitor cleanup activity status
- UI
- CLI
- In the main navigation, click Activities.
- See Status notifications regarding the progress of the cleanup activity.

A status of SUCCESS indicates the cleanup operation completed without issue. Timestamps are recorded for the Start and End of the cleanup activity.
- Monitor the cleanup operation progress with this kubectl command:

```bash
kubectl get cassandratask cleanup-dc1 -o yaml | yq .status
```

Result:

```yaml
completionTime: "2022-10-13T21:06:55Z"
conditions:
- lastTransitionTime: "2022-10-13T21:05:23Z"
  status: "True"
  type: Running
- lastTransitionTime: "2022-10-13T21:06:55Z"
  status: "True"
  type: Complete
startTime: "2022-10-13T21:05:23Z"
succeeded: 9
```

The DC-level operators set the startTime field prior to starting the cleanup operation. They update the completionTime field when the cleanup operation is completed.

The sample output indicates that the task is completed, with the type: Complete status condition set to True. The succeeded: 9 field indicates that nine (9) nodes (pods) completed the requested task successfully. A failed field tracks a running count of pods that failed the cleanup operation.
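To block in a script until the task finishes, one approach is kubectl wait on the same Complete condition; the namespace and timeout here are illustrative assumptions:

```bash
# Return once the CassandraTask reports the Complete condition with status "True".
kubectl wait --for=condition=Complete cassandratask/cleanup-dc1 -n demo --timeout=1h
```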