Add Kubernetes Nodes
DataStax Mission Control is currently in Public Preview. DataStax Mission Control is not intended for production use, has not been certified for production workloads, and might contain bugs and other functional issues. There is no guarantee that DataStax Mission Control will ever become generally available. DataStax Mission Control is provided on an “AS IS” basis, without warranty or indemnity of any kind. If you are interested in trying out DataStax Mission Control, please join the Public Preview.
Modify the MissionControlCluster manifest (object) specification and submit that change with the kubectl command to add one or more nodes to a datacenter in a Kubernetes cluster.
Prerequisites
- The kubectl CLI tool.
- A kubeconfig file or context pointing to a Control Plane Kubernetes cluster.
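Before applying any changes, it can help to confirm which cluster kubectl currently targets. The context name below is a placeholder for your own Control Plane context, not a value defined by this guide:

kubectl config get-contexts
kubectl config use-context <control-plane-context>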
Example
An existing MissionControlCluster manifest specifying one datacenter with three DSE nodes distributed equally across three racks.
Procedure
- Sample MissionControlCluster manifest named example.missioncontrolcluster.yaml that was used to initially create the datacenter (dc1):

apiVersion: missioncontrol.datastax.com/v1beta1
kind: MissionControlCluster
metadata:
  name: demo
spec:
  k8ssandra:
    cassandra:
      serverVersion: 6.8.26
      storageConfig:
        cassandraDataVolumeClaimSpec:
          storageClassName: premium-rwo
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 5Gi
      datacenters:
        - metadata:
            name: dc1
          k8sContext: east
          size: 3
          racks:
            - name: rack1
              nodeAffinityLabels:
                topology.kubernetes.io/zone: us-east1-c
            - name: rack2
              nodeAffinityLabels:
                topology.kubernetes.io/zone: us-east1-b
            - name: rack3
              nodeAffinityLabels:
                topology.kubernetes.io/zone: us-east1-d
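If the original manifest file is no longer at hand, you can usually export the live object from the Control Plane cluster and edit that copy instead. This assumes the MissionControlCluster CRD is installed under its default resource name and that the object is named demo, as in the manifest above:

kubectl get missioncontrolcluster demo -o yaml > example.missioncontrolcluster.yaml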
- Modify the datacenters.size specification from 3 (1 node per rack) to 6 (2 nodes per rack):

apiVersion: missioncontrol.datastax.com/v1beta1
kind: MissionControlCluster
metadata:
  name: demo
spec:
  ...
  datacenters:
    - metadata:
        name: dc1
      k8sContext: east
      size: 6
      racks:
        ...
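As an alternative to editing the file and re-applying it in the next step, a targeted patch can change the same field in place. This is only a sketch; it assumes dc1 is the first (index 0) entry in the datacenters list:

kubectl patch missioncontrolcluster demo --type json \
  -p '[{"op": "replace", "path": "/spec/k8ssandra/cassandra/datacenters/0/size", "value": 6}]'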
- Submit this change in the Control Plane cluster:

kubectl apply -f example.missioncontrolcluster.yaml
Three additional nodes (pods) deploy in parallel as the MissionControlCluster object increases in size from three to six nodes. Each node, however, starts serially in the order of the rack definitions.

At any given time, the number of started DSE nodes in a rack cannot differ from the number of started nodes in any other rack by more than one.
- Monitor the status of the DSE nodes being created:

kubectl get pods -l "cassandra.datastax.com/cluster"=demo

Sample output:

NAME                   READY   STATUS    RESTARTS   AGE
demo-dc1-rack1-sts-0   2/2     Running   0          67m
demo-dc1-rack1-sts-1   1/2     Running   0          110s
demo-dc1-rack2-sts-0   2/2     Running   0          67m
demo-dc1-rack2-sts-1   1/2     Running   0          110s
demo-dc1-rack3-sts-0   2/2     Running   0          67m
demo-dc1-rack3-sts-1   1/2     Running   0          110s
The -l flag adds a label selector to filter the results. Every DSE pod has the cassandra.datastax.com/cluster label. There are six pods, but only the initial three are fully ready. This is expected because the results were captured mid-operation.
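To narrow the view further or follow the rollout live, selectors can be combined with the --watch (-w) flag. The cassandra.datastax.com/datacenter label used here is assumed to be set on DSE pods alongside the cluster label:

kubectl get pods -w -l "cassandra.datastax.com/cluster=demo,cassandra.datastax.com/datacenter=dc1"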
- Monitor the status of the CassandraDatacenter with this command:
kubectl get cassandradatacenter dc1 -o yaml
Sample output:
status:
  cassandraOperatorProgress: Updating
  conditions:
    - lastTransitionTime: "2022-10-19T20:24:40Z"
      message: ""
      reason: ""
      status: "True"
      type: Healthy
    - lastTransitionTime: "2022-10-19T20:24:41Z"
      message: ""
      reason: ""
      status: "False"
      type: Stopped
    - lastTransitionTime: "2022-10-19T20:24:41Z"
      message: ""
      reason: ""
      status: "False"
      type: ReplacingNodes
    - lastTransitionTime: "2022-10-19T20:24:41Z"
      message: ""
      reason: ""
      status: "False"
      type: Updating
    - lastTransitionTime: "2022-10-19T20:24:41Z"
      message: ""
      reason: ""
      status: "False"
      type: RollingRestart
    - lastTransitionTime: "2022-10-19T20:24:41Z"
      message: ""
      reason: ""
      status: "False"
      type: Resuming
    - lastTransitionTime: "2022-10-19T20:24:41Z"
      message: ""
      reason: ""
      status: "False"
      type: ScalingDown
    - lastTransitionTime: "2022-10-19T20:24:41Z"
      message: ""
      reason: ""
      status: "True"
      type: Valid
    - lastTransitionTime: "2022-10-19T20:24:41Z"
      message: ""
      reason: ""
      status: "True"
      type: Initialized
    - lastTransitionTime: "2022-10-19T20:24:41Z"
      message: ""
      reason: ""
      status: "True"
      type: Ready
    - lastTransitionTime: "2022-10-19T21:24:34Z"
      message: ""
      reason: ""
      status: "True"
      type: ScalingUp
  lastServerNodeStarted: "2022-10-19T21:28:51Z"
  nodeStatuses:
    demo-dc1-rack1-sts-0:
      hostID: 2025d318-3fcc-4753-990b-3f9c388ba18a
    demo-dc1-rack1-sts-1:
      hostID: 33a0fc01-5947-471f-97a2-61237767d583
    demo-dc1-rack2-sts-0:
      hostID: 50748fb8-da1f-4add-b635-e80e282dc09b
    demo-dc1-rack2-sts-1:
      hostID: eb899ffd-0726-4fb4-bea7-c9d84d555339
    demo-dc1-rack3-sts-0:
      hostID: db86cba7-b014-40a2-b3f2-6eea21919a25
  observedGeneration: 1
  quietPeriod: "2022-10-19T20:24:47Z"
  superUserUpserted: "2022-10-19T20:24:42Z"
  usersUpserted: "2022-10-19T20:24:42Z"
The ScalingUp condition has status "True", indicating that the scale-up operation is in progress. Cass Operator updates it to "False" when the operation is complete.
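To check just this condition without scanning the full status block, a jsonpath query can extract it; this is a sketch of one way to do so:

kubectl get cassandradatacenter dc1 -o jsonpath='{.status.conditions[?(@.type=="ScalingUp")].status}'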
After all DSE nodes reach the ready state, the DataStax Mission Control operators create a CassandraTask to run a cleanup operation across all nodes. Upon completion of the cleanup operation, the ScalingUp condition status is set to "False".
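If you need to block a script until the scale-up finishes, one option is to wait on that condition and then inspect the generated cleanup tasks. This sketch assumes the CassandraDatacenter and CassandraTask resources are reachable from the current context:

kubectl wait cassandradatacenter/dc1 --for=condition=ScalingUp=false --timeout=30m
kubectl get cassandratasks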