Upgrade Cassandra or DSE clusters in Kubernetes

Steps to upgrade Cassandra or DSE clusters in Kubernetes.

To upgrade Cassandra or DSE clusters in Kubernetes, modify and apply the revised configuration.

Replace prior cluster configuration

If you previously created a CassandraDatacenter configuration and want to define a different one in the Kubernetes cluster where Cass Operator is running and the storage class is defined, first remove the existing CassandraDatacenter configuration. Example:
kubectl -n cass-operator delete -f https://raw.githubusercontent.com/datastax/cass-operator/v1.6.0/operator/example-cassdc-yaml/dse-6.8.x/example-cassdc-minimal.yaml
cassandradatacenter.cassandra.datastax.com "dc1" deleted
Next, create the new CassandraDatacenter configuration. Notice that, in addition to the release number, the GitHub repository changed (from datastax to k8ssandra) starting with Cass Operator v1.7.0. Example:
kubectl -n cass-operator create -f https://raw.githubusercontent.com/k8ssandra/cass-operator/v1.7.1/operator/example-cassdc-yaml/dse-6.8.x/example-cassdc-three-rack-three-node.yaml
cassandradatacenter.cassandra.datastax.com/dc1 created
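To confirm the new datacenter comes up, you can watch the pods and then inspect the CassandraDatacenter status. This is a sketch using the namespace and names from the examples above; the datacenter label selector is an assumption based on the labels Cass Operator applies to its pods:

```shell
# Watch the DSE pods for datacenter dc1 come up (Ctrl+C to stop watching)
kubectl -n cass-operator get pods -l cassandra.datastax.com/datacenter=dc1 -w

# Once the pods are Running, inspect the CassandraDatacenter resource status
kubectl -n cass-operator get cassdc dc1 -o yaml
```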

Option to use forceUpgradeRacks

For scenarios where a pod ultimately does not start, edit the CassandraDatacenter configuration YAML to add a forceUpgradeRacks entry, which restarts the rack hosting the pod.

In this example, a three-rack, three-node CassandraDatacenter was created with the following command and its referenced sample configuration YAML:
kubectl -n cass-operator create -f https://raw.githubusercontent.com/k8ssandra/cass-operator/v1.7.1/operator/example-cassdc-yaml/dse-6.8.x/example-cassdc-three-rack-three-node.yaml

Assume a scenario where the configured cluster2-dc1-rack1-sts-0 pod does not start, as reported in the Google Cloud Console under the Kubernetes Engine section, on the Workloads tab for your cluster.

On a local machine where you've already established a connection with a Kubernetes project and cluster, use a command to invoke an editor for the target configuration. Example for the cass-operator namespace and the dc1 CassandraDatacenter (cassdc is the short name for the CassandraDatacenter resource type):
kubectl -n cass-operator edit cassdc dc1
In the editing session, add a forceUpgradeRacks entry under spec to identify the rack hosting the pod that would not start. In this case, the three racks were defined previously when the datacenter was created via example-cassdc-three-rack-three-node.yaml. This editing example adds the two-line forceUpgradeRacks entry above the existing racks definition:
   forceUpgradeRacks:
   - rack1
   racks:
   - name: rack1
   - name: rack2
   - name: rack3
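As an alternative to an interactive editing session, the same change can be applied non-interactively with kubectl patch. This is a sketch using the rack name from the example above:

```shell
# Add a forceUpgradeRacks entry for rack1 without opening an editor
kubectl -n cass-operator patch cassdc dc1 --type merge \
  -p '{"spec":{"forceUpgradeRacks":["rack1"]}}'
```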
When you save the edited cassdc configuration, Cass Operator directly applies the requested forceUpgradeRacks action to the target rack in the cluster. Upon completion, Cass Operator removes the forceUpgradeRacks entry from the cassdc YAML. After allowing time for the pods to restart, check the pod status again to see whether upgrading the rack solved the issue. Example:
kubectl -n cass-operator get pod
NAME                             READY   STATUS    RESTARTS   AGE
cass-operator-78c9999797-gdnwd   1/1     Running   0          18h
cluster2-dc1-rack1-sts-0         2/2     Running   0          4m23s
cluster2-dc1-rack2-sts-0         2/2     Running   0          4m23s
cluster2-dc1-rack3-sts-0         2/2     Running   0          4m23s
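Cass Operator also records overall reconciliation progress in the CassandraDatacenter status, so checking that field is another way to confirm the rack restart completed. A sketch, assuming the names from the example above:

```shell
# Reports "Ready" once Cass Operator has finished reconciling the datacenter,
# or "Updating" while changes are still being rolled out
kubectl -n cass-operator get cassdc dc1 \
  -o jsonpath='{.status.cassandraOperatorProgress}'
```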

What's next?

If you need to uninstall Cass Operator and related resources from Kubernetes, see the next topic.