Provision a database cluster
With Mission Control, you can provision a database cluster directly in the Mission Control UI or through the kubectl command-line tool.
Mission Control reconciles MissionControlCluster resources, defined either through the UI or the CLI, against any currently deployed database instances. These definitions describe the desired state of your database clusters. Using your definitions, Mission Control automates the process of provisioning and configuring resources across your control and data planes.
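For example, with kubectl access to the control plane, you can list the MissionControlCluster resources that Mission Control is reconciling. This is a minimal sketch; it assumes your kubeconfig points at the control plane cluster (mccluster is the short form of the resource name, as noted later on this page):
# List MissionControlCluster resources in all namespaces.
kubectl get mccluster --all-namespaces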
Prerequisites
UI simple mode
You need the following:
-
A Mission Control instance deployed in control plane mode on either bare-metal/VM or an existing Kubernetes cluster. For installation instructions, see the Mission Control installation overview.
-
A Mission Control project where you want to add the new database cluster. See Manage projects for details.
-
Access to the Mission Control UI.
UI expert mode
You need the following:
-
A Mission Control instance deployed in control plane mode on either bare-metal/VM or an existing Kubernetes cluster. For installation instructions, see the Mission Control installation overview.
-
A Mission Control project where you want to add the new database cluster. See Manage projects for details.
-
Working knowledge of the Kubernetes API and YAML configuration.
-
Access to the Mission Control UI.
CLI
You need the following:
-
Working knowledge of Kubernetes.
-
Access to a Kubernetes cluster.
-
kubectl installed and configured.
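To verify the CLI prerequisites, you can run a few standard kubectl checks. This is a sketch; the exact output depends on your environment:
# Confirm that kubectl is installed and can reach the cluster.
kubectl version
kubectl cluster-info
# Confirm that you can read cluster resources.
kubectl get nodes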
Provision and define a cluster
You can provision a database cluster using the Mission Control UI or the CLI. The UI offers simple mode and expert mode. DataStax recommends simple mode if you are new to Kubernetes or Mission Control; it lets you provision a cluster with a few clicks.
DataStax recommends expert mode if you are familiar with the Kubernetes API and YAML configuration; it gives you more granular control over the cluster definition.
DataStax recommends the CLI if you are familiar with the kubectl command-line tool.
For best practices, see Best practices for database clusters.
UI simple mode
To provision a database cluster using simple mode, do the following:
-
In the Mission Control UI, select a project, and then click Create Cluster.
-
Enter a meaningful, human-readable Cluster Name.
The Cluster Name can be any string of characters, including international and alphanumeric characters, punctuation, dashes, spaces, underscores, and uppercase or lowercase letters.
Cluster names are permanent. You can’t change them after you create the cluster. The name uniquely identifies the cluster across all projects and all environments to prevent a logical cluster from inadvertently joining another.
-
Select a cluster Type.
-
Enter a valid Version number.
-
Leave the Image field blank. It is for advanced users.
-
To define the Datacenter configuration, do the following:
-
Enter a meaningful, human-readable Datacenter Name.
Datacenter names are permanent. You can’t change them after you create the cluster. The datacenter name:
-
Must start with an alphanumeric character.
-
Must be a single word.
-
Can be any capitalization: upper, lower, or mixed-case.
-
Can include dashes and underscores.
-
Must not include spaces.
-
Optional: Add the configuration property and its corresponding value in the Add cassandra.yaml Setting sub-section if you require a non-standard Cassandra configuration.
-
Select the Data Plane Context where you want to deploy the cluster.
By default, Mission Control deploys a database cluster to the control plane. If a data plane is deployed on another Kubernetes cluster, you can choose to deploy the database cluster to that context. For more information, see the Planning guide.
-
Enter a Rack Name for the first rack, for example, rack1.
Rack names are permanent. You can’t change them after you create the cluster. The rack name:
-
Must start with an alphanumeric character.
-
Must be a single word.
-
Can be any capitalization: upper, lower, or mixed-case.
-
Can have dashes and underscores.
-
Must not include spaces.
Database pods, or nodes, are scheduled using node affinity.
-
Add the mission-control.datastax.com/role=database label to the rack configuration to ensure database pods are scheduled on database worker nodes only, not on platform worker nodes. A labeling example follows this procedure.
-
Label: mission-control.datastax.com/role
-
Value: database
DataStax recommends a minimum of three nodes for production clusters to support replication in a datacenter for high availability. With three replicas in a datacenter, this configuration can tolerate the failure of one node when using a strong consistency level of LOCAL_QUORUM.
To add another rack, select Add Rack and configure it as you did in the previous steps. Make sure that you add the node affinity label.
-
For Nodes Per Rack, allocate at least one database node to the rack.
-
Optional: To create a multi-datacenter cluster, select Add Datacenter and configure it as above.
-
For Resource Requests, enter the minimum available resources required. DataStax recommends that you allocate the following minimum amounts of memory:
-
4 GB of RAM for development environments and 8 GB for nodes with Vector Search enabled.
-
32 GB of RAM for production nodes and 64 GB for nodes with Vector Search enabled.
-
500 GB of storage for production nodes.
Select the Storage Class you configured for your environment. DataStax recommends a class backed by NVMe SSDs.
-
Enter the Storage Amount to allocate.
-
To add Security Settings, do the following:
-
Select the Require authentication to access cluster option.
-
Enter a Superuser Name.
-
Enter a Superuser Password.
-
Select the Enable internode encryption option.
The superuser role is required to provision other roles such as operators and service accounts.
DataStax recommends that you secure your clusters by enabling authentication and internode encryption, especially for production environments.
-
To configure Backup/Restore options, do the following:
-
Optional: Enter a Prefix to use as the name of the top-level folder in the Backup bucket.
If you don’t enter a value, Mission Control uses the cluster name.
-
Select your Backup Configuration.
-
Under Advanced Settings, for Heap Amount, enter an amount using the following as a guide:
System memory   Heap
8 GB            4 GB
32 GB           8-24 GB
64 GB           31 GB
-
Select Create Cluster.
-
Optional: Track the progress of the database pods:
kubectl get pods -n mission-control
Mission Control assigns database pod names prefixed by the cluster name. Each node completes a standard bootstrap sequence in approximately 2-3 minutes. Once operational and ready to accept client requests, each pod displays 2/2 containers as READY with a STATUS of Running.
-
Optional: Inspect pods that are not ready:
kubectl describe pod -n mission-control POD_NAME
Replace POD_NAME with the name of your pod.
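The labeling example mentioned in the rack configuration step: the mission-control.datastax.com/role=database label must be present on your database worker nodes for the node affinity rule to match. The following sketch applies and verifies the label with standard kubectl commands; NODE_NAME is a hypothetical placeholder, and some installation methods may label worker nodes for you:
# Label a worker node so that database pods can be scheduled on it.
kubectl label node NODE_NAME mission-control.datastax.com/role=database
# Show which nodes carry the role label.
kubectl get nodes -L mission-control.datastax.com/role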
UI expert mode
DataStax recommends expert mode only if you are familiar with the Kubernetes API and YAML configuration.
After you create or update a cluster in expert mode, you cannot edit it in simple mode.
For custom resource definitions (CRDs), see the Mission Control Custom Resource Definition (CRD) reference.
To provision a database cluster in Mission Control using expert mode, do the following:
-
In the Mission Control UI, select a project, and then click Create Cluster.
-
Click Expert. The Create Cluster page displays YAML configuration options.
-
Edit the YAML configuration to define the cluster.
As you make changes, autocomplete suggestions appear for some fields:
-
When you add a new array item (-), autocomplete provides intelligent suggestions based on the schema.
-
For arrays of objects, autocomplete suggests valid property names based on the object’s schema.
-
For arrays of enum strings, autocomplete suggests predefined values from the enum list.
-
Click Create Cluster.
When you use expert mode to copy your YAML definition and create a new cluster on another installation, you must omit the
CLI
Given that your data plane clusters have either the appropriate compute capacity or the capability to auto-scale, you can define a simple MissionControlCluster YAML file and invoke kubectl to create a running cluster.
Create a cluster by completing the following define and submit tasks. Review the automatic reconciliation workflow, and then monitor the reconciliation status with one kubectl command.
To define a new MissionControlCluster, start by creating a YAML file that defines the topology and configuration for the new cluster. This file is an instance of a MissionControlCluster Kubernetes Custom Resource (CR), and it describes the target end state for the cluster.
The following is a minimal example of a MissionControlCluster instance that creates a three-node database cluster. Each node requests 128 GiB of RAM and 1024 GiB of storage for data.
-
On a local machine, create a manifest file named database-cluster.yaml to describe the cluster topology.
-
Copy the following code into the file:
apiVersion: missioncontrol.datastax.com/v1beta2
kind: MissionControlCluster
metadata:
  name: CLUSTER_NAME
  namespace: PROJECT_SLUG
spec:
  encryption:
    internodeEncryption:
      enabled: true
  k8ssandra:
    auth: true
    cassandra:
      serverVersion: SERVER_VERSION
      serverType: SERVER_TYPE
      storageConfig:
        cassandraDataVolumeClaimSpec:
          storageClassName: default
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1024Gi
      config:
        cassandraYaml:
          dynamic_snitch: false
          server_encryption_options:
            internode_encryption: all
        jvmOptions:
          additionalJvmServerOptions:
          heapSize: 31Gi
      resources:
        limits:
          cpu: "32"
          memory: 128Gi
        requests:
          cpu: "28"
          memory: 128Gi
      datacenters:
        - metadata:
            name: dc1
          datacenterName: dc1
          stopped: false
          size: 3
          racks:
            - name: rack1
              nodeAffinityLabels:
                mission-control.datastax.com/role: database
            - name: rack2
              nodeAffinityLabels:
                mission-control.datastax.com/role: database
            - name: rack3
              nodeAffinityLabels:
                mission-control.datastax.com/role: database
Replace the following:
-
CLUSTER_NAME: The name of the cluster.
-
PROJECT_SLUG: The name of the project.
-
SERVER_VERSION: The version of the database.
-
SERVER_TYPE: The type of the database server: hcd, dse, or oss.
-
Change the storageClassName to a preferred value, matching the ones available in the installation, or leave the default value. To determine which storage classes are available in the environment, run:
kubectl get sc
-
Optional: Append the hostNetwork section at the same level as the config section in the database-cluster.yaml file if you use VMs with a Mission Control embedded Kubernetes runtime:
...
  networking:
    hostNetwork: true
  config:
...
This makes the deployed services directly available on the network.
-
Apply the manifest:
kubectl apply -f MANIFEST_FILENAME.yaml
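Optionally, you can validate the manifest before applying it. This sketch uses kubectl's server-side dry run, which checks the resource against the live API without persisting it:
# Validate the manifest without creating the resource.
kubectl apply --dry-run=server -f MANIFEST_FILENAME.yaml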
Replace MANIFEST_FILENAME.yaml with the name of your file.
Check that the pods representing the nodes appear:
kubectl get pods -n mission-control
Result
NAME                                                  READY   STATUS    RESTARTS   AGE
cass-operator-controller-manager-6487b8fb6c-xkjjx     1/1     Running   0          41m
k8ssandra-operator-55b44544d6-n8gs8                   1/1     Running   0          41m
mission-control-controller-manager-54c64975cd-nvcm7   1/1     Running   0          41m
test-dc1-default-sts-0                                0/2     Pending   0          7s
test-dc1-default-sts-1                                0/2     Pending   0          7s
test-dc1-default-sts-2                                0/2     Pending   0          7s
Each node must go through the standard bootstrapping process, which takes approximately 2-3 minutes. Upon completion, the nodes should display 2/2 under READY and Running under STATUS:
NAME                                                  READY   STATUS    RESTARTS   AGE
cass-operator-controller-manager-6487b8fb6c-xkjjx     1/1     Running   0          50m
k8ssandra-operator-55b44544d6-n8gs8                   1/1     Running   0          50m
mission-control-controller-manager-54c64975cd-nvcm7   1/1     Running   0          50m
test-dc1-default-sts-0                                2/2     Running   0          9m6s
test-dc1-default-sts-1                                2/2     Running   0          9m6s
test-dc1-default-sts-2                                2/2     Running   0          9m6s
If any pods list their STATUS as Pending, there might be resource availability issues. Run the following command to check the pod status:
kubectl describe pod POD_NAME
Replace POD_NAME with the name of your pod.
The cluster is operational when all of the nodes indicate 2/2 under READY and Running under STATUS.
Now that the database cluster is up and running, connect to it using the previously downloaded cqlsh binary with vector index support. Mission Control is secured by default and generates a unique superuser after disabling the default cassandra account.
account. -
Discover the username of this generated superuser by accessing the CLUSTER_NAME-superuser secret in the mission-control namespace of the Kubernetes cluster. Run the following command:
kubectl get secret/test-superuser -n mission-control -o jsonpath='{.data.username}' | base64 -d; echo
Result
test-superuser
-
Read the username’s password:
kubectl get secret/test-superuser -n mission-control -o jsonpath='{.data.password}' | base64 -d; echo
Result
PaSsw0rdFORsup3ruser
-
Connect to the cluster:
Embedded Kubernetes cluster
Because host networking is enabled, connect to any of the nodes through its Internet Protocol (IP) address or hostname using cqlsh with the correct superuser credentials. Port 9042 must be accessible from cqlsh:
cqlsh --username test-superuser --password SUPERUSER_PASSWORD ip-175-32-24-217
Replace SUPERUSER_PASSWORD with the password of the superuser.
Result
Connected to test at ip-175-32-24-217:9042
[cqlsh 6.0.0 | Cassandra 4.0.7-c556d537c707 | CQL spec 3.4.5 | Native protocol v5]
Use HELP for help.
test-superuser@cqlsh>
External Kubernetes cluster
-
Port forward the service that exposes the cluster’s CQL port:
kubectl port-forward svc/test-dc1-service 9042:9042 -n mission-control
-
Connect using cqlsh pointing at localhost:
cqlsh --username test-superuser --password SUPERUSER_PASSWORD 127.0.0.1
Replace SUPERUSER_PASSWORD with the password of the superuser.
Result
Connected to test at 127.0.0.1:9042.
[cqlsh 6.0.0 | Cassandra 4.0.7-c556d537c707 | CQL spec 3.4.5 | Native protocol v5]
Use HELP for help.
test-superuser@cqlsh>
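With a session open, you can exercise the three-replicas-per-datacenter recommendation from earlier on this page. The following sketch is illustrative: the app keyspace name is hypothetical, dc1 matches the example manifest, and the credentials are the ones retrieved above. It creates a keyspace with three replicas and runs a read at LOCAL_QUORUM, which succeeds as long as two of the three replicas respond:
# Create a keyspace with three replicas in dc1, then read at LOCAL_QUORUM.
cqlsh --username test-superuser --password SUPERUSER_PASSWORD 127.0.0.1 -e "
  CREATE KEYSPACE IF NOT EXISTS app
    WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3};
  CONSISTENCY LOCAL_QUORUM;
  SELECT keyspace_name FROM system_schema.keyspaces;"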
Automatic reconciliation steps for MissionControlCluster resources
The following steps describe the automated process for informational purposes only; no user intervention is required.
-
Cluster-level operators detect a new MissionControlCluster custom resource through the Kubernetes API within the control plane.
-
Cluster-level operators identify which control plane or data plane clusters should receive the datacenters defined within the MissionControlCluster. For example, if the east data plane cluster is specified, datacenter-level resources are created and reconciled there. If you omit the data plane identifier, resources are deployed within the control plane.
data plane cluster is specified so datacenter-level resources are created and reconciled there. If you omit the data plane identifier, resources are deployed within the control plane. -
Datacenter-level operators within the data plane clusters detect new datacenter-level custom resources (CRs) through the Kubernetes API.
-
Datacenter-level operators generate and submit rack-level resources (StatefulSets) to their local Kubernetes API.
-
Built-in Kubernetes reconciliation loops detect the new rack-level resources and begin creating pods and storage resources representing the underlying HCD, DSE, or Cassandra nodes.
-
Status of resource creation is reported up to operators at the datacenter and cluster levels.
-
When all pods are up and running, the cluster-level operator signals the datacenter-level operators to begin bootstrap operations of the database software within the created and running pods.
-
As pods come online, their status is escalated, and operations continue until all nodes are up and running with services discoverable through the Kubernetes API.
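If you want to observe this workflow as it happens, you can list the intermediate resources while the operators create them. This is a sketch; it assumes the datacenter-level custom resources are the CassandraDatacenter objects managed by cass-operator and that the cluster is deployed in the mission-control namespace:
# Datacenter-level custom resources created during reconciliation.
kubectl get cassandradatacenters -n mission-control
# Rack-level StatefulSets generated by the datacenter-level operators.
kubectl get statefulsets -n mission-control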
Monitor bootstrap progress
Monitor the progress of the bootstrap to determine completion status or note any errors.
After you submit the MissionControlCluster custom resource (CR), the operator modifies the resource within the Kubernetes API by adding a status field to the top level of the resource. This status field provides valuable insight into the health of the MissionControlCluster as one or more operators detect definition changes. The status field indicates everything from the reconciliation phase to errors encountered while attempting to create storage.
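For a scripted check, you can read the Ready condition directly from that status field. A sketch; adjust the namespace and cluster name to match your environment:
# Print the Ready condition from the cluster's status field.
kubectl get mccluster CLUSTER_NAME -n PROJECT_SLUG \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'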
-
Run the following command to retrieve the descriptive status of your MissionControlCluster object:
kubectl describe mccluster/CLUSTER_NAME
Replace CLUSTER_NAME with the name of your cluster.
You can specify MissionControlCluster, missioncontrolcluster, or the short form mccluster. Additionally, all of these names can be plural.
Sample results
Name:         CLUSTER_NAME
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  missioncontrol.datastax.com/v1beta2
Kind:         MissionControlCluster
Metadata:
  Creation Timestamp:  2023-10-30T11:09:33Z
  Finalizers:
    missioncontrol.datastax.com/finalizer
  Generation:        1
  Resource Version:  105388250
  UID:               57e956f8-1f87-422f-a7f8-b9ec87b956c4
Spec:
  Create Issuer:  true
  Encryption:
    Internode Encryption:
      Certs:
        Cert Template:
          Issuer Ref:
            Name:
          Secret Name:
        Create Certs:  true
      Enabled:         true
  k8ssandra:
    Auth:  true
    Cassandra:
      Datacenters:
        Dse Workloads:
        Metadata:
          Name:  dc1
        Pods:
        Services:
          Additional Seed Service:
          All Pods Service:
          Dc Service:
          Node Port Service:
          Seed Service:
        Per Node Config Init Container Image:  <name>/yq:4
        Per Node Config Map Ref:
        Racks:
          Name:   rack1
        Size:     3
        Stopped:  false
      Metadata:
        Pods:
        Services:
          Additional Seed Service:
          All Pods Service:
          Dc Service:
          Node Port Service:
          Seed Service:
      Per Node Config Init Container Image:  <name>/yq:4
      Resources:
        Requests:
          Memory:  32Gi
      Server Type:     dse
      Server Version:  6.9.2
      Storage Config:
        Cassandra Data Volume Claim Spec:
          Access Modes:
            ReadWriteOnce
          Resources:
            Requests:
              Storage:         5Gi
          Storage Class Name:  standard
      Superuser Secret Ref:
        Name:  my-cluster-superuser
  Secrets Provider:  internal
Status:
  Conditions:
    Last Transition Time:  2023-10-30T14:08:08Z
    Message:
    Reason:                UpdatingIssuers
    Status:                False
    Type:                  UpdatingIssuers
    Last Transition Time:  2023-10-30T14:08:08Z
    Message:
    Reason:                UpdatingCertificates
    Status:                False
    Type:                  UpdatingCertificates
    Last Transition Time:  2023-10-30T14:08:08Z
    Message:
    Reason:                UpdatingReplicatedSecrets
    Status:                False
    Type:                  UpdatingReplicatedSecrets
    Last Transition Time:  2023-10-30T11:16:38Z
    Message:
    Reason:                UpdatingCluster
    Status:                False
    Type:                  UpdatingCluster
    Last Transition Time:  2023-10-30T14:08:08Z
    Message:               Ready
    Reason:                Ready
    Status:                True
    Type:                  Ready
Events:  <none>
-
Access operator logs to discover more detail:
kubectl logs -n mission-control POD_NAME
Replace POD_NAME with the name of your pod, for example: mission-control-controller.
The StatefulSet controller is one of the core Kubernetes controllers that create the pods. The number of pods per StatefulSet is calculated by dividing the number of nodes in the datacenter by the number of racks.
For example, a three-node cluster with three racks has one pod per StatefulSet, and a nine-node cluster with three racks has three pods per StatefulSet.
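You can confirm this arithmetic on a running cluster by listing the rack-level StatefulSets with their replica counts. A sketch, using the mission-control namespace from the earlier examples; the three-rack example manifest on this page yields three StatefulSets with one replica each:
# Show each rack's StatefulSet with desired and ready replica counts.
kubectl get statefulsets -n mission-control \
  -o custom-columns=NAME:.metadata.name,REPLICAS:.spec.replicas,READY:.status.readyReplicas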
Next steps
-
Explore the MissionControlCluster reference documentation for a complete listing of all fields and values.
-
Use the CQL shell to connect to the cluster.
-
Use the Data API to add data to the cluster.
-
Terminate the newly created cluster.