Create a DataStax Enterprise (DSE) database cluster
Creating a DataStax Enterprise (DSE) cluster on Mission Control is a straightforward task. Provided that your Data Plane clusters have either the appropriate compute capacity or the capability to auto-scale, you define a MissionControlCluster YAML file and invoke kubectl to create a running DSE cluster.
Mission Control manages or reconciles MissionControlCluster manifests defined either through the User Interface (UI) or through the Command Line Interface (CLI). When a MissionControlCluster resource is created, Mission Control automation picks up the changes and reconciles them into established containers running across the Data Planes.
Prerequisites
- Because Mission Control organizes clusters by projects, an existing project is required. See create project.
- A prepared environment on either bare-metal/VM or an existing Kubernetes cluster.
- Mission Control provides a UI through the IP address of any node using port 30880 on the Control Plane cluster. For example, enter https://10.0.0.3:30880 in a web browser, where 10.0.0.3 is a qualifying node's IP address. An example command for finding node IP addresses follows this list.
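If you are unsure which node IP addresses qualify, you can list the nodes of the Control Plane cluster with standard kubectl tooling; the INTERNAL-IP and EXTERNAL-IP columns show each node's addresses:
kubectl get nodes -o wide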
Define and create a cluster
Create a cluster by completing the following define and submit tasks. Review the automatic reconciliation workflow, and then monitor the reconciliation status with one kubectl command.
To define a new MissionControlCluster, start by creating a YAML file that defines the topology and configuration for the new cluster. This file is an instance of a MissionControlCluster Kubernetes Custom Resource (CR) and describes the target end-state for the cluster. What follows is a minimal example of a MissionControlCluster instance that creates a 3-node DSE cluster running version 6.8.26. Each node has 5 GB of storage available for data and requests 32 GB of RAM. See the capacity planning documentation for system requirements.
Sample (partial) MissionControlCluster manifest (object):
apiVersion: missioncontrol.datastax.com/v1beta1
kind: MissionControlCluster
metadata:
  name: my-cluster
spec:
  k8ssandra:
    cassandra:
      serverVersion: 6.8.26
      serverType: dse
      storageConfig:
        cassandraDataVolumeClaimSpec:
          storageClassName: standard
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 5Gi
      networking:
        hostNetwork: true
      datacenters:
        - metadata:
            name: dc1
          size: 3
          resources:
            requests:
              memory: 32Gi
...
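The manifest above requests volumes from the standard storage class. Storage class names vary by environment, so before submitting the manifest you may want to confirm which classes your Data Plane cluster offers and adjust storageClassName accordingly:
kubectl get storageclass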
During Mission Control installation, the user interface presents various configuration settings to review or set. Use the reference catalog to guide your configuration decisions.
- Specify certain parameters in this CR file:
- The apiVersion and kind parameters indicate what type of resource this file represents. In this example, kind is a MissionControlCluster resource with an apiVersion of v1beta1.
- This YAML specification outlines the metadata associated with this cluster. At a minimum, you must specify a name for your cluster. This value is used in the cluster_name parameter of cassandra.yaml. Each name must be unique within a Kubernetes namespace. Submitting two clusters with the same name results in the first cluster being overwritten by the second.
- Other fields that may be present in the metadata include annotations or labels, which provide additional ancillary data. At this time Mission Control does not use any of these fields, but they may be leveraged by automation within the user's environment.
- After the metadata block, review the spec, or specification, section. spec is the declaration of the target end-state for the cluster. Instead of describing the various steps to create a cluster, you simply define what you want your cluster to look like, and Mission Control handles reconciling existing or missing resources towards that end-state. See the MissionControlCluster reference for a list of options and their descriptions.
- The given MissionControlCluster is saved to disk as my-cluster.MissionControlCluster.yaml. Any filename is valid here. Using <resource_name>.<kind>.yaml allows you to easily differentiate multiple files in a given directory.
- Submit the MissionControlCluster YAML file to the Mission Control Control Plane Kubernetes cluster with kubectl. kubectl acts as a Kubernetes API client and handles calls to the Kubernetes API server. Advanced users may choose to leverage programmatic clients or GitOps tooling such as Flux instead of the imperative kubectl CLI. Submission of the object is handled with the kubectl apply sub-command. For example:
kubectl apply -f my-cluster.MissionControlCluster.yaml
This reads the file specified with the -f flag and submits it to the Control Plane Kubernetes cluster. If an object sharing the same namespace and name already exists within the Kubernetes API, it is updated to match the local file. If no such object exists, a new entry is created. As the new MissionControlCluster object becomes available within the Kubernetes API, Mission Control detects the new resource and immediately begins reconciliation.
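To confirm that the object was accepted, you can list it right after applying (this uses the mccluster short name that also appears in the describe command later in this guide):
kubectl get mccluster my-cluster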
Mission Control automatic reconciliation steps for MissionControlCluster resources
The following steps describe the automated process for informational purposes only; no user intervention is required.
- Cluster-level operators detect a new MissionControlCluster custom resource through the Kubernetes API within the Control Plane.
- Cluster-level operators identify which Data Plane clusters should receive the datacenters defined within the MissionControlCluster. In this example, the east Data Plane cluster is specified, so datacenter-level resources are created and reconciled there.
- Datacenter-level operators within the Data Plane clusters detect the new datacenter-level custom resources via the Kubernetes API.
- Datacenter-level operators generate and submit rack-level resources (StatefulSets) to their local Kubernetes API.
- Built-in Kubernetes reconciliation loops detect the new rack-level resources and begin creating pods and storage resources representing the underlying DSE nodes.
- Status of resource creation is reported up to operators at the datacenter and cluster levels.
- When all pods are up and running, the cluster-level operator signals the datacenter-level operators to begin bootstrap operations of DSE within the created and running pods.
- As pods come online, their status is escalated, and operations continue until all three nodes are up and running with services discoverable via the Kubernetes API.
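You can watch these resources appear as reconciliation proceeds. For example, assuming you substitute the namespace that holds the cluster, the following commands list the rack-level StatefulSets and follow pod creation on the Data Plane cluster:
kubectl get statefulsets -n <namespace>
kubectl get pods -n <namespace> -w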
Monitor bootstrap progress
- Monitor the progress of the bootstrap to determine completion status or note any errors.
After submission of the MissionControlCluster custom resource (CR), the operator modifies the resource within the Kubernetes API, adding a status field at the top level of the resource. This status field provides valuable insight into the health of the MissionControlCluster as one or more operators detect definition changes. status indicates everything from the reconciliation phase to errors encountered while attempting to create storage. Run the following command to retrieve the descriptive status for the my-cluster MissionControlCluster object:
kubectl describe mccluster/my-cluster
Sample results
Name:         my-cluster
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  missioncontrol.datastax.com/v1beta1
Kind:         MissionControlCluster
Metadata:
  Creation Timestamp:  2023-10-30T11:09:33Z
  Finalizers:
    missioncontrol.datastax.com/finalizer
  Generation:        1
  Resource Version:  105388250
  UID:               57e956f8-1f87-422f-a7f8-b9ec87b956c4
Spec:
  Create Issuer:  true
  Encryption:
    Internode Encryption:
      Certs:
        Cert Template:
          Issuer Ref:
            Name:
          Secret Name:
        Create Certs:  true
      Enabled:         true
  k8ssandra:
    Auth:  true
    Cassandra:
      Datacenters:
        Dse Workloads:
        Metadata:
          Name:  dc1
          Pods:
          Services:
            Additional Seed Service:
            All Pods Service:
            Dc Service:
            Node Port Service:
            Seed Service:
        Per Node Config Init Container Image:  <name>/yq:4
        Per Node Config Map Ref:
        Racks:
          Name:   rack1
        Size:     3
        Stopped:  false
      Metadata:
        Pods:
        Services:
          Additional Seed Service:
          All Pods Service:
          Dc Service:
          Node Port Service:
          Seed Service:
      Per Node Config Init Container Image:  <name>/yq:4
      Resources:
        Requests:
          Memory:  32Gi
      Server Type:     dse
      Server Version:  6.8.26
      Storage Config:
        Cassandra Data Volume Claim Spec:
          Access Modes:
            ReadWriteOnce
          Resources:
            Requests:
              Storage:         5Gi
          Storage Class Name:  standard
      Superuser Secret Ref:
        Name:  my-cluster-superuser
    Secrets Provider:  internal
Status:
  Conditions:
    Last Transition Time:  2023-10-30T14:08:08Z
    Message:
    Reason:                UpdatingIssuers
    Status:                False
    Type:                  UpdatingIssuers
    Last Transition Time:  2023-10-30T14:08:08Z
    Message:
    Reason:                UpdatingCertificates
    Status:                False
    Type:                  UpdatingCertificates
    Last Transition Time:  2023-10-30T14:08:08Z
    Message:
    Reason:                UpdatingReplicatedSecrets
    Status:                False
    Type:                  UpdatingReplicatedSecrets
    Last Transition Time:  2023-10-30T11:16:38Z
    Message:
    Reason:                UpdatingCluster
    Status:                False
    Type:                  UpdatingCluster
    Last Transition Time:  2023-10-30T14:08:08Z
    Message:               Ready
    Reason:                Ready
    Status:                True
    Type:                  Ready
Events:  <none>
- Access operator logs to discover more detail:
kubectl logs -n mission-control <pod-name>
An example <pod-name> is mission-control-controller. The StatefulSet controller is one of the core Kubernetes controllers that creates the pods. There is one pod per StatefulSet:
my-cluster-dc1-rack1-sts-0
my-cluster-dc1-rack2-sts-0
my-cluster-dc1-rack3-sts-0
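For a quicker check than reading the full describe output, you can extract just the Ready condition from the resource status and list the pods in the cluster's namespace (substitute your own namespace):
kubectl get mccluster my-cluster -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
kubectl get pods -n <namespace>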
Next steps
- Explore the MissionControlCluster reference documentation for a complete listing of all fields and values.
- Terminate the newly created DSE cluster.
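If you created the cluster from the manifest in this guide and want to remove it, one option is to delete the MissionControlCluster resource you applied; this tears down the DSE nodes it manages, so review the termination documentation first:
kubectl delete -f my-cluster.MissionControlCluster.yaml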