Quickstart: Local Mission Control installation
This guide walks you through installing Mission Control on a local Kubernetes cluster using kind (Kubernetes in Docker). Use this setup for development, testing, and learning Mission Control features before deploying to production environments.
**Note:** This quickstart is designed for local development and testing only. For production deployments, see Install Mission Control with Helm.
Prerequisites
- Docker or Podman installed and running
- kubectl version 1.28 or later installed
- Helm version 3.8 or later installed
- kind version 0.20 or later installed
- At least 16 GB of available RAM
- At least 50 GB of available disk space
- A Mission Control license ID for accessing container images
**Note:** Contact IBM Support for a Mission Control license ID. Only accounts with paid Hyper-Converged Database (HCD) or DataStax Enterprise (DSE) plans can submit support tickets. For information about DataStax products and subscription plans, see the DataStax products page.
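You can confirm the tool versions from a terminal before you begin:

```shell
docker --version        # or: podman --version
kubectl version --client
helm version --short
kind version
```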
Create a local Kubernetes cluster
Create a multi-node kind cluster with separate node pools for platform and database workloads.
1. Create a cluster configuration file named `kind-cluster-config.yaml`:

   ```yaml
   kind: Cluster
   apiVersion: kind.x-k8s.io/v1alpha4
   name: mission-control
   nodes:
     - role: control-plane
     - role: worker
       labels:
         mission-control.datastax.com/role: platform
     - role: worker
       labels:
         mission-control.datastax.com/role: platform
     - role: worker
       labels:
         mission-control.datastax.com/role: platform
     - role: worker
       labels:
         mission-control.datastax.com/role: database
     - role: worker
       labels:
         mission-control.datastax.com/role: database
     - role: worker
       labels:
         mission-control.datastax.com/role: database
   ```

2. Create the cluster:

   ```shell
   kind create cluster --config kind-cluster-config.yaml
   ```

3. Verify that the cluster is running:

   ```shell
   kubectl get nodes
   ```

   The output shows seven nodes:

   ```
   NAME                            STATUS   ROLES           AGE   VERSION
   mission-control-control-plane   Ready    control-plane   2m    v1.27.3
   mission-control-worker          Ready    <none>          2m    v1.27.3
   mission-control-worker2         Ready    <none>          2m    v1.27.3
   mission-control-worker3         Ready    <none>          2m    v1.27.3
   mission-control-worker4         Ready    <none>          2m    v1.27.3
   mission-control-worker5         Ready    <none>          2m    v1.27.3
   mission-control-worker6         Ready    <none>          2m    v1.27.3
   ```

   To verify the node labels, run:

   ```shell
   kubectl get nodes --show-labels
   ```

   The `LABELS` column shows the platform role on the first three workers and the database role on the last three:

   ```
   NAME                            STATUS   ROLES           AGE   VERSION   LABELS
   mission-control-control-plane   Ready    control-plane   2m    v1.27.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,...
   mission-control-worker          Ready    <none>          2m    v1.27.3   mission-control.datastax.com/role=platform,...
   mission-control-worker2         Ready    <none>          2m    v1.27.3   mission-control.datastax.com/role=platform,...
   mission-control-worker3         Ready    <none>          2m    v1.27.3   mission-control.datastax.com/role=platform,...
   mission-control-worker4         Ready    <none>          2m    v1.27.3   mission-control.datastax.com/role=database,...
   mission-control-worker5         Ready    <none>          2m    v1.27.3   mission-control.datastax.com/role=database,...
   mission-control-worker6         Ready    <none>          2m    v1.27.3   mission-control.datastax.com/role=database,...
   ```
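You can also list only the database nodes by filtering on the role label:

```shell
kubectl get nodes -l mission-control.datastax.com/role=database
```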
Install cert-manager
Mission Control uses cert-manager to issue and manage TLS certificates, so you must install cert-manager before installing Mission Control.

When you delete a `MissionControlCluster` resource, Mission Control deletes the associated certificate objects. However, cert-manager's default settings leave the generated secrets behind when the upstream certificates are deleted. To ensure that certificate secrets are cleaned up when you delete Mission Control clusters, configure cert-manager to add owner references to the secrets it generates.
Helm installation:

```shell
# Add the Jetstack Helm repository if you haven't already:
helm repo add jetstack https://charts.jetstack.io
helm repo update

helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version VERSION \
  --set 'extraArgs[0]=--enable-certificate-owner-ref=true'
```

Replace `VERSION` with the version of cert-manager that you want to install. DataStax recommends version 1.16.1.

Kustomize installation:
Add the following argument to the `cert-manager` deployment in the `cert-manager-controller` container:

```yaml
- '--enable-certificate-owner-ref=true'
```
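For reference, here is a minimal sketch of a Kustomize overlay that applies this change. The manifest URL follows cert-manager's standard GitHub release layout, and the patch targets the `cert-manager` controller Deployment; adjust both to match your installation:

```yaml
# kustomization.yaml -- a sketch; adjust the version and target to your setup
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - https://github.com/cert-manager/cert-manager/releases/download/v1.16.1/cert-manager.yaml
patches:
  - target:
      kind: Deployment
      name: cert-manager
    patch: |-
      # Append the owner-reference flag to the controller's arguments
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --enable-certificate-owner-ref=true
```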
OpenShift installation:

The OpenShift cert-manager Operator doesn't automatically delete certificate secrets when you remove `Certificate` resources. You must clean up secret resources manually with `oc` commands.
1. Find the secret that backs the `Certificate` resource you want to delete:

   ```shell
   oc get certificate CERTIFICATE_NAME -n NAMESPACE -o yaml | grep "secretName"
   ```

   Replace the following:

   - `CERTIFICATE_NAME`: The name of the `Certificate` resource
   - `NAMESPACE`: The namespace of the `Certificate` resource

2. Check whether any resources are using the secret:

   ```shell
   oc get all -n NAMESPACE -o custom-columns=KIND:.kind,NAME:.metadata.name | xargs -L1 oc get -n NAMESPACE -o yaml | grep -B 2 -A 5 SECRET_NAME
   ```

   Replace the following:

   - `SECRET_NAME`: The name of the secret resource
   - `NAMESPACE`: The namespace of the secret resource

3. If no other resources are using the secret, delete it:

   ```shell
   oc delete secret SECRET_NAME -n NAMESPACE
   ```

   Replace the following:

   - `SECRET_NAME`: The name of the secret resource
   - `NAMESPACE`: The namespace of the secret resource
After installation, wait for cert-manager to be ready:

```shell
kubectl wait --for=condition=ready pod -l app.kubernetes.io/instance=cert-manager -n cert-manager --timeout=300s
```
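To confirm that the owner-reference flag took effect, inspect the controller's arguments. With the Helm release name shown above, the controller Deployment is named `cert-manager`:

```shell
kubectl get deployment cert-manager -n cert-manager \
  -o jsonpath='{.spec.template.spec.containers[0].args}'
```

The output should include `--enable-certificate-owner-ref=true`.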
Deploy S3-compatible storage
Mission Control requires S3-compatible object storage for backups and observability data. This guide uses SeaweedFS as a lightweight S3-compatible storage solution.
1. Create a file named `seaweedfs-deployment.yaml` that defines the SeaweedFS deployment:

   ```yaml
   apiVersion: v1
   kind: Namespace
   metadata:
     name: seaweedfs
   ---
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: seaweedfs-s3-config
     namespace: seaweedfs
   data:
     s3.json: |
       {
         "identities": [
           {
             "name": "anonymous",
             "credentials": [
               {
                 "accessKey": "any",
                 "secretKey": "any"
               }
             ],
             "actions": ["Admin", "Read", "List", "Tagging", "Write"]
           }
         ]
       }
   ---
   apiVersion: v1
   kind: Service
   metadata:
     name: seaweedfs
     namespace: seaweedfs
   spec:
     type: ClusterIP
     ports:
       - name: s3
         port: 8333
         targetPort: 8333
     selector:
       app: seaweedfs
   ---
   apiVersion: apps/v1
   kind: StatefulSet
   metadata:
     name: seaweedfs
     namespace: seaweedfs
   spec:
     serviceName: seaweedfs
     replicas: 1
     selector:
       matchLabels:
         app: seaweedfs
     template:
       metadata:
         labels:
           app: seaweedfs
       spec:
         initContainers:
           - name: init-config
             image: chrislusf/seaweedfs:latest
             command:
               - sh
               - -c
               - |
                 mkdir -p /data/s3
                 cp /config/s3.json /data/s3/config.json
             volumeMounts:
               - name: data
                 mountPath: /data
               - name: s3-config
                 mountPath: /config
         containers:
           - name: seaweedfs
             image: chrislusf/seaweedfs:latest
             args:
               - server
               - -s3
               - -s3.config=/data/s3/config.json
               - -dir=/data
               - -volume.max=0
             ports:
               - containerPort: 8333
                 name: s3
             volumeMounts:
               - name: data
                 mountPath: /data
               - name: s3-config
                 mountPath: /config
         volumes:
           - name: s3-config
             configMap:
               name: seaweedfs-s3-config
     volumeClaimTemplates:
       - metadata:
           name: data
         spec:
           accessModes: ["ReadWriteOnce"]
           resources:
             requests:
               storage: 10Gi
   ```

2. Apply the configuration:

   ```shell
   kubectl apply -f seaweedfs-deployment.yaml
   ```

3. Wait for SeaweedFS to be ready:

   ```shell
   kubectl wait --for=condition=ready pod -l app=seaweedfs -n seaweedfs --timeout=300s
   ```

4. Create a file named `create-s3-buckets.yaml` that defines a job to create the required S3 buckets:

   ```yaml
   apiVersion: batch/v1
   kind: Job
   metadata:
     name: create-s3-buckets
     namespace: seaweedfs
   spec:
     template:
       spec:
         restartPolicy: OnFailure
         containers:
           - name: create-buckets
             image: amazon/aws-cli:latest
             env:
               - name: AWS_ACCESS_KEY_ID
                 value: "any"
               - name: AWS_SECRET_ACCESS_KEY
                 value: "any"
             command:
               - sh
               - -c
               - |
                 aws --endpoint-url=http://seaweedfs.seaweedfs.svc.cluster.local:8333 s3 mb s3://mission-control-backups || true
                 aws --endpoint-url=http://seaweedfs.seaweedfs.svc.cluster.local:8333 s3 mb s3://loki-chunks || true
                 aws --endpoint-url=http://seaweedfs.seaweedfs.svc.cluster.local:8333 s3 mb s3://mimir-blocks || true
   ```

   If you changed the service name or credentials in the previous steps, replace the following to match:

   - `seaweedfs.seaweedfs.svc.cluster.local`: Your SeaweedFS service name
   - The `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` values: Your S3 credentials

5. Apply the bucket creation job:

   ```shell
   kubectl apply -f create-s3-buckets.yaml
   ```
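To confirm that the buckets were created, wait for the job to complete and review its logs:

```shell
kubectl wait --for=condition=complete job/create-s3-buckets -n seaweedfs --timeout=120s
kubectl logs -n seaweedfs job/create-s3-buckets
```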
Install Mission Control
Install Mission Control using Helm with a custom values file.
1. Create a `values.yaml` file:

   ```yaml
   controlPlane: true
   disableCertManagerCheck: false
   nodeSelector:
     mission-control.datastax.com/role: platform
   allowOperatorsOnDatabaseNodes: false
   client:
     manageCrds: true
   ui:
     enabled: true
     service:
       type: ClusterIP
     https:
       enabled: true
     nodeSelector:
       mission-control.datastax.com/role: platform
     resources:
       requests:
         memory: "256Mi"
         cpu: "100m"
       limits:
         memory: "512Mi"
         cpu: "500m"
   dex:
     nodeSelector:
       mission-control.datastax.com/role: platform
     resources:
       requests:
         memory: "128Mi"
         cpu: "50m"
       limits:
         memory: "256Mi"
         cpu: "200m"
     config:
       enablePasswordDB: true
       staticPasswords:
         - email: admin
           hash: "BCRYPT_HASH"
           username: admin
           userID: "USER_ID"
   grafana:
     enabled: true
     nodeSelector:
       mission-control.datastax.com/role: platform
     resources:
       requests:
         memory: "256Mi"
         cpu: "100m"
       limits:
         memory: "512Mi"
         cpu: "500m"
   loki:
     enabled: true
     loki:
       storage:
         bucketNames:
           chunks: loki-chunks
         s3:
           accessKeyId: any
           endpoint: http://seaweedfs.seaweedfs.svc.cluster.local:8333
           insecure: true
           region: us-east-1
           s3ForcePathStyle: true
           secretAccessKey: any
         type: s3
     read:
       nodeSelector:
         mission-control.datastax.com/role: platform
       persistence:
         enabled: true
         size: 5Gi
       replicas: 1
       resources:
         requests:
           memory: "256Mi"
           cpu: "100m"
         limits:
           memory: "512Mi"
           cpu: "500m"
     write:
       nodeSelector:
         mission-control.datastax.com/role: platform
       persistence:
         enabled: true
         size: 5Gi
       replicas: 1
       resources:
         requests:
           memory: "256Mi"
           cpu: "100m"
         limits:
           memory: "512Mi"
           cpu: "500m"
     backend:
       nodeSelector:
         mission-control.datastax.com/role: platform
       replicas: 1
       resources:
         requests:
           memory: "256Mi"
           cpu: "100m"
         limits:
           memory: "512Mi"
           cpu: "500m"
       extraArgs:
         - '-config.expand-env=true'
   mimir:
     enabled: true
     mimir:
       structuredConfig:
         blocks_storage:
           backend: s3
           s3:
             endpoint: seaweedfs.seaweedfs.svc.cluster.local:8333
             bucket_name: mimir-blocks
             access_key_id: any
             secret_access_key: any
             insecure: true
     nginx:
       nodeSelector:
         mission-control.datastax.com/role: platform
       resources:
         requests:
           memory: "128Mi"
           cpu: "50m"
         limits:
           memory: "256Mi"
           cpu: "200m"
   k8ssandra-operator:
     disableCrdUpgraderJob: true
     nodeSelector:
       mission-control.datastax.com/role: platform
     resources:
       requests:
         memory: "256Mi"
         cpu: "100m"
       limits:
         memory: "512Mi"
         cpu: "500m"
     cass-operator:
       disableCertManagerCheck: true
       nodeSelector:
         mission-control.datastax.com/role: platform
       resources:
         requests:
           memory: "256Mi"
           cpu: "100m"
         limits:
           memory: "512Mi"
           cpu: "500m"
   ```

   Replace the following:

   - `BCRYPT_HASH`: The bcrypt hash of your admin password. You can generate the hash with the following command (see the container-based alternative after this step if `htpasswd` isn't installed):

     ```shell
     echo yourPassword | htpasswd -BinC 10 admin | cut -d: -f2
     ```

   - `USER_ID`: A unique identifier for the admin user (for example, `admin-001`)
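If `htpasswd` isn't available locally, one alternative is to run it from the Apache httpd container image, which bundles the tool. The image tag is one common choice, not a requirement:

```shell
# Prints only the bcrypt hash (cost 10); replace yourPassword with your password.
docker run --rm httpd:2.4 htpasswd -nbBC 10 admin yourPassword | cut -d: -f2
```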
2. Add the Mission Control Helm repository:

   ```shell
   helm repo add mission-control HELM_REPO_URL
   helm repo update
   ```

   Replace `HELM_REPO_URL` with the Helm repository URL provided by DataStax. Contact IBM Support for repository access.

3. Install Mission Control:

   ```shell
   helm install mission-control mission-control/mission-control \
     --namespace mission-control \
     --create-namespace \
     --values values.yaml \
     --wait \
     --timeout 10m
   ```

4. Verify the installation:

   ```shell
   kubectl get pods -n mission-control
   ```

   All pods should show `Running` status.
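You can also check the Helm release itself, using the release name and namespace from the install command above:

```shell
helm status mission-control -n mission-control
```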
Access the Mission Control UI
Access the Mission Control UI using port forwarding.
1. Create a port forward to the UI service:

   ```shell
   kubectl port-forward -n mission-control svc/mission-control-ui 9091:8080
   ```

2. Open your browser and navigate to `https://localhost:9091`.

3. Accept the self-signed certificate warning.

4. Log in with the credentials that you configured in the Dex `staticPasswords` section of your `values.yaml` file.
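If the browser can't reach the UI, you can confirm that the forwarded port responds. The `-k` flag skips verification of the self-signed certificate:

```shell
curl -k -s -o /dev/null -w '%{http_code}\n' https://localhost:9091
```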
Create a database cluster
Create a test database cluster to verify your Mission Control installation.
1. Create an image pull secret for HCD images (see the verification command after this step):

   ```shell
   kubectl create secret docker-registry mission-control-registry \
     --docker-server=proxy.replicated.com \
     --docker-username=LICENSE_ID \
     --docker-password=LICENSE_ID \
     -n NAMESPACE
   ```

   Replace the following:

   - `LICENSE_ID`: Your Mission Control license ID
   - `NAMESPACE`: The namespace where you want to create the cluster
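To confirm that the registry credentials were stored correctly, you can decode the secret (the leading dot in the key is escaped in the jsonpath expression):

```shell
kubectl get secret mission-control-registry -n NAMESPACE \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d
```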
2. Create a `K8ssandraCluster` resource in a file named `test-cluster.yaml`:

   ```yaml
   apiVersion: k8ssandra.io/v1alpha1
   kind: K8ssandraCluster
   metadata:
     name: test-cluster
     namespace: NAMESPACE
   spec:
     cassandra:
       serverVersion: "1.2.4"
       serverImage: "proxy.replicated.com/proxy/datastax-hcd/hcd-server"
       datacenters:
         - metadata:
             name: dc1
           size: 1
           storageConfig:
             cassandraDataVolumeClaimSpec:
               storageClassName: standard
               accessModes:
                 - ReadWriteOnce
               resources:
                 requests:
                   storage: 5Gi
           config:
             cassandraYaml:
               num_tokens: 16
       resources:
         requests:
           memory: "1Gi"
           cpu: "500m"
         limits:
           memory: "2Gi"
           cpu: "1000m"
       imagePullSecrets:
         - name: mission-control-registry
   ```

   Replace `NAMESPACE` with your target namespace.

3. Apply the cluster configuration:

   ```shell
   kubectl apply -f test-cluster.yaml
   ```

4. Monitor cluster creation:

   ```shell
   kubectl get k8ssandracluster -n NAMESPACE -w
   ```

   Replace `NAMESPACE` with your target namespace. The cluster reaches `Ready` status when all nodes are running.
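You can also watch the underlying database pods come up. The label selector shown here follows cass-operator's convention of labeling pods with the cluster name; verify it against your deployment:

```shell
kubectl get pods -n NAMESPACE -l cassandra.datastax.com/cluster=test-cluster -w
```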
Troubleshooting
You might encounter the following issues during installation or cluster creation.
Authentication fails
If you can’t log in to the UI, do the following:
1. Check the Dex pod logs:

   ```shell
   kubectl logs -n mission-control -l app.kubernetes.io/name=dex
   ```

2. Verify that the Dex configuration includes `enablePasswordDB: true` and valid static passwords.

3. Restart the Dex pod:

   ```shell
   kubectl delete pod -n mission-control -l app.kubernetes.io/name=dex
   ```

4. Restart the UI pod:

   ```shell
   kubectl delete pod -n mission-control -l app.kubernetes.io/name=mission-control-ui
   ```
Image pull failures
If database pods fail with `ImagePullBackOff` errors, do the following:

1. Verify that the image pull secret exists:

   ```shell
   kubectl get secret mission-control-registry -n NAMESPACE
   ```

2. Check that the secret is referenced in the `K8ssandraCluster` spec under `imagePullSecrets`.

3. Verify that your license ID is valid.
Resource exhaustion
If pods fail to schedule or the cluster becomes unresponsive, do the following:

1. Check node resource usage (this requires metrics-server, which kind doesn't install by default):

   ```shell
   kubectl top nodes
   ```

2. Reduce the database cluster size to one node.

3. Reduce the resource requests in your `values.yaml` file.

4. As a last resort, recreate the kind cluster:

   ```shell
   kind delete cluster --name mission-control
   kind create cluster --config kind-cluster-config.yaml
   ```
Port forward disconnects
If the port forward to the UI disconnects frequently, do the following:

1. Run the port forward in the background:

   ```shell
   kubectl port-forward -n mission-control svc/mission-control-ui 9091:8080 &
   ```

2. Alternatively, use a NodePort service instead:

   ```shell
   kubectl patch svc mission-control-ui -n mission-control -p '{"spec":{"type":"NodePort"}}'
   ```

3. Get the assigned NodePort:

   ```shell
   kubectl get svc mission-control-ui -n mission-control
   ```

4. Access the UI using the IP address of a kind node and the NodePort.
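One way to find the node addresses (for kind, these are the node containers' IPs on the Docker network) is:

```shell
kubectl get nodes -o wide
```

The `INTERNAL-IP` column lists each node's address. On macOS and Windows, Docker Desktop doesn't route traffic from the host to container IPs, so prefer the port-forward approach there.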
API server timeouts
If kubectl commands time out with connection errors, do the following:

1. Check the control plane status:

   ```shell
   docker ps --filter name=mission-control-control-plane
   ```

   Or with Podman:

   ```shell
   podman ps --filter name=mission-control-control-plane
   ```

2. Restart the control plane container if needed:

   ```shell
   docker restart mission-control-control-plane
   ```

   Or with Podman:

   ```shell
   podman restart mission-control-control-plane
   ```

3. Wait 30-60 seconds for the API server to become responsive.
Pods stuck in Pending
If database pods remain in `Pending` state, do the following:

1. Check the pod events:

   ```shell
   kubectl describe pod POD_NAME -n NAMESPACE
   ```

2. Review the common causes:

   - Insufficient resources: Reduce the memory and CPU requests in the `K8ssandraCluster` spec.
   - Missing node labels: Verify that the database nodes have the `mission-control.datastax.com/role: database` label.
   - Storage issues: Check that the StorageClass exists and can provision volumes.

3. Reduce the resource requests if needed:

   ```shell
   kubectl patch k8ssandracluster CLUSTER_NAME -n NAMESPACE \
     --type='json' \
     -p='[{"op": "replace", "path": "/spec/cassandra/resources/requests/memory", "value": "1Gi"}, {"op": "replace", "path": "/spec/cassandra/resources/requests/cpu", "value": "500m"}]'
   ```

   Replace the following:

   - `CLUSTER_NAME`: The name of your `K8ssandraCluster`
   - `NAMESPACE`: The namespace where the cluster is deployed
SeaweedFS connection failures
If Mission Control components can't connect to SeaweedFS, do the following:

1. Verify that SeaweedFS is running:

   ```shell
   kubectl get pods -n seaweedfs
   ```

2. Check the SeaweedFS service:

   ```shell
   kubectl get svc -n seaweedfs
   ```

3. Test S3 connectivity from within the cluster. The `amazon/aws-cli` image's entrypoint is `aws`, so pass only the arguments after `--`:

   ```shell
   kubectl run -it --rm debug --image=amazon/aws-cli --restart=Never \
     --env=AWS_ACCESS_KEY_ID=any --env=AWS_SECRET_ACCESS_KEY=any -- \
     --endpoint-url=http://seaweedfs.seaweedfs.svc.cluster.local:8333 s3 ls
   ```

4. Verify that the buckets exist by checking the bucket creation job logs:

   ```shell
   kubectl logs -n seaweedfs job/create-s3-buckets
   ```
Certificate errors
If you encounter certificate validation errors, do the following:

1. Verify that cert-manager is running:

   ```shell
   kubectl get pods -n cert-manager
   ```

2. Check the certificate status:

   ```shell
   kubectl get certificates -n mission-control
   ```

3. View the certificate details:

   ```shell
   kubectl describe certificate CERTIFICATE_NAME -n mission-control
   ```

   Replace `CERTIFICATE_NAME` with the name of a certificate from the previous step.

4. If certificates fail to issue, check the cert-manager logs:

   ```shell
   kubectl logs -n cert-manager -l app=cert-manager
   ```
Clean up
Delete the kind cluster when you finish testing the local Mission Control installation:
```shell
kind delete cluster --name mission-control
```
This removes all Mission Control components and database clusters.
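To confirm that the cluster is gone, list the remaining kind clusters:

```shell
kind get clusters
```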