Quickstart: Local Mission Control installation

This guide walks you through installing Mission Control on a local Kubernetes cluster using kind (Kubernetes in Docker). Use this setup for development, testing, and learning Mission Control features before deploying to production environments.

This quickstart is designed for local development and testing only. For production deployments, see Install Mission Control with Helm.

Prerequisites

  • Docker or Podman installed and running

  • kubectl version 1.28 or later installed

  • Helm version 3.8 or later installed

  • kind version 0.20 or later installed

  • At least 16GB of available RAM

  • At least 50GB of available disk space

  • A Mission Control license ID for accessing container images

Contact IBM Support for a Mission Control license ID. Only accounts with paid Hyper-Converged Database (HCD) or DataStax Enterprise (DSE) plans can submit support tickets. For information about DataStax products and subscription plans, see the DataStax products page.

Create a local Kubernetes cluster

Create a multi-node kind cluster with separate node pools for platform and database workloads.

  1. Create a cluster configuration file named kind-cluster-config.yaml:

    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    name: mission-control
    nodes:
      - role: control-plane
      - role: worker
        labels:
          mission-control.datastax.com/role: platform
      - role: worker
        labels:
          mission-control.datastax.com/role: platform
      - role: worker
        labels:
          mission-control.datastax.com/role: platform
      - role: worker
        labels:
          mission-control.datastax.com/role: database
      - role: worker
        labels:
          mission-control.datastax.com/role: database
      - role: worker
        labels:
          mission-control.datastax.com/role: database
  2. Create the cluster:

    kind create cluster --config kind-cluster-config.yaml
  3. Verify that the cluster is running:

    kubectl get nodes

    The output shows the control plane node and six worker nodes:

    Result
    NAME                            STATUS   ROLES           AGE   VERSION
    mission-control-control-plane   Ready    control-plane   2m    v1.27.3
    mission-control-worker          Ready    <none>          2m    v1.27.3
    mission-control-worker2         Ready    <none>          2m    v1.27.3
    mission-control-worker3         Ready    <none>          2m    v1.27.3
    mission-control-worker4         Ready    <none>          2m    v1.27.3
    mission-control-worker5         Ready    <none>          2m    v1.27.3
    mission-control-worker6         Ready    <none>          2m    v1.27.3

    To verify the node labels, run:

    kubectl get nodes --show-labels
    Result
    NAME                            STATUS   ROLES           AGE   VERSION   LABELS
    mission-control-control-plane   Ready    control-plane   2m    v1.27.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,...
    mission-control-worker          Ready    <none>          2m    v1.27.3   mission-control.datastax.com/role=platform,...
    mission-control-worker2         Ready    <none>          2m    v1.27.3   mission-control.datastax.com/role=platform,...
    mission-control-worker3         Ready    <none>          2m    v1.27.3   mission-control.datastax.com/role=platform,...
    mission-control-worker4         Ready    <none>          2m    v1.27.3   mission-control.datastax.com/role=database,...
    mission-control-worker5         Ready    <none>          2m    v1.27.3   mission-control.datastax.com/role=database,...
    mission-control-worker6         Ready    <none>          2m    v1.27.3   mission-control.datastax.com/role=database,...

Install cert-manager

Mission Control uses cert-manager to issue and renew TLS certificates, so cert-manager must be installed before Mission Control.

When you delete a MissionControlCluster resource, Mission Control deletes the associated Certificate objects. However, with its default settings, cert-manager leaves the generated secrets behind when the upstream certificates are deleted.

To ensure that certificate secrets are cleaned up when you delete Mission Control clusters, configure cert-manager to add ownership references to the secrets it generates.

Helm installation
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version VERSION \
  --set 'extraArgs[0]=--enable-certificate-owner-ref=true'

Replace VERSION with the version of cert-manager you want to install. DataStax recommends version 1.16.1.
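If the jetstack chart repository isn't already configured in your Helm client, add it before running the install command. The URL below is cert-manager's standard chart repository:

helm repo add jetstack https://charts.jetstack.io
helm repo update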

Kustomize installation

Add the following argument to the cert-manager deployment in the cert-manager-controller container:

- '--enable-certificate-owner-ref=true'

OpenShift installation

The OpenShift cert-manager Operator doesn’t automatically delete certificate secrets when you remove Certificate resources. You must clean up secret resources manually with oc commands.

  1. Find the secret associated with the Certificate resource that you want to clean up:

    oc get certificate CERTIFICATE_NAME -n NAMESPACE -o yaml | grep "secretName"

    Replace the following:

    • CERTIFICATE_NAME: The name of the Certificate resource

    • NAMESPACE: The namespace of the Certificate resource

  2. Check if any resources are using the secret:

    oc get all -n NAMESPACE -o custom-columns=KIND:.kind,NAME:.metadata.name --no-headers | xargs -L1 oc get -n NAMESPACE -o yaml | grep -B 2 -A 5 SECRET_NAME

    Replace the following:

    • SECRET_NAME: The name of the secret resource

    • NAMESPACE: The namespace of the secret resource

  3. If no other resources are using the secret, delete it:

    oc delete secret SECRET_NAME -n NAMESPACE

    Replace the following:

    • SECRET_NAME: The name of the secret resource

    • NAMESPACE: The namespace of the secret resource

Wait for cert-manager to be ready:

kubectl wait --for=condition=ready pod -l app.kubernetes.io/instance=cert-manager -n cert-manager --timeout=300s
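To confirm that the ownership-reference flag took effect, you can inspect the controller arguments. This check assumes the default deployment name cert-manager used by the Helm chart:

kubectl get deployment cert-manager -n cert-manager -o jsonpath='{.spec.template.spec.containers[0].args}'

The output should include --enable-certificate-owner-ref=true.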

Deploy S3-compatible storage

Mission Control requires S3-compatible object storage for backups and observability data. This guide uses SeaweedFS as a lightweight S3-compatible storage solution.

  1. Create the SeaweedFS deployment manifest in a file named seaweedfs-deployment.yaml:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: seaweedfs
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: seaweedfs-s3-config
      namespace: seaweedfs
    data:
      s3.json: |
        {
          "identities": [
            {
              "name": "anonymous",
              "credentials": [
                {
                  "accessKey": "any",
                  "secretKey": "any"
                }
              ],
              "actions": [
                "Admin",
                "Read",
                "List",
                "Tagging",
                "Write"
              ]
            }
          ]
        }
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: seaweedfs
      namespace: seaweedfs
    spec:
      type: ClusterIP
      ports:
        - name: s3
          port: 8333
          targetPort: 8333
      selector:
        app: seaweedfs
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: seaweedfs
      namespace: seaweedfs
    spec:
      serviceName: seaweedfs
      replicas: 1
      selector:
        matchLabels:
          app: seaweedfs
      template:
        metadata:
          labels:
            app: seaweedfs
        spec:
          initContainers:
          - name: init-config
            image: chrislusf/seaweedfs:latest
            command:
            - sh
            - -c
            - |
              mkdir -p /data/s3
              cp /config/s3.json /data/s3/config.json
            volumeMounts:
            - name: data
              mountPath: /data
            - name: s3-config
              mountPath: /config
          containers:
          - name: seaweedfs
            image: chrislusf/seaweedfs:latest
            args:
            - server
            - -s3
            - -s3.config=/data/s3/config.json
            - -dir=/data
            - -volume.max=0
            ports:
            - containerPort: 8333
              name: s3
            volumeMounts:
            - name: data
              mountPath: /data
            - name: s3-config
              mountPath: /config
          volumes:
          - name: s3-config
            configMap:
              name: seaweedfs-s3-config
      volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi
  2. Apply the configuration:

    kubectl apply -f seaweedfs-deployment.yaml
  3. Wait for SeaweedFS to be ready:

    kubectl wait --for=condition=ready pod -l app=seaweedfs -n seaweedfs --timeout=300s
  4. Create the required S3 buckets with a Job defined in a file named create-s3-buckets.yaml:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: create-s3-buckets
      namespace: seaweedfs
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: create-buckets
            image: amazon/aws-cli:latest
            env:
            - name: AWS_ACCESS_KEY_ID
              value: "any"
            - name: AWS_SECRET_ACCESS_KEY
              value: "any"
            command:
            - sh
            - -c
            - |
              aws --endpoint-url=http://seaweedfs.seaweedfs.svc.cluster.local:8333 s3 mb s3://mission-control-backups || true
              aws --endpoint-url=http://seaweedfs.seaweedfs.svc.cluster.local:8333 s3 mb s3://loki-chunks || true
              aws --endpoint-url=http://seaweedfs.seaweedfs.svc.cluster.local:8333 s3 mb s3://mimir-blocks || true

    The endpoint seaweedfs.seaweedfs.svc.cluster.local:8333 and the any/any credentials match the SeaweedFS deployment created earlier in this guide. If you use a different S3-compatible store, replace the endpoint URL and set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to your actual credentials.

  5. Apply the bucket creation job:

    kubectl apply -f create-s3-buckets.yaml
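    The job exits after the buckets exist. To confirm that it completed, you can wait for the Job and review its logs. This check assumes the job name and namespace used in this guide:

    kubectl wait --for=condition=complete job/create-s3-buckets -n seaweedfs --timeout=120s
    kubectl logs -n seaweedfs job/create-s3-buckets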

Install Mission Control

Install Mission Control using Helm with a custom values file.

  1. Create a values.yaml file:

    controlPlane: true
    disableCertManagerCheck: false
    
    nodeSelector:
      mission-control.datastax.com/role: platform
    
    allowOperatorsOnDatabaseNodes: false
    
    client:
      manageCrds: true
    
    ui:
      enabled: true
      service:
        type: ClusterIP
      https:
        enabled: true
      nodeSelector:
        mission-control.datastax.com/role: platform
      resources:
        requests:
          memory: "256Mi"
          cpu: "100m"
        limits:
          memory: "512Mi"
          cpu: "500m"
    
    dex:
      nodeSelector:
        mission-control.datastax.com/role: platform
      resources:
        requests:
          memory: "128Mi"
          cpu: "50m"
        limits:
          memory: "256Mi"
          cpu: "200m"
      config:
        enablePasswordDB: true
        staticPasswords:
          - email: admin
            hash: "BCRYPT_HASH"
            username: admin
            userID: "USER_ID"
    
    grafana:
      enabled: true
      nodeSelector:
        mission-control.datastax.com/role: platform
      resources:
        requests:
          memory: "256Mi"
          cpu: "100m"
        limits:
          memory: "512Mi"
          cpu: "500m"
    
    loki:
      enabled: true
      loki:
        storage:
          bucketNames:
            chunks: loki-chunks
          s3:
            accessKeyId: any
            endpoint: http://seaweedfs.seaweedfs.svc.cluster.local:8333
            insecure: true
            region: us-east-1
            s3ForcePathStyle: true
            secretAccessKey: any
          type: s3
      read:
        nodeSelector:
          mission-control.datastax.com/role: platform
        persistence:
          enabled: true
          size: 5Gi
        replicas: 1
        resources:
          requests:
            memory: "256Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "500m"
      write:
        nodeSelector:
          mission-control.datastax.com/role: platform
        persistence:
          enabled: true
          size: 5Gi
        replicas: 1
        resources:
          requests:
            memory: "256Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "500m"
      backend:
        nodeSelector:
          mission-control.datastax.com/role: platform
        replicas: 1
        resources:
          requests:
            memory: "256Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        extraArgs:
          - '-config.expand-env=true'
    
    mimir:
      enabled: true
      mimir:
        structuredConfig:
          blocks_storage:
            backend: s3
            s3:
              endpoint: seaweedfs.seaweedfs.svc.cluster.local:8333
              bucket_name: mimir-blocks
              access_key_id: any
              secret_access_key: any
              insecure: true
      nginx:
        nodeSelector:
          mission-control.datastax.com/role: platform
        resources:
          requests:
            memory: "128Mi"
            cpu: "50m"
          limits:
            memory: "256Mi"
            cpu: "200m"
    
    k8ssandra-operator:
      disableCrdUpgraderJob: true
      nodeSelector:
        mission-control.datastax.com/role: platform
      resources:
        requests:
          memory: "256Mi"
          cpu: "100m"
        limits:
          memory: "512Mi"
          cpu: "500m"
      cass-operator:
        disableCertManagerCheck: true
        nodeSelector:
          mission-control.datastax.com/role: platform
        resources:
          requests:
            memory: "256Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "500m"

    Replace the following:

    • BCRYPT_HASH: The bcrypt hash of your admin password. You can generate this with the following command:

      echo yourPassword | htpasswd -BinC 10 admin | cut -d: -f2
    • USER_ID: A unique identifier for the admin user (for example, admin-001)

  2. Add the Mission Control Helm repository:

    helm repo add mission-control HELM_REPO_URL
    helm repo update

    Replace HELM_REPO_URL with the Helm repository URL provided by DataStax. Contact IBM Support for repository access.

  3. Install Mission Control:

    helm install mission-control mission-control/mission-control \
      --namespace mission-control \
      --create-namespace \
      --values values.yaml \
      --wait \
      --timeout 10m
  4. Verify the installation:

    kubectl get pods -n mission-control

    All pods should show Running status.
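    If some pods are still starting, you can block until everything in the namespace reports Ready. This is a convenience check; if the release created any one-shot Job pods that have already completed, the command reports an error for them that you can ignore:

    kubectl wait --for=condition=ready pod --all -n mission-control --timeout=600s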

Access the Mission Control UI

Access the Mission Control UI using port forwarding.

  1. Create a port forward to the UI service:

    kubectl port-forward -n mission-control svc/mission-control-ui 9091:8080
  2. Open your browser and navigate to https://localhost:9091.

  3. Accept the self-signed certificate warning.

  4. Log in with the credentials you configured in the Dex static passwords section.
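If the browser can't reach the UI, you can first confirm that the forwarded port responds while the port forward from step 1 is running in another terminal. The -k flag skips verification of the self-signed certificate:

curl -k https://localhost:9091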

Create a database cluster

Create a test database cluster to verify your Mission Control installation.

  1. Create an image pull secret for HCD images:

    kubectl create secret docker-registry mission-control-registry \
      --docker-server=proxy.replicated.com \
      --docker-username=LICENSE_ID \
      --docker-password=LICENSE_ID \
      -n NAMESPACE

    Replace the following:

    • LICENSE_ID: Your Mission Control license ID

    • NAMESPACE: The namespace where you want to create the cluster

  2. Create a K8ssandraCluster manifest in a file named test-cluster.yaml:

    apiVersion: k8ssandra.io/v1alpha1
    kind: K8ssandraCluster
    metadata:
      name: test-cluster
      namespace: NAMESPACE
    spec:
      cassandra:
        serverVersion: "1.2.4"
        serverImage: "proxy.replicated.com/proxy/datastax-hcd/hcd-server"
        datacenters:
          - metadata:
              name: dc1
            size: 1
            storageConfig:
              cassandraDataVolumeClaimSpec:
                storageClassName: standard
                accessModes:
                  - ReadWriteOnce
                resources:
                  requests:
                    storage: 5Gi
            config:
              cassandraYaml:
                num_tokens: 16
            resources:
              requests:
                memory: "1Gi"
                cpu: "500m"
              limits:
                memory: "2Gi"
                cpu: "1000m"
        imagePullSecrets:
          - name: mission-control-registry

    Replace NAMESPACE with your target namespace.

  3. Apply the cluster configuration:

    kubectl apply -f test-cluster.yaml
  4. Monitor cluster creation:

    kubectl get k8ssandracluster -n NAMESPACE -w

    Replace NAMESPACE with your target namespace.

    The cluster reaches Ready status when all nodes are running.
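    To watch the underlying resources come up, you can also check the CassandraDatacenter resource and the database pods. The label shown assumes the default labels that cass-operator applies to pods it manages:

    kubectl get cassandradatacenter -n NAMESPACE
    kubectl get pods -n NAMESPACE -l cassandra.datastax.com/cluster=test-cluster

    Replace NAMESPACE with your target namespace.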

Troubleshooting

You might encounter the following issues during installation or cluster creation.

Authentication fails

If you can’t log in to the UI, do the following:

  1. Check the Dex pod logs:

    kubectl logs -n mission-control -l app.kubernetes.io/name=dex
  2. Verify that the Dex configuration includes enablePasswordDB: true and valid static passwords.

  3. Restart the Dex pod, then the UI pod:

    kubectl delete pod -n mission-control -l app.kubernetes.io/name=dex
    kubectl delete pod -n mission-control -l app.kubernetes.io/name=mission-control-ui

Image pull failures

If database pods fail with ImagePullBackOff, do the following:

  1. Verify that the image pull secret exists:

    kubectl get secret mission-control-registry -n NAMESPACE
  2. Check that the secret is referenced in the K8ssandraCluster spec under imagePullSecrets.

  3. Verify that your license ID is valid.
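To double-check what the pull secret contains, you can decode its Docker configuration. The output should show proxy.replicated.com with your license ID as both the username and the password:

kubectl get secret mission-control-registry -n NAMESPACE -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d

Replace NAMESPACE with the namespace where the database cluster is deployed.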

Resource exhaustion

If pods fail to schedule or the cluster becomes unresponsive, do the following:

  1. Check node resource usage:

    kubectl top nodes
  2. Reduce database cluster size to 1 node.

  3. Reduce resource requests in your values.yaml file.

  4. Restart the kind cluster:

    kind delete cluster --name mission-control
    kind create cluster --config kind-cluster-config.yaml

Port forward disconnects

If the port forward to the UI disconnects frequently, do the following:

  1. Run the port forward in the background:

    kubectl port-forward -n mission-control svc/mission-control-ui 9091:8080 &
  2. Use a NodePort service instead:

    kubectl patch svc mission-control-ui -n mission-control -p '{"spec":{"type":"NodePort"}}'
  3. Get the NodePort:

    kubectl get svc mission-control-ui -n mission-control
  4. Access the UI using the node address and the NodePort, as described below.
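With kind, the node address is the IP of the node's Docker container rather than the Docker host itself. The following sketch assumes the default kind Docker network name and the cluster name used in this guide; on macOS or Windows, container IPs aren't routable from the host, so use extraPortMappings in the kind configuration instead:

kubectl get svc mission-control-ui -n mission-control -o jsonpath='{.spec.ports[0].nodePort}'
docker inspect -f '{{.NetworkSettings.Networks.kind.IPAddress}}' mission-control-control-plane

With Podman, use podman inspect in place of docker inspect.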

API server timeouts

If kubectl commands time out with connection errors, do the following:

  1. Check the control plane status:

    docker ps --filter name=mission-control-control-plane

    Or with Podman:

    podman ps --filter name=mission-control-control-plane
  2. Restart the control plane container if needed:

    docker restart mission-control-control-plane

    Or with Podman:

    podman restart mission-control-control-plane
  3. Wait 30-60 seconds for the API server to become responsive.

Pods stuck in Pending

If database pods remain in Pending state, do the following:

  1. Check pod events:

    kubectl describe pod POD_NAME -n NAMESPACE
  2. Common causes:

    • Insufficient resources: Reduce memory and CPU requests in the K8ssandraCluster spec

    • Missing node labels: Verify that database nodes have the mission-control.datastax.com/role: database label (see the labeling example after this list)

    • Storage issues: Check that the StorageClass exists and can provision volumes

  3. Reduce resource requests if needed:

    kubectl patch k8ssandracluster CLUSTER_NAME -n NAMESPACE \
      --type='json' \
      -p='[{"op": "replace", "path": "/spec/cassandra/datacenters/0/resources/requests/memory", "value": "1Gi"},
           {"op": "replace", "path": "/spec/cassandra/datacenters/0/resources/requests/cpu", "value": "500m"}]'

Replace the following:

  • CLUSTER_NAME: The name of your K8ssandraCluster

  • NAMESPACE: The namespace where the cluster is deployed
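If a worker node is missing its role label, you can add it directly. The node name below is one of the database workers from the kind configuration in this guide; substitute the node that needs the label:

kubectl label node mission-control-worker4 mission-control.datastax.com/role=database --overwrite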

SeaweedFS connection failures

If Mission Control components can’t connect to SeaweedFS, do the following:

  1. Verify that SeaweedFS is running:

    kubectl get pods -n seaweedfs
  2. Check the SeaweedFS service:

    kubectl get svc -n seaweedfs
  3. Test S3 connectivity from within the cluster:

    kubectl run -it --rm debug --image=amazon/aws-cli --restart=Never -- \
      aws --endpoint-url=http://seaweedfs.seaweedfs.svc.cluster.local:8333 \
      s3 ls
  4. Verify that buckets exist:

    kubectl logs -n seaweedfs job/create-s3-buckets

Certificate errors

If you encounter certificate validation errors, do the following:

  1. Verify that cert-manager is running:

    kubectl get pods -n cert-manager
  2. Check certificate status:

    kubectl get certificates -n mission-control
  3. View certificate details:

    kubectl describe certificate CERTIFICATE_NAME -n mission-control
  4. If certificates fail to issue, check cert-manager logs:

    kubectl logs -n cert-manager -l app=cert-manager
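    A failing Certificate usually has a corresponding CertificateRequest resource that explains why issuance stalled. You can list and inspect those as well:

    kubectl get certificaterequests -n mission-control
    kubectl describe certificaterequest REQUEST_NAME -n mission-control

    Replace REQUEST_NAME with the name of the failing CertificateRequest.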

Clean up

Delete the kind cluster when you finish testing the local Mission Control installation:

kind delete cluster --name mission-control

This removes all Mission Control components and database clusters, along with their stored data.
