Install and configure Mission Control using Helm
Mission Control is packaged as a Helm chart along with its dependencies. While Mission Control is typically deployed through the KOTS Admin Console, you can also install it directly with Helm-based tooling.
Contact DataStax Support for Helm registry access. Only accounts with paid Hyper-Converged Database (HCD) or DataStax Enterprise (DSE) plans can submit support tickets. For information about DataStax products and subscription plans, see the DataStax website.
Helm allows you to configure and template Kubernetes resources. It provides both an API and CLI for deploying resources. You can use Helm to install and manage Mission Control in both online and air gap environments. Helm offers more customization and detailed configuration of Mission Control components and its Kubernetes resources than the KOTS Admin Console.
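For example, after logging in to the Helm registry (see Install Mission Control with Helm below), you can pull the chart's default values for inspection before customizing them. A minimal sketch, using the chart reference from the install command later on this page:
helm show values oci://registry.replicated.com/mission-control/mission-control > default-values.yaml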
Prerequisites
To install Mission Control using Helm, you need the following:
- A prepared bare-metal/VM or pre-existing Kubernetes environment where you will install and configure Mission Control.
- A downloaded Mission Control license file.
  Mission Control requires a license file to provide Kubernetes Off-The-Shelf (KOTS) or Helm with information required for installation, including customer identifiers, software update channels, and entitlements.
  Are you exploring Mission Control as a solution for your organization? Fill out this registration form to request a community edition license.
  If you need a replacement license file or a non-community edition, or want to convert your Public Preview license to a stable channel release version, contact your account team.
- Helm installed.
- Access to the Helm registry.
Configuration values
You can configure a Helm chart installation using command-line flags or a supplied values.yaml file.
Helm structures its values.yaml file with the top-level chart's values placed at the root of the file and sub-charts placed under a top-level key matching the chart's name or alias.
DataStax recommends placing all required configuration values within a values.yaml file. This allows simple versioning and configuration iteration.
You can download a sample values.yaml file with default Mission Control settings.
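For quick experiments, you can also layer command-line overrides on top of a values file; flags passed with --set take precedence over values from -f. A minimal sketch, reusing the chart reference from the install command later on this page:
helm install mission-control oci://registry.replicated.com/mission-control/mission-control \
  --namespace mission-control --create-namespace \
  -f values.yaml \
  --set controlPlane=true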
Common configurations
Helm simplifies the management of Kubernetes applications by using configuration files to define and deploy resources.
Using Helm's values.yaml file, you can customize the settings for your deployment and configure each component to your specific needs.
The following sections describe common configuration keys and values that you might use for each chart and its dependencies. Links to default templates are provided as a reference for additional configuration options.
Mission Control
# -- Determines if the mission-control-operator should be installed as the control plane
# or if it's simply in a secondary cluster waiting to be promoted
controlPlane: true
disableCertManagerCheck: false
# -- Node labels for operator pod assignment.
# Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
#
nodeSelector:
  mission-control.datastax.com/role: platform
# -- Node affinity for operator pod assignment.
allowOperatorsOnDatabaseNodes: false
client:
  # -- Automatically handle CRD upgrades
  manageCrds: true
ui:
  enabled: true
  service:
    nodePort: 30880
  https:
    # -- Enable HTTPS for the UI using a self-signed certificate
    enabled: true
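The nodeSelector above schedules operator pods only onto nodes that carry the matching label. A minimal sketch for applying that label, where NODE_NAME is a placeholder for one of your platform nodes:
kubectl label node NODE_NAME mission-control.datastax.com/role=platform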
Sub-chart configuration
This section details the configuration options for various sub-charts included in Mission Control.
Refer to the upstream Helm chart repository for each sub-chart for a complete list of available configuration options.
Dex IdP
You can find Dex IdP upstream configuration keys in the Dex IdP Helm chart repo.
Place these entries under the dex key in your values.yaml file.
dex:
  config:
    enablePasswordDB: true
    staticPasswords:
      - email: admin@example.com
        hash: "HASH"
        username: admin
        userID: "USER_ID"
Replace the following:
- HASH: The bcrypt hash of the password. On *nix systems, you can generate it with the following command:
  echo yourPassword | htpasswd -BinC 10 admin | cut -d: -f2
- USER_ID: The ID of the user.
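As a convenience, the following shell sketch produces both values in one go; it assumes htpasswd (from apache2-utils or httpd-tools) and uuidgen are available on your system:
# Bcrypt-hash the password with cost factor 10
HASH=$(echo yourPassword | htpasswd -BinC 10 admin | cut -d: -f2)
# Any unique string works as the user ID; a UUID is a common choice
USER_ID=$(uuidgen)
echo "hash: $HASH userID: $USER_ID"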
Grafana
Grafana upstream configuration keys are available in the Grafana Helm chart repo.
Place these entries under the grafana key in your values.yaml file.
grafana:
  enabled: false
K8ssandra operator
Place these entries under the k8ssandra-operator key in your values.yaml file.
k8ssandra-operator:
  disableCrdUpgraderJob: true
  cass-operator:
    disableCertManagerCheck: true
Loki
Loki upstream configuration keys are available in the Loki Helm chart repo.
Place these entries under the loki key in your values.yaml file.
loki:
  enabled: true
  loki:
    storage:
      bucketNames:
        chunks: my_loki_chunks_bucket
    limits_config:
      retention_period: 7d
  read:
    persistence:
      enabled: true
      size: 10Gi
      storageClassName: ""
    replicas: 1
  write:
    persistence:
      enabled: true
      size: 10Gi
      storageClassName: ""
    replicas: 1
  backend:
    replicas: 1
To back Loki with an S3 bucket, adjust the previous configuration as follows:
loki:
  enabled: true
  loki:
    storage:
      bucketNames:
        chunks: <S3 BUCKET NAME>
      s3:
        accessKeyId: <AWS ACCESS KEY ID>
        endpoint: s3.<AWS REGION>.amazonaws.com
        insecure: false
        region: <AWS REGION>
        s3: s3.<AWS REGION>.amazonaws.com
        s3ForcePathStyle: false
        secretAccessKey: <AWS ACCESS SECRET KEY>
      type: s3
  read:
    persistence:
      enabled: true
      size: 10Gi
      storageClassName: ""
    replicas: 1
  write:
    persistence:
      enabled: true
      size: 10Gi
      storageClassName: ""
    replicas: 1
  backend:
    replicas: 1
Replace the following:
- <S3 BUCKET NAME>: The name of your S3 bucket.
- <AWS ACCESS KEY ID>: Your AWS access key ID.
- <AWS REGION>: The AWS region for your S3 bucket and endpoint.
- <AWS ACCESS SECRET KEY>: Your AWS secret access key.
To back Loki with a GCS bucket, first create a secret named loki-secrets in the mission-control namespace with the GCP service account JSON stored under a gcp_service_account.json key:
apiVersion: v1
kind: Secret
metadata:
  name: loki-secrets
  namespace: mission-control
data:
  gcp_service_account.json: >-
    IHsgICAid..........vbSIgfQ==
type: Opaque
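Rather than base64-encoding the JSON by hand, you can have kubectl build the equivalent secret directly from the service account key file. A sketch, assuming the key is saved locally as gcp_service_account.json:
kubectl create secret generic loki-secrets \
  --namespace mission-control \
  --from-file=gcp_service_account.json=./gcp_service_account.json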
Then adjust the Loki values using the following snippet. If you created the secret under a different name, update the references to loki-secrets accordingly.
loki:
  backend:
    extraEnv:
      - name: GOOGLE_APPLICATION_CREDENTIALS
        value: /etc/loki_secrets/gcp_service_account.json
    extraVolumeMounts:
      - mountPath: /etc/loki_secrets
        name: loki-secrets
    extraVolumes:
      - name: loki-secrets
        secret:
          items:
            - key: gcp_service_account.json
              path: gcp_service_account.json
          secretName: loki-secrets
    persistence:
      size: 30Gi
      storageClass: ""
      volumeClaimsEnabled: "1"
  enabled: true
  gateway:
    extraEnv:
      - name: GOOGLE_APPLICATION_CREDENTIALS
        value: /etc/loki_secrets/gcp_service_account.json
    extraVolumeMounts:
      - mountPath: /etc/loki_secrets
        name: loki-secrets
    extraVolumes:
      - name: loki-secrets
        secret:
          items:
            - key: gcp_service_account.json
              path: gcp_service_account.json
          secretName: loki-secrets
  loki:
    commonConfig:
      replication_factor: 1
    compactor:
      retention_enabled: true
      shared_store: gcs
      working_directory: /var/loki/retention
    limits_config:
      ingestion_burst_size_mb: 2000
      ingestion_rate_mb: 1000
      retention_period: 7d
    rulerConfig:
      storage:
        local:
          directory: /var/loki/rules
        type: local
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL
      readOnlyRootFilesystem: true
      runAsNonRoot: true
    storage:
      bucketNames:
        chunks: <GCS BUCKET NAME>
      gcs:
        insecure: true
      type: gcs
  read:
    extraEnv:
      - name: GOOGLE_APPLICATION_CREDENTIALS
        value: /etc/loki_secrets/gcp_service_account.json
    extraVolumeMounts:
      - mountPath: /etc/loki_secrets
        name: loki-secrets
    extraVolumes:
      - name: loki-secrets
        secret:
          items:
            - key: gcp_service_account.json
              path: gcp_service_account.json
          secretName: loki-secrets
    persistence:
      enabled: "1"
      size: 30Gi
      storageClassName: ""
    replicas: 1
  sidecar:
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL
      readOnlyRootFilesystem: true
      runAsNonRoot: true
  write:
    extraEnv:
      - name: GOOGLE_APPLICATION_CREDENTIALS
        value: /etc/loki_secrets/gcp_service_account.json
    extraVolumeMounts:
      - mountPath: /etc/loki_secrets
        name: loki-secrets
    extraVolumes:
      - name: loki-secrets
        secret:
          items:
            - key: gcp_service_account.json
              path: gcp_service_account.json
          secretName: loki-secrets
    persistence:
      size: 30Gi
      storageClass: ""
      volumeClaimsEnabled: "1"
    replicas: 1
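After applying these values, you can confirm that the Loki components restart cleanly. A quick check, assuming the release runs in the mission-control namespace:
kubectl get pods -n mission-control | grep loki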
Mimir
Mimir upstream configuration keys are available in the Mimir Helm chart repo.
Place these entries under the mimir key in your values.yaml file.
mimir:
  alertmanager:
    enabled: true
    extraArgs:
      alertmanager-storage.backend: local
      alertmanager-storage.local.path: /etc/alertmanager/config
      alertmanager.configs.fallback: /etc/alertmanager/config/default.yml
      alertmanager.sharding-ring.replication-factor: "2"
    extraVolumeMounts:
      - mountPath: /etc/alertmanager/config
        name: alertmanager-config
      - mountPath: /alertmanager
        name: alertmanager-config-tmp
    extraVolumes:
      - name: alertmanager-config
        secret:
          secretName: alertmanager-config
      - emptyDir: {}
        name: alertmanager-config-tmp
    persistentVolume:
      accessModes:
        - ReadWriteOnce
      enabled: "1"
      size: 10Gi
    replicas: "2"
    resources:
      limits:
        memory: 2Gi
      requests:
        cpu: 100m
        memory: 128Mi
  ingester:
    extraArgs:
      ingester.max-global-series-per-user: "0"
      ingester.ring.replication-factor: "1"
    persistentVolume:
      size: 64Gi
    replicas: "1"
    resources:
      limits:
        memory: 2Gi
      requests:
        cpu: 100m
        memory: 128Mi
  store_gateway:
    persistentVolume:
      size: 64Gi
  compactor:
    extraArgs:
      compactor.blocks-retention-period: 30d
    persistentVolume:
      enabled: "1"
      size: 64Gi
    replicas: "1"
    resources:
      limits:
        memory: 2Gi
      requests:
        cpu: 100m
        memory: 128Mi
  distributor:
    extraArgs:
      ingester.ring.replication-factor: "1"
    replicas: "1"
    resources:
      limits:
        memory: 2Gi
      requests:
        cpu: 100m
        memory: 128Mi
  mimir:
    structuredConfig:
      activity_tracker:
        filepath: /data/activity.log
      limits:
        ingestion_burst_size: 100000
        ingestion_rate: 50000
        max_label_names_per_series: 120
        out_of_order_time_window: 5m
The above values omit the configuration of a storage backend; augment them with one of the following backends. To use a GCS bucket to store metrics, update the above definition as follows and modify the placeholders to match your system:
mimir:
  structuredConfig:
    activity_tracker:
      filepath: /data/activity.log
    blocks_storage:
      backend: gcs
      bucket_store:
        sync_dir: /data/tsdb-sync
      gcs:
        bucket_name: <GCS BUCKET NAME>
        service_account: '<GCP SERVICE ACCOUNT JSON CONTENT>'
      tsdb:
        dir: /data/tsdb
    limits:
      ingestion_burst_size: 100000
      ingestion_rate: 50000
      max_label_names_per_series: 120
      out_of_order_time_window: 5m
Replace the following:
- <GCS BUCKET NAME>: The name of your GCS bucket.
- <GCP SERVICE ACCOUNT JSON CONTENT>: The JSON content for your GCP service account.
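To keep the service account JSON out of a versioned values.yaml file, one option is Helm's --set-file flag, which injects the file content at install or upgrade time. A sketch, assuming the key is saved locally as sa.json and the key path shown in the snippet above:
helm upgrade mission-control oci://registry.replicated.com/mission-control/mission-control \
  --namespace mission-control -f values.yaml \
  --set-file mimir.structuredConfig.blocks_storage.gcs.service_account=sa.json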
To use an S3 bucket to store metrics:
mimir:
  structuredConfig:
    activity_tracker:
      filepath: /data/activity.log
    blocks_storage:
      backend: s3
      bucket_store:
        sync_dir: /data/tsdb-sync
      s3:
        access_key_id: <AWS ACCESS KEY ID>
        bucket_name: <S3 BUCKET NAME>
        endpoint: s3.<AWS REGION>.amazonaws.com
        insecure: false
        secret_access_key: <AWS SECRET ACCESS KEY>
      tsdb:
        dir: /data/tsdb
    limits:
      ingestion_burst_size: 100000
      ingestion_rate: 50000
      max_label_names_per_series: 120
      out_of_order_time_window: 5m
Replace the following:
- <AWS ACCESS KEY ID>: Your AWS access key ID.
- <S3 BUCKET NAME>: The name of your S3 bucket.
- <AWS REGION>: The AWS region for your S3 endpoint.
- <AWS SECRET ACCESS KEY>: Your AWS secret access key.
Vector
The Vector chart is deployed multiple times in different contexts. Each instantiation has a different alias, allowing for multiple configurations.
Place these entries under the agent and aggregator keys in your values.yaml file.
Agent
Vector running in agent mode collects structured logs from each Kubernetes worker and the underlying container runtime, passing them along to the centralized aggregator.
agent:
  enabled: true
Aggregator
Vector running in aggregator mode collects and processes all metrics and logs before sending them to downstream persistence systems—for example, Mimir, Loki, and external sinks.
aggregator:
  enabled: true
  service:
    type: NodePort
    ports:
      - name: vector
        protocol: TCP
        port: 6000
        targetPort: 6000
        nodePort: 30600
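With this NodePort service, external agents (for example, Vector running on bare-metal database nodes) can ship telemetry to port 30600 on any Kubernetes worker. A quick reachability sketch from such a node, where NODE_IP is a placeholder for a worker address:
nc -vz NODE_IP 30600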
Configure air-gapped Helm installations
To install Mission Control using Helm in an air-gapped environment, you must override the coordinates of all images in the values.yaml file.
Update the placeholders to match your registry and the namespace within that registry. All images must be loaded into the private registry beforehand. Image tags evolve across versions and must be updated as well.
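Loading images typically means pulling each one on a host with internet access, retagging it for your registry, and pushing it. A minimal sketch for a single image, using the operator image from the sample below as an example and assuming it is pullable from your build host:
docker pull datastax/mission-control:v1.4.0
docker tag datastax/mission-control:v1.4.0 <REGISTRY_ADDRESS>:<REGISTRY_PORT>/datastax/mission-control:v1.4.0
docker push <REGISTRY_ADDRESS>:<REGISTRY_PORT>/datastax/mission-control:v1.4.0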
Here’s a sample values file to use as a base:
# -- Determines if the mission-control-operator should be installed as the control plane
# or if it's simply in a secondary cluster waiting to be promoted
controlPlane: true
disableCertManagerCheck: false
image:
  registry: <REGISTRY_ADDRESS>:<REGISTRY_PORT>
  repository: datastax/mission-control
  pullPolicy: IfNotPresent
  tag: v1.4.0
imageConfigs:
  registryOverride: <REGISTRY_ADDRESS>:<REGISTRY_PORT>
  reaper:
    repository: thelastpickle/cassandra-reaper
  medusa:
    repository: k8ssandra/medusa
# -- Node affinity for operator pod assignment.
allowOperatorsOnDatabaseNodes: false
client:
  # -- Automatically handle CRD upgrades
  manageCrds: true
  image:
    registry: <REGISTRY_ADDRESS>:<REGISTRY_PORT>
    repository: k8ssandra/k8ssandra-client
    tag: latest
# -- Configuration of the job that runs at installation time to patch the conversion webhook in the CRD.
crdPatchJob:
  image:
    registry: <REGISTRY_ADDRESS>:<REGISTRY_PORT>
    repository: bitnami/kubectl
    tag: 1.30.1
ui:
  enabled: true
  # -- Base URL that client browsers will use to access the UI.
  # If Dex only uses static passwords and/or the LDAP connector, this can be left empty, and the UI will work via any
  # routable URL.
  # If Dex uses an external provider (e.g. OIDC), this must be set, and the UI can only be accessed via this canonical
  # URL.
  baseUrl: ''
  image:
    registry: <REGISTRY_ADDRESS>:<REGISTRY_PORT>
    repository: datastax/mission-control-ui
    tag: v1.4.0
  service:
    nodePort: 30880
  https:
    enabled: true
# https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml
grafana:
  enabled: true
  imageRegistry: <REGISTRY_ADDRESS>:<REGISTRY_PORT>
  image:
    repository: <REGISTRY_ADDRESS>:<REGISTRY_PORT>/grafana/grafana
  sidecar:
    image:
      repository: <REGISTRY_ADDRESS>:<REGISTRY_PORT>/kiwigrid/k8s-sidecar
  downloadDashboardsImage:
    repository: <REGISTRY_ADDRESS>:<REGISTRY_PORT>/curlimages/curl
  initChownData:
    image:
      repository: <REGISTRY_ADDRESS>:<REGISTRY_PORT>/busybox
  plugins: []
# https://github.com/k8ssandra/k8ssandra-operator/blob/main/charts/k8ssandra-operator/values.yaml
k8ssandra-operator:
  image:
    registry: <REGISTRY_ADDRESS>:<REGISTRY_PORT>
  cass-operator:
    image:
      registry: <REGISTRY_ADDRESS>:<REGISTRY_PORT>
    imageConfig:
      systemLogger: <REGISTRY_ADDRESS>:<REGISTRY_PORT>/k8ssandra/system-logger:v1.22.1
      configBuilder: <REGISTRY_ADDRESS>:<REGISTRY_PORT>/datastax/cass-config-builder:1.0-ubi8
      k8ssandraClient: <REGISTRY_ADDRESS>:<REGISTRY_PORT>/k8ssandra/k8ssandra-client:v0.5.0
loki:
  #enabled: false
  kubectlImage:
    registry: <REGISTRY_ADDRESS>:<REGISTRY_PORT>
  sidecar:
    image:
      repository: <REGISTRY_ADDRESS>:<REGISTRY_PORT>/kiwigrid/k8s-sidecar
  global:
    image:
      registry: <REGISTRY_ADDRESS>:<REGISTRY_PORT>
  minio:
    image:
      repository: <REGISTRY_ADDRESS>:<REGISTRY_PORT>/minio/minio
    mcImage:
      repository: <REGISTRY_ADDRESS>:<REGISTRY_PORT>/minio/mc
  loki:
    storage:
      type: "s3"
      s3:
        region: eu-west-1
      bucketNames:
        chunks: chunks-bucket
    limits_config:
      retention_period: 7d
  read:
    persistence:
      enabled: true
      size: 10Gi
    replicas: 1
  write:
    persistence:
      enabled: true
      size: 10Gi
    replicas: 1
  backend:
    replicas: 1
mimir:
  alertmanager:
    enabled: true
    extraArgs:
      alertmanager-storage.backend: local
      alertmanager-storage.local.path: /etc/alertmanager/config
      alertmanager.configs.fallback: /etc/alertmanager/config/default.yml
      alertmanager.sharding-ring.replication-factor: "2"
    extraVolumeMounts:
      - mountPath: /etc/alertmanager/config
        name: alertmanager-config
      - mountPath: /alertmanager
        name: alertmanager-config-tmp
    extraVolumes:
      - name: alertmanager-config
        secret:
          secretName: alertmanager-config
      - emptyDir: {}
        name: alertmanager-config-tmp
    persistentVolume:
      accessModes:
        - ReadWriteOnce
      enabled: "1"
      size: 10Gi
    replicas: "2"
    resources:
      limits:
        memory: 2Gi
      requests:
        cpu: 100m
        memory: 128Mi
  ingester:
    extraArgs:
      ingester.max-global-series-per-user: "0"
      ingester.ring.replication-factor: "1"
    persistentVolume:
      size: 64Gi
    replicas: "1"
    resources:
      limits:
        memory: 2Gi
      requests:
        cpu: 100m
        memory: 128Mi
  store_gateway:
    persistentVolume:
      size: 64Gi
      enabled: "1"
    replicas: "1"
    resources:
      limits:
        memory: 2Gi
      requests:
        cpu: 100m
        memory: 128Mi
  compactor:
    extraArgs:
      compactor.blocks-retention-period: 30d
    persistentVolume:
      enabled: "1"
      size: 64Gi
    replicas: "1"
    resources:
      limits:
        memory: 2Gi
      requests:
        cpu: 100m
        memory: 128Mi
  ruler:
    enabled: true
    extraArgs:
      ingester.ring.replication-factor: "1"
      ruler-storage.backend: local
      ruler-storage.local.directory: /etc/rules
      ruler.alertmanager-url: http://mission-control-mimir-alertmanager:8080/alertmanager
      ruler.query-frontend.address: mission-control-mimir-query-frontend:9095
    extraVolumeMounts:
      - mountPath: /etc/rules/anonymous
        name: ruler-config
    extraVolumes:
      - configMap:
          defaultMode: 420
          name: ruler-config
        name: ruler-config
    replicas: "1"
    resources:
      limits:
        memory: 2Gi
      requests:
        cpu: 100m
        memory: 128Mi
  distributor:
    extraArgs:
      ingester.ring.replication-factor: "1"
    replicas: "1"
    resources:
      limits:
        memory: 2Gi
      requests:
        cpu: 100m
        memory: 128Mi
  image:
    repository: <REGISTRY_ADDRESS>:<REGISTRY_PORT>/grafana/mimir
  memcached:
    image:
      repository: <REGISTRY_ADDRESS>:<REGISTRY_PORT>/memcached
  memcachedExporter:
    image:
      repository: <REGISTRY_ADDRESS>:<REGISTRY_PORT>/prom/memcached-exporter
  nginx:
    image:
      registry: <REGISTRY_ADDRESS>:<REGISTRY_PORT>
  gateway:
    nginx:
      image:
        registry: <REGISTRY_ADDRESS>:<REGISTRY_PORT>
  enterprise:
    image:
      repository: <REGISTRY_ADDRESS>:<REGISTRY_PORT>/grafana/enterprise-metrics
  mcImage:
    repository: <REGISTRY_ADDRESS>:<REGISTRY_PORT>/minio/mc
  mimir:
    structuredConfig:
      activity_tracker:
        filepath: /data/activity.log
      limits:
        ingestion_burst_size: 100000
        ingestion_rate: 50000
        max_label_names_per_series: 120
        out_of_order_time_window: 5m
agent:
  image:
    repository: <REGISTRY_ADDRESS>:<REGISTRY_PORT>/timberio/vector
aggregator:
  image:
    repository: <REGISTRY_ADDRESS>:<REGISTRY_PORT>/timberio/vector
replicated:
  enabled: false
  images:
    replicated-sdk: <REGISTRY_ADDRESS>:<REGISTRY_PORT>/replicated/replicated-sdk:v1.0.0-beta.14
kube-state-metrics:
  image:
    registry: <REGISTRY_ADDRESS>:<REGISTRY_PORT>
dex:
  image:
    repository: <REGISTRY_ADDRESS>:<REGISTRY_PORT>/datastax/mission-control-dex
  config:
    enablePasswordDB: true
    staticPasswords:
      - email: admin@example.com
        hash: "HASH"
        username: admin
        userID: "USER_ID"
Replace the following:
- <REGISTRY_ADDRESS>: The address of your registry.
- <REGISTRY_PORT>: The port of your registry.
- HASH: The bcrypt hash of the password. On *nix systems, you can generate it with the following command:
  echo yourPassword | htpasswd -BinC 10 admin | cut -d: -f2
- USER_ID: The ID of the user.
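Before installing, you can render the chart locally and confirm that every image coordinate resolves to your private registry. A sketch, assuming you are already logged in to the Helm registry:
helm template mission-control oci://registry.replicated.com/mission-control/mission-control \
  -f values.yaml | grep 'image:' | sort -u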
Install Mission Control with Helm
To install Mission Control with Helm, do the following:
- If you haven't done so already, get your Helm registry credentials from DataStax Support.
- Log in to the Helm registry:
  helm registry login registry.replicated.com --username 'HELM_INSTALL_EMAIL_ADDRESS' --password 'HELM_INSTALL_PASSWORD'
  Replace the following:
  - HELM_INSTALL_EMAIL_ADDRESS: The email address for Helm-based installations.
  - HELM_INSTALL_PASSWORD: The Mission Control license ID.
- Create your values.yaml file or use the default DataStax file.
- Install Mission Control using your registry credentials:
  helm install mission-control oci://registry.replicated.com/mission-control/mission-control --namespace mission-control --create-namespace -f values.yaml
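After the installation completes, you can verify that the release deployed and its pods are starting:
helm list -n mission-control
kubectl get pods -n mission-control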
Upgrade Mission Control with Helm
Run the following command to upgrade Mission Control:
helm upgrade mission-control oci://registry.replicated.com/mission-control/mission-control --namespace mission-control -f values.yaml
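Before upgrading, it can help to compare your values file against the defaults shipped with the newer chart so you can spot new or renamed keys. A sketch:
helm show values oci://registry.replicated.com/mission-control/mission-control > new-default-values.yaml
diff values.yaml new-default-values.yaml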