Install Mission Control with Helm
Helm provides a package manager for Kubernetes applications, enabling you to install, upgrade, and manage Mission Control configurations efficiently. Use this installation method when you need fine-grained control over the installation process and want to integrate with GitOps workflows.
|
Contact IBM Support for Helm registry access. Only accounts with paid Hyper-Converged Database (HCD) or DataStax Enterprise (DSE) plans can submit support tickets. For information about DataStax products and subscription plans, see the DataStax products page. |
If you need to install Mission Control in environments where cluster-scoped resources must be managed separately from the main installation, see Install Mission Control with Helm using separate cluster resources.
For organizations that require separation of cluster administration and application management, see Install Kubernetes cluster-level resources separately for information about installing Kubernetes cluster-level resources in a separate chart.
Prerequisites
Before you begin, ensure you have:
-
A prepared installation environment on your existing Kubernetes cluster.
-
A downloaded Mission Control license file.
Mission Control requires a license file to provide Kubernetes Off-The-Shelf (KOTS) or Helm with the information required for installation, including customer identifiers, software update channels, and entitlements.
Contact your sales representative or call 888-746-7426 to request a license.
If you need a replacement license file or a non-community edition, or want to convert your Public Preview license to use a stable channel release version, contact your account team.
-
Helm version 3.14.0 to 3.18.0 installed
-
Access to the Helm registry.
For information about security configurations, see Security overrides.
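To confirm that your Helm client is within the supported range listed above, check the version before you continue:

# Prints the Helm client version; it must be within 3.14.0 to 3.18.0.
helm version --short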
Configure pod-to-pod routing
Ensure that all database pods can route to each other. This is a critical requirement for proper operation and data consistency.
The requirement applies to:
-
All database pods within the same region or availability zone.
-
All database pods across different availability zones within the same region.
-
All database pods across different regions for multi-region deployments.
-
All database pods across different racks in the same datacenter.
The way you configure pod-to-pod routing depends on your cluster architecture:
- Single-cluster deployments
-
The cluster’s Container Network Interface (CNI) typically provides pod-to-pod network connectivity for database pods within a single Kubernetes cluster. You usually need no additional configuration beyond standard Kubernetes networking.
- Security considerations for shared clusters
-
If your database cluster shares a Kubernetes cluster with other applications, implement security controls to prevent unauthorized access to database internode ports (7000/7001):
-
NetworkPolicy isolation (required): Use Kubernetes NetworkPolicy to restrict access to internode ports to only authorized database pods. NetworkPolicy prevents other applications in the cluster from accessing these ports even if underlying firewall rules are broad.
-
Internode TLS encryption (required): Enable internode TLS to protect data in transit and prevent unauthorized nodes from joining the cluster.
-
Dedicated node pools (recommended): Consider dedicated node pools or subnets for database workloads to enable more granular firewall controls at the infrastructure level.
-
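The following is a minimal sketch of the NetworkPolicy isolation described above. The namespace (database) and pod label (app.kubernetes.io/name: cassandra) are illustrative assumptions; substitute the namespace and labels that your database pods actually use, and add rules for any other ports your clients and operators require.

# Minimal sketch, not a complete policy: restricts internode ports 7000/7001
# to traffic from other database pods. Namespace and labels are assumptions.
kubectl apply -n database -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-internode-ports
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: cassandra   # assumed database pod label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: cassandra   # only other database pods
      ports:
        - protocol: TCP
          port: 7000
        - protocol: TCP
          port: 7001
EOF

Because a NetworkPolicy that selects a pod denies all ingress it doesn't explicitly allow, extend this sketch with rules for client (CQL) and management traffic before applying it in a shared cluster.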
- Multi-cluster deployments
-
For database pods that span multiple Kubernetes clusters, NetworkPolicy alone doesn’t provide sufficient connectivity. You must establish Layer 3 network connectivity or overlay connectivity between the database pod networks (pod CIDRs or node subnets depending on your deployment). Kubernetes NetworkPolicy operates only within a single cluster boundary and can’t provide cross-cluster connectivity.
-
Choose one of the following approaches to establish pod network connectivity across clusters:
-
Routed pod CIDRs (recommended): Use cloud provider native routing solutions when your platform supports them. This approach provides the best performance and simplest operational model.
-
AWS: VPC Peering, Transit Gateway, or AWS Cloud WAN.
-
Azure: VNet Peering or Virtual WAN.
-
GCP: VPC Peering or Cloud VPN.
-
-
Submariner: Open-source multi-cluster connectivity solution, common in OpenShift multi-cluster deployments. Submariner provides encrypted tunnels between clusters. For more information, see the Submariner documentation.
-
Cilium Cluster Mesh: For clusters that use Cilium CNI. Cilium Cluster Mesh provides native multi-cluster networking and enables pod-to-pod connectivity across clusters. For more information, see the Cilium documentation.
-
-
-
After you establish cross-cluster connectivity, implement the following security measures. Traditional firewall rules alone lack application awareness and can’t distinguish between different pods or services within a cluster. Use Kubernetes NetworkPolicy for pod-level access control within clusters, and combine it with network-level firewalls for defense in depth.
-
Enable internode TLS encryption to protect data in transit between clusters.
-
Configure firewall rules at the network level to restrict traffic between cluster pod CIDRs.
-
Use NetworkPolicy within each cluster to further restrict access to database ports.
-
Consider using dedicated subnets or VPCs for database clusters to enable network-level isolation.
-
-
To verify that pod-to-pod routing has been configured properly, do the following:
-
Test connectivity between database pods using nodetool status or cqlsh.
-
Check that all nodes can see each other in the cluster topology.
-
Monitor for connection errors or timeouts in database logs.
-
Verify that gossip protocol communication functions correctly.
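For example, assuming a database pod named my-cluster-dc1-default-sts-0 running a container named cassandra in a namespace represented by DATABASE_NAMESPACE (all placeholders), the checks could look like this:

# Placeholders: adjust the pod name, container name, and namespace to your deployment.
kubectl exec my-cluster-dc1-default-sts-0 -n DATABASE_NAMESPACE -c cassandra -- nodetool status

# All nodes should report UN (Up/Normal). Add credentials if authentication is enabled.
kubectl exec my-cluster-dc1-default-sts-0 -n DATABASE_NAMESPACE -c cassandra -- cqlsh -e "SELECT peer FROM system.peers;"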
If pod-to-pod routing isn’t implemented correctly, you might experience the following:
-
Connectivity issues between database pods.
-
Cluster instability.
-
Data consistency issues.
-
Failed replication.
-
Incomplete or failed cluster operations.
Configure cert-manager
Mission Control uses cert-manager to issue and automate TLS certificates, so cert-manager is a prerequisite for installing Mission Control.
Before installing Mission Control, you must install and configure cert-manager to ensure proper cleanup of certificate secrets when Mission Control clusters are deleted.
|
For OpenShift environments, install cert-manager from the Operator Hub instead of using Helm. See Install Mission Control on OpenShift for complete OpenShift installation instructions. |
When you delete a MissionControlCluster resource, Mission Control deletes the associated certificate objects, which should in turn delete the certificate secrets.
However, cert-manager's default settings leave the generated secrets behind when you delete the upstream certificates.
To ensure proper cleanup of certificate secrets when you delete Mission Control clusters, configure cert-manager to add ownership references on the secrets it generates.
- Helm installation
-
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version VERSION \
  --set 'extraArgs[0]=--enable-certificate-owner-ref=true'

Replace VERSION with the version of cert-manager you want to install. DataStax recommends version 1.16.1.
- Kustomize installation
-
Add the following argument to the cert-manager deployment in the cert-manager-controller container:

- '--enable-certificate-owner-ref=true'
- OpenShift installation
-
The OpenShift cert-manager Operator doesn't automatically delete certificate secrets when you remove Certificate resources. You must clean up secret resources manually with oc commands.
-
Find the name of the Certificate resources you want to delete:

oc get certificate CERTIFICATE_NAME -n NAMESPACE -o yaml | grep "secretName"

Replace the following:
-
CERTIFICATE_NAME: The name of the Certificate resource
-
NAMESPACE: The namespace of the Certificate resource
-
-
Check if any resources are using the secret:

oc get all -n NAMESPACE -o custom-columns=KIND:.kind,NAME:.metadata.name --all-namespaces | xargs -L1 oc get -n NAMESPACE -o yaml | grep -B 2 -A 5 SECRET_NAME

Replace the following:
-
SECRET_NAME: The name of the secret resource
-
NAMESPACE: The namespace of the secret resource
-
-
If no other resources are using the secret, delete it:

oc delete secret SECRET_NAME -n NAMESPACE

Replace the following:
-
SECRET_NAME: The name of the secret resource
-
NAMESPACE: The namespace of the secret resource
-
-
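For the Helm and Kustomize approaches, you can confirm that the controller is running with the ownership flag. This check assumes the default deployment name and namespace (cert-manager in cert-manager):

# Expect --enable-certificate-owner-ref=true among the container arguments.
kubectl get deployment cert-manager -n cert-manager -o yaml | grep enable-certificate-owner-ref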
Configuration values
You can configure a Helm chart installation using command-line flags or a supplied values.yaml file.
Helm structures its values.yaml file with the top-level chart’s values placed at the root of the file and sub-charts placed under a top-level key matching the chart’s name or alias.
DataStax recommends placing all required configuration values within a values.yaml file.
This allows simple versioning and configuration iteration.
You can download a sample values.yaml file with default Mission Control settings.
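For example, either of the following passes configuration at install time. The chart reference and release name match the installation command later on this page:

# Preferred: keep all overrides in a versioned values.yaml file.
helm install mission-control oci://registry.replicated.com/mission-control/mission-control \
  --namespace mission-control --create-namespace -f values.yaml

# Ad hoc: override a single key on the command line, for example on a secondary cluster.
helm install mission-control oci://registry.replicated.com/mission-control/mission-control \
  --namespace mission-control --create-namespace --set controlPlane=false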
Common configurations
Helm simplifies the management of Kubernetes applications by using configuration files to define and deploy resources.
Using Helm’s values.yaml file, you can customize the settings for your deployment, and configure each component to your specific needs.
The following sections describe common configuration keys and values that you might use for each chart and its dependencies. Links to default templates are provided as a reference for additional configuration options.
# -- Determines if the mission-control-operator should be installed as the control plane
# or if it's simply in a secondary cluster waiting to be promoted
controlPlane: true
disableCertManagerCheck: true
# -- Node labels for operator pod assignment.
# Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
# When set, Mission Control components (UI, API, observability stack) are scheduled on nodes with this label.
# Configure your sub-charts (Loki, Mimir, Grafana) with matching nodeSelector values to ensure all platform components use the same nodes.
nodeSelector:
mission-control.datastax.com/role: platform
# -- Controls whether operator components (like Reaper) can run on database-labeled nodes.
# When false (default), these components require the platform label.
# Set to true to allow operator components to schedule on database nodes.
allowOperatorsOnDatabaseNodes: false
client:
# -- Automatically handle CRD upgrades
manageCrds: true
# -- Platform services ingress configuration
# Enables unified access to Mission Control UI, Grafana, and Vector aggregator through domain-based routing
ingress:
# -- Enable ingress for all platform services
enabled: false
# -- top-level domain for Mission Control interface (e.g., mc.region.example.com)
regionDomain: ""
# -- Wildcard domain for service and database subdomains (for example, *.mc.region.example.com)
# Used for Grafana (grafana.mc.region.example.com) and database (my-cluster.my-project.mc.region.example.com)
wildcardDomain: ""
ui:
enabled: true
# -- Base URL for the UI, required when using Ingress or external authentication (OIDC)
# Example: https://mission-control.example.com
baseUrl: ""
ingress:
# -- Enable Ingress for UI access (recommended for production)
enabled: false
# hosts:
# - host: mission-control.example.com
# paths:
# - path: /
# pathType: Prefix
# tls:
# - secretName: mission-control-tls
# hosts:
# - mission-control.example.com
https:
# -- Enable HTTPS for the UI using a self-signed certificate
enabled: true
Configure custom alerting rules
In version 1.15.0 and later, Mission Control uses a two-tier alerting rules structure that separates automatically managed default rules from user-managed custom rules.
You can configure custom alerting rules through Helm values using the customRules field:
alerting:
customRules: |
- alert: CustomAlert
annotations:
context: ""
description: "Custom alert triggered"
summary: "Custom alert triggered"
expr: your_promql_expression
for: 5m
labels:
group: ""
severity: warning
For more information, see Create custom alerting rules.
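If you want to confirm that your custom rules reached the ruler, one way, assuming the default mission-control release and namespace and the mission-control-ruler-custom-config ConfigMap name used in the sample values later on this page, is to inspect that ConfigMap:

# Assumption: custom rules are rendered into this ConfigMap; the name can differ per release.
kubectl get configmap mission-control-ruler-custom-config -n mission-control -o yaml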
Configure sub-charts
This section details the configuration options for various sub-charts included in Mission Control.
Refer to the upstream Helm chart repository for each sub chart for a complete list of available configuration options.
Dex IdP
You can find Dex IdP upstream configuration keys in the Dex IdP Helm chart repo.
Place these entries under the dex key in your values.yaml file.
dex:
config:
enablePasswordDB: true
staticPasswords:
- email: admin@example.com
hash: "HASH"
username: admin
userID: "USER_ID"
Replace the following:
-
HASH: The bcrypt hash of the password. On *nix systems, you can generate it with the following command:

echo yourPassword | htpasswd -BinC 10 admin | cut -d: -f2
-
USER_ID: The ID of the user
Grafana
You can find Grafana upstream configuration keys in the Grafana Helm chart repo.
Place these entries under the grafana key in your values.yaml file.
grafana:
enabled: false
K8ssandra operator
Place these entries under the k8ssandra-operator key in your values.yaml file.
k8ssandra-operator:
disableCrdUpgraderJob: true
cass-operator:
disableCertManagerCheck: true
Loki
You can find Loki upstream configuration keys in the Loki Helm chart repo.
Place these entries under the loki key in your values.yaml file.
loki:
enabled: true
loki:
storage:
bucketNames:
chunks: LOKI_CHUNKS_BUCKET
limits_config:
retention_period: 7d
read:
persistence:
enabled: true
size: 10Gi
storageClassName: ""
replicas: 1
write:
persistence:
enabled: true
size: 10Gi
storageClassName: ""
replicas: 1
backend:
replicas: 1
The preceding configuration uses a local storage backend. You can augment it with one of the following backends: S3, GCS, or Azure blob storage.
Configure Loki with S3 storage
To back Loki with an S3 bucket using Kubernetes secrets for credentials, do the following:
-
Create a Kubernetes secret with your S3 credentials:
kubectl create secret generic SECRET_NAME -n mission-control \
  --from-literal=SECRET_KEY_ACCESS_KEY_ID=ACCESS_KEY_ID \
  --from-literal=SECRET_KEY_SECRET_ACCESS_KEY=SECRET_ACCESS_KEY

Replace the following:
-
SECRET_NAME: The name of your secret. For example,loki-s3-secrets. -
SECRET_KEY_ACCESS_KEY_ID: The key for your access key ID. For example,s3-access-key-id. -
ACCESS_KEY_ID: Your AWS access key ID. -
SECRET_KEY_SECRET_ACCESS_KEY: The key for your secret access key. For example,s3-secret-access-key. -
SECRET_ACCESS_KEY: Your AWS secret access key.
-
-
Configure the backend section to reference the secret values through environment variables:

loki:
  enabled: true
  loki:
    storage:
      bucketNames:
        chunks: S3_BUCKET_NAME
      s3:
        accessKeyId: "${ENV_VAR_ACCESS_KEY}"
        secretAccessKey: "${ENV_VAR_SECRET_KEY}"
        endpoint: S3_ENDPOINT
        insecure: false
        region: AWS_REGION
        s3: s3.AWS_REGION.amazonaws.com
        s3ForcePathStyle: false
      type: s3
  backend:
    replicas: 1
    extraArgs:
      - '-config.expand-env=true'
    extraEnv:
      - name: ENV_VAR_ACCESS_KEY
        valueFrom:
          secretKeyRef:
            name: SECRET_NAME
            key: SECRET_KEY_ACCESS_KEY_ID
      - name: ENV_VAR_SECRET_KEY
        valueFrom:
          secretKeyRef:
            name: SECRET_NAME
            key: SECRET_KEY_SECRET_ACCESS_KEY

Replace the following:
-
S3_BUCKET_NAME: The name of your S3 bucket. -
ENV_VAR_ACCESS_KEY: The environment variable name for the access key. For example,AWS_ACCESS_KEY_ID. -
ENV_VAR_SECRET_KEY: The environment variable name for the secret key. For example,AWS_SECRET_ACCESS_KEY. -
S3_ENDPOINT: The endpoint for your S3 bucket. This value is optional for Amazon S3 buckets. The default endpoint iss3.AWS_REGION.amazonaws.com. -
AWS_REGION: The AWS region for your S3 bucket. -
SECRET_NAME: The name of your Kubernetes secret. For example,loki-s3-secrets. -
SECRET_KEY_ACCESS_KEY_ID: The key in the secret for the access key ID. For example,s3-access-key-id. -
SECRET_KEY_SECRET_ACCESS_KEY: The key in the secret for the secret access key. For example,s3-secret-access-key.
-
|
The |
For more information on configuring Loki with S3, including service account settings, see the Grafana documentation.
Configure Loki with GCS storage
To back Loki with a GCS bucket, first create a secret named loki-secrets in the mission-control namespace with the GCP service account JSON stored as a gcp_service_account.json key:
apiVersion: v1
kind: Secret
metadata:
name: loki-secrets
namespace: mission-control
data:
gcp_service_account.json: >-
IHsgICAid..........vbSIgfQ==
type: Opaque
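Instead of writing the manifest by hand, you can create an equivalent secret from a local service account key file. The file path is a placeholder:

# Creates the same secret from a local key file; the path is a placeholder.
kubectl create secret generic loki-secrets -n mission-control \
  --from-file=gcp_service_account.json=/path/to/gcp_service_account.json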
Then adjust the values for Loki using the following snippet. If you created the secret with a different name, update the references in the snippet to match.
loki:
backend:
extraEnv:
- name: GOOGLE_APPLICATION_CREDENTIALS
value: /etc/loki_secrets/gcp_service_account.json
extraVolumeMounts:
- mountPath: /etc/loki_secrets
name: loki-secrets
extraVolumes:
- name: loki-secrets
secret:
items:
- key: gcp_service_account.json
path: gcp_service_account.json
secretName: loki-secrets
persistence:
size: 30Gi
storageClass: ""
volumeClaimsEnabled: "1"
enabled: true
gateway:
extraEnv:
- name: GOOGLE_APPLICATION_CREDENTIALS
value: /etc/loki_secrets/gcp_service_account.json
extraVolumeMounts:
- mountPath: /etc/loki_secrets
name: loki-secrets
extraVolumes:
- name: loki-secrets
secret:
items:
- key: gcp_service_account.json
path: gcp_service_account.json
secretName: loki-secrets
loki:
commonConfig:
replication_factor: 1
compactor:
retention_enabled: true
shared_store: gcs
working_directory: /var/loki/retention
limits_config:
ingestion_burst_size_mb: 2000
ingestion_rate_mb: 1000
retention_period: 7d
rulerConfig:
storage:
local:
directory: /var/loki/rules
type: local
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
storage:
bucketNames:
chunks: GCS_BUCKET_NAME
gcs:
insecure: true
type: gcs
read:
extraEnv:
- name: GOOGLE_APPLICATION_CREDENTIALS
value: /etc/loki_secrets/gcp_service_account.json
extraVolumeMounts:
- mountPath: /etc/loki_secrets
name: loki-secrets
extraVolumes:
- name: loki-secrets
secret:
items:
- key: gcp_service_account.json
path: gcp_service_account.json
secretName: loki-secrets
persistence:
enabled: "1"
size: 30Gi
storageClassName: ""
replicas: 1
sidecar:
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
write:
extraEnv:
- name: GOOGLE_APPLICATION_CREDENTIALS
value: /etc/loki_secrets/gcp_service_account.json
extraVolumeMounts:
- mountPath: /etc/loki_secrets
name: loki-secrets
extraVolumes:
- name: loki-secrets
secret:
items:
- key: gcp_service_account.json
path: gcp_service_account.json
secretName: loki-secrets
persistence:
size: 30Gi
storageClass: ""
volumeClaimsEnabled: "1"
replicas: 1
Replace GCS_BUCKET_NAME with the name of your GCS bucket.
Configure Loki with Azure storage
To back Loki with Azure blob storage, adjust the preceding configuration with the following:
loki:
loki:
storage:
type: azure
bucketNames:
chunks: LOKI_BUCKET_NAME
azure:
accountName: STORAGE_ACCOUNT_NAME
accountKey: STORAGE_ACCOUNT_KEY
endpoint_suffix: STORAGE_ACCOUNT_ENDPOINT_SUFFIX
structuredConfig:
storage_config:
azure:
container_name: LOKI_BUCKET_NAME
Replace the following:
-
LOKI_BUCKET_NAME: The name of your Azure blob storage bucket -
STORAGE_ACCOUNT_NAME: The name of your Azure storage account -
STORAGE_ACCOUNT_KEY: The access key for your Azure storage account -
STORAGE_ACCOUNT_ENDPOINT_SUFFIX: The endpoint suffix for your Azure storage account
Mimir
You can find Mimir upstream configuration keys in the Mimir Helm chart repo.
Place these entries under the mimir key in your values.yaml file.
mimir:
alertmanager:
enabled: true
extraArgs:
alertmanager-storage.backend: local
alertmanager-storage.local.path: /etc/alertmanager/config
alertmanager.configs.fallback: /etc/alertmanager/config/default.yml
alertmanager.sharding-ring.replication-factor: "2"
extraVolumeMounts:
- mountPath: /etc/alertmanager/config
name: alertmanager-config
- mountPath: /alertmanager
name: alertmanager-config-tmp
extraVolumes:
- name: alertmanager-config
secret:
secretName: alertmanager-config
- emptyDir: {}
name: alertmanager-config-tmp
persistentVolume:
accessModes:
- ReadWriteOnce
enabled: "1"
size: 10Gi
replicas: "2"
resources:
limits:
memory: 2Gi
requests:
cpu: 100m
memory: 128Mi
ingester:
extraArgs:
ingester.max-global-series-per-user: "0"
ingester.ring.replication-factor: "1"
persistentVolume:
size: 64Gi
replicas: "1"
resources:
limits:
memory: 2Gi
requests:
cpu: 100m
memory: 128Mi
store_gateway:
persistentVolume:
size: 64Gi
compactor:
extraArgs:
compactor.blocks-retention-period: 30d
persistentVolume:
enabled: "1"
size: 64Gi
replicas: "1"
resources:
limits:
memory: 2Gi
requests:
cpu: 100m
memory: 128Mi
distributor:
extraArgs:
ingester.ring.replication-factor: "1"
replicas: "1"
resources:
limits:
memory: 2Gi
requests:
cpu: 100m
memory: 128Mi
mimir:
structuredConfig:
activity_tracker:
filepath: /data/activity.log
limits:
ingestion_burst_size: 100000
ingestion_rate: 50000
max_label_names_per_series: 120
out_of_order_time_window: 5m
The preceding values omit configuration of a storage backend. Augment them with one of the following backends: GCS, S3, or Azure blob storage.
Configure Mimir with GCS storage
To use a GCS bucket to store metrics, update the above definition as follows and modify the placeholders to match your system:
mimir:
structuredConfig:
activity_tracker:
filepath: /data/activity.log
blocks_storage:
backend: gcs
bucket_store:
sync_dir: /data/tsdb-sync
gcs:
bucket_name: GCS_BUCKET_NAME
service_account: 'GCP_SERVICE_ACCOUNT_JSON_CONTENT'
tsdb:
dir: /data/tsdb
limits:
ingestion_burst_size: 100000
ingestion_rate: 50000
max_label_names_per_series: 120
out_of_order_time_window: 5m
Replace the following:
-
GCS_BUCKET_NAME: The name of your GCS bucket -
GCP_SERVICE_ACCOUNT_JSON_CONTENT: The JSON content for your GCP service account
Configure Mimir with S3 storage
To use an S3 bucket to store metrics with Kubernetes secrets for credentials, do the following:
-
Create a Kubernetes secret with your S3 credentials:
kubectl create secret generic SECRET_NAME -n mission-control \
  --from-literal=SECRET_KEY_ACCESS_KEY_ID=ACCESS_KEY_ID \
  --from-literal=SECRET_KEY_SECRET_ACCESS_KEY=SECRET_ACCESS_KEY

Replace the following:
-
SECRET_NAME: The name of your secret. For example,mimir-s3-secrets. -
SECRET_KEY_ACCESS_KEY_ID: The key for your access key ID. For example,s3-access-key-id. -
ACCESS_KEY_ID: Your AWS access key ID. -
SECRET_KEY_SECRET_ACCESS_KEY: The key for your secret access key. For example,s3-secret-access-key. -
SECRET_ACCESS_KEY: Your AWS secret access key.
-
-
Configure Mimir to reference the secret values through environment variables:
mimir:
  structuredConfig:
    activity_tracker:
      filepath: /data/activity.log
    blocks_storage:
      backend: s3
      bucket_store:
        sync_dir: /data/tsdb-sync
      s3:
        access_key_id: "${ENV_VAR_ACCESS_KEY}"
        bucket_name: S3_BUCKET_NAME
        endpoint: s3.AWS_REGION.amazonaws.com
        insecure: false
        secret_access_key: "${ENV_VAR_SECRET_KEY}"
      tsdb:
        dir: /data/tsdb
    limits:
      ingestion_burst_size: 100000
      ingestion_rate: 50000
      max_label_names_per_series: 120
      out_of_order_time_window: 5m
  extraEnvFrom:
    - secretRef:
        name: SECRET_NAME
  extraEnv:
    - name: ENV_VAR_ACCESS_KEY
      valueFrom:
        secretKeyRef:
          name: SECRET_NAME
          key: SECRET_KEY_ACCESS_KEY_ID
    - name: ENV_VAR_SECRET_KEY
      valueFrom:
        secretKeyRef:
          name: SECRET_NAME
          key: SECRET_KEY_SECRET_ACCESS_KEY

Replace the following:
-
S3_BUCKET_NAME: The name of your S3 bucket. -
AWS_REGION: The AWS region for your S3 endpoint. -
ENV_VAR_ACCESS_KEY: The environment variable name for the access key. For example,AWS_ACCESS_KEY_ID. -
ENV_VAR_SECRET_KEY: The environment variable name for the secret key. For example,AWS_SECRET_ACCESS_KEY. -
SECRET_NAME: The name of your Kubernetes secret. For example,mimir-s3-secrets. -
SECRET_KEY_ACCESS_KEY_ID: The key in the secret for the access key ID. For example,s3-access-key-id. -
SECRET_KEY_SECRET_ACCESS_KEY: The key in the secret for the secret access key. For example, s3-secret-access-key.

For more information on configuring Mimir with S3, see the Grafana Mimir documentation.
-
Configure Mimir with Azure storage
To use Azure blob storage to store metrics, update the preceding definition as follows and modify the placeholders to match your system:
mimir:
mimir:
structuredConfig:
common:
storage:
backend: azure
azure:
account_name: STORAGE_ACCOUNT_NAME
account_key: STORAGE_ACCOUNT_KEY
endpoint_suffix: STORAGE_ACCOUNT_ENDPOINT_SUFFIX
blocks_storage:
backend: azure
azure:
container_name: MIMIR_BUCKET_NAME
Replace the following:
-
STORAGE_ACCOUNT_NAME: The name of your Azure storage account -
STORAGE_ACCOUNT_KEY: The access key for your Azure storage account -
STORAGE_ACCOUNT_ENDPOINT_SUFFIX: The endpoint suffix for your Azure storage account -
MIMIR_BUCKET_NAME: The name of your Azure blob storage bucket
Vector
The Vector chart is deployed multiple times in different contexts. Each instantiation has a different alias, allowing for multiple configurations.
Place these entries under the agent and aggregator keys in your values.yaml file.
Agent
Vector running in agent mode collects structured logs from each Kubernetes worker and the underlying container runtime, passing them along to the centralized aggregator.
agent:
enabled: true
Aggregator
Vector running in aggregator mode collects and processes all metrics and logs before sending them to downstream persistence systems like Loki, Mimir, or external sinks.
aggregator:
enabled: true
service:
type: ClusterIP
ports:
- name: vector
protocol: TCP
port: 6000
targetPort: 6000
# For external access to Vector aggregator, use Ingress instead of NodePort:
# ingress:
# enabled: true
# className: nginx
# hosts:
# - host: vector.example.com
# paths:
# - path: /
# pathType: Prefix
|
Starting with v1.18.0, the default Vector aggregator service type changed from NodePort to Ingress for accepting metrics and logs from external data planes. If you have external data planes with observability enabled, you must update your data plane configuration:
|
Configure airgap Helm installations
To install Mission Control using Helm in an airgapped environment, you must override the coordinates of all images in the values.yaml file and configure image pull secrets.
|
Update the placeholders to match your registry and namespace within this registry. All images must be loaded in the private registry beforehand. Image tags will evolve across versions and must be updated as well. |
For registry override locations, see Override registry credentials for airgap installations.
For configuring image pull secrets, see Configure image pull secrets.
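As an illustration of loading images beforehand, the following mirrors a single image into the private registry. The source image reference is a placeholder, and your mirroring tooling may differ:

# Example flow only: pull, retag, and push one image into the private registry.
docker pull SOURCE_REGISTRY/IMAGE_NAME:IMAGE_TAG
docker tag SOURCE_REGISTRY/IMAGE_NAME:IMAGE_TAG REGISTRY_ADDRESS:REGISTRY_PORT/REGISTRY_NAMESPACE/IMAGE_NAME:IMAGE_TAG
docker push REGISTRY_ADDRESS:REGISTRY_PORT/REGISTRY_NAMESPACE/IMAGE_NAME:IMAGE_TAG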
Here’s a sample values file you can use as a base:
# -- Determines if the mission-control-operator should be installed as the control plane
# or if it's simply in a secondary cluster waiting to be promoted
controlPlane: true
disableCertManagerCheck: true
image:
registry: REGISTRY_ADDRESS:REGISTRY_PORT
repository: datastax/mission-control
pullPolicy: IfNotPresent
tag: v1.4.0
imageConfigs:
registryOverride: REGISTRY_ADDRESS:REGISTRY_PORT
reaper:
repository: thelastpickle/cassandra-reaper
medusa:
repository: k8ssandra/medusa
# -- Controls whether operator components (like Reaper) can run on database-labeled nodes.
allowOperatorsOnDatabaseNodes: false
client:
# -- Automatically handle CRD upgrades
manageCrds: true
image:
registry: REGISTRY_ADDRESS:REGISTRY_PORT
repository: k8ssandra/k8ssandra-client
tag: latest
# -- Configuration of the job that runs at installation time to patch the conversion webhook in the CRD.
crdPatchJob:
image:
registry: REGISTRY_ADDRESS:REGISTRY_PORT
repository: bitnami/kubectl
tag: 1.30.1
ui:
enabled: true
# -- Base URL that client browsers will use to access the UI.
# If Dex only uses static passwords and/or the LDAP connector, this can be left empty, and the UI will work via any
# routable URL.
# If Dex uses an external provider (e.g. OIDC), this must be set, and the UI can only be accessed via this canonical
# URL.
baseUrl: ''
image:
registry: REGISTRY_ADDRESS:REGISTRY_PORT
repository: datastax/mission-control-ui
tag: v1.4.0
ingress:
enabled: false
# Configure Ingress for UI access in air-gapped environments
# className: nginx
# hosts:
# - host: mission-control.example.com
# paths:
# - path: /
# pathType: Prefix
https:
enabled: true
# https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml
grafana:
enabled: true
imageRegistry: REGISTRY_ADDRESS:REGISTRY_PORT
image:
repository: REGISTRY_ADDRESS:REGISTRY_PORT/REGISTRY_NAMESPACE/grafana
sidecar:
image:
repository: REGISTRY_ADDRESS:REGISTRY_PORT/REGISTRY_NAMESPACE/k8s-sidecar
downloadDashboardsImage:
repository: REGISTRY_ADDRESS:REGISTRY_PORT/REGISTRY_NAMESPACE/curl
initChownData:
image:
repository: REGISTRY_ADDRESS:REGISTRY_PORT/REGISTRY_NAMESPACE
plugins: []
# https://github.com/k8ssandra/k8ssandra-operator/blob/main/charts/k8ssandra-operator/values.yaml
k8ssandra-operator:
image:
registry: REGISTRY_ADDRESS:REGISTRY_PORT
cass-operator:
image:
registry: REGISTRY_ADDRESS:REGISTRY_PORT
imageConfig:
systemLogger: REGISTRY_ADDRESS:REGISTRY_PORT/REGISTRY_NAMESPACE/system-logger:v1.22.1
configBuilder: REGISTRY_ADDRESS:REGISTRY_PORT/REGISTRY_NAMESPACE/cass-config-builder:1.0-ubi8
k8ssandraClient: REGISTRY_ADDRESS:REGISTRY_PORT/REGISTRY_NAMESPACE/k8ssandra-client:v0.5.0
loki:
#enabled: false
kubectlImage:
registry: REGISTRY_ADDRESS:REGISTRY_PORT
sidecar:
image:
repository: REGISTRY_ADDRESS:REGISTRY_PORT/REGISTRY_NAMESPACE/k8s-sidecar
global:
image:
registry: REGISTRY_ADDRESS:REGISTRY_PORT
minio:
image:
repository: REGISTRY_ADDRESS:REGISTRY_PORT/REGISTRY_NAMESPACE/minio
mcImage:
repository: REGISTRY_ADDRESS:REGISTRY_PORT/REGISTRY_NAMESPACE/mc
loki:
storage:
type: "s3"
s3:
region: eu-west-1
bucketNames:
chunks: chunks-bucket
limits_config:
retention_period: 7d
read:
persistence:
enabled: true
size: 10Gi
replicas: 1
write:
persistence:
enabled: true
size: 10Gi
replicas: 1
backend:
replicas: 1
mimir:
alertmanager:
enabled: true
extraArgs:
alertmanager-storage.backend: local
alertmanager-storage.local.path: /etc/alertmanager/config
alertmanager.configs.fallback: /etc/alertmanager/config/default.yml
alertmanager.sharding-ring.replication-factor: "2"
extraVolumeMounts:
- mountPath: /etc/alertmanager/config
name: alertmanager-config
- mountPath: /alertmanager
name: alertmanager-config-tmp
extraVolumes:
- name: alertmanager-config
secret:
secretName: alertmanager-config
- emptyDir: {}
name: alertmanager-config-tmp
persistentVolume:
accessModes:
- ReadWriteOnce
enabled: "1"
size: 10Gi
replicas: "2"
resources:
limits:
memory: 2Gi
requests:
cpu: 100m
memory: 128Mi
ingester:
extraArgs:
ingester.max-global-series-per-user: "0"
ingester.ring.replication-factor: "1"
persistentVolume:
size: 64Gi
replicas: "1"
resources:
limits:
memory: 2Gi
requests:
cpu: 100m
memory: 128Mi
store_gateway:
persistentVolume:
size: 64Gi
enabled: "1"
replicas: "1"
resources:
limits:
memory: 2Gi
requests:
cpu: 100m
memory: 128Mi
compactor:
extraArgs:
compactor.blocks-retention-period: 30d
persistentVolume:
enabled: "1"
size: 64Gi
replicas: "1"
resources:
limits:
memory: 2Gi
requests:
cpu: 100m
memory: 128Mi
ruler:
enabled: true
extraArgs:
ingester.ring.replication-factor: "1"
ruler-storage.backend: local
ruler-storage.local.directory: /etc/rules
ruler.alertmanager-url: http://mission-control-mimir-alertmanager:8080/alertmanager
ruler.query-frontend.address: mission-control-mimir-query-frontend:9095
extraVolumeMounts:
- mountPath: /etc/rules/anonymous
name: ruler-config
extraVolumes:
- name: ruler-config
projected:
defaultMode: 420
sources:
- configMap:
name: mission-control-ruler-config
- configMap:
name: mission-control-ruler-custom-config
replicas: "1"
resources:
limits:
memory: 2Gi
requests:
cpu: 100m
memory: 128Mi
distributor:
extraArgs:
ingester.ring.replication-factor: "1"
replicas: "1"
resources:
limits:
memory: 2Gi
requests:
cpu: 100m
memory: 128Mi
image:
repository: REGISTRY_ADDRESS:REGISTRY_PORT/REGISTRY_NAMESPACE/mimir
memcached:
image:
repository: REGISTRY_ADDRESS:REGISTRY_PORT/REGISTRY_NAMESPACE
memcachedExporter:
image:
repository: REGISTRY_ADDRESS:REGISTRY_PORT/REGISTRY_NAMESPACE/memcached-exporter
nginx:
image:
registry: REGISTRY_ADDRESS:REGISTRY_PORT
gateway:
nginx:
image:
registry: REGISTRY_ADDRESS:REGISTRY_PORT
enterprise:
image:
repository: REGISTRY_ADDRESS:REGISTRY_PORT/REGISTRY_NAMESPACE/enterprise-metrics
mcImage:
repository: REGISTRY_ADDRESS:REGISTRY_PORT/REGISTRY_NAMESPACE/mc
mimir:
structuredConfig:
activity_tracker:
filepath: /data/activity.log
limits:
ingestion_burst_size: 100000
ingestion_rate: 50000
max_label_names_per_series: 120
out_of_order_time_window: 5m
agent:
image:
repository: REGISTRY_ADDRESS:REGISTRY_PORT/REGISTRY_NAMESPACE/vector
aggregator:
image:
repository: REGISTRY_ADDRESS:REGISTRY_PORT/REGISTRY_NAMESPACE/vector
replicated:
enabled: false
images:
replicated-sdk: REGISTRY_ADDRESS:REGISTRY_PORT/REGISTRY_NAMESPACE/replicated-sdk:v1.0.0-beta.14
kube-state-metrics:
image:
registry: REGISTRY_ADDRESS:REGISTRY_PORT
dex:
image:
repository: REGISTRY_ADDRESS:REGISTRY_PORT/REGISTRY_NAMESPACE/mission-control-dex
config:
enablePasswordDB: true
staticPasswords:
- email: EMAIL_ADDRESS
hash: "HASH"
username: admin
userID: "USER_ID"
Replace the following:
-
REGISTRY_ADDRESS: The address of your registry -
REGISTRY_PORT: The port of your registry -
REGISTRY_NAMESPACE: The namespace within your registry -
EMAIL_ADDRESS: The email address of the user -
HASH: The bcrypt hash of the password. On *nix systems, you can generate it with the following command:

echo yourPassword | htpasswd -BinC 10 admin | cut -d: -f2
-
USER_ID: The ID of the user
GCP configuration
The following sections describe configuration values required for GCP deployments.
GCP control plane configuration
Download a GCP control plane YAML configuration to use as a base YAML configuration.
Replace the following:
-
OIDC_CLIENT_ID: The OIDC client ID for authentication -
OIDC_CLIENT_SECRET: The OIDC client secret for authentication -
OIDC_ISSUER: The OIDC issuer URL -
MC_DOMAIN: The domain name for Mission Control -
ADMIN_EMAIL: The email address for the admin user -
HASHED_ADMIN_PASSWORD: The bcrypt hash of the admin password -
SERVICE_ACCOUNT_JSON: The GCP service account JSON content -
REGISTRY_ADDRESS: The address of your registry -
REGISTRY_PORT: The port of your registry -
REGISTRY_NAMESPACE: The namespace within your registry
GCP data plane online values configuration
Download a GCP data plane online YAML configuration to use as a base.
Replace the following:
-
VECTOR_AGGREGATOR_URL: The URL for the vector aggregator -
VECTOR_VOLUME_SIZE: The size of the vector volume -
VECTOR_STORAGE_CLASS: The storage class for vector -
REGISTRY_ADDRESS: The address of your registry -
REGISTRY_PORT: The port of your registry -
REGISTRY_NAMESPACE: The namespace within your registry
GCP data plane local observability configuration
Download a GCP data plane local observability YAML configuration to use as a base.
Replace the following:
-
VECTOR_AGGREGATOR_URL: The URL for the vector aggregator -
VECTOR_VOLUME_SIZE: The size of the vector volume -
VECTOR_STORAGE_CLASS: The storage class for vector -
REGISTRY_ADDRESS: The address of your registry -
REGISTRY_PORT: The port of your registry -
REGISTRY_NAMESPACE: The namespace within your registry
Install Mission Control with Helm
To install Mission Control with Helm, do the following:
-
Log in to the Helm registry:
helm registry login registry.replicated.com --username 'LICENSE_ID' --password 'LICENSE_ID'

Replace LICENSE_ID with your Mission Control license ID.
-
If you haven’t done so already, get your Helm registry credentials from IBM Support.
-
Create your values.yaml file or use the default DataStax file.
-
Install Mission Control using your registry credentials:
helm install mission-control oci://registry.replicated.com/mission-control/mission-control --namespace mission-control --create-namespace -f values.yaml

You must use the same release name for the data plane installation as you used for the control plane installation. This ensures proper communication and resource management between the planes.
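To confirm the release after installation, list it and watch the pods come up:

# The release should show STATUS deployed, and the pods should reach Running/Ready.
helm list -n mission-control
kubectl get pods -n mission-control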
Post-installation configuration
After installation, configure and access your deployment: