Install Mission Control on OpenShift

This guide explains how to install Mission Control in an OpenShift environment. OpenShift is a cloud-native platform built on Kubernetes that provides automated installation, upgrades, and lifecycle management for containerized applications.

Prerequisites

  • An OpenShift cluster running version 4.8 or later

  • Access to the oc command-line tool

  • Helm CLI (recommended installation method) or KOTS CLI

  • An installation environment that you prepared on your existing Kubernetes cluster

  • A downloaded Mission Control license file.

    Mission Control requires a license file that provides Kubernetes Off-The-Shelf (KOTS) or Helm with the information required for installation, including customer identifiers, software update channels, and entitlements.

    Contact your sales representative or call 888-746-7426 to request a license.

    If you need a replacement license file or a non-community edition, or want to convert your Public Preview license to use a stable channel release version, contact your account team.

  • Helm version 3.14.0 to 3.18.0 installed

  • Access to the Helm registry

Contact IBM Support for Helm registry access. Only accounts with paid Hyper-Converged Database (HCD) or DataStax Enterprise (DSE) plans can submit support tickets. For information about DataStax products and subscription plans, see the DataStax products page.

For information about security configurations, see Security overrides.

Install cert-manager operator

Mission Control requires the cert-manager operator for Red Hat OpenShift to manage TLS certificates.

To install cert-manager, do the following:

  1. In the OpenShift web console, navigate to Operators > OperatorHub.

  2. Search for "cert-manager Operator for Red Hat OpenShift".

  3. Select the operator, and then click Install.

  4. Follow the installation wizard to complete the installation.

For detailed instructions, see the Red Hat OpenShift cert-manager documentation.

OpenShift installations use the OpenShift cert-manager Operator, which doesn’t automatically delete certificate secrets when you remove Certificate resources. You must clean up secret resources manually with oc commands.
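
For example, after you delete a Certificate resource, its backing secret remains and must be removed by hand. The following is a sketch; the namespace and secret name are illustrative, not the actual names in your installation:

```shell
# List TLS secrets that may remain after Certificate resources are deleted.
oc get secrets -n mission-control --field-selector type=kubernetes.io/tls

# Delete a leftover certificate secret (the name here is an example).
oc delete secret mission-control-cert -n mission-control
```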

Configure pod-to-pod routing

Ensure that all database pods can route to each other. This is a critical requirement for proper operation and data consistency.

The requirement applies to:

  • All database pods within the same region or availability zone.

  • All database pods across different availability zones within the same region.

  • All database pods across different regions for multi-region deployments.

  • All database pods across different racks within the same datacenter.

The way you configure pod-to-pod routing depends on your cluster architecture:

Single-cluster deployments

The cluster’s Container Network Interface (CNI) typically provides pod-to-pod network connectivity for database pods within a single Kubernetes cluster. You usually need no additional configuration beyond standard Kubernetes networking.

Security considerations for shared clusters

If your database cluster shares a Kubernetes cluster with other applications, implement security controls to prevent unauthorized access to database internode ports (7000/7001):

  • NetworkPolicy isolation (required): Use Kubernetes NetworkPolicy to restrict access to internode ports to only authorized database pods. NetworkPolicy prevents other applications in the cluster from accessing these ports even if underlying firewall rules are broad.

  • Internode TLS encryption (required): Enable internode TLS to protect data in transit and prevent unauthorized nodes from joining the cluster.

  • Dedicated node pools (recommended): Consider dedicated node pools or subnets for database workloads to enable more granular firewall controls at the infrastructure level.
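
As an illustration of the NetworkPolicy isolation described above, the following sketch restricts the internode ports (7000/7001) so that only pods carrying a database label can reach them. The namespace and label values are examples; adapt them to your deployment:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-internode
  namespace: mission-control
spec:
  # Applies to the database pods themselves (example label).
  podSelector:
    matchLabels:
      app.kubernetes.io/name: cassandra
  policyTypes:
    - Ingress
  ingress:
    # Allow internode traffic only from other database pods.
    - from:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: cassandra
      ports:
        - protocol: TCP
          port: 7000
        - protocol: TCP
          port: 7001
```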

Multi-cluster deployments

For database pods that span multiple Kubernetes clusters, NetworkPolicy alone doesn’t provide sufficient connectivity. You must establish Layer 3 network connectivity or overlay connectivity between the database pod networks (pod CIDRs or node subnets depending on your deployment). Kubernetes NetworkPolicy operates only within a single cluster boundary and can’t provide cross-cluster connectivity.

  1. Choose one of the following approaches to establish pod network connectivity across clusters:

    • Routed pod CIDRs (recommended): Use cloud provider native routing solutions when your platform supports them. This approach provides the best performance and simplest operational model.

      • AWS: VPC Peering, Transit Gateway, or AWS Cloud WAN.

      • Azure: VNet Peering or Virtual WAN.

      • GCP: VPC Peering or Cloud VPN.

    • Submariner: Open-source multi-cluster connectivity solution, common in OpenShift multi-cluster deployments. Submariner provides encrypted tunnels between clusters. For more information, see the Submariner documentation.

    • Cilium Cluster Mesh: For clusters that use Cilium CNI. Cluster Mesh provides native multi-cluster networking and enables pod-to-pod connectivity across clusters. For more information, see the Cilium documentation.

  2. After you establish cross-cluster connectivity, implement the following security measures. Traditional firewall rules alone lack application awareness and can’t distinguish between different pods or services within a cluster. Use Kubernetes NetworkPolicy for pod-level access control within clusters, and combine it with network-level firewalls for defense in depth.

    • Enable internode TLS encryption to protect data in transit between clusters.

    • Configure firewall rules at the network level to restrict traffic between cluster pod CIDRs.

    • Use NetworkPolicy within each cluster to further restrict access to database ports.

    • Consider using dedicated subnets or VPCs for database clusters to enable network-level isolation.

To verify that pod-to-pod routing is configured properly, do the following:

  1. Test connectivity between database pods using nodetool status or cqlsh.

  2. Check that all nodes can see each other in the cluster topology.

  3. Monitor for connection errors or timeouts in database logs.

  4. Verify that gossip protocol communication functions correctly.
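
The first two verification steps can be run from an oc session; the pod, namespace, and container names below are examples, not the actual names in your deployment:

```shell
# Run nodetool from inside a database pod to check cluster topology.
oc exec -n mission-control my-cluster-dc1-default-sts-0 -c cassandra -- nodetool status

# Every node should report UN (Up/Normal). DN entries or missing nodes
# indicate routing or gossip problems between pods.
```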

If pod-to-pod routing isn’t implemented correctly, you might experience the following:

  • Connectivity issues between database pods.

  • Cluster instability.

  • Data consistency issues.

  • Failed replication.

  • Incomplete or failed cluster operations.

Configure OpenShift-specific Helm values

When you install Mission Control on OpenShift using Helm, you must add OpenShift-specific configuration to your values.yaml file.

DNS configuration

OpenShift uses a different DNS service architecture than standard Kubernetes deployments. Standard Kubernetes typically uses kube-dns, while OpenShift uses the openshift-dns namespace with the dns-default service as its DNS resolver.

Configure DNS settings correctly to enable proper service discovery and inter-pod communication within the Mission Control observability stack. If the DNS configuration is missing or incorrect, then Loki and Mimir components cannot resolve service names, which prevents metrics and logs collection.

Add the following DNS configuration for Loki and Mimir components:

loki:
  global:
    clusterDomain: cluster.local
    dnsNamespace: openshift-dns
    dnsService: dns-default
mimir:
  gateway:
    nginx:
      config:
        resolver: dns-default.openshift-dns.svc.cluster.local
  nginx:
    nginxConfig:
      resolver: "dns-default.openshift-dns.svc"

The following values are required and specific to OpenShift; they differ from standard Kubernetes DNS configurations.

  • dnsNamespace: Specifies the namespace where OpenShift’s DNS service runs

  • dnsService: References OpenShift’s default DNS resolver service

  • resolver: Configures Mimir’s nginx to use the OpenShift DNS resolver

To verify your OpenShift DNS configuration, run the following command:

oc get svc -n openshift-dns

The command should return the dns-default service. If your OpenShift cluster uses a different DNS service name, update the configuration values accordingly.

Control plane configuration

For control plane deployments, download the OpenShift control plane YAML configuration to use as a base.

Replace the following placeholders with your environment-specific values:

  • OBSERVABILITY_VECTOR_VOLUME_SIZE: The size of the vector volume

  • OBSERVABILITY_VECTOR_STORAGE_CLASS: The storage class for vector

  • FALLBACK_STATIC_EMAIL_ADDRESS: The email address for the fallback static user

  • OBSERVABILITY_LOGS_STORAGE_RETENTION: The retention period for logs

  • OBSERVABILITY_LOGS_BUCKET_NAME: The name of the S3 bucket for logs

  • OBSERVABILITY_S3_ACCESS_KEY_ID: The AWS access key ID

  • OBSERVABILITY_S3_REGION: The AWS region

  • OBSERVABILITY_S3_SECRET_ACCESS_KEY: The AWS secret access key

  • OBSERVABILITY_METRICS_ALERTS_VOLUME_SIZE: The size of the alerts volume

  • OBSERVABILITY_METRICS_STORAGE_CLASS: The storage class for metrics

  • OBSERVABILITY_METRICS_STORAGE_RETENTION: The retention period for metrics

  • OBSERVABILITY_METRICS_COMPACTOR_VOLUME_SIZE: The size of the compactor volume

  • OBSERVABILITY_METRICS_INGESTER_VOLUME_SIZE: The size of the ingester volume

  • OBSERVABILITY_BUCKET_NAME: The name of the S3 bucket for metrics

  • REGION: The AWS region for S3 endpoints
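
One way to fill in these placeholders is a scripted substitution. The sketch below assumes a downloaded file named control-plane-values.yaml (the actual file name comes from the download step) and uses example values; the stand-in file is only for illustration:

```shell
# A minimal stand-in for the downloaded control plane YAML (yours is larger);
# the file name and values below are examples, not real defaults.
cat > control-plane-values.yaml <<'EOF'
s3:
  region: OBSERVABILITY_S3_REGION
  logsBucket: OBSERVABILITY_LOGS_BUCKET_NAME
EOF

# Substitute placeholders with environment-specific values.
sed \
  -e 's/OBSERVABILITY_S3_REGION/us-east-1/g' \
  -e 's/OBSERVABILITY_LOGS_BUCKET_NAME/mc-logs/g' \
  control-plane-values.yaml > control-plane-values.resolved.yaml
```

Repeat one `-e` expression per placeholder; check afterward that no placeholder strings remain in the resolved file.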

Data plane configuration

For data plane deployments, download the OpenShift data plane YAML configuration to use as a base.

Replace the following placeholders with your environment-specific values:

  • VECTOR_AGGREGATOR_URL: The URL for the vector aggregator

  • VECTOR_VOLUME_SIZE: The size of the vector volume

  • VECTOR_STORAGE_CLASS: The storage class for vector

Install Mission Control

Install Mission Control using one of the following methods:

  • KOTS: Follow the instructions in Install Mission Control with KOTS.

You must use the same application name for the data plane installation as you used for the control plane installation. This ensures proper communication and resource management between the planes.
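
For a Helm-based install (the recommended method listed in the prerequisites), the command typically follows the shape of the sketch below. The registry host and chart path are placeholders, not real coordinates; use the registry details provided with your license and registry access:

```shell
# Sketch only: REGISTRY_HOST and CHART_PATH are placeholders.
helm install mission-control oci://REGISTRY_HOST/CHART_PATH/mission-control \
  --namespace mission-control --create-namespace \
  --values values.yaml
```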

Configure Security Context Constraints (SCC)

After you install Mission Control, configure permissions for Mission Control service accounts using Security Context Constraints (SCC).

After you create the SCC or apply policy changes, allow a few minutes for pods to schedule properly.

Choose one of the following methods:

  • Use the pre-defined nonroot-v2 SCC

  • Create a custom SCC

To use the pre-defined nonroot-v2 SCC, run the following commands to grant access to Mission Control service accounts:

oc adm policy add-scc-to-user nonroot-v2 -z loki
oc adm policy add-scc-to-user nonroot-v2 -z mission-control
oc adm policy add-scc-to-user nonroot-v2 -z mission-control-agent
oc adm policy add-scc-to-user nonroot-v2 -z mission-control-aggregator
oc adm policy add-scc-to-user nonroot-v2 -z mission-control-cass-operator
oc adm policy add-scc-to-user nonroot-v2 -z mission-control-dex
oc adm policy add-scc-to-user nonroot-v2 -z mission-control-k8ssandra-operator
oc adm policy add-scc-to-user nonroot-v2 -z mission-control-kube-state-metrics
oc adm policy add-scc-to-user nonroot-v2 -z mission-control-mimir

For more granular control, create a custom SCC with the necessary permissions for Mission Control service accounts. Use the following example SCC definition:

kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
  name: mission-control
runAsUser:
  type: MustRunAsNonRoot
seLinuxContext:
  type: RunAsAny
fsGroup:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
volumes:
- '*'
requiredDropCapabilities:
- ALL
allowedCapabilities:
- NET_BIND_SERVICE
allowHostNetwork: true
allowHostDirVolumePlugin: false
users:
- system:serviceaccount:PROJECT_NAME:loki
- system:serviceaccount:PROJECT_NAME:mission-control
- system:serviceaccount:PROJECT_NAME:mission-control-agent
- system:serviceaccount:PROJECT_NAME:mission-control-aggregator
- system:serviceaccount:PROJECT_NAME:mission-control-cass-operator
- system:serviceaccount:PROJECT_NAME:mission-control-dex
- system:serviceaccount:PROJECT_NAME:mission-control-k8ssandra-operator
- system:serviceaccount:PROJECT_NAME:mission-control-kube-state-metrics
- system:serviceaccount:PROJECT_NAME:mission-control-mimir

Replace PROJECT_NAME with the name of your project.

Apply the SCC to your cluster:

oc apply -f mission-control-scc.yaml
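
To confirm that the SCC was created and lists the expected service accounts, you can inspect it directly:

```shell
# Review the applied SCC, including its users list.
oc get scc mission-control -o yaml
```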

Post-installation configuration

After you complete the installation, configure access to Mission Control.


© Copyright IBM Corporation 2026

Apache, Apache Cassandra, Cassandra, Apache Tomcat, Tomcat, Apache Lucene, Apache Solr, Apache Hadoop, Hadoop, Apache Pulsar, Pulsar, Apache Spark, Spark, Apache TinkerPop, TinkerPop, Apache Kafka and Kafka are either registered trademarks or trademarks of the Apache Software Foundation or its subsidiaries in Canada, the United States and/or other countries. Kubernetes is the registered trademark of the Linux Foundation.
