Install Mission Control on OpenShift

You can install Mission Control within an OpenShift environment. Red Hat OpenShift is a cloud-native platform that leverages Kubernetes to streamline application development. OpenShift offers automated installation, upgrades, and lifecycle management for the container stack, encompassing the operating system, Kubernetes, cluster services, and applications.

Prerequisites

To install Mission Control in an OpenShift environment, you need the following:

  • An existing KOTS CLI installation.

  • An OpenShift cluster with oc command-line tool access.

Install Mission Control and grant permissions to service accounts

To install Mission Control in an OpenShift environment, do the following:

  1. Install the cert-manager operator in your OpenShift cluster. For instructions, see the Red Hat OpenShift documentation.

  2. If you haven’t already, install the KOTS CLI.
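If you need to install the KOTS CLI, the project's install script is one common path. A minimal sketch, assuming curl is available and the script's default install location is writable:

```shell
# Download and run the KOTS CLI install script, then verify
# that kubectl can find the kots plugin.
curl https://kots.io/install | bash
kubectl kots version
```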

  3. Install Mission Control.
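KOTS-managed applications are generally installed with `kubectl kots install <app-slug>`. The app slug, namespace, and license file path below are illustrative assumptions, not confirmed values; use the details provided with your Mission Control license:

```shell
# Install the application through KOTS; the app slug, namespace,
# and license file path here are placeholders for illustration.
kubectl kots install mission-control \
  --namespace mission-control \
  --license-file ./license.yaml
```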

  4. Grant permissions to the Mission Control service accounts using one of the following options:

    • Pre-defined nonroot-v2 SCC: This option uses the pre-defined SCC named nonroot-v2. Grant access to the Mission Control service accounts with the following commands:

    oc adm policy add-scc-to-user nonroot-v2 -z loki
    oc adm policy add-scc-to-user nonroot-v2 -z mission-control
    oc adm policy add-scc-to-user nonroot-v2 -z mission-control-agent
    oc adm policy add-scc-to-user nonroot-v2 -z mission-control-aggregator
    oc adm policy add-scc-to-user nonroot-v2 -z mission-control-cass-operator
    oc adm policy add-scc-to-user nonroot-v2 -z mission-control-dex
    oc adm policy add-scc-to-user nonroot-v2 -z mission-control-k8ssandra-operator
    oc adm policy add-scc-to-user nonroot-v2 -z mission-control-kube-state-metrics
    oc adm policy add-scc-to-user nonroot-v2 -z mission-control-mimir
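The commands above can equivalently be run as a loop over the service account names, which keeps the list in one place:

```shell
# Grant the nonroot-v2 SCC to each Mission Control service account.
for sa in loki mission-control mission-control-agent \
          mission-control-aggregator mission-control-cass-operator \
          mission-control-dex mission-control-k8ssandra-operator \
          mission-control-kube-state-metrics mission-control-mimir; do
  oc adm policy add-scc-to-user nonroot-v2 -z "$sa"
done
```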

    After you create the SCC or apply the policy change, it might take a few minutes for pods to schedule properly.

    • Custom SCC: If you prefer more granular control, you can create a custom SCC with the necessary permissions for the Mission Control service accounts. Define the SCC in a YAML file, and then apply it to your cluster. You can use the following example SCC definition and update the default settings:

    kind: SecurityContextConstraints
    apiVersion: security.openshift.io/v1
    metadata:
      name: mission-control
    runAsUser:
      type: MustRunAsNonRoot
    seLinuxContext:
      type: RunAsAny
    fsGroup:
      type: RunAsAny
    supplementalGroups:
      type: RunAsAny
    volumes:
    - '*'
    requiredDropCapabilities:
    - ALL
    allowedCapabilities:
    - NET_BIND_SERVICE
    allowHostNetwork: true
    allowHostDirVolumePlugin: false
    users:
    - system:serviceaccount:mission-control:loki
    - system:serviceaccount:mission-control:mission-control
    - system:serviceaccount:mission-control:mission-control-agent
    - system:serviceaccount:mission-control:mission-control-aggregator
    - system:serviceaccount:mission-control:mission-control-cass-operator
    - system:serviceaccount:mission-control:mission-control-dex
    - system:serviceaccount:mission-control:mission-control-k8ssandra-operator
    - system:serviceaccount:mission-control:mission-control-kube-state-metrics
    - system:serviceaccount:mission-control:mission-control-mimir
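The example SCC above can be applied and checked along these lines; the file name is an assumption, and the commands assume Mission Control runs in the `mission-control` namespace:

```shell
# Apply the custom SCC from its YAML definition.
oc apply -f mission-control-scc.yaml

# Confirm the SCC now exists in the cluster.
oc get scc mission-control

# Watch pods until they schedule; OpenShift records the SCC that
# admitted each pod in the openshift.io/scc annotation.
oc get pods -n mission-control --watch
```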


© 2024 DataStax | Privacy policy | Terms of use
