Install and configure Mission Control

Use Kubernetes Off-The-Shelf (KOTS) to install and manage Mission Control. KOTS offers a user-friendly environment for handling the lifecycle of Mission Control and its components, and a browser-based guided process makes installation simple.

Prerequisites

  • A downloaded Mission Control license file.

    Mission Control requires a license file to provide KOTS or Helm with required information about the installation, including customer identifiers, software update channels, and entitlements.

    Are you exploring Mission Control as a solution for your organization? Fill out this registration form to request a community edition license.

    If you need a replacement license file or a non-community edition license, or want to convert your Public Preview license to a stable channel release, contact your account team.

  • You have prepared either a bare-metal/VM or a pre-existing Kubernetes cluster for configuring and installing Mission Control.

  • Access to the KOTS admin UI. Choose the tab for your environment type and follow its instructions.

    • Bare-metal/VM environment

    • Existing environment

    1. Open your web browser to the KOTSADM_URL returned in the results from your Primary nodes setup.

      Partial sample result from that setup:

      Installation
      		  Complete ✔
      
      
      Kotsadm: KOTSADM_URL
      Login with password (will not be shown again): KOTSADM_PASSWORD
      This password has been set for you by default.
      ...
    2. Log in with the KOTSADM_PASSWORD also provided in those results. If you have reset the password, use the new value instead.

    3. Exit the KOTS Admin Console at any time with Control+C.

    1. Run the following command to port-forward to the KOTS admin console:

      kubectl kots admin-console -n NAMESPACE

      Replace NAMESPACE with either the default value of mission-control or your chosen namespace.
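
      For example, with the default namespace:

      kubectl kots admin-console -n mission-control

      The command forwards the admin console to your local machine and prints a local URL to open in your browser (typically http://localhost:8800).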

    2. Exit the KOTS Admin Console at any time with Control+C.

Choose an installation mode

Select the installation mode that best suits your environment and needs. Your license file determines which modes are enabled for your environment.

  • Online install

  • Offline (airgap) install

Install Mission Control into an environment where internet access is available to the hosts.

  1. Authenticate with the KOTS admin interface and supply your license file when prompted.

  2. Configure your components.

    Mission Control consists of a number of components that can be deployed, configured, and scaled independently of one another. This configuration step covers each section, value, and default present within Mission Control. Not all sections listed are visible at any given time. They are dynamically displayed as you make selections.

    Note: These values may be adjusted over time by revisiting the admin interface.

    Deployment Mode

    Mode: Specifies whether to deploy Mission Control in Control Plane mode or Data Plane mode. Control Plane mode enables top-level observability and management features. Data Plane mode focuses on edge locations where database nodes are deployed. There must only be one (1) Control Plane across all Mission Control regions. Every other region should be a Data Plane.

    Data Plane

    Vector Aggregator URL: The URL of the Vector Aggregator service in the Control Plane.

    Observability - Scheduling

    Primary workers: Adds a toleration that allows scheduling of observability components on Kubernetes Primary nodes. By default, Primary nodes are tainted to run only workloads for scheduling, metadata management, and core API services. Enable this setting to allow observability components to be scheduled on these nodes (see the sketch at the end of this section).

    Database workers: Removes the node selector that restricts observability components to Platform workers. This allows observability components to be scheduled on worker nodes with other labels; for example, database nodes. Enable this only for constrained environments or when all workers provide homogeneous hardware.
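
    Under the hood, enabling the Primary workers option corresponds to adding a standard Kubernetes toleration to the observability pods. A minimal sketch, assuming the common control-plane taint key (the exact taint key Mission Control tolerates may differ):

      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule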

    Observability - Storage

    Configures the storage backend for observability data.

    Backend: Determines which storage engine to use for observability data. Use separate buckets for metrics versus logs storage; you cannot use the same bucket for both.

    • S3 (and compatible)

    • GCS

    • Azure Blob Storage

    Observability - Storage - Amazon Simple Storage Service (S3) and compatible

    Region: Default: us-east-1.

    Region identifier for the AWS S3 bucket. This field is not used by S3-compatible storage backends.

    Access Key ID: Key ID used during authentication to the storage backend.

    Secret Access Key: Secret access key used during authentication to the storage backend.

    Endpoint URL: The endpoint URL for the S3-compatible storage backend. This field is not used by AWS S3.

    Observability Bucket Insecure Access: Allows for disabling Certificate Authority (CA) certificate validation when connecting to S3-compatible storage backends.
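
    For example, an S3-compatible MinIO backend might be configured with values like these (the hostname and credentials are illustrative):

      Region: us-east-1                                (ignored by S3-compatible backends)
      Access Key ID: minio-admin
      Secret Access Key: <your secret key>
      Endpoint URL: https://minio.storage.internal:9000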

    Observability - Storage - Google Cloud Storage

    Service Account: Google Cloud service account JSON key.

    Observability - Storage - Azure Blob Storage

    Storage Account Name: The name of the Azure storage account.

    Storage Account Key: The access key used to authenticate to the storage account.

    Storage Account Endpoint Suffix: Default: blob.core.windows.net.

    Observability - Metrics

    Mission Control utilizes Mimir as a highly available and scalable solution for storing and querying metrics data. Check the field’s box to access additional configuration settings related to the metrics stack.

    Topology: Change the replication factor and the number of replicas for each Mimir component.

    Resources: Change the resources (CPU, memory) allocated to each Mimir component.

    Advanced Settings

    Observability - Metrics - Topology

    Mimir component counts and replication factors.

    Number of Distributor instances: Default: 1. Distributors receive metrics samples from Vector and forward them to the ingesters.

    Number of Ingester instances: Default: 1. Ingester instances receive metrics samples from the Distributor and write them to the object storage.

    Number of Compactor instances: Default: 1. Compactors compact the data in object storage received from multiple ingesters and remove duplicate samples.

    Number of Query Frontend instances: Default: 1. The Query Frontend checks for query results in a cache. Queries that cannot be answered from the cache are queued for the Queriers.

    Number of Query Scheduler instances: Default: 1. Maintains the queue of queries to be performed.

    Number of Querier instances: Default: 1. Queriers are responsible for querying the data in object storage and on ingesters to fetch all data required for a query.

    Number of Store Gateway instances: Default: 1. Store Gateways are responsible for fetching data from object storage and serving it to the Queriers.

    Number of Ruler instances: Default: 1. The Ruler evaluates recording and alerting rules and sends alerts to the Alert Manager.

    Number of Alert Manager instances: Default: 2. Alert Manager is responsible for handling alerts sent by the Ruler and sending notifications to the configured receivers. The Alert Manager deduplicates and groups alert notifications, and routes them to a notification channel, such as email, PagerDuty, or OpsGenie. A minimum of two (2) instances is required for the Alert Manager.

    Ingester Replication Factor: Default: 1. Writes to the Mimir cluster are successful if a majority of ingesters received the data.

    Mimir Alertmanager Replication Factor: Default: 2. The Alert Manager shards and replicates alerts by tenant. A minimum Replication Factor of two (2) is required.
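
    For example, with an Ingester Replication Factor of 3, a write succeeds once two of the three ingesters acknowledge the data, so a single ingester can be unavailable without failing writes.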

    Observability - Metrics - Resources

    Requests: The amount of a resource reserved when scheduling a workload on a given worker.

    CPU: Default: 100m. The amount of CPU requested for the Mimir components in milli-CPUs. 1000m is equivalent to 1 vCPU.

    Memory: Default: 128Mi. The amount of memory requested for the Mimir components. The allowable units are: Gi, Mi, and Ki. Memory is measured in bytes.

    Limits: Resource caps applied to containers once they are running on a worker.

    CPU: Default: 0. CPU usage limit for Mimir components in milli-CPUs. 1000m is equivalent to 1 vCPU. Setting this to 0 removes all limits.

    Memory: Default: 2Gi. Memory usage limit for Mimir components. The allowable units are: Gi, Mi, and Ki. Memory is measured in bytes. Setting this to 0 (zero) removes all limits.
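
    These values map to standard Kubernetes container resource settings. With the defaults above, each Mimir container is scheduled with a spec equivalent to this sketch:

      resources:
        requests:
          cpu: 100m
          memory: 128Mi
        limits:
          memory: 2Gi      # no CPU limit, because the CPU limit defaults to 0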

    Observability - Metrics - Advanced

    Max Global Series Per User: Default: 0. The maximum allowed number of series that will be accepted, per tenant. 0 (zero) means unlimited.

    Observability - Metrics - Object Storage

    Bucket Name: The name of the bucket where Mimir will store its chunks.

    Storage Retention: Default: 7d. The allowable units are: ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), y (years). 0 (zero) means unlimited.

    Tune Attached Storage: Allows for tuning the local attached storage.

    Observability - Metrics - Attached Storage

    Use Persistent Volumes: Required for production deployments.

    Storage Class: Setting the storage class to "-" disables dynamic provisioning. Leaving it empty (null) uses the default storage class.

    Access Modes: Default: ReadWriteOnce.

    Alert Manager Volume Size: Default: 1Gi.

    Use 10GB for production environments. Warning: this size cannot be modified after the initialization of the cluster.

    Compactor Volume Size: Default: 2Gi.

    Use a minimum of 300GB for production environments. Warning: this size cannot be modified after the initialization of the cluster.

    Ingester Volume Size: Default: 2Gi.

    Use a minimum of 100GB for production environments. Warning: this size cannot be modified after the initialization of the cluster.

    Store Gateway Volume Size: Default: 2Gi.

    Use 50GB for production environments. Warning: this size cannot be modified after the initialization of the cluster.
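
    To verify which storage classes are available in your cluster, and which one is marked as the default, run:

      kubectl get storageclass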

    Observability - Logs

    Mission Control utilizes Loki as a highly available and scalable solution for storing and querying log data. Check the field’s box to access additional configuration settings related to the logs stack.

    Topology: Change the replication factor and the number of replicas for each Loki component.

    Observability - Logs - Topology

    Loki component counts and replication factors.

    Number of Reader instances: Default: 1. Loki readers are responsible for querying the data in object storage and on ingesters to fetch all data required for a query.

    Number of Writer instances: Default: 1. Loki writers are responsible for writing logs to the object storage.

    Loki replication factor: Default: 1. Determines the number of Writer instances that will receive a given write. This should be less than or equal to the number of Writer instances.

    Observability - Logs - Object Storage

    Chunks Bucket Name: The name of the bucket where Loki stores its chunks.

    Storage Retention: Default: 7d. The allowable units are: ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), y (years). 0s means unlimited retention. Minimum period is 24h.

    Force Path-Style Addressing: Forces requests to use AWS S3 path-style addressing, where the bucket appears in the URL path (https://endpoint/bucket/key) rather than in the hostname (https://bucket.endpoint/key). This is useful when using MinIO as the S3 storage backend.

    Tune Attached Storage: Allows for tuning the local attached storage.

    Observability - Logs - Attached Storage

    Use Persistent Volumes: Required for production deployments.

    Volume Size: Default: 10Gi. Warning: this size cannot be modified after the initialization of the cluster.

    Storage Class: Leave it empty (null) to use the default storage class.

    Web Interface

    Enable Web Interface: Default: The option is selected. Enables the Mission Control Web UI.

    Authentication

    Mission Control provides a number of authentication connectors.

    LDAP: Allows for configuring the LDAP connector.

    OpenID Connect (OIDC): Allows for configuring the OIDC connector.

    Static Credentials: Enables specifying a static user account. This is recommended during initial configuration, but should be disabled after deploying one of the other connectors.

    Authentication - LDAP

    See LDAP connector documentation.

    Host: Host and optional port of the LDAP server.

    No SSL: Required if the LDAP host is not using TLS (port 389). This option inherently leaks passwords to anyone on the same network as Dex. Not recommended; do not use it outside of exploratory phases.

    Skip TLS verify: Turn off TLS certificate verification. This is insecure. Not recommended; do not use it outside of exploratory phases.

    Start TLS: Connect to the server using the ldap:// protocol and then issue a StartTLS command. If unspecified, connections use the ldaps:// protocol.

    Root CA: A trusted root certificate file (base64-encoded PEM file).

    Bind DN: The Distinguished Name (DN) to bind with when performing the search. If not provided, the search is performed anonymously.

    Bind password: The password to bind with when performing the search. If not provided, the search is performed anonymously.

    Username prompt: The prompt to display to the user when asking for their username. If unspecified, the default is "Username".

    User base DN: BaseDN from which to start the search. It translates to the query (&(objectClass=person)(uid=<username>)).

    User filter: Optional filter to apply when searching for the user. It translates to the query (&(objectClass=person)(uid=<username>)).

    Username attribute: Username attribute used for comparing user entries. This is translated and combined with the other filter as (<attr>=<username>).

    Group base DN: BaseDN from which to start the search. It translates to the query (&(objectClass=group)(member=<user uid>)).

    Group filter: Optional filter to apply when searching the directory.

    Group/user matchers: A list of field pairs used to match a user to a group. It adds an additional requirement to the filter that an attribute in the group must match the user’s attribute value. Expected format: multi-line YAML list of objects with userAttr and groupAttr keys.
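
    For example, to match each user’s uid attribute against the memberUid attribute of a group (attribute names depend on your directory schema):

      - userAttr: uid
        groupAttr: memberUid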

    Authentication - OpenID Connect (OIDC)

    See OIDC connector documentation.

    Mission Control URL: A canonical URL that client browsers will use to connect to the Mission Control UI. This is required with external providers because they will redirect back to this URL. It must also be addressable from within the Kubernetes cluster.

    Issuer URL: Canonical URL of the provider, also used for configuration discovery. This value MUST match the value returned in the provider config discovery.

    Client ID: OAuth Client ID.

    Client secret: OAuth Client Secret.

    Basic auth unsupported: Some providers require passing client_secret through POST parameters instead of basic auth, despite the OAuth2 RFC discouraging it. Many of these cases are caught internally, but you may need to check this box.

    Scopes: List of additional scopes to request in the token response. Default is profile and email. See the full list in the Dex documentation. Expected format: multi-line YAML list of strings (see the example after this section).

    Skip email_verified: Some providers omit the email_verified claim, either because they do not use email verification during enrollment or because they act as a proxy for another identity provider (IdP). An example is AWS Cognito with an upstream SAML IdP. Checking this box forces email_verified to true in Dex’s response. Not recommended.

    Enable groups: Groups claims (like the rest of OIDC claims through Dex) refresh only when the IDToken is refreshed, so the regular refresh flow does not update the groups claim. Therefore, the OIDC connector does not allow groups claims by default. If potentially stale group claims are acceptable, use this option to enable groups claims through the OIDC connector on a per-connector basis. Not recommended.

    Get user info: When enabled, the OpenID Connector queries the UserInfo endpoint for additional claims. UserInfo claims take priority over claims returned by the IDToken. This option should be used when the IDToken doesn’t contain all the claims requested. See OpenID user information.

    User ID key: The claim used as user ID. Defaults to sub. See full claims list.

    Username key: The claim used as username. Defaults to name.

    ACR values: The Authentication Context Class Reference values within the Authentication Request that the Authorization Server is requested to process for this Client. Expected format: multi-line YAML list of strings.

    Prompt type: For offline_access, the prompt parameter is set by default to prompt=consent. However, this is not supported by all OIDC providers. Some of them support different values for prompt, like login or none.

    Preferred username claim: The claim used as preferred username. Defaults to preferred_username.

    Email claim: The claim used as email. Defaults to email.

    Preferred groups claim: The claim used as groups. Defaults to groups.
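
    As an example of the multi-line YAML list format expected by the Scopes field, the following requests group membership and offline access in addition to the defaults (scope support varies by provider):

      - groups
      - offline_access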

    Authentication - Static Credentials

    Email: An email address to be used during authentication.

    Password Hash: The bcrypt hash of the password. On *nix systems, generate this with the following command:

    echo yourPassword | htpasswd -BinC 10 admin | cut -d: -f2
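
    The command reads the password from standard input, hashes it with bcrypt at a cost of 10, and strips the leading admin: username, leaving only the hash, which begins with $2y$. Paste that value into the Password Hash field.
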
  3. Run preflight checks.

    After configuration is complete, a number of automated checks run against your environment. These checks ensure that all required components are present prior to installation. If every check passes, continue to the Deploy step. If not, consult the following sections to remedy common issues:

    Cert Manager

    Cert Manager is a required component of Mission Control. If you are using the runtime-based (bare-metal/VM) installation, it is preinstalled. For installations on an existing Kubernetes cluster, ensure it is installed before proceeding.

    Important: Skipping this step will result in installation failure.

    Storage Class

    In most installations, ensure that a Kubernetes Storage Class with the WaitForFirstConsumer volume binding mode is available. This binding mode delays the storage request until the workload has been assigned to a node, preventing situations where a workload is scheduled on one worker while its storage volume is provisioned on another.
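
    As an illustrative sketch, a storage class with this binding mode looks like the following; the provisioner shown is an example, and you should substitute the CSI driver for your platform:

      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: wait-for-consumer
      provisioner: ebs.csi.aws.com        # example provisioner; use your platform's driver
      volumeBindingMode: WaitForFirstConsumer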

  4. Deploy Mission Control.

    After all checks pass, click the Deploy button. KOTS redirects your browser to the dashboard screen, which indicates the deployment status. Over the next couple of minutes, all containers are deployed and brought online.

What’s next?

Installation is complete and you are ready to use Mission Control for administering DataStax Enterprise (DSE), Hyper-Converged Database (HCD), and Apache Cassandra® clusters. All management operations are available through the Mission Control interfaces.

Install Mission Control into an environment that does not allow access to the internet. This is also known as an airgap install.

Prerequisites
  • A container registry accessible from this environment, including its URL and credentials with read/write access.

  • The mission-control.airgap bundle.

    1. Authenticate with the KOTS admin interface and supply your license file when prompted.

    2. (Optional) Provide the airgap bundle. You are prompted for container registry information and the mission-control.airgap bundle.

      Skip this step if you want to install from online sources.

      If this screen does not appear, your uploaded license does not support airgap installations. Contact DataStax support to enable this functionality.

      On the screen, enter the hostname and credentials for a container registry that is accessible from this environment. The installer loads container images from the airgap bundle into this registry for use by the Kubernetes runtime. Next, provide the airgap bundle from your local machine.

    3. Configure your components.

      The configuration sections, fields, and defaults are identical to those described in step 2 of the online install, from Deployment Mode through Authentication - Static Credentials. Refer to that step for details.
    4. Run preflight checks.

      After configuration is complete, the same automated preflight checks described in step 3 of the online install run against your environment. If a check fails, consult the Cert Manager and Storage Class guidance in that step before proceeding.

    5. Deploy Mission Control.

      After all checks pass, click the Deploy button. KOTS redirects your browser to the dashboard screen, which indicates the deployment status. Within a few minutes, all containers are deployed and brought online.

Next steps

Installation is complete and you are ready to use Mission Control for the deployment and management of DSE, HCD, and Cassandra clusters.

All management operations are available through the Mission Control interfaces.
