Install the Mission Control server runtime using the embedded Kubernetes cluster

When installing onto bare-metal or VM infrastructure, you must first run the Mission Control Runtime installer to create and configure the Kubernetes cluster that hosts Mission Control. The runtime installer sets up the prerequisite services and then installs the Mission Control components. The runtime installer supports both online and offline (airgap) modes.

Installation of Mission Control with the embedded runtime requires minimal tasks on each host. The tasks vary depending on the role of the host.

Mission Control must be installed on a dedicated virtual machine (VM) or a bare-metal server that can be cloud-based or on-premises.

Management nodes

The top level of a regional installation contains mc-management nodes that assign workloads to platform and database nodes, maintain the declarative desired state of the installations, and perform health checks of running services.

Platform nodes

Run the shared resources for the environment, including all operators, the observability stack, and the web interface.

Database nodes

Run the database instances.

You can run identical hardware for all of the installed platform and database nodes, or label each instance with its respective workload type. The following diagram highlights where each component exists within the cluster.

Runtime topology nodes

Choose one of the following two modes to install the core runtime for Mission Control:

Online installation

Into an environment where internet access is available to the hosts.

Offline installation

Into an environment that does not allow access to the internet. This is also known as an airgap installation.

Install the core runtime

  • Online installation

  • Offline (airgap) installation

Prerequisites
  • A product license file.

  • The planned servers are running and available.

  • The operating system is installed and the hosts comply with Replicated’s embedded cluster requirements.

  • You have outbound connectivity to the internet.

  • The load balancer is online and set to forward TCP traffic on the following:

    • Port 6443 to Management nodes.

    • Port 30880 to Platform and Database nodes.

  • You have at least 40Gi of storage available on the root (/) filesystem. A quick verification sketch follows this list.

  • The storage requirements are met for the embedded cluster. For more information, see Prerequisites in the Replicated documentation.
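The following commands are one way to verify the storage and load balancer prerequisites before you start. They assume standard Linux tooling (df and nc); LB_ADDRESS is a placeholder for your load balancer's address.

    # Confirm at least 40Gi is available on the root filesystem
    df -h /

    # Confirm the load balancer accepts TCP connections on the forwarded ports
    nc -vz LB_ADDRESS 6443
    nc -vz LB_ADDRESS 30880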

Management nodes

Mission Control uses the Kubernetes runtime, which distinguishes between Management and Worker nodes. Bring the Management nodes online first. After all Management nodes are available, join the platform and database Worker nodes.

Mission Control installation requires k0s. Use sudo ./mission-control shell to open a shell with kubectl configured to run commands against the cluster.

  1. Download the installation assets to all the nodes in your cluster:

    curl -f https://replicated.app/embedded/mission-control/stable/VERSION_NUMBER -H "Authorization: LICENSE_ID" -o mission-control-stable.tgz

    Replace the following:

    • VERSION_NUMBER: The Mission Control version to install. Use latest by default, or specify a version number, such as v1.7.0, if you need to install a specific version.

    • LICENSE_ID: License ID to authenticate the download. The ID is available in your Mission Control license file.
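    For example, to download version v1.7.0 (the license ID shown is a placeholder for your own):

    curl -f https://replicated.app/embedded/mission-control/stable/v1.7.0 -H "Authorization: YOUR_LICENSE_ID" -o mission-control-stable.tgz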

  2. Extract the installation assets:

    tar xvzf mission-control-stable.tgz
  3. Run the following installation script on the first Management node:

    sudo ./mission-control install --license license.yaml

    If you have less than 40Gi of storage available on the root (/) filesystem, you must add additional storage before running the installer.

    The installer creates the Mission Control components and sets up the Kubernetes environment.

  4. Enter the Admin Console password when prompted. This sets your credentials for the KOTS Admin Console.

    Results
    ? Enter an Admin Console password: 
    ? Confirm password: 

    The installer prepares host files, installs the Kubernetes runtime, and sets up the storage and embedded cluster.

    Persistent data, such as SSTables, is stored on disk under /var/openebs. You must mount this folder on a volume that has enough disk space to host the HCD, DSE, or Cassandra data.
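    As a sketch only, assuming a dedicated data disk is available at /dev/sdb (the device name and filesystem are illustrative and vary by environment), you could prepare the mount before installation as follows:

    # Format the dedicated data disk (illustrative device name)
    sudo mkfs.xfs /dev/sdb
    # Mount it at the path used for persistent data
    sudo mkdir -p /var/openebs
    sudo mount /dev/sdb /var/openebs
    # Persist the mount across reboots
    echo '/dev/sdb /var/openebs xfs defaults 0 0' | sudo tee -a /etc/fstab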

  5. Optional: Run the following commands to view the services running on the Management node:

    sudo ./mission-control shell
    # You are now in a shell with kubectl on the path, configured to talk to the embedded cluster.
    kubectl get pods -A
    kubectl get nodes

    After the installation is complete, the results are displayed in the terminal.

    Results
    ✔ Host files materialized!
    ✔ Node installation finished!
    ✔ Storage is ready!
    ✔ Embedded Cluster is ready!
    ✔ Admin Console is ready!
    ✔ Additional components are ready!
    Visit the Admin Console to configure and install mission-control: https://KOTSADM_URL
  6. Enter the Admin Console URL in your browser to continue with the installation.

  7. Click Continue to Setup, select your certificate type, and then click Continue.

  8. On the Mission Control Admin Console, enter a password, and then click Log in.

  9. On the Nodes page, select Add node to add the remaining Management nodes.

  10. On the Add a Node dialog, select one or more roles to add.

    Select only one role per platform or database worker node. Assigning multiple roles to a single node might cause unexpected behavior and require you to reinstall KOTS.

    The UI lets you select multiple roles at once without holding the Shift or Ctrl key, which can result in multiple roles being assigned to the same node. If you accidentally select multiple roles, deselect any roles you don’t want to assign before copying the join command.

  11. Copy the join command and run it on the machine you want to add to the cluster. For HA, you must add at least three Management nodes with the mc-management role.

    Example join command
    sudo ./mission-control join IP_ADDRESS:3000 JOIN_TOKEN
  12. Close the dialog after you have selected the necessary roles and copied the join commands.

  13. Paste the join command into the terminal of the machine you want to add to the cluster:

    sudo ./mission-control join IP_ADDRESS:3000 JOIN_TOKEN

    Replace the following:

    • IP_ADDRESS: The IP address of the Management node

    • JOIN_TOKEN: The join token copied from the Management node

  14. Repeat the installation process on the remaining nodes.

    Each time you select Add node, the last role you selected is preselected by default. Deselect any roles you don’t want to assign so that you don’t add multiple roles to the same node.

    Optional: Run the following command to view the nodes you have added to the cluster:

    sudo k0s kubectl get nodes --show-labels
  15. After you have added all the nodes to the cluster, return to the Nodes page, and then click Continue.

  16. On the Configure Mission Control page, complete the configuration steps to install Mission Control. For more information, see Install and configure Mission Control.

What’s next

With all nodes online and the core runtime up and available, you can continue installing Mission Control.

  1. Open your web browser to the KOTSADM_URL returned in the results from the Management node.

  2. Sign in with the KOTSADM_PASSWORD to continue with Mission Control online installation.

Prerequisites
  • All planned servers are running and available.

  • The operating system is installed and the hosts comply with k0s system requirements.

  • The load balancer is online and set to forward TCP traffic on the following:

    • Port 6443 to Management nodes.

    • Port 30880 to platform and database Worker nodes.

  • The airgap installer bundle, mission-control.tar.gz, downloaded in advance. A transfer example follows this list.

  • Storage requirements are met for the embedded cluster. For more information, see Prerequisites in the Replicated documentation.
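Because the hosts cannot reach the internet, a common approach is to download the installation assets on a machine that has connectivity and then copy them to each node. A minimal sketch using scp follows; the user name, node address, and destination path are placeholders.

    # Run from the machine that downloaded the assets
    scp mission-control-stable.tgz USER@NODE_IP_ADDRESS:~/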

Management nodes

Mission Control uses the Kubernetes runtime, which distinguishes between Management and Worker nodes. Bring the Management nodes online first. After all Management nodes are available, join the platform and database Worker nodes.

  1. Download the installation assets:

    curl -f 'https://replicated.app/embedded/mission-control/stable/VERSION_NUMBER?airgap=true' -H "Authorization: LICENSE_ID" -o mission-control-stable.tgz

    Replace the following:

    • VERSION_NUMBER: By default, use latest, or specify a version number, such as v1.7.0, if you need to install a specific version.

    • LICENSE_ID: Your license ID. The ID is available in your Mission Control license file.

  2. Extract the installation assets:

    tar xvzf mission-control-stable.tgz

    The extracted tarball contains the following files:

    • mission-control: Mission Control installer

    • license.yaml: Mission Control license file

    • mission-control.airgap: Mission Control airgap bundle
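    To confirm that the extraction produced these files, you can list them:

    ls -lh mission-control license.yaml mission-control.airgap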

  3. Run the installation command:

    sudo ./mission-control install --license license.yaml --airgap-bundle mission-control.airgap

    The installer creates the Mission Control components and sets up the Kubernetes environment.

  4. Enter the Admin Console password when prompted. This sets your credentials for the KOTS Admin Console.

    Results
    ? Enter an Admin Console password: 
    ? Confirm password: 

    The installer prepares host files, installs the Kubernetes runtime, and sets up the storage and embedded cluster.

    Persistent data, such as SSTables, is stored on disk under /var/openebs. You must mount this folder on a volume that has enough disk space to host the HCD, DSE, or Cassandra data.

    Results
    ✔ Host files materialized!
    ✔ Node installation finished!
    ✔ Storage is ready!
    ✔ Embedded Cluster is ready!
    ✔ Admin Console is ready!
    ✔ Additional components are ready!
    Visit the Admin Console to configure and install mission-control: https://KOTSADM_URL
  5. Enter the Admin Console URL in your browser to continue with the installation.

  6. Click Continue to Setup, select your certificate type, and then click Continue.

  7. On the Log in to Mission Control Admin Console page, enter a password, and then click Log in.

  8. On the Nodes page, select Add node to add the remaining Management nodes.

  9. On the Add a Node dialog, select one or more roles to add.

    You can select multiple roles at once without holding the Shift or Ctrl key. If you selected multiple roles, deselect any roles you don’t want to assign before copying the join command.

  10. Copy the join command, and run it on the machine you want to add to the cluster. For HA, you must add at least three Management nodes with the mc-management role.

    Example join command
    sudo ./mission-control join IP_ADDRESS:3000 JOIN_TOKEN
  11. Close the dialog after you have selected the necessary roles and copied the join commands.

  12. Paste the join command into the terminal of the machine you want to add to the cluster:

    sudo ./mission-control join IP_ADDRESS:3000 JOIN_TOKEN

    Replace the following:

    • IP_ADDRESS: The IP address of the Management node

    • JOIN_TOKEN: The join token copied from the Management node

  13. Repeat the installation process on the remaining nodes.

    Each time you select Add node, the last role you selected is preselected by default. Deselect any roles you don’t want to assign so that you don’t add multiple roles to the same node.

    Optional: Run the following command to view the nodes you have added to the cluster:

    sudo ./mission-control shell
    kubectl get nodes --show-labels
  14. After you have added all the nodes to the cluster, return to the Nodes page, and then click Continue.

  15. On the Configure Mission Control page, complete the configuration steps to install Mission Control. For more information, see Install and configure Mission Control.

Next steps

With all nodes online and the core runtime up and available, you can continue installing Mission Control.

  1. Open your web browser to the KOTSADM_URL returned in the results from Management nodes.

  2. Log in with the KOTSADM_PASSWORD to continue with Mission Control offline installation.
