Install the Mission Control server runtime using the embedded Kubernetes cluster
When installing onto bare-metal or VM infrastructure, you must first run the Mission Control runtime installer to create and configure the Kubernetes cluster that hosts Mission Control. The runtime installer sets up the prerequisite services and then installs the Mission Control components. The runtime installer supports both online and offline (airgap) modes.
Installation of Mission Control with the embedded runtime requires minimal tasks on each host. The tasks vary depending on the role of the host.
Mission Control must be installed on a dedicated virtual machine (VM) or a bare-metal server that can be cloud-based or on-premises.
- Management nodes: The top level of a regional installation contains mc-management nodes that assign workloads to platform and database nodes, maintain the declarative desired state of the installations, and perform health checks of running services.
- Platform nodes: Run the shared resources for the environment, including all operators, the observability stack, and the web interface.
- Database nodes: Run the database instances.
You can run the same hardware for all of the installed platform and database nodes, or label each instance with its respective workload type, as sketched below. The following diagram highlights where each component exists within the cluster.
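Labeling uses standard Kubernetes node labels. The following is a minimal sketch only: the label key example.com/workload-type and the node names are hypothetical placeholders, not labels that Mission Control itself defines. Check the labels your cluster actually applies with kubectl get nodes --show-labels.
# Hypothetical label key and node names, for illustration only.
kubectl label node worker-1 example.com/workload-type=platform
kubectl label node worker-2 example.com/workload-type=database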
Choose one of the following two modes to install the core runtime for Mission Control:
- Online installation: Install into an environment where the hosts have internet access.
- Offline installation: Install into an environment that does not allow access to the internet. This is also known as an airgap installation.
Install the core runtime
Online installation
Before you begin, verify the following prerequisites:
- A product license file.
- The planned servers are running and available.
- The operating system is installed and the hosts comply with Replicated’s embedded cluster requirements.
- You have outbound connectivity to the internet.
- The load balancer is online and set to forward TCP traffic on the following ports, as sketched in the example after this list:
  - Port 6443 to Management nodes.
  - Port 30880 to Platform and Database nodes.
- You have 40Gi of free storage in the root (/) filesystem.
- The storage requirements are met for the embedded cluster. For more information, see Prerequisites in the Replicated documentation.
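How you configure the load balancer depends on your environment. As a minimal sketch, assuming HAProxy with placeholder backend addresses, the required TCP forwarding might look like the following fragment:
# Illustrative haproxy.cfg fragment; node names and addresses are placeholders.
frontend kube_api
    bind *:6443
    mode tcp
    default_backend management_nodes
backend management_nodes
    mode tcp
    server mgmt-1 10.0.0.11:6443 check
    server mgmt-2 10.0.0.12:6443 check
    server mgmt-3 10.0.0.13:6443 check
frontend mc_traffic
    bind *:30880
    mode tcp
    default_backend worker_nodes
backend worker_nodes
    mode tcp
    server platform-1 10.0.0.21:30880 check
    server database-1 10.0.0.31:30880 check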
Mission Control uses the Kubernetes runtime with its concept of Management and Worker nodes. Bring the Management nodes online first; after all of them are available, join the platform and database Worker nodes. Mission Control installation requires k0s.
- Download the installation assets to all the nodes in your cluster:
  curl -f https://replicated.app/embedded/mission-control/stable/VERSION_NUMBER -H "Authorization: LICENSE_ID" -o mission-control-stable.tgz
  Replace the following:
  - VERSION_NUMBER: The Mission Control version number. Use latest by default, or specify a version number, such as v1.7.0, if you need to install a specific version.
  - LICENSE_ID: The license ID that authenticates the download. The ID is available in your Mission Control license file.
- Extract the installation assets:
  tar xvzf mission-control-stable.tgz
- Run the installation script on the first Management node:
  sudo ./mission-control install --license license.yaml
  If you have less than 40Gi of free storage in the root (/) filesystem, you must add storage before installing.
  The installer creates the Mission Control components and sets up the Kubernetes environment.
- Enter the Admin Console password when prompted. This sets your credentials for the KOTS Admin Console.
  Results
  ? Enter an Admin Console password:
  ? Confirm password:
  The installer prepares host files, installs the Kubernetes runtime, and sets up the storage and embedded cluster.
  Persistent data, such as SSTables, is stored on disk under /var/openebs. You must mount this directory on a volume that has enough disk space to host the HCD, DSE, or Cassandra data. For one way to do this, see the mount sketch at the end of this procedure.
- Optional: Run the following commands to view the services running on the Management node:
  sudo ./mission-control shell
  # You are now in a shell with kubectl on the path, configured to talk to the cluster.
  kubectl get pods -A
  kubectl get nodes
  After the installation is complete, the results are displayed in the terminal.
Results
✔ Host files materialized!
✔ Node installation finished!
✔ Storage is ready!
✔ Embedded Cluster is ready!
✔ Admin Console is ready!
✔ Additional components are ready!
Visit the Admin Console to configure and install mission-control: https://KOTSADM_URL
- Enter the Admin Console URL in your browser to continue with the installation.
- Click Continue to Setup, select your certificate type, and then click Continue.
- On the Mission Control Admin Console, enter your password, and then click Log in.
- On the Nodes page, select Add node to add the remaining Management nodes.
- On the Add a Node dialog, select one or more roles to add.
  Select only one role per platform or database Worker node. Assigning multiple roles to a single node might cause unexpected behavior and require you to reinstall KOTS. The UI allows you to select multiple roles at once without holding the Shift or Ctrl key, which can lead to multiple roles being assigned to the same node. If you accidentally select multiple roles, deselect any roles that you don’t want to assign.
- Copy the join command and run it on the machine that you want to add to the cluster. For HA, you must add at least three Management nodes with the mc-management role.
  Example join command
  sudo ./mission-control join IP_ADDRESS:3000 JOIN_TOKEN
- Close the dialog after you have selected the necessary roles and copied the join commands.
- Paste the join command into the terminal of the machine that you want to add to the cluster:
  sudo ./mission-control join IP_ADDRESS:3000 JOIN_TOKEN
  Replace the following:
  - IP_ADDRESS: The IP address of the Management node.
  - JOIN_TOKEN: The join token copied from the Management node.
- Repeat the installation process on the remaining nodes.
  Each time you select Add node, the last role that you selected is preselected by default. Deselect any roles that you don’t want to add to avoid assigning multiple roles to the same node.
  Optional: Run the following command to view the nodes that you have added to the cluster:
  sudo k0s kubectl get nodes --show-labels
- After you have added all the nodes to the cluster, return to the Nodes page, and then click Continue.
- On the Configure Mission Control page, complete the configuration steps to install Mission Control. For more information, see Install and configure Mission Control.
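As referenced earlier in this procedure, /var/openebs must sit on a volume large enough for your database data. The following is a minimal sketch only, assuming an unused data disk at the hypothetical device path /dev/sdb; adjust the device and filesystem for your environment:
# Illustrative only: /dev/sdb is a placeholder for an unused data disk.
sudo mkfs.xfs /dev/sdb
sudo mkdir -p /var/openebs
echo '/dev/sdb /var/openebs xfs defaults 0 0' | sudo tee -a /etc/fstab
sudo mount /var/openebs
df -h /var/openebs  # confirm the capacity fits your HCD, DSE, or Cassandra data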
With all nodes online and the core runtime up and available, you can continue installing Mission Control.
- Open your web browser to the KOTSADM_URL returned in the results from the Management node.
- Sign in with the KOTSADM_PASSWORD to continue with the Mission Control online installation.
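Optionally, before you continue, you can confirm from the first Management node that every node has joined and is Ready, reusing the shell commands shown earlier:
sudo ./mission-control shell
kubectl get nodes -o wide
# All Management, platform, and database nodes should report a Ready status.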
Offline (airgap) installation
Before you begin, verify the following prerequisites:
- All planned servers are running and available.
- The operating system is installed and the hosts comply with k0s system requirements.
- The load balancer is online and set to forward TCP traffic on the following ports:
  - Port 6443 to Management nodes.
  - Port 30880 to platform and database Worker nodes.
- The airgap installer, which you can download as mission-control.tar.gz. For one way to move it onto the airgapped hosts, see the transfer sketch after this list.
- The storage requirements are met for the embedded cluster. For more information, see Prerequisites in the Replicated documentation.
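Because the airgapped hosts cannot reach the internet, you download the assets on a connected machine and copy them to each node. A minimal sketch, assuming SSH access; hostnames, users, and paths are placeholders:
# On a machine with internet access, record a checksum of the bundle:
sha256sum mission-control-stable.tgz > mission-control-stable.tgz.sha256
# Copy the bundle and checksum to each airgapped node:
scp mission-control-stable.tgz mission-control-stable.tgz.sha256 admin@mgmt-1:/tmp/
# On each node, verify the transfer before extracting:
cd /tmp && sha256sum -c mission-control-stable.tgz.sha256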
Mission Control uses the Kubernetes runtime with its concept of Management and Worker nodes. Bring the Management nodes online first; after all of them are available, join the platform and database Worker nodes.
- Download the installation assets:
  curl -f 'https://replicated.app/embedded/mission-control/stable/VERSION_NUMBER?airgap=true' -H "Authorization: LICENSE_ID" -o mission-control-stable.tgz
  Replace the following:
  - VERSION_NUMBER: Use latest by default, or specify a version number, such as v1.7.0, if you need to install a specific version.
  - LICENSE_ID: Your license ID. The ID is available in your Mission Control license file.
- Extract the installation assets:
  tar xvzf mission-control-stable.tgz
  The mission-control.tar.gz tarball contains the following files:
  - mission-control: The Mission Control installer
  - license.yaml: The Mission Control license file
  - mission-control.airgap: The Mission Control airgap bundle
- Run the installation command:
  sudo ./mission-control install --license license.yaml --airgap-bundle mission-control.airgap
  The installer creates the Mission Control components and sets up the Kubernetes environment.
- Enter the Admin Console password when prompted. This sets your credentials for the KOTS Admin Console.
  Results
  ? Enter an Admin Console password:
  ? Confirm password:
  The installer prepares host files, installs the Kubernetes runtime, and sets up the storage and embedded cluster.
  Persistent data, such as SSTables, is stored on disk under /var/openebs. You must mount this directory on a volume that has enough disk space to host the HCD, DSE, or Cassandra data. For one way to do this, see the mount sketch in the online installation procedure.
  Results
  ✔ Host files materialized!
  ✔ Node installation finished!
  ✔ Storage is ready!
  ✔ Embedded Cluster is ready!
  ✔ Admin Console is ready!
  ✔ Additional components are ready!
  Visit the Admin Console to configure and install mission-control: https://KOTSADM_URL
- Enter the Admin Console URL in your browser to continue with the installation.
- Click Continue to Setup, select your certificate type, and then click Continue.
- On the Log in to Mission Control Admin Console page, enter your password, and then click Log in.
- On the Nodes page, select Add node to add the remaining Management nodes.
- On the Add a Node dialog, select one or more roles to add.
  You can select multiple roles at once; you don’t need to hold the Shift or Ctrl key. If you selected multiple roles, deselect any roles that you don’t want to add.
- Copy the join command, and run it on the machine that you want to add to the cluster. For HA, you must add at least three Management nodes with the mc-management role.
  Example join command
  sudo ./mission-control join IP_ADDRESS:3000 JOIN_TOKEN
- Close the dialog after you have selected the necessary roles and copied the join commands.
- Paste the join command into the terminal of the machine that you want to add to the cluster:
  sudo ./mission-control join IP_ADDRESS:3000 JOIN_TOKEN
  Replace the following:
  - IP_ADDRESS: The IP address of the Management node.
  - JOIN_TOKEN: The join token copied from the Management node.
- Repeat the installation process on the remaining nodes.
  Each time you select Add node, the last role that you selected is preselected by default. Deselect any roles that you don’t want to add to avoid assigning multiple roles to the same node.
  Optional: Run the following commands to view the nodes that you have added to the cluster:
  sudo ./mission-control shell
  kubectl get nodes --show-labels
- After you have added all the nodes to the cluster, return to the Nodes page, and then click Continue.
- On the Configure Mission Control page, complete the configuration steps to install Mission Control. For more information, see Install and configure Mission Control.
With all nodes online and the core runtime up and available, you can continue installing Mission Control.
- Open your web browser to the KOTSADM_URL returned in the results from the Management node.
- Log in with the KOTSADM_PASSWORD to continue with the Mission Control offline installation.