Install the Mission Control server runtime using the embedded Kubernetes cluster
When you install onto bare-metal or VM infrastructure, you must first run the Mission Control runtime installer to create and configure Kubernetes on the hosts where you are installing Mission Control. The runtime installer sets up the prerequisite services and then installs the Mission Control components. The runtime installer supports both online and offline (airgap) modes.
Installation of Mission Control with the embedded runtime requires minimal tasks on each host. The tasks vary depending on the role of the host.
Mission Control must be installed on a dedicated virtual machine (VM) or a bare-metal server that can be cloud-based or on-premises.
- Management nodes: The top level of a regional installation contains mc-management nodes that assign workloads to platform and database nodes, maintain the declarative desired state of the installations, and perform health checks of running services.
- Platform nodes: Run the shared resources for the environment, including all operators, the observability stack, and the web interface.
- Database nodes: Run the database instances.
You can run the same hardware for each of the installed platform and database nodes, or label each instance with its respective workload type. The following diagram highlights where each component exists within the cluster.
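If you choose to dedicate hardware to a workload type, you can label the Worker nodes after they join the cluster later in this procedure. The following sketch uses standard k0s kubectl commands; the node name and the label key example.com/workload are placeholders for illustration, not labels defined by Mission Control.

# List the joined nodes and their current labels.
sudo k0s kubectl get nodes --show-labels

# Example only: tag a node that is dedicated to database workloads.
sudo k0s kubectl label node db-node-1 example.com/workload=database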
Choose one of the following two modes to install the core runtime for Mission Control:
- Online installation: Into an environment where internet access is available to the hosts.
- Offline installation: Into an environment that does not allow access to the internet. This is also known as an airgap installation.
Check the connectivity between nodes
Before you begin the installation, ensure that the nodes can communicate with each other on all required ports.
To check node connectivity, do the following:
- Download the Mission Control command line tool, mcctl, to help you with the installation process.
- Upload the mcctl binary to each host where you plan to install Mission Control, and then run the following command on the management nodes to check the connectivity between the servers:
mcctl ports server
The results are similar to following:
SUCCESS TCP listener started
SUCCESS UDP listener started
SUCCESS Use 'mcctl ports client HOST' on one of the worker nodes to ensure ports are accessible
TCP server listening on port 10257
TCP server listening on port 2379
TCP server listening on port 10249
TCP server listening on port 9099
TCP server listening on port 2380
TCP server listening on port 10256
TCP server listening on port 10248
TCP server listening on port 10259
TCP server listening on port 9443
TCP server listening on port 9091
UDP server listening on port 4789
TCP server listening on port 10250
TCP server listening on port 6443
TCP server listening on port 50000
TCP server listening on port 30000
▀ Listening for connections... (8s)
- On the platform and database nodes, check the connectivity between the servers:
mcctl ports client HOST
Replace HOST with the IP address of each of the management nodes, one after the other. The tool tests all ports for connectivity, and the results display in the terminal.
2025/01/24 08:26:39 INFO Testing ports against the server located at HOST...
2025/01/24 08:26:39 INFO Checking TCP ports...
2025/01/24 08:26:39 INFO Checking TCP port 2379
2025/01/24 08:26:39 INFO Response from TCP port 2379: Echo: ping
2025/01/24 08:26:39 INFO Checking TCP port 2380
2025/01/24 08:26:39 INFO Response from TCP port 2380: Echo: ping
2025/01/24 08:26:39 INFO Checking TCP port 6443
2025/01/24 08:26:39 INFO Response from TCP port 6443: Echo: ping
2025/01/24 08:26:39 INFO Checking TCP port 9091
2025/01/24 08:26:39 INFO Response from TCP port 9091: Echo: ping
2025/01/24 08:26:39 INFO Checking TCP port 9099
2025/01/24 08:26:39 INFO Response from TCP port 9099: Echo: ping
2025/01/24 08:26:39 INFO Checking TCP port 9443
2025/01/24 08:26:39 INFO Response from TCP port 9443: Echo: ping
2025/01/24 08:26:39 INFO Checking TCP port 10248
2025/01/24 08:26:39 INFO Response from TCP port 10248: Echo: ping
2025/01/24 08:26:39 INFO Checking TCP port 10249
2025/01/24 08:26:39 INFO Response from TCP port 10249: Echo: ping
2025/01/24 08:26:39 INFO Checking TCP port 10250
2025/01/24 08:26:39 INFO Response from TCP port 10250: Echo: ping
2025/01/24 08:26:39 INFO Checking TCP port 10256
2025/01/24 08:26:39 INFO Response from TCP port 10256: Echo: ping
2025/01/24 08:26:39 INFO Checking TCP port 10257
2025/01/24 08:26:39 INFO Response from TCP port 10257: Echo: ping
2025/01/24 08:26:39 INFO Checking TCP port 10259
2025/01/24 08:26:39 INFO Response from TCP port 10259: Echo: ping
2025/01/24 08:26:39 INFO Checking TCP port 30000
2025/01/24 08:26:39 INFO Response from TCP port 30000: Echo: ping
2025/01/24 08:26:39 INFO Checking TCP port 50000
2025/01/24 08:26:39 INFO Response from TCP port 50000: Echo: ping
2025/01/24 08:26:39 INFO Checking UDP ports...
2025/01/24 08:26:39 INFO Response from UDP port 4789: Echo: ping
2025/01/24 08:26:39 INFO All ports checked successfully
- If no errors appear, proceed with the installation. If there are errors, resolve them before proceeding by opening the appropriate ports and protocols for access.
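How you open a blocked port depends on your environment. The following sketch assumes a host that uses firewalld and the standard nc utility; the port list mirrors the mcctl output above, and you should adapt the commands to your firewall or cloud security groups.

# Example only: open the reported TCP and UDP ports with firewalld.
sudo firewall-cmd --permanent --add-port=6443/tcp --add-port=2379-2380/tcp --add-port=9091/tcp --add-port=9099/tcp --add-port=9443/tcp
sudo firewall-cmd --permanent --add-port=10248/tcp --add-port=10249/tcp --add-port=10250/tcp --add-port=10256/tcp --add-port=10257/tcp --add-port=10259/tcp
sudo firewall-cmd --permanent --add-port=30000/tcp --add-port=50000/tcp --add-port=4789/udp
sudo firewall-cmd --reload

# Spot-check a single port from another node.
nc -vz MANAGEMENT_NODE_IP 6443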
Install the core runtime
Online installation

Before you begin an online installation, ensure that the following prerequisites are met:
- A product license file.
- The planned servers are running and available.
- The operating system is installed and the hosts comply with Replicated’s embedded cluster requirements.
- You have outbound connectivity to the internet.
- The load balancer is online and set to forward TCP traffic on the following (see the configuration sketch after this list):
  - Port 6443 to the Management nodes.
  - Port 30880 to the Platform and Database nodes.
- You have 40Gi of storage in your ROOT folder.
- The storage requirements are met for the embedded cluster. For more information, see Prerequisites in the Replicated documentation.
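The load balancer configuration depends on the product you use. The following haproxy.cfg fragment is only a sketch of the TCP forwarding described in the prerequisites; the backend names and IP addresses are placeholders, not values supplied by Mission Control.

frontend kube_api
    bind *:6443
    mode tcp
    default_backend management_nodes

backend management_nodes
    mode tcp
    server mgmt1 10.0.0.11:6443 check
    server mgmt2 10.0.0.12:6443 check
    server mgmt3 10.0.0.13:6443 check

frontend mission_control_ui
    bind *:30880
    mode tcp
    default_backend worker_nodes

backend worker_nodes
    mode tcp
    server platform1 10.0.0.21:30880 check
    server database1 10.0.0.31:30880 check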
Mission Control uses the Kubernetes runtime with its concept of Management and Worker nodes. Bring the Management nodes online first; after all of them become available, the platform and database Worker nodes join. Mission Control installation requires k0s.
- Download the installation assets to all the nodes in your cluster:
curl -f https://replicated.app/embedded/mission-control/stable -H "Authorization: LICENSE_ID" -o mission-control-stable.tgz
Replace LICENSE_ID with your license ID to authenticate the download. The ID is available in your Mission Control license file.
- Extract the installation assets:
tar xvzf mission-control-stable.tgz
- Run the following installation script on the first Management node:
sudo ./mission-control install --license license.yaml
If you have less than 40Gi of storage in your ROOT folder, you must add additional storage.
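As a quick sanity check before running the installer, you can confirm the available space with standard Linux tooling; this is an illustration, not a Mission Control command.

# Confirm that the root filesystem has at least 40Gi free.
df -h /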
The installer creates the Mission Control components and sets up the Kubernetes environment.
- Enter the Admin Console password when prompted. This sets your credentials for the KOTS Admin Console.
Results
? Enter an Admin Console password:
? Confirm password:
The installer prepares host files, installs the Kubernetes runtime, and sets up the storage and embedded cluster.
Persistent data, such as SSTables, is stored on disk under /var/openebs. You must mount this folder on a volume that has enough disk space to host the HCD, DSE, or Cassandra data.
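How you provide that space is up to you. One possible approach, assuming an unused block device named /dev/sdb, is to format it and mount it at /var/openebs before you run the installer; adjust the device name and filesystem type to your environment.

# Example only: dedicate an empty disk to /var/openebs.
sudo mkfs.xfs /dev/sdb
sudo mkdir -p /var/openebs
echo '/dev/sdb /var/openebs xfs defaults 0 0' | sudo tee -a /etc/fstab
sudo mount /var/openebs
df -h /var/openebs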
- Optional: Run the following k0s commands to view the services running on the Management node:
sudo ./mission-control shell
# OUTPUT GOES HERE
# Note the user is now in a terminal shell with kubectl on the path configured to talk to the cluster.
kubectl get pods -A
kubectl get nodes
After the installation is complete, the results are displayed in the terminal.
Results
✔ Host files materialized!
✔ Node installation finished!
✔ Storage is ready!
✔ Embedded Cluster is ready!
✔ Admin Console is ready!
✔ Additional components are ready!
Visit the Admin Console to configure and install mission-control: https://KOTSADM_URL
- Enter the Admin Console URL in your browser to continue with the installation.
- Click Continue to Setup, select your certificate type, and then click Continue.
- On the Mission Control Admin Console, enter a password, and then click Log in.
- On the Nodes page, select Add node to add the remaining Management nodes.
- On the Add a Node dialog, select one or more roles to add.
Select only one role per platform or database Worker node. Assigning multiple roles to a single node might cause unexpected behavior and require you to reinstall KOTS. The UI lets you select multiple roles at once without holding the Shift or Ctrl key, which can result in multiple roles being assigned to the same node. If you accidentally select multiple roles, deselect any roles that you don’t want to assign.
- Copy the join command and run it on the machine you want to add to the cluster. For HA, you must add at least three Management nodes with the mc-management role.
Example join command
sudo ./mission-control join IP_ADDRESS:3000 JOIN_TOKEN
- Close the dialog after you have selected the necessary roles and copied the join commands.
- Paste the join command into the terminal of the machine you want to add to the cluster:
sudo ./mission-control join IP_ADDRESS:3000 JOIN_TOKEN
Replace the following:
- IP_ADDRESS: The IP address of the Management node.
- JOIN_TOKEN: The join token copied from the Management node.
- Repeat the installation process on the remaining nodes.
Each time you select Add node, the last role you selected is preselected by default. You must deselect the applicable button for any roles you don’t want to add to avoid adding multiple roles to the same node.
Optional: Run the following command to view the nodes you have added to the cluster:
sudo k0s kubectl get nodes --show-labels
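If you prefer to script the check instead of reading the label output, standard kubectl can block until every joined node reports Ready. This is ordinary Kubernetes tooling, not a Mission Control requirement.

# Wait up to 10 minutes for all joined nodes to become Ready, then list them.
sudo k0s kubectl wait --for=condition=Ready node --all --timeout=10m
sudo k0s kubectl get nodes -o wide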
- After you have added all the nodes to the cluster, return to the Nodes page, and then click Continue.
- On the Configure Mission Control page, complete the configuration steps to install Mission Control. For more information, see Install and configure Mission Control.
With all nodes online and the core runtime up and available, you can continue installing Mission Control.
- Open your web browser to the KOTSADM_URL returned in the results from the Management node.
- Sign in with the KOTSADM_PASSWORD to continue with the Mission Control online installation.
Offline (airgap) installation

Before you begin an offline (airgap) installation, ensure that the following prerequisites are met:
- All planned servers are running and available.
- The operating system is installed and the hosts comply with Replicated’s embedded cluster requirements.
- The load balancer is online and set to forward TCP traffic on the following:
  - Port 6443 to the Management nodes.
  - Port 30880 to the platform and database Worker nodes.
- The airgap installer, which you can download as mission-control.tar.gz.
- The storage requirements are met for the embedded cluster. For more information, see Prerequisites in the Replicated documentation.
Mission Control uses the Kubernetes runtime with its concept of Management and Worker nodes. Bring the Management nodes online first; after all of them become available, the platform and database Worker nodes join.
- Download the installation assets:
curl -f 'https://replicated.app/embedded/mission-control/stable?airgap=true' -H "Authorization: LICENSE_ID" -o mission-control-stable.tgz
Replace LICENSE_ID with your license ID. The ID is available in your Mission Control license file.
- Extract the installation assets:
tar xvzf mission-control-stable.tgz
The mission-control.tar.gz tarball contains the following files:
- mission-control: Mission Control installer
- license.yaml: Mission Control license file
- mission-control.airgap: Mission Control airgap bundle
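If you downloaded and extracted the assets on an internet-connected machine, copy them to each airgapped host before you run the installer. The scp example below is only one way to do this; the user name and host address are placeholders.

# Example only: copy the extracted assets to an airgapped node.
scp mission-control license.yaml mission-control.airgap USER@AIRGAP_NODE_IP:~/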
- Run the installation command:
sudo ./mission-control install --license license.yaml --airgap-bundle mission-control.airgap
The installer creates the Mission Control components and sets up the Kubernetes environment.
- Enter the Admin Console password when prompted. This sets your credentials for the KOTS Admin Console.
Results
? Enter an Admin Console password:
? Confirm password:
The installer prepares host files, installs the Kubernetes runtime, and sets up the storage and embedded cluster.
Persistent data, such as SSTables, is stored on disk under /var/openebs. You must mount this folder on a volume that has enough disk space to host the HCD, DSE, or Cassandra data.
Results
✔ Host files materialized!
✔ Node installation finished!
✔ Storage is ready!
✔ Embedded Cluster is ready!
✔ Admin Console is ready!
✔ Additional components are ready!
Visit the Admin Console to configure and install mission-control: https://KOTSADM_URL
- Enter the Admin Console URL in your browser to continue with the installation.
- Click Continue to Setup, select your certificate type, and then click Continue.
- On the Log in to Mission Control Admin Console page, enter a password, and then click Log in.
- On the Nodes page, select Add node to add the remaining Management nodes.
- On the Add a Node dialog, select one or more roles to add.
You can select multiple roles at once without holding the Shift or Ctrl key. If you selected multiple roles, deselect any roles that you don’t want to add.
- Copy the join command, and run it on the machine you want to add to the cluster. For HA, you must add three Management nodes with the mc-management role.
Example join command
sudo ./mission-control join IP_ADDRESS:3000 JOIN_TOKEN
- Close the dialog after you have selected the necessary roles and copied the join commands.
- Paste the join command into the terminal of the machine you want to add to the cluster:
sudo ./mission-control join IP_ADDRESS:3000 JOIN_TOKEN
Replace the following:
- IP_ADDRESS: The IP address of the Management node.
- JOIN_TOKEN: The join token copied from the Management node.
- Repeat the installation process on the remaining nodes.
Each time you select Add node, the last role you selected is preselected by default. You must deselect the applicable button for any roles you don’t want to add to avoid adding multiple roles to the same node.
Optional: Run the following command to view the nodes you have added to the cluster:
sudo mission-control shell
kubectl get nodes --show-labels
- After you have added all the nodes to the cluster, return to the Nodes page, and then click Continue.
- On the Configure Mission Control page, complete the configuration steps to install Mission Control. For more information, see Install and configure Mission Control.
With all nodes online and the core runtime up and available, you can continue installing Mission Control.
- Open your web browser to the KOTSADM_URL returned in the results from the Management nodes.
- Log in with the KOTSADM_PASSWORD to continue with the Mission Control offline installation.