Mission Control server runtime installer
When installing onto bare-metal or VM infrastructure, you must first run the Mission Control runtime installer to create and configure Kubernetes on the hosts where you are installing Mission Control. The runtime installer sets up the prerequisite services and then installs the Mission Control components. It supports both online and offline (airgap) modes.
Installation of Mission Control with the embedded runtime requires minimal tasks on each host. The tasks vary depending on the role of the host.
Mission Control must be installed on a dedicated virtual machine (VM) or a bare-metal server, which can be cloud-based or on-premises.
- Primary nodes: The top level of a regional installation contains Primary nodes, which assign workloads to Secondary nodes, maintain the declarative desired state of the installation, and perform health checks of running services.
- Secondary nodes: Run the various workloads of Mission Control. These nodes are further defined by their workload, with each node having its own resource requirements. You can run the same hardware for each of the installed Secondary nodes or label each instance with its respective workload type. The following diagram highlights where each component exists within the cluster.

You must label all Secondary nodes as they join the environment. The label and command are provided during the appropriate installation step; if a node joins unlabeled, see the example after these notes.
When using a public cloud load balancer, see the kURL public cloud load balancer documentation for configuration best practices and common issues.
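The join command normally applies the workload label for you. If a Secondary node joined without its label, you can apply it afterward with kubectl. A minimal sketch, where NODE_NAME is a placeholder for the name reported by kubectl get nodes:

# Apply the workload label to an already-joined Secondary node.
# NODE_NAME is a placeholder; use role=platform instead where appropriate.
kubectl label node NODE_NAME mission-control.datastax.com/role=database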
Choose one of the following two modes to install the core runtime for Mission Control:

- Online: install into an environment where internet access is available to the hosts.
- Offline: install into an environment that does not allow access to the internet. This is also known as an airgap install.
Online install
Prerequisites:

- All planned servers are running and available.
- The operating system is installed and the hosts comply with the kURL system requirements.
- The hosts have outbound connectivity to the internet.
- The load balancer is online and forwarding TCP traffic on the following ports (a connectivity check follows this list):
  - Port 6443 to the Primary nodes.
  - Port 30880 to the Secondary nodes.
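Before running the installer, you can sanity-check the load balancer from any host. This is a minimal sketch, assuming nc (netcat) is installed and LB_ADDRESS is a placeholder for your load balancer's IP address or DNS name; the checks only succeed once the corresponding backend nodes are serving traffic:

# Confirm the load balancer accepts TCP connections on the forwarded ports.
nc -z -w 5 LB_ADDRESS 6443 && echo "6443 open" || echo "6443 closed"
nc -z -w 5 LB_ADDRESS 30880 && echo "30880 open" || echo "30880 closed"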
Mission Control uses a Kubernetes runtime with its concept of Primary and Secondary nodes. Bring the Primary nodes online first; after all of them are available, the Secondary nodes join.
1. Install on the first Primary node.
Download and run the following installation script (with substitutions) on the first Primary node:
curl -sSL https://kurl.sh/mission-control | sudo bash -s ha load-balancer-address=PRIMARY_NODES_LB:6443
Replace PRIMARY_NODES_LB with the IP address or DNS name of the load balancer directing traffic to the Primary nodes. The installer asks a number of questions based on the host's specific hardware and existing configuration.
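For example, assuming the Primary node load balancer is reachable at the hypothetical address lb.mc.internal:

curl -sSL https://kurl.sh/mission-control | sudo bash -s ha load-balancer-address=lb.mc.internal:6443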
Sample results, from which you will copy the unique JOIN commands and KOTS values:

Installation Complete ✔

Kotsadm: KOTSADM_URL
Login with password (will not be shown again): KOTSADM_PASSWORD
This password has been set for you by default. It is recommended that you change
this password; this can be done with the following command:

    KOTS_RESET_PASSWORD_COMMAND

To access the cluster with kubectl:

    bash -l

Kurl uses /etc/kubernetes/admin.conf, you might want to copy kubeconfig to your home directory:

    cp /etc/kubernetes/admin.conf ~/.kube/config
    chown -R root ~/.kube
    echo unset KUBECONFIG >> ~/.bash_profile
    bash -l

You will likely need to use sudo to copy and chown /etc/kubernetes/admin.conf.

Master node join commands expire after two hours, and worker node join commands expire
after 24 hours. To generate new node join commands, run KURL_JOIN_COMMAND_GENERATOR on
an existing master node.

To add worker nodes to this installation, run the following script on your other nodes:

    KURL_SECONDARY_JOIN_COMMAND

To add MASTER nodes to this installation, run the following script on your other nodes:

    KURL_PRIMARY_JOIN_COMMAND
This output includes the following values:

- KOTSADM_URL: the URL for accessing the KOTS admin interface.
- KOTSADM_PASSWORD: a randomly generated password for the KOTS admin interface.
- KOTS_RESET_PASSWORD_COMMAND: command for manually setting the KOTS admin interface password.
- KURL_JOIN_COMMAND_GENERATOR: command for regenerating the KURL_SECONDARY_JOIN_COMMAND and KURL_PRIMARY_JOIN_COMMAND.
- KURL_SECONDARY_JOIN_COMMAND: command to have Secondary nodes join the Primary nodes.
- KURL_PRIMARY_JOIN_COMMAND: command to be run on the additional Primary nodes.

Example join command:
curl -fsSL https://kurl.sh/version/v2023.01.13-1/95569f3/join.sh | sudo bash -s \
  kubernetes-master-address=10.154.15.203:6443 \
  kubeadm-token=pjxtic.8jrj88214t1tcyfq \
  kubeadm-token-ca-hash=sha256:7f3374d6e8f1971d33c6a9edb16bac5bc6e2c98d2f7f6fa4209a8178b749d462 \
  kubernetes-version=1.19.16 \
  docker-registry-ip=10.96.2.26 \
  primary-host=10.154.15.203 \
  labels=mission-control.datastax.com/role=database
Append either labels=mission-control.datastax.com/role=platform or labels=mission-control.datastax.com/role=database to the node join command for each Secondary node.
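For illustration, the same join command labeled for a platform node; every value except the label is copied from the example above and will differ in your own installer output:

curl -fsSL https://kurl.sh/version/v2023.01.13-1/95569f3/join.sh | sudo bash -s \
  kubernetes-master-address=10.154.15.203:6443 \
  kubeadm-token=pjxtic.8jrj88214t1tcyfq \
  kubeadm-token-ca-hash=sha256:7f3374d6e8f1971d33c6a9edb16bac5bc6e2c98d2f7f6fa4209a8178b749d462 \
  kubernetes-version=1.19.16 \
  docker-registry-ip=10.96.2.26 \
  primary-host=10.154.15.203 \
  labels=mission-control.datastax.com/role=platform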
2. Validation.
After the installer completes, validate that the first Primary node is up and running:
kubectl get nodes
Sample results should be similar to:

NAME             STATUS   ROLES                  AGE   VERSION
kurl-primary-1   Ready    control-plane,master   14m   v1.26.13
If your results do not include the single Primary node, open a DataStax support ticket.
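If you are scripting the installation, you can block until the node reports Ready rather than polling kubectl get nodes. A minimal sketch using kubectl's built-in wait:

# Wait up to five minutes for every current node to report Ready.
kubectl wait --for=condition=Ready node --all --timeout=300s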
3. Install on the remaining Primary nodes.
After the first Primary node is online, run the KURL_PRIMARY_JOIN_COMMAND on the remaining Primary nodes. This command is unique to your installation. Retrieve it, with its substituted values, from the results in Step 1, looking for the following line:
# To add MASTER nodes to this installation, run the following script on your other nodes: KURL_PRIMARY_JOIN_COMMAND
Each node joins the existing nodes to form a set of highly available core cluster services.
4. Validation.
After the installation process is complete on all Primary nodes, validate that they are all online and available:
kubectl get nodes
Sample results with a record displayed for each Primary node:

NAME             STATUS   ROLES                  AGE     VERSION
kurl-primary-1   Ready    control-plane,master   21m     v1.26.13
kurl-primary-2   Ready    control-plane,master   5m35s   v1.26.13
kurl-primary-3   Ready    control-plane,master   115s    v1.26.13
5. Taint the Primary nodes.
Now that all Primary nodes are online, taint these instances to prevent them from receiving any additional workloads. Given the importance of their role in the environment, they must be dedicated to the distribution of tasks across the cluster.
curl -L https://k8s.kurl.sh/latest/tasks.sh | sudo bash -s taint_primaries
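To confirm the taints were applied, list each node's taints. A minimal check using kubectl's jsonpath output; the exact taint key is whatever the kURL task sets:

# Print each node name followed by its taints.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'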
All other hosts within a regional environment are considered Secondary nodes. These nodes include management, observability, and database instances. Each node communicates with the previously installed Primary nodes to receive its appropriate workload and configuration information.
6. Install on the Secondary nodes.
Run the KURL_SECONDARY_JOIN_COMMAND on each Secondary node. This command is unique to your installation. Retrieve it, with its substituted values, from the results in Step 1, looking for the following line:

# To add worker nodes to this installation, run the following script on your other nodes: KURL_SECONDARY_JOIN_COMMAND
Append either labels=mission-control.datastax.com/role=platform or labels=mission-control.datastax.com/role=database to the node join command. Each of these nodes joins the Primary nodes to form the core environment where all of the Mission Control components are installed. Secondary nodes are categorized based on their workloads (platform or database).
7. Validation.
After each Secondary node is brought online, validate that it has joined the cluster:
kubectl get nodes
There should be a record displayed for each Primary and Secondary node, as in the following example:

NAME                  STATUS   ROLES                  AGE     VERSION
kurl-primary-1        Ready    control-plane,master   49m     v1.26.13
kurl-primary-2        Ready    control-plane,master   33m     v1.26.13
kurl-primary-3        Ready    control-plane,master   29m     v1.26.13
kurl-secondary-56gv   Ready    <none>                 9m9s    v1.26.13
kurl-secondary-86n9   Ready    <none>                 9m10s   v1.26.13
kurl-secondary-b5vt   Ready    <none>                 8m24s   v1.26.13
kurl-secondary-bg7b   Ready    <none>                 8m12s   v1.26.13
kurl-secondary-cm55   Ready    <none>                 7m34s   v1.26.13
kurl-secondary-czmh   Ready    <none>                 7m14s   v1.26.13
kurl-secondary-dn8j   Ready    <none>                 7m9s    v1.26.13
kurl-secondary-snf0   Ready    <none>                 7m13s   v1.26.13
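You can also confirm that every Secondary node carries its workload label by printing the label as a column:

# Secondary nodes should show platform or database; Primary nodes show an empty value.
kubectl get nodes -L mission-control.datastax.com/role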
With all nodes online and the core runtime up and available, you can continue installing Mission Control. Open your web browser to the KOTSADM_URL returned in the results from Step 1, and log in with the KOTSADM_PASSWORD to continue with the Mission Control online installation.
Offline (airgap) install

Prerequisites:

- All planned servers are running and available.
- The operating system is installed and the hosts comply with the kURL system requirements.
- The load balancer is online and forwarding TCP traffic on:
  - Port 6443 to the Primary nodes.
  - Port 30880 to the Secondary nodes.
- The kURL airgap installer, mission-control.tar.gz, is downloaded.
Mission Control uses a Kubernetes runtime with its concept of Primary and Secondary nodes. Bring the Primary nodes online first; after all of them are available, the Secondary nodes join.
1. Upload the kURL installer.
To begin the installation process, copy the kURL installer (with substitutions) to each of the nodes and extract it:

scp mission-control.tar.gz SSH_USERNAME@HOST_IP:/home/SSH_USERNAME/
ssh SSH_USERNAME@HOST_IP
mkdir mission-control
tar xvzpf mission-control.tar.gz --directory mission-control
cd mission-control
Replace the following:

- SSH_USERNAME: the username used to connect to the remote server over SSH.
- HOST_IP: the IP address of the host.
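If you are staging the bundle on many nodes, a small loop saves repetition. A minimal sketch assuming SSH key access; NODE_IPS and SSH_USERNAME are placeholders for your environment:

# Copy and unpack the airgap bundle on every node.
NODE_IPS=(10.0.0.11 10.0.0.12 10.0.0.13)
for host in "${NODE_IPS[@]}"; do
  scp mission-control.tar.gz "SSH_USERNAME@${host}:/home/SSH_USERNAME/"
  ssh "SSH_USERNAME@${host}" \
    'mkdir -p mission-control && tar xvzpf mission-control.tar.gz --directory mission-control'
done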
2. Install on the first Primary node.
Run the following installation script (with substitutions) on the first Primary node:
cat install.sh | sudo bash -s airgap ha load-balancer-address=PRIMARY_NODES_LB:6443
Replace PRIMARY_NODES_LB with the IP address or DNS name of the load balancer directing traffic to the Primary nodes. The installer asks a number of questions based on the host's specific hardware and existing configuration.
Sample results, from which you will copy the unique JOIN commands and KOTS values:

Installation Complete ✔

Kotsadm: KOTSADM_URL
Login with password (will not be shown again): KOTSADM_PASSWORD
This password has been set for you by default. It is recommended that you change
this password; this can be done with the following command:

    KOTS_RESET_PASSWORD_COMMAND

To access the cluster with kubectl:

    bash -l

Kurl uses /etc/kubernetes/admin.conf, you might want to unset KUBECONFIG to use .kube/config:

    echo unset KUBECONFIG >> ~/.bash_profile

Master node join commands expire after two hours, and worker node join commands expire
after 24 hours. To generate new node join commands, run KURL_JOIN_COMMAND_GENERATOR on
an existing master node.

To add worker nodes to this installation, copy and unpack this bundle on your other nodes, and run the following:

    KURL_SECONDARY_JOIN_COMMAND

To add MASTER nodes to this installation, copy and unpack this bundle on your other nodes, and run the following:

    KURL_PRIMARY_JOIN_COMMAND
This output includes the following values:

- KOTSADM_URL: the URL for accessing the KOTS admin interface.
- KOTSADM_PASSWORD: a randomly generated password for the KOTS admin interface.
- KOTS_RESET_PASSWORD_COMMAND: command for manually setting the KOTS admin interface password.
- KURL_JOIN_COMMAND_GENERATOR: command for regenerating the KURL_SECONDARY_JOIN_COMMAND and KURL_PRIMARY_JOIN_COMMAND.
- KURL_SECONDARY_JOIN_COMMAND: command to have Secondary nodes join the Primary nodes.
- KURL_PRIMARY_JOIN_COMMAND: command to be run on the additional Primary nodes.
3. Validation.
After the installer completes, validate that the first Primary node is up and running:
kubectl get nodes
Sample results should be similar to:

NAME             STATUS   ROLES                  AGE    VERSION
kurl-primary-1   Ready    control-plane,master   5m3s   v1.26.13
If your results do not include the single Primary node, open a DataStax support ticket.
4. Install on the remaining Primary nodes.
After the first Primary node is online, run the KURL_PRIMARY_JOIN_COMMAND on the remaining Primary nodes. This command is unique to your installation. Retrieve it, with its substituted values, from the results in Step 2, looking for the following line:

# To add MASTER nodes to this installation, copy and unpack this bundle on your other nodes, and run the following: KURL_PRIMARY_JOIN_COMMAND

Each node joins the existing nodes to form a set of highly available core cluster services.
5. Validation.
After the installation process is complete on all Primary nodes, validate that they are all online and available:
kubectl get nodes
Sample results with a record displayed for each Primary node:

NAME             STATUS   ROLES                  AGE   VERSION
kurl-primary-1   Ready    control-plane,master   23m   v1.26.13
kurl-primary-2   Ready    control-plane,master   81s   v1.26.13
kurl-primary-3   Ready    control-plane,master   25s   v1.26.13
6. Taint the Primary nodes.
Now that all Primary nodes are online, taint these instances to prevent them from receiving any additional workloads. Given the importance of their role in the environment, they must be dedicated to the distribution of tasks across the cluster.
cat tasks.sh | sudo bash -s taint_primaries
All other hosts within a regional environment are considered Secondary nodes. These nodes include management, observability, and database instances. Each of these nodes communicates with the previously installed Primary nodes to receive its appropriate workload and configuration information.
7. Install on the Secondary nodes.
Run the KURL_SECONDARY_JOIN_COMMAND on each Secondary node. This command is unique to your installation. Retrieve it, with its substituted values, from the results in Step 2, looking for the following line:

# To add worker nodes to this installation, copy and unpack this bundle on your other nodes, and run the following: KURL_SECONDARY_JOIN_COMMAND
Append either labels=mission-control.datastax.com/role=platform or labels=mission-control.datastax.com/role=database to the node join command. Each of these nodes joins the Primary nodes to form the core environment where all of the Mission Control components are installed. Secondary nodes are categorized based on their workloads (platform or database).
8. Validation.
After each Secondary node is brought online, validate that it has joined the cluster:
kubectl get nodes
There should be a record displayed for each Primary and Secondary node:

NAME                  STATUS   ROLES                  AGE     VERSION
kurl-primary-1        Ready    control-plane,master   49m     v1.26.13
kurl-primary-2        Ready    control-plane,master   33m     v1.26.13
kurl-primary-3        Ready    control-plane,master   29m     v1.26.13
kurl-secondary-4k2b   Ready    <none>                 12m     v1.26.13
kurl-secondary-51t2   Ready    <none>                 6m30s   v1.26.13
kurl-secondary-ff08   Ready    <none>                 5m28s   v1.26.13
kurl-secondary-km8z   Ready    <none>                 4m57s   v1.26.13
kurl-secondary-lg7c   Ready    <none>                 3m40s   v1.26.13
kurl-secondary-lwfp   Ready    <none>                 3m6s    v1.26.13
kurl-secondary-tbpl   Ready    <none>                 88s     v1.26.13
kurl-secondary-z8cx   Ready    <none>                 27s     v1.26.13
With all nodes online and the core runtime up and available, you can continue installing Mission Control. Open your web browser to the KOTSADM_URL returned in the results from Step 2, and log in with the KOTSADM_PASSWORD to continue with the Mission Control offline installation.