Upgrade Mission Control

Change Mission Control’s installed version, configuration, and active licenses.

Upgrade your Mission Control version

Upgrading Mission Control is an incremental process designed to minimize downtime of components.

Run an upgrade only on a single Control Plane or a Data Plane at any given time. You must upgrade all Data Plane clusters first, and then proceed to upgrade the Control Plane.
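Before you start, it can help to confirm which Kubernetes version each cluster currently reports, so you know which Data Plane and Control Plane clusters still need the upgrade. The following check is illustrative only and assumes your kubeconfig already contains a context for each cluster; the context names shown are placeholders.

    # List each node and the Kubernetes version its kubelet reports.
    kubectl get nodes -o custom-columns=NAME:.metadata.name,VERSION:.status.nodeInfo.kubeletVersion

    # Repeat per cluster if your kubeconfig has a context for each one.
    # The context names below are placeholders for your own.
    for ctx in data-plane-1 data-plane-2 control-plane; do
      echo "== ${ctx} =="
      kubectl --context "${ctx}" get nodes -o wide
    done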

After selecting your environment for upgrade, select a Primary node for the coordination of upgrade tasks. From the Kubernetes Primary node you drain and upgrade each node and bring it back into service.
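The upgrade script performs the drain and the return to service for you. If you ever need to reproduce or verify that cycle manually, it corresponds to the standard kubectl cordon/drain/uncordon sequence sketched below; the node name is the usual placeholder.

    # Cordon the node and evict its workloads. DaemonSet-managed pods are ignored,
    # which matches the drain warnings shown in the sample output later on.
    kubectl drain <`KURL_NODE_NAME`> --ignore-daemonsets --delete-emptydir-data

    # ...the node is upgraded here...

    # Release the node for scheduling again once its upgrade completes.
    kubectl uncordon <`KURL_NODE_NAME`>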

Choose one of the following two modes to upgrade the core runtime for Mission Control:

  • Online: upgrade into an environment where internet access is available to the hosts.

  • Offline: upgrade into an environment that does not allow access to the internet. This is also known as an airgap install.

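If you are not sure which mode applies to a given environment, a quick connectivity probe from one of the hosts is enough to tell. This check is purely illustrative and not part of the upgrade itself.

    # An HTTP status line means the host can reach the installer endpoint (online mode);
    # a timeout or DNS failure means you need the offline (airgap) path.
    curl -sSI --max-time 10 https://kurl.sh/mission-control | head -n 1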

Prerequisites:

  • Existing runtime-based installation.

  • Internet access on nodes.

Upgrade runtime

  1. Run install script on the upgrade coordinator.

    Choose one of the Primary nodes in your Mission Control installation to be the upgrade coordinator. Initiate and run the upgrade process on this host until the entire cluster is upgraded. Start the upgrade process by running the same command (with substitutions) as was used to install the runtime:

    curl -sSL https://kurl.sh/mission-control | sudo bash -s ha load-balancer-address=<`PRIMARY_NODES_LB`>:6443

    Parameters:

      • PRIMARY_NODES_LB: The IP address or DNS name of the load balancer directing traffic to the Primary nodes.

    This drains the Primary node of all workloads and upgrades the components locally. When all components are upgraded, the node is released for scheduling.
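    To confirm that the coordinator node really was released for scheduling, a quick node listing (not part of the script) is enough:

      # The upgraded node should report STATUS Ready with no ",SchedulingDisabled"
      # suffix, and the VERSION column should show the new Kubernetes version.
      kubectl get nodes -o wide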

  2. Drain the next node.

    After services are restored on the previous node, you are prompted to drain the next node in the cluster.

    Sample results should be similar to:

    ⚙  Upgrading remote <`KURL_NODE_TYPE`> node <`KURL_NODE_NAME`> to
       Kubernetes version 1.25.14
    
    Drain node <`KURL_NODE_NAME`> to prepare for upgrade? (Y/n)
    node/<`KURL_NODE_NAME`> cordoned
    Warning: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-n7j46, kube-system/weave-net-t85pj, openebs/openebs-ndm-7wp9w
    evicting pod kurl/ekc-operator-71o2o34567-h0c0p
    evicting pod default/kotsadm-rqlite-1
    pod/ekc-operator-71o2o34567-h0c0p evicted
    pod/kotsadm-rqlite-1 evicted
    node/<`KURL_NODE_NAME`> drained
    
    
    	Run the upgrade script on remote node to proceed:  <`KURL_NODE_NAME`>
    
    	 <`KURL_UPGRADE_COMMAND`>

    Parameters:

      • KURL_NODE_TYPE: Type of node, Primary or Secondary.
      • KURL_NODE_NAME: Name of the node to upgrade next.
      • KURL_UPGRADE_COMMAND: Command to run on the specified node.

    IMPORTANT

    This command does not exit. Keep this window open for ease of copying and pasting the unique values returned in the results. Open another window and proceed with upgrading the next node.
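    Before moving on, you can optionally confirm from the coordinator that the drain completed, and open the second session you will use for the next step. Both commands below are illustrative; the SSH user is a placeholder for your own.

      # Only DaemonSet-managed pods (kube-proxy, weave-net, openebs-ndm, ...) should
      # remain on the drained node; everything else has been evicted.
      kubectl get pods --all-namespaces --field-selector spec.nodeName=<`KURL_NODE_NAME`>

      # In a second terminal, connect to that node to run the upgrade command from Step 3.
      ssh <`SSH_USER`>@<`KURL_NODE_NAME`>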

  3. Upgrade next node.

    At this point the specified node is drained of all workloads, and you can run the following command (with substitutions) on the <`KURL_NODE_NAME`> host:

       <`KURL_UPGRADE_COMMAND`>

    This command is unique to your installation. Retrieve the unique command with its substituted value from the results in Step 2, looking for the following line:

    Run the upgrade script on remote node to proceed: <`KURL_NODE_NAME`>

    	 <`KURL_UPGRADE_COMMAND`>

    Sample results should be similar to:

    curl -sSL https://kurl.sh/version/v2024.02.27-0/mission-control/upgrade.sh | sudo bash -s kubernetes-version=1.25.14 docker-registry-ip=<`DOCKER_REGISTRY_IP_ADDRESS`> primary-host=<`PRIMARY_HOST_IP_ADDRESS`> secondary-host=<`SECONDARY_HOST_IP_ADDRESS`>

    The actual command repeats the primary-host and secondary-host flags once for each Primary and Secondary node in the cluster.

    Parameters:

      • DOCKER_REGISTRY_IP_ADDRESS: IP address of the internal Docker registry.
      • PRIMARY_HOST_IP_ADDRESS: IP address of a Primary host.
      • SECONDARY_HOST_IP_ADDRESS: IP address of a Secondary host.

    This command updates all packages to the appropriate versions on the node, performs any configuration changes, and ends by restarting services.
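    Once the command finishes and services restart, you can confirm on the node itself that the kubelet came back at the target version; a minimal check:

      # Run on the node that was just upgraded.
      systemctl is-active kubelet      # should print "active"
      kubelet --version                # should report the target Kubernetes version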

  4. Repeat for each node.

    The coordinator running the upgrade process automatically detects when a node completes its upgrade and prompts for the next node to upgrade (if needed). Repeat Step 2 and Step 3 for each node until all nodes are upgraded to the same version.

    NOTE

    If multiple core runtime upgrades are required, this process runs more than once for each node.
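    A single node listing from the coordinator shows when every node has reached the same version and the loop of Steps 2 and 3 is complete:

      # All entries in the VERSION column should match; any node still on the old
      # version needs another pass of Steps 2 and 3.
      kubectl get nodes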

  5. KOTS upgrade.

    The core runtime is now upgraded and the coordinator continues with upgrades to the KOTS admin interface. Once initiated, this automated process requires no interaction.

       <`KURL_FINALIZE_UPGRADE_COMMAND`>

    This command is unique to your installation. Retrieve the unique command with its substituted value from the results in Step 2, looking for the following line:

    Run the upgrade script on remote node to proceed: <`KURL_NODE_NAME`>
    
    	 <`KURL_FINALIZE_UPGRADE_COMMAND`>

    Parameters:

      • KURL_NODE_NAME: Name of the node to upgrade next.
      • KURL_FINALIZE_UPGRADE_COMMAND: Command to run on each node to complete the upgrade process.
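    To verify that the KOTS admin components restarted cleanly after this automated upgrade, you can list them from the coordinator. In a kURL-based installation they run in the default namespace, as the kotsadm-rqlite pod in the earlier sample output suggests.

      # All kotsadm pods should settle into the Running or Completed state once the
      # KOTS upgrade finishes.
      kubectl get pods -n default | grep kotsadm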

  6. Finalize upgrade.

    Connect to each host and execute the following upgrade script. This script ensures that the local configuration and packages are in alignment with the latest runtime specification:

       <`KURL_FINALIZE_UPGRADE_COMMAND`>

    This command is unique to your installation. Retrieve the unique command with its substituted value from the results in Step 2, looking for the following line:

    Run the upgrade script on remote node to proceed: <`KURL_NODE_NAME`>
    
    	 <`KURL_FINALIZE_UPGRADE_COMMAND`>
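    If you prefer to run the finalize command from one terminal instead of logging in to each host interactively, a simple SSH loop works. The host list and SSH user below are placeholders, and the command itself is the unique value taken from your own Step 2 results.

      # Hypothetical node list; replace with the names or IP addresses of your hosts.
      NODES="primary-2 primary-3 secondary-1 secondary-2"

      for node in ${NODES}; do
        echo "== Finalizing upgrade on ${node} =="
        # Substitute the unique finalize command from your Step 2 results.
        ssh <`SSH_USER`>@"${node}" '<`KURL_FINALIZE_UPGRADE_COMMAND`>'
      done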

What’s next

With all nodes and the core runtime upgraded, you can continue with accessing Mission Control.
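Before handing the environment back to users, a final health pass from the coordinator is a reasonable sanity check; it is illustrative only.

    # Every node should be Ready on the new version, and after things settle no pod
    # should remain outside the Running or Completed states.
    kubectl get nodes
    kubectl get pods --all-namespaces | grep -Ev 'Running|Completed'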

