Migrate a DSE cluster to Mission Control

Enterprises with existing DataStax Enterprise (DSE) clusters can use the Mission Control Runtime Installer to migrate to an environment running Mission Control. Benefits include:

  • Zero downtime.

  • Minimal configuration changes.

  • Nodes in the cluster stay in sync during the migration.

  • Next-generation cluster management with Mission Control.

The migration is straightforward and lets you manage existing clusters without moving data to new hardware.

These instructions apply only to in-place migrations and require the runtime-based installation.

Prerequisites and system requirements

  • A prepared environment. See Mission Control Runtime Installer.

  • Each node must have the same group ownership on the DSE data directories.

    • Group read-write access must be granted on the DSE directories, or the migration cannot continue. Mission Control changes file system groups and write access during the migration. For one way to check this, see the example after this list.

  • Download mcctl.

  • The user running Mission Control mcctl commands must have access to the DSE data directories (running with sudo -u is an acceptable solution).

  • Linux is the only supported platform.

  • Only DSE server versions 6.8.26 and later are supported. Check your version with:

    nodetool version
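
To confirm the ownership and group permissions prerequisite before you start, inspect the DSE data directories on each node and compare the results across nodes. The /var/lib/cassandra path shown here is only an example of a common package-installation data directory; substitute your own data directories:

    # Show owner, group, and permissions on the DSE data directory (path is illustrative)
    ls -ld /var/lib/cassandra
    stat -c '%U:%G %A %n' /var/lib/cassandra/*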

Migrate an existing DSE cluster to Mission Control

The migration process consists of three phases: import init, import add, and import commit.

  • The import init phase validates the cluster and gathers the configuration data that serves as the base for determining whether the migration is possible. It also creates the necessary seed services, the project in which the cluster is created, and related metadata so that migrated nodes can continue communicating with non-migrated ones and the cluster stays healthy during the process. Run the import init phase only once, on the first DSE node.

  • The import add phase runs on every DSE node in the datacenter. The import add command performs the per-node migration using the configuration fetched in the import init phase. You can start import add on multiple nodes in parallel; the command serializes the migrations so that only one node migrates at a time.

  • The final import commit phase requires that the import add phase has run on every node in the cluster. It creates the necessary Kubernetes configurations, hands all migrated nodes over to Mission Control, and reconciles them to the desired state. After this state is reached, manage the cluster and all of its nodes with Mission Control.
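
The three phases map to three mcctl commands, which later sections cover in detail. As a quick reference, and using the dse namespace that appears throughout this guide:

    # Phase 1: run once, on one DSE node in the datacenter
    mcctl import init -n dse

    # Phase 2: run on every DSE node; can be started in parallel, migrations are serialized
    sudo mcctl import add -n dse

    # Phase 3: run once, after every node has been migrated
    mcctl import commit -n dse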

As you step through these phases, here are some useful Command Line Interface (CLI) flags for reference:

--nodetool-path

Path to the nodetool command, including the nodetool executable. cqlsh must reside in the same directory. The migration tooling uses nodetool status to describe the status of, and information about, the nodes in a datacenter.

--cass-config-dir

Directory where cassandra.yaml is located.

--dse-config-dir

Directory where dse.yaml is located.

--cassandra-home

Directory where DSE is installed.

--username

DSE admin account name.

--password

DSE admin password.

--jmxusername

Local JMX authentication username.

--jmxpassword

Local JMX authentication password.

--kubeconfig

Path to the kubeconfig file. If not specified, the default kubeconfig file is used.

--namespace, -n

The project (namespace) in which the cluster is imported or located. If not specified, the default namespace is used.

import init phase

The import init phase verifies that the cluster is ready to be migrated. It must be run from a node in the cluster because it connects to the running DSE node to fetch its state. JMX access credentials are required because this phase uses nodetool. After verifying the node state, the configuration from this node is used as the base to which all of the other nodes in the managed cluster must conform. Choose a node with a good configuration to use as a generic base.

It is possible to review the configuration (or a failed run), modify the setup, and rerun the import init command. Rerunning import init replaces the values previously written to the cluster.

If the cluster uses authentication and you want Mission Control to communicate with an existing superuser account, pass its credentials with the --username and --password parameters. When no credentials are given, Mission Control creates and installs its own superuser in the cluster. Likewise, if local JMX authentication is enabled, the migration tool requires the JMX username and password.

Run the import init command:

mcctl import init -n dse
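
If DSE is installed in a custom location or authentication is enabled, you can pass the flags listed earlier explicitly. The paths below are illustrative and the uppercase values are placeholders; substitute the values for your environment:

    mcctl import init -n dse \
      --cass-config-dir /etc/dse/cassandra \
      --dse-config-dir /etc/dse \
      --nodetool-path /usr/bin/nodetool \
      --username DSE_USERNAME --password DSE_PASSWORD \
      --jmxusername JMX_USERNAME --jmxpassword JMX_PASSWORD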

import add phase

The import add phase migrates an existing DSE node to Mission Control. The DSE node must be running before you continue. You can run the import add command on multiple nodes at the same time; the command serializes the migrations so that only one node migrates at a time.

The import add command requires access to modify the data directories that the running DSE instance uses. Directory ownership and groups must be unified before the migration. Under Mission Control, nodes may run with a user ID or group ID that differs from the existing deployment, so the directories are modified as part of the migration.
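
As one possible way to meet this requirement, you can unify the group and grant group read-write access before migrating. The /var/lib/cassandra path and the cassandra group are only examples of common package-installation defaults; substitute your own directories and group:

    # Apply a consistent group and group read-write access to the DSE data directories (values are illustrative)
    sudo chgrp -R cassandra /var/lib/cassandra
    sudo chmod -R g+rw /var/lib/cassandra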

The import add command supports (and requires) the same parameters as the import init command if the installation uses custom paths instead of auto-detected ones.

When the jmxusername, jmxpassword, username, and password parameters are not provided, the values used in the import init phase are reused in the import add phase. If DSE was installed from a .deb or .rpm package, the configuration and tool paths default to those provided by the package manager. In some cases these defaults might not match the values in use and might require overriding.

To aid in detection, the import add command uses the DSE_HOME environment variable. If either the --cassandra-home flag or the DSE_HOME environment variable is set, Mission Control tries to detect cassandra.yaml, dse.yaml, and the nodetool path from its subdirectories.
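
For example, with a custom installation path, detection can be driven either way. Replace DSE_INSTALL_DIR with the directory where DSE is installed; depending on your sudoers configuration, you might need sudo -E so that DSE_HOME is preserved when running under sudo:

    # Option 1: let import add detect paths from DSE_HOME
    export DSE_HOME=DSE_INSTALL_DIR
    sudo -E mcctl import add -n dse

    # Option 2: pass the installation directory explicitly
    sudo mcctl import add -n dse --cassandra-home DSE_INSTALL_DIR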

  1. Run the import add command:

    sudo mcctl import add -n dse

    The import add command sequentially deletes each DSE node and recreates it in the Kubernetes cluster while the cluster as a whole remains up and running.

  2. Verify by running:

    nodetool status

    and

    kubectl get pods -n dse

Reverse the operation

At this point, the operation is reversible. You might need to reverse it if prerequisites are not met on every node and a particular node requires additional work. The migration has not broken the existing DSE cluster installation, but you must remove the migrated node from Mission Control before continuing.

  1. Delete the pod that was created for this node by running:

    kubectl delete pod/POD_NAME -n dse

    Replace POD_NAME with the name of the pod.

  2. List and then delete all of the PersistentVolumeClaims (PVCs) and PersistentVolumes (PVs) that this node mounted:

    kubectl get pvc -n dse
    kubectl delete pvc PVC_NAME -n dse

    kubectl get pv | grep "pvc-server-data"
    kubectl delete pv PV_NAME

    Replace the following:

    • PVC_NAME: The name of the PVC

    • PV_NAME: The name of the PV

  3. Modify the ClusterConfig file:

    kubectl edit clusterconfig cluster-migrate-config -n dse

  4. Set the migrated field to false instead of true.

  5. Restart the node with OpsCenter.

import commit phase

Before you run the import commit phase, all nodes must be migrated. Mission Control checks every node's migration status and, if any node is not migrated, exits during the validation phase. Upon successful validation, the final cluster configuration is applied to all migrated nodes, which causes a rolling restart of the cluster.
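
Because the commit triggers a rolling restart, it can be helpful to watch the pods in the project while the restart proceeds. The dse namespace matches the examples in this guide:

    kubectl get pods -n dse -w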

  1. Run the import commit command:

    mcctl import commit -n dse

    The import commit command creates the MissionControlCluster (mccluster for short) object and finalizes the migration.

  2. Verify by running:

    kubectl get mccluster -n dse

    Sample results:

    NAMESPACE         NAME      AGE
    dse               dsetest   73m
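
    For more detail on the reconciled cluster, you can inspect the custom resource directly. The dsetest name matches the sample results above; your cluster name may differ:

    kubectl describe mccluster dsetest -n dse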

At the completion of these three phases, the DSE cluster is fully migrated to Mission Control with zero downtime.
