Migrating a DSE Cluster to DataStax Mission Control

DataStax Mission Control is currently in Private Preview. It is subject to the beta agreement executed between you and DataStax. DataStax Mission Control is not intended for production use, has not been certified for production workloads, and might contain bugs and other functional issues. There is no guarantee that DataStax Mission Control will ever become generally available. DataStax Mission Control is provided on an “AS IS” basis, without warranty or indemnity of any kind.

If you are interested in trying out DataStax Mission Control, please contact your DataStax account team.

Migrating an existing DSE cluster to DataStax Mission Control incurs zero downtime and allows existing clusters to be managed without moving the data to new hardware.

Prerequisites and system requirements

  • DataStax Mission Control Runtime must be installed prior to migration. See Running DataStax Mission Control Runtime Installer.

  • Each node must have the same group ownership (storage group rights) on the DSE data directories.

    • Group read-write access must be granted on the DSE directories or the migration cannot continue. DataStax Mission Control changes file-system groups and write access during migration.

  • The user running DataStax Mission Control mcctl commands must have access to the DSE data directories (running with sudo -u is an acceptable solution).

  • Only JDK 8 is supported as the source and target platform.

  • Linux is the only supported platform.

  • Kubernetes must be installed on every DSE node before running the migration.

  • Only DSE server versions 6.8.26 and later are supported. Check your version with:

    nodetool version
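
The group-access prerequisite above can be checked before starting. The following is a minimal sketch, not part of the migration tooling; the scratch directory stands in for your actual DSE data directories (e.g. /var/lib/cassandra/data):

```shell
# Check that a directory grants group read-write access, as required
# before migration. Uses GNU stat (Linux is the only supported platform).
check_group_rw() {
  perms=$(stat -c '%A' "$1" 2>/dev/null)   # e.g. drwxrwxr-x
  case "$perms" in
    ????rw*) echo "OK: $1 has group read-write access" ;;
    *)       echo "FAIL: $1 lacks group read-write access"; return 1 ;;
  esac
}

# Demonstration on a scratch directory; in practice, point this at
# each of the DSE data directories on every node.
demo=$(mktemp -d)
chmod 775 "$demo"
check_group_rw "$demo"
rmdir "$demo"
```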

Procedure

The migration process consists of three phases: init, add, and commit.

  1. The init phase validates the cluster and gathers the necessary configuration data to be used as the base, determining if the migration is possible. It also creates necessary seed services and related metadata so that migrated nodes can continue communicating with the non-migrated ones, and to ensure that the cluster stays healthy during the process. init is run only once on the first DSE node.

  2. The add phase runs on every DSE node in the datacenter. The add command performs the per-node migration, using the configuration fetched in the init phase. add supports running in parallel and takes care of serializing the migrations.

  3. The final commit phase requires that the add phase has been run on every node in the cluster. It creates the necessary Kubernetes configurations, activates all the migrated nodes in DataStax Mission Control, and then reconciles them to the desired state. After this state is achieved, the cluster and all its nodes should be managed using DataStax Mission Control.
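
Taken together, the three phases reduce to the following command sequence. This is a sketch only: host placement is described in the phase sections below, and no flags are shown.

```shell
# Phase 1 — run once, on the first DSE node:
mcctl init

# Phase 2 — run on every DSE node in the datacenter (invocations may
# overlap; mcctl serializes the actual per-node migrations):
mcctl add

# Phase 3 — run once, after add has completed on every node:
mcctl commit
```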

As you step through these phases, here are some useful Command Line Interface (CLI) flags for reference:

--nodetool-path

Path where the nodetool command is located, including the nodetool executable itself. cqlsh must reside in the same path. (The nodetool status command, used during migration, describes status and information about nodes in a datacenter.)

--cass-config-dir

Directory where cassandra.yaml is located.

--dse-config-dir

Directory where dse.yaml is located.

--cassandra-home

Directory where DSE is installed.

--username

DSE admin account name.

--password

DSE admin password.

--jmxusername

Local JMX authentication username.

--jmxpassword

Local JMX authentication password.

Init phase

The init phase verifies that the cluster is ready to be migrated. It must be run from a node of the cluster because it connects to the running DSE node to fetch its state. JMX access credentials are required because nodetool is used in this phase. After the state is verified, the configuration from this node is used as the base to which all of the other nodes in the managed cluster must conform. Pick a node whose configuration is sound enough to serve as a generic base.

It is possible to review the configuration (or a failed run), modify the setup, and rerun the init command. Rerunning init replaces the values previously written to the cluster.

If the cluster uses authentication and you wish to use an existing superuser username/password combination for communication, provide them with the --username and --password parameters. When no credentials are given, DataStax Mission Control creates its own superuser in the cluster. Likewise, if local JMX authentication is enabled, the migration tool requires the JMX user account and password.

  1. Run the init command:

    mcctl init
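
If auto-detection does not apply to your installation, the same command can be given explicit paths and credentials using the flags listed above. All values below are placeholders:

```shell
mcctl init \
  --cassandra-home /opt/dse \
  --cass-config-dir /opt/dse/resources/cassandra/conf \
  --dse-config-dir /opt/dse/resources/dse/conf \
  --nodetool-path /opt/dse/bin/nodetool \
  --username <dse-admin> --password <dse-password> \
  --jmxusername <jmx-user> --jmxpassword <jmx-password>
```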

Add phase

The add phase migrates an existing DSE node to DataStax Mission Control. As a requirement, the DSE node must be up before continuing. You can run the add command on multiple nodes at the same time; the command ensures that the migrations happen sequentially, with only a single node being migrated at a time.

The add command requires access to modify the data directories that the running DSE instance is using. Ownership and groups of directories must be unified prior to migration. When running under DataStax Mission Control, the nodes may use a user id or group id that differs from the existing deployment, thus requiring modification as part of migration.

The add command supports (and requires) the same parameters as does the init command if the installation is using a custom path instead of auto-detected paths.

When the jmxpassword/jmxusername/username/password parameters are not provided here, the values used in the init phase are reused. If a .deb/.rpm package installation was used, the configuration and tool paths default to those provided by the package manager. In some cases these values might not match the values currently in use and might require overriding.

To aid in detection, the add command uses the DSE_HOME environment variable. If either the --cassandra-home flag or the DSE_HOME environment variable is set, DataStax Mission Control tries to detect cassandra.yaml, dse.yaml, and the nodetool path from its subdirectories.
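
That detection can be approximated with a short shell sketch. This is illustrative only (mcctl's real detection logic is not shown here), and the mock directory layout is an assumption:

```shell
# Given a DSE installation root, locate cassandra.yaml, dse.yaml, and
# nodetool in its subdirectories, as the add command's auto-detection does.
detect_dse_paths() {
  root="$1"
  cass_yaml=$(find "$root" -name cassandra.yaml 2>/dev/null | head -n 1)
  dse_yaml=$(find "$root" -name dse.yaml 2>/dev/null | head -n 1)
  nodetool=$(find "$root" -name nodetool -type f 2>/dev/null | head -n 1)
  echo "cass-config-dir=$(dirname "$cass_yaml")"
  echo "dse-config-dir=$(dirname "$dse_yaml")"
  echo "nodetool-path=$nodetool"
}

# Demonstration against a mock tarball-style layout (an assumption):
DSE_HOME=$(mktemp -d)
mkdir -p "$DSE_HOME/resources/cassandra/conf" \
         "$DSE_HOME/resources/dse/conf" "$DSE_HOME/bin"
touch "$DSE_HOME/resources/cassandra/conf/cassandra.yaml" \
      "$DSE_HOME/resources/dse/conf/dse.yaml" "$DSE_HOME/bin/nodetool"
detect_dse_paths "$DSE_HOME"
```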

  1. Run the add command:

    mcctl add

    or

    mcctl import add -n mission-control

    The add command sequentially deletes the DSE nodes and recreates them in a Kubernetes cluster while the cluster remains up and running.

  2. Verify by running:

    nodetool status

    and

    kubectl get pods -n mission-control
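
When verifying, every node should report UN (Up/Normal) in the nodetool status output. A small sketch of that check, run here against an illustrative sample of the output (addresses and IDs are made up):

```shell
# Succeeds only if no node line reports a state other than UN.
# Node lines begin with a Status/State code: U/D plus N/L/J/M.
all_nodes_un() {
  ! printf '%s\n' "$1" | grep -Eq '^(D[NLJM]|U[LJM]) '
}

# Illustrative nodetool status output:
sample='Datacenter: dc1
==============
Status=Up/Down |/ State=Normal/Leaving/Joining/Moving
--  Address   Load     Tokens  Owns   Host ID  Rack
UN  10.0.0.1  1.2 GiB  1       33.4%  aaaa     rack1
UN  10.0.0.2  1.1 GiB  1       33.3%  bbbb     rack1'

if all_nodes_un "$sample"; then
  echo "all nodes Up/Normal"
fi
```

In practice you would capture the real output, e.g. `all_nodes_un "$(nodetool status)"`.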

Reversing operation

At this point, the operation is reversible. This may be required if certain prerequisites are not met on every node and a particular node needs further work. The migration has not broken the existing DSE cluster installation, but you must remove the migrated node before continuing.

  • Delete the pod which this node created by running:

    kubectl delete pod/<name>
  • List and then delete all the PersistentVolumeClaims (PVCs) and PersistentVolumes (PVs) that this node mounted:

    kubectl get --all-namespaces pvc
    kubectl get --all-namespaces pv
    
    kubectl delete pvc <pvc-name> -n test-logging
    kubectl delete pv <pv-name>
  • Modify the ClusterConfig:

    kubectl edit clusterconfig cluster-migrate-config

    and set the migrated field to false instead of true.

  • Restart the node using the OpsCenter procedure.

Commit phase

Before running the commit command, all the nodes must first be migrated. DataStax Mission Control checks whether this is the case, and if not, it exits in the validation phase. At this stage, the final cluster configuration is committed and applied to all the nodes, which causes a rolling restart of the cluster.

  1. Run the commit command:

    mcctl commit

    or

    mcctl import commit -n mission-control

    The commit command creates the Cassandra datacenter object and finalizes the migration.

  2. Verify by running:

    kubectl get cassandradatacenter <dc-name> -n mission-control

    Sample output:

    NAME           AGE
    us-west-2      8m30s

At the completion of these three phases, the DSE cluster is fully migrated to DataStax Mission Control with zero downtime.

