Configure a multi-region Mission Control environment
You can configure a multi-region Mission Control environment to deploy Mission Control across two or more Kubernetes clusters in multiple regions and datacenters. For example, you might host the Mission Control control plane in one region and the data plane in another.
A multi-region environment offers several benefits, including better data locality, increased fault tolerance, improved disaster recovery, and higher availability for distributed applications.
Prerequisites
- Two or more Kubernetes clusters running in separate regions
- An understanding of the planning considerations for deploying Mission Control
- An understanding of Kubernetes concepts and how to use kubectl
Configure network connectivity between multiple Kubernetes clusters for Mission Control
Before you can configure a multi-region Mission Control environment, you must establish network connectivity between the control plane and data plane instances running in different Kubernetes clusters.
This guide uses two clusters running in two different regions, CP_EAST and DP_WEST, which are registered as contexts in your local kubeconfig file.
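If each cluster was created with its own kubeconfig file, one way to merge them into a single local configuration is sketched below. The file paths are illustrative assumptions; adjust them to match the kubeconfig files generated by your cluster tooling.
# Merge two per-cluster kubeconfig files into one flattened config.
# The paths below are placeholders for wherever your kubeconfig files live.
export KUBECONFIG="$HOME/.kube/cp-east.yaml:$HOME/.kube/dp-west.yaml"
kubectl config view --flatten > "$HOME/.kube/merged-config"
export KUBECONFIG="$HOME/.kube/merged-config"
kubectl config get-contexts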
- Verify the contexts for your two Kubernetes clusters:
kubectl config get-contexts
Sample result
CURRENT   NAME      CLUSTER   AUTHINFO   NAMESPACE
*         CP_EAST   CP_EAST   CP_EAST    default
          DP_WEST   DP_WEST   DP_WEST    default
In this example, the CP_EAST Kubernetes cluster is the Mission Control control plane and also serves as a data plane, hosting database datacenters. The DP_WEST Kubernetes cluster is a Mission Control data plane, which hosts datacenters but is controlled by the control plane of the CP_EAST cluster.
- Ensure the following ports are open between the two Kubernetes clusters to allow communication between the control plane and the data plane:
- tcp:7000: Internode communications. This port is used for direct node-to-node communication within the Mission Control infrastructure. It facilitates coordination between nodes in different clusters, ensuring data consistency and system reliability.
- tcp:7001: Encrypted internode communications. Similar to port 7000, this port is used for internode communication, but with encryption enabled. This ensures secure data transfer between the control plane and data plane, protecting sensitive information from interception.
- tcp:8080: Management API. This port provides access to the Management API for configuring Mission Control and lets administrators perform operations such as scaling, monitoring, and managing resources.
- tcp:30600: Vector aggregator. The Vector aggregator collects and processes telemetry data from different nodes. This port is essential for log aggregation, monitoring, and analytics, providing visibility into the health and performance of the system. A Vector aggregator runs in each data plane datacenter. Each instance aggregates data locally and then forwards logs and metrics to the control plane aggregator.
- Now that you have verified your contexts and opened the necessary ports for your existing Kubernetes clusters, you can install the Mission Control control plane and data planes. To confirm that the data plane cluster can reach the control plane on these ports, you can run a quick check like the sketch that follows.
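For example, one way to spot-check that the data plane cluster can reach a control plane node on the Vector aggregator port is to run a short-lived pod with a TCP client. This is a minimal sketch: the busybox image, the nc flags, and CP_NODE_IP (a routable address of a control plane node) are assumptions, so substitute whatever client image and address fit your environment.
# From the DP_WEST cluster, start a temporary pod and attempt a TCP
# connection to a control plane node on port 30600 (Vector aggregator).
# CP_NODE_IP is illustrative; use a routable control plane node address.
kubectl --context DP_WEST run port-check --rm -it --restart=Never \
  --image=busybox -- nc -z -w 3 CP_NODE_IP 30600
Repeat the check for ports 7000, 7001, and 8080 as needed.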
Install the Mission Control control plane and data planes
- Select your Mission Control installation method. You can bring your own Kubernetes cluster, use the embedded Kubernetes cluster, or use Helm to install Mission Control in your Kubernetes clusters.
- Change the context to the CP_EAST cluster:
kubectl config use-context CP_EAST
Replace CP_EAST with the name of the context for your control plane cluster.
- Install Mission Control in control plane mode in the CP_EAST cluster.
You must specify the URL of the Vector aggregator service in the control plane. The control plane cluster exposes this service on all nodes on port 30600, so specify a hostname that can load balance across all nodes in the control plane cluster. A sketch of how you might identify the node addresses behind that hostname follows these steps.
- Change the context to the DP_WEST cluster:
kubectl config use-context DP_WEST
Replace DP_WEST with the name of the context for the data plane cluster.
- Install Mission Control in data plane mode in the DP_WEST cluster.
- Use the Mission Control CLI, mcctl, to register the data plane with the control plane:
mcctl register --source-context CP_EAST --dest-context DP_WEST
Replace the following:
- CP_EAST: The context of the control plane cluster
- DP_WEST: The context of the data plane cluster
The Mission Control control plane now recognizes the registered data plane, which allows you to manage resources across the two planes.
- Optional: Validate the connection between the control plane and data plane by checking the clientconfig and secret resources in the mission-control namespace.
- Check the clientconfig resource for the registered data plane:
kubectl get -n mission-control clientconfig
Result
Returns a list of registered data planes.
NAME                 AGE
DP_WEST-dataplane1   392d
- Check the secret resource for the registered data plane:
kubectl get -n mission-control secret
Result
Returns the configuration secret for each registered data plane.
NAME                        TYPE     DATA   AGE
DP_WEST-dataplane1-config   Opaque   1      392d
- You can also verify the data plane context registration in the Mission Control UI: check the Data Plane Context field in the cluster creation and modification forms.
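As noted in the control plane installation step, the Vector aggregator service is exposed on every control plane node on port 30600, and you need a hostname that load balances across those nodes. The following is a sketch of how you might gather the node addresses to put behind that hostname and, after installation, confirm which service exposes the NodePort. The mission-control namespace matches the one used elsewhere in this guide; the exact service name depends on your installation, so inspect the output rather than relying on a specific name.
# List the control plane node addresses that your load balancer or DNS
# name should front on port 30600.
kubectl --context CP_EAST get nodes -o wide

# After the control plane is installed, find the service in the
# mission-control namespace whose NodePort is 30600.
kubectl --context CP_EAST get svc -n mission-control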
Next, create a multi-datacenter Mission Control cluster environment spanning both Kubernetes clusters in your two regions.
Create a multi-datacenter cluster across Kubernetes clusters
The MissionControlCluster object defines the configuration for the Mission Control control plane and data planes.
To configure your environment, do the following:
- Create a MissionControlCluster object that spans both Kubernetes clusters.
This example creates the Mission Control cluster using the CLI. The Mission Control UI uses the Data Plane Context field to specify the Kubernetes context where the datacenter should be created. If you don't set a value for the Data Plane Context, the datacenter is created in the control plane context.
apiVersion: missioncontrol.datastax.com/v1beta1
kind: MissionControlCluster
metadata:
  name: PROJECT_NAME
  namespace: PROJECT_NAMESPACE
spec:
  createIssuer: true
  encryption:
    internodeEncryption:
      certs:
        createCerts: true
      enabled: true
  k8ssandra:
    auth: true
    cassandra:
      config:
        jvmOptions:
          gc: G1GC
          heapSize: 1Gi
      datacenters:
        - config:
            jvmOptions:
              gc: G1GC
          metadata:
            name: dc1
          racks:
            - name: r1
          size: 2
          stopped: false
        - config:
            jvmOptions:
              gc: G1GC
          k8sContext: CP_EAST
          metadata:
            name: dc2
          racks:
            - name: r1
          size: 2
          stopped: false
      resources:
        requests:
          cpu: 100m
          memory: 2Gi
      serverType: cassandra
      serverVersion: 4.0.11
      storageConfig:
        cassandraDataVolumeClaimSpec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 2Gi
          storageClassName: standard
Replace the following:
- PROJECT_NAME: The name of the MissionControlCluster object
- PROJECT_NAMESPACE: The namespace where the MissionControlCluster object is created
- CP_EAST: The name of the context where the data plane is running. In this guide, the registered data plane context is DP_WEST.
This sample MissionControlCluster object defines two datacenters: the first is created directly in the control plane, and the second uses the Kubernetes context of the data plane registered in the control plane. This configuration provides a distributed architecture while maintaining centralized control.
- Apply the MissionControlCluster object to create the multi-datacenter Mission Control cluster:
kubectl apply -f PROJECT_NAME.yaml
Replace PROJECT_NAME.yaml with the name of the MissionControlCluster object manifest file.
When the cluster starts, the cluster overview screen in the Mission Control UI displays the datacenters in each region.