Plan a Mission Control installation
Planning is crucial to a successful deployment of Mission Control. Review the following information to identify the optimal topology and deployment roles for each target cluster and to plan the steps for a smooth installation. Use the provided guidance to determine the exact storage and networking requirements needed for provisioning.
Plan your Mission Control deployment
Mission Control is a platform and interface for deploying and managing DataStax Enterprise (DSE), Hyper-Converged Database (HCD), and Apache Cassandra® database clusters across logical and physical regions.
Here are some examples of deployment topologies:
- Deploy one instance of Mission Control for the entire organization.
- Dedicate multiple targeted installations to specific groups.
- Dedicate one instance per DEV, TEST, and PROD environment.
At the top level, Mission Control consists of a centralized Control Plane and optional regionally deployed Data Planes.
- Control Plane: Provides the core interface for Mission Control, and includes the top-level API service, user interface, and observability endpoints. The Control Plane manages coordination across each Data Plane’s region boundaries.
- Data Plane: Each deployed Data Plane includes DSE, HCD, or Cassandra nodes and offers a self-contained environment for local resource management.
The Control Plane can operate as a Data Plane because it includes all of the necessary components, so a separate Data Plane is not required. Opt for a single Control Plane for a simple deployment within a single region.
Installation types
Mission Control supports deployment in one of two environment types:
- Dedicated bare-metal or virtual machines (VMs): Invoke the Runtime Installer to set up an embedded runtime environment. Prerequisite services are set up on a cluster (or multiple clusters, depending on the size of your deployment). See Server-based Runtime Installer.
- Existing Kubernetes services within your organization (on-premises or cloud-based): Invoke the Kubernetes KOTS plugin / Admin Console to configure and install Mission Control. See install on a Kubernetes cluster. This installation requires more capacity planning due to additional components. For more information, see Kubernetes management instances.
Either installation choice provides a unified interface for automated operations, with Kubernetes handling the scheduling of containerized DSE, HCD, or Cassandra nodes across a fleet of servers spanning multiple regions and providers.
Repeat the installation process to install a Data Plane on each cluster where nodes are to be deployed. In environments involving multiple infrastructure regions, a separate Data Plane installation is required per geographic region or geo-distributed datacenter, with one installation per regional cluster. Data Plane clusters represent the locations where the database instances are provisioned and run.
Capacity planning
Installing Mission Control involves different instance types with varied resource requirements. The suggested guidelines are minimums; you might need to increase CPU capacity, memory, and disk space beyond them.
Platform instances
Mission Control observability includes metrics and logging from all components within the platform. These are centralized, aggregated, and then shipped to an external object store. For more information, see Storage requirements. Most Mission Control metrics and logs are generated by the managed database instances, so the recommended values are tightly coupled to the number of deployed clusters and their individual schemas, because each table contributes a number of tracked metrics. The following recommendations assume a minimal deployment in one region with a single managed cluster and its associated logical database datacenter.
- 2 instances
- 32 CPU cores
- 64 GB RAM
- 1 TB storage
To schedule observability components, platform instances must be assigned the appropriate Kubernetes label.
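For illustration, a platform node can carry such a label either through its node pool configuration or directly on the Node object. The following is a minimal sketch only; the label key and value are placeholders, not the actual label defined by Mission Control, which is provided with your installation instructions.

```yaml
# Sketch of a worker node carrying a platform-role label.
# The label key and value below are placeholders; substitute the
# label required by your Mission Control installation.
apiVersion: v1
kind: Node
metadata:
  name: platform-worker-1
  labels:
    example.com/mission-control-role: platform  # placeholder, not the real label
```

Equivalently, an existing node can be labeled in place with `kubectl label node platform-worker-1 <key>=<value>`.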
Database instances
Database instances are where all HCD, DSE, and Cassandra nodes are scheduled to run.
The sizing guidance below assumes a one-to-one relationship between nodes and Kubernetes workers. Alternatively, you can co-locate multiple instances on a single worker. Resource contention is common when tuning this deployment, so adjust the values according to your expected database node resource requirements.
- 3 instances
- 16 physical CPU cores (32 hyper-threaded cores)
- 64 GB RAM minimum
- 1.5 TB storage
DataStax benchmark testing shows that resource requirements and performance are similar between the bare-metal or VM and Kubernetes install types. The recommendation for running Cassandra in production is:
- 8 to 16 cores
- 32 GB to 128 GB RAM
CPU and RAM setting concepts
The precise amount of RAM required depends on the workload and is usually tuned based on observation of Garbage Collection (GC) metrics. Set the memory requests to double the heap size: by default, the JVM allows processes to use as much off-heap memory as the configured on-heap memory, so with a heap size of 8 GB the JVM may use up to 16 GB of RAM, depending on off-heap memory usage. A simple rule is to set the heap size to a reasonable value that matches the available memory on the worker nodes. For example, given 32 GB RAM per worker node and extra processes such as metrics or log collectors and Kubernetes system components that need to run, set your heap size to 8 GB and memory requests to 16 GB. If GC pressure is too high, you can raise those values to 12 GB and 24 GB, respectively.

During the installation process, allowing the monitoring stack to run on the DSE, HCD, or Cassandra nodes enables scheduling Mimir and the other observability components on the same worker nodes. This makes all infrastructure instances available for scheduling, which may lead to unexpected utilization of hardware resources. In production, separate these workloads into different node pools.

You can set the CPU requests to a minimum, for example 1 core, to allow scheduling all components concurrently. Processes still use as many cores as are available. Note that in production, even with the monitoring stack and the DSE, HCD, or Cassandra pods scheduled on separate node pools, you still need to leave some RAM available for system pods that support Kubernetes operations such as networking. For example, if you have 48 GB RAM per worker, leave 1 GB to 2 GB for those extra pods by setting your resource requests to around 46 GB.
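As a concrete illustration of the guidance above, the following sketch shows how an 8 GB heap with a 16 GB memory request and a minimal CPU request could be expressed as Kubernetes resource settings. The pod name, container name, image, and JVM flag are hypothetical; in practice, Mission Control manages these settings through its own cluster configuration rather than a hand-written pod spec.

```yaml
# Illustration only: mapping the 8 GB heap / 16 GB memory request / 1 CPU
# request example onto Kubernetes resource settings.
# Names, image, and JVM options are placeholders, not Mission Control defaults.
apiVersion: v1
kind: Pod
metadata:
  name: example-database-node
spec:
  containers:
    - name: cassandra              # placeholder container name
      image: example/cassandra     # placeholder image
      env:
        - name: JVM_OPTS
          value: "-Xms8G -Xmx8G"   # fixed 8 GB heap
      resources:
        requests:
          cpu: "1"                 # minimal CPU request; the process can still use spare cores
          memory: 16Gi             # roughly double the heap to cover off-heap usage
        limits:
          memory: 16Gi
```

If GC pressure is too high, the same pattern scales to a 12 GB heap with a 24 Gi memory request.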
Storage requirements
In addition to the local storage requirements for Database and Observability instances, Mission Control requires one of the following bucket types for object storage:
- Amazon Web Services Simple Storage Service (AWS S3) or S3-compatible storage, for example MinIO or OpenStack Swift
- Google Cloud Storage (GCS)
- Azure Blob Storage
Mission Control utilizes object storage for long-term retention of observability data and for database backups.
For EKS installations, you must define a default storage class before you install Mission Control.
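For example, on EKS a default storage class backed by the EBS CSI driver can be declared before installation. This is a minimal sketch that assumes the aws-ebs-csi-driver add-on is already installed; adjust the provisioner, volume type, and class name for your environment.

```yaml
# Minimal sketch of a default storage class for an EKS cluster.
# Assumes the EBS CSI driver is installed; adjust for your environment.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-default
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # marks this class as the cluster default
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
```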
Load balancer
The Mission Control User Interface (UI) is accessed through a specific port on any of the cluster's worker nodes. It is highly recommended to place this service behind a load balancer that directs traffic to all worker nodes within the cluster on port 30880. This ensures that the UI remains available if an instance goes down.
Load balancing in front of database instances is not supported.
Runtime installs
Runtime installations embed their own Kubernetes environment and therefore must meet additional requirements. Mission Control uses Kurl for the installation and management of the core Kubernetes runtime. See the Kurl system requirements for the supported operating systems, additional packages, and host-level configuration.
Kubernetes management instances
- Kubernetes Management nodes: In addition to the number and types of instances determined during Capacity planning, you must account for the overhead of Kubernetes Management nodes. These nodes contain the metadata and services required to deploy workloads across all worker nodes. Mission Control requires three Management node instances, each with:
  - 4 AMD64 (x86_64) CPUs or equivalent
  - 8 GB of RAM
  - 256 GB of disk space
- Load balancers: To provide high availability of services, Mission Control Runtime-based installations require an additional load balancer. Place it in front of the Kubernetes Management nodes to handle core Kubernetes API traffic on port 6443.
Required networking firewall ports
Open TCP and UDP ports 1024-65535 between all hosts within a given region, Control Plane, or Data Plane. This allows workloads to shift seamlessly between instances as environments change.
| Instance types | Server-runtime installation | Kubernetes installation |
|---|---|---|
| Management | ✓ | — |
| Platform | ✓ | ✓ |
| Database | ✓ | ✓ |
See also
Proceed to the instance configuration step based on your installation type.
- Kubernetes Installations: Online and Offline installation paths
- Runtime Installations: Online and Offline installation paths