DataStax Mission Control is currently in Public Preview. DataStax Mission Control is not intended for production use, has not been certified for production workloads, and might contain bugs and other functional issues. There is no guarantee that DataStax Mission Control will ever become generally available. DataStax Mission Control is provided on an “AS IS” basis, without warranty or indemnity of any kind.
If you are interested in trying out DataStax Mission Control, please join the Public Preview.
DataStax Mission Control utilizes Grafana Mimir as the metrics engine to
observe metrics across all components, deploying it as a microservice on the
Control-Plane. Metrics components are installed at the same time as the DataStax Mission Control
Control-Plane and are scaled independently:
Grafana-Mimir: centralized indexing and query support for metrics
Vector aggregator or agent: aggregation, routing, and enrichment of metrics
These components enable you to monitor metrics from many sources within DataStax Mission Control, including:
DataStax Mission Control
Kubernetes (K8s) API server
Collected metrics can be seen in a Prometheus-native format.
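Because metrics are exposed in the Prometheus-native (exposition text) format, any Prometheus-aware tooling can consume them. As an illustrative sketch only (the metric names below are hypothetical, not actual Mission Control metrics), the following Python shows how a consumer might parse a small sample of that format:

```python
# Illustrative only: parse a small sample of Prometheus-native exposition text.
# The metric names and labels are hypothetical, not Mission Control metrics.
sample = """\
# HELP node_up Whether the node is reachable (1) or not (0).
# TYPE node_up gauge
node_up{cluster="demo",dc="dc1"} 1
node_up{cluster="demo",dc="dc2"} 0
"""

def parse_prometheus_text(text):
    """Return a list of (metric_name, labels_string, value) tuples."""
    samples = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blanks and HELP/TYPE comments
            continue
        metric_part, value = line.rsplit(" ", 1)
        if "{" in metric_part:
            name, labels = metric_part.split("{", 1)
            labels = labels.rstrip("}")
        else:
            name, labels = metric_part, ""
        samples.append((name, labels, float(value)))
    return samples

for name, labels, value in parse_prometheus_text(sample):
    print(name, labels, value)
```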
You must provide an AWS S3-compatible object store; all metrics are stored within it.
While installing the DataStax Mission Control Control-Plane, a configuration screen is presented with controls for:
Vector Agent (enabled by default)
Vector Aggregator enablement
Mimir Topology with fields allowing overrides of the default value of
`1` for the following instances:
Number of ingester instances
Number of compactor instances
Number of querier instances
Number of query-frontend instances
Number of store-gateway instances
Number of query-scheduler instances
Mimir Replication Factor
DataStax Mission Control sets fixed resource requirements on all Mimir pods and only allows the user to control the number of instances in the deployment.
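For illustration only, topology overrides of this shape are typical for Mimir deployments. The key names below are assumptions, not the actual Mission Control configuration schema:

```yaml
# Hypothetical values sketch -- key names are illustrative, not the
# actual Mission Control schema. Only instance counts are adjustable.
mimir:
  replicationFactor: 3
  ingester:
    replicas: 3
  compactor:
    replicas: 1
  querier:
    replicas: 2
  query_frontend:
    replicas: 1
  store_gateway:
    replicas: 1
  query_scheduler:
    replicas: 1
```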
Two empty checkboxes are presented. You must select a box on the configuration screen to enable monitoring pods to run on the master nodes; by default, this is disabled to conserve
`etcd` and API server resources.
To do so, check the box to
Allow monitoring processes to run on the Control-Plane. For constrained deployments, check the box to
Allow monitoring components on DSE nodes.
By default, DataStax Mission Control does not allow a DSE node to run monitoring microservices because it is preferable to have an exclusive DSE worker node with full access to its allotted resources.
DataStax Mission Control relies on anti-affinity rules by default to prevent monitoring pods from being scheduled on DSE nodes.
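This scheduling constraint can be pictured as a standard Kubernetes anti-affinity rule of roughly the following shape. This is a sketch only; the label key and value used by Mission Control are assumptions:

```yaml
# Hypothetical sketch: keep monitoring pods off nodes labeled for DSE.
# The label key "mission-control.datastax.com/role" is assumed, not real.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: mission-control.datastax.com/role   # assumed label key
              operator: NotIn
              values:
                - dse-worker
```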
Mimir Resource Requirements
Use vertical scaling to support resource-constrained environments by allowing more metrics per observability node.
Mimir block storage supports AWS S3 (and S3-compatible stores), Google Cloud Storage, and Azure blob storage.
Example field entries:
Access Key ID:
Secret Access Key: (entry is obfuscated)
Mimir Bucket Endpoint:
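For illustration, hypothetical entries might look like the following. All values are placeholders, not real credentials or endpoints:

```
Access Key ID: AKIAEXAMPLEKEY        (placeholder)
Secret Access Key: ********          (obfuscated on entry)
Mimir Bucket Endpoint: s3.us-east-1.amazonaws.com   (placeholder endpoint)
```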
To use on-premises storage instead of cloud storage, you can run an S3-compatible API layer on top of your storage. Contact your DataStax account team for possible solutions.
Mimir stores metrics in the object store, with a local cache in each microservice's local storage. That local cache is used to answer queries.
No storage configuration is required. Object storage provides long-term storage of metrics data.
== How do I access metrics?

=== Programmatically
Private Preview versions: Configure a NodePort service defined with `type: NodePort` and a statically set port, then apply it:
kubectl apply -n cass-operator -f my-nodeport-service-dc.yaml
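A sketch of what `my-nodeport-service-dc.yaml` might contain follows. The service name, selector labels, and port numbers are assumptions for illustration, not the file shipped with Mission Control; adjust them to match your deployment:

```yaml
# Hypothetical NodePort service sketch; the selector and ports are
# assumptions -- adapt them to your datacenter's pod labels.
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service-dc
spec:
  type: NodePort
  selector:
    app: my-metrics-endpoint   # assumed selector label
  ports:
    - port: 9000               # service port
      targetPort: 9000         # container port serving metrics
      nodePort: 30090          # statically set node port (30000-32767)
```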
Public Preview versions: Use the following CLI port-forward command to reach the Grafana service running in DataStax Mission Control:
kubectl port-forward -n mc-mimir svc/mission-control-grafana <local-port>:<service-port>
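For example, with assumed port numbers (the actual service port is an assumption; verify it with `kubectl get svc -n mc-mimir` before forwarding):

```
# Forward local port 3000 to the Grafana service (assumed service port 3000),
# then browse to http://localhost:3000
kubectl port-forward -n mc-mimir svc/mission-control-grafana 3000:3000
```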
Log onto the Control-Plane and look for the `mc-mimir` namespace, then search the list of all of the deployed microservices for
`mission-control-grafana-<alphanumeric-string>`. Click it to open detailed information about that microservice.
In the Ports section of that information window, click Forward. In the pop-up window, select
Open in Browser.

image::port-forward-popup.png[Port forward Grafana instance]
This port-forwards the installed Grafana instance into a browser. Click General to open the Grafana dashboards.

image::grafana-browser.png[Grafana browser instance]
Click the Cassandra Overview dashboard.

image::grafana-dashboard.png[Grafana dashboards]
Similar to Cassandra, the DataStax Mission Control Overview dashboard reveals the status of nodes and data:
Availability of nodes
Cassandra cluster Data Size
Disk Space Usage
Host-level metrics extracted from and available in Vector
Click the Pods tab to observe an individual cluster.
The Cluster Condensed Dashboard reveals metrics such as Requests Served per Cluster, Memtable Space, Compactions, Streaming, and Latencies.
Click through the tabs on the Cluster Condensed Dashboard to observe detailed information. For example, in the Group By banner, use the Table pulldown to filter by table and observe granular details.