DataStax Project Mission Control

Metrics

DataStax Mission Control is currently in Public Preview. DataStax Mission Control is not intended for production use, has not been certified for production workloads, and might contain bugs and other functional issues. There is no guarantee that DataStax Mission Control will ever become generally available. DataStax Mission Control is provided on an “AS IS” basis, without warranty or indemnity of any kind.

If you are interested in trying out DataStax Mission Control, join the Public Preview.

DataStax Mission Control uses Grafana Mimir as the metrics engine to observe metrics across all components, deploying it as microservices on the Control-Plane. The metrics components are installed at the same time as the DataStax Mission Control Control-Plane and are scaled independently:

  • Grafana-Mimir: centralized indexing and query support for metrics

  • Vector aggregator or agent: aggregation, routing, and enrichment of metrics

These components enable you to monitor metrics from many sources within DataStax Mission Control, including:

  • infrastructure hosts

  • DSE nodes

  • DataStax Mission Control Control-Plane

  • Kubernetes (K8s) API server

Collected metrics are exposed in a Prometheus-native format.
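As a sketch of what that format looks like (the metric name and labels below are hypothetical examples, not documented DataStax metric names):

```shell
# Illustrative only: sample lines in the Prometheus exposition format.
# The metric name and labels are invented for this example.
cat <<'EOF'
# HELP dse_table_live_sstables Live SSTable count per table
# TYPE dse_table_live_sstables gauge
dse_table_live_sstables{cluster="demo",dc="dc1",table="users"} 42
EOF
```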

Prerequisites

You must provide an S3-compatible object store; all metrics are stored in it.
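As a hypothetical illustration, a suitable bucket could be created with the AWS CLI. The bucket name and region are placeholders that match the configuration example later on this page; any S3-compatible store works:

```shell
# Hypothetical example; requires AWS credentials and the AWS CLI.
# Bucket name and region are placeholders.
aws s3api create-bucket \
  --bucket mimir-mc \
  --region us-west-2 \
  --create-bucket-configuration LocationConstraint=us-west-2
```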

Configuration

  1. During installation of the DataStax Mission Control Control-Plane, a configuration screen presents controls for:

    • Vector Agent (enabled by default)

    • Vector Aggregator enablement

    • Mimir Topology, with fields to override the default value of 1 for each of the following:

      • Number of ingester instances

      • Number of compactor instances

      • Number of querier instances

      • Number of query-frontend instances

      • Number of store-gateway instances

      • Number of query scheduler instances

      • Mimir Replication Factor

        DataStax Mission Control sets fixed resource requirements on all Mimir pods; you control only the number of instances of each component in the deployment.

  2. Advanced Options

    Two checkboxes, both cleared by default, are presented. Select a box on the configuration screen to allow monitoring pods to run on the control plane. This is disabled by default to conserve etcd and API server resources.

    For example, check the box to Allow monitoring processes to run on the Control-Plane. For resource-constrained deployments, check the box to Allow monitoring components on DSE nodes.

    By default, DataStax Mission Control does not allow a DSE node to run monitoring microservices, because it is preferable for a dedicated DSE worker node to have full access to its allotted resources.

    DataStax Mission Control relies on affinities by default to prevent monitoring pods from being scheduled on DSE nodes.

    • Mimir Resource Requirements

      Use vertical scaling to support resource-constrained environments by allowing more metrics per observability node.

    • Mimir Storage

      Mimir supports object storage backends: AWS S3 (and S3-compatible stores), Google Cloud Storage, and Azure Blob Storage.

      Example field entries:

      Backend: s3

      Bucket Name: mimir-mc

      Region: us-west-2

      Access Key ID: <text-string>

      Secret Access Key: * (entry is obfuscated)

      Mimir Bucket Endpoint: s3.us-west-2.amazonaws.com

      To use on-premises storage instead of cloud storage, you can run an S3-compatible API on top of your storage. Contact your DataStax account team for possible solutions.

      Mimir stores metrics in the object store, with a local cache in each microservice's local storage. That local cache is used to answer queries.

      No additional storage configuration is required. Object storage provides long-term retention of metrics data.
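The field entries above roughly correspond to Mimir's object-storage settings. A sketch with assumed key names follows; the installer generates the actual configuration:

```shell
# Sketch only: the storage fields from the configuration screen, rendered as
# a Mimir-style YAML fragment. Key names are assumptions, not the exact
# schema the installer uses.
cat <<'EOF'
storage:
  backend: s3
  s3:
    bucket_name: mimir-mc
    region: us-west-2
    endpoint: s3.us-west-2.amazonaws.com
    access_key_id: <access-key-id>
    secret_access_key: <secret-access-key>
EOF
```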

How do I access metrics?

Programmatically

  1. (Private Preview versions) Configure a NodePort service.

    1. Apply a NodePort service manifest defined with type: NodePort and a statically set port:

kubectl apply -n cass-operator -f my-nodeport-service-dc.yaml

  2. (Public Preview versions) Use the following CLI port-forward command to reach the Grafana service running in DataStax Mission Control:

kubectl port-forward -n mc-mimir svc/mission-control-grafana
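A possible end-to-end use of that port-forward follows. The local port 3000, the service port 80, and the health-check step are assumptions about a typical Grafana service, not documented values:

```shell
# Hypothetical usage; requires access to the cluster. Forwards a local port
# to the Grafana service, then checks Grafana's standard health endpoint.
kubectl port-forward -n mc-mimir svc/mission-control-grafana 3000:80 &
PF_PID=$!
sleep 2
curl -s http://localhost:3000/api/health   # Grafana's built-in health check
kill "$PF_PID"
```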

Control-Plane access

  1. Log on to the Control Plane, open the mc-mimir namespace, and search the list of deployed microservices for mission-control-grafana-<alphanumeric-string>. Click it to open detailed information about that microservice.

  2. In the Ports section of that information window, click Forward. In the pop-up window, select Open in Browser. [Image: Port forward Grafana instance]

  3. This port-forwards the installed Grafana instance to your browser. Click General to open the Grafana dashboards. [Image: Grafana browser instance]

  4. Click the Cassandra Overview dashboard. [Image: Grafana dashboards]

What metrics can I see?

As with Cassandra itself, the DataStax Mission Control Overview dashboard reveals status about nodes and data:

  • Availability of nodes

  • Cassandra cluster Data Size

  • Disk Space Usage

  • Host-level metrics extracted from and available in Vector

  • Latency and throughput

  • SSTable Count

Click the Pods tab to observe an individual cluster.

The Cluster Condensed Dashboard reveals metrics such as Requests Served per Cluster, Memtable Space, Compactions, Streaming, and Latencies.

Click through the tabs on the Cluster Condensed Dashboard to observe detailed information. For example, in the Group By banner, use the Table pulldown to filter by table and observe granular details.
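The same per-table drill-down can be approximated programmatically against Mimir's Prometheus-compatible query API. This is a sketch: the endpoint port and the metric name are assumptions, not documented DataStax names:

```shell
# Hypothetical: query per-table request latency through Mimir's
# Prometheus-compatible HTTP API. Port and metric name are assumptions.
MIMIR="http://localhost:9009/prometheus/api/v1/query"
QUERY='sum by (table) (rate(client_request_latency_total[5m]))'
echo "GET ${MIMIR} query=${QUERY}"
# With a port-forward to the Mimir query endpoint active, run:
#   curl -sG "$MIMIR" --data-urlencode "query=$QUERY"
```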

