Metrics

Mission Control collects metrics from all components and aggregates them across projects and clusters. Review this unified observability data in the centralized user interface. Mission Control installs and configures the metrics components alongside the Mission Control Control Plane and scales those components independently.

These components enable you to monitor metrics from many sources within Mission Control, including:

  • Platform services

  • Operators

  • Observability components

  • Reaper

  • Database instances

Mission Control deploys observability components only to Platform instances.

Metrics are read from the database at scrape time, so the data is always current. Metrics are scraped every 30 seconds, and logs are collected as they are written to disk. This data is then pushed to aggregator instances, which apply the configured transforms and sinks to the data stream.
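For a concrete sense of the 30-second scrape, a metrics source definition in the collection pipeline (Vector, described in Metrics collection below) might resemble the following minimal sketch. The source name, endpoint URL, and port are assumptions for this example, not the values that Mission Control ships.

# Hypothetical scrape source: collect Prometheus-format metrics from a
# local database endpoint every 30 seconds. The endpoint and port are
# placeholders, not Mission Control's shipped configuration.
[sources.cassandra_metrics]
type = "prometheus_scrape"
endpoints = ["http://localhost:9103/metrics"]
scrape_interval_secs = 30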

See collected metrics with Mission Control’s graphical metrics view.

You can also use Mission Control to push metrics to an existing monitoring stack. To manipulate and send observability data externally, add custom transforms and sinks to the Mission Control configuration, as in the sketch below.
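As a sketch only, a custom transform and an external sink might be added to the Vector configuration (described in Metrics collection) like the following. The transform name, sink name, tag value, input, and remote-write endpoint are placeholders; substitute the details of your own monitoring stack.

# Hypothetical custom transform and sink. All names, the tag value, and
# the endpoint are placeholders; replace the input with a source or
# transform that exists in your configuration.
[transforms.tag_environment]
type = "remap"
inputs = ["cassandra_metrics"]
source = '''
.tags.environment = "production"
'''

[sinks.external_prometheus]
type = "prometheus_remote_write"
inputs = ["tag_environment"]
endpoint = "https://prometheus.example.com/api/v1/write"

A remap transform runs a short Vector Remap Language (VRL) program on each event, and a prometheus_remote_write sink forwards metrics to any endpoint that accepts the Prometheus remote write protocol.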

Prerequisites

You must provide an object store during installation and configuration: AWS S3 or an S3-compatible store, Google Cloud Storage, or Azure Blob Storage. All metrics are stored in the object store, which provides long-term storage for metrics.

If cloud-based metrics storage is a concern, for example because you don’t use a cloud provider, you can use any object store that exposes an S3-compatible API. If no S3 endpoint is available, you can deploy MinIO to provide an S3-compatible object store within the Mission Control platform.

Metrics collection

Vector is an observability pipeline framework from Datadog that collects metrics and logs from various Mission Control services. Vector transforms the data and sends it to destinations such as Loki for logs and Mimir for metrics. The mission-control-aggregator ConfigMap in the mission-control namespace stores the configuration data for Vector, including the vector.yaml file. The vector.yaml file defines Vector’s behavior, such as the configured transforms and sinks.

Each database instance within a Control or Data Plane includes a server-system-logger sidecar container. The server-system-logger collects metrics and logs generated by the local database instance.

The following example sink configuration is part of a larger Vector configuration that also defines other components, such as sources, transforms, and other sinks.

vector.toml
[sinks.vector_aggregator]
type = "vector"
inputs = ["cassandra_metrics", "enrich_host_metrics", "add_source_to_systemlog", "gclog_parser"]
address = "mission-control-aggregator.mission-control.svc:6000"

[sinks.console_log]
type = "console"
inputs = ["systemlog"]
target = "stdout"
encoding.codec = "text"

This TOML configuration defines two sinks within a Vector configuration: one sends data to the vector_aggregator and the other writes to the console.

This configuration instructs Vector to:

  1. Collect data from the specified input sources.

    • cassandra_metrics

    • enrich_host_metrics

    • add_source_to_systemlog

    • gclog_parser

  2. Forward the collected data to the vector_aggregator service at the specified address.

  3. Forward the systemlog data to the console for immediate inspection.

The following is a breakdown of the configuration:

  • [sinks.vector_aggregator]: Defines the vector_aggregator sink. The aggregator forwards direct sources, enriched sources, and parsed logs to the defined address.

    • type = "vector": Specifies that this sink is of type "vector", indicating that it forwards data to another Vector instance.

    • inputs = ["cassandra_metrics", "enrich_host_metrics", "add_source_to_systemlog", "gclog_parser"]: Specifies the input sources that are forwarded to the vector_aggregator. These sources represent different types of metrics or logs collected by Vector.

    • address = "mission-control-aggregator.mission-control.svc:6000": Sets the address of the vector_aggregator service. In this example, this is the mission-control-aggregator service in the mission-control namespace, listening on port 6000.

  • [sinks.console_log]: Defines the console_log sink.

    • type = "console": Specifies that this sink is of type "console", indicating that it will output data to the console.

    • inputs = ["systemlog"]: Specifies that the systemlog input source will be forwarded to the console.

    • target = "stdout": Sets the target output stream to standard output (stdout).

    • encoding.codec = "text": Specifies that the output data is encoded in text format.
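For orientation only, the sources and transforms that the inputs above reference might resemble the following sketch. The log path, source type, and VRL program are assumptions for this example; the configuration that Mission Control generates differs.

# Hypothetical sketch of components referenced by the sink inputs above.
# The file path and VRL program are placeholders, not Mission Control's
# generated configuration.
[sources.systemlog]
type = "file"
include = ["/var/log/cassandra/system.log"]

[transforms.add_source_to_systemlog]
type = "remap"
inputs = ["systemlog"]
source = '''
.source = "systemlog"
'''

The cassandra_metrics input would be a metrics source similar to the earlier scrape sketch, and enrich_host_metrics and gclog_parser would be additional transforms along the same lines.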

View metrics

Use the Mission Control UI to view metrics.

  1. In the Mission Control UI, go to Home, and then select your target cluster’s project.

  2. Click Observability.

  3. In the Health Metrics tab, hover over any part of a chart to view more details. Review details for a specific time by moving your cursor along the horizontal timeline.

  4. Optional: In the Filter list, select the datacenter to monitor.

  5. Optional: In the Frequency list, select how often to refresh the metrics.

  6. Optional: In the Time Period list, select the monitoring time period.

What metrics can I see?

Various Mission Control views reveal real-time and historical performance status about clusters, datacenters, nodes, tables, data, and storage tiers.

  • Overview view

  • Node view

  • Observability view

Overview view

  1. In the Mission Control UI, go to Home, and then select your target cluster’s project.

  2. In the Overview tab, review the datacenter and node information that the Overview view presents.

Node view

  1. In the Mission Control UI, go to Home, and then select your target cluster’s project.

  2. In the Nodes section of the Overview tab, in the Name column, click on a node.

  3. Monitor node specifics such as:

    • Availability of nodes - the status is next to the node name

    • Type of database - HCD, DSE, or Cassandra

    • Storage Capacity - typically measured in gigabytes (GB)

    • Load

    • Memory Usage - with details about System, Heap, and In Memory usage

    • Gossip activity

    • Pending Tasks

    • Number of Native clients

    • Days of Uptime

    • Running Tasks - with Type, SSTable, and Progress

    • Incoming Streams - with Operation, Peer, and Progress

    • Outgoing Streams - with Operation, Peer, and Progress

    • Thread Pool Stats - with Name, Active, Pending, Completed, Blocked, and Total Blocked

Observability view

  1. In the Mission Control UI, go to Home, and then select your target cluster’s project.

  2. Click Observability.

  3. In Examine metrics and logs, monitor datacenter activity in the cluster for a specific Frequency and Time Period:

    • Read/Write Throughput

    • Read/Write Latencies

    • Other Latencies

    • Errors

    • CPU Utilization

    • Unix Load

    • Garbage Collection Time

    • Disk Read Throughput

    • Disk Write Throughput

    • Network IO - with Receive (RX) and Transmit (TX) values
