Monitor database metrics

You can view and export health metrics for your databases, such as latency and throughput. These metrics provide insights into database performance and workload distribution.

View metrics in the Astra Portal

In the Astra Portal, go to your Astra DB Classic database, and then click the Health tab to explore metrics such as total requests, errors, latency, lightweight transactions (LWTs), and tombstones.

Read and write latencies display in microseconds. To view more granular metrics, hover over a specific time in the graph. Gaps in read/write metrics indicate periods when there were no requests.

To change the reporting period, use the schedule Time Frame menu. The default time range is 10 minutes.

For multi-region databases, use the Region menu to inspect data for a different region.

To view the dashboard in a full window, click Cycle View Mode, press Esc, click Share, and then open the URL in a new browser tab or window.

Request Overview widget

Requests Combined

Displays request rates (req/s) for different types of requests. The rates are summed over all database coordinators.

Request Errors

Displays the request error rates (req/s) for different types of requests. The rates are summed over all database coordinators.

  • Timeouts indicate that the queries are taking too long to complete.

  • Unavailables indicate that the coordinator did not have enough alive data nodes to work with.

  • Failures can be caused by queries violating certain guardrails or other error conditions. For more, see Astra DB database guardrails and limits.

Average Request Complexity

This panel provides a high-level view of the average complexity of database requests. It measures the ratio of the average request units to the average request rate. A higher complexity score indicates that a request is more resource-intensive and expensive in terms of read/write activity. This metric combines write operations, including regular writes, counter writes, and index writes, to provide a comprehensive view of the write request complexity.
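The ratio described above can be illustrated with a short calculation. This is a hedged sketch with hypothetical sample numbers, not values taken from the Astra metrics API:

```python
# Hypothetical sample values over the same reporting window:
# total request units consumed per second and request rate (req/s).
avg_request_units = 1200.0
avg_request_rate = 300.0

# Average request complexity is the ratio of the two: a higher value
# means each request does more read/write work on average.
complexity = avg_request_units / avg_request_rate
print(complexity)  # 4.0
```

In this example, each request consumes four request units on average, so requests here are four times as resource-intensive as single-unit requests.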

Writes widget

Write Latency

Displays coordinator write request latency quantiles on the left y-axis and the total write request rate (req/s) on the right y-axis.

Write Size Distribution

Displays different write request mutation size quantiles. Large mutations can cause performance problems and might even be rejected.

Reads widget

Read Latency

Displays coordinator read request latency quantiles on the left y-axis, and the total read request rate (req/s) on the right y-axis.

Range Latency

Displays coordinator range request latency quantiles on the left y-axis, and the total range request rate (req/s) on the right y-axis.

Stargate widget

This widget includes Connected Clients, which reports the number of CQL connections for the database.

Lightweight Transactions (LWTs) widget

Compare and Set (CAS) Write Latency

Displays coordinator CAS write request latency quantiles on the left y-axis, and the total CAS write request rate (req/s) on the right y-axis.

CAS Write Contention

Displays coordinator CAS write request contention quantiles on the left y-axis and the number of unfinished commits on the right y-axis.

A high number of contended requests negatively affects request latency and causes timeouts. To reduce contention, reduce the number of concurrent requests to the same partition. Unfinished commits also increase latency, so reducing contention can help reduce the number of unfinished commits.

CAS Read Latency

Displays coordinator CAS read request latency quantiles on the left y-axis, and the total CAS read request rate (req/s) on the right y-axis.

CAS Read Contention

Displays coordinator CAS read request contention quantiles on the left y-axis, and the number of unfinished commits on the right y-axis.

A high number of contended requests negatively affects request latency and causes timeouts. To reduce contention, reduce the number of concurrent requests to the same partition. Unfinished commits also increase latency, so reducing contention can help reduce the number of unfinished commits.

Tombstones widget

Tombstones Scanned / s

Displays the number of tombstones being scanned per keyspace, table, and second. A large number of tombstones can cause increased latency or query failures.

Tombstone Guardrail Warnings / s

Displays the number of queries exceeding the tombstone guardrail warning threshold per keyspace, table, and second. For more, see Astra DB database guardrails and limits.

Tombstone Guardrail Failures / s

Displays the number of queries exceeding the tombstone guardrail failure threshold per keyspace, table, and second. For more, see Astra DB database guardrails and limits.

Export metrics to third-party services

Databases in active status can forward health metrics to a third-party observability service.

If you use a private endpoint, exported metrics traffic doesn’t use the private connection.

Export metrics to Prometheus

You can use the Astra Portal or the DevOps API to manage the metrics export destinations for your databases. To export metrics for multiple databases, you must configure an export destination for each database.

You must use Prometheus v2.25 or later. DataStax recommends Prometheus v2.33 or later.

You must enable the remote write receiver in the destination Prometheus instance, for example by starting Prometheus with the --web.enable-remote-write-receiver flag (v2.33 and later) or the --enable-feature=remote-write-receiver feature flag (v2.25 to v2.32). For more information, see Remote storage integrations and <remote_write>.

  • Astra Portal

  • DevOps API

  1. In the Astra Portal navigation menu, select your database, and then click Settings.

  2. In the Export Metrics section, click Add Destination.

  3. Select Prometheus.

  4. For Prometheus Strategy, select your authentication method, and then provide the required credentials:

    • Bearer: Provide a Prometheus Token and Prometheus Endpoint.

    • Basic: Provide a Prometheus Username, Prometheus Password, and Prometheus Endpoint.

  5. Click Add Destination.

    The new destination appears in the Export Metrics section. To edit or delete this configuration, click more_vert More, and then select Edit or Delete.

  1. Create an application token that has the Manage Metrics permission for the database for which you want to export metrics.

    If you plan to export metrics for multiple databases, you can customize each token’s scope. For example, you can create separate tokens for each database or create one token with the Organization Administrator role.

  2. Get the database ID. In the Astra Portal, go to your database, and then copy the Database ID, or use the DevOps API List databases endpoint to get multiple database IDs at once.

    curl --location --request GET 'https://api.astra.datastax.com/v2/databases' \
      --header 'Accept: application/json' \
      --header 'Authorization: Bearer APPLICATION_TOKEN_WITH_ORG_ADMIN_ROLE' \
      --include
  3. Use the DevOps API Get Telemetry Configuration endpoint to get the database’s existing metrics export configuration:

    curl --request GET \
      --url 'https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics' \
      --header 'Accept: application/json' \
      --header 'Authorization: Bearer APPLICATION_TOKEN' \
      --include

    A database’s telemetry configuration is a desired state list. You must always send the list of all export destinations when you add, remove, or change any destinations for a database. For example, if a database has five destinations, but your POST body contains only one destination, then all destinations are removed except the one included in your request.
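The desired-state behavior described above can be sketched in a few lines. This is an illustration only: the destination contents are hypothetical placeholders, and the merged object is what you would send as the POST body, with the actual HTTP call omitted:

```python
import json

# Configuration returned by the Get Telemetry Configuration endpoint
# (hypothetical example: one existing CloudWatch destination).
existing = {
    "cloudwatch": {
        "access_key": "AWS_ACCESS_KEY",
        "secret_key": "AWS_SECRET_KEY",
        "region": "us-east-1",
    }
}

# Destination to add. Because the configuration is a desired state
# list, the POST body must also include every existing destination.
new_destination = {
    "prometheus_remote": {
        "endpoint": "https://prometheus.example.com/api/prom/push",
        "auth_strategy": "bearer",
        "token": "PROMETHEUS_TOKEN",
    }
}

# Merge the existing configuration with the new destination; POSTing
# only new_destination would delete the CloudWatch destination.
desired_state = {**existing, **new_destination}
print(json.dumps(desired_state, indent=2))
```

The key point is that the POST body always describes the complete set of destinations, never a delta.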

  4. To add a Prometheus metrics export destination to a database, send a POST request to the DevOps API Configure Telemetry endpoint (https://docs.datastax.com/en/astra-api-docs/attachments/devops-api/index.html#tag/Database-Operations/operation/configureTelemetry). The request body must contain the database’s entire telemetry configuration, including all existing destinations, as returned by the Get Telemetry Configuration endpoint.

    • Prometheus token

    • Prometheus username and password

    curl --request POST \
      --url 'https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics' \
      --header 'Accept: application/json' \
      --header 'Authorization: Bearer APPLICATION_TOKEN' \
      --include \
      --data '{
                "prometheus_remote":  {
                  "endpoint": "PROMETHEUS_ENDPOINT",
                  "auth_strategy": "bearer",
                  "token": PROMETHEUS_TOKEN
                }
              }'

    Replace the following:

    • ASTRA_DB_ID: Your database ID.

    • APPLICATION_TOKEN: Your Astra DB application token.

    • PROMETHEUS_ENDPOINT: Your Prometheus endpoint URL.

    • PROMETHEUS_TOKEN: Your Prometheus authentication token.

    curl --request POST \
      --url 'https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics' \
      --header 'Accept: application/json' \
      --header 'Authorization: Bearer APPLICATION_TOKEN' \
      --include \
      --data '{
                "prometheus_remote":  {
                  "endpoint": "PROMETHEUS_ENDPOINT",
                  "auth_strategy": "basic",
                  "password": PROMETHEUS_PASSWORD,
                  "user": PROMETHEUS_USERNAME
                }
              }'

    Replace the following:

    • ASTRA_DB_ID: Your database ID.

    • APPLICATION_TOKEN: Your Astra DB application token.

    • PROMETHEUS_ENDPOINT: Your Prometheus endpoint URL.

    • PROMETHEUS_PASSWORD and PROMETHEUS_USERNAME: Your Prometheus authentication credentials.

  5. Use the Get Telemetry Configuration endpoint to review the updated configuration:

    curl --request GET \
      --url 'https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics' \
      --header 'Accept: application/json' \
      --header 'Authorization: Bearer APPLICATION_TOKEN' \
      --include
    Response

    A successful response includes information about all metrics export destinations for the specified database:

    {
        "prometheus_remote": {
            "endpoint": "https://prometheus.example.com/api/prom/push",
            "auth_strategy": "basic",
            "user": "PROMETHEUS_USERNAME",
            "password": "PROMETHEUS_PASSWORD"
      }
    }
  6. To edit or delete a telemetry configuration, do the following:

    1. Use the Get Telemetry Configuration endpoint to get the database’s existing telemetry configuration.

    2. Send a POST request to the Configure Telemetry endpoint with the entire updated configuration. This is a desired state list; the POST request body replaces the database’s telemetry configuration.
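The edit-or-delete flow above follows the same desired-state pattern: fetch the full configuration, drop or modify a destination, and POST the remainder. A minimal sketch with hypothetical destination contents:

```python
# Hypothetical configuration with two destinations, as returned by the
# Get Telemetry Configuration endpoint.
config = {
    "prometheus_remote": {
        "endpoint": "https://prometheus.example.com/api/prom/push",
        "auth_strategy": "bearer",
        "token": "PROMETHEUS_TOKEN",
    },
    "kafka": {
        "bootstrap_servers": ["BOOTSTRAP_SERVER_URL"],
        "topic": "astra_metrics_events",
    },
}

# To delete the Kafka destination, POST the configuration minus that
# key; the POST body replaces the whole telemetry configuration.
updated = {name: dest for name, dest in config.items() if name != "kafka"}
print(sorted(updated))  # ['prometheus_remote']
```

To edit a destination instead, modify its entry in place and POST the full configuration the same way.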

Export metrics to Apache Kafka

You can use the Astra Portal or the DevOps API to manage the metrics export destinations for your databases. To export metrics for multiple databases, you must configure an export destination for each database.

This configuration uses SASL authentication. For more information about telemetry with Kafka, see Kafka metrics overview and Kafka Monitoring.

  • Astra Portal

  • DevOps API

  1. In the Astra Portal navigation menu, select your database, and then click Settings.

  2. In the Export Metrics section, click Add Destination.

  3. Select Kafka.

  4. If required, select a Kafka Security Protocol:

    • SASL_PLAINTEXT: SASL authenticated, non-encrypted channel

    • SASL_SSL: SASL authenticated, encrypted channel

    This isn’t required for most Kafka installations. If you use hosted Kafka on Confluent Cloud, you might need to use SASL_SSL. Non-authenticated options (SSL and PLAINTEXT) aren’t supported.

  5. For SASL Mechanism, enter the appropriate SASL mechanism property for your security protocol, such as GSSAPI, PLAIN, SCRAM-SHA-256, or SCRAM-SHA-512.

  6. For SASL Username and SASL Password, enter your Kafka authentication credentials.

  7. For Topic, enter the Kafka topic where you want Astra DB to export metrics. This topic must exist on your Kafka servers.

  8. For Bootstrap Servers, add one or more Kafka Bootstrap Server entries, such as pkc-9999e.us-east-1.aws.confluent.cloud:9092.

  9. Click Add Destination.

    The new destination appears in the Export Metrics section. To edit or delete this configuration, click more_vert More, and then select Edit or Delete.

  1. Create an application token that has the Manage Metrics permission for the database for which you want to export metrics.

    If you plan to export metrics for multiple databases, you can customize each token’s scope. For example, you can create separate tokens for each database or create one token with the Organization Administrator role.

  2. Get the database ID. In the Astra Portal, go to your database, and then copy the Database ID, or use the DevOps API List databases endpoint to get multiple database IDs at once.

    curl --location --request GET 'https://api.astra.datastax.com/v2/databases' \
      --header 'Accept: application/json' \
      --header 'Authorization: Bearer APPLICATION_TOKEN_WITH_ORG_ADMIN_ROLE' \
      --include
  3. Use the DevOps API Get Telemetry Configuration endpoint to get the database’s existing metrics export configuration:

    curl --request GET \
      --url 'https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics' \
      --header 'Accept: application/json' \
      --header 'Authorization: Bearer APPLICATION_TOKEN' \
      --include

    A database’s telemetry configuration is a desired state list. You must always send the list of all export destinations when you add, remove, or change any destinations for a database. For example, if a database has five destinations, but your POST body contains only one destination, then all destinations are removed except the one included in your request.

  4. To add a Kafka metrics export destination to a database, send a POST request to the DevOps API Configure Telemetry endpoint (https://docs.datastax.com/en/astra-api-docs/attachments/devops-api/index.html#tag/Database-Operations/operation/configureTelemetry). The request body must contain the database’s entire telemetry configuration, including all existing destinations, as returned by the Get Telemetry Configuration endpoint.

    curl --request POST \
      --url 'https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics' \
      --header 'Accept: application/json' \
      --header 'Authorization: Bearer APPLICATION_TOKEN' \
      --include \
      --data '{
          "kafka": {
              "bootstrap_servers": [
                  "BOOTSTRAP_SERVER_URL"
              ],
              "topic": "KAFKA_TOPIC",
              "sasl_mechanism": "SASL_MECHANISM_PROPERTY",
              "sasl_username": "KAFKA_USERNAME",
              "sasl_password": "KAFKA_PASSWORD",
              "security_protocol": "SASL_SECURITY_PROTOCOL"
          }
        }'

    Replace the following:

    • ASTRA_DB_ID: Your database ID.

    • APPLICATION_TOKEN: Your Astra DB application token.

    • BOOTSTRAP_SERVER_URL: A Kafka Bootstrap Server URL, such as pkc-9999e.us-east-1.aws.confluent.cloud:9092. You can provide a list of URLs.

    • KAFKA_TOPIC: The Kafka topic where you want Astra DB to export metrics. This topic must exist on your Kafka servers.

    • SASL_MECHANISM_PROPERTY: The appropriate SASL mechanism property for your security protocol, such as GSSAPI, PLAIN, SCRAM-SHA-256, or SCRAM-SHA-512. For more information, see Kafka Authentication Basics and SASL Authentication in Confluent Platform.

    • KAFKA_USERNAME and KAFKA_PASSWORD: Your Kafka authentication credentials.

    • SASL_SECURITY_PROTOCOL: If required, specify SASL_PLAINTEXT or SASL_SSL. This isn’t required for most Kafka installations. If you use hosted Kafka on Confluent Cloud, you might need to use SASL_SSL. Non-authenticated options (SSL and PLAINTEXT) aren’t supported.

  5. Use the Get Telemetry Configuration endpoint to review the updated configuration:

    curl --request GET \
      --url 'https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics' \
      --header 'Accept: application/json' \
      --header 'Authorization: Bearer APPLICATION_TOKEN' \
      --include
    Response

    A successful response includes information about all metrics export destinations for the specified database:

    {
      "kafka": {
        "bootstrap_servers": [
            "BOOTSTRAP_SERVER_URL"
        ],
        "topic": "astra_metrics_events",
        "sasl_mechanism": "PLAIN",
        "sasl_username": "KAFKA_USERNAME",
        "sasl_password": "KAFKA_PASSWORD",
        "security_protocol": "SASL_PLAINTEXT"
      }
    }
  6. To edit or delete a telemetry configuration, do the following:

    1. Use the Get Telemetry Configuration endpoint to get the database’s existing telemetry configuration.

    2. Send a POST request to the Configure Telemetry endpoint with the entire updated configuration. This is a desired state list; the POST request body replaces the database’s telemetry configuration.

Export metrics to Amazon CloudWatch

You can use the Astra Portal or the DevOps API to manage the metrics export destinations for your databases. To export metrics for multiple databases, you must configure an export destination for each database.

  • Astra Portal

  • DevOps API

  1. In the Astra Portal navigation menu, select your database, and then click Settings.

  2. In the Export Metrics section, click Add Destination.

  3. Select Amazon CloudWatch.

  4. For Access Key and Secret Key, enter your Amazon CloudWatch authentication credentials.

    The secret key user must have the cloudwatch:PutMetricData permission.

    Example IAM policy
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AstraDBMetrics",
          "Effect": "Allow",
          "Action": "cloudwatch:PutMetricData",
          "Resource": "*"
        }
      ]
    }
  5. For Region, select the region where you want to export the metrics. This doesn’t have to match your database’s region.

  6. Click Add Destination.

    The new destination appears in the Export Metrics section. To edit or delete this configuration, click more_vert More, and then select Edit or Delete.

  1. Create an application token that has the Manage Metrics permission for the database for which you want to export metrics.

    If you plan to export metrics for multiple databases, you can customize each token’s scope. For example, you can create separate tokens for each database or create one token with the Organization Administrator role.

  2. Get the database ID. In the Astra Portal, go to your database, and then copy the Database ID, or use the DevOps API List databases endpoint to get multiple database IDs at once.

    curl --location --request GET 'https://api.astra.datastax.com/v2/databases' \
      --header 'Accept: application/json' \
      --header 'Authorization: Bearer APPLICATION_TOKEN_WITH_ORG_ADMIN_ROLE' \
      --include
  3. Use the DevOps API Get Telemetry Configuration endpoint to get the database’s existing metrics export configuration:

    curl --request GET \
      --url 'https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics' \
      --header 'Accept: application/json' \
      --header 'Authorization: Bearer APPLICATION_TOKEN' \
      --include

    A database’s telemetry configuration is a desired state list. You must always send the list of all export destinations when you add, remove, or change any destinations for a database. For example, if a database has five destinations, but your POST body contains only one destination, then all destinations are removed except the one included in your request.

  4. To add an Amazon CloudWatch metrics export destination to a database, send a POST request to the DevOps API Configure Telemetry endpoint (https://docs.datastax.com/en/astra-api-docs/attachments/devops-api/index.html#tag/Database-Operations/operation/configureTelemetry). The request body must contain the database’s entire telemetry configuration, including all existing destinations, as returned by the Get Telemetry Configuration endpoint.

    curl --request POST \
      --url 'https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics' \
      --header 'Accept: application/json' \
      --header 'Authorization: Bearer APPLICATION_TOKEN' \
      --include \
      --verbose \
      --data '{
      "cloudwatch": {
        "access_key": "AWS_ACCESS_KEY",
        "secret_key": "AWS_SECRET_KEY",
        "region": "AWS_REGION"
      }
    }'

    Replace the following:

    • ASTRA_DB_ID: Your database ID.

    • APPLICATION_TOKEN: Your Astra DB application token.

    • AWS_ACCESS_KEY and AWS_SECRET_KEY: Your Amazon CloudWatch authentication credentials. The secret key user must have the cloudwatch:PutMetricData permission, for example:

      Example IAM policy
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "AstraDBMetrics",
            "Effect": "Allow",
            "Action": "cloudwatch:PutMetricData",
            "Resource": "*"
          }
        ]
      }
    • AWS_REGION: Enter the region where you want to export the metrics. This doesn’t have to match your database’s region.

  5. Use the Get Telemetry Configuration endpoint to review the updated configuration:

    curl --request GET \
      --url 'https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics' \
      --header 'Accept: application/json' \
      --header 'Authorization: Bearer APPLICATION_TOKEN' \
      --include
    Response

    A successful response includes information about all metrics export destinations for the specified database:

    {
      "cloudwatch": {
        "access_key": "AWS_ACCESS_KEY",
        "secret_key": "AWS_SECRET_KEY",
        "region": "AWS_REGION"
      }
    }
  6. To edit or delete a telemetry configuration, do the following:

    1. Use the Get Telemetry Configuration endpoint to get the database’s existing telemetry configuration.

    2. Send a POST request to the Configure Telemetry endpoint with the entire updated configuration. This is a desired state list; the POST request body replaces the database’s telemetry configuration.

Export metrics to Splunk

You can use the Astra Portal or the DevOps API to manage the metrics export destinations for your databases. To export metrics for multiple databases, you must configure an export destination for each database.

  • Astra Portal

  • DevOps API

  1. In the Astra Portal navigation menu, select your database, and then click Settings.

  2. In the Export Metrics section, click Add Destination.

  3. Select Splunk.

  4. For Endpoint, enter the full HTTP address and path for the Splunk HTTP Event Collector (HEC) endpoint.

  5. For Index, enter the Splunk index where you want to export metrics.

  6. For Token, enter the Splunk HTTP Event Collector (HEC) token for Splunk authentication. This token must have permission to write to the specified index.

  7. For Source, enter the source for the events sent to the sink. If you don’t specify a source, the default is astradb.

  8. For Source Type, enter the type of events sent to the sink. If you don’t specify a source type, the default is astradb-metrics.

  9. Click Add Destination.

    The new destination appears in the Export Metrics section. To edit or delete this configuration, click more_vert More, and then select Edit or Delete.

  1. Create an application token that has the Manage Metrics permission for the database for which you want to export metrics.

    If you plan to export metrics for multiple databases, you can customize each token’s scope. For example, you can create separate tokens for each database or create one token with the Organization Administrator role.

  2. Get the database ID. In the Astra Portal, go to your database, and then copy the Database ID, or use the DevOps API List databases endpoint to get multiple database IDs at once.

    curl --location --request GET 'https://api.astra.datastax.com/v2/databases' \
      --header 'Accept: application/json' \
      --header 'Authorization: Bearer APPLICATION_TOKEN_WITH_ORG_ADMIN_ROLE' \
      --include
  3. Use the DevOps API Get Telemetry Configuration endpoint to get the database’s existing metrics export configuration:

    curl --request GET \
      --url 'https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics' \
      --header 'Accept: application/json' \
      --header 'Authorization: Bearer APPLICATION_TOKEN' \
      --include

    A database’s telemetry configuration is a desired state list. You must always send the list of all export destinations when you add, remove, or change any destinations for a database. For example, if a database has five destinations, but your POST body contains only one destination, then all destinations are removed except the one included in your request.

  4. To add a Splunk metrics export destination to a database, send a POST request to the DevOps API Configure Telemetry endpoint (https://docs.datastax.com/en/astra-api-docs/attachments/devops-api/index.html#tag/Database-Operations/operation/configureTelemetry). The request body must contain the database’s entire telemetry configuration, including all existing destinations, as returned by the Get Telemetry Configuration endpoint.

    curl --request POST \
      --url 'https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics' \
      --header 'Accept: application/json' \
      --header 'Authorization: Bearer APPLICATION_TOKEN' \
      --include \
      --verbose \
      --data '{
      "splunk": {
        "endpoint": "SPLUNK_HEC_ENDPOINT",
        "index": "SPLUNK_INDEX",
        "token": "SPLUNK_TOKEN",
        "source": "SPLUNK_SOURCE",
        "sourcetype": "SPLUNK_SOURCETYPE"
      }
    }'

    Replace the following:

    • ASTRA_DB_ID: Your database ID.

    • APPLICATION_TOKEN: Your Astra DB application token.

    • SPLUNK_HEC_ENDPOINT: The full HTTP address and path for the Splunk HTTP Event Collector (HEC) endpoint. You can get this from your Splunk Administrator.

    • SPLUNK_INDEX: The Splunk index where you want to export metrics.

    • SPLUNK_TOKEN: The Splunk HEC token for Splunk authentication. This token must have permission to write to the specified index.

    • SPLUNK_SOURCE: The source for the events sent to the sink. If unset, the default is astradb.

    • SPLUNK_SOURCETYPE: The type of events sent to the sink. If unset, the default is astradb-metrics.

  5. Use the Get Telemetry Configuration endpoint to review the updated configuration:

    curl --request GET \
      --url 'https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics' \
      --header 'Accept: application/json' \
      --header 'Authorization: Bearer APPLICATION_TOKEN' \
      --include
    Response

    A successful response includes information about all metrics export destinations for the specified database:

    {
      "splunk": {
        "endpoint": "https://http-inputs-COMPANY.splunkcloud.com",
        "index": "astra_db_metrics",
        "token": "SPLUNK_TOKEN",
        "source": "astradb",
        "sourcetype": "astradb-metrics"
      }
    }
  6. To edit or delete a telemetry configuration, do the following:

    1. Use the Get Telemetry Configuration endpoint to get the database’s existing telemetry configuration.

    2. Send a POST request to the Configure Telemetry endpoint with the entire updated configuration. This is a desired state list; the POST request body replaces the database’s telemetry configuration.
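Splunk HEC authenticates with an "Authorization: Splunk <token>" header. As a hedged illustration (the endpoint and token below are placeholders, and the request is constructed but not sent), you can verify how the header is formed before configuring the export:

```python
import urllib.request

# Placeholder values; substitute your actual HEC endpoint and token.
SPLUNK_HEC_ENDPOINT = "https://http-inputs-COMPANY.splunkcloud.com"
SPLUNK_TOKEN = "SPLUNK_TOKEN"

# HEC expects an "Authorization: Splunk <token>" header on every
# request; the health endpoint is a lightweight way to check
# connectivity before wiring up metrics export.
req = urllib.request.Request(
    SPLUNK_HEC_ENDPOINT + "/services/collector/health",
    headers={"Authorization": "Splunk " + SPLUNK_TOKEN},
)

# urllib.request.urlopen(req) would perform the check; it is not
# executed here because the endpoint is a placeholder.
print(req.get_header("Authorization"))
```

If the health check succeeds against your real endpoint, the same endpoint and token should work in the export configuration above.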

Export metrics to Apache Pulsar

You can use the Astra Portal or the DevOps API to manage the metrics export destinations for your databases. To export metrics for multiple databases, you must configure an export destination for each database.

  • Astra Portal

  • DevOps API

  1. In the Astra Portal navigation menu, select your database, and then click Settings.

  2. In the Export Metrics section, click Add Destination.

  3. Select Pulsar.

  4. For Endpoint, enter your Pulsar Broker URL.

  5. For Topic, enter the Pulsar topic where you want to publish metrics.

  6. Enter an Auth Name, select the Auth Strategy used by your Pulsar Broker, and then provide the required credentials:

    • Token: Provide a Pulsar authentication token.

    • Oauth2: Provide your OAuth2 Credentials URL and OAuth2 Issuer URL. Optionally, you can provide your OAuth2 Audience and OAuth2 Scope. For more information, see Authentication using OAuth 2.0 access tokens.

  7. Click Add Destination.

    The new destination appears in the Export Metrics section. To edit or delete this configuration, click more_vert More, and then select Edit or Delete.

  1. Create an application token that has the Manage Metrics permission for the database for which you want to export metrics.

    If you plan to export metrics for multiple databases, you can customize each token’s scope. For example, you can create separate tokens for each database or create one token with the Organization Administrator role.

  2. Get the database ID. In the Astra Portal, go to your database, and then copy the Database ID, or use the DevOps API List databases endpoint to get multiple database IDs at once.

    curl --location --request GET 'https://api.astra.datastax.com/v2/databases' \
      --header 'Accept: application/json' \
      --header 'Authorization: Bearer APPLICATION_TOKEN_WITH_ORG_ADMIN_ROLE' \
      --include
  3. Use the DevOps API Get Telemetry Configuration endpoint to get the database’s existing metrics export configuration:

    curl --request GET \
      --url 'https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics' \
      --header 'Accept: application/json' \
      --header 'Authorization: Bearer APPLICATION_TOKEN' \
      --include

    A database’s telemetry configuration is a desired state list. You must always send the list of all export destinations when you add, remove, or change any destinations for a database. For example, if a database has five destinations, but your POST body contains only one destination, then all destinations are removed except the one included in your request.

  4. To add a Pulsar metrics export destination to a database, send a POST request to the DevOps API Configure Telemetry endpoint (https://docs.datastax.com/en/astra-api-docs/attachments/devops-api/index.html#tag/Database-Operations/operation/configureTelemetry). The request body must contain the database’s entire telemetry configuration, including all existing destinations, as returned by the Get Telemetry Configuration endpoint.

    • Pulsar token

    • Pulsar OAuth2

    curl --request POST \
      --url 'https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics' \
      --header 'Accept: application/json' \
      --header 'Authorization: Bearer APPLICATION_TOKEN' \
      --include \
      --verbose \
      --data '{
      "pulsar": {
        "endpoint": "PULSAR_ENDPOINT",
        "topic": "PULSAR_TOPIC",
        "auth_strategy": "token",
        "token": "PULSAR_TOKEN",
        "auth_name": "PULSAR_AUTH_NAME"
      }
    }'

    Replace the following:

    • ASTRA_DB_ID: Your database ID.

    • APPLICATION_TOKEN: Your Astra DB application token.

    • PULSAR_ENDPOINT: Your Pulsar Broker URL.

    • PULSAR_TOPIC: The Pulsar topic where you want to publish metrics.

    • PULSAR_TOKEN: A Pulsar authentication token.

    • PULSAR_AUTH_NAME: A Pulsar authentication name.

    curl --request POST \
      --url 'https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics' \
      --header 'Accept: application/json' \
      --header 'Authorization: Bearer APPLICATION_TOKEN' \
      --include \
      --verbose \
      --data '{
      "pulsar": {
        "endpoint": "PULSAR_ENDPOINT",
        "topic": "PULSAR_TOPIC",
        "auth_strategy": "oauth2",
        "oauth2_credentials_url": "CREDENTIALS_URL",
        "oauth2_issuer_url": "ISSUER_URL"
      }
    }'

    Replace the following:

    • ASTRA_DB_ID: Your database ID.

    • APPLICATION_TOKEN: Your Astra DB application token.

    • PULSAR_ENDPOINT: Your Pulsar Broker URL.

    • PULSAR_TOPIC: The Pulsar topic where you want to publish metrics.

    • CREDENTIALS_URL and ISSUER_URL: Your Pulsar OAuth2 credentials and issuer URLs. Optionally, you can provide oauth_audience and oauth2_scope. For more information, see Authentication using OAuth 2.0 access tokens.

  5. Use the Get Telemetry Configuration endpoint to review the updated configuration:

    curl --request GET \
      --url 'https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics' \
      --header 'Accept: application/json' \
      --header 'Authorization: Bearer APPLICATION_TOKEN' \
      --include

    Response

    A successful response includes information about all metrics export destinations for the specified database:

    {
      "pulsar": {
        "endpoint": "PULSAR_ENDPOINT_URL",
        "topic": "PULSAR_TOPIC",
        "auth_strategy": "token",
        "token": "PULSAR_TOKEN",
        "auth_name": "PULSAR_AUTH_NAME"
      }
    }
  6. To edit or delete a telemetry configuration, do the following:

    1. Use the Get Telemetry Configuration endpoint to get the database’s existing telemetry configuration.

    2. Send a POST request to the Configure Telemetry endpoint with the entire updated configuration. This is a desired state list; the POST request body replaces the database’s telemetry configuration.
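The steps above describe a read-modify-write cycle: fetch the full configuration, merge your change into it, and POST the complete result back. The merge step can be sketched in Python as follows; the function name and the configuration shapes are illustrative, not part of the DevOps API.

```python
import copy

def merge_destination(existing_config, key, settings):
    """Return a full telemetry configuration with one destination added
    or replaced. The whole result, not just the new destination, is what
    you POST back to the Configure Telemetry endpoint."""
    merged = copy.deepcopy(existing_config)  # leave the caller's copy intact
    merged[key] = settings                   # add or overwrite one destination
    return merged

# Existing configuration, as returned by GET .../telemetry/metrics:
current = {
    "datadog": {"api_key": "DATADOG_API_KEY", "site": "datadoghq.com"},
}

# Adding a Pulsar destination must preserve the Datadog entry; a POST
# body containing only the Pulsar entry would silently delete it.
updated = merge_destination(current, "pulsar", {
    "endpoint": "PULSAR_ENDPOINT",
    "topic": "PULSAR_TOPIC",
    "auth_strategy": "token",
    "token": "PULSAR_TOKEN",
    "auth_name": "PULSAR_AUTH_NAME",
})
```

Because the configuration is a desired state list, the merged object is the only safe POST body.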

Export metrics to Datadog

You can use the Astra Portal or the DevOps API to manage the metrics export destinations for your databases. To export metrics for multiple databases, you must configure an export destination for each database.

  To use the Astra Portal, do the following:

  1. In the Astra Portal navigation menu, select your database, and then click Settings.

  2. In the Export Metrics section, click Add Destination.

  3. Select Datadog.

  4. For API Key, enter your Datadog API key.

  5. Optional: For Site, enter the Datadog site parameter for the site where you want to export metrics. The site parameter format depends on your site URL, for example:

    • If your site URL begins with https://app., remove https://app. from your site URL to form the site parameter. For example, https://app.datadoghq.com becomes datadoghq.com.

    • If your site URL begins with a different subdomain, such as https://us5 or https://ap1, remove only https:// from your site URL to form the site parameter. For example, https://us5.datadoghq.com becomes us5.datadoghq.com.

  6. Click Add Destination.

    The new destination appears in the Export Metrics section. To edit or delete this configuration, click the More menu, and then select Edit or Delete.
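The two site-parameter rules above amount to stripping the URL scheme, plus the app. subdomain when it is present. A small Python sketch of that transformation (the function name is illustrative):

```python
def datadog_site_parameter(site_url: str) -> str:
    """Derive the Datadog site parameter from a Datadog site URL."""
    if site_url.startswith("https://app."):
        # e.g. https://app.datadoghq.com -> datadoghq.com
        return site_url[len("https://app."):]
    if site_url.startswith("https://"):
        # e.g. https://us5.datadoghq.com -> us5.datadoghq.com
        return site_url[len("https://"):]
    return site_url  # already looks like a bare site parameter

print(datadog_site_parameter("https://app.datadoghq.com"))  # datadoghq.com
print(datadog_site_parameter("https://us5.datadoghq.com"))  # us5.datadoghq.com
```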

  To use the DevOps API, do the following:

  1. Create an application token that has the Manage Metrics permission for the database from which you want to export metrics.

    If you plan to export metrics for multiple databases, you can customize each token’s scope. For example, you can create separate tokens for each database or create one token with the Organization Administrator role.

  2. Get the database ID. In the Astra Portal, go to your database, and then copy the Database ID, or use the DevOps API List databases endpoint to get multiple database IDs at once.

    curl --location --request GET 'https://api.astra.datastax.com/v2/databases' \
      --header 'Accept: application/json' \
      --header 'Authorization: Bearer APPLICATION_TOKEN_WITH_ORG_ADMIN_ROLE' \
      --include
  3. Use the DevOps API Get Telemetry Configuration endpoint to get the database’s existing metrics export configuration:

    curl --request GET \
      --url 'https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics' \
      --header 'Accept: application/json' \
      --header 'Authorization: Bearer APPLICATION_TOKEN' \
      --include

    A database’s telemetry configuration is a desired state list. You must always send the list of all export destinations when you add, remove, or change any destinations for a database. For example, if a database has five destinations, but your POST body contains only one destination, then all destinations are removed except the one included in your request.

  4. To add a Datadog metrics export destination to a database, send a POST request to the DevOps API Configure Telemetry endpoint (https://docs.datastax.com/en/astra-api-docs/attachments/devops-api/index.html#tag/Database-Operations/operation/configureTelemetry). The request body must contain the database’s entire telemetry configuration, including all existing destinations, as returned by the Get Telemetry Configuration endpoint.

    curl --request POST \
      --url 'https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics' \
      --header 'Accept: application/json' \
      --header 'Authorization: Bearer APPLICATION_TOKEN' \
      --include \
      --verbose \
      --data '{
      "datadog": {
        "api_key": "DATADOG_API_KEY",
        "site": "DATADOG_SITE"
      }
    }'

    Replace the following:

    • ASTRA_DB_ID: Your database ID.

    • APPLICATION_TOKEN: Your Astra DB application token.

    • DATADOG_API_KEY: Your Datadog API key.

    • DATADOG_SITE: The Datadog site parameter for the site where you want to export metrics. The site parameter format depends on your site URL, for example:

      • If your site URL begins with https://app., remove https://app. from your site URL to form the site parameter. For example, https://app.datadoghq.com becomes datadoghq.com.

      • If your site URL begins with a different subdomain, such as https://us5 or https://ap1, remove only https:// from your site URL to form the site parameter. For example, https://us5.datadoghq.com becomes us5.datadoghq.com.

  5. Use the Get Telemetry Configuration endpoint to review the updated configuration:

    curl --request GET \
      --url 'https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics' \
      --header 'Accept: application/json' \
      --header 'Authorization: Bearer APPLICATION_TOKEN' \
      --include

    Response

    A successful response includes information about all metrics export destinations for the specified database:

    {
      "datadog": {
        "api_key": "DATADOG_API_KEY",
        "site": "DATADOG_SITE"
      }
    }
  6. To edit or delete a telemetry configuration, do the following:

    1. Use the Get Telemetry Configuration endpoint to get the database’s existing telemetry configuration.

    2. Send a POST request to the Configure Telemetry endpoint with the entire updated configuration. This is a desired state list; the POST request body replaces the database’s telemetry configuration.

Metric definitions

Astra DB Classic records the following metrics for your databases. Each metric is an aggregated value, calculated once per minute. Additionally, each metric has a rate1m and a rate5m variant: the rate1m variant is the rate of increase over a one-minute interval, and the rate5m variant is the rate of increase over a five-minute interval.
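As an illustration of what a rate1m or rate5m value represents, the rate of increase of a counter over a window can be computed from timestamped samples. A simplified sketch follows; real metrics pipelines also handle counter resets and interpolation at the window edges, which this omits.

```python
def rate(samples, window_seconds):
    """Rate of increase (per second) of a monotonically increasing
    counter over the last `window_seconds`, given (timestamp, value)
    samples in ascending time order."""
    end_t, end_v = samples[-1]
    # Keep only the samples inside the window.
    window = [(t, v) for t, v in samples if t >= end_t - window_seconds]
    start_t, start_v = window[0]
    if end_t == start_t:
        return 0.0
    return (end_v - start_v) / (end_t - start_t)

# A request counter that grew by 120 over the last 60 seconds:
samples = [(0, 1000), (30, 1060), (60, 1120)]
print(rate(samples, 60))  # 2.0 requests/second
```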

astra_db_rate_limited_requests_total:rate1m
astra_db_rate_limited_requests_total:rate5m

A calculated rate of change for the number of failed operations due to an Astra DB rate limit. Using these rates, alert if the value is greater than 0 for more than 30 minutes.

astra_db_read_requests_failures_total:rate1m
astra_db_read_requests_failures_total:rate5m

A calculated rate of change for the number of failed reads. Using these rates, alert if the value is greater than 0: set a warning alert on low values and a high-severity alert on larger values, potentially defined as a percentage of read throughput.

astra_db_read_requests_timeouts_total:rate1m
astra_db_read_requests_timeouts_total:rate5m

A calculated rate of change for read timeouts. Timeouts happen when operations against the database take longer than the server side timeout. Using these rates, alert if the value is greater than 0.

astra_db_read_requests_unavailables_total:rate1m
astra_db_read_requests_unavailables_total:rate5m

A calculated rate of change for reads where there were not enough data service replicas available to complete the request. Using these rates, alert if the value is greater than 0.

astra_db_write_requests_failures_total:rate1m
astra_db_write_requests_failures_total:rate5m

A calculated rate of change for the number of failed writes. Cassandra drivers retry failed operations, but significant failures can be problematic. Using these rates, alert if the value is greater than 0: set a warning alert on low values and a high-severity alert on larger values, potentially defined as a percentage of write throughput.
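The warn/high guidance for the failure metrics above can be expressed as thresholds relative to throughput. A hedged sketch, assuming you choose the percentage cutoff yourself; the 5% value below is illustrative, not a DataStax recommendation.

```python
def failure_severity(failure_rate, throughput_rate, high_pct=5.0):
    """Classify a failure rate (req/s) against total throughput (req/s).
    Any nonzero failure rate is at least a warning; failures above
    high_pct percent of throughput are high severity."""
    if failure_rate <= 0:
        return "ok"
    pct = 100.0 * failure_rate / throughput_rate if throughput_rate else 100.0
    return "high" if pct >= high_pct else "warn"

print(failure_severity(0.0, 200.0))   # ok
print(failure_severity(0.5, 200.0))   # warn (0.25% of throughput)
print(failure_severity(15.0, 200.0))  # high (7.5% of throughput)
```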

astra_db_write_requests_timeouts_total:rate1m
astra_db_write_requests_timeouts_total:rate5m

A calculated rate of change for timeouts, which occur when operations take longer than the server side timeout. Using these rates, compare with write_requests_failures.

astra_db_write_requests_unavailables_total:rate1m
astra_db_write_requests_unavailables_total:rate5m

A calculated rate of change for unavailable errors, which occur when the service is not available to service a particular request. Using these rates, compare with write_requests_failures.

astra_db_range_requests_failures_total:rate1m
astra_db_range_requests_failures_total:rate5m

A calculated rate of change for the number of range reads that failed. Cassandra drivers retry failed operations, but significant failures can be problematic. Using these rates, alert if the value is greater than 0: set a warning alert on low values and a high-severity alert on larger values, potentially defined as a percentage of range read throughput.

astra_db_range_requests_timeouts_total:rate1m
astra_db_range_requests_timeouts_total:rate5m

A calculated rate of change for timeouts, which are a subset of total failures. Use this metric to understand if failures are due to timeouts. Using these rates, compare with range_requests_failures.

astra_db_range_requests_unavailables_total:rate1m
astra_db_range_requests_unavailables_total:rate5m

A calculated rate of change for unavailable errors, which are a subset of total failures. Use this metric to understand if failures are due to unavailable errors. Using these rates, compare with range_requests_failures.

astra_db_write_latency_seconds_count:rate1m
astra_db_write_latency_seconds_count:rate5m

A calculated rate of change for write throughput. Alert based on your application service level objective (SLO).

astra_db_write_latency_seconds_bucket:rate1m
astra_db_write_latency_seconds_bucket:rate5m

A calculated rate of change for write latency, where $QUANTILE is a histogram quantile of 99, 95, 90, 75, or 50. For example, astra_db_write_latency_seconds_P99:rate1m. Alert based on your application SLO.

astra_db_write_requests_mutation_size_bytes_bucket:rate1m
astra_db_write_requests_mutation_size_bytes_bucket:rate5m

A calculated rate of change for the size of writes over time, where $QUANTILE is a histogram quantile of 99, 95, 90, 75, or 50. For example, astra_db_write_requests_mutation_size_bytesP99:rate5m.

astra_db_read_latency_seconds_count:rate1m
astra_db_read_latency_seconds_count:rate5m

A calculated rate of change for read throughput. Alert based on your application SLO.

astra_db_read_latency_seconds_bucket:rate1m
astra_db_read_latency_seconds_bucket:rate5m

A calculated rate of change for read latency, where $QUANTILE is a histogram quantile of 99, 95, 90, 75, or 50. For example, astra_db_read_latency_secondsP99:rate1m. Alert based on your application SLO.

astra_db_range_latency_seconds_count:rate1m
astra_db_range_latency_seconds_count:rate5m

A calculated rate of change for range read throughput. Alert based on your application SLO.

astra_db_range_latency_seconds_bucket:rate1m
astra_db_range_latency_seconds_bucket:rate5m

A calculated rate of change of range read latency, where $QUANTILE is a histogram quantile of 99, 95, 90, 75, or 50. For example, astra_db_range_latency_secondsP99. Alert based on your application SLO.
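The `_bucket` metrics above are cumulative histograms, so a quantile such as P99 is estimated by finding the bucket where the cumulative count crosses the target rank. A simplified sketch of that estimation, interpolating linearly within the bucket in the style of Prometheus's histogram_quantile function; the bucket bounds below are invented for the example.

```python
def histogram_quantile(q, buckets):
    """Estimate the q-quantile (0 < q <= 1) from cumulative histogram
    buckets given as (upper_bound_seconds, cumulative_count) pairs,
    sorted by upper bound."""
    total = buckets[-1][1]
    rank = q * total
    prev_bound, prev_count = 0.0, 0
    for bound, count in buckets:
        if count >= rank:
            # Interpolate linearly within this bucket.
            span = count - prev_count
            frac = (rank - prev_count) / span if span else 0.0
            return prev_bound + (bound - prev_bound) * frac
        prev_bound, prev_count = bound, count
    return buckets[-1][0]

# Invented write-latency buckets: 90 requests under 5 ms, 9 more under
# 10 ms, and 1 more under 50 ms (100 requests total).
buckets = [(0.005, 90), (0.010, 99), (0.050, 100)]
print(histogram_quantile(0.95, buckets))  # ~0.0078 seconds (estimated P95)
```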

table_tombstone_read_counter_total:rate1m
table_tombstone_read_counter_total:rate5m

A calculated rate of change for the total number of tombstone reads. Tombstones are markers of deleted records or certain updates (for example collection updates). Monitoring the rate of tombstone reads can help in identifying potential performance impacts. Using these rates, alert if the value shows significant growth.

table_tombstone_read_failures_total:rate1m
table_tombstone_read_failures_total:rate5m

A calculated rate of change for the total number of read operations that failed due to hitting the tombstone guardrail failure threshold. This metric is critical for identifying issues potentially leading to performance degradation or timeouts. Alert if the value is greater than 0.

table_tombstone_read_warnings_total:rate1m
table_tombstone_read_warnings_total:rate5m

A calculated rate of change for the total number of warnings generated due to getting close to the tombstone guardrail failure threshold. This metric helps in identifying scenarios where read operations are slowed or at risk of slowing. Alert on a significant increase as it may indicate potential read performance issues.

astra_db_cas_write_latency_seconds_count:rate1m
astra_db_cas_write_latency_seconds_count:rate5m

A calculated rate of change for the count of CAS (Compare-And-Swap) write operations in Lightweight Transactions (LWTs), measuring the throughput of CAS writes. CAS operations are used for atomic read-modify-write operations. Monitoring the rate of these operations helps in understanding the load and performance characteristics of CAS writes. Alert if the rate significantly deviates from expected patterns, indicating potential concurrency or contention issues.

astra_db_cas_write_latency_seconds_bucket:rate1m
astra_db_cas_write_latency_seconds_bucket:rate5m

A calculated rate of change for CAS write latency distributions in LWTs across predefined latency buckets. This metric provides insights into the latency characteristics of CAS write operations, helping identify latency spikes or trends over time. Alert based on application SLOs, particularly if high-latency buckets show increased counts.

astra_db_cas_read_latency_seconds_count:rate1m
astra_db_cas_read_latency_seconds_count:rate5m

A calculated rate of change for the count of CAS read operations in LWTs, measuring the throughput of CAS reads. Monitoring this rate is important for understanding the load and performance of read operations that involve conditional checks. Alert on unusual changes, which could signal issues with data access patterns or performance bottlenecks.

astra_db_cas_read_latency_seconds_bucket:rate1m
astra_db_cas_read_latency_seconds_bucket:rate5m

A calculated rate of change for CAS read latency distributions in LWTs across predefined latency buckets. This metric aids in identifying the latency performance of CAS reads, essential for diagnosing potential issues in read performance or understanding the distribution of read operation latencies. Alert if latency distribution shifts towards higher buckets, indicating potential performance issues.

astra_db_cas_write_unfinished_commit_total:rate1m
astra_db_cas_write_unfinished_commit_total:rate5m

A calculated rate of change for the total number of CAS write operations in LWTs that did not finish committing. This metric is crucial for detecting issues in the atomicity of write operations, potentially caused by network or node failures. Alert if there’s an increase, as it could impact data consistency.

astra_db_cas_write_contention_total_bucket:rate1m
astra_db_cas_write_contention_total_bucket:rate5m

A calculated rate of change for the distribution of CAS write contention in LWTs across predefined buckets. Contention during CAS write operations can significantly impact performance. This metric helps in understanding and diagnosing the levels of contention affecting CAS writes. Alert on significant increases in higher contention buckets.

astra_db_cas_read_unfinished_commit_total:rate1m
astra_db_cas_read_unfinished_commit_total:rate5m

A calculated rate of change for the total number of CAS read operations that encountered unfinished commits. Monitoring this metric is important for identifying issues with read consistency and potential data visibility problems. Alert if there’s an increase, indicating problems with the completion of write operations.

astra_db_cas_read_contention_total_bucket:rate1m
astra_db_cas_read_contention_total_bucket:rate5m

A calculated rate of change for the distribution of CAS read contention in LWTs across predefined buckets. Contention during CAS reads can indicate performance issues or high levels of concurrent access to the same data. Alert on shifts towards higher contention buckets, indicating a need for investigation and potential optimization.

astra_db_cas_read_requests_failures_total:rate1m
astra_db_cas_read_requests_failures_total:rate5m

A calculated rate of change for the total number of CAS read operations in LWTs that failed. Failures in CAS reads can signal issues with data access or consistency problems. Alert if the rate increases, indicating potential issues affecting the reliability of CAS reads.

astra_db_cas_read_requests_timeouts_total:rate1m
astra_db_cas_read_requests_timeouts_total:rate5m

A calculated rate of change for the number of CAS read operations in LWTs that timed out. Timeouts can indicate system overload or issues with data access patterns. Monitoring this metric helps in identifying and addressing potential bottlenecks. Alert if there’s an increase in timeouts.

astra_db_cas_read_requests_unavailables_total:rate1m
astra_db_cas_read_requests_unavailables_total:rate5m

A calculated rate of change for CAS read operations in LWTs that were unavailable. This metric is vital for understanding the availability of the system to handle CAS reads. An increase in unavailability can indicate cluster health issues. Alert if the rate increases.

astra_db_cas_write_requests_failures_total:rate1m
astra_db_cas_write_requests_failures_total:rate5m

A calculated rate of change for the total number of CAS write operations in LWTs that failed. Failure rates for CAS writes are critical for assessing the reliability and performance of write operations. Alert if there’s a significant increase in failures.

astra_db_cas_write_requests_timeouts_total:rate1m
astra_db_cas_write_requests_timeouts_total:rate5m

A calculated rate of change for the number of CAS write operations in LWTs that timed out. Write timeouts can significantly impact application performance and user experience. Monitoring this rate is crucial for maintaining system performance. Alert on an upward trend in timeouts.

astra_db_cas_write_requests_unavailables_total:rate1m
astra_db_cas_write_requests_unavailables_total:rate5m

A calculated rate of change for CAS write operations in LWTs that were unavailable. Increases in this metric can indicate problems with cluster capacity or health, impacting the ability to perform write operations. Alert if there’s an increase, as it could signal critical availability issues.


© 2024 DataStax | Privacy policy | Terms of use
