View and export metrics

You can view Astra DB Serverless health metrics to get insight into database performance and workload distribution.

Organizations on the Pay As You Go or Enterprise plans can also export and scrape health metrics to third-party services.

View metrics in the Astra Portal

  • Serverless (Vector) databases

  • Serverless (Non-Vector) databases

In the Astra Portal, go to your Serverless (Vector) database to explore the following Key Metrics:

  • Total Latency by Percentile indicates the processing time for read and write requests as the 50th and 99th percentile (p50 and p99) histogram quantiles. p50 Reads/Writes report median latency, which means that half of the requests were processed faster than this value. p99 Reads/Writes report 99th percentile latency, which means that only 1% of requests were slower than this value.

  • Total Throughput is the number of requests processed per second, calculated as a sum of read and write throughput. This widget also reports the average throughput per second.

To change the reporting period, use the schedule Time Frame selection menu. The default time range is 10 minutes.

For multi-region databases, use the Region menu to inspect data for a different region.

Gaps in read/write metrics indicate periods when there were no requests.

In the Astra Portal, go to your Serverless (Non-Vector) database, and then click the Health tab to explore metrics such as total requests, errors, latency, lightweight transactions (LWTs), and tombstones. For more information, see Metric definitions.

To change the reporting period, use the schedule Time Frame selection menu. The default time range is 10 minutes.

For multi-region databases, use the Region menu to inspect data for a different region.

Gaps in read/write metrics indicate periods when there were no requests.

Scrape metrics in exposition format

This Astra DB Serverless feature is currently in public preview. Development is ongoing, and features and functionality are subject to change. Use of this feature is subject to the DataStax Preview Terms.

You can scrape metrics for your Astra DB Serverless databases in Prometheus exposition format using any compatible third-party observability tool or service, such as Prometheus, Amazon CloudWatch, Datadog, or Victoria Metrics. The following steps use Prometheus as an example.

Whereas exporting metrics is push-based, this approach is pull-based. Your security policies might require you to use a pull-based configuration for metrics ingestion.

  1. Create an application token that has Manage Metrics permission for the database that you want to scrape.

    If you plan to scrape multiple databases, you can customize each token’s scope. For example, you can create separate tokens for each database or create one token with the Organization Administrator role.

  2. Get the database ID for each Astra DB Serverless database that you want to scrape.

    • Astra Portal

    • DevOps API

    In the Astra Portal navigation menu, select a database, and then copy the Database ID. Repeat for each database that you want to scrape.

    You can use the DevOps API List databases endpoint to get all database IDs at once:

    curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases" \
    --header "Authorization: Bearer APPLICATION_TOKEN" \
    --header "Accept: application/json"

    Make sure the application token has View DB permission for all relevant databases. For example, the Organization Administrator role can view all databases within an Astra DB organization. If the supplied token doesn’t have permission to view a particular database, the response won’t include any information for that database.

    Response

    A successful response contains an array of database objects. In each object, the id field contains the database ID.

    The following example is truncated for clarity.

    [
      {
        "id": "FIRST_DB_ID",
        "orgId": "organizations/ORG_ID",
        "ownerId": "users/ADMIN_USER_ID",
        "info": { ... },
        "creationTime": "2012-11-01T22:08:41+00:00",
        "terminationTime": "2019-11-01T22:08:41+00:00",
        "status": "ACTIVE",
        "storage": { ... },
        "availableActions": [ ... ],
        ...
      },
      {
        "id": "SECOND_DB_ID",
        "orgId": "organizations/ORG_ID",
        "ownerId": "users/ADMIN_USER_ID",
        "info": { ... },
        "creationTime": "2012-11-01T22:08:41+00:00",
        "terminationTime": "2019-11-01T22:08:41+00:00",
        "status": "ACTIVE",
        "storage": { ... },
        "availableActions": [ ... ],
        ...
      }
    ]
  3. In your Prometheus configuration YAML file, in the scrape_configs section, add a job for each database that you want to scrape:

    scrape_configs:
      - job_name: "prometheus-FIRST_DB_ID"
        scrape_interval: 5m
        metrics_path: /v1/databases/FIRST_DB_ID/metrics
        scheme: "https"
        authorization:
          type: "Astra-Token"
          credentials: "APPLICATION_TOKEN"
        static_configs:
          - targets: ["metrics.astra.datastax.com"]
    
      - job_name: "prometheus-SECOND_DB_ID"
        scrape_interval: 5m
        metrics_path: /v1/databases/SECOND_DB_ID/metrics
        scheme: "https"
        authorization:
          type: "Astra-Token"
          credentials: "APPLICATION_TOKEN"
        static_configs:
          - targets: ["metrics.astra.datastax.com"]
  4. Set the scrape interval to 1m or greater.

    • Global interval

    • Job interval

    In the global section, you can set a global scrape_interval:

    global:
      scrape_interval: 5m # Must be 1m or greater.

    In the scrape_configs section, you can set a scrape_interval for each job:

    scrape_configs:
      - job_name: "prometheus-DB_ID"
        scrape_interval: 5m # Must be 1m or greater.

    Because Astra DB metrics regenerate once per minute, you don’t need to set the scrape interval lower than 1m. Scraping multiple times per minute doesn’t retrieve new data.

  5. Reload your Prometheus configuration.

  6. If required, configure your observability service to receive the Astra DB metrics from your Prometheus endpoint.
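
If you scrape many databases, you can generate the list of database IDs for your scrape jobs from the List databases response in step 2. A minimal sketch using jq; the file name and sample JSON are illustrative, and in practice you would pipe the curl output straight into jq:

```shell
# Illustrative only: save a (truncated) List Databases response, then print
# the "id" field of each database object.
cat > /tmp/astra_databases.json <<'EOF'
[
  {"id": "FIRST_DB_ID", "status": "ACTIVE"},
  {"id": "SECOND_DB_ID", "status": "ACTIVE"}
]
EOF

# One database ID per line, ready to template into scrape_configs jobs.
jq -r '.[].id' /tmp/astra_databases.json
```

Equivalently, append `| jq -r '.[].id'` to the curl command from step 2.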

Export metrics to third-party services

If your organization is on the Pay As You Go or Enterprise plan, your active databases can forward health metrics to a third-party observability service.

Exporting metrics is a push-based configuration. For a pull-based option, see Scrape metrics in exposition format.

If you use a private endpoint, exported metrics traffic doesn’t use the private connection.

Export metrics to Prometheus

You can use the Astra Portal or the DevOps API to manage the metrics export destinations for your databases. To export metrics for multiple databases, you must configure an export destination for each database.

You must use Prometheus v2.25 or later. DataStax recommends Prometheus v2.33 or later.

You must enable remote-write-receiver in the destination app. For more information, see Remote storage integrations and <remote_write>.
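
For a self-managed Prometheus server, enabling the receiver is a launch-flag fragment like the following, assuming Prometheus v2.33 or later; on v2.25 through v2.32, the equivalent is the `--enable-feature=remote-write-receiver` feature flag:

```shell
# Start Prometheus with the remote write receiver enabled (v2.33+).
# Prometheus then accepts pushed samples at /api/v1/write.
prometheus --config.file=prometheus.yml --web.enable-remote-write-receiver
```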

  • Astra Portal

  • DevOps API

  1. In the Astra Portal navigation menu, select your database, and then click Settings.

  2. In the Export Metrics section, click Add Destination.

  3. Select Prometheus.

  4. For Prometheus Strategy, select your authentication method, and then provide the required credentials:

    • Bearer: Provide a Prometheus Token and Prometheus Endpoint.

    • Basic: Provide a Prometheus Username, Prometheus Password, and Prometheus Endpoint.

  5. Click Add Destination.

    The new destination appears in the Export Metrics section. To edit or delete this configuration, click more_vert More, and then select Edit or Delete.

  1. Create an application token that has Manage Metrics permission for the database for which you want to export metrics.

    If you plan to export metrics for multiple databases, you can customize each token’s scope. For example, you can create separate tokens for each database or create one token with the Organization Administrator role.

  2. Get the database ID. In the Astra Portal, go to your database, and then copy the Database ID, or use the DevOps API List databases endpoint to get multiple database IDs at once.

    curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases" \
    --header "Authorization: Bearer APPLICATION_TOKEN_WITH_ORG_ADMIN_ROLE"
  3. Use the DevOps API Get Telemetry Configuration endpoint to get the database’s existing metrics export configuration:

    curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN"

    A database’s telemetry configuration is a desired state list. You must always send the list of all export destinations when you add, remove, or change any destinations for a database. For example, if a database has five destinations, but your POST body contains only one destination, then all destinations are removed except the one included in your request.

  4. To add a Prometheus metrics export destination to a database, send a POST request to the DevOps API Configure Telemetry endpoint. The request body must contain the database’s entire telemetry configuration, including all existing destinations, as returned by the Get Telemetry Configuration endpoint.

    • Prometheus token

    • Prometheus username and password

    curl -sS --location -X POST "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN" \
    --data '{
      "prometheus_remote": {
        "endpoint": "PROMETHEUS_ENDPOINT",
        "auth_strategy": "bearer",
        "token": "PROMETHEUS_TOKEN"
      }
    }'

    Replace the following:

    • ASTRA_DB_ID: Your database ID.

    • APPLICATION_TOKEN: Your Astra DB application token.

    • PROMETHEUS_ENDPOINT: Your Prometheus endpoint URL.

    • PROMETHEUS_TOKEN: Your Prometheus authentication token.

    curl -sS --location -X POST "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN" \
    --data '{
      "prometheus_remote": {
        "endpoint": "PROMETHEUS_ENDPOINT",
        "auth_strategy": "basic",
        "password": "PROMETHEUS_PASSWORD",
        "user": "PROMETHEUS_USERNAME"
      }
    }'

    Replace the following:

    • ASTRA_DB_ID: Your database ID.

    • APPLICATION_TOKEN: Your Astra DB application token.

    • PROMETHEUS_ENDPOINT: Your Prometheus endpoint URL.

    • PROMETHEUS_PASSWORD and PROMETHEUS_USERNAME: Your Prometheus authentication credentials.

  5. Use the Get Telemetry Configuration endpoint to review the updated configuration:

    curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN"
    Response

    A successful response includes information about all metrics export destinations for the specified database:

    {
      "prometheus_remote": {
        "endpoint": "https://prometheus.example.com/api/prom/push",
        "auth_strategy": "basic",
        "user": "PROMETHEUS_USERNAME",
        "password": "PROMETHEUS_PASSWORD"
      }
    }
  6. To edit or delete a telemetry configuration, do the following:

    1. Use the Get Telemetry Configuration endpoint to get the database’s existing telemetry configuration.

    2. Send a POST request to the Configure Telemetry endpoint with the entire updated configuration. This is a desired state list; the POST request body replaces the database’s telemetry configuration.
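
Because the telemetry configuration is a desired state list, removing one destination means POSTing everything else back. A minimal sketch with jq; the file name and saved response below are illustrative:

```shell
# Illustrative only: start from a saved Get Telemetry Configuration response
# with two destinations, drop "prometheus_remote", and keep the rest.
cat > /tmp/telemetry.json <<'EOF'
{
  "prometheus_remote": {"endpoint": "https://prometheus.example.com/api/prom/push"},
  "splunk": {"endpoint": "https://http-inputs-COMPANY.splunkcloud.com"}
}
EOF

jq 'del(.prometheus_remote)' /tmp/telemetry.json > /tmp/telemetry_updated.json

# POST the remaining configuration as the new desired state:
# curl -sS -X POST "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
#   --header "Authorization: Bearer APPLICATION_TOKEN" \
#   --data @/tmp/telemetry_updated.json
```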

Export metrics to Apache Kafka

You can use the Astra Portal or the DevOps API to manage the metrics export destinations for your databases. To export metrics for multiple databases, you must configure an export destination for each database.

This configuration uses SASL authentication. For more information about telemetry with Kafka, see Kafka metrics overview and Kafka Monitoring.

  • Astra Portal

  • DevOps API

  1. In the Astra Portal navigation menu, select your database, and then click Settings.

  2. In the Export Metrics section, click Add Destination.

  3. Select Kafka.

  4. If required, select a Kafka Security Protocol:

    • SASL_PLAINTEXT: SASL authenticated, non-encrypted channel

    • SASL_SSL: SASL authenticated, encrypted channel

    This isn’t required for most Kafka installations. If you use hosted Kafka on Confluent Cloud, you might need to use SASL_SSL. Non-authenticated options (SSL and PLAINTEXT) aren’t supported.

  5. For SASL Mechanism, enter the appropriate SASL mechanism property for your security protocol, such as GSSAPI, PLAIN, SCRAM-SHA-256, or SCRAM-SHA-512.

  6. For SASL Username and SASL Password, enter your Kafka authentication credentials.

  7. For Topic, enter the Kafka topic where you want Astra DB to export metrics. This topic must exist on your Kafka servers.

  8. For Bootstrap Servers, add one or more Kafka Bootstrap Server entries, such as pkc-9999e.us-east-1.aws.confluent.cloud:9092.

  9. Click Add Destination.

    The new destination appears in the Export Metrics section. To edit or delete this configuration, click more_vert More, and then select Edit or Delete.

  1. Create an application token that has Manage Metrics permission for the database for which you want to export metrics.

    If you plan to export metrics for multiple databases, you can customize each token’s scope. For example, you can create separate tokens for each database or create one token with the Organization Administrator role.

  2. Get the database ID. In the Astra Portal, go to your database, and then copy the Database ID, or use the DevOps API List databases endpoint to get multiple database IDs at once.

    curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases" \
    --header "Authorization: Bearer APPLICATION_TOKEN_WITH_ORG_ADMIN_ROLE"
  3. Use the DevOps API Get Telemetry Configuration endpoint to get the database’s existing metrics export configuration:

    curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN"

    A database’s telemetry configuration is a desired state list. You must always send the list of all export destinations when you add, remove, or change any destinations for a database. For example, if a database has five destinations, but your POST body contains only one destination, then all destinations are removed except the one included in your request.

  4. To add a Kafka metrics export destination to a database, send a POST request to the DevOps API Configure Telemetry endpoint. The request body must contain the database’s entire telemetry configuration, including all existing destinations, as returned by the Get Telemetry Configuration endpoint.

    curl -sS --location -X POST "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN" \
    --data '{
      "kafka": {
        "bootstrap_servers": [
          "BOOTSTRAP_SERVER_URL"
        ],
        "topic": "KAFKA_TOPIC",
        "sasl_mechanism": "SASL_MECHANISM_PROPERTY",
        "sasl_username": "KAFKA_USERNAME",
        "sasl_password": "KAFKA_PASSWORD",
        "security_protocol": "SASL_SECURITY_PROTOCOL"
      }
    }'

    Replace the following:

    • ASTRA_DB_ID: Your database ID.

    • APPLICATION_TOKEN: Your Astra DB application token.

    • BOOTSTRAP_SERVER_URL: A Kafka Bootstrap Server URL, such as pkc-9999e.us-east-1.aws.confluent.cloud:9092. You can provide a list of URLs.

    • KAFKA_TOPIC: The Kafka topic where you want Astra DB to export metrics. This topic must exist on your Kafka servers.

    • SASL_MECHANISM_PROPERTY: The appropriate SASL mechanism property for your security protocol, such as GSSAPI, PLAIN, SCRAM-SHA-256, or SCRAM-SHA-512. For more information, see Kafka Authentication Basics and SASL Authentication in Confluent Platform.

    • KAFKA_USERNAME and KAFKA_PASSWORD: Your Kafka authentication credentials.

    • SASL_SECURITY_PROTOCOL: If required, specify SASL_PLAINTEXT or SASL_SSL. This isn’t required for most Kafka installations. If you use hosted Kafka on Confluent Cloud, you might need to use SASL_SSL. Non-authenticated options (SSL and PLAINTEXT) aren’t supported.

  5. Use the Get Telemetry Configuration endpoint to review the updated configuration:

    curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN"
    Response

    A successful response includes information about all metrics export destinations for the specified database:

    {
      "kafka": {
        "bootstrap_servers": [
          "BOOTSTRAP_SERVER_URL"
        ],
        "topic": "astra_metrics_events",
        "sasl_mechanism": "PLAIN",
        "sasl_username": "KAFKA_USERNAME",
        "sasl_password": "KAFKA_PASSWORD",
        "security_protocol": "SASL_PLAINTEXT"
      }
    }
  6. To edit or delete a telemetry configuration, do the following:

    1. Use the Get Telemetry Configuration endpoint to get the database’s existing telemetry configuration.

    2. Send a POST request to the Configure Telemetry endpoint with the entire updated configuration. This is a desired state list; the POST request body replaces the database’s telemetry configuration.
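
When adding a destination, merge it into the existing configuration so the destinations already present survive the POST. A sketch with jq; the file name and values below are illustrative:

```shell
# Illustrative only: merge a new "kafka" destination into a saved telemetry
# configuration that already contains a Splunk destination.
cat > /tmp/telemetry.json <<'EOF'
{"splunk": {"endpoint": "https://http-inputs-COMPANY.splunkcloud.com"}}
EOF

jq '. + {"kafka": {
      "bootstrap_servers": ["BOOTSTRAP_SERVER_URL"],
      "topic": "KAFKA_TOPIC",
      "sasl_mechanism": "PLAIN",
      "sasl_username": "KAFKA_USERNAME",
      "sasl_password": "KAFKA_PASSWORD"
    }}' /tmp/telemetry.json > /tmp/telemetry_updated.json

# POST /tmp/telemetry_updated.json as the full desired state.
```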

Export metrics to Amazon CloudWatch

You can use the Astra Portal or the DevOps API to manage the metrics export destinations for your databases. To export metrics for multiple databases, you must configure an export destination for each database.

  • Astra Portal

  • DevOps API

  1. In the Astra Portal navigation menu, select your database, and then click Settings.

  2. In the Export Metrics section, click Add Destination.

  3. Select Amazon CloudWatch.

  4. For Access Key and Secret Key, enter your Amazon CloudWatch authentication credentials.

    The secret key user must have the cloudwatch:PutMetricData permission.

    Example IAM policy
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AstraDBMetrics",
          "Effect": "Allow",
          "Action": "cloudwatch:PutMetricData",
          "Resource": "*"
        }
      ]
    }
  5. For Region, select the region where you want to export the metrics. This doesn’t have to match your database’s region.

  6. Click Add Destination.

    The new destination appears in the Export Metrics section. To edit or delete this configuration, click more_vert More, and then select Edit or Delete.

  1. Create an application token that has Manage Metrics permission for the database for which you want to export metrics.

    If you plan to export metrics for multiple databases, you can customize each token’s scope. For example, you can create separate tokens for each database or create one token with the Organization Administrator role.

  2. Get the database ID. In the Astra Portal, go to your database, and then copy the Database ID, or use the DevOps API List databases endpoint to get multiple database IDs at once.

    curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases" \
    --header "Authorization: Bearer APPLICATION_TOKEN_WITH_ORG_ADMIN_ROLE"
  3. Use the DevOps API Get Telemetry Configuration endpoint to get the database’s existing metrics export configuration:

    curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN"

    A database’s telemetry configuration is a desired state list. You must always send the list of all export destinations when you add, remove, or change any destinations for a database. For example, if a database has five destinations, but your POST body contains only one destination, then all destinations are removed except the one included in your request.

  4. To add an Amazon CloudWatch metrics export destination to a database, send a POST request to the DevOps API Configure Telemetry endpoint. The request body must contain the database’s entire telemetry configuration, including all existing destinations, as returned by the Get Telemetry Configuration endpoint.

    curl -sS --location -X POST "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN" \
    --data '{
      "cloudwatch": {
        "access_key": "AWS_ACCESS_KEY",
        "secret_key": "AWS_SECRET_KEY",
        "region": "AWS_REGION"
      }
    }'

    Replace the following:

    • ASTRA_DB_ID: Your database ID.

    • APPLICATION_TOKEN: Your Astra DB application token.

    • AWS_ACCESS_KEY and AWS_SECRET_KEY: Your Amazon CloudWatch authentication credentials. The secret key user must have the cloudwatch:PutMetricData permission, for example:

      Example IAM policy
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "AstraDBMetrics",
            "Effect": "Allow",
            "Action": "cloudwatch:PutMetricData",
            "Resource": "*"
          }
        ]
      }
    • AWS_REGION: The region where you want to export the metrics. This doesn’t have to match your database’s region.

  5. Use the Get Telemetry Configuration endpoint to review the updated configuration:

    curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN"
    Response

    A successful response includes information about all metrics export destinations for the specified database:

    {
      "cloudwatch": {
        "access_key": "AWS_ACCESS_KEY",
        "secret_key": "AWS_SECRET_KEY",
        "region": "AWS_REGION"
      }
    }
  6. To edit or delete a telemetry configuration, do the following:

    1. Use the Get Telemetry Configuration endpoint to get the database’s existing telemetry configuration.

    2. Send a POST request to the Configure Telemetry endpoint with the entire updated configuration. This is a desired state list; the POST request body replaces the database’s telemetry configuration.

Export metrics to Splunk

You can use the Astra Portal or the DevOps API to manage the metrics export destinations for your databases. To export metrics for multiple databases, you must configure an export destination for each database.

  • Astra Portal

  • DevOps API

  1. In the Astra Portal navigation menu, select your database, and then click Settings.

  2. In the Export Metrics section, click Add Destination.

  3. Select Splunk.

  4. For Endpoint, enter the full HTTP address and path for the Splunk HTTP Event Collector (HEC) endpoint.

  5. For Index, enter the Splunk index where you want to export metrics.

  6. For Token, enter the Splunk HTTP Event Collector (HEC) token for Splunk authentication. This token must have permission to write to the specified index.

  7. For Source, enter the source for the events sent to the sink. If you don’t specify a source, the default is astradb.

  8. For Source Type, enter the type of events sent to the sink. If you don’t specify a source type, the default is astradb-metrics.

  9. Click Add Destination.

    The new destination appears in the Export Metrics section. To edit or delete this configuration, click more_vert More, and then select Edit or Delete.

  1. Create an application token that has Manage Metrics permission for the database for which you want to export metrics.

    If you plan to export metrics for multiple databases, you can customize each token’s scope. For example, you can create separate tokens for each database or create one token with the Organization Administrator role.

  2. Get the database ID. In the Astra Portal, go to your database, and then copy the Database ID, or use the DevOps API List databases endpoint to get multiple database IDs at once.

    curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases" \
    --header "Authorization: Bearer APPLICATION_TOKEN_WITH_ORG_ADMIN_ROLE"
  3. Use the DevOps API Get Telemetry Configuration endpoint to get the database’s existing metrics export configuration:

    curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN"

    A database’s telemetry configuration is a desired state list. You must always send the list of all export destinations when you add, remove, or change any destinations for a database. For example, if a database has five destinations, but your POST body contains only one destination, then all destinations are removed except the one included in your request.

  4. To add a Splunk metrics export destination to a database, send a POST request to the DevOps API Configure Telemetry endpoint. The request body must contain the database’s entire telemetry configuration, including all existing destinations, as returned by the Get Telemetry Configuration endpoint.

    curl -sS --location -X POST "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN" \
    --data '{
      "splunk": {
        "endpoint": "SPLUNK_HEC_ENDPOINT",
        "index": "SPLUNK_INDEX",
        "token": "SPLUNK_TOKEN",
        "source": "SPLUNK_SOURCE",
        "sourcetype": "SPLUNK_SOURCETYPE"
      }
    }'

    Replace the following:

    • ASTRA_DB_ID: Your database ID.

    • APPLICATION_TOKEN: Your Astra DB application token.

    • SPLUNK_HEC_ENDPOINT: The full HTTP address and path for the Splunk HTTP Event Collector (HEC) endpoint. You can get this from your Splunk Administrator.

    • SPLUNK_INDEX: The Splunk index where you want to export metrics.

    • SPLUNK_TOKEN: The Splunk HEC token for Splunk authentication. This token must have permission to write to the specified index.

    • SPLUNK_SOURCE: The source for the events sent to the sink. If unset, the default is astradb.

    • SPLUNK_SOURCETYPE: The type of events sent to the sink. If unset, the default is astradb-metrics.

  5. Use the Get Telemetry Configuration endpoint to review the updated configuration:

    curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN"
    Response

    A successful response includes information about all metrics export destinations for the specified database:

    {
      "splunk": {
        "endpoint": "https://http-inputs-COMPANY.splunkcloud.com",
        "index": "astra_db_metrics",
        "token": "SPLUNK_TOKEN",
        "source": "astradb",
        "sourcetype": "astradb-metrics"
      }
    }
  6. To edit or delete a telemetry configuration, do the following:

    1. Use the Get Telemetry Configuration endpoint to get the database’s existing telemetry configuration.

    2. Send a POST request to the Configure Telemetry endpoint with the entire updated configuration. This is a desired state list; the POST request body replaces the database’s telemetry configuration.

Export metrics to Apache Pulsar

You can use the Astra Portal or the DevOps API to manage the metrics export destinations for your databases. To export metrics for multiple databases, you must configure an export destination for each database.

  • Astra Portal

  • DevOps API

  1. In the Astra Portal navigation menu, select your database, and then click Settings.

  2. In the Export Metrics section, click Add Destination.

  3. Select Pulsar.

  4. For Endpoint, enter your Pulsar Broker URL.

  5. For Topic, enter the Pulsar topic where you want to publish metrics.

  6. Enter an Auth Name, select the Auth Strategy used by your Pulsar Broker, and then provide the required credentials:

    • Token: Provide a Pulsar authentication token.

    • Oauth2: Provide your OAuth2 Credentials URL and OAuth2 Issuer URL. Optionally, you can provide your OAuth2 Audience and OAuth2 Scope. For more information, see Authentication using OAuth 2.0 access tokens.

  7. Click Add Destination.

    The new destination appears in the Export Metrics section. To edit or delete this configuration, click more_vert More, and then select Edit or Delete.

  1. Create an application token that has Manage Metrics permission for the database for which you want to export metrics.

    If you plan to export metrics for multiple databases, you can customize each token’s scope. For example, you can create separate tokens for each database or create one token with the Organization Administrator role.

  2. Get the database ID. In the Astra Portal, go to your database, and then copy the Database ID, or use the DevOps API List databases endpoint to get multiple database IDs at once.

    curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases" \
    --header "Authorization: Bearer APPLICATION_TOKEN_WITH_ORG_ADMIN_ROLE"
  3. Use the DevOps API Get Telemetry Configuration endpoint to get the database’s existing metrics export configuration:

    curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN"

    A database’s telemetry configuration is a desired state list. You must always send the list of all export destinations when you add, remove, or change any destinations for a database. For example, if a database has five destinations, but your POST body contains only one destination, then all destinations are removed except the one included in your request.

  4. To add a Pulsar metrics export destination to a database, send a POST request to the DevOps API Configure Telemetry endpoint. The request body must contain the database’s entire telemetry configuration, including all existing destinations, as returned by the Get Telemetry Configuration endpoint.

    • Pulsar token

    • Pulsar OAuth2

    curl -sS --location -X POST "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN" \
    --data '{
      "pulsar": {
        "endpoint": "PULSAR_ENDPOINT",
        "topic": "PULSAR_TOPIC",
        "auth_strategy": "token",
        "token": "PULSAR_TOKEN",
        "auth_name": "PULSAR_AUTH_NAME"
      }
    }'

    Replace the following:

    • ASTRA_DB_ID: Your database ID.

    • APPLICATION_TOKEN: Your Astra DB application token.

    • PULSAR_ENDPOINT: Your Pulsar Broker URL.

    • PULSAR_TOPIC: The Pulsar topic where you want to publish metrics.

    • PULSAR_TOKEN: A Pulsar authentication token.

    • PULSAR_AUTH_NAME: A Pulsar authentication name.

    curl -sS --location -X POST "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN" \
    --data '{
      "pulsar": {
        "endpoint": "PULSAR_ENDPOINT",
        "topic": "PULSAR_TOPIC",
        "auth_strategy": "oauth2",
        "oauth2_credentials_url": "CREDENTIALS_URL",
        "oauth2_issuer_url": "ISSUER_URL"
      }
    }'

    Replace the following:

    • ASTRA_DB_ID: Your database ID.

    • APPLICATION_TOKEN: Your Astra DB application token.

    • PULSAR_ENDPOINT: Your Pulsar Broker URL.

    • PULSAR_TOPIC: The Pulsar topic where you want to publish metrics.

    • CREDENTIALS_URL and ISSUER_URL: Your Pulsar OAuth2 credentials and issuer URLs. Optionally, you can provide oauth2_audience and oauth2_scope. For more information, see Authentication using OAuth 2.0 access tokens.

  5. Use the Get Telemetry Configuration endpoint to review the updated configuration:

    curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN"

    A successful response includes information about all metrics export destinations for the specified database:

    {
      "pulsar": {
        "endpoint": "PULSAR_ENDPOINT_URL",
        "topic": "PULSAR_TOPIC",
        "auth_strategy": "token",
        "token": "PULSAR_TOKEN",
        "auth_name": "PULSAR_AUTH_NAME"
      }
    }
  6. To edit or delete a telemetry configuration, do the following:

    1. Use the Get Telemetry Configuration endpoint to get the database’s existing telemetry configuration.

    2. Send a POST request to the Configure Telemetry endpoint with the entire updated configuration. This is a desired state list; the POST request body replaces the database’s telemetry configuration.
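Because the telemetry configuration is a desired state list, a safe pattern for the steps above is fetch, merge, then POST. The following sketch shows only the merge step, using the jq CLI on sample values (the destination contents are illustrative, not real credentials):

```shell
# Existing configuration, as returned by the Get Telemetry Configuration
# endpoint (sample value; in practice, capture the GET response with curl).
existing='{"datadog":{"api_key":"DATADOG_API_KEY","site":"datadoghq.com"}}'

# New Pulsar destination to add (illustrative values).
new_pulsar='{"pulsar":{"endpoint":"PULSAR_ENDPOINT","topic":"PULSAR_TOPIC","auth_strategy":"token","token":"PULSAR_TOKEN","auth_name":"PULSAR_AUTH_NAME"}}'

# Merge so the POST body contains ALL destinations, old and new.
body="$(jq -cn --argjson a "$existing" --argjson b "$new_pulsar" '$a + $b')"

# The merged body keeps both destinations.
echo "$body" | jq -r 'keys | join(",")'   # -> datadog,pulsar
```

You would then send the merged body as the `--data` payload of the Configure Telemetry POST request; omitting an existing destination from the body would remove it.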

Export metrics to Datadog

You can use the Astra Portal or the DevOps API to manage the metrics export destinations for your databases. To export metrics for multiple databases, you must configure an export destination for each database.

  • Astra Portal

  • DevOps API

  1. In the Astra Portal navigation menu, select your database, and then click Settings.

  2. In the Export Metrics section, click Add Destination.

  3. Select Datadog.

  4. For API Key, enter your Datadog API key.

  5. Optional: For Site, enter the Datadog site parameter for the site where you want to export metrics. The site parameter format depends on your site URL, for example:

    • If your site URL begins with https://app., remove this entire prefix from your site URL to form the site parameter.

    • If your site URL begins with a different subdomain, such as https://us5 or https://ap1, remove only https:// from your site URL to form the site parameter.

  6. Click Add Destination.

    The new destination appears in the Export Metrics section. To edit or delete this configuration, click more_vert More, and then select Edit or Delete.
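The two site-parameter rules above can be sketched as a small shell function; the URLs below are illustrative examples, not an exhaustive list of Datadog sites:

```shell
# Derive the Datadog "site" parameter from a site URL.
site_param() {
  url="$1"
  case "$url" in
    https://app.*) echo "${url#https://app.}" ;;  # drop the whole https://app. prefix
    https://*)     echo "${url#https://}" ;;      # drop only https://
  esac
}

site_param "https://app.datadoghq.com"   # -> datadoghq.com
site_param "https://us5.datadoghq.com"   # -> us5.datadoghq.com
```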

  1. Create an application token that has the Manage Metrics permission for the database whose metrics you want to export.

    If you plan to export metrics for multiple databases, you can customize each token’s scope. For example, you can create separate tokens for each database or create one token with the Organization Administrator role.

  2. Get the database ID. In the Astra Portal, go to your database, and then copy the Database ID, or use the DevOps API List databases endpoint to get multiple database IDs at once.

    curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases" \
    --header "Authorization: Bearer APPLICATION_TOKEN_WITH_ORG_ADMIN_ROLE"
  3. Use the DevOps API Get Telemetry Configuration endpoint to get the database’s existing metrics export configuration:

    curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN"

    A database’s telemetry configuration is a desired state list. You must always send the list of all export destinations when you add, remove, or change any destinations for a database. For example, if a database has five destinations, but your POST body contains only one destination, then all destinations are removed except the one included in your request.

  4. To add a Datadog metrics export destination to a database, send a POST request to the DevOps API Configure Telemetry endpoint. The request body must contain the database’s entire telemetry configuration, including all existing destinations, as returned by the Get Telemetry Configuration endpoint.

    curl -sS --location -X POST "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN" \
    --data '{
      "datadog": {
        "api_key": "DATADOG_API_KEY",
        "site": "DATADOG_SITE"
      }
    }'

    Replace the following:

    • ASTRA_DB_ID: Your database ID.

    • APPLICATION_TOKEN: Your Astra DB application token.

    • DATADOG_API_KEY: Your Datadog API key.

    • DATADOG_SITE: The Datadog site parameter for the site where you want to export metrics. The site parameter format depends on your site URL, for example:

    • If your site URL begins with https://app., remove this entire prefix from your site URL to form the site parameter.

    • If your site URL begins with a different subdomain, such as https://us5 or https://ap1, remove only https:// from your site URL to form the site parameter.

  5. Use the Get Telemetry Configuration endpoint to review the updated configuration:

    curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN"

    A successful response includes information about all metrics export destinations for the specified database:

    {
      "datadog": {
        "api_key": "DATADOG_API_KEY",
        "site": "DATADOG_SITE"
      }
    }
  6. To edit or delete a telemetry configuration, do the following:

    1. Use the Get Telemetry Configuration endpoint to get the database’s existing telemetry configuration.

    2. Send a POST request to the Configure Telemetry endpoint with the entire updated configuration. This is a desired state list; the POST request body replaces the database’s telemetry configuration.
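For example, to delete one destination while keeping the others, POST the configuration without it. A sketch of building such a body with jq (sample values only):

```shell
# Current configuration with two destinations (sample values).
current='{"datadog":{"api_key":"KEY","site":"datadoghq.com"},"pulsar":{"endpoint":"E","topic":"T","auth_strategy":"token","token":"TOK","auth_name":"NAME"}}'

# Desired state without the Pulsar destination; POSTing this body removes
# Pulsar and keeps Datadog.
body="$(printf '%s' "$current" | jq -c 'del(.pulsar)')"

echo "$body" | jq -r 'keys | join(",")'   # -> datadog
```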

Metric definitions

Astra DB Serverless metrics are aggregated values calculated once per minute, and each metric has a rate1m and a rate5m variant. rate1m is the rate of increase or decrease over a one-minute interval, and rate5m is the rate of increase or decrease over a five-minute interval.
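As a rough illustration of the concept (a sketch, not the exact server-side computation), a one-minute rate is the counter increase divided by the interval length:

```shell
# Two hypothetical counter samples taken 60 seconds apart.
t0_value=1200
t1_value=1320

# rate1m: increase per second over the one-minute interval.
rate1m=$(( (t1_value - t0_value) / 60 ))
echo "$rate1m"   # -> 2
```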

Astra DB Serverless captures the following metrics for your databases:

astra_db_rate_limited_requests:rate1m
astra_db_rate_limited_requests:rate5m

A calculated rate of change for the number of failed operations due to an Astra DB rate limit. Using these rates, alert if the value is greater than 0 for more than 30 minutes.

astra_db_read_requests_failures:rate1m
astra_db_read_requests_failures:rate5m

A calculated rate of change for the number of failed reads. Using these rates, alert if the value is greater than 0: send a warning alert for small values and a high-priority alert for larger values, potentially defined as a percentage of read throughput.

astra_db_read_requests_timeouts:rate1m
astra_db_read_requests_timeouts:rate5m

A calculated rate of change for read timeouts. Timeouts happen when operations against the database take longer than the server side timeout. Using these rates, alert if the value is greater than 0.

astra_db_read_requests_unavailables:rate1m
astra_db_read_requests_unavailables:rate5m

A calculated rate of change for reads where there were not enough data service replicas available to complete the request. Using these rates, alert if the value is greater than 0.

astra_db_write_requests_failures:rate1m
astra_db_write_requests_failures:rate5m

A calculated rate of change for the number of failed writes. Apache Cassandra® drivers retry failed operations, but significant failures can be problematic. Using these rates, alert if the value is greater than 0: send a warning alert for small values and a high-priority alert for larger values, potentially defined as a percentage of write throughput.

astra_db_write_requests_timeouts:rate1m
astra_db_write_requests_timeouts:rate5m

A calculated rate of change for timeouts, which occur when operations take longer than the server side timeout. Using these rates, compare with astra_db_write_requests_failures.

astra_db_write_requests_unavailables:rate1m
astra_db_write_requests_unavailables:rate5m

A calculated rate of change for unavailable errors, which occur when the service is not available to service a particular request. Using these rates, compare with astra_db_write_requests_failures.

astra_db_range_requests_failures:rate1m
astra_db_range_requests_failures:rate5m

A calculated rate of change for the number of range reads that failed. Cassandra drivers retry failed operations, but significant failures can be problematic. Using these rates, alert if the value is greater than 0: send a warning alert for small values and a high-priority alert for larger values, potentially defined as a percentage of read throughput.

astra_db_range_requests_timeouts:rate1m
astra_db_range_requests_timeouts:rate5m

A calculated rate of change for timeouts, which are a subset of total failures. Use this metric to understand if failures are due to timeouts. Using these rates, compare with astra_db_range_requests_failures.

astra_db_range_requests_unavailables:rate1m
astra_db_range_requests_unavailables:rate5m

A calculated rate of change for unavailable errors, which are a subset of total failures. Use this metric to understand if failures are due to unavailable errors. Using these rates, compare with astra_db_range_requests_failures.

astra_db_write_latency_seconds:rate1m
astra_db_write_latency_seconds:rate5m

A calculated rate of change for write throughput. Alert based on your application service level objective (SLO).

astra_db_write_latency_seconds_QUANTILE:rate1m
astra_db_write_latency_seconds_QUANTILE:rate5m

A calculated rate of change for write latency. QUANTILE is a histogram quantile of 99, 95, 90, 75, or 50, such as astra_db_write_latency_seconds_P99:rate1m. Alert based on your application SLO.

astra_db_write_requests_mutation_size_bytes_QUANTILE:rate1m
astra_db_write_requests_mutation_size_bytes_QUANTILE:rate5m

A calculated rate of change for write size. QUANTILE is a histogram quantile of 99, 95, 90, 75, or 50, such as astra_db_write_requests_mutation_size_bytes_P99:rate5m.

astra_db_read_latency_seconds:rate1m
astra_db_read_latency_seconds:rate5m

A calculated rate of change for read throughput. Alert based on your application SLO.

astra_db_read_latency_seconds_QUANTILE:rate1m
astra_db_read_latency_seconds_QUANTILE:rate5m

A calculated rate of change for read latency. QUANTILE is a histogram quantile of 99, 95, 90, 75, or 50, such as astra_db_read_latency_seconds_P99:rate1m. Alert based on your application SLO.

astra_db_range_latency_seconds:rate1m
astra_db_range_latency_seconds:rate5m

A calculated rate of change for range read throughput. Alert based on your application SLO.

astra_db_range_latency_seconds_QUANTILE:rate1m
astra_db_range_latency_seconds_QUANTILE:rate5m

A calculated rate of change for range read latency. QUANTILE is a histogram quantile of 99, 95, 90, 75, or 50, such as astra_db_range_latency_seconds_P99. Alert based on your application SLO.

astra_db_read_counter_tombstone:rate1m
astra_db_read_counter_tombstone:rate5m

A calculated rate of change for the total number of tombstone reads. Tombstones are markers of deleted records or certain updates, such as collection updates. Monitoring the rate of tombstone reads can help identify potential performance impacts. Alert if the value increases significantly.

astra_db_read_failure_tombstone:rate1m
astra_db_read_failure_tombstone:rate5m

A calculated rate of change for the total number of read operations that failed due to hitting the tombstone guardrail failure threshold. This metric is critical for identifying issues that could lead to performance degradation or timeouts. Alert if the value is greater than 0.

astra_db_read_warnings_tombstone:rate1m
astra_db_read_warnings_tombstone:rate5m

A calculated rate of change for the total number of warnings generated due to getting close to the tombstone guardrail failure threshold. This metric helps identify scenarios where read operations are slowed or at risk of slowing. Alert on a significant increase, which can indicate potential read performance issues.

astra_db_cas_read_latency_seconds:rate1m
astra_db_cas_read_latency_seconds:rate5m

A calculated rate of change for the count of Compare and Set (CAS) read operations in Lightweight Transactions (LWTs), measuring the throughput of CAS reads. Monitoring this rate is important for understanding the load and performance of read operations that involve conditional checks. Alert on unusual changes, which could signal issues with data access patterns or performance bottlenecks.

astra_db_cas_read_latency_seconds_QUANTILE:rate1m
astra_db_cas_read_latency_seconds_QUANTILE:rate5m

A calculated rate of change for CAS read latency distributions in LWTs. QUANTILE is a histogram quantile of 99, 95, 90, 75, or 50, such as astra_db_cas_read_latency_seconds_P99. This metric helps you identify the latency of CAS reads, which is essential for diagnosing potential read performance issues and understanding the distribution of read operation latencies. Alert based on your application SLO.

astra_db_cas_write_latency_seconds:rate1m
astra_db_cas_write_latency_seconds:rate5m

A calculated rate of change for the count of Compare and Set (CAS) write operations in LWTs, measuring the throughput of CAS writes. CAS operations are used for atomic read-modify-write operations. Monitoring the rate of these operations helps in understanding the load and performance characteristics of CAS writes. Alert if the rate significantly deviates from expected patterns, indicating potential concurrency or contention issues.

astra_db_cas_write_latency_seconds_QUANTILE:rate1m
astra_db_cas_write_latency_seconds_QUANTILE:rate5m

A calculated rate of change for CAS write latency distributions in LWTs. QUANTILE is a histogram quantile of 99, 95, 90, 75, or 50, such as astra_db_cas_write_latency_seconds_P99. This metric provides insights into the latency characteristics of CAS write operations, helping identify latency spikes or trends over time. Alert based on your application SLO.

astra_db_cas_write_unfinished_commit:rate1m
astra_db_cas_write_unfinished_commit:rate5m

A calculated rate of change for the total number of CAS write operations in LWTs that did not finish committing. This metric is crucial for detecting issues in the atomicity of write operations, potentially caused by network or node failures. Alert if there’s an increase because this could impact data consistency.

astra_db_cas_write_contention_QUANTILE:rate1m
astra_db_cas_write_contention_QUANTILE:rate5m

A calculated rate of change for the distribution of CAS write contention in LWTs. QUANTILE is a histogram quantile of 99, 95, 90, 75, or 50, such as astra_db_cas_write_contention_P99. Contention during CAS write operations can significantly impact performance. This metric helps you understand and diagnose the levels of contention affecting CAS writes. Alert based on your application SLO.

astra_db_cas_read_unfinished_commit:rate1m
astra_db_cas_read_unfinished_commit:rate5m

A calculated rate of change for the total number of CAS read operations that encountered unfinished commits. Monitoring this metric is important for identifying issues with read consistency and potential data visibility problems. Alert if there’s an increase, indicating problems with the completion of write operations.

astra_db_cas_read_contention_QUANTILE:rate1m
astra_db_cas_read_contention_QUANTILE:rate5m

A calculated rate of change for the distribution of CAS read contention in LWTs. QUANTILE is a histogram quantile of 99, 95, 90, 75, or 50, such as astra_db_cas_read_contention_P99. Contention during CAS reads can indicate performance issues or high levels of concurrent access to the same data. Alert based on your application SLO.

astra_db_cas_read_requests_failures:rate1m
astra_db_cas_read_requests_failures:rate5m

A calculated rate of change for the total number of CAS read operations in LWTs that failed. Failures in CAS reads can signal issues with data access or consistency problems. Alert if the rate increases, indicating potential issues affecting the reliability of CAS reads.

astra_db_cas_read_requests_timeouts:rate1m
astra_db_cas_read_requests_timeouts:rate5m

A calculated rate of change for the number of CAS read operations in LWTs that timed out. Timeouts can indicate system overload or issues with data access patterns. Monitoring this metric helps in identifying and addressing potential bottlenecks. Alert if there’s an increase in timeouts.

astra_db_cas_read_requests_unavailables:rate1m
astra_db_cas_read_requests_unavailables:rate5m

A calculated rate of change for CAS read operations in LWTs that were unavailable. This metric is vital for understanding the availability of the system to handle CAS reads. An increase in unavailability can indicate cluster health issues. Alert if the rate increases.

astra_db_cas_write_requests_failures:rate1m
astra_db_cas_write_requests_failures:rate5m

A calculated rate of change for the total number of CAS write operations in LWTs that failed. Failure rates for CAS writes are critical for assessing the reliability and performance of write operations. Alert if there’s a significant increase in failures.

astra_db_cas_write_requests_timeouts:rate1m
astra_db_cas_write_requests_timeouts:rate5m

A calculated rate of change for the number of CAS write operations in LWTs that timed out. Write timeouts can significantly impact application performance and user experience. Monitoring this rate is crucial for maintaining system performance. Alert on an upward trend in timeouts.

astra_db_cas_write_requests_unavailables:rate1m
astra_db_cas_write_requests_unavailables:rate5m

A calculated rate of change for CAS write operations in LWTs that were unavailable. Increases in this metric can indicate problems with cluster capacity or health, impacting the ability to perform write operations. Alert if there’s an increase, as it could signal critical availability issues.

© 2024 DataStax