Monitor database metrics
You can view health metrics for your databases, such as latency and throughput. These metrics provide insights into database performance and workload distribution.
View metrics in the Astra Portal
In the Astra Portal, go to your Astra DB Classic database, and then click the Health tab to explore metrics such as total requests, errors, latency, lightweight transactions (LWTs), and tombstones.
Read and write latencies display in nanoseconds. To view more granular metrics, hover over a specific time in the graph. Gaps in read/write metrics indicate periods when there were no requests.
To change the reporting period, use the Time Frame menu. The default time range is 10 minutes. For multi-region databases, use the Region menu to inspect data for a different region.
To view the dashboard in a full window, click Cycle View Mode, press Esc, click Share, and then open the URL in a new browser tab or window.
Request Overview widget
- Requests Combined: Displays request rates (req/s) for different types of requests. The rates are summed over all database coordinators.
- Request Errors: Displays the request error rates (req/s) for different types of requests. The rates are summed over all database coordinators.
  - Timeouts indicate that the queries are taking too long to complete.
  - Unavailables indicate that the coordinator did not have enough alive data nodes to work with.
  - Failures can be caused by queries violating certain guardrails or other error conditions. For more information, see Astra DB Classic database limits.
- Average Request Complexity: Provides a high-level view of the average complexity of database requests, measured as the ratio of the average request units to the average request rate. A higher complexity score indicates that a request is more resource-intensive and expensive in terms of read/write activity. This metric combines write operations, including regular writes, counter writes, and index writes, to provide a comprehensive view of write request complexity.
Writes widget
- Write Latency: Displays coordinator write request latency quantiles on the left y-axis and the total write request rate (req/s) on the right y-axis.
- Write Size Distribution: Displays different write request mutation size quantiles. Large mutations can cause performance problems and might even be rejected.
Reads widget
- Read Latency: Displays coordinator read request latency quantiles on the left y-axis and the total read request rate (req/s) on the right y-axis.
- Range Latency: Displays coordinator range request latency quantiles on the left y-axis and the total range request rate (req/s) on the right y-axis.
Stargate widget
This widget includes Connected Clients, which reports the number of CQL connections for the database.
Lightweight Transactions (LWTs) widget
- Compare and Set (CAS) Write Latency: Displays coordinator CAS write request latency quantiles on the left y-axis and the total CAS write request rate (req/s) on the right y-axis.
- CAS Write Contention: Displays coordinator CAS write request contention quantiles on the left y-axis and the number of unfinished commits on the right y-axis.
  A high number of contended requests negatively affects request latency and causes timeouts. Reduce the number of concurrent requests to the same partition. Unfinished commits cause increased latency; reducing contention can help reduce the number of unfinished commits.
- CAS Read Latency: Displays coordinator CAS read request latency quantiles on the left y-axis and the total CAS read request rate (req/s) on the right y-axis.
- CAS Read Contention: Displays coordinator CAS read request contention quantiles on the left y-axis and the number of unfinished commits on the right y-axis.
  A high number of contended requests negatively affects request latency and causes timeouts. Reduce the number of concurrent requests to the same partition. Unfinished commits cause increased latency; reducing contention can help reduce the number of unfinished commits.
Tombstones widget
- Tombstones Scanned / s: Displays the number of tombstones being scanned per keyspace, table, and second. A large number of tombstones can cause increased latency or query failures.
- Tombstone Guardrail Warnings / s: Displays the number of queries exceeding the tombstone guardrail warning threshold per keyspace, table, and second. For more information, see Astra DB Classic database limits.
- Tombstone Guardrail Failures / s: Displays the number of queries exceeding the tombstone guardrail failure threshold per keyspace, table, and second. For more information, see Astra DB Classic database limits.
Export metrics to third-party services
If your organization is on the Pay As You Go or Enterprise plan, your active databases can forward health metrics to a third-party observability service.
If you use a private endpoint, exported metrics traffic doesn’t use the private connection.
Export metrics to Prometheus
You can use the Astra Portal or the DevOps API to manage the metrics export destinations for your databases. To export metrics for multiple databases, you must configure an export destination for each database.
You must use Prometheus v2.25 or later. DataStax recommends Prometheus v2.33 or later. You must enable remote-write-receiver in the destination app. For more information, see Remote storage integrations and <remote_write>.
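For example, a minimal sketch of enabling the receiver on a self-managed Prometheus server; the prometheus binary location and prometheus.yml path are assumptions about your installation:

  # Prometheus v2.33 and later: dedicated flag
  ./prometheus --config.file=prometheus.yml --web.enable-remote-write-receiver

  # Prometheus v2.25 through v2.32: feature flag
  ./prometheus --config.file=prometheus.yml --enable-feature=remote-write-receiver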
Astra Portal
- In the Astra Portal navigation menu, select your database, and then click Settings.
- In the Export Metrics section, click Add Destination.
- Select Prometheus.
- For Prometheus Strategy, select your authentication method, and then provide the required credentials:
  - Bearer: Provide a Prometheus Token and Prometheus Endpoint.
  - Basic: Provide a Prometheus Username, Prometheus Password, and Prometheus Endpoint.
- Click Add Destination.
  The new destination appears in the Export Metrics section. To edit or delete this configuration, click More, and then select Edit or Delete.
DevOps API
- Create an application token that has the Manage Metrics permission for the database whose metrics you want to export.
  If you plan to export metrics for multiple databases, you can customize each token’s scope. For example, you can create separate tokens for each database or create one token with the Organization Administrator role.
- Get the database ID. In the Astra Portal, go to your database, and then copy the Database ID, or use the DevOps API List databases endpoint to get multiple database IDs at once.

  curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases" \
    --header "Authorization: Bearer APPLICATION_TOKEN_WITH_ORG_ADMIN_ROLE"

- Use the DevOps API Get Telemetry Configuration endpoint to get the database’s existing metrics export configuration:

  curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN"

  A database’s telemetry configuration is a desired state list. You must always send the list of all export destinations when you add, remove, or change any destinations for a database. For example, if a database has five destinations, but your POST body contains only one destination, then all destinations are removed except the one included in your request.
- To add a Prometheus metrics export destination to a database, send a POST request to the DevOps API Configure Telemetry endpoint. The request body must contain the database’s entire telemetry configuration, including all existing destinations, as returned by the Get Telemetry Configuration endpoint.

  Prometheus token:

  curl -sS --location -X POST "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN" \
    --data '{ "prometheus_remote": { "endpoint": "PROMETHEUS_ENDPOINT", "auth_strategy": "bearer", "token": "PROMETHEUS_TOKEN" } }'

  Replace the following:
  - ASTRA_DB_ID: Your database ID.
  - APPLICATION_TOKEN: Your Astra DB application token.
  - PROMETHEUS_ENDPOINT: Your Prometheus endpoint URL.
  - PROMETHEUS_TOKEN: Your Prometheus authentication token.

  Prometheus username and password:

  curl -sS --location -X POST "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN" \
    --data '{ "prometheus_remote": { "endpoint": "PROMETHEUS_ENDPOINT", "auth_strategy": "basic", "password": "PROMETHEUS_PASSWORD", "user": "PROMETHEUS_USERNAME" } }'

  Replace the following:
  - ASTRA_DB_ID: Your database ID.
  - APPLICATION_TOKEN: Your Astra DB application token.
  - PROMETHEUS_ENDPOINT: Your Prometheus endpoint URL.
  - PROMETHEUS_PASSWORD and PROMETHEUS_USERNAME: Your Prometheus authentication credentials.

- Use the Get Telemetry Configuration endpoint to review the updated configuration:

  curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN"

  A successful response includes information about all metrics export destinations for the specified database:

  { "prometheus_remote": { "endpoint": "https://prometheus.example.com/api/prom/push", "auth_strategy": "basic", "user": "PROMETHEUS_USERNAME", "password": "PROMETHEUS_PASSWORD" } }

- To edit or delete a telemetry configuration, do the following:
  - Use the Get Telemetry Configuration endpoint to get the database’s existing telemetry configuration.
  - Send a POST request to the Configure Telemetry endpoint with the entire updated configuration. This is a desired state list; the POST request body replaces the database’s telemetry configuration.
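Because the configuration is a desired state list, a database that exports to more than one destination must have every destination present in each POST body. The following is a purely illustrative sketch that assumes a database already exporting to Datadog and now adding Prometheus; the per-destination fields match the examples in this guide, but confirm the exact combined shape against the Get Telemetry Configuration response for your own database before sending it:

  curl -sS --location -X POST "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN" \
    --data '{
      "datadog": { "api_key": "DATADOG_API_KEY", "site": "DATADOG_SITE" },
      "prometheus_remote": { "endpoint": "PROMETHEUS_ENDPOINT", "auth_strategy": "bearer", "token": "PROMETHEUS_TOKEN" }
    }'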
Export metrics to Apache Kafka
You can use the Astra Portal or the DevOps API to manage the metrics export destinations for your databases. To export metrics for multiple databases, you must configure an export destination for each database.
This configuration uses SASL authentication. For more information about telemetry with Kafka, see Kafka metrics overview and Kafka Monitoring.
Astra Portal
- In the Astra Portal navigation menu, select your database, and then click Settings.
- In the Export Metrics section, click Add Destination.
- Select Kafka.
- If required, select a Kafka Security Protocol:
  - SASL_PLAINTEXT: SASL authenticated, non-encrypted channel
  - SASL_SSL: SASL authenticated, encrypted channel
  This isn’t required for most Kafka installations. If you use hosted Kafka on Confluent Cloud, you might need to use SASL_SSL. Non-authenticated options (SSL and PLAINTEXT) aren’t supported.
- For SASL Mechanism, enter the appropriate SASL mechanism property for your security protocol, such as GSSAPI, PLAIN, SCRAM-SHA-256, or SCRAM-SHA-512. For more information, see Kafka Authentication Basics and SASL Authentication in Confluent Platform.
- For SASL Username and SASL Password, enter your Kafka authentication credentials.
- For Topic, enter the Kafka topic where you want Astra DB to export metrics. This topic must exist on your Kafka servers.
- For Bootstrap Servers, add one or more Kafka Bootstrap Server entries, such as pkc-9999e.us-east-1.aws.confluent.cloud:9092.
- Click Add Destination.
  The new destination appears in the Export Metrics section. To edit or delete this configuration, click More, and then select Edit or Delete.
DevOps API
- Create an application token that has the Manage Metrics permission for the database whose metrics you want to export.
  If you plan to export metrics for multiple databases, you can customize each token’s scope. For example, you can create separate tokens for each database or create one token with the Organization Administrator role.
- Get the database ID. In the Astra Portal, go to your database, and then copy the Database ID, or use the DevOps API List databases endpoint to get multiple database IDs at once.

  curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases" \
    --header "Authorization: Bearer APPLICATION_TOKEN_WITH_ORG_ADMIN_ROLE"

- Use the DevOps API Get Telemetry Configuration endpoint to get the database’s existing metrics export configuration:

  curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN"

  A database’s telemetry configuration is a desired state list. You must always send the list of all export destinations when you add, remove, or change any destinations for a database. For example, if a database has five destinations, but your POST body contains only one destination, then all destinations are removed except the one included in your request.
- To add a Kafka metrics export destination to a database, send a POST request to the DevOps API Configure Telemetry endpoint. The request body must contain the database’s entire telemetry configuration, including all existing destinations, as returned by the Get Telemetry Configuration endpoint.

  curl -sS --location -X POST "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN" \
    --data '{ "kafka": { "bootstrap_servers": [ "BOOTSTRAP_SERVER_URL" ], "topic": "KAFKA_TOPIC", "sasl_mechanism": "SASL_MECHANISM_PROPERTY", "sasl_username": "KAFKA_USERNAME", "sasl_password": "KAFKA_PASSWORD", "security_protocol": "SASL_SECURITY_PROTOCOL" } }'

  Replace the following:
  - ASTRA_DB_ID: Your database ID.
  - APPLICATION_TOKEN: Your Astra DB application token.
  - BOOTSTRAP_SERVER_URL: A Kafka Bootstrap Server URL, such as pkc-9999e.us-east-1.aws.confluent.cloud:9092. You can provide a list of URLs.
  - KAFKA_TOPIC: The Kafka topic where you want Astra DB to export metrics. This topic must exist on your Kafka servers.
  - SASL_MECHANISM_PROPERTY: The appropriate SASL mechanism property for your security protocol, such as GSSAPI, PLAIN, SCRAM-SHA-256, or SCRAM-SHA-512. For more information, see Kafka Authentication Basics and SASL Authentication in Confluent Platform.
  - KAFKA_USERNAME and KAFKA_PASSWORD: Your Kafka authentication credentials.
  - SASL_SECURITY_PROTOCOL: If required, specify SASL_PLAINTEXT or SASL_SSL. This isn’t required for most Kafka installations. If you use hosted Kafka on Confluent Cloud, you might need to use SASL_SSL. Non-authenticated options (SSL and PLAINTEXT) aren’t supported.

- Use the Get Telemetry Configuration endpoint to review the updated configuration:

  curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN"

  A successful response includes information about all metrics export destinations for the specified database:

  { "kafka": { "bootstrap_servers": [ "BOOTSTRAP_SERVER_URL" ], "topic": "astra_metrics_events", "sasl_mechanism": "PLAIN", "sasl_username": "KAFKA_USERNAME", "sasl_password": "KAFKA_PASSWORD", "security_protocol": "SASL_PLAINTEXT" } }

- To edit or delete a telemetry configuration, do the following:
  - Use the Get Telemetry Configuration endpoint to get the database’s existing telemetry configuration.
  - Send a POST request to the Configure Telemetry endpoint with the entire updated configuration. This is a desired state list; the POST request body replaces the database’s telemetry configuration.
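As a quick check that the destination works, here is a minimal sketch using the standard Kafka CLI tools. It assumes the tools are on your PATH, that a hypothetical client.properties file carries SASL settings matching your cluster, and that the topic is the astra_metrics_events example shown above:

  # Confirm the export topic exists on the cluster
  kafka-topics.sh --bootstrap-server BOOTSTRAP_SERVER_URL \
    --command-config client.properties --describe --topic astra_metrics_events

  # Watch a few exported metric records arrive
  kafka-console-consumer.sh --bootstrap-server BOOTSTRAP_SERVER_URL \
    --consumer.config client.properties --topic astra_metrics_events --max-messages 5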
Export metrics to Amazon CloudWatch
You can use the Astra Portal or the DevOps API to manage the metrics export destinations for your databases. To export metrics for multiple databases, you must configure an export destination for each database.
Astra Portal
- In the Astra Portal navigation menu, select your database, and then click Settings.
- In the Export Metrics section, click Add Destination.
- Select Amazon CloudWatch.
- For Access Key and Secret Key, enter your Amazon CloudWatch authentication credentials.
  The secret key user must have the cloudwatch:PutMetricData permission. Example IAM policy:

  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "AstraDBMetrics",
        "Effect": "Allow",
        "Action": "cloudwatch:PutMetricData",
        "Resource": "*"
      }
    ]
  }

- For Region, select the region where you want to export the metrics. This doesn’t have to match your database’s region.
- Click Add Destination.
  The new destination appears in the Export Metrics section. To edit or delete this configuration, click More, and then select Edit or Delete.
DevOps API
- Create an application token that has the Manage Metrics permission for the database whose metrics you want to export.
  If you plan to export metrics for multiple databases, you can customize each token’s scope. For example, you can create separate tokens for each database or create one token with the Organization Administrator role.
- Get the database ID. In the Astra Portal, go to your database, and then copy the Database ID, or use the DevOps API List databases endpoint to get multiple database IDs at once.

  curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases" \
    --header "Authorization: Bearer APPLICATION_TOKEN_WITH_ORG_ADMIN_ROLE"

- Use the DevOps API Get Telemetry Configuration endpoint to get the database’s existing metrics export configuration:

  curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN"

  A database’s telemetry configuration is a desired state list. You must always send the list of all export destinations when you add, remove, or change any destinations for a database. For example, if a database has five destinations, but your POST body contains only one destination, then all destinations are removed except the one included in your request.
- To add an Amazon CloudWatch metrics export destination to a database, send a POST request to the DevOps API Configure Telemetry endpoint. The request body must contain the database’s entire telemetry configuration, including all existing destinations, as returned by the Get Telemetry Configuration endpoint.

  curl -sS --location -X POST "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN" \
    --data '{ "cloudwatch": { "access_key": "AWS_ACCESS_KEY", "secret_key": "AWS_SECRET_KEY", "region": "AWS_REGION" } }'

  Replace the following:
  - ASTRA_DB_ID: Your database ID.
  - APPLICATION_TOKEN: Your Astra DB application token.
  - AWS_ACCESS_KEY and AWS_SECRET_KEY: Your Amazon CloudWatch authentication credentials. The secret key user must have the cloudwatch:PutMetricData permission, for example:

    { "Version": "2012-10-17", "Statement": [ { "Sid": "AstraDBMetrics", "Effect": "Allow", "Action": "cloudwatch:PutMetricData", "Resource": "*" } ] }

  - AWS_REGION: The region where you want to export the metrics. This doesn’t have to match your database’s region.

- Use the Get Telemetry Configuration endpoint to review the updated configuration:

  curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN"

  A successful response includes information about all metrics export destinations for the specified database:

  { "cloudwatch": { "access_key": "AWS_ACCESS_KEY", "secret_key": "AWS_SECRET_KEY", "region": "AWS_REGION" } }

- To edit or delete a telemetry configuration, do the following:
  - Use the Get Telemetry Configuration endpoint to get the database’s existing telemetry configuration.
  - Send a POST request to the Configure Telemetry endpoint with the entire updated configuration. This is a desired state list; the POST request body replaces the database’s telemetry configuration.
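A minimal sketch of creating that IAM policy and attaching it with the AWS CLI. It assumes the policy JSON above is saved as astra-db-metrics-policy.json, that astra-metrics-user is the IAM user whose access key you give to Astra DB, and that you substitute your own account ID in the policy ARN:

  # Create the policy from the JSON document shown above
  aws iam create-policy --policy-name AstraDBMetrics \
    --policy-document file://astra-db-metrics-policy.json

  # Attach it to the IAM user whose access key and secret key you provide to Astra DB
  aws iam attach-user-policy --user-name astra-metrics-user \
    --policy-arn arn:aws:iam::123456789012:policy/AstraDBMetrics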
Export metrics to Splunk
You can use the Astra Portal or the DevOps API to manage the metrics export destinations for your databases. To export metrics for multiple databases, you must configure an export destination for each database.
Astra Portal
- In the Astra Portal navigation menu, select your database, and then click Settings.
- In the Export Metrics section, click Add Destination.
- Select Splunk.
- For Endpoint, enter the full HTTP address and path for the Splunk HTTP Event Collector (HEC) endpoint.
- For Index, enter the Splunk index where you want to export metrics.
- For Token, enter the Splunk HEC token for Splunk authentication. This token must have permission to write to the specified index.
- For Source, enter the source for the events sent to the sink. If you don’t specify a source, the default is astradb.
- For Source Type, enter the type of events sent to the sink. If you don’t specify a source type, the default is astradb-metrics.
- Click Add Destination.
  The new destination appears in the Export Metrics section. To edit or delete this configuration, click More, and then select Edit or Delete.
DevOps API
- Create an application token that has the Manage Metrics permission for the database whose metrics you want to export.
  If you plan to export metrics for multiple databases, you can customize each token’s scope. For example, you can create separate tokens for each database or create one token with the Organization Administrator role.
- Get the database ID. In the Astra Portal, go to your database, and then copy the Database ID, or use the DevOps API List databases endpoint to get multiple database IDs at once.

  curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases" \
    --header "Authorization: Bearer APPLICATION_TOKEN_WITH_ORG_ADMIN_ROLE"

- Use the DevOps API Get Telemetry Configuration endpoint to get the database’s existing metrics export configuration:

  curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN"

  A database’s telemetry configuration is a desired state list. You must always send the list of all export destinations when you add, remove, or change any destinations for a database. For example, if a database has five destinations, but your POST body contains only one destination, then all destinations are removed except the one included in your request.
- To add a Splunk metrics export destination to a database, send a POST request to the DevOps API Configure Telemetry endpoint. The request body must contain the database’s entire telemetry configuration, including all existing destinations, as returned by the Get Telemetry Configuration endpoint.

  curl -sS --location -X POST "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN" \
    --data '{ "splunk": { "endpoint": "SPLUNK_HEC_ENDPOINT", "index": "SPLUNK_INDEX", "token": "SPLUNK_TOKEN", "source": "SPLUNK_SOURCE", "sourcetype": "SPLUNK_SOURCETYPE" } }'

  Replace the following:
  - ASTRA_DB_ID: Your database ID.
  - APPLICATION_TOKEN: Your Astra DB application token.
  - SPLUNK_HEC_ENDPOINT: The full HTTP address and path for the Splunk HTTP Event Collector (HEC) endpoint. You can get this from your Splunk administrator.
  - SPLUNK_INDEX: The Splunk index where you want to export metrics.
  - SPLUNK_TOKEN: The Splunk HEC token for Splunk authentication. This token must have permission to write to the specified index.
  - SPLUNK_SOURCE: The source for the events sent to the sink. If unset, the default is astradb.
  - SPLUNK_SOURCETYPE: The type of events sent to the sink. If unset, the default is astradb-metrics.

- Use the Get Telemetry Configuration endpoint to review the updated configuration:

  curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN"

  A successful response includes information about all metrics export destinations for the specified database:

  { "splunk": { "endpoint": "https://http-inputs-COMPANY.splunkcloud.com", "index": "astra_db_metrics", "token": "SPLUNK_TOKEN", "source": "astradb", "sourcetype": "astradb-metrics" } }

- To edit or delete a telemetry configuration, do the following:
  - Use the Get Telemetry Configuration endpoint to get the database’s existing telemetry configuration.
  - Send a POST request to the Configure Telemetry endpoint with the entire updated configuration. This is a desired state list; the POST request body replaces the database’s telemetry configuration.
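To confirm that the token and index accept events before you add the destination, here is a minimal sketch against the standard HEC event endpoint. The host and the /services/collector/event path are assumptions about your HEC setup; adjust them to match the endpoint you configure above:

  # Send a test event to the HEC endpoint with the same token and index
  curl -sS "https://http-inputs-COMPANY.splunkcloud.com/services/collector/event" \
    --header "Authorization: Splunk SPLUNK_TOKEN" \
    --data '{ "event": "astra-db-metrics connectivity check", "index": "SPLUNK_INDEX", "sourcetype": "astradb-metrics" }'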
Export metrics to Apache Pulsar
You can use the Astra Portal or the DevOps API to manage the metrics export destinations for your databases. To export metrics for multiple databases, you must configure an export destination for each database.
Astra Portal
- In the Astra Portal navigation menu, select your database, and then click Settings.
- In the Export Metrics section, click Add Destination.
- Select Pulsar.
- For Endpoint, enter your Pulsar Broker URL.
- For Topic, enter the Pulsar topic where you want to publish metrics.
- Enter an Auth Name, select the Auth Strategy used by your Pulsar Broker, and then provide the required credentials:
  - Token: Provide a Pulsar authentication token.
  - Oauth2: Provide your OAuth2 Credentials URL and OAuth2 Issuer URL. Optionally, you can provide your OAuth2 Audience and OAuth2 Scope. For more information, see Authentication using OAuth 2.0 access tokens.
- Click Add Destination.
  The new destination appears in the Export Metrics section. To edit or delete this configuration, click More, and then select Edit or Delete.
DevOps API
- Create an application token that has the Manage Metrics permission for the database whose metrics you want to export.
  If you plan to export metrics for multiple databases, you can customize each token’s scope. For example, you can create separate tokens for each database or create one token with the Organization Administrator role.
- Get the database ID. In the Astra Portal, go to your database, and then copy the Database ID, or use the DevOps API List databases endpoint to get multiple database IDs at once.

  curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases" \
    --header "Authorization: Bearer APPLICATION_TOKEN_WITH_ORG_ADMIN_ROLE"

- Use the DevOps API Get Telemetry Configuration endpoint to get the database’s existing metrics export configuration:

  curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN"

  A database’s telemetry configuration is a desired state list. You must always send the list of all export destinations when you add, remove, or change any destinations for a database. For example, if a database has five destinations, but your POST body contains only one destination, then all destinations are removed except the one included in your request.
- To add a Pulsar metrics export destination to a database, send a POST request to the DevOps API Configure Telemetry endpoint. The request body must contain the database’s entire telemetry configuration, including all existing destinations, as returned by the Get Telemetry Configuration endpoint.

  Pulsar token:

  curl -sS --location -X POST "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN" \
    --data '{ "pulsar": { "endpoint": "PULSAR_ENDPOINT", "topic": "PULSAR_TOPIC", "auth_strategy": "token", "token": "PULSAR_TOKEN", "auth_name": "PULSAR_AUTH_NAME" } }'

  Replace the following:
  - ASTRA_DB_ID: Your database ID.
  - APPLICATION_TOKEN: Your Astra DB application token.
  - PULSAR_ENDPOINT: Your Pulsar Broker URL.
  - PULSAR_TOPIC: The Pulsar topic where you want to publish metrics.
  - PULSAR_TOKEN: A Pulsar authentication token.
  - PULSAR_AUTH_NAME: A Pulsar authentication name.

  Pulsar OAuth2:

  curl -sS --location -X POST "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN" \
    --data '{ "pulsar": { "endpoint": "PULSAR_ENDPOINT", "topic": "PULSAR_TOPIC", "auth_strategy": "oauth2", "oauth2_credentials_url": "CREDENTIALS_URL", "oauth2_issuer_url": "ISSUER_URL" } }'

  Replace the following:
  - ASTRA_DB_ID: Your database ID.
  - APPLICATION_TOKEN: Your Astra DB application token.
  - PULSAR_ENDPOINT: Your Pulsar Broker URL.
  - PULSAR_TOPIC: The Pulsar topic where you want to publish metrics.
  - CREDENTIALS_URL and ISSUER_URL: Your Pulsar OAuth2 credentials and issuer URLs. Optionally, you can provide oauth_audience and oauth2_scope. For more information, see Authentication using OAuth 2.0 access tokens.

- Use the Get Telemetry Configuration endpoint to review the updated configuration:

  curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN"

  A successful response includes information about all metrics export destinations for the specified database:

  { "pulsar": { "endpoint": "PULSAR_ENDPOINT_URL", "topic": "PULSAR_TOPIC", "auth_strategy": "token", "token": "PULSAR_TOKEN", "auth_name": "PULSAR_AUTH_NAME" } }

- To edit or delete a telemetry configuration, do the following:
  - Use the Get Telemetry Configuration endpoint to get the database’s existing telemetry configuration.
  - Send a POST request to the Configure Telemetry endpoint with the entire updated configuration. This is a desired state list; the POST request body replaces the database’s telemetry configuration.
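To check that metrics are landing on the topic, here is a minimal sketch with the pulsar-client CLI. It assumes the client is already configured (client.conf) with your broker URL and the same token or OAuth2 credentials; the subscription name is a placeholder:

  # Read a few exported metric messages from the configured topic
  pulsar-client consume PULSAR_TOPIC -s astra-metrics-check -n 5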
Export metrics to Datadog
You can use the Astra Portal or the DevOps API to manage the metrics export destinations for your databases. To export metrics for multiple databases, you must configure an export destination for each database.
Astra Portal
- In the Astra Portal navigation menu, select your database, and then click Settings.
- In the Export Metrics section, click Add Destination.
- Select Datadog.
- For API Key, enter your Datadog API key.
- Optional: For Site, enter the Datadog site parameter for the site where you want to export metrics. The site parameter format depends on your site URL, for example:
  - If your site URL begins with https://app., remove this entire clause from your site URL to form the site parameter.
  - If your site URL begins with a different subdomain, such as https://us5 or https://ap1, remove only https:// from your site URL to form the site parameter.
- Click Add Destination.
  The new destination appears in the Export Metrics section. To edit or delete this configuration, click More, and then select Edit or Delete.
DevOps API
- Create an application token that has the Manage Metrics permission for the database whose metrics you want to export.
  If you plan to export metrics for multiple databases, you can customize each token’s scope. For example, you can create separate tokens for each database or create one token with the Organization Administrator role.
- Get the database ID. In the Astra Portal, go to your database, and then copy the Database ID, or use the DevOps API List databases endpoint to get multiple database IDs at once.

  curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases" \
    --header "Authorization: Bearer APPLICATION_TOKEN_WITH_ORG_ADMIN_ROLE"

- Use the DevOps API Get Telemetry Configuration endpoint to get the database’s existing metrics export configuration:

  curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN"

  A database’s telemetry configuration is a desired state list. You must always send the list of all export destinations when you add, remove, or change any destinations for a database. For example, if a database has five destinations, but your POST body contains only one destination, then all destinations are removed except the one included in your request.
- To add a Datadog metrics export destination to a database, send a POST request to the DevOps API Configure Telemetry endpoint. The request body must contain the database’s entire telemetry configuration, including all existing destinations, as returned by the Get Telemetry Configuration endpoint.

  curl -sS --location -X POST "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN" \
    --data '{ "datadog": { "api_key": "DATADOG_API_KEY", "site": "DATADOG_SITE" } }'

  Replace the following:
  - ASTRA_DB_ID: Your database ID.
  - APPLICATION_TOKEN: Your Astra DB application token.
  - DATADOG_API_KEY: Your Datadog API key.
  - DATADOG_SITE: The Datadog site parameter for the site where you want to export metrics. The site parameter format depends on your site URL, for example:
    - If your site URL begins with https://app., remove this entire clause from your site URL to form the site parameter.
    - If your site URL begins with a different subdomain, such as https://us5 or https://ap1, remove only https:// from your site URL to form the site parameter.

- Use the Get Telemetry Configuration endpoint to review the updated configuration:

  curl -sS --location -X GET "https://api.astra.datastax.com/v2/databases/ASTRA_DB_ID/telemetry/metrics" \
    --header "Authorization: Bearer APPLICATION_TOKEN"

  A successful response includes information about all metrics export destinations for the specified database:

  { "datadog": { "api_key": "DATADOG_API_KEY", "site": "DATADOG_SITE" } }

- To edit or delete a telemetry configuration, do the following:
  - Use the Get Telemetry Configuration endpoint to get the database’s existing telemetry configuration.
  - Send a POST request to the Configure Telemetry endpoint with the entire updated configuration. This is a desired state list; the POST request body replaces the database’s telemetry configuration.
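A minimal sketch of validating the key against your site before adding the destination, using Datadog’s key validation endpoint; DATADOG_SITE is the same site parameter described above, for example datadoghq.com or us5.datadoghq.com:

  # Returns {"valid":true} when the API key is accepted by that site
  curl -sS "https://api.DATADOG_SITE/api/v1/validate" \
    --header "DD-API-KEY: DATADOG_API_KEY"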
Metric definitions
Astra DB Classic metrics are aggregated values calculated once per minute, and each metric has a rate1m and a rate5m variant. rate1m is the rate of increase or decrease over a one-minute interval, and rate5m is the rate of increase or decrease over a five-minute interval.
Astra DB Classic captures the following metrics for your databases:
astra_db_rate_limited_requests:rate1m, astra_db_rate_limited_requests:rate5m
- A calculated rate of change for the number of failed operations due to an Astra DB rate limit. Using these rates, alert if the value is greater than 0 for more than 30 minutes.
astra_db_read_requests_failures:rate1m, astra_db_read_requests_failures:rate5m
- A calculated rate of change for the number of failed reads. Using these rates, alert if the value is greater than 0. Warn on low amounts, and send a high alert on larger amounts, potentially determined as a percentage of read throughput.
astra_db_read_requests_timeouts:rate1m, astra_db_read_requests_timeouts:rate5m
- A calculated rate of change for read timeouts. Timeouts happen when operations against the database take longer than the server-side timeout. Using these rates, alert if the value is greater than 0.
astra_db_read_requests_unavailables:rate1m, astra_db_read_requests_unavailables:rate5m
- A calculated rate of change for reads where there were not enough data service replicas available to complete the request. Using these rates, alert if the value is greater than 0.
astra_db_write_requests_failures:rate1m, astra_db_write_requests_failures:rate5m
- A calculated rate of change for the number of failed writes. Apache Cassandra® drivers retry failed operations, but significant failures can be problematic. Using these rates, alert if the value is greater than 0. Warn on low amounts, and send a high alert on larger amounts, potentially determined as a percentage of write throughput.
astra_db_write_requests_timeouts:rate1m, astra_db_write_requests_timeouts:rate5m
- A calculated rate of change for timeouts, which occur when operations take longer than the server-side timeout. Using these rates, compare with astra_db_write_requests_failures.
astra_db_write_requests_unavailables:rate1m, astra_db_write_requests_unavailables:rate5m
- A calculated rate of change for unavailable errors, which occur when the service is not available to serve a particular request. Using these rates, compare with astra_db_write_requests_failures.
astra_db_range_requests_failures:rate1m, astra_db_range_requests_failures:rate5m
- A calculated rate of change for the number of range reads that failed. Cassandra drivers retry failed operations, but significant failures can be problematic. Using these rates, alert if the value is greater than 0. Warn on low amounts, and send a high alert on larger amounts, potentially determined as a percentage of read throughput.
astra_db_range_requests_timeouts:rate1m, astra_db_range_requests_timeouts:rate5m
- A calculated rate of change for timeouts, which are a subset of total failures. Use this metric to understand whether failures are due to timeouts. Using these rates, compare with astra_db_range_requests_failures.
astra_db_range_requests_unavailables:rate1m, astra_db_range_requests_unavailables:rate5m
- A calculated rate of change for unavailable errors, which are a subset of total failures. Use this metric to understand whether failures are due to unavailable errors. Using these rates, compare with astra_db_range_requests_failures.
astra_db_write_latency_seconds:rate1m, astra_db_write_latency_seconds:rate5m
- A calculated rate of change for write throughput. Alert based on your application service level objective (SLO).
astra_db_write_latency_seconds_QUANTILE:rate1m, astra_db_write_latency_seconds_QUANTILE:rate5m
- A calculated rate of change for write latency. QUANTILE is a histogram quantile of 99, 95, 90, 75, or 50, such as astra_db_write_latency_seconds_P99:rate1m. Alert based on your application SLO.
astra_db_write_requests_mutation_size_bytes_QUANTILE:rate1m, astra_db_write_requests_mutation_size_bytes_QUANTILE:rate5m
- A calculated rate of change for write size. QUANTILE is a histogram quantile of 99, 95, 90, 75, or 50, such as astra_db_write_requests_mutation_size_bytes_P99:rate5m.
astra_db_read_latency_seconds:rate1m, astra_db_read_latency_seconds:rate5m
- A calculated rate of change for read latency. Alert based on your application SLO.
astra_db_read_latency_seconds_QUANTILE:rate1m, astra_db_read_latency_seconds_QUANTILE:rate5m
- A calculated rate of change for read latency. QUANTILE is a histogram quantile of 99, 95, 90, 75, or 50, such as astra_db_read_latency_seconds_P99:rate1m. Alert based on your application SLO.
astra_db_range_latency_seconds:rate1m, astra_db_range_latency_seconds:rate5m
- A calculated rate of change for range read throughput. Alert based on your application SLO.
astra_db_range_latency_seconds_QUANTILE:rate1m, astra_db_range_latency_seconds_QUANTILE:rate5m
- A calculated rate of change for range read latency. QUANTILE is a histogram quantile of 99, 95, 90, 75, or 50, such as astra_db_range_latency_seconds_P99. Alert based on your application SLO.
astra_db_read_counter_tombstone:rate1m, astra_db_read_counter_tombstone:rate5m
- A calculated rate of change for the total number of tombstone reads. Tombstones are markers of deleted records or certain updates, such as collection updates. Monitoring the rate of tombstone reads can help identify potential performance impacts. Alert if the value increases significantly.
astra_db_read_failure_tombstone:rate1m, astra_db_read_failure_tombstone:rate5m
- A calculated rate of change for the total number of read operations that failed due to hitting the tombstone guardrail failure threshold. This metric is critical for identifying issues that could lead to performance degradation or timeouts. Alert if the value is greater than 0.
astra_db_read_warnings_tombstone:rate1m, astra_db_read_warnings_tombstone:rate5m
- A calculated rate of change for the total number of warnings generated by approaching the tombstone guardrail failure threshold. This metric helps identify scenarios where read operations are slowed or at risk of slowing. Alert on a significant increase, which can indicate potential read performance issues.
astra_db_cas_read_latency_seconds:rate1m, astra_db_cas_read_latency_seconds:rate5m
- A calculated rate of change for the count of Compare and Set (CAS) read operations in Lightweight Transactions (LWTs), measuring the throughput of CAS reads. Monitoring this rate is important for understanding the load and performance of read operations that involve conditional checks. Alert on unusual changes, which could signal issues with data access patterns or performance bottlenecks.
astra_db_cas_read_latency_seconds_QUANTILE:rate1m, astra_db_cas_read_latency_seconds_QUANTILE:rate5m
- A calculated rate of change for CAS read latency distributions in LWTs. QUANTILE is a histogram quantile of 99, 95, 90, 75, or 50, such as astra_db_cas_read_latency_seconds_P99. This metric helps you identify the latency of CAS reads, which is essential for diagnosing potential read performance issues and understanding the distribution of read operation latencies. Alert based on your application SLO.
astra_db_cas_write_latency_seconds:rate1m, astra_db_cas_write_latency_seconds:rate5m
- A calculated rate of change for the count of Compare and Set (CAS) write operations in LWTs, measuring the throughput of CAS writes. CAS operations are used for atomic read-modify-write operations. Monitoring the rate of these operations helps in understanding the load and performance characteristics of CAS writes. Alert if the rate significantly deviates from expected patterns, indicating potential concurrency or contention issues.
astra_db_cas_write_latency_seconds_QUANTILE:rate1m, astra_db_cas_write_latency_seconds_QUANTILE:rate5m
- A calculated rate of change for CAS write latency distributions in LWTs. QUANTILE is a histogram quantile of 99, 95, 90, 75, or 50, such as astra_db_cas_write_latency_seconds_P99. This metric provides insights into the latency characteristics of CAS write operations, helping identify latency spikes or trends over time. Alert based on your application SLO.
astra_db_cas_write_unfinished_commit:rate1m, astra_db_cas_write_unfinished_commit:rate5m
- A calculated rate of change for the total number of CAS write operations in LWTs that did not finish committing. This metric is crucial for detecting issues in the atomicity of write operations, potentially caused by network or node failures. Alert if there’s an increase because this could impact data consistency.
astra_db_cas_write_contention_QUANTILE:rate1m, astra_db_cas_write_contention_QUANTILE:rate5m
- A calculated rate of change for the distribution of CAS write contention in LWTs. QUANTILE is a histogram quantile of 99, 95, 90, 75, or 50, such as astra_db_cas_write_contention_P99. Contention during CAS write operations can significantly impact performance. This metric helps you understand and diagnose the levels of contention affecting CAS writes. Alert based on your application SLO.
astra_db_cas_read_unfinished_commit:rate1m, astra_db_cas_read_unfinished_commit:rate5m
- A calculated rate of change for the total number of CAS read operations that encountered unfinished commits. Monitoring this metric is important for identifying issues with read consistency and potential data visibility problems. Alert if there’s an increase, indicating problems with the completion of write operations.
astra_db_cas_read_contention_QUANTILE:rate1m, astra_db_cas_read_contention_QUANTILE:rate5m
- A calculated rate of change for the distribution of CAS read contention in LWTs. QUANTILE is a histogram quantile of 99, 95, 90, 75, or 50, such as astra_db_cas_read_contention_P99. Contention during CAS reads can indicate performance issues or high levels of concurrent access to the same data. Alert based on your application SLO.
astra_db_cas_read_requests_failures:rate1m, astra_db_cas_read_requests_failures:rate5m
- A calculated rate of change for the total number of CAS read operations in LWTs that failed. Failures in CAS reads can signal issues with data access or consistency problems. Alert if the rate increases, indicating potential issues affecting the reliability of CAS reads.
astra_db_cas_read_requests_timeouts:rate1m, astra_db_cas_read_requests_timeouts:rate5m
- A calculated rate of change for the number of CAS read operations in LWTs that timed out. Timeouts can indicate system overload or issues with data access patterns. Monitoring this metric helps in identifying and addressing potential bottlenecks. Alert if there’s an increase in timeouts.
astra_db_cas_read_requests_unavailables:rate1m, astra_db_cas_read_requests_unavailables:rate5m
- A calculated rate of change for CAS read operations in LWTs that were unavailable. This metric is vital for understanding the availability of the system to handle CAS reads. An increase in unavailability can indicate cluster health issues. Alert if the rate increases.
astra_db_cas_write_requests_failures:rate1m, astra_db_cas_write_requests_failures:rate5m
- A calculated rate of change for the total number of CAS write operations in LWTs that failed. Failure rates for CAS writes are critical for assessing the reliability and performance of write operations. Alert if there’s a significant increase in failures.
astra_db_cas_write_requests_timeouts:rate1m, astra_db_cas_write_requests_timeouts:rate5m
- A calculated rate of change for the number of CAS write operations in LWTs that timed out. Write timeouts can significantly impact application performance and user experience. Monitoring this rate is crucial for maintaining system performance. Alert on an upward trend in timeouts.
astra_db_cas_write_requests_unavailables:rate1m, astra_db_cas_write_requests_unavailables:rate5m
- A calculated rate of change for CAS write operations in LWTs that were unavailable. Increases in this metric can indicate problems with cluster capacity or health, impacting the ability to perform write operations. Alert if there’s an increase, as it could signal critical availability issues.
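If you export these metrics to Prometheus, the alerting guidance above translates directly into PromQL. A minimal sketch, assuming a Prometheus server at localhost:9090 that receives the remote-write stream; a production setup would put the same expression in an alerting rule with a 30-minute for: clause rather than querying it ad hoc:

  # Ad-hoc check of the rate-limit guidance: any series returned is currently above 0
  curl -sG "http://localhost:9090/api/v1/query" \
    --data-urlencode 'query=astra_db_rate_limited_requests:rate5m > 0'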