View and export metrics

Select a database and region to view health metrics, including latency and throughput. These metrics provide insight into database performance and how workloads are distributed.

  • Total latency by percentile: A p50 value indicates that 50% of database requests were processed faster than this value, and p99 indicates the same for 99% of requests.

  • Total throughput: The number of requests the database processes in a given amount of time, measured in operations per second. The key metrics separate read and write throughput, and also provide the average and the combined total.
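To make these two metrics concrete, here is a small self-contained sketch, independent of any Astra API, that computes p50/p99 latency and read/write throughput from a list of request samples. The sample data is invented purely for illustration.

```python
# Illustration only: compute p50/p99 latency and per-second throughput
# from (timestamp_s, kind, latency_ms) request samples. Sample data is
# invented; real values come from the Astra Portal graphs.

def percentile(values, p):
    """Nearest-rank percentile: the value that p% of samples fall at or below."""
    ordered = sorted(values)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

samples = [
    (0.1, "read", 4.0), (0.4, "write", 9.0), (0.9, "read", 5.0),
    (1.2, "read", 6.0), (1.7, "write", 12.0), (1.9, "read", 4.5),
]

latencies = [ms for _, _, ms in samples]
p50 = percentile(latencies, 50)   # half the requests completed faster than this
p99 = percentile(latencies, 99)   # 99% completed faster than this

window_s = 2.0  # observation window covering the samples above
reads = sum(1 for _, kind, _ in samples if kind == "read")
writes = sum(1 for _, kind, _ in samples if kind == "write")
read_tput = reads / window_s      # read operations per second
write_tput = writes / window_s    # write operations per second
total_tput = read_tput + write_tput
```

A gap between p50 and p99 that widens over time is usually the first sign of a hot partition or an overloaded region.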

How you view your metrics depends on your database type.

Vector metrics

  1. Open your DataStax Astra Portal and select Databases in the left navigation.

  2. Select the database name to view its details. The metrics appear on the Overview tab under Key Metrics.

  3. Hover anywhere in the graph to display the metrics for a particular point in time.

  4. Click the arrow in the box to select the time period to display, from the last five minutes to the last hour.

Non-vector metrics

  1. Open your DataStax Astra Portal and select Databases in the left navigation.

  2. Select the database name to view its details. The metrics appear on the Health tab.

  3. Hover anywhere in the graph to display the metrics for a particular point in time.

  4. Click the arrow in the box to select the time period to display, from the last five minutes to the last hour.

Gaps in the read/write metrics are normal; they indicate periods when no read/write requests are happening.

Export your metrics

Forward your Astra DB metrics to an external third-party metrics system, known as the destination system.

  1. With your Astra Portal open, navigate to your database.

  2. Ensure that your database is in an Active status, and then select Settings from the dashboard.

  3. Scroll down to the Export Metrics section, and then select your destination. The options for each destination are described below.

Prometheus

Bearer: Provide your Prometheus Token value and Prometheus Endpoint on the resulting form. The form does not display username/password properties for this strategy.

Basic: Provide your Prometheus Username, Password, and Endpoint on the resulting form. The form does not display a token property for this strategy.

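The practical difference between the two strategies is the HTTP Authorization header that the export sends. The sketch below is illustrative only, not Astra's implementation; the strategy names mirror the form labels, and the header formats are standard HTTP authentication, not Astra-specific.

```python
import base64

def prometheus_auth_header(strategy, token=None, username=None, password=None):
    """Build the Authorization header each strategy implies.
    Bearer -> 'Authorization: Bearer <token>'
    Basic  -> 'Authorization: Basic base64(username:password)'"""
    if strategy == "Bearer":
        return {"Authorization": f"Bearer {token}"}
    if strategy == "Basic":
        creds = base64.b64encode(f"{username}:{password}".encode()).decode()
        return {"Authorization": f"Basic {creds}"}
    raise ValueError(f"unknown strategy: {strategy}")
```

This is why the form shows only the token field for Bearer and only the username/password fields for Basic: each strategy consumes a disjoint set of credentials.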

Kafka

SASL Mechanism - Your Kafka Simple Authentication and Security Layer (SASL) mechanism for authentication and data security. Possible values: GSSAPI, PLAIN, SCRAM-SHA-256, or SCRAM-SHA-512. For background information, see the Confluent Kafka - Authentication Methods Overview documentation.

SASL Username - Existing username for Kafka authentication.

SASL Password - Existing password for Kafka authentication.

Topic - Kafka topic to which Astra DB exports the metrics; you must create this topic on your server(s).

Bootstrap Servers - One or more Kafka Bootstrap Server entries. Example: pkc-9999e.us-east-1.aws.confluent.cloud:9092

Kafka Security Protocol (optional) - Most Kafka installations do not require this setting for these metrics to connect. Users of hosted Kafka on Confluent Cloud may need to set this property to SASL_SSL. Valid options are:

  • SASL_PLAINTEXT - SASL-authenticated, non-encrypted channel.

  • SASL_SSL - SASL-authenticated, encrypted channel.

Non-authenticated options (SSL and PLAINTEXT) are not supported. Specify the appropriate, related SASL Mechanism property. For more information, see the Confluent Cloud security tutorial.
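Once the export is running, you can confirm messages are arriving with any Kafka client. The guarded section below is a sketch using the third-party kafka-python package (an assumption; any client works), with a placeholder topic name, server, and credentials. The small helper simply splits a Bootstrap Servers entry into host and port.

```python
def parse_bootstrap_server(entry):
    """Split a 'host:port' Bootstrap Servers entry; the port defaults to 9092."""
    host, _, port = entry.partition(":")
    return host, int(port) if port else 9092

if __name__ == "__main__":
    # Assumption: kafka-python is installed and the export is already configured.
    # Placeholders: replace the topic, server, and credentials with your own.
    from kafka import KafkaConsumer  # third-party package: kafka-python
    consumer = KafkaConsumer(
        "astra-metrics",                  # the Topic you created for the export
        bootstrap_servers=["pkc-9999e.us-east-1.aws.confluent.cloud:9092"],
        security_protocol="SASL_SSL",     # Confluent Cloud typically needs this
        sasl_mechanism="PLAIN",
        sasl_plain_username="my-user",
        sasl_plain_password="my-password",
        consumer_timeout_ms=10_000,       # stop polling after 10 idle seconds
    )
    for message in consumer:
        print(message.topic, message.value[:80])
```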

AWS CloudWatch

Access Key: Your AWS access key. For example, AKIAIOSFODNN7EXAMPLE. Get the value from your account in the AWS console.

Secret Key: Your AWS secret key. For example, wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY. Get the value from your account in the AWS console.

Region: You can enter the same AWS region selected for the Astra DB serverless database or select a different AWS region. For example, use this option to view metrics across two regions together.
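After the export is configured, the metrics land in CloudWatch and can be read back with any AWS SDK. In this sketch, `build_query` only assembles parameters for CloudWatch's `get_metric_statistics` call; the namespace and metric name are placeholders (check the CloudWatch Metrics console for the names Astra actually publishes), and the guarded section would run the query with boto3.

```python
from datetime import datetime, timedelta, timezone

def build_query(namespace, metric_name, minutes=60):
    """Assemble parameters for CloudWatch get_metric_statistics."""
    end = datetime.now(timezone.utc)
    return {
        "Namespace": namespace,      # placeholder: check the console for the real name
        "MetricName": metric_name,   # placeholder as well
        "StartTime": end - timedelta(minutes=minutes),
        "EndTime": end,
        "Period": 300,               # 5-minute buckets
        "Statistics": ["Average"],
    }

if __name__ == "__main__":
    import boto3  # assumption: boto3 is installed and AWS credentials are configured
    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
    resp = cloudwatch.get_metric_statistics(**build_query("Astra", "WriteLatency"))
    for point in resp["Datapoints"]:
        print(point["Timestamp"], point["Average"])
```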


Splunk

Endpoint: The full HTTP address and path for the Splunk HTTP Event Collector (HEC) endpoint. If unsure of the address, contact your Splunk Administrator.

Index: The Splunk index to which you want to write metrics. Ensure that the Splunk token has permission to write to this index.

Token: The Splunk HTTP Event Collector (HEC) token for Splunk authentication.

Source (optional): You can enter the source of events sent to this sink. If unset, your Astra Portal sets this field to a default value: astradb.

Source Type (optional): You can enter the source type of events sent to this sink. If unset, the API sets it to a default value: astradb-metrics.
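The HEC endpoint speaks plain HTTPS, so you can sanity-check the token and index before enabling the export. The helper below builds the standard `Authorization: Splunk <token>` header and a minimal event payload; the guarded part would POST it with the third-party requests package. The endpoint URL and token shown are placeholders.

```python
import json

def hec_request(token, index, event, source="astradb"):
    """Build headers and body for a Splunk HTTP Event Collector POST."""
    headers = {"Authorization": f"Splunk {token}"}
    body = json.dumps({"index": index, "source": source, "event": event})
    return headers, body

if __name__ == "__main__":
    import requests  # assumption: requests is installed
    headers, body = hec_request("00000000-0000-0000-0000-000000000000",
                                "astra_metrics", {"check": "hello"})
    # Placeholder endpoint; ask your Splunk administrator for the real one.
    resp = requests.post("https://splunk.example.com:8088/services/collector/event",
                         headers=headers, data=body, timeout=10)
    print(resp.status_code, resp.text)
```

A 403 response here usually means the token lacks write permission on the index, which is the same failure the export itself would hit.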

Pulsar/Streaming

Endpoint: The URL of your Pulsar Broker.

Topic: The Pulsar topic to which you publish telemetry.

Auth Name: The authentication name.

Auth Strategy: The authentication strategy used by your Pulsar broker. From the drop-down menu, select token or oauth2.

If the Auth Strategy is token, provide the token for Pulsar authentication.

If the Auth Strategy is oauth2, provide the required Oauth2 Credentials URL and the Oauth2 Issuer URL properties.

You can also provide (optionally) the Oauth2 Audience and Oauth2 Scope. For related information, see Authentication using OAuth 2.0 access tokens.
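The same token-vs-OAuth2 split appears in Pulsar's own client libraries, which can help when verifying credentials before saving the destination. The guarded sketch below uses the third-party pulsar-client package (an assumption); the broker URL and credential values are placeholders, and the OAuth2 parameter key names should be verified against your broker's documentation.

```python
import json

def oauth2_params(credentials_url, issuer_url, audience=None, scope=None):
    """Assemble the JSON parameter string pulsar-client's OAuth2 auth expects.
    Key names here are assumptions mirroring the form labels; verify them."""
    params = {"issuer_url": issuer_url, "private_key": credentials_url}
    if audience:
        params["audience"] = audience
    if scope:
        params["scope"] = scope
    return json.dumps(params)

if __name__ == "__main__":
    import pulsar  # assumption: pulsar-client is installed
    # Token strategy (placeholder broker URL and token):
    client = pulsar.Client("pulsar://broker.example.com:6650",
                           authentication=pulsar.AuthenticationToken("my-token"))
    client.close()
    # OAuth2 strategy (placeholders throughout):
    client = pulsar.Client("pulsar://broker.example.com:6650",
                           authentication=pulsar.AuthenticationOauth2(
                               oauth2_params("file:///creds.json",
                                             "https://issuer.example.com")))
    client.close()
```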

Datadog

API Key: The API key that allows your Astra DB metrics export to authenticate with the Datadog API.

Site: The Datadog site to which the exported Astra DB metrics are sent.

For details, see this Authentication topic in the Datadog documentation.
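You can check an API key against the chosen site before saving the destination: Datadog's documented validation endpoint (`/api/v1/validate`, authenticated with the `DD-API-KEY` header) returns 200 for a valid key. A hedged sketch, with a placeholder key:

```python
def validate_url(site):
    """Datadog's key-validation endpoint for a given site, e.g. 'datadoghq.com'."""
    return f"https://api.{site}/api/v1/validate"

if __name__ == "__main__":
    import requests  # assumption: requests is installed
    resp = requests.get(validate_url("datadoghq.com"),
                        headers={"DD-API-KEY": "your-api-key"}, timeout=10)
    print(resp.status_code)  # 200 means the key is valid; 403 means it is not
```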

  4. Select Add Destination. The destination appears in the Export Metrics section.

Editing and deleting your destination

Once added, you can edit or delete a destination as needed.

  • To edit your destination, click the ellipsis (…), select Edit, make your changes, and then select Update Destination.

  • To delete your destination, click the ellipsis (…), select Delete, and then confirm by selecting Delete Destination.


© 2024 DataStax | Privacy policy | Terms of use
