DataStax Astra DB Classic Documentation
Authenticating classic databases

This information applies only to classic databases.

Classic databases were created before 4 March 2021. These databases have fixed compute and storage capabilities and do not include the latest authentication version.

To authenticate your DataStax Astra DB classic database, generate an authorization token. You’ll use this token to authenticate with your database and make additional requests, such as creating tables or adding rows.

Use the authorization endpoint to generate the token. For the following examples, we’ll use cURL commands. If you’re making requests from your application, use the code samples described in the authorization endpoint details.

The authorization token is active for 30 minutes from the most recent request made. If no request has been made within 30 minutes, the authorization token expires.

  1. Open a browser, navigate to Astra DB, and log in.

  2. From your Dashboard page, select your database.

  3. Copy the Cluster ID of your database. You can also find the Cluster ID in the URL, which is the last UUID in the path: https://astra.datastax.com/org/{org-Id}/database/{databaseid}

  4. Add the Cluster ID as an environment variable with the following command:

  Set environment variable:

export ASTRA_CLUSTER_ID={databaseid}

  Example:

export ASTRA_CLUSTER_ID=b5285f63-8da5-4c6e-afd8-ade371a48795
  5. Copy the Region of your database (the region where your database is located).

  6. Add the Region as an environment variable with the following command:

  Set environment variable:

export ASTRA_CLUSTER_REGION={region}

  Example:

export ASTRA_CLUSTER_REGION=us-east1
  7. Add your username, keyspace, and password as environment variables with the following commands:

  Set environment variables:

export ASTRA_DB_USERNAME={username}
export ASTRA_DB_KEYSPACE={keyspace}
export ASTRA_DB_PASSWORD={password}

  Example:

export ASTRA_DB_USERNAME=john.smith@datastax.com
export ASTRA_DB_KEYSPACE=users
export ASTRA_DB_PASSWORD=P@ssw0rd
  8. Use printenv to confirm that the environment variables were exported.
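To check only the variables relevant to these steps rather than the full printenv output, you can filter for the ASTRA_ prefix; a minimal sketch (the sample value stands in for your real Cluster ID):

```shell
# Export one sample variable (use your real values from the earlier steps)
export ASTRA_CLUSTER_ID=b5285f63-8da5-4c6e-afd8-ade371a48795

# Show only the Astra-related variables to confirm they were exported
printenv | grep '^ASTRA_'
```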

  9. Run the entire cURL command. The request body takes the username and password from the ASTRA_DB_USERNAME and ASTRA_DB_PASSWORD environment variables you exported earlier.

    • Optional: Add a unique UUID for the authorization request in the x-cassandra-request-id header:

curl --request POST \
 --url https://${ASTRA_CLUSTER_ID}-${ASTRA_CLUSTER_REGION}.apps.astra.datastax.com/api/rest/v1/auth \
 --header 'Content-Type: application/json' \
 --header 'x-cassandra-request-id: {unique-UUID}' \
 --data '{"username":"'"$ASTRA_DB_USERNAME"'", "password":"'"$ASTRA_DB_PASSWORD"'"}'

Consider using an online UUID generator to quickly create a random UUID to pass with your authorization request.
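Alternatively, you can generate a version 4 UUID locally; a sketch (uuidgen ships with macOS and most Linux distributions, and the /proc file is a fallback on Linux systems without it):

```shell
# Generate a random UUID locally instead of using an online generator.
# uuidgen is available on macOS and most Linux distributions; the /proc
# file below is a Linux-only fallback.
REQUEST_ID=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)
echo "$REQUEST_ID"
```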

An authorization token is returned:

{"authToken": "37396a44-dcb8-4740-a97f-79f0dba47973"}
  10. Copy the value of the returned authToken and store it in the ASTRA_AUTHORIZATION_TOKEN environment variable:

  Set environment variable:

export ASTRA_AUTHORIZATION_TOKEN={authToken}

  Example:

export ASTRA_AUTHORIZATION_TOKEN=37396a44-dcb8-4740-a97f-79f0dba47973
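Instead of copying the token by hand, you can extract the authToken field from the JSON response and export it in one step; a portable sed sketch (jq is a cleaner alternative if it is installed). The sample RESPONSE value stands in for the output of the cURL command above:

```shell
# Sample response from the auth endpoint (replace with your real response)
RESPONSE='{"authToken": "37396a44-dcb8-4740-a97f-79f0dba47973"}'

# Pull out the value of the authToken field and export it
export ASTRA_AUTHORIZATION_TOKEN=$(
  printf '%s' "$RESPONSE" | sed -n 's/.*"authToken": *"\([^"]*\)".*/\1/p'
)
echo "$ASTRA_AUTHORIZATION_TOKEN"
```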

The authorization token must be included when making requests to your database, such as creating tables, adding rows, or modifying columns.
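For example, a request that lists keyspaces could pass the token in the x-cassandra-token header; a sketch, in which the /api/rest/v1/keyspaces path is an assumption based on the REST v1 API layout (check the API reference for the exact endpoint):

```shell
# Build the endpoint URL from the environment variables set earlier
URL="https://${ASTRA_CLUSTER_ID}-${ASTRA_CLUSTER_REGION}.apps.astra.datastax.com/api/rest/v1/keyspaces"

# Send the request; the token goes in the x-cassandra-token header
# (requires network access to your database)
curl --request GET \
  --url "$URL" \
  --header "x-cassandra-token: ${ASTRA_AUTHORIZATION_TOKEN}"
```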

  11. If the authorization token expires, generate a new one and update the ASTRA_AUTHORIZATION_TOKEN environment variable.

What’s next?

You can now use your token to connect to the Astra DB APIs. See more about the available APIs:

  • Document API

  • REST API

  • GraphQL CQL first API

  • GraphQL Schema first API
