Steps for new deployment

Here are high-level steps for implementing HCD Unified Authentication in a new deployment.

To implement authentication and authorization in an existing HCD environment, additional precautions and steps are required. See Steps for production environments.

Configure HCD Unified Authentication

  1. Ensure that the data required for logins and permission management is accessible in all datacenters. See Configure the security keyspaces replication factors.
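
    For example, a minimal sketch of this step, assuming the default system_auth keyspace and placeholder datacenter names DC1 and DC2 (the authoritative list of security keyspaces is in Configure the security keyspaces replication factors):

      # Authentication is not yet enabled, so cqlsh does not need credentials here.
      cqlsh -e "ALTER KEYSPACE system_auth
                WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};"

      # Repair on each node so existing rows pick up the new replication settings.
      nodetool repair system_auth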

  2. Configure the system settings. See Enable HCD Unified Authentication.
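
    The exact option names and classes are listed in Enable HCD Unified Authentication. Purely as an illustration, the open-source Cassandra equivalents live in cassandra.yaml; the path below assumes a package installation (matching the cassandra-env.sh path in step 4), and the HCD class names may differ:

      # Verify the authentication-related settings after editing cassandra.yaml.
      grep -E '^(authenticator|authorizer|role_manager):' /etc/hcd/cassandra/cassandra.yaml
      # Open-source Cassandra example values (HCD uses its own classes):
      #   authenticator: PasswordAuthenticator
      #   authorizer: CassandraAuthorizer
      #   role_manager: CassandraRoleManager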

  3. Configure the authentication and authorization methods (schemes).

  4. Configure JMX authentication: changes to cassandra-env.sh are required for nodetool and hcdtool to run against an authentication-enabled cluster. A sketch follows the file locations below.

    The location of the cassandra-env.sh file depends on the type of installation:

    • Package installations: /etc/hcd/cassandra/cassandra-env.sh

    • Tarball installations: <installation_location>/resources/cassandra/conf/cassandra-env.sh
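
    A sketch of the standard Cassandra approach for this step; the behavior described in the comments is that of the stock cassandra-env.sh, and the password-file path, user name, and password are placeholders:

      # In cassandra-env.sh: stop treating JMX as local-only and unauthenticated.
      LOCAL_JMX=no

      # With LOCAL_JMX=no, the stock script adds options such as:
      #   -Dcom.sun.management.jmxremote.authenticate=true
      #   -Dcom.sun.management.jmxremote.password.file=/etc/cassandra/jmxremote.password
      # Create that password file (format: "user password" per line).
      echo "jmxadmin jmx_password" | sudo tee /etc/cassandra/jmxremote.password
      # Restrict the file to the account that runs the database (user name is a placeholder).
      sudo chown cassandra: /etc/cassandra/jmxremote.password
      sudo chmod 400 /etc/cassandra/jmxremote.password
      # Depending on the JVM, the user may also need readwrite access in a jmxremote.access file.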

  5. Restart HCD. See Starting and stopping HCD.

    Nodes are vulnerable to malicious activity after the restart: anyone can access the system using the default cassandra account with the password cassandra. DataStax recommends isolating the cluster until the cassandra account has been disabled.
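
    After the restart, you can confirm that authentication is enforced by logging in with the default account and with the JMX credentials from step 4 (the values below are the placeholders used in that sketch):

      # The only database login that works at this point is the default account.
      cqlsh -u cassandra -p cassandra -e "LIST ROLES;"

      # JMX-based tools now require the JMX credentials.
      nodetool -u jmxadmin -pw jmx_password status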

  6. Set up your own root account, then disable or drop the default cassandra account. See Add a superuser login.

    Using the default cassandra account can impact performance, because its requests, including login, execute with consistency level QUORUM. DataStax recommends using this account only to create your root account.
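
    A sketch of this step in CQL via cqlsh; the superuser name and passwords are placeholders:

      # Create your own superuser while connected as the default account.
      cqlsh -u cassandra -p cassandra -e "
        CREATE ROLE admin WITH SUPERUSER = true AND LOGIN = true AND PASSWORD = 'Str0ngPassw0rd';"

      # Reconnect as the new superuser and neutralize the default account ...
      cqlsh -u admin -p 'Str0ngPassw0rd' -e "
        ALTER ROLE cassandra WITH SUPERUSER = false AND LOGIN = false;"
      # ... or drop it entirely:
      #   cqlsh -u admin -p 'Str0ngPassw0rd' -e "DROP ROLE cassandra;"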

  7. Create roles that map to users in the configured schemes, and grant permissions that allow users to access database resources such as keyspaces and tables. See Set up logins and users. An example sketch follows the notes below.

    • Use the latest DataStax-certified drivers in all applications that connect to HCD Unified Authentication-enabled transactional nodes. HCD drivers support all the features of the Cassandra drivers and add support for multiple authentication methods and externally managed role assignment. See DataStax drivers.

    • Spark component limitations: HCD provides internal authentication support for connecting Spark to HCD transactional nodes, not for authenticating between Spark components.
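
    Continuing the placeholder names from step 6, a sketch that creates a login role for an application and grants it read and write access to one keyspace (role, password, and keyspace names are placeholders):

      cqlsh -u admin -p 'Str0ngPassw0rd' -e "
        CREATE ROLE app_user WITH LOGIN = true AND PASSWORD = 'app_secret';
        GRANT SELECT ON KEYSPACE my_keyspace TO app_user;
        GRANT MODIFY ON KEYSPACE my_keyspace TO app_user;"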

Next steps

After enabling authentication and authorization, supply credentials when running tools such as nodetool and cqlsh.
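
For example, using the placeholder credentials from the sketches above:

  # Database tools take role credentials.
  cqlsh -u admin -p 'Str0ngPassw0rd'

  # JMX-based tools such as nodetool take the JMX credentials from step 4.
  nodetool -u jmxadmin -pw jmx_password status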
