Configuring and using internal authentication

Internal authentication is based on Cassandra-controlled login accounts and passwords.

Like object permission management, which uses internal authorization, internal authentication is supported for use with dse commands and the dsetool utility. Internal authentication is also supported on the following clients when you provide a user name and password to start up the client:

  • Astyanax
  • cassandra-cli
  • cqlsh
  • Drivers
  • Hector
  • pycassa
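
For example, cqlsh accepts the credentials on its command line through the -u and -p options; the account name jdoe and the password shown here are placeholders:

```
cqlsh -u jdoe -p s3cr3t
```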

Internal authentication stores user names and bcrypt-hashed passwords in the system_auth.credentials table.
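
A superuser can create such a login account with the CQL CREATE USER statement; the account name and password below are placeholders, and only the bcrypt hash of the password is written to system_auth.credentials:

```
cqlsh> CREATE USER jdoe WITH PASSWORD 's3cr3t' NOSUPERUSER;
```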

Spark component limitations

DataStax Enterprise provides internal authentication support for some Hadoop tools and for connecting Spark to Cassandra; it does not authenticate Spark components to each other.

Hadoop tool authentication limitations

The following authentication limitations apply when using Hadoop tools:
  • Internal authentication is not supported for Mahout.
  • Using internal authentication to run the hadoop jar command is not supported.

    The hadoop jar command accepts only the JAR file name as an option, and rejects other options such as username and password. The main class in the jar is responsible for making sure that the credentials are applied to the job configuration.
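
One way the main class might apply credentials is sketched below; this is a hypothetical illustration, not DSE's implementation. The property names mirror those used by Cassandra's hadoop ConfigHelper but are assumptions here, and a plain Map stands in for org.apache.hadoop.conf.Configuration so the sketch is self-contained:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: because `hadoop jar` rejects username/password
// options, the jar's main class must place the credentials into the job
// configuration itself (here, read from hard-coded placeholders; a real
// job might read them from the environment or a properties file).
public class CredentialedJob {
    // Assumed property names, modeled on Cassandra's hadoop ConfigHelper.
    static Map<String, String> applyCredentials(Map<String, String> conf,
                                                String user, String password) {
        conf.put("cassandra.input.keyspace.username", user);
        conf.put("cassandra.input.keyspace.passwd", password);
        return conf;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        applyCredentials(conf, "jdoe", "s3cr3t");
        System.out.println(conf.get("cassandra.input.keyspace.username")); // prints jdoe
    }
}
```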

  • In Pig scripts that use the custom storage handlers CqlNativeStorage and CassandraStorage, provide credentials in the URL of the URL-encoded prepared statement. Use this method of providing authentication for Pig commands regardless of the mechanism that is used to pass credentials to Pig.
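
A hypothetical example of such a URL, with placeholder credentials, keyspace, and table names:

```
grunt> rows = LOAD 'cql://jdoe:s3cr3t@mykeyspace/mytable' USING CqlNativeStorage();
```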

  • To use Hadoop tools, such as Hive, a user who is not a superuser needs all privileges on the HiveMetaStore and cfs keyspaces. For example, to configure a user account named jdoe to use Hadoop tools, run these cqlsh commands:
    cqlsh> GRANT ALL PERMISSIONS ON KEYSPACE "HiveMetaStore" TO jdoe;
    cqlsh> GRANT ALL PERMISSIONS ON KEYSPACE cfs TO jdoe;