Authenticating with internal Cassandra password authentication
DSE Unified Authentication works with internal Cassandra password authentication.
Internal Cassandra password authentication is based on Cassandra-controlled login accounts and passwords.
Internal authentication is supported on the following clients when you provide a role name and password at client startup:
- Astyanax
- cqlsh
- Drivers
- Hector
- pycassa
You can provide authentication credentials in several ways; see Credentials for authentication.
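For example, cqlsh accepts a role name and password on the command line. This is a minimal sketch; role2, MyPassword, and the host name are placeholder values:

```
cqlsh -u role2 -p MyPassword node1.example.com
```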
Authentication stores user names (roles) and bcrypt-hashed passwords in the system_auth.roles table.
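A superuser can inspect the stored roles and their hashed passwords directly. A sketch, assuming the default system_auth schema:

```
cqlsh> SELECT role, salted_hash, can_login, is_superuser FROM system_auth.roles;
```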
Spark component limitations
DataStax Enterprise provides internal authentication support for some Hadoop tools and for connecting Spark to Cassandra, but not for authenticating Spark components with each other.
Hadoop tool authentication limitations
- Internal authentication is not supported for Mahout.
- Using internal authentication to run the hadoop jar command is not supported. The hadoop jar command accepts only the JAR file name as an option, and rejects other options such as username and password. The main class in the JAR is responsible for making sure that the credentials are applied to the job configuration.
- In Pig scripts that use the custom storage handlers CqlNativeStorage and CassandraStorage, provide credentials in the URL of the URL-encoded prepared statement:
cql://username:password@keyspace/columnfamily
cassandra://username:password@keyspace/columnfamily
Use this method of providing authentication for Pig commands regardless of the mechanism that is used to pass credentials to Pig.
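For illustration, a Pig LOAD statement with embedded credentials might look like the following; role2, MyPassword, ks1, and tab1 are hypothetical names, and depending on your setup the handler may need its fully qualified class name or a DEFINE statement:

```
-- role2:MyPassword are the credentials; ks1/tab1 are a placeholder keyspace and table
data = LOAD 'cql://role2:MyPassword@ks1/tab1' USING CqlNativeStorage();
```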
- To use Hadoop tools such as Hive, a role that does not have superuser privileges requires all privileges on the HiveMetaStore and cfs keyspaces. To configure a role named role2, for example, to use Hadoop tools, use these cqlsh commands:
cqlsh> GRANT ALL PERMISSIONS ON KEYSPACE "HiveMetaStore" TO role2;
cqlsh> GRANT ALL PERMISSIONS ON KEYSPACE cfs TO role2;
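If role2 does not exist yet, a superuser can create it first. A sketch with a placeholder password:

```
cqlsh> CREATE ROLE role2 WITH PASSWORD = 'MyPassword' AND LOGIN = true;
```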