Configuring and using internal authentication

Internal authentication is based on Cassandra-controlled login accounts and passwords.

Like object permission management, which uses internal authorization, internal authentication is controlled entirely from within Cassandra. Clients such as cqlsh and the DataStax drivers use internal authentication when you provide a user name and password to start up the client.

Internal authentication stores user names and bcrypt-hashed passwords in the system_auth.credentials table.
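For example, assuming authentication is enabled and the default cassandra superuser account is unchanged, you can log in and inspect the stored accounts:

cqlsh -u cassandra -p cassandra
cqlsh> SELECT username FROM system_auth.credentials;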

Limitations 

DataStax Enterprise 4.0 to 4.0.2 does not support internal authentication in Solr, dsetool, or the Hadoop utilities. DataStax Enterprise 4.0.3 and later provide internal authentication support for some Hadoop tools.

Authentication for Hadoop tools  

After you configure authentication in DataStax Enterprise 4.0.3 and later, starting Hadoop requires a user name and password. You can provide these login credentials in a file or on the command line, as follows:

Using a file

You can provide the user name and password by creating a file named .dserc in your home directory. The ~/.dserc file contains the credentials in this format:
username=<username>
password=<password>
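For example, the following commands create the file for a hypothetical jdoe account and restrict its permissions, because the password is stored in plain text:

cat > ~/.dserc <<EOF
username=jdoe
password=s3cret
EOF
chmod 600 ~/.dserc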
        

Using the command line

If a ~/.dserc file does not exist, provide the login credentials using these options on the dse command line:
dse hadoop <command> -Dcassandra.username=<username> -Dcassandra.password=<password> <other options>

dse hive <hive options> -hiveconf cassandra.username=<username> -hiveconf cassandra.password=<password>

dse pig -Dcassandra.username=<username> -Dcassandra.password=<password> <pig options> 

dse sqoop <sqoop options> --cassandra-username=<username> --cassandra-password=<password>                   
The dse command reference covers other options.
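For example, a hypothetical invocation that lists the HDFS root directory as user jdoe; the same pattern applies to the hive, pig, and sqoop forms above:

dse hadoop fs -Dcassandra.username=jdoe -Dcassandra.password=s3cret -ls /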

Hadoop tool authentication limitations

  • Internal authentication is not supported for Mahout.
  • Using internal authentication to run the hadoop jar command is not supported.

    The hadoop jar command accepts only the jar file name as an option and rejects other options, such as the user name and password. The main class in the jar is responsible for making sure that the credentials are applied to the job configuration, as shown in the sketch after this list.

  • In Pig scripts that use the CqlStorage or CassandraStorage storage handlers, provide credentials in the URL of the URL-encoded prepared statement:

    cql://<username>:<password>@<keyspace>/<columnfamily>
    cassandra://<username>:<password>@<keyspace>/<columnfamily>

    Use this method of providing authentication for these Pig commands regardless of the mechanism you otherwise use to pass credentials to Pig; a hypothetical example follows this list.
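For the hadoop jar limitation above, here is a minimal sketch of a driver main class that applies the credentials itself. It assumes the job reads the same cassandra.username and cassandra.password properties that the dse command options set; jdoe and s3cret are placeholders:

import org.apache.hadoop.conf.Configuration;

public class AuthenticatedJob {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // hadoop jar does not forward credential options, so the driver
        // must set them on the job configuration itself.
        conf.set("cassandra.username", "jdoe");   // placeholder user name
        conf.set("cassandra.password", "s3cret"); // placeholder password
        // ... configure and submit the MapReduce job using this conf ...
    }
}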
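For the storage handler URLs, a hypothetical Pig LOAD statement with embedded credentials, where jdoe, s3cret, mykeyspace, and mytable are placeholders:

rows = LOAD 'cql://jdoe:s3cret@mykeyspace/mytable' USING CqlStorage();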