The dse commands

Table of dse commands for using DataStax Enterprise

You can issue the dse commands listed in this document from the bin directory of a DataStax Enterprise tarball installation on Linux, or from anywhere on the command line in a package or AMI distribution.

DSE commands 

Synopsis
dse [-v] | cassandra [options] | hadoop [options] | hive [options]
    | mahout [options] | pig [options] | sqoop [options]
Synopsis when using secure JMX
$ dse [-u <username>] [-a <jmx_username>] <subcommand> [command-arguments]
For commands that require Cassandra authentication credentials or JMX credentials, issue the command and subcommands with only the Cassandra user name and/or the secure JMX user name. If a .dserc file does not exist, you are prompted to enter the password on the next line. For example:
$ dse -u cassandra hadoop fs -ls /

This command prompts you to enter the password to authenticate against the configured Cassandra authentication schema.
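To avoid the password prompt in scripted or non-interactive use, you can store credentials in a .dserc file in your home directory. A minimal sketch (the values shown are placeholders for your own credentials):

```
username=cassandra
password=cassandra
```

Because this file contains a plain-text password, restrict its permissions so that only the owner can read it.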

This table describes the authentication command arguments that can be used with all subcommands.
Command argument  Description
-u                User name to authenticate against the configured Cassandra authentication schema.
-p                Password to authenticate against the configured Cassandra authentication schema.
-a                User name to authenticate with secure JMX.
-b                Password to authenticate with secure JMX.
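The Cassandra and JMX arguments can be combined in one command. For example, the following hypothetical invocation supplies the Cassandra user name and password plus a JMX user name; because -b is omitted, you are prompted for the JMX password:

```
$ dse -u cassandra -p cassandra -a jmxuser hadoop fs -ls /
```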
This table describes the dse version command that can be used without authentication:
Command argument  Description
-v                Send the DSE version number to standard output.

dse subcommands

This table describes the dse subcommands that use authentication.
Subcommand Command arguments Description
beeline   Start the Beeline shell.
cassandra   Start up a real-time Cassandra node in the background. See Starting DataStax Enterprise.
cassandra -c Enable the Cassandra File System (CFS) but not the integrated DSE Job Trackers and Task Trackers. Use to start nodes for running an external Hadoop system.
cassandra -f Start up a real-time Cassandra node in the foreground. Can be used with -k, -t, or -s options.
cassandra -k Start up an analytics node in Spark mode in the background. See Starting Spark.
cassandra -k -t Start up an analytics node in Spark and DSE Hadoop mode. See Starting Spark.
cassandra -s Start up a DSE Search node in the background. See Starting DataStax Enterprise.
cassandra -t Start up an analytics node in DSE Hadoop mode in the background. See Starting DataStax Enterprise.
cassandra -t -j Start up an analytics node as the Job Tracker. See starting the Job Tracker node.
cassandra-stop -p pid Stop the DataStax Enterprise process number pid. See Stopping a node.
cassandra -Dcassandra.replace_address Start up the node as a replacement for a dead node that had the specified IP address. See Replacing a dead node.

All -D options in Cassandra startup commands are supported.

hadoop version Send the version of the Hadoop component to standard output.
hadoop fs options Invoke the Hadoop FileSystem shell. See the Hadoop tutorial.
hadoop fs -help Send Apache Hadoop fs command descriptions to standard output. See the Hadoop tutorial.
hive   Start a Hive client.
hive --service name Start a Hive server by connecting through the JDBC driver.
mahout mahout command options Run Mahout commands.
mahout hadoop hadoop command options Add Mahout classes to classpath and execute the hadoop command. See Mahout commands.
pig   Start Pig.
shark   Start the Shark shell.
spark   Start the Spark shell. See Accessing Cassandra from the Spark shell.
spark-with-cc   Submit a Spark job with Cassandra Context support (deprecated, but still available).
spark-class   Start a Spark application.
spark-schema options Generate a Cassandra schema jar (deprecated, but still available).
spark-schema   Generate Cassandra Context source files (deprecated, but still available).
sqoop -help Send Apache Sqoop command line help to standard output. See the Sqoop demo.
Note: The directory in which you run the dse Spark commands must be writable by the current user.

The hadoop, hive, mahout, and pig commands must be issued from an analytics node. The hadoop fs options are described in the HDFS File System Shell Guide on the Apache Hadoop web site. DSE Hadoop supports all of these options except -moveToLocal; use the -copyToLocal option instead.
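Because -moveToLocal is not supported, an equivalent move can be sketched as a copy followed by a delete. The paths in this example are hypothetical:

```
$ dse hadoop fs -copyToLocal /user/hadoop/results.txt /tmp/results.txt
$ dse hadoop fs -rm /user/hadoop/results.txt
```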