The dse commands
Table of common dse commands for using DataStax Enterprise.
You can issue the dse commands listed in this document from the bin directory of the DataStax Enterprise Linux installation or from the command line in a packaged or AMI distribution.
DSE commands
$ dse [-f <config_file>] [-u <username> -p <password>] [-a <jmx_username> -b <jmx_password>] <command> [command-arguments]
Syntax conventions | Description |
---|---|
Italics or <value> | Variable value. Replace with a user-defined value. Do not type the angle brackets. |
[ ] | Optional. Square brackets ( [ ] ) surround optional command arguments. Do not type the square brackets. |
( ) | Group. Parentheses ( ( ) ) identify a group to choose from. Do not type the parentheses. |
\| | Or. A vertical bar ( \| ) separates alternative elements. Type any one of the elements. Do not type the vertical bar. |
[ -- ] | Separate the command line options from the command arguments with two hyphens ( -- ). This syntax is useful when arguments might be mistaken for command line options. |
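The two-hyphen separator behaves the same way as in other Unix command-line tools. As a generic illustration using grep rather than dse (the file name is arbitrary):

```shell
# Without --, grep would parse "-v" as its invert-match option;
# after --, "-v" is taken as the literal search pattern.
printf 'keep -v flag\nother line\n' > /tmp/demo.txt
grep -- -v /tmp/demo.txt    # prints: keep -v flag
```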
You can provide user credentials in several ways; see internal authentication.
Command arguments | Description |
---|---|
-f | Path to the configuration file that stores credentials. If not specified, ~/.dserc is used if it exists. |
-u | User name to authenticate against the configured Cassandra authentication schema. |
-p | Password to authenticate against the configured Cassandra authentication schema. |
-a | User name to authenticate with secure JMX. |
-b | Password to authenticate with secure JMX. |
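As a sketch of the credentials file referenced by -f, a ~/.dserc file is conventionally a simple properties file; the values below are placeholders, and the authoritative key names are given in the internal authentication documentation:

```
# ~/.dserc (hypothetical values; keep this file readable only by its owner)
username=jdoe
password=secret
```

With credentials stored this way, authenticated commands can be run without repeating the -u and -p flags on every invocation.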
This table describes the dse version command that can be used without authentication:
Command argument | Description |
---|---|
-v | Send the DSE version number to standard output. |
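For example, assuming a working DSE installation (the user name and password below are placeholders):

```shell
$ dse -v                               # no authentication required; prints the DSE version
$ dse -u jdoe -p secret cassandra -k   # authenticated start of a node in Spark mode
```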
dse subcommands
This table describes the dse subcommands that use authentication.

Subcommand | Command arguments | Description |
---|---|---|
beeline | | Start the Beeline shell. |
cassandra | | Start up a real-time Cassandra node in the background. See Starting DataStax Enterprise. |
cassandra | -c | Enable the Cassandra File System (CFS) but not the integrated DSE Job Trackers and Task Trackers. Use to start nodes for running an external Hadoop system. |
cassandra | -f | Start up a real-time Cassandra node in the foreground. Can be used with -k, -t, or -s options. |
cassandra | -k | Start up an analytics node in Spark mode in the background. See Starting Spark. |
cassandra | -k -t | Start up an analytics node in Spark and DSE Hadoop mode. See Starting Spark. |
cassandra | -s | Start up a DSE Search node in the background. See Starting DataStax Enterprise. |
cassandra | -t | Start up an analytics node in DSE Hadoop mode in the background. See Starting DataStax Enterprise. |
cassandra | -t -j | Start up an analytics node as the Job Tracker. See starting the Job Tracker node. |
cassandra-stop | -p pid | Stop the DataStax Enterprise process number pid. See Stopping a node. |
cassandra | -Dcassandra.replace_address | After replacing a node, replace the IP address in the table. See Replacing a dead node. All -D options in Cassandra startup commands are supported. |
esri-import | ESRI import tool options | The DataStax Enterprise custom ESRI import tool supports the Enclosed JSON format. See Spatial analytics support. |
hadoop | version | Send the version of the Hadoop component to standard output. |
hadoop | fs options | Invoke the Hadoop FileSystem shell. See the Hadoop tutorial. |
hadoop | fs -help | Send Apache Hadoop fs command descriptions to standard output. See the Hadoop tutorial. |
hive | | Start a Hive client. |
hive | --service name | Start a Hive server by connecting through the JDBC driver. |
hive-schema | | Create a Hive schema representing the Cassandra table. See Using Hive with BYOH. |
hive-metastore-migrate | hive-metastore-migrate tool options | Map custom external tables to the new release format after upgrading. See dse hive-metastore-migrate -to <to>. |
mahout | mahout command options | Run Mahout commands. |
mahout hadoop | hadoop command options | Add Mahout classes to classpath and execute the hadoop command. See Mahout commands. |
pig | | Start Pig. |
pyspark | | Start PySpark. |
shark | | Start the Shark shell. |
spark | | Start the Spark shell. See Accessing Cassandra from the Spark shell. |
spark-submit | options | Launch applications on a cluster and use Spark cluster managers. See dse spark-submit. |
spark-submit-with-cc | options | Submit a Spark job with Cassandra Context support. Deprecated, but exists. |
spark-schema | options | Generate Cassandra schema jar. Deprecated, but exists. |
sqoop | -help | Send Apache Sqoop command line help to standard output. See the Sqoop demo. |
The hadoop, hive, mahout, and pig commands must be issued from an analytics node. The hadoop fs options are described in the HDFS File System Shell Guide on the Apache Hadoop web site. DSE Hadoop supports all of these options except -moveToLocal; use the -copyToLocal option instead.
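As a sketch of typical file system shell usage on an analytics node (the directory and file names are illustrative):

```shell
$ dse hadoop fs -mkdir /user/demo                    # create a directory in CFS
$ dse hadoop fs -put local.txt /user/demo            # copy a local file into CFS
$ dse hadoop fs -copyToLocal /user/demo/local.txt .  # use instead of the unsupported -moveToLocal
```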