dse commands

The dse commands provide additional controls for starting and using DataStax Enterprise.

Synopsis
$ dse [-f config_file] [-u username -p password] [-a jmx_username -b jmx_password] subcommand [command-arguments]

You can provide user credentials in several ways; see internal authentication.
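For example, to pass credentials directly on the command line (the user name and password shown are placeholders):

$ dse -u myusername -p mypassword cassandra   # placeholder credentials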

Legend
Syntax conventions   Description
Italics              Variable value. Replace with a user-defined value.
[ ]                  Optional. Square brackets ( [ ] ) surround optional command arguments. Do not type the square brackets.
( )                  Group. Parentheses ( ( ) ) identify a group to choose from. Do not type the parentheses.
|                    Or. A vertical bar ( | ) separates alternative elements. Type any one of the elements. Do not type the vertical bar.
[ -- ]               Separate command-line options from command arguments with two hyphens ( -- ). This syntax is useful when arguments might be mistaken for command-line options.
This table describes the authentication command arguments that can be used with all subcommands.

Command arguments   Description
-f                  Path to a configuration file that stores credentials. If not specified, ~/.dserc is used if it exists.
-u                  User name to authenticate against the configured Cassandra authentication schema.
-p                  Password to authenticate against the configured Cassandra authentication schema.
-a                  User name to authenticate with secure JMX.
-b                  Password to authenticate with secure JMX.
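For reference, a minimal sketch of the credentials file passed with -f (or of ~/.dserc), assuming the standard username/password property format:

# Placeholder credentials; replace with your Cassandra user name and password.
username=myusername
password=mypassword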

This table describes the dse version command, which can be used without authentication:

Command argument   Description
-v                 Send the DSE version number to standard output.
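For example (the version number shown is illustrative output only):

$ dse -v
4.8.4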

dse subcommands

The following dse subcommands use authentication.

beeline
    Start the Beeline shell.
cassandra
    Start up a real-time Cassandra node in the background. See Starting DataStax Enterprise.
cassandra -c
    Enable the Cassandra File System (CFS) but not the integrated DSE Job Trackers and Task Trackers. Use to start nodes for running an external Hadoop system.
cassandra -f
    Start up a real-time Cassandra node in the foreground. Can be used with the -k, -t, or -s options.
cassandra -k
    Start up an analytics node in Spark mode in the background. See Starting Spark.
cassandra -k -t
    Start up an analytics node in Spark and DSE Hadoop mode. See Starting Spark.
cassandra -s
    Start up a DSE Search node in the background. See Starting DataStax Enterprise.
cassandra -t
    Start up an analytics node in DSE Hadoop mode in the background. See Starting DataStax Enterprise.
cassandra -s -Ddse.solr.data.dir=path
    Use path to store DSE Search data. See Moving solr.data.
cassandra -Dcassandra.replace_address=address
    Start a replacement node with the IP address of the dead node it replaces. See Replacing a dead node.
cassandra-stop -p pid
    Stop the DataStax Enterprise process with the given pid. See Stopping a node.

All -D options available in Cassandra start commands are supported; examples follow.
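For example, the following commands start a Spark analytics node and a DSE Search node with a custom data directory (the path is a placeholder):

$ dse cassandra -k
$ dse cassandra -s -Ddse.solr.data.dir=/var/lib/dse/solr.data   # placeholder path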

esri-import ESRI import tool options
    The DataStax Enterprise custom ESRI import tool supports the Enclosed JSON format. See Spatial analytics support.
hadoop version
    Send the version of the Hadoop component to standard output.
hadoop fs options
    Invoke the Hadoop FileSystem shell. See the Hadoop tutorial.
hadoop fs -help
    Send Apache Hadoop fs command descriptions to standard output. See the Hadoop tutorial.
hive
    Start a Hive client.
hive --service name
    Start a Hive server by connecting through the JDBC driver.
hive-metastore-migrate options
    Map custom external tables to the new release format after upgrading. See dse hive-metastore-migrate -to dest_release_num.
hive-schema
    Create a Hive schema representing the Cassandra table when using Hive with BYOH.
mahout mahout command options
    Run Mahout commands.
mahout hadoop hadoop command options
    Add Mahout classes to the classpath and execute the hadoop command. See Mahout commands.
pig
    Start Pig.
pyspark
    Start PySpark.
shark
    Start the Shark shell.
spark
    Access Cassandra from the Spark shell.
spark-submit options
    Launch applications on a cluster and use Spark cluster managers. See dse spark-submit.
sqoop -help
    Send Apache Sqoop command-line help to standard output. See the Sqoop reference and the Sqoop demo.
Note: The directory in which you run the dse Spark commands must be writable by the current user.
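For illustration, a spark-submit invocation might look like the following (the application class and JAR file are placeholders):

$ dse spark-submit --class com.example.SparkApp ./myapp.jar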

The hadoop, hive, mahout, and pig commands must be issued from an analytics node. The hadoop fs options, which DSE Hadoop supports with one exception (-moveToLocal), are described in the HDFS File System Shell Guide on the Apache Hadoop website. Because DSE Hadoop does not support -moveToLocal, use the -copyToLocal option instead.
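For example, to get the effect of -moveToLocal, copy the file to the local file system and then delete the source (paths are placeholders):

$ dse hadoop fs -copyToLocal /mydata/results.txt /tmp/results.txt
$ dse hadoop fs -rm /mydata/results.txt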

The default location of the dse tool depends on the type of installation:

Package installations                             /usr/bin/dse
Installer-Services installations                  /usr/bin/dse
Installer-No Services and Tarball installations   install_location/bin/dse