Starting and stopping the Spark SQL Thrift JDBC server

The Spark SQL Thrift JDBC server provides a JDBC interface for client connections to Cassandra.

The server is an extension of the HiveServer2 server, and uses the same configuration options.

By default, the server listens on port 10000 on the localhost interface of the node from which it was started.

Note: The Spark SQL Thrift server also provides an ODBC interface, but DataStax recommends using the Databricks ODBC driver for Apache Shark.

Procedure

  1. Start DataStax Enterprise with Spark enabled as a service or in a standalone installation.
    Note: To run index queries, start the node with both Spark and Hadoop enabled. Running in this mode is experimental and not supported.
  2. Start the server by entering the dse start-spark-sql-thriftserver command as a user with permissions to write to the Spark directories.

    To override the default settings for the server, pass in the configuration property using the --hiveconf option. Refer to the HiveServer2 documentation for a complete list of configuration properties.

    $ dse start-spark-sql-thriftserver
    • To start the server on port 10001, use the --hiveconf hive.server2.thrift.port=10001 option when starting the server.
      $ dse start-spark-sql-thriftserver --hiveconf hive.server2.thrift.port=10001
    • To enable virtual columns, use the enableVirtualColumns=true option when starting the server.
      $ dse start-spark-sql-thriftserver --hiveconf enableVirtualColumns=true
    • If authentication is enabled, provide the user name and password to authenticate against the configured Cassandra authentication schema:
      $ dse -u username -p password start-spark-sql-thriftserver
      Note: Providing Cassandra credentials to the Thrift server using --hiveconf is not supported.
  3. To stop the server, enter the dse stop-spark-sql-thriftserver command.
    $ dse stop-spark-sql-thriftserver
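The options from the steps above can be combined in a single invocation. A minimal sketch, assuming authentication is enabled and a non-default port is wanted; the user name, password, and port value are placeholders to replace with values for your cluster:

```shell
# Start the Thrift server on port 10001 as an authenticated user.
# "myuser" and "mypassword" are placeholder Cassandra credentials.
$ dse -u myuser -p mypassword start-spark-sql-thriftserver \
    --hiveconf hive.server2.thrift.port=10001

# When finished, stop the server:
$ dse stop-spark-sql-thriftserver
```

Because credentials cannot be passed through --hiveconf, the -u and -p options must precede the start-spark-sql-thriftserver subcommand as shown.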

What's next

You can now connect your application to the server by using JDBC at the URI jdbc:hive2://hostname:port. See Connecting to the Spark SQL JDBC server using Beeline for a test using the Beeline console.
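As a quick smoke test, the Beeline console can connect over JDBC. A minimal sketch, assuming the server is running on the default port 10000 on the local node and that a keyspace named ks with a table named tab already exists (both are placeholder names):

```shell
# Launch the Beeline console bundled with DSE.
$ dse beeline

# At the Beeline prompt, connect to the local Thrift server
# and run a test query against a placeholder table:
beeline> !connect jdbc:hive2://localhost:10000
beeline> SELECT * FROM ks.tab;
```

If authentication is enabled, Beeline prompts for the user name and password when the !connect command is issued.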