Connecting to the Spark SQL JDBC server using Beeline
You can use Shark Beeline to test the Spark SQL JDBC server.
Procedure
- Start DataStax Enterprise with Spark enabled, either as a service or in a
  standalone installation.
  Note: To run index queries, start the node with both Spark and Hadoop
  enabled. Running in this mode is experimental and not supported.
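For example, on a standalone installation a node can be started with both Spark and Hadoop enabled by passing the -k and -t flags to dse cassandra (a sketch; verify the flags against the dse commands reference for your DSE version):

```shell
# Start a standalone DSE node with Spark (-k) and the Hadoop
# job tracker (-t) enabled. This combined mode is experimental
# and not supported in production.
$ dse cassandra -k -t
```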
- Start the server by entering the dse start-spark-sql-thriftserver command
  as a user with permissions to write to the Spark directories.
  To override the default settings for the server, pass in the configuration
  property using the --hiveconf option. Refer to the HiveServer2 documentation
  for a complete list of configuration properties.
  $ dse start-spark-sql-thriftserver
  - To start the server on port 10001, use the --hiveconf
    hive.server2.thrift.port=10001 option when starting the server.
    $ dse start-spark-sql-thriftserver --hiveconf hive.server2.thrift.port=10001
  - To enable virtual columns, use the --hiveconf
    enableVirtualColumns=true option when starting the server.
    $ dse start-spark-sql-thriftserver --hiveconf enableVirtualColumns=true
  - If authentication is enabled, provide the user name and password to
    authenticate against the configured Cassandra authentication schema:
    $ dse -u username -p password start-spark-sql-thriftserver
    Note: Providing Cassandra credentials to the Thrift server using
    --hiveconf is not supported.
- Start the Beeline shell.
  $ dse beeline
- Connect to the server using the JDBC URI for your server.
  beeline> !connect jdbc:hive2://localhost:10000
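If authentication is enabled on the cluster, Beeline's !connect command also accepts the user name and password as trailing arguments (username and password below are placeholders for your own credentials):

```shell
beeline> !connect jdbc:hive2://localhost:10000 username password
```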
- Connect to a keyspace and run a query from the Beeline shell.
  0: jdbc:hive2://localhost:10000> use test;
  0: jdbc:hive2://localhost:10000> select * from test;
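When you are finished, the session can be closed with Beeline's !quit command, which disconnects from the server and exits the shell:

```shell
0: jdbc:hive2://localhost:10000> !quit
```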