Analyzing data using Spark
Spark is the default mode when you start an analytics node in a packaged installation.
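For a packaged installation, analytics mode is typically controlled through the service defaults file; in a tarball installation the node is started with the `-k` flag. A minimal sketch (the file path and service name reflect common DSE layouts and may differ by version and platform):

```shell
# Packaged installation: enable Spark (analytics mode) in the service defaults,
# then restart the DSE service.
#   /etc/default/dse
#     SPARK_ENABLED=1
sudo service dse restart

# Tarball installation: start the node in analytics mode directly.
dse cassandra -k
```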
- About Spark
Information about Spark architecture and capabilities.
- Using Spark with DataStax Enterprise
DataStax Enterprise integrates with Apache Spark so that distributed analytics applications can run against database data.
- Configuring Spark
Configuring Spark includes setting Spark properties for DataStax Enterprise and the database, enabling Spark applications, and setting permissions.
- Using Spark modules with DataStax Enterprise
Spark Streaming, Spark SQL, and MLlib are modules that extend the capabilities of Spark.
- Using AlwaysOn SQL service
AlwaysOn SQL is a high-availability service that responds to SQL queries from JDBC and ODBC applications.
- Accessing DataStax Enterprise data from external Spark clusters
Information on accessing data in DataStax Enterprise clusters from external Spark clusters, or Bring Your Own Spark (BYOS).
- Using the Spark Jobserver
DSE includes Spark Jobserver, a REST interface for submitting and managing Spark jobs.
- DSE Spark Connector API documentation
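As a taste of what the connector API covers, the sketch below reads a table through the connector's Spark SQL data source. It assumes a Spark application submitted to a running DSE analytics cluster; the keyspace and table names (`my_ks`, `my_table`) are hypothetical placeholders:

```scala
import org.apache.spark.sql.SparkSession

object ReadTableExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("DSE read example")
      .getOrCreate()

    // Load a table via the connector's Cassandra data source.
    // "my_ks" and "my_table" are placeholders for your own schema.
    val df = spark.read
      .format("org.apache.spark.sql.cassandra")
      .options(Map("keyspace" -> "my_ks", "table" -> "my_table"))
      .load()

    df.show(10)
    spark.stop()
  }
}
```

When the job is submitted with `dse spark-submit`, the connector's dependencies and cluster connection settings are supplied by DSE, so no explicit contact-point configuration is needed in the sketch above.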