DSE Analytics and Search integration
An integrated DSE SearchAnalytics cluster allows analytics jobs to be performed using CQL queries.
This integration allows finer-grained control over the types of queries that are used in analytics workloads, and improves performance by reducing the amount of data that is processed.
However, a DSE SearchAnalytics cluster does not provide workload isolation, and there are no detailed guidelines for provisioning and performance in production environments.
Nodes that are started in SearchAnalytics mode allow you to create analytics queries that use DSE Search indexes.
These queries return RDDs that are used by Spark jobs to analyze the returned data.
The following code shows how to use a DSE Search query from the DSE Spark console.
val table = sc.cassandraTable("music", "solr")
val result = table.select("id", "artist_name")
  .where("solr_query='artist_name:Miles*'")
  .take(10)
You can use Spark Datasets/DataFrames instead of RDDs.
val table = spark.read.format("org.apache.spark.sql.cassandra")
  .options(Map("keyspace" -> "music", "table" -> "solr"))
  .load()
val result = table.select("id", "artist_name")
  .where("solr_query='artist_name:Miles*'")
result.show(10)
Alternatively, you can use a Spark SQL query.
val result = spark.sql("SELECT id, artist_name FROM music.solr WHERE solr_query = 'artist_name:Miles*' LIMIT 10")
For a detailed example, see Running the Wikipedia demo with SearchAnalytics.
Create DSE SearchAnalytics nodes in a mixed-workload cluster, as described in Initializing a single datacenter per workload type.
The name of the datacenter is set to SearchAnalytics when using the DseSimpleSnitch. Do not modify existing search or analytics nodes that use DseSimpleSnitch to be SearchAnalytics nodes. If you use another snitch, like GossipingPropertyFileSnitch, you can have a mixed workload within a datacenter.
Perform load testing to ensure your hardware has enough CPU and memory for the additional resource overhead that is required by Spark and Solr.
SearchAnalytics nodes always use driver paging settings. See Using pagination (cursors) with CQL Solr queries.
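As a sketch of how paging interacts with Spark reads, the spark-cassandra-connector exposes a fetch-size setting that controls how many rows the driver requests per page. The property name `spark.cassandra.input.fetch.sizeInRows` and its default are assumptions here; verify them against the connector version shipped with your DSE release.

```
// Assumed connector property; check your spark-cassandra-connector docs.
// Controls the number of rows fetched per driver page during Spark reads.
spark.conf.set("spark.cassandra.input.fetch.sizeInRows", "1000")

val table = spark.read.format("org.apache.spark.sql.cassandra")
  .options(Map("keyspace" -> "music", "table" -> "solr"))
  .load()
```

A larger page size reduces round trips at the cost of more memory per request; tune it with the load testing recommended above.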
SearchAnalytics nodes might consume more resources than search-only or analytics-only nodes. Resource requirements depend greatly on your query patterns.
Take care when enabling both Search and Analytics on a DSE node: because both workloads run simultaneously, provision sufficient memory and compute resources to accommodate the indexing, querying, and processing appropriate to your use case. As with all DSE clusters, SearchAnalytics clusters are appropriate for production environments, provided the environment has sufficient resources for its specific workload.
All fields queried on DSE SearchAnalytics clusters must be defined in the search index schema. Fields that are not defined in the search index schema are excluded from the results returned by Spark queries.
Using predicate push down on search indexes in Spark SQL
Search predicate push down allows queries in SearchAnalytics datacenters to use Solr-indexed columns in Spark SQL queries.
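As an illustration, a minimal sketch of push down using the `music.solr` table from the earlier examples. The optimization flag `spark.sql.dse.solr.enable_optimization` is an assumption here; confirm the exact property name for your DSE version. `explain` shows which filters were pushed to the data source.

```
// Assumed DSE setting that routes eligible filters to the search index.
spark.conf.set("spark.sql.dse.solr.enable_optimization", "true")

val df = spark.read.format("org.apache.spark.sql.cassandra")
  .options(Map("keyspace" -> "music", "table" -> "solr"))
  .load()

// Filter on a Solr-indexed column; with push down enabled, the predicate
// can be served by the search index instead of a full table scan.
val miles = df.filter(df("artist_name").startsWith("Miles"))

// Inspect the physical plan to verify the filter was pushed down.
miles.explain(true)
```

If the plan shows the filter under `PushedFilters`, the predicate is being evaluated by the index rather than by Spark.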