An RDD that performs a selecting join between the left RDD and the specified Cassandra table. This will perform individual selects to retrieve the rows from Cassandra and will take advantage of RDDs that have been partitioned with the com.datastax.spark.connector.rdd.partitioner.ReplicaPartitioner.

Type parameters:
- the item type on the left side of the join (any RDD)
- the item type on the right side of the join (fetched from Cassandra)
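As an illustration only, here is a minimal sketch of such a join, assuming a SparkContext sc whose SparkConf already carries the Cassandra connection properties (see the configuration notes on CassandraTableScanRDD below), and a hypothetical table ks.users whose partition key column is user_id:

```scala
import com.datastax.spark.connector._

// Hypothetical key type matching the partition key of ks.users.
case class UserKey(user_id: Int)

val keys = sc.parallelize(1 to 100).map(UserKey)

// Group the keys by the replicas that own them, then issue individual
// selects against the table for every key on the local replica.
val joined = keys
  .repartitionByCassandraReplica("ks", "users")
  .joinWithCassandraTable("ks", "users")

joined.take(10).foreach { case (key, row) => println(s"$key -> $row") }
```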
Used to get a RowReader of type [R] for transforming the rows of a particular Cassandra table into Scala objects. Performs the necessary checking of the schema and output class to make sure they are compatible.
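For instance, when a target row type is given, the connector resolves an implicit RowReaderFactory for it. A sketch, reusing the SparkContext sc from above and assuming a hypothetical table ks.words(word text PRIMARY KEY, count int):

```scala
import com.datastax.spark.connector._

case class WordCount(word: String, count: Int)

// The implicit RowReaderFactory checks the table schema against the
// target class and supplies a RowReader[WordCount] for each row.
val words = sc.cassandraTable[WordCount]("ks", "words")
words.take(5).foreach(println)
```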
RDD representing a table scan of a Cassandra table.
This class is the main entry point for analyzing data in a Cassandra database with Spark. Obtain objects of this class by calling com.datastax.spark.connector.SparkContextFunctions.cassandraTable.

Configuration properties should be passed in the SparkConf configuration of the SparkContext. CassandraRDD needs to open a connection to Cassandra, therefore it requires appropriate connection property values to be present in SparkConf. For the list of required and available properties, see CassandraConnector.
CassandraRDD divides the data set into smaller partitions, processed locally on every cluster node. A data partition consists of one or more contiguous token ranges. To reduce the number of round-trips to Cassandra, every partition is fetched in batches.

The following properties control the number of partitions and the fetch size:
- spark.cassandra.input.split.sizeInMB: approximate amount of data to be fetched into a single Spark partition; default 512 MB
- spark.cassandra.input.fetch.sizeInRows: number of CQL rows fetched per round-trip; default 1000
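For example, both properties can be set on the SparkConf before the SparkContext is created. A sketch only; the contact point and the chosen values are placeholders:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Tune input partitioning and paging via the properties listed above.
val conf = new SparkConf()
  .setAppName("cassandra-scan")
  .set("spark.cassandra.connection.host", "127.0.0.1")    // placeholder contact point
  .set("spark.cassandra.input.split.sizeInMB", "64")      // smaller Spark partitions
  .set("spark.cassandra.input.fetch.sizeInRows", "5000")  // more rows per round-trip
val sc = new SparkContext(conf)
```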
A CassandraRDD object gets serialized and sent to every Spark Executor, which then calls the compute method to fetch the data on every node. The getPreferredLocations method tells Spark the preferred nodes to fetch a partition from, so that the data for the partition is on the same node the task was sent to. If Cassandra nodes are co-located with Spark nodes, the queries are always sent to the Cassandra process running on the same node as the Spark Executor process, hence data is not transferred between nodes.
If a Cassandra node fails or gets overloaded during a read, the queries are retried against a different node.
By default, reads are performed at ConsistencyLevel.LOCAL_ONE in order to leverage data-locality and minimize network traffic. This read consistency level is controlled by the spark.cassandra.input.consistency.level property.
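A minimal end-to-end scan under the same assumptions (a configured SparkContext sc and a hypothetical table ks.kv(key text PRIMARY KEY, value int)) might look like this:

```scala
import com.datastax.spark.connector._

// Full table scan; rows are fetched in batches per partition.
val rdd = sc.cassandraTable("ks", "kv")

// Each partition is computed on a node holding the data when Spark
// and Cassandra are co-located.
val total = rdd.map(_.getInt("value")).sum()
println(s"sum of value column: $total")
```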
Represents a logical conjunction of CQL predicates. Each predicate can have placeholders denoted by '?' which get substituted by values from the values array. The number of placeholders must match the size of the values array.
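Such predicates are typically produced by the where method of a CassandraRDD. A sketch, assuming the hypothetical ks.users table with an indexed city column:

```scala
import com.datastax.spark.connector._

// The '?' placeholder is substituted by the bound value "Paris".
val parisians = sc.cassandraTable("ks", "users").where("city = ?", "Paris")
```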
Represents a CassandraRDD with no rows. This RDD does not load any data from Cassandra and does not require the table to exist.
Read settings for an RDD:
- splitCount: number of partitions to divide the data into; unset by default
- splitSizeInMB: size of Cassandra data to be read in a single Spark task; determines the number of partitions, but ignored if splitCount is set
- fetchSizeInRows: number of CQL rows to fetch in a single round-trip to Cassandra
- consistencyLevel: consistency level for reads, default LOCAL_ONE; a higher consistency level will disable data locality
- taskMetricsEnabled: whether or not to enable task metrics updates (requires Spark 1.2+)
- readsPerSec: maximum read throughput allowed per single core in requests/s while joining an RDD with a Cassandra table (the joinWithCassandraTable operation); also used by enterprise integrations
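These settings can also be applied per RDD rather than globally. A sketch, with the caveat that the exact field names may differ between connector versions:

```scala
import com.datastax.spark.connector._
import com.datastax.spark.connector.rdd.ReadConf

// Override the read settings for this RDD only; unspecified fields
// keep their SparkConf-derived defaults.
val tuned = sc.cassandraTable("ks", "kv")
  .withReadConf(ReadConf(splitSizeInMB = 64, fetchSizeInRows = 2000))
```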
Provides components for partitioning a Cassandra table into smaller parts of appropriate size. Each partition can be processed locally on at least one cluster node.
Provides components for reading data rows from Cassandra and converting them to objects of the desired type. Additionally provides a generic CassandraRow class which can represent any row.
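When no target class is given, rows come back as CassandraRow. A generic-access sketch under the same hypothetical ks.users schema:

```scala
import com.datastax.spark.connector._

sc.cassandraTable("ks", "users").take(3).foreach { row =>
  val id   = row.getInt("user_id")        // typed getter by column name
  val city = row.getStringOption("city")  // Option-valued getter for nullable columns
  println(s"user $id lives in ${city.getOrElse("unknown")}")
}
```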
Contains the com.datastax.spark.connector.rdd.CassandraTableScanRDD class, which is the main entry point for analyzing Cassandra data from Spark.