Determines which filter predicates can be pushed down to Cassandra.
The list of predicates to be pushed down is available in the predicatesToPushDown property.
The list of predicates that cannot be pushed down is available in the predicatesToPreserve property.
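A minimal sketch of this split, not the connector's actual BasicCassandraPredicatePushDown logic: it assumes only equality and IN predicates on known key or indexed columns can be pushed down, and the SimplePredicateSplitter, PushdownDecision, and pushableColumns names are hypothetical, introduced only for illustration.

  import org.apache.spark.sql.sources.{EqualTo, Filter, GreaterThan, In}

  // Hypothetical, simplified splitter: the real pushdown logic also consults
  // table metadata such as partition keys, clustering columns and indexes.
  case class PushdownDecision(
    predicatesToPushDown: Set[Filter],   // handled by Cassandra via CQL
    predicatesToPreserve: Set[Filter])   // re-applied by Spark after the scan

  object SimplePredicateSplitter {
    // Assumption: only equality/IN predicates on the given columns are pushable.
    def split(filters: Set[Filter], pushableColumns: Set[String]): PushdownDecision = {
      val (push, keep) = filters.partition {
        case EqualTo(column, _) => pushableColumns.contains(column)
        case In(column, _)      => pushableColumns.contains(column)
        case _                  => false
      }
      PushdownDecision(push, keep)
    }
  }

  // Example: only "id" is a partition-key column in this hypothetical table.
  val decision = SimplePredicateSplitter.split(
    Set(EqualTo("id", 1), GreaterThan("ts", 100)), Set("id"))
  // decision.predicatesToPushDown == Set(EqualTo("id", 1))
  // decision.predicatesToPreserve == Set(GreaterThan("ts", 100))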
Stores data source options.
Implements BaseRelation, InsertableRelation and PrunedFilteredScan. It inserts data into and scans a Cassandra table. If filterPushdown is true, it pushes some filters down to CQL.
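A bare-bones skeleton showing which members those three Spark SQL source interfaces require; the class name is hypothetical and the insert and scan bodies are placeholders, not the connector's implementation.

  import org.apache.spark.rdd.RDD
  import org.apache.spark.sql.{DataFrame, Row, SQLContext}
  import org.apache.spark.sql.sources.{BaseRelation, Filter, InsertableRelation, PrunedFilteredScan}
  import org.apache.spark.sql.types.StructType

  // Skeleton only: illustrates the contract of the three interfaces.
  class SkeletonCassandraRelation(
      override val sqlContext: SQLContext,
      tableSchema: StructType,
      filterPushdown: Boolean)
    extends BaseRelation with InsertableRelation with PrunedFilteredScan {

    // BaseRelation: the table's Catalyst schema.
    override def schema: StructType = tableSchema

    // InsertableRelation: write a DataFrame into the Cassandra table.
    override def insert(data: DataFrame, overwrite: Boolean): Unit = {
      // A real implementation would save `data` to Cassandra; omitted here.
      ???
    }

    // PrunedFilteredScan: scan only the required columns; when filterPushdown
    // is true, the pushable subset of `filters` would be translated to CQL.
    override def buildScan(requiredColumns: Array[String], filters: Array[Filter]): RDD[Row] = {
      // A real implementation would build an RDD backed by Cassandra; omitted here.
      ???
    }
  }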
The Cassandra data source extends RelationProvider, SchemaRelationProvider and CreatableRelationProvider. It is used internally by Spark SQL to create a Relation for a table that specifies the Cassandra data source, e.g.
CREATE TEMPORARY TABLE tmpTable
USING org.apache.spark.sql.cassandra
OPTIONS (
  table "table",
  keyspace "keyspace",
  cluster "test_cluster",
  pushdown "true",
  spark.cassandra.input.fetch.sizeInRows "10",
  spark.cassandra.output.consistency.level "ONE",
  spark.cassandra.connection.timeoutMS "1000"
)
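The same table can also be reached through the DataFrame reader API; the option keys mirror the OPTIONS clause above. This is a usage sketch that assumes the connector is on the classpath and a Cassandra connection is already configured.

  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder().appName("cassandra-example").getOrCreate()

  // Reader-API equivalent of the CREATE TEMPORARY TABLE statement above.
  val df = spark.read
    .format("org.apache.spark.sql.cassandra")
    .options(Map(
      "table"    -> "table",
      "keyspace" -> "keyspace",
      "cluster"  -> "test_cluster",
      "pushdown" -> "true"))
    .load()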
A unified API for predicates, used by BasicCassandraPredicatePushDown.
It keeps all the Spark-specific logic out of BasicCassandraPredicatePushDown and makes it easy to plug in custom predicate implementations for unit testing.
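A minimal sketch of that type-class idea: SimplePredicateOps, TestPredicate and canPushDown are hypothetical, simplified names (the connector's real PredicateOps exposes more operations), but they show how the pushdown rule stays independent of Spark classes, which is what makes unit testing easy.

  // Hypothetical, simplified version of the unified predicate API.
  trait SimplePredicateOps[Predicate] {
    def columnName(p: Predicate): String
    def isEqualityPredicate(p: Predicate): Boolean
  }

  // Test-only predicate type: no Spark classes involved, so pushdown-logic
  // unit tests stay small and fast.
  case class TestPredicate(column: String, operator: String)

  object TestPredicateOps extends SimplePredicateOps[TestPredicate] {
    override def columnName(p: TestPredicate): String = p.column
    override def isEqualityPredicate(p: TestPredicate): Boolean = p.operator == "="
  }

  // A pushdown rule written against the type class works for any predicate type.
  def canPushDown[P](p: P, keyColumns: Set[String])(implicit ops: SimplePredicateOps[P]): Boolean =
    ops.isEqualityPredicate(p) && keyColumns.contains(ops.columnName(p))

  implicit val testOps: SimplePredicateOps[TestPredicate] = TestPredicateOps
  val pushable = canPushDown(TestPredicate("id", "="), Set("id")) // true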
Stores the table name, the keyspace name and an optional cluster name; a keyspace is equivalent to a database.
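A minimal case-class sketch of such a reference; the field names and defaults are assumptions made for illustration.

  // Sketch of a table reference: keyspace plays the role of a database,
  // and the cluster name is optional.
  case class TableRef(table: String, keyspace: String, cluster: Option[String] = None)

  val ref = TableRef("table", "keyspace", Some("test_cluster"))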
A DataFrame format used to access Cassandra through the Connector.
Converts Cassandra data types to Catalyst data types.
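A sketch of the idea as a lookup from CQL type names to Catalyst DataType instances; only a few common types are shown, and this mapping is illustrative rather than the connector's complete conversion.

  import org.apache.spark.sql.types._

  // Illustrative subset of a CQL -> Catalyst type mapping.
  val catalystTypeOf: Map[String, DataType] = Map(
    "ascii"     -> StringType,
    "text"      -> StringType,
    "varchar"   -> StringType,
    "int"       -> IntegerType,
    "bigint"    -> LongType,
    "double"    -> DoubleType,
    "boolean"   -> BooleanType,
    "timestamp" -> TimestampType,
    "uuid"      -> StringType   // assumption: UUIDs surfaced as strings
  )

  catalystTypeOf("text")  // StringType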
Provides PredicateOps adapters for the Expression and Filter classes.
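A simplified sketch of what the Filter-side adapter has to do: map Spark SQL's external Filter predicates to the column they reference so the pushdown logic can reason about them uniformly. The FilterColumn name is hypothetical and only a handful of Filter cases are handled.

  import org.apache.spark.sql.sources._

  // Simplified adapter over Spark SQL's Filter API: report the column a
  // predicate refers to, or None for compound or unsupported filters.
  object FilterColumn {
    def of(filter: Filter): Option[String] = filter match {
      case EqualTo(column, _)            => Some(column)
      case GreaterThan(column, _)        => Some(column)
      case GreaterThanOrEqual(column, _) => Some(column)
      case LessThan(column, _)           => Some(column)
      case LessThanOrEqual(column, _)    => Some(column)
      case In(column, _)                 => Some(column)
      case IsNotNull(column)             => Some(column)
      case _                             => None   // And/Or/Not etc. not handled here
    }
  }

  FilterColumn.of(EqualTo("id", 1)) // Some("id")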
Returns a map of options that configures the path to the Cassandra table as well as whether pushdown is enabled.
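Such a map might look like the sketch below, mirroring the keys used in the CREATE TEMPORARY TABLE example earlier; the exact set of keys returned is an assumption.

  // Illustrative options map: identifies the Cassandra table and toggles pushdown.
  val sourceOptions: Map[String, String] = Map(
    "keyspace" -> "keyspace",
    "table"    -> "table",
    "cluster"  -> "test_cluster",
    "pushdown" -> "true")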