Entry point for the Spark Cassandra Connector. Reads SolrIndexedColumn information from C*. See the apply method above for the actual implementation.
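A minimal sketch of what that entry point might look like, using simplified stand-in types: AnalyzedPredicates, getSolrIndexedColumns, and convertToSolrQuery are illustrative names and signatures here, not the connector's actual API.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.sql.sources.Filter

// Simplified stand-in for the connector's split of filters into those pushed
// down to Cassandra/Solr and those Spark must still evaluate itself.
case class AnalyzedPredicates(
  handledByCassandra: Set[Filter],
  handledBySpark: Set[Filter])

object SolrPredicateRulesSketch {

  // Entry point: look up which columns Solr has indexed (read from C* in the
  // real rule), then hand off to the conversion logic described below.
  def apply(predicates: AnalyzedPredicates,
            keyspace: String,
            table: String,
            conf: SparkConf): AnalyzedPredicates = {
    val solrIndexedColumns = getSolrIndexedColumns(keyspace, table)
    convertToSolrQuery(predicates, solrIndexedColumns)
  }

  // Hypothetical lookup; the real rule reads this information from Cassandra.
  private def getSolrIndexedColumns(keyspace: String, table: String): Set[String] = ???

  // Hypothetical conversion step; sketched in the sections that follow.
  private def convertToSolrQuery(predicates: AnalyzedPredicates,
                                 indexedColumns: Set[String]): AnalyzedPredicates = ???
}
```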
For all top-level filters: if a filter can be converted into a Solr query, we convert it and mark it as handled by Cassandra. All other filters will be applied within Spark.
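A sketch of that split, using Spark's org.apache.spark.sql.sources.Filter; the convertibleToSolr test is a simplified assumption (it only checks that every referenced column is Solr-indexed), not the connector's real conversion logic.

```scala
import org.apache.spark.sql.sources.Filter

object TopLevelFilterSplit {
  // Split the top-level filters into those handed to Cassandra/Solr and those
  // Spark must evaluate itself.
  def partitionTopLevelFilters(
      filters: Seq[Filter],
      solrIndexedColumns: Set[String]): (Seq[Filter], Seq[Filter]) = {

    // Stand-in convertibility check: every referenced column must be Solr-indexed.
    def convertibleToSolr(filter: Filter): Boolean =
      filter.references.nonEmpty && filter.references.forall(solrIndexedColumns.contains)

    filters.partition(convertibleToSolr) // (handledByCassandra, handledBySpark)
  }
}
```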
Whenever we have an attribute filter we don't need to do an IS_NOT_NULL check. This also helps when we remove partition key restrictions, because we don't keep useless IsNotNull filters which would generate bad Solr queries.
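A sketch of that pruning, under the assumption that any other restriction on a column makes its IsNotNull redundant; names and structure are illustrative.

```scala
import org.apache.spark.sql.sources.{Filter, IsNotNull}

object IsNotNullPruning {
  // Any concrete restriction on a column already implies the column is not null,
  // so matching IsNotNull filters can be dropped instead of being turned into
  // useless (or invalid) Solr clauses.
  def dropRedundantIsNotNulls(filters: Set[Filter]): Set[Filter] = {
    val restrictedElsewhere: Set[String] = filters
      .filterNot(_.isInstanceOf[IsNotNull])
      .flatMap(_.references.toSet)

    filters.filterNot {
      case IsNotNull(attribute) => restrictedElsewhere.contains(attribute)
      case _                    => false
    }
  }
}
```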
Returns all predicates that can be treated as a single partition restriction. Follows the same rules as the basic Cassandra predicate rules in the SCC. If no single partition restriction can be found, returns nothing.
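A sketch of that lookup, assuming (as the basic Cassandra rules do) that every partition-key column must be pinned by an EqualTo or In predicate; the helper name and signature are illustrative.

```scala
import org.apache.spark.sql.sources.{EqualTo, Filter, In}

object SinglePartitionRestriction {
  // Returns the predicates that together pin a single partition, or None if
  // any partition-key column is left unrestricted.
  def find(filters: Set[Filter],
           partitionKeyColumns: Seq[String]): Option[Set[Filter]] = {

    // Map each restricted column to the predicate restricting it.
    val restrictionsByColumn: Map[String, Filter] = filters.collect {
      case f @ EqualTo(attribute, _) => (attribute, f: Filter)
      case f @ In(attribute, _)      => (attribute, f: Filter)
    }.toMap

    if (partitionKeyColumns.forall(restrictionsByColumn.contains))
      Some(partitionKeyColumns.map(restrictionsByColumn).toSet)
    else
      None
  }
}
```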
Unfortunately, the easiest current way to remotely determine which columns have been indexed by Solr is to read the schema.xml. To obtain it we check Cassandra's solr_admin table and pull the text of schema.xml.bak. schema.xml.bak is the current "live" schema, while schema.xml is the schema that will be applied on the next refresh.
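A sketch of that read using the DataStax Java driver; the solr_admin.solr_resources layout (core_name, resource_name, resource_value blob) and the "keyspace.table" core-name format are assumptions about DSE, not verified here.

```scala
import com.datastax.oss.driver.api.core.CqlSession
import com.datastax.oss.driver.api.core.cql.SimpleStatement

object LiveSolrSchema {
  // Pull the text of schema.xml.bak (the live schema) for the given core.
  def fetch(session: CqlSession, keyspace: String, table: String): Option[String] = {
    val statement = SimpleStatement.newInstance(
      "SELECT blobAsText(resource_value) AS schema_xml " +
        "FROM solr_admin.solr_resources " +
        "WHERE core_name = ? AND resource_name = 'schema.xml.bak'",
      s"$keyspace.$table")

    Option(session.execute(statement).one()).map(_.getString("schema_xml"))
  }
}
```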
Checks that the filter and all of its dependent filters reference only Solr-indexed columns.
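A sketch of that check, walking the children of And/Or/Not and verifying that each leaf references only Solr-indexed columns; the helper name is illustrative.

```scala
import org.apache.spark.sql.sources.{And, Filter, Not, Or}

object IndexedFilterCheck {
  // Recursively verify that a filter, and every filter it depends on,
  // references only Solr-indexed columns.
  def fullyIndexed(filter: Filter, solrIndexedColumns: Set[String]): Boolean = filter match {
    case And(left, right) => fullyIndexed(left, solrIndexedColumns) && fullyIndexed(right, solrIndexedColumns)
    case Or(left, right)  => fullyIndexed(left, solrIndexedColumns) && fullyIndexed(right, solrIndexedColumns)
    case Not(child)       => fullyIndexed(child, solrIndexedColumns)
    case leaf             => leaf.references.forall(solrIndexedColumns.contains)
  }
}
```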
Sometimes the Java String representation of a value is not what Solr expects, so we need to do conversions. Additionally, we need to escape that converted string for JSON so we can pass it through to Solr.
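An illustrative sketch of both steps: converting a value to the string form Solr expects (dates as ISO-8601) and then JSON-escaping the result. The conversions shown are assumptions, not the connector's full rules.

```scala
import java.time.Instant
import java.util.{Date, UUID}

object SolrValueEncoding {
  // Convert a value to the string form Solr expects. Only a couple of
  // representative cases are shown here.
  def toSolrString(value: Any): String = value match {
    case d: Date => Instant.ofEpochMilli(d.getTime).toString // ISO-8601, e.g. 1970-01-01T00:00:00Z
    case u: UUID => u.toString
    case other   => other.toString
  }

  // Escape the converted string so it can be embedded safely in the JSON
  // document passed through to Solr.
  def escapeForJson(s: String): String = s.flatMap {
    case '"'  => "\\\""
    case '\\' => "\\\\"
    case c    => c.toString
  }

  // Example usage: a Date becomes an ISO timestamp, then gets JSON-escaped.
  val encodedExample: String = escapeForJson(toSolrString(new Date(0)))
}
```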