com.datastax.spark.connector.rdd

ReadConf

case class ReadConf(splitCount: Option[Int] = None, splitSizeInMB: Int = ReadConf.SplitSizeInMBParam.default, fetchSizeInRows: Int = ..., consistencyLevel: ConsistencyLevel = ..., taskMetricsEnabled: Boolean = ReadConf.TaskMetricParam.default, throughputMiBPS: Option[Double] = None, readsPerSec: Option[Int] = ReadConf.ReadsPerSecParam.default, parallelismLevel: Int = ..., executeAs: Option[String] = None) extends Product with Serializable

Read settings for RDDs.

splitCount

number of partitions to divide the data into; unset by default

splitSizeInMB

size of Cassandra data to be read in a single Spark task; determines the number of partitions, but ignored if splitCount is set
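The precedence between splitCount and splitSizeInMB can be sketched as follows. This is a simplified stand-in, not the connector's actual partitioning code; the table-size estimate is a hypothetical input:

```scala
// Simplified stand-in for the documented precedence (not the connector's code):
// an explicit splitCount wins; otherwise the partition count is derived from
// the estimated table size and splitSizeInMB.
case class ReadSettings(splitCount: Option[Int] = None, splitSizeInMB: Int = 64)

def partitionCount(settings: ReadSettings, estimatedTableSizeInMB: Long): Int =
  settings.splitCount.getOrElse(
    math.max(1, (estimatedTableSizeInMB / settings.splitSizeInMB).toInt)
  )

// splitCount set: used directly, splitSizeInMB ignored
assert(partitionCount(ReadSettings(splitCount = Some(10)), 1024L) == 10)
// splitCount unset: 1024 MB / 64 MB per split = 16 partitions
assert(partitionCount(ReadSettings(), 1024L) == 16)
```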

fetchSizeInRows

number of CQL rows to fetch in a single round-trip to Cassandra

consistencyLevel

consistency level for reads (default LOCAL_ONE); a higher consistency level disables data locality

taskMetricsEnabled

whether to enable task metrics updates (requires Spark 1.2+)

readsPerSec

maximum read throughput allowed per core, in requests per second, when joining an RDD with a Cassandra table (the joinWithCassandraTable operation); also used by enterprise integrations
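How a per-core readsPerSec cap translates into request pacing can be sketched as follows; this is an assumed model of the throttling behavior, not the connector's implementation:

```scala
// Hedged sketch (assumed behavior, not the connector's implementation):
// a per-core readsPerSec cap implies a minimum spacing between requests.
def minDelayMillis(readsPerSec: Option[Int]): Long = readsPerSec match {
  case Some(rate) if rate > 0 => 1000L / rate // e.g. 200 reads/s -> 5 ms apart
  case _                      => 0L           // unset: unthrottled
}

assert(minDelayMillis(Some(200)) == 5L)
assert(minDelayMillis(None) == 0L)
```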

Linear Supertypes
Serializable, Serializable, Product, Equals, AnyRef, Any

Instance Constructors

  1. new ReadConf(splitCount: Option[Int] = None, splitSizeInMB: Int = ReadConf.SplitSizeInMBParam.default, fetchSizeInRows: Int = ..., consistencyLevel: ConsistencyLevel = ..., taskMetricsEnabled: Boolean = ReadConf.TaskMetricParam.default, throughputMiBPS: Option[Double] = None, readsPerSec: Option[Int] = ReadConf.ReadsPerSecParam.default, parallelismLevel: Int = ..., executeAs: Option[String] = None)

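In application code a ReadConf is typically built with named arguments and attached to a scan via withReadConf, e.g. sc.cassandraTable("ks", "table").withReadConf(conf) (keyspace and table names here are placeholders). The same settings can also be supplied as SparkConf properties; the keys below follow the connector's spark.cassandra.input.* naming, which varies slightly across connector versions, so verify them against your version's reference:

```scala
// Assumed property keys (check your connector version's configuration reference):
val cassandraInputProps = Map(
  "spark.cassandra.input.split.sizeInMB"    -> "64",        // splitSizeInMB
  "spark.cassandra.input.fetch.sizeInRows"  -> "1000",      // fetchSizeInRows
  "spark.cassandra.input.consistency.level" -> "LOCAL_ONE", // consistencyLevel
  "spark.cassandra.input.metrics"           -> "true"       // taskMetricsEnabled
)

assert(cassandraInputProps("spark.cassandra.input.fetch.sizeInRows").toInt == 1000)
```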

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  5. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  6. val consistencyLevel: ConsistencyLevel

    consistency level for reads (default LOCAL_ONE); a higher consistency level disables data locality

  7. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  8. val executeAs: Option[String]

  9. val fetchSizeInRows: Int

    number of CQL rows to fetch in a single round-trip to Cassandra

  10. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  11. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  12. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  13. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  14. final def notify(): Unit

    Definition Classes
    AnyRef
  15. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  16. val parallelismLevel: Int

  17. val readsPerSec: Option[Int]

    maximum read throughput allowed per core, in requests per second, when joining an RDD with a Cassandra table (the joinWithCassandraTable operation); also used by enterprise integrations

  18. val splitCount: Option[Int]

    number of partitions to divide the data into; unset by default

  19. val splitSizeInMB: Int

    size of Cassandra data to be read in a single Spark task; determines the number of partitions, but ignored if splitCount is set

  20. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  21. val taskMetricsEnabled: Boolean

    whether to enable task metrics updates (requires Spark 1.2+)

  22. val throughputMiBPS: Option[Double]

  23. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  24. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  25. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
