Table properties

A list of CQL table properties and their syntax.

CQL supports Cassandra table properties, such as comments and compaction options, listed in the following table.

In CQL commands, such as CREATE TABLE, you specify properties in either the name-value pair or the collection map format. The name-value pair syntax is:

name = value AND name = value

The collection map format, used by compaction and compression properties, is:

{ name : value, name : value, name : value ... }

Enclose property values that are strings in single quotation marks.
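For example, a CREATE TABLE statement can combine both formats; the table and column names here are illustrative:

CREATE TABLE users (
  user_id text PRIMARY KEY,
  name text
) WITH comment = 'user accounts'
  AND read_repair_chance = 0.2
  AND compaction = { 'class' : 'LeveledCompactionStrategy' };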

See CREATE TABLE for examples.

CQL properties
bloom_filter_fp_chance
  Desired false-positive probability for SSTable Bloom filters. See Bloom filter below.
  Default: 0.01 for SizeTieredCompactionStrategy, 0.1 for LeveledCompactionStrategy

caching
  Optimizes the use of cache memory without manual tuning. Set caching to one of the following values:
    • all
    • keys_only
    • rows_only
    • none
  Use row caching with caution. Cassandra weights the cached data by size and access frequency. Use this parameter to specify a key or row cache instead of a table cache, as in earlier versions.
  Default: keys_only

comment
  A human-readable comment describing the table. See comments below.
  Default: N/A

compaction
  Sets the compaction strategy for the table. See compaction below.
  Default: SizeTieredCompactionStrategy

compression
  The compression algorithm to use. Valid values are LZ4Compressor (available in Cassandra 1.2.2 and later), SnappyCompressor, and DeflateCompressor. See compression below.
  Default: SnappyCompressor

dclocal_read_repair_chance
  Specifies the probability of read repairs being invoked over all replicas in the current data center.
  Default: 0.0

gc_grace_seconds
  Specifies the time to wait before garbage collecting tombstones (deletion markers). The default value allows a great deal of time for consistency to be achieved prior to deletion. In many deployments this interval can be reduced, and in a single-node cluster it can safely be set to zero.
  Default: 864000 (10 days)

populate_io_cache_on_flush
  Adds newly flushed or compacted SSTables to the operating system page cache, potentially evicting other cached data to make room. Enable when all data in the table is expected to fit in memory. See also the global option, compaction_preheat_key_cache.
  Default: false

read_repair_chance
  Specifies the probability with which read repairs should be invoked on non-quorum reads. The value must be between 0 and 1.
  Default: 0.1

replicate_on_write
  Applies only to counter tables. When set to true, replicates writes to all affected replicas regardless of the consistency level specified by the client for a write request. For counter tables, this should always be set to true.
  Default: true
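These properties can also be changed on an existing table with ALTER TABLE. A minimal sketch, with an illustrative table name and values:

ALTER TABLE users
  WITH gc_grace_seconds = 3600
  AND caching = 'keys_only';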

Bloom filter 

Desired false-positive probability for SSTable Bloom filters. When data is requested, the Bloom filter checks if the row exists before doing disk I/O.

  • Valid range: 0 to 1.0
  • Valid values:

    0: Enables the unmodified (effectively the largest possible) Bloom filter.

    1.0: Disables the Bloom filter.

  • Recommended setting: 0.1. A higher value yields diminishing returns.
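For example, to request a 10% false-positive rate on an existing table (the table name is illustrative):

ALTER TABLE users WITH bloom_filter_fp_chance = 0.1;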

comments 

Comments can be used to document CQL statements in your application code. Single-line comments begin with a double dash (--) or a double slash (//) and extend to the end of the line. Multi-line comments are enclosed in /* and */.
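For example (the statement shown is illustrative):

-- select all users (single-line comment)
SELECT * FROM users;  // another single-line comment

/* a multi-line comment
   spanning two lines */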

compaction 

Sets the compaction strategy for the table. The available compaction strategies are:
  • SizeTieredCompactionStrategy: The default compaction strategy and the only compaction strategy available in releases earlier than Cassandra 1.0. This strategy triggers a minor compaction whenever there are a number of similarly sized SSTables on disk (as configured by the subproperty min_threshold). Using this strategy causes bursts in I/O activity while a compaction is in process, followed by longer and longer lulls in compaction activity as SSTable files grow larger in size. These I/O bursts can negatively affect read-heavy workloads, but typically do not impact write performance. Watching disk capacity is also important when using this strategy, as compactions can temporarily double the size of SSTables for a table while a compaction is in progress.
  • LeveledCompactionStrategy: The leveled compaction strategy creates SSTables of a fixed, relatively small size (5 MB by default) that are grouped into levels. Within each level, SSTables are guaranteed to be non-overlapping. Each level (L0, L1, L2, and so on) is 10 times as large as the previous. Disk I/O is more uniform and predictable because SSTables are continuously compacted into progressively larger levels. At each level, row keys are merged into non-overlapping SSTables. This can improve read performance, because Cassandra can determine which SSTables in each level to check for the existence of row key data. This compaction strategy is modeled after Google's leveldb implementation. The articles When to Use Leveled Compaction and Leveled Compaction in Apache Cassandra provide more details.

Also use the compaction subproperties.
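For example, a sketch of switching a table to size-tiered compaction with a custom min_threshold (the table name and value are illustrative):

ALTER TABLE users WITH compaction =
  { 'class' : 'SizeTieredCompactionStrategy', 'min_threshold' : 6 };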

compression 

The compression algorithm to use. Valid values are LZ4Compressor (available in Cassandra 1.2.2 and later), SnappyCompressor, and DeflateCompressor. Use an empty string ('') to disable compression, as shown in the example of how to use subproperties. Choosing the right compressor depends on your requirements for space savings versus read performance. LZ4 (Cassandra 1.2.2 and later) is fastest to decompress, followed by Snappy, then by Deflate. Compression effectiveness is inversely correlated with decompression speed. The extra compression from Deflate or Snappy is not enough to make up for the decreased performance on general-purpose workloads, but for archival data they may be worth considering.

Developers can also implement custom compression classes using the org.apache.cassandra.io.compress.ICompressor interface. Specify the full class name as a "string constant". Also use the compression subproperties.
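For example, assuming the sstable_compression subproperty described with the compression subproperties, a table could be switched to LZ4, or compression could be disabled with an empty string (the table name is illustrative):

ALTER TABLE users WITH compression =
  { 'sstable_compression' : 'LZ4Compressor' };

ALTER TABLE users WITH compression =
  { 'sstable_compression' : '' };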