cassandra.yaml configuration file

The cassandra.yaml file is the main configuration file for DataStax Enterprise.

dse.yaml

The location of the dse.yaml file depends on the type of installation:
Package installations /etc/dse/dse.yaml
Tarball installations installation_location/resources/dse/conf/dse.yaml

cassandra-topology.properties

The location of the cassandra-topology.properties file depends on the type of installation:
Package installations /etc/dse/cassandra/cassandra-topology.properties
Tarball installations installation_location/resources/cassandra/conf/cassandra-topology.properties

cassandra-rackdc.properties

The location of the cassandra-rackdc.properties file depends on the type of installation:
Package installations /etc/dse/cassandra/cassandra-rackdc.properties
Tarball installations installation_location/resources/cassandra/conf/cassandra-rackdc.properties

cassandra.yaml

The location of the cassandra.yaml file depends on the type of installation:
Package installations /etc/dse/cassandra/cassandra.yaml
Tarball installations installation_location/resources/cassandra/conf/cassandra.yaml

The cassandra.yaml file is the main configuration file for DataStax Enterprise. The dse.yaml file is the primary configuration file for security, DSE Search, DSE Graph, and DSE Analytics.

Important: After changing properties in the cassandra.yaml file, you must restart the node for the changes to take effect.

Syntax

For the properties in each section, the parent setting has zero spaces. Each child entry requires at least two spaces. Adhere to the YAML syntax and retain the spacing.

  • Default values that are not defined are shown as Default: none.
  • Internally defined default values are described.
    Note: Default values can be defined internally, commented out, or have implementation dependencies on other properties in the cassandra.yaml file. Additionally, some commented-out values may not match the actual default values; these commented-out values are recommended alternatives to the defaults.

Organization

The configuration properties are grouped into the following sections:

Quick start properties

The minimal properties needed for configuring a cluster.

cluster_name: 'Test Cluster'
listen_address: localhost
# listen_interface: wlan0
# listen_interface_prefer_ipv6: false
cluster_name
The name of the cluster. This setting prevents nodes in one logical cluster from joining another. All nodes in a cluster must have the same value.

Default: 'Test Cluster'

listen_address
The IP address or hostname that the database binds to for connecting this node to other nodes.
Warning:
  • Never set listen_address to 0.0.0.0.
  • Set listen_address or listen_interface, do not set both.

Default: localhost

listen_interface
The interface that the database binds to for connecting to other nodes. Interfaces must correspond to a single address. IP aliasing is not supported.
Warning: Set listen_address or listen_interface, not both.

Default: wlan0

listen_interface_prefer_ipv6
Whether to use IPv4 or IPv6 when the interface is specified by name.
  • false - Use first IPv4 address.
  • true - Use first IPv6 address.
When only a single address is used, that address is selected without regard to this setting.

Default: false

Default directories

data_file_directories:
     - /var/lib/cassandra/data
commitlog_directory: /var/lib/cassandra/commitlog
cdc_raw_directory: /var/lib/cassandra/cdc_raw
hints_directory: /var/lib/cassandra/hints
saved_caches_directory: /var/lib/cassandra/saved_caches

If you have changed any of the default directories during installation, set these properties to the new locations. Make sure you have root access.

data_file_directories
The directory (or directories) where table data is stored on disk. The database distributes data evenly across the configured locations, subject to the granularity of the configured compaction strategy.
Tip: For production, DataStax recommends RAID 0 and SSDs.

Default: - /var/lib/cassandra/data
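
For a node with multiple data disks, list one directory per disk; the database distributes data across them as described above. A minimal sketch, using hypothetical mount points:

data_file_directories:
    - /mnt/disk1/cassandra/data
    - /mnt/disk2/cassandra/data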

commitlog_directory
The directory where the commit log is stored.

For optimal write performance, place the commit log on a separate disk partition, or ideally on a separate physical device from the data file directories. Because the commit log is append-only, a hard disk drive (HDD) is acceptable.

Default: /var/lib/cassandra/commitlog

cdc_raw_directory
The directory where the change data capture (CDC) commit log segments are stored on flush. DataStax recommends a physical device that is separate from the data directories.

Default: /var/lib/cassandra/cdc_raw

hints_directory
The directory in which hints are stored.

Default: /var/lib/cassandra/hints

saved_caches_directory
The directory location where table key and row caches are stored.

Default: /var/lib/cassandra/saved_caches

Commonly used properties

Properties most frequently used when configuring DataStax Enterprise.

Before starting a node for the first time, DataStax recommends that you carefully evaluate your requirements.

Common initialization properties

commit_failure_policy: stop
# disk_optimization_strategy: ssd
disk_failure_policy: stop
endpoint_snitch: com.datastax.bdp.snitch.DseSimpleSnitch
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "127.0.0.1"
enable_user_defined_functions: false
enable_scripted_user_defined_functions: false
enable_user_defined_functions_threads: true
user_defined_function_warn_micros: 500
user_defined_function_fail_micros: 10000
user_defined_function_warn_heap_mb: 200
user_defined_function_fail_heap_mb: 500
user_function_timeout_policy: die
Note: Be sure to set the properties in the Quick start section as well.
commit_failure_policy
Policy for commit disk failures:
  • die - Shut down the node and kill the JVM, so the node can be replaced.
  • stop - Shut down the node, leaving the node effectively dead, available for inspection using JMX.
  • stop_commit - Shut down the commit log, letting writes collect but continuing to service reads.
  • ignore - Ignore fatal errors and let the batches fail.

Default: stop

disk_optimization_strategy
The strategy for optimizing disk reads.
  • ssd - Solid state disks.
  • spinning - Spinning disks.

Default: ssd

disk_failure_policy
How the database responds to disk failure. Recommended settings: stop or best_effort.
  • die - Shut down gossip and client transports, and kill the JVM for any file system errors or single SSTable errors, so the node can be replaced.
  • stop_paranoid - Shut down the node, even for single SSTable errors.
  • stop - Shut down the node, leaving the node effectively dead, but available for inspection using JMX.
  • best_effort - Stop using the failed disk and respond to requests based on the remaining available SSTables. This setting can return obsolete data at consistency level ONE.
  • ignore - Ignore fatal errors and let the requests fail; all file system errors are logged but otherwise ignored.

Default: stop

endpoint_snitch
A class that implements the IEndpointSnitch interface. The database uses the snitch to locate nodes and route requests.
Important: Use only snitch implementations bundled with DSE.
  • DseSimpleSnitch

    Appropriate only for development deployments. Proximity is determined by DSE workload, which places transactional, analytics, and search nodes into their separate datacenters. Does not recognize datacenter or rack information.

  • GossipingPropertyFileSnitch

    Recommended for production. Reads the rack and datacenter for the local node from the cassandra-rackdc.properties file and propagates these values to other nodes via gossip. For migration from the PropertyFileSnitch, uses the cassandra-topology.properties file if it is present. A configuration sketch follows this entry.

  • PropertyFileSnitch

    Determines proximity by rack and datacenter that are explicitly configured in cassandra-topology.properties file.

  • Ec2Snitch

    For EC2 deployments in a single region. Loads region and availability zone information from the Amazon EC2 API. The region is treated as the datacenter and the availability zone as the rack. Because only private IP addresses are used, Ec2Snitch does not work across multiple regions.

  • Ec2MultiRegionSnitch

    Uses the public IP as the broadcast_address to allow cross-region connectivity. This means you must also set seed addresses to the public IP and open the storage_port or ssl_storage_port on the public IP firewall. For intra-region traffic, the database switches to the private IP after establishing a connection.

  • RackInferringSnitch

    Proximity is determined by rack and datacenter, which are assumed to correspond to the 3rd and 2nd octet of each node's IP address, respectively. Best used as an example for writing a custom snitch class (unless this happens to match your deployment conventions).

  • GoogleCloudSnitch

    Use for deployments on Google Cloud Platform across one or more regions. The region is treated as a datacenter and the availability zones are treated as racks within the datacenter. All communication occurs over private IP addresses within the same logical network.

  • CloudstackSnitch

    Use the CloudstackSnitch for Apache Cloudstack environments.


Default: com.datastax.bdp.snitch.DseSimpleSnitch
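
For production clusters, which typically use GossipingPropertyFileSnitch, the snitch is set in cassandra.yaml and each node's location is declared in its cassandra-rackdc.properties file. A minimal sketch, with hypothetical datacenter and rack names:

endpoint_snitch: GossipingPropertyFileSnitch

# In cassandra-rackdc.properties on this node (hypothetical values):
# dc=DC1
# rack=RACK1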

seed_provider
The addresses of hosts that are designated as contact points in the cluster. A joining node contacts one of the nodes in the seeds list to learn the topology of the ring.
Important: Use only seed provider implementations bundled with DSE.
  • class_name - The class that handles the seed logic. It can be customized, but this is typically not required.

    Default: org.apache.cassandra.locator.SimpleSeedProvider

  • - seeds - A comma-delimited list of addresses that gossip uses for bootstrapping new nodes joining a cluster. If your cluster includes multiple nodes, you must change the list from the default value to the IP address of one of the nodes. An example follows this entry.

    Default: "127.0.0.1"

    Attention: Making every node a seed node is not recommended because of increased maintenance and reduced gossip performance. Gossip optimization is not critical, but it is recommended to use a small seed list (approximately three nodes per datacenter).

Default: org.apache.cassandra.locator.SimpleSeedProvider
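
For example, a three-datacenter cluster following the small-seed-list recommendation above might designate one node per datacenter. A minimal sketch, assuming hypothetical addresses:

seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "10.10.1.1,10.20.1.1,10.30.1.1"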

Common compaction settings

compaction_throughput_mb_per_sec: 16
compaction_large_partition_warning_threshold_mb: 100
compaction_throughput_mb_per_sec
The rate, in MB per second, at which to throttle compaction for the entire system. The faster the database inserts data, the faster the system must compact in order to keep the SSTable count down.
  • Recommended value: 16 to 32 times the rate of write throughput in MB/second.
  • 0 - Disable compaction throttling.

Default: 16

compaction_large_partition_warning_threshold_mb
The partition size threshold before logging a warning.

Default: 100

Memtable settings

# memtable_space_in_mb: 2048
memtable_allocation_type: heap_buffers
memtable_flush_writers: 4
memtable_space_in_mb
Total permitted memory to use for memtables. When this threshold is exceeded, writes are not accepted until a flush completes. A flush is triggered based on memtable_cleanup_threshold. When not set:
  • If the deprecated settings are not present, the calculated default is 1/4 of the heap size.
  • If the deprecated memtable_heap_space_in_mb or memtable_offheap_space_in_mb settings are present, an error is logged and the appropriate value is used based on memtable_allocation_type.
Tip: See Tuning the Java heap.

Default: 2048
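
For example, to set the memtable threshold explicitly instead of relying on the calculated default (1/4 of the heap, which on an 8 GB heap works out to 2048 MB), uncomment the setting:

memtable_space_in_mb: 2048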

memtable_cleanup_threshold
Ratio used for automatic memtable flush.
With the caveat below in mind, the memtable_cleanup_threshold setting is valid in DSE 6.7 and 6.8; this setting is removed in Apache Cassandra® 4.0.

Generally, the calculated default value is appropriate and does not need adjusting. DataStax recommends contacting the DataStax Services team before changing this value.

Default: 0.2

memtable_allocation_type
The method the database uses to allocate and manage memtable memory.
  • offheap_objects - Native memory, eliminating NIO buffer heap overhead.
  • heap_buffers - On heap NIO (non-blocking I/O) buffers.
  • offheap_buffers - Off heap (direct) NIO buffers.

Default: offheap_objects

memtable_flush_writers
The number of memtable flush writer threads per disk and the total number of memtables that can be flushed concurrently; flushing is generally a combination of compute- and I/O-bound. Memtable flushing is more CPU efficient than memtable ingest. A single thread can keep up with the ingest rate of a server on a single fast disk, until the server temporarily becomes I/O bound under contention, typically with compaction. Generally, the default value is appropriate and does not need adjusting.

Default for SSDs: 4

Default for HDDs: 2

memtable_heap_space_in_mb (deprecated)
Note: This setting is deprecated. Use memtable_space_in_mb instead.
memtable_offheap_space_in_mb (deprecated)
Note: This setting is deprecated. Use memtable_space_in_mb instead.

Common automatic backup settings

incremental_backups: false
snapshot_before_compaction: false
snapshot_before_dropping_column: false
incremental_backups
Enables incremental backups.
  • true - Enable incremental backups to create a hard link to each SSTable flushed or streamed locally in a backups subdirectory of the keyspace data. Incremental backups enable storing backups off site without transferring entire snapshots.
    Important: The database does not automatically clear incremental backup files. DataStax recommends setting up a process to clear incremental backup hard links each time a new snapshot is created.
  • false - Do not enable incremental backups.

Default: false

snapshot_before_compaction
Enables a snapshot before each compaction. A snapshot is useful to back up data when there is a data format change.
Important: Be careful using this option; the database does not clean up older snapshots automatically.

Default: false

snapshot_before_dropping_column
When enabled, every time a user drops one or more columns from a table, a snapshot is created on each node in the cluster before the schema change is applied. The snapshots have the same name on each node, for example: auto-snapshot_drop-column-columnname_20200515143511000. The name includes the name of the dropped column and the timestamp (UTC) when the column was dropped.
Important: The database does not automatically clear these snapshots. DataStax recommends setting up a process to remove them once they are no longer needed.

Default: false

Performance tuning properties

Tuning performance and system resource utilization, including commit log, compaction, memory, disk I/O, CPU, reads, and writes.

Performance tuning properties include:

Commit log settings

commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000
# commitlog_sync_group_window_in_ms: 1000
# commitlog_sync_batch_window_in_ms: 2  # deprecated
commitlog_segment_size_in_mb: 32
# commitlog_total_space_in_mb: 8192
# commitlog_compression:
#   - class_name: LZ4Compressor
#     parameters:
#         -
commitlog_sync
Commit log synchronization method:
  • periodic - Send ACK signal for writes immediately. Commit log is synced every commitlog_sync_period_in_ms.
  • group - Send ACK signal for writes after the commit log has been flushed to disk. Wait up to commitlog_sync_group_window_in_ms between flushes.
  • batch - Send ACK signal for writes after the commit log has been flushed to disk. Each incoming write triggers the flush task.

Default: periodic

commitlog_sync_period_in_ms
Use with commitlog_sync: periodic. Time interval between syncing the commit log to disk. Periodic syncs are acknowledged immediately.

Default: 10000

commitlog_sync_group_window_in_ms
Use with commitlog_sync: group. The time that the database waits between flushing the commit log to disk. DataStax recommends using group instead of batch.

Default: 1000
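
A minimal sketch of switching to group synchronization, reusing the commented-out window value from the sample above; commitlog_sync_period_in_ms then no longer applies:

commitlog_sync: group
commitlog_sync_group_window_in_ms: 1000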

commitlog_sync_batch_window_in_ms
Deprecated. Use with commitlog_sync: batch. The maximum time that queries may be batched together.

Default: commented out (2)

commitlog_segment_size_in_mb
The size of an individual commitlog file segment. A commitlog segment may be archived, deleted, or recycled after all its data has been flushed to SSTables. This data can potentially include commitlog segments from every table in the system. The default size is usually suitable, but for commitlog archiving you might want a finer granularity; 8 or 16 MB is reasonable.
Restriction:
If you set max_mutation_size_in_kb explicitly, then you must set commitlog_segment_size_in_mb to at least:
2 * max_mutation_size_in_kb / 1024
The value must be positive and less than 2048.

Default: 32

max_mutation_size_in_kb
The maximum size of a mutation before the mutation is rejected. Before increasing the commitlog segment size, investigate why mutations are larger than expected. Look for underlying issues with access patterns and the data model, because increasing the commitlog segment size is a limited fix.

Default: calculated (commitlog_segment_size_in_mb * 1024) / 2
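
As a worked example of this relationship: with the default commitlog_segment_size_in_mb: 32, the calculated maximum mutation size is (32 * 1024) / 2 = 16384 KB. Conversely, explicitly raising the limit to a hypothetical 32768 KB requires segments of at least 2 * 32768 / 1024 = 64 MB:

commitlog_segment_size_in_mb: 64
max_mutation_size_in_kb: 32768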

commitlog_total_space_in_mb
Disk usage threshold for commit logs before triggering the database flushing memtables to disk. If the total space used by all commit logs exceeds this threshold, the database flushes memtables to disk for the oldest commitlog segments to reclaim disk space by removing those log segments from the commit log. This flushing reduces the amount of data to replay on start-up, and prevents infrequently updated tables from keeping commitlog segments indefinitely. If the commitlog_total_space_in_mb is small, the result is more flush activity on less-active tables.

Default for 64-bit JVMs: calculated (the smaller of 8192 and 25% of the total space of the commit log volume)

Default for 32-bit JVMs: calculated (the smaller of 32 and 25% of the total space of the commit log volume)

commitlog_compression
The compressor to use if the commit log is compressed. To make changes, uncomment the commitlog_compression section and set these options:
# commitlog_compression:
#   - class_name: LZ4Compressor
#     parameters:
#         -
  • class_name - LZ4Compressor, Snappy, or Deflate.
  • parameters - Optional parameters for the compressor.
When not set, the commit log is written uncompressed.

Default: commented out
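
A minimal sketch of the section uncommented, selecting LZ4 with no optional parameters:

commitlog_compression:
    - class_name: LZ4Compressor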

Lightweight transactions (LWT) settings

# concurrent_lw_transactions: 128
# max_pending_lw_transactions: 10000
concurrent_lw_transactions
Maximum number of permitted concurrent lightweight transactions (LWT).
  • A higher number might improve throughput if non-contending LWTs are in heavy use, but will use more memory and might be less successful with contention.
  • When not set, the default value is 8x the number of TPC cores. This default value is appropriate for most environments.

Default: calculated (8 x the number of TPC cores)

max_pending_lw_transactions
Maximum number of lightweight transactions (LWT) in the queue before the node reports OverloadedException for LWTs.

Default: 10000

Change-data-capture (CDC) space settings

cdc_enabled: false
cdc_total_space_in_mb: 4096
cdc_free_space_check_interval_ms: 250

See also cdc_raw_directory.

cdc_enabled
Enables change data capture (CDC) functionality on a per-node basis. This setting modifies the logic used for write path allocation rejection.
  • true - Use CDC functionality: reject mutations that include a CDC-enabled table when the space limit threshold in cdc_raw_directory is reached.
  • false - Standard behavior: never reject.

Default: false
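
A minimal sketch of enabling CDC on a node; note that individual tables must also have the CDC table property enabled before their mutations are tracked:

cdc_enabled: true
cdc_total_space_in_mb: 4096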

cdc_total_space_in_mb
Total space to use for change-data-capture (CDC) logs on disk. If space allocated for CDC exceeds this value, the database throws WriteTimeoutException on mutations that include CDC-enabled tables. A CDCCompactor (a consumer) is responsible for parsing the raw CDC logs and deleting them when parsing is completed.

Default: calculated (4096 or 1/8th of the total space of the drive where the cdc_raw_directory resides)

cdc_free_space_check_interval_ms
Interval between checks for new available space for CDC-tracked tables when the cdc_total_space_in_mb threshold is reached and the CDCCompactor is running behind or experiencing back pressure.

Default: 250

Compaction settings

# concurrent_validations: 0
concurrent_materialized_view_builders: 2
sstable_preemptive_open_interval_in_mb: 50
# pick_level_on_streaming: false
Tip: See also compaction_throughput_mb_per_sec in the common compaction settings section.
concurrent_validations
Configures simultaneous repair validations.
  • values greater than 1 - Number of simultaneous repair validations to allow.
  • 0 - Unbounded.

Default: 0

concurrent_materialized_view_builders
Number of simultaneous materialized view builder tasks allowed to run concurrently. When a view is created, the node ranges are split into (num_processors x 4) builder tasks and submitted to this executor.

Default: 2

sstable_preemptive_open_interval_in_mb
The size of the SSTables to trigger preemptive opens. The compaction process opens SSTables before they are completely written and uses them in place of the prior SSTables for any range previously written. This process helps to smoothly transfer reads between the SSTables by reducing cache churn and keeps hot rows hot.
Important: A low value has a negative performance impact and will eventually cause heap pressure and GC activity. The optimal value depends on hardware and workload.

Default: 50

pick_level_on_streaming
The compaction level for streamed-in SSTables.
  • true - Streamed-in SSTables of tables using LeveledCompactionStrategy are placed on the same level as on the source node. For operational tasks like nodetool refresh or replacing a node, true improves performance for subsequent compaction work.
  • false - Streamed-in SSTables are placed in level 0.

Default: false

Cache and index settings

column_index_size_in_kb: 16
# file_cache_size_in_mb: 4096
# direct_reads_size_in_mb: 128
column_index_size_in_kb
Granularity of the index of rows within a partition. For huge rows, decrease this setting to improve seek time. Lower density nodes might benefit from decreasing this value to 4, 2, or 1.

Default: 16

file_cache_size_in_mb

DSE 6.7.0-6.7.2: Maximum memory for buffer pooling and the SSTable chunk cache. 32 MB is reserved for pooling buffers; the remaining memory is the cache for holding recent or frequently used index pages and uncompressed SSTable chunks. This pool is allocated off heap and is in addition to the memory allocated for the heap. Memory is allocated only when needed.

DSE 6.7.3 and later: The buffer pool is split into two pools; this setting defines the maximum memory for file buffers that are stored in the file cache, also known as the chunk cache. Memory is allocated only when needed but is not released. The other buffer pool is direct_reads_size_in_mb.


Default: calculated (0.5 of -XX:MaxDirectMemorySize)

direct_reads_size_in_mb
DSE 6.7.3 and later: Buffer pool is split into two pools; this setting defines the buffer pool for transient read operations. A buffer is typically used by a read operation and then returned to this pool when the operation is finished so that it can be reused by other operations. The other buffer pool is file_cache_size_in_mb.
Note: When not set, before DSE 6.7.9, the default direct_reads_size_in_mb value was calculated as 2 MB per TPC core thread, plus 2 MB shared by non-TPC threads, with a maximum value of 128 MB. In DSE 6.7.9 and later, the default is calculated as 4 MB per TPC core thread, plus 4 MB shared by non-TPC threads, with a maximum value of 128 MB.

Default: calculated

Streaming settings

# stream_throughput_outbound_megabits_per_sec: 200
# inter_dc_stream_throughput_outbound_megabits_per_sec: 200
# streaming_keep_alive_period_in_secs: 300
# streaming_connections_per_host: 1
stream_throughput_outbound_megabits_per_sec
Throttle for the throughput of all outbound streaming file transfers on a node. The database does mostly sequential I/O when streaming data during bootstrap or repair, which can saturate the network connection and degrade client (RPC) performance.

Default: 200

inter_dc_stream_throughput_outbound_megabits_per_sec
Throttle for all streaming file transfers between datacenters, applied in addition to the overall network stream throttle configured with stream_throughput_outbound_megabits_per_sec. When not set, the value is 200 Mbps.
Note: Set this to a value less than or equal to stream_throughput_outbound_megabits_per_sec, since inter-datacenter traffic is a subset of total throughput.

Default: 200

streaming_keep_alive_period_in_secs
Interval to send keep-alive messages to prevent reset connections during streaming. The stream session fails when a keep-alive message is not received for 2 keep-alive cycles.

Default: 300 (a stalled stream times out in 10 minutes)

streaming_connections_per_host
Maximum number of connections per host for streaming. Increase this value when you notice that joins are CPU-bound, rather than network-bound. For example, a few nodes with large files.

Default: 1

Fsync settings

trickle_fsync: true
trickle_fsync_interval_in_kb: 10240
trickle_fsync
When set to true, causes fsync to force the operating system to flush the dirty buffers after every trickle_fsync_interval_in_kb of data written. Enable this parameter to prevent sudden dirty buffer flushing from impacting read latencies. Recommended for use with SSDs, but not with HDDs.

Default: false

trickle_fsync_interval_in_kb
The amount of data, in kilobytes, written before fsync is forced.

Default: 10240

max_value_size_in_mb
The maximum size of any value in SSTables. SSTables are marked as corrupted when the threshold is exceeded.

Default: 256

Thread Per Core (TPC) parameters

# tpc_cores:
# tpc_io_cores: 
io_global_queue_depth: 128
tpc_cores
The number of concurrent CoreThreads. The CoreThreads are the main workers in a DSE 6.x node, and process various asynchronous tasks from their queue. If not set, the default is the number of cores (processors on the machine) minus one. Note that configuring tpc_cores affects the default value for tpc_io_cores.

To achieve optimal throughput and latency, for a given workload, set tpc_cores to half the number of CPUs (minimum) to double the number of CPUs (maximum). In cases where there are a large number of incoming client connections, increasing tpc_cores to more than the default usually results in CoreThreads receiving more CPU time.

Attention: DSE Search workloads only: set tpc_cores to the number of physical CPUs.

Default: commented out; defaults to the number of cores minus one.

tpc_io_cores
The subset of tpc_cores that process asynchronous IO tasks (that is, disk reads). Must be smaller than or equal to tpc_cores. Lower this value to decrease parallel disk IO requests.

Default: commented out; by default, calculated as min(io_global_queue_depth/4, tpc_cores)
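
For example, on a hypothetical 16-core node, the defaults resolve to tpc_cores: 15 (cores minus one) and tpc_io_cores: 15 (min(128/4, 15)). An explicit configuration that stays within the half-to-double CPU guidance above while reducing parallel disk IO might look like:

tpc_cores: 16
tpc_io_cores: 8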

io_global_queue_depth
Global IO queue depth used for reads when AIO is enabled, which is the default for SSDs. Set this to the optimal queue depth for the disk setup, as found with the fio tool.

Default: 128

NodeSync parameters

nodesync:
    rate_in_kb: 1024
By default, the NodeSync service runs on every node.
Tip: Manage the NodeSync service using nodetool subcommands.
rate_in_kb
The maximum bytes per second for data validation on the local node. The optimum validation rate for each node may vary.

Default: 1024

Advanced properties

Properties for advanced users or properties that are less commonly used.

Advanced initialization properties

batch_size_warn_threshold_in_kb: 64
batch_size_fail_threshold_in_kb: 640
unlogged_batch_across_partitions_warn_threshold: 10
# broadcast_address: 1.2.3.4
# listen_on_broadcast_address: false
# initial_token:
# num_tokens: 128
# allocate_tokens_for_local_replication_factor: 3
partitioner: org.apache.cassandra.dht.Murmur3Partitioner
tracetype_query_ttl: 86400
tracetype_repair_ttl: 604800
auto_bootstrap
This setting has been removed from the default configuration.
  • true - New (non-seed) nodes automatically migrate the correct data to themselves.
  • false - Use when initializing a fresh cluster without data.

Default: true

batch_size_warn_threshold_in_kb
Threshold to log a warning message when any multiple-partition batch size exceeds this value in kilobytes.
CAUTION: Increasing this threshold can lead to node instability.

Default: 64

batch_size_fail_threshold_in_kb
Threshold to fail and log WARN on any multiple-partition batch whose size exceeds this value. The default value is 10X the value of batch_size_warn_threshold_in_kb.

Default: 640

unlogged_batch_across_partitions_warn_threshold
Threshold to log a WARN message on any batches not of type LOGGED that span across more partitions than this limit.

Default: 10

broadcast_address
The public IP address this node uses to broadcast to other nodes outside the network or across regions in multiple-region EC2 deployments. If this property is commented out, the node uses the same IP address or hostname as listen_address. A node does not need a separate broadcast_address in a single-node or single-datacenter installation, or in an EC2-based network that supports automatic switching between private and public communication. It is necessary to set a separate listen_address and broadcast_address on a node with multiple physical network interfaces or other topologies where not all nodes have access to other nodes by their private IP addresses. For specific configurations, see the instructions for listen_address.

Default: listen_address

listen_on_broadcast_address
Enables the node to communicate on both interfaces.
  • true - If this node uses multiple physical network interfaces, set a unique IP address for broadcast_address
  • false - If this node is on a network that automatically routes between public and private networks, like Amazon EC2 does
Tip: See listen_address.

Default: false

initial_token
Enables single-node-per-token architecture, in which a node owns exactly one contiguous range in the ring space. Overrides num_tokens.
  • token - The initial token to start the contiguous range.
  • 1 - Disable single-node-per-token architecture.
Use this parameter if your installation is not using vnodes. Use it with num_tokens (vnodes) only in special cases.

Default: 1

num_tokens
Define virtual node (vnode) token architecture.
Note: All other nodes in the datacenter must have the same token architecture.
  • 1 - Disable vnodes and use 1 token for legacy compatibility. You must set an initial_token value when setting up a production cluster for the first time, and when adding capacity. See Generating tokens.
  • A number between 2 and 128 - The number of token ranges to assign to this virtual node (vnode). A higher value increases the probability that the data and workload are evenly distributed.
    Restriction: DataStax recommends not using vnodes with DSE Search. However, if you decide to use vnodes with DSE Search, do not use more than 8 vnodes and ensure that allocate_tokens_for_local_replication_factor option in cassandra.yaml is correctly configured for your environment.
    CAUTION: Using vnodes can impact performance for your cluster. DataStax recommends testing the configuration before enabling vnodes in production environments.

    When the token number varies between nodes in a datacenter, the vnode logic assigns a proportional number of ranges relative to other nodes in the datacenter. In general, if all nodes have equal hardware capability, each node should have the same num_tokens value.

To migrate an existing cluster from single node per token range to vnodes, see Virtual node (vnode) configuration.

Default: 1
allocate_tokens_for_local_replication_factor
  • RF of keyspaces in datacenter - triggers the recommended algorithmic allocation for the RF and num_tokens for this node.

    The allocation algorithm optimizes the workload balance using the target keyspace replication factor. DataStax recommends setting the number of tokens to 8 to distribute the workload with ~10% variance between nodes. The allocation algorithm attempts to choose tokens in a way that optimizes replicated load over the nodes in the datacenter for the specified RF. The load assigned to each node is close to proportional to the number of vnodes. A configuration sketch follows this entry.

    Note: The allocation algorithm is supported only for the Murmur3Partitioner and RandomPartitioner partitioners. The Murmur3Partitioner is the default partitioning strategy for new clusters and the right choice for new clusters in almost all cases.
  • commented out - uses the random selection algorithm to assign token ranges randomly.
    Note: Over time, loads in a datacenter using the random selection algorithm become unevenly distributed. DataStax recommends using only the allocation algorithm.

Default: use random selection algorithm
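
A hedged sketch for a node joining a datacenter whose keyspaces use a replication factor of 3, following the 8-token recommendation above:

num_tokens: 8
allocate_tokens_for_local_replication_factor: 3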

See Virtual node (vnode) configuration.

partitioner
The class that distributes rows (by partition key) across all nodes in the cluster. Any IPartitioner may be used, including your own as long as it is in the class path. For new clusters use the default partitioner.
DataStax Enterprise provides the following partitioners for backward compatibility:
  • RandomPartitioner
  • ByteOrderedPartitioner (deprecated)
  • OrderPreservingPartitioner (deprecated)
Important: Use only partitioner implementations bundled with DSE.
Tip: See .

Default: org.apache.cassandra.dht.Murmur3Partitioner

tracetype_query_ttl
TTL for different trace types used during logging of the query process.

Default: 86400 (1 day)

tracetype_repair_ttl
TTL for different trace types used during logging of the repair process.

Default: 604800 (7 days)

Advanced automatic backup setting

auto_snapshot: true
auto_snapshot
Enables snapshots of the data before truncating a keyspace or dropping a table. To prevent data loss, DataStax strongly advises using the default setting. If you set auto_snapshot to false, you lose data on truncation or drop.

Default: true

Global row properties

column_index_cache_size_in_kb: 2
# row_cache_class_name: org.apache.cassandra.cache.OHCProvider
row_cache_size_in_mb: 0
row_cache_save_period: 0
# row_cache_keys_to_save: 100

When creating or modifying tables, you can enable or disable the row cache for that table by setting the caching parameter. Other row cache tuning and configuration options are set at the global (node) level. The database uses these settings to automatically distribute memory for each table on the node based on the overall workload and specific table usage. You can also configure the save periods for these caches globally.

Tip: See Configuring caches.
column_index_cache_size_in_kb
Applies only to BIG format SSTables: the threshold for the total size of all index entries for a partition that the database stores in the partition key cache. If the total size of all index entries for a partition exceeds this amount, the database stops putting entries for this partition into the partition key cache.

Default: 2

row_cache_class_name
The classname of the row cache provider to use.
  • org.apache.cassandra.cache.OHCProvider - Fully off-heap.
  • org.apache.cassandra.cache.SerializingCacheProvider - Partially off-heap, available in earlier releases.
Important: Use only row cache provider implementations bundled with DSE.

Default: org.apache.cassandra.cache.OHCProvider

row_cache_size_in_mb
Maximum size of the row cache in memory. The row cache can save time, but it is space-intensive because it contains the entire row. Use the row cache only for hot rows or static rows. If you reduce the size, your hottest keys may not be loaded on startup.
  • 0 - Disable row caching
  • MB - Maximum size of the row cache in memory

Default: 0

row_cache_save_period
The number of seconds that rows are kept in cache. Caches are saved to saved_caches_directory. This setting has limited use as described in row_cache_size_in_mb.

Default: 0

row_cache_keys_to_save
The number of keys from the row cache to save. When not set, all keys are saved.

Default: 100

Counter caches properties

counter_cache_size_in_mb:
counter_cache_save_period: 7200
# counter_cache_keys_to_save: 100

The counter cache helps reduce contention for counter locks on hot counter cells. With RF = 1, a counter cache hit causes the database to skip the read-before-write entirely. With RF > 1, a counter cache hit still helps to reduce the duration of the lock hold, helping with hot counter cell updates, but does not allow skipping the read entirely. Only the local (clock, count) tuple of a counter cell is kept in memory, not the whole counter, so the cache is relatively cheap.

Note: If you reduce the counter cache size, the database may not load the hottest keys at start-up.
counter_cache_size_in_mb
The size of counter cache.
  • size - When not set, the calculated default is the smaller of 2.5% of the heap or 50 megabytes (MB).
  • 0 - Disable. If your system performs counter deletes and relies on a low gc_grace_seconds, disable the counter cache.

Default: calculated

counter_cache_save_period
The time, in seconds, after which the database saves the counter cache (keys only). The database saves caches to saved_caches_directory.

Default: 7200 (2 hours)

counter_cache_keys_to_save
The number of keys from the counter cache to save.

Default: save all keys

Tombstone settings

tombstone_warn_threshold: 1000
tombstone_failure_threshold: 100000

When executing a scan, within or across a partition, the database must keep tombstones in memory so it can return them to the coordinator. The coordinator uses tombstones to ensure that other replicas know about the deleted rows. Workloads that generate numerous tombstones may cause performance problems and exhaust the server heap. Adjust these thresholds only if you understand the impact and want to scan more tombstones. You can adjust these thresholds at runtime using the StorageServiceMBean.

Tip: See the DataStax Developer Blog post Cassandra anti-patterns: Queues and queue-like datasets.
tombstone_warn_threshold
Issue a warning if a query scans more than this number of tombstones.

Default: 1000

tombstone_failure_threshold
Abort a query if a query scans more than this number of tombstones.

Default: 100000
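
If, after assessing the impact, you decide to permit wider tombstone scans, raise both thresholds together; the values below are illustrative only:

tombstone_warn_threshold: 5000
tombstone_failure_threshold: 200000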

Network timeout settings

read_request_timeout_in_ms: 5000
range_request_timeout_in_ms: 10000
aggregated_request_timeout_in_ms: 120000
write_request_timeout_in_ms: 2000
counter_write_request_timeout_in_ms: 5000
cas_contention_timeout_in_ms: 1000
truncate_request_timeout_in_ms: 60000
request_timeout_in_ms: 10000
# cross_dc_rtt_in_ms: 0
read_request_timeout_in_ms
How long the coordinator waits for read operations to complete before timing it out.

Default: 5000

range_request_timeout_in_ms
How long the coordinator waits for sequential or index scans to complete before timing it out.

Default: 10000

aggregated_request_timeout_in_ms
How long the coordinator waits for sequential or index scans to complete. The lowest acceptable value is 10 ms. This timeout also applies to aggregation queries such as SELECT COUNT(*), MIN(x), and so on.

Default: 120000 (2 minutes)

write_request_timeout_in_ms
How long the coordinator waits for write requests to complete with at least one node in the local datacenter. Lowest acceptable value is 10 ms.
Tip: See .

Default: 2000 (2 seconds)

counter_write_request_timeout_in_ms
How long the coordinator waits for counter writes to complete before timing it out.

Default: 5000 (5 seconds)

cas_contention_timeout_in_ms
How long the coordinator continues to retry a CAS (compare and set) operation that contends with other proposals for the same row. If the coordinator cannot complete the operation within this timespan, it aborts the operation.

Default: 1000 (1 second)

truncate_request_timeout_in_ms
How long the coordinator waits for a truncate (the removal of all data from a table) to complete before timing it out. The long default value allows the database to take a snapshot before removing the data. If auto_snapshot is disabled (not recommended), you can reduce this time.

Default: 60000 (1 minute)

request_timeout_in_ms
The default timeout value for other miscellaneous operations. Lowest acceptable value is 10 ms.
Tip: See .

Default: 10000

cross_dc_rtt_in_ms
How much to increase the cross-datacenter timeout (write_request_timeout_in_ms + cross_dc_rtt_in_ms) for requests that involve only nodes in a remote datacenter. This setting is intended to reduce hint pressure.
Tip: DataStax recommends using LOCAL_* consistency levels (CL) for read and write requests in multi-datacenter deployments to avoid timeouts that may occur when remote nodes are chosen to satisfy the CL, such as QUORUM.

Default: 0

slow_query_log_timeout_in_ms
The threshold for a slow query. Queries that exceed this value generate an aggregated log message to identify slow queries.
  • milliseconds - How long before a node logs slow queries.
  • 0 - Disable slow query log timeout.

Default: 500

Internode settings

storage_port: 7000
cross_node_timeout: false
# internode_send_buff_size_in_bytes:
# internode_recv_buff_size_in_bytes:
internode_compression: dc
inter_dc_tcp_nodelay: false
storage_port
The port for internode communication. Follow security best practices: do not expose this port to the internet, and apply firewall rules.

Default: 7000

cross_node_timeout
Enables operation timeout information exchange between nodes to accurately measure request timeouts.
  • true - Enable operation timeout information exchange between nodes.
  • false - The replica assumes any requests are forwarded to it instantly by the coordinator. During overload conditions this means extra time is required for processing already-timed-out requests.
CAUTION: Before enabling this property make sure NTP (network time protocol) is installed and the times are synchronized among the nodes.

Default: false

internode_send_buff_size_in_bytes
The sending socket buffer size, in bytes, for internode calls.
Tip: See TCP settings.
The sending socket buffer size and internode_recv_buff_size_in_bytes are limited by net.core.wmem_max and net.core.rmem_max, respectively. For more details, run man tcp and refer to:
  • /proc/sys/net/core/wmem_max
  • /proc/sys/net/core/rmem_max
  • /proc/sys/net/ipv4/tcp_wmem
  • /proc/sys/net/ipv4/tcp_rmem

Default: determined by net.ipv4.tcp_wmem

internode_recv_buff_size_in_bytes
The receiving socket buffer size in bytes for internode calls.

Default: not set

internode_compression
Enables compressing traffic between nodes:
  • all - Compress all traffic.
  • dc - Compress traffic between datacenters only.
  • none - No compression.

Default: dc

inter_dc_tcp_nodelay
Enables tcp_nodelay for inter-datacenter communication.
  • true - Enable tcp_nodelay for inter-datacenter communication. Reduces the number of packets that need to be sent over the network.
  • false - The network sends larger, but fewer, packets. This reduces overhead from the TCP protocol itself, but may increase latency by blocking cross-datacenter responses.

Default: false

Native transport (CQL Binary Protocol)

start_native_transport: true
native_transport_port: 9042
# native_transport_port_ssl: 9142
# native_transport_max_frame_size_in_mb: 256
# native_transport_max_concurrent_connections: -1
# native_transport_max_concurrent_connections_per_ip: -1
native_transport_address: localhost
# native_transport_interface: eth0
# native_transport_interface_prefer_ipv6: false
# native_transport_broadcast_address: 1.2.3.4
native_transport_keepalive: true
start_native_transport
Enables or disables the native transport server.

Default: true

native_transport_port
The port where the CQL native transport listens for clients. For security reasons, do not expose this port to the internet. Firewall it if needed.

Default: 9042

native_transport_max_frame_size_in_mb
The maximum allowed size of a frame. Frames (requests) larger than this are rejected as invalid.

Default: 256

native_transport_max_concurrent_connections
The maximum number of concurrent client connections.

Default: -1 (unlimited)

native_transport_max_concurrent_connections_per_ip
The maximum number of concurrent client connections per source IP address.

Default: -1 (unlimited)

native_transport_address
When left blank, uses the configured hostname of the node. Unlike listen_address, this value can be set to 0.0.0.0, but you must then set native_transport_broadcast_address to a value other than 0.0.0.0.
Note: Set native_transport_address OR native_transport_interface, not both.

Default: localhost

native_transport_interface
The interface that the native transport binds to. IP aliasing is not supported.
Note: Set native_transport_address OR native_transport_interface, not both.

Default: eth0

native_transport_interface_prefer_ipv6
Whether to use IPv4 or IPv6 when the interface is specified by name.
  • false - use first IPv4 address.
  • true - use first IPv6 address.
When only a single address is used, that address is selected without regard to this setting.

Default: false

native_transport_broadcast_address
Native transport address to broadcast to drivers and other DSE nodes. This cannot be set to 0.0.0.0.
  • blank - Uses the value of native_transport_address.
  • IP_address - Required when native_transport_address is set to 0.0.0.0.

Default: 1.2.3.4
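
A minimal sketch for a node that listens for clients on all interfaces; the broadcast address must then be a real, routable address (the address below is hypothetical):

native_transport_address: 0.0.0.0
native_transport_broadcast_address: 203.0.113.10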

native_transport_keepalive
Enables keepalive on native connections.

Default: true

Advanced fault detection settings

Settings to handle poorly performing or failing components.

# gc_log_threshold_in_ms: 200
# gc_warn_threshold_in_ms: 1000
# otc_coalescing_strategy: DISABLED
# otc_coalescing_window_us: 200
# otc_coalescing_enough_coalesced_messages: 8
gc_log_threshold_in_ms
The GC pause threshold for log messages at the INFO level. Adjust to minimize logging.

Default: 200

gc_warn_threshold_in_ms
Threshold for GC pause. Any GC pause longer than this interval is logged at the WARN level. By default, the database logs any GC pause greater than 200 ms at the INFO level.
Tip: See .

Default: 1000

otc_coalescing_strategy
Strategy to combine multiple network messages into a single packet for outbound TCP connections to nodes in the same datacenter. See the DataStax Developer Blog post Performance doubling with message coalescing.
Important: Use only strategy implementations bundled with DSE.
Supported strategies are:
  • FIXED
  • MOVINGAVERAGE
  • TIMEHORIZON
  • DISABLED

Default: DISABLED

otc_coalescing_window_us
How many microseconds to wait for coalescing messages to nodes in the same datacenter.
  • For the FIXED strategy - the amount of time after the first message is received before it is sent with any accompanying messages.
  • For the MOVINGAVERAGE strategy - the maximum wait time and the interval at which messages must arrive on average to enable coalescing.

Default: 200

otc_coalescing_enough_coalesced_messages
The threshold for the number of messages to nodes in the same datacenter. Messages are not coalesced when this value is exceeded. The value should be greater than 2 and less than 128.

Default: 8

seed_gossip_probability
The probability that gossip messages are sent to a seed node during each round of gossip. Sending gossip to seeds decreases the time to propagate gossip changes across the cluster.

Default: 1.0 (100%)

Backpressure settings

back_pressure_enabled: false
back_pressure_strategy:
    - class_name: org.apache.cassandra.net.RateBasedBackPressure
      parameters:
      - high_ratio: 0.90
        factor: 5
        flow: FAST
back_pressure_enabled
Enables the coordinator to apply the specified back pressure strategy to each mutation that is sent to replicas.

Default: false

back_pressure_strategy
To add new strategies, implement org.apache.cassandra.net.BackpressureStrategy and provide a public constructor that accepts a Map<String, Object>.
Important: Use only strategy implementations bundled with DSE.
class_name
The default class_name uses the ratio between incoming mutation responses and outgoing mutation requests.

Default: org.apache.cassandra.net.RateBasedBackPressure

high_ratio
When the ratio between incoming mutation responses and outgoing mutation requests is below this value, outgoing mutations are rate limited according to the incoming rate decreased by the factor (described below). When the ratio is above this value, the rate limiting is increased by the factor.

Default: 0.90

factor
A number between 1 and 10. When backpressure is below high ratio, outgoing mutations are rate limited according to the incoming rate decreased by the given factor; if above high ratio, the rate limiting is increased by the given factor.

Default: 5

flow
The flow speed to apply rate limiting:
  • FAST - Rate limit to the speed of the fastest replica.
  • SLOW - Rate limit to the speed of the slowest replica.

Default: FAST

dynamic_snitch_badness_threshold
The performance threshold for dynamically routing client requests away from a poorly performing node. Specifically, it controls how much worse a poorly performing node has to be before the dynamic snitch prefers other replicas. A value of 0.2 means the database continues to prefer the static snitch values until the node response time is 20% worse than the best performing node. Until the threshold is reached, incoming requests are statically routed to the closest replica as determined by the snitch.

Default: 0.1

dynamic_snitch_reset_interval_in_ms
Time interval after which the database resets all node scores to allow a bad node to recover.

Default: 600000

dynamic_snitch_update_interval_in_ms
The time interval, in milliseconds, between the calculation of node scores. Because score calculation is CPU intensive, be careful when reducing this interval.

Default: 100

Hinted handoff options

hinted_handoff_enabled: true
# hinted_handoff_disabled_datacenters:
#     - DC1
#     - DC2
max_hint_window_in_ms: 10800000 # 3 hours
hinted_handoff_throttle_in_kb: 1024
max_hints_delivery_threads: 2
hints_directory: /var/lib/cassandra/hints
hints_flush_period_in_ms: 10000
max_hints_file_size_in_mb: 128
# hints_compression:
#   - class_name: LZ4Compressor
#     parameters:
#         -
batchlog_replay_throttle_in_kb: 1024
# batchlog_endpoint_strategy: random_remote
hinted_handoff_enabled
Enables hinted handoff. A hint indicates that the write needs to be replayed to an unavailable node. The database writes the hint to a hints file on the coordinator node.
  • true - Globally enable hinted handoff, except for datacenters specified for hinted_handoff_disabled_datacenters.
  • false - Do not enable hinted handoff.

Default: true

hinted_handoff_disabled_datacenters
A blacklist of datacenters that will not perform hinted handoffs. To disable hinted handoff on a certain datacenter, add its name to this list.

Default: no datacenters are blacklisted

max_hint_window_in_ms
Maximum amount of time during which the database generates hints for an unresponsive node. After this interval, the database does not generate any new hints for the node until it is back up and responsive. If the node goes down again, the database starts a new interval. This setting can prevent a sudden demand for resources when a node is brought back online and the rest of the cluster attempts to replay a large volume of hinted writes.

Default: 10800000 (3 hours)

hinted_handoff_throttle_in_kb
Maximum amount of traffic per delivery thread in kilobytes per second. This rate reduces proportionally to the number of nodes in the cluster. For example, if there are two nodes in the cluster, each delivery thread uses the maximum rate; if there are three, each node throttles to half of the maximum, since two nodes are expected to deliver hints simultaneously.
Note: When applying this limit, the calculated hint transmission rate is based on the uncompressed hint size, even if internode_compression or hints_compression is enabled.

Default: 1024
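
As a worked example of the proportional reduction: with the default of 1024 KB per second in a three-node cluster, two nodes deliver hints to a third simultaneously, so each delivery thread throttles to 1024 / 2 = 512 KB per second.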

hints_flush_period_in_ms
The time, in milliseconds, to wait before flushing hints from internal buffers to disk.

Default: 10000

max_hints_delivery_threads
Number of threads the database uses to deliver hints. In multiple datacenter deployments, consider increasing this number because cross datacenter handoff is generally slower.

Default: 2

max_hints_file_size_in_mb
The maximum size for a single hints file, in megabytes.

Default: 128

hints_compression
The compressor for hint files. Supported compressors: LZ4, Snappy, and Deflate. When not set, the database does not compress hints files.

Default: LZ4Compressor

batchlog_replay_throttle_in_kb
Total maximum throttle, in KB per second, for replaying hints. Throttling is reduced proportionally to the number of nodes in the cluster.

Default: 1024

batchlog_endpoint_strategy
Strategy to choose the batchlog storage endpoints.
  • random_remote - Default, purely random. Avoids the local rack, if possible. Same behavior as earlier releases.
  • dynamic_remote - Uses DynamicEndpointSnitch to select batchlog storage endpoints. Avoids the local rack, if possible. This strategy offers the same availability guarantees as random_remote, but selects the fastest endpoints according to the DynamicEndpointSnitch. DynamicEndpointSnitch tracks reads but not writes, so write-only, or mostly-write, workloads might not benefit from this strategy. Note: this strategy falls back to random_remote if dynamic_snitch is not enabled.
  • dynamic - Mostly the same as dynamic_remote, except that the local rack is not excluded, which offers a lower availability guarantee than random_remote or dynamic_remote. Note: this strategy falls back to random_remote if dynamic_snitch is not enabled.

Default: random_remote

Security properties

DSE Advanced Security fortifies DataStax Enterprise (DSE) databases against potential harm due to deliberate attack or user error. Configuration properties include authentication and authorization, permissions, roles, encryption of data in-flight and at-rest, and data auditing. DSE Unified Authentication provides authentication, authorization, and role management. Enabling DSE Unified Authentication requires additional configuration in dse.yaml.

authenticator: com.datastax.bdp.cassandra.auth.DseAuthenticator
# internode_authenticator: org.apache.cassandra.auth.AllowAllInternodeAuthenticator
authorizer: com.datastax.bdp.cassandra.auth.DseAuthorizer
role_manager: com.datastax.bdp.cassandra.auth.DseRoleManager
system_keyspaces_filtering: false
roles_validity_in_ms: 120000
# roles_update_interval_in_ms: 120000
permissions_validity_in_ms: 120000
# permissions_update_interval_in_ms: 120000
authenticator
The authentication backend. The only supported authenticator is DseAuthenticator for external authentication with multiple authentication schemes such as Kerberos, LDAP, and internal authentication. Authenticators other than DseAuthenticator are deprecated and not supported. Some security features might not work correctly if other authenticators are used. See authentication_options in dse.yaml.
Important: Use only authentication implementations bundled with DSE.

Default: com.datastax.bdp.cassandra.auth.DseAuthenticator

internode_authenticator
Internode authentication backend to enable secure connections from peer nodes.
Important: Use only authentication implementations bundled with DSE.

Default: org.apache.cassandra.auth.AllowAllInternodeAuthenticator

authorizer
The authorization backend. The DseAuthorizer supports enhanced permission management of DSE-specific resources. Authorizers other than DseAuthorizer are deprecated and not supported; some security features might not work correctly if other authorizers are used. See authorization_options in dse.yaml.
Important: Use only authorization implementations bundled with DSE.

Default: com.datastax.bdp.cassandra.auth.DseAuthorizer

system_keyspaces_filtering
Enables system keyspace filtering so that users can access and view only schema information for rows in the system and system_schema keyspaces to which they have access.
  • true - Enable system keyspace filtering.
    • Data in the system.local and system.peers tables are visible.
    • Data in the following tables of the system keyspace are filtered based on the role's DESCRIBE privileges for keyspaces; only rows for appropriate keyspaces will be displayed in:
      • size_estimates
      • sstable_activity
      • built_indexes
      • built_views
      • available_ranges
      • view_builds_in_progress
    • Data in all tables in the system_schema keyspace are filtered based on a role's DESCRIBE privileges for keyspaces stored in the system_schema tables.
    • Read operations against other tables in the system keyspace are denied.
  • false - Disable system keyspace filtering.
Attention: Security requirements and user permissions apply. Enable this feature only after appropriate user permissions are granted. You must grant the DESCRIBE permission to a role on any keyspaces stored in the system keyspaces. If you do not grant the permission, an error states that the keyspace is not found.
GRANT DESCRIBE ON KEYSPACE keyspace_name TO ROLE role_name;
Tip: See and .

Default: false

role_manager
The DSE Role Manager supports LDAP roles and internal roles supported by the CassandraRoleManager. Role options are stored in the dse_security keyspace. When using the DSE Role Manager, increase the replication factor of the dse_security keyspace. Role managers other than DseRoleManager are deprecated and not supported. Some security features might not work correctly if other role managers are used.
Important: Use only role manager implementations bundled with DSE.

Default: com.datastax.bdp.cassandra.auth.DseRoleManager

roles_validity_in_ms
Validity period for roles cache in milliseconds. Determines how long to cache the list of roles assigned to the user; users may have several roles, either through direct assignment or inheritance (a role that has been granted to another role). Adjust this setting based on the complexity of your role hierarchy, tolerance for role changes, the number of nodes in your environment, and activity level of the cluster.
Fetching permissions can be an expensive operation, so this setting allows flexibility. Granted roles are cached for authenticated sessions in AuthenticatedUser. After the specified time elapses, role validity is rechecked. Automatically disabled when internal authentication is not enabled with DseAuthenticator.
  • 0 - Disable role caching.
  • milliseconds - How long to cache the list of roles assigned to the user.

Default: 120000 (2 minutes)

roles_update_interval_in_ms
Refresh interval for roles cache. After this interval, cache entries become eligible for refresh. On next access, the database schedules an async reload, and returns the old value until the reload completes. If roles_validity_in_ms is non-zero, then this value must also be non-zero. When not set, the default is the same value as roles_validity_in_ms.

Default: 120000

permissions_validity_in_ms
How long permissions in cache remain valid, to manage the performance impact of permissions queries. Fetching permissions can be resource intensive; set the cache validity period according to your security tolerances. The cache is used for both the standard authentication cache and the row-level access control (RLAC) cache. The cache is quite effective even at small durations.
  • 0 - Disable permissions cache.
  • milliseconds - Time that permissions in cache remain valid.
CAUTION: Updating permissions does not automatically invalidate cached permissions; changes take effect the next time the cached permissions are refreshed.

Default: 120000 (2 minutes)

permissions_update_interval_in_ms
Refresh interval for the standard authentication cache and the row-level access control (RLAC) cache. After this interval, cache entries become eligible for refresh. On next access, the database schedules an async reload and returns the old value until the reload completes. If permissions_validity_in_ms is non-zero, this value must also be non-zero. When not set, the default is the same value as permissions_validity_in_ms.

Default: 120000 (2 minutes)
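To illustrate how the validity and update intervals interact, the following sketch (values are hypothetical, not recommendations) keeps cached roles and permissions valid for two minutes while making entries eligible for asynchronous refresh after one minute:

roles_validity_in_ms: 120000
roles_update_interval_in_ms: 60000
permissions_validity_in_ms: 120000
permissions_update_interval_in_ms: 60000

With these settings, a cache entry older than one minute is reloaded in the background on the next access, and the stale value is served only until the reload completes.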

permissions_cache_max_entries
The maximum number of entries held by the standard authentication cache and the row-level access control (RLAC) cache. This single option applies to both caches: with the default value of 1000, the RLAC cache can hold up to 1000 entries and the standard authentication cache can hold up to 1000 entries. To size the permissions cache for use with row-level access control (RLAC), use this formula:
numRlacUsers * numRlacTables + 100
If this option is not present in cassandra.yaml, add it manually to use a value other than 1000. See .

Default: 1000
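For example, applying the formula above to a hypothetical deployment with 50 RLAC users and 20 RLAC-enabled tables gives 50 × 20 + 100 = 1100 entries:

permissions_cache_max_entries: 1100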

Internode encryption options

Node-to-node (internode) encryption protects data that is transferred between nodes in a cluster using SSL.

server_encryption_options:
    internode_encryption: none
    keystore: resources/dse/conf/.keystore
    keystore_password: cassandra
    truststore: resources/dse/conf/.truststore
    truststore_password: cassandra
    # More advanced defaults below:
    # protocol: TLS
    # algorithm: SunX509
    # keystore_type: JKS
    # truststore_type: JKS
    # cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
    # require_client_auth: false
    # require_endpoint_verification: false
server_encryption_options
Internode encryption options. If enabled, you must also generate keys and provide the appropriate keystore and truststore locations and passwords. No custom encryption options are supported.
Tip: The passwords used in these options must match the passwords used when generating the keystore and truststore. For instructions on generating these files, see Creating a Keystore to Use with JSSE.
Tip: See .
internode_encryption
Encryption for internode communication. By default, authentication, key exchange, and encryption of data transfers use the TLS_RSA_WITH_AES_128_CBC_SHA cipher suite. Use the DHE/ECDHE ciphers, such as TLS_DHE_RSA_WITH_AES_128_CBC_SHA, if running in Federal Information Processing Standard (FIPS) 140 compliant mode.
  • all - Encrypt all internode communications.
  • none - No encryption.
  • dc - Encrypt only the traffic between datacenters.
  • rack - Encrypt only the traffic between racks.

Default: none
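For example, a minimal sketch that encrypts only traffic between datacenters, assuming the keystore and truststore already exist at the default locations:

server_encryption_options:
    internode_encryption: dc
    keystore: resources/dse/conf/.keystore
    keystore_password: cassandra
    truststore: resources/dse/conf/.truststore
    truststore_password: cassandra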

keystore
Relative path from the DSE installation directory, or absolute path, to the Java keystore (JKS) suitable for use with Java Secure Socket Extension (JSSE), the Java version of the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. The keystore contains the private key used to encrypt outgoing messages.

Default: resources/dse/conf/.keystore

keystore_password
Password for the keystore. This must match the password used when generating the keystore and truststore.

Default: cassandra

truststore
Relative path from the DSE installation directory, or absolute path, to the truststore containing the trusted certificate for authenticating remote servers.

Default: resources/dse/conf/.truststore

truststore_password
Password for the truststore.

Default: cassandra

protocol
The SSL/TLS protocol to use for secure connections.

Default: TLS

algorithm
The key manager factory algorithm used to manage keys for SSL connections.

Default: SunX509

keystore_type
Valid types are JKS, JCEKS, PKCS11, or PKCS12. For file-based keystores, use PKCS12.
Attention: DataStax supports PKCS11 as a keystore_type on nodes with cassandra or advanced workloads. The cassandra workload support is specific to DSE 6.7.7 and later releases. The advanced workload support is specific to DSE 6.7.9 and later. If PKCS11 is needed, in server_encryption_options or client_encryption_options, specify the keystore_type as PKCS11 and the keystore as NONE. PKCS11 is not supported in DSE 6.0.x and 5.1.x releases. PKCS11 is not supported as a truststore_type.

Default: JKS
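Following the attention note above, a sketch of a PKCS11 configuration (only on supported DSE releases; the remaining encryption options, and the hardware security module setup itself, are omitted here):

server_encryption_options:
    internode_encryption: all
    keystore_type: PKCS11
    keystore: NONE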

truststore_type
Valid types are JKS, JCEKS, PKCS12. For file-based truststores, use PKCS12.
Attention: Due to an OpenSSL issue, you cannot use a PKCS12 truststore that was generated via OpenSSL. For example, a truststore generated via the following command will not work with DSE:
openssl pkcs12 -export -nokeys -out truststore.pfx -in intermediate.chain.pem
However, truststores generated via Java's keytool and then converted to PKCS12 work with DSE. Example:
keytool -importcert -alias rootca -file rootca.pem -keystore truststore.jks
keytool -importcert -alias intermediate -file intermediate.pem -keystore truststore.jks
keytool -importkeystore -srckeystore truststore.jks -destkeystore truststore.pfx -deststoretype pkcs12 

Default: JKS

cipher_suites
Supported ciphers:
  • TLS_RSA_WITH_AES_128_CBC_SHA
  • TLS_RSA_WITH_AES_256_CBC_SHA
  • TLS_DHE_RSA_WITH_AES_128_CBC_SHA
  • TLS_DHE_RSA_WITH_AES_256_CBC_SHA
  • TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
  • TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA

Default: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
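For example, following the FIPS guidance under internode_encryption, you might restrict the list to the DHE/ECDHE suites (a sketch, not a compliance guarantee; other required options are omitted):

server_encryption_options:
    cipher_suites: [TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]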

require_client_auth
Enables certificate authentication for node-to-node (internode) encryption.

Default: false

require_endpoint_verification
Whether to verify that the connected host and the host IP address in the certificate match. If set to true, the endpoint that you specify when generating the certificate key must be an IP address; do not specify a DNS hostname. Example with a correctly specified IP address:
keytool -genkeypair -keyalg RSA \
    -alias node0 \
    -keystore my_keystore.jks \
    -storepass cassandra \
    -keypass cassandra \
    -validity 730 \
    -keysize 2048 \
    -dname "CN=node0, OU=lacerda-ssl, O=Datastax, C=CC" \
    -ext "san=ip:10.101.35.236"

Default: false

Client-to-node encryption options

Client-to-node encryption protects in-flight data from client machines to a database cluster using SSL (Secure Sockets Layer) and establishes a secure channel between the client and the coordinator node.
client_encryption_options:
    enabled: false
    # If enabled and optional is set to true, encrypted and unencrypted connections over native transport are handled.
    optional: false
    keystore: resources/dse/conf/.keystore
    keystore_password: cassandra
    # require_client_auth: false
    # Set truststore and truststore_password if require_client_auth is true
    # truststore: resources/dse/conf/.truststore
    # truststore_password: cassandra
    # More advanced defaults below:
    # protocol: TLS
    # algorithm: SunX509
    # keystore_type: JKS
    # truststore_type: JKS
    # cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
Tip: See .
client_encryption_options
Enables client-to-node encryption. You must also generate keys and provide the appropriate keystore and truststore locations and passwords. No custom encryption options are supported in DataStax Enterprise.
  • For PKCS11 support, in server_encryption_options and client_encryption_options in cassandra.yaml, specify the keystore_type as PKCS11 and the keystore as NONE.
  • PKCS11 is not supported for the truststore.
  • Advanced workloads include DSE Graph, DSE Search, and DSE Analytics.
enabled
Enables client-to-node encryption.

Default: false

optional
When optional is set to true, both encrypted and unencrypted connections over native transport are allowed. This is a necessary transition state to enable client-to-node encryption on live clusters without causing an outage for existing unencrypted clients. Typically, after existing clients are migrated to encrypted connections, set optional back to false to enforce native transport encryption.

Default: false
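For example, a transitional sketch for enabling encryption on a live cluster without dropping existing unencrypted clients; once all clients are migrated, set optional back to false:

client_encryption_options:
    enabled: true
    optional: true
    keystore: resources/dse/conf/.keystore
    keystore_password: cassandra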

keystore
Relative path from the DSE installation directory, or absolute path, to the Java keystore (JKS) suitable for use with Java Secure Socket Extension (JSSE), the Java version of the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. The keystore contains the private key used to encrypt outgoing messages.

Default: resources/dse/conf/.keystore

keystore_password
Password for the keystore.

Default: cassandra

require_client_auth
Enables certificate authentication for client-to-node encryption.
  • true - Require certificate authentication for client-to-node encryption. Client certificates must be present on all nodes in the cluster.
  • false - Do not require certificate authentication for client-to-node encryption.

Default: false
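For example, a sketch of two-way certificate authentication; note that it also requires the truststore settings described below:

client_encryption_options:
    enabled: true
    require_client_auth: true
    keystore: resources/dse/conf/.keystore
    keystore_password: cassandra
    truststore: resources/dse/conf/.truststore
    truststore_password: cassandra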

truststore
Relative path from DSE installation directory or absolute path to truststore containing the trusted certificate for authenticating remote servers.
Note: The truststore password and path are required only when require_client_auth is set to true.

Default: resources/dse/conf/.truststore

truststore_password
Password for the truststore. This must match the password used when generating the keystore and truststore.
Note: The truststore password and path are required only when require_client_auth is set to true.

Default: cassandra

protocol
The SSL/TLS protocol to use for secure connections.

Default: TLS

algorithm
The key manager factory algorithm used to manage keys for SSL connections.

Default: SunX509

keystore_type
Valid types are JKS, JCEKS, PKCS11, or PKCS12. For file-based keystores, use PKCS12.
Attention: DataStax supports PKCS11 as a keystore_type on nodes with cassandra or advanced workloads. The cassandra workload support is specific to DSE 6.7.7 and later releases. The advanced workload support is specific to DSE 6.7.9 and later. If PKCS11 is needed, in server_encryption_options or client_encryption_options, specify the keystore_type as PKCS11 and the keystore as NONE. PKCS11 is not supported in DSE 6.0.x and 5.1.x releases. PKCS11 is not supported as a truststore_type.

Default: JKS

truststore_type
Valid types are JKS, JCEKS, or PKCS12. For file-based truststores, use PKCS12.
Attention: Due to an OpenSSL issue, you cannot use a PKCS12 truststore that was generated via OpenSSL. For example, a truststore generated via the following command will not work with DSE:
openssl pkcs12 -export -nokeys -out truststore.pfx -in intermediate.chain.pem
However, truststores generated via Java's keytool and then converted to PKCS12 work with DSE. Example:
keytool -importcert -alias rootca -file rootca.pem -keystore truststore.jks
keytool -importcert -alias intermediate -file intermediate.pem -keystore truststore.jks
keytool -importkeystore -srckeystore truststore.jks -destkeystore truststore.pfx -deststoretype pkcs12 

Default: JKS

cipher_suites
Supported ciphers:
  • TLS_RSA_WITH_AES_128_CBC_SHA
  • TLS_RSA_WITH_AES_256_CBC_SHA
  • TLS_DHE_RSA_WITH_AES_128_CBC_SHA
  • TLS_DHE_RSA_WITH_AES_256_CBC_SHA
  • TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
  • TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA

Default: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]

Transparent data encryption options

transparent_data_encryption_options
DataStax Enterprise supports this option only for backward compatibility. When using DSE, configure data encryption options in dse.yaml; see .

TDE properties:

transparent_data_encryption_options:
    enabled: false
    chunk_length_kb: 64
    cipher: AES/CBC/PKCS5Padding
    key_alias: testing:1
    # iv_length: 16
    key_provider:
      - class_name: org.apache.cassandra.security.JKSKeyProvider
        parameters:
          - keystore: conf/.keystore
            keystore_password: cassandra
            store_type: JCEKS
            key_password: cassandra

Note: iv_length is commented out in the default cassandra.yaml file. Uncomment only if cipher is set to AES. The value must be 16 (bytes).

SSL Ports

ssl_storage_port: 7001
native_transport_port_ssl: 9142
Tip: See .
ssl_storage_port
The SSL port for encrypted communication. Unused unless enabled in encryption_options. Follow security best practices: do not expose this port to the internet, and apply firewall rules.

Default: 7001

native_transport_port_ssl
Dedicated SSL port on which the CQL native transport listens for clients with encrypted communication. For security reasons, do not expose this port to the internet; apply firewall rules.
  • commented out (disabled) - All traffic on native_transport_port is encrypted when client encryption is enabled.
  • port number different from native_transport_port - Traffic on native_transport_port_ssl is encrypted, while native_transport_port remains available for unencrypted traffic, allowing both encrypted and unencrypted connections.

Default: 9142
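For example, a sketch of a dual-port setup during a client migration: encrypted clients connect to 9142, while native_transport_port (default 9042) continues to accept unencrypted traffic. client_encryption_options must also be enabled:

native_transport_port: 9042
native_transport_port_ssl: 9142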

User-defined functions (UDF) properties

enable_user_defined_functions: false
enable_scripted_user_defined_functions: false
enable_user_defined_functions_threads: true
user_defined_function_warn_micros: 500
user_defined_function_fail_micros: 10000
user_defined_function_warn_heap_mb: 200
user_defined_function_fail_heap_mb: 500
user_function_timeout_policy: die
enable_user_defined_functions
Enables user-defined functions (UDFs), code that is executed inside Cassandra daemons. Because UDFs run on the server side, they can present a security risk; they are therefore executed in a sandbox to control what code can run. See the DataStax blog post User Defined Functions.
  • true - Enabled. Supports Java as the code language. Detects endless loops and unintended memory leaks.
  • false - Disabled.

Default: false

enable_scripted_user_defined_functions
Enables the use of the JavaScript language in UDFs. Scripted UDFs are less performant than Java UDFs and produce more garbage on the heap.
  • true - Enable scripted UDFs. Support JavaScript in addition to Java as a code language.
  • false - Disable scripted UDFs. Allow only Java as a code language.
Note: If enable_user_defined_functions is false, this setting has no impact.

Default: false

enable_user_defined_functions_threads
Enables a sandbox for asynchronous JavaScript UDF execution. Does not apply to Java UDFs.
  • true - Enabled. Only one instance of a function can run at one time. Asynchronous execution prevents UDFs from running too long or forever and destabilizing the cluster.
  • false - Disabled. Allow multiple instances of the same function to run simultaneously.
    CAUTION: Disabling asynchronous UDF execution implicitly disables the Java security manager. You must monitor read timeouts for JavaScript UDFs that run too long or forever, which can destabilize the cluster.

Default: true

user_defined_function_warn_micros
Threshold in microseconds (CPU time). When a UDF runs too long and this threshold is exceeded, a warning is logged and sent to the client. Java UDFs always issue a warning. Scripted UDFs log a warning only if enable_user_defined_functions_threads is set to true.

Default: 500

user_defined_function_fail_micros
Threshold in microseconds (CPU time). When a fatal UDF run-time situation is detected and this threshold is exceeded, the UDF is stopped. Java UDFs always throw an exception and stop. Scripted UDFs throw an exception and stop only if enable_user_defined_functions_threads is set to true.

Default: 10000

user_defined_function_warn_heap_mb
Threshold in MB for heap allocations. When this threshold is exceeded, a warning is logged and sent to the client. Java UDFs always issue a warning. Scripted UDFs log a warning only if enable_user_defined_functions_threads is set to true.

Default: 200

user_defined_function_fail_heap_mb
Threshold in MB for heap allocations. When this threshold is exceeded, the UDF is stopped.

Default: 500

user_function_timeout_policy
Defines action when a scripted UDF exceeds user_defined_function_fail_micros threshold. Applies only when enable_user_defined_functions_threads is set to true.
  • die - Issue a warning to the client before the Cassandra daemon shuts down.
  • die_immediate - Shut down the Cassandra daemon immediately, effectively preventing the client from receiving a warning.
  • ignore - Log the warning, but do not take any action. DataStax does not recommend this option for production environments.

Default: die
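Putting these properties together, a conservative sketch (values are illustrative) that enables Java-only UDFs while keeping the sandbox and fail-fast defaults:

enable_user_defined_functions: true
enable_scripted_user_defined_functions: false
enable_user_defined_functions_threads: true
user_function_timeout_policy: die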

Continuous paging options

continuous_paging:
    max_concurrent_sessions: 60
    max_session_pages: 4
    max_page_size_mb: 8
    max_local_query_time_ms: 5000
    client_timeout_sec: 600
    cancel_timeout_sec: 5
    paused_check_interval_ms: 1
continuous_paging
Tunes continuous paging, which pushes pages to the client continuously when requested:
  • Maximum memory used:
    max_concurrent_sessions ⨉ max_session_pages ⨉ max_page_size_mb (a worked sizing sketch follows the guidance list below)

    Default: calculated (60 ⨉ 4 ⨉ 8 = 1920 MB)

Guidance
  • Because continuous paging queries hold references to memtables and SSTables, these settings effectively define the maximum period of time during which memtables cannot be flushed and compacted SSTables cannot be deleted.
  • If fewer threads exist than sessions, a session cannot execute until another one is swapped out.
  • Distributed queries (CL > ONE or non-local data) are swapped out after every page, while local queries at CL = ONE are swapped out after max_local_query_time_ms.
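Applying the memory formula above, a hypothetical sketch that caps continuous paging memory at 480 MB instead of the default 1920 MB:

continuous_paging:
    max_concurrent_sessions: 30
    max_session_pages: 2
    max_page_size_mb: 8

With these values, the maximum memory used is 30 ⨉ 2 ⨉ 8 = 480 MB.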
max_concurrent_sessions
The maximum number of concurrent sessions. Additional sessions are rejected with an unavailable error.

Default: 60

max_session_pages
The maximum number of pages that can be buffered for each session. If the client is not reading from the socket, the producer thread is blocked after it has prepared max_session_pages.

Default: 4

max_page_size_mb
The maximum size of a page, in MB. If an individual CQL row is larger than this value, the page can be larger than this value.

Default: 8

max_local_query_time_ms
The maximum time for a local continuous query to run. When this threshold is exceeded, the session is swapped out and rescheduled. Swapping and rescheduling ensures the release of resources that prevent the memtables from flushing and ensures fairness when max_threads < max_concurrent_sessions. Adjust when high write workloads exist on tables that have continuous paging requests.

Default: 5000

client_timeout_sec
How long the server will wait, in seconds, for clients to request more pages if the client is not reading and the server queue is full.

Default: 600

cancel_timeout_sec
How long to wait, in seconds, before checking whether a paused session can be resumed. Continuous paging sessions are paused because of backpressure or when the client has not requested more pages with backpressure updates.

Default: 5

paused_check_interval_ms
How long to wait, in milliseconds, before checking whether a continuous paging session can be resumed, when that session is paused because of backpressure.

Default: 1

Fault detection setting

# phi_convict_threshold: 8
phi_convict_threshold
The sensitivity of the failure detector on an exponential scale. Generally, this setting does not need adjusting.
Tip: See .

Default: 8

Memory leak detection settings

#leaks_detection_params:
#  sampling_probability: 0
#  max_stacks_cache_size_mb: 32
#  num_access_records: 0
#  max_stack_depth: 30
sampling_probability
The probability that accesses to a tracked resource are sampled. For the resources that can be tracked, see nodetool leaksdetection.
  • 0 - Disable tracking.
  • 1 - Track all accesses.
  • 0 to 1 - Track resources randomly with the given probability. For example, 0.5 tracks resources 50% of the time.
CAUTION: Tracking incurs a significant stack trace collection cost for every access and consumes heap space. Enable tracking only when directed by DataStax Support.

Default: 0

max_stacks_cache_size_mb
The size of the cache for call stack traces, in MB. Stack traces are used to debug leaked resources and consume heap memory; this setting caps the heap memory dedicated to stack traces for each tracked resource.

Default: 32

num_access_records
The average number of stack traces kept when a resource is accessed. Supported only for chunks in the cache.

Default: 0

max_stack_depth
The depth of the stack traces collected. Changing this parameter affects only stack traces collected after the change. Deeper stacks are more unique, so increasing the depth may require increasing max_stacks_cache_size_mb.

Default: 30
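As a sketch only (enable tracking only when directed by DataStax Support), uncommenting the block and sampling 1% of resource accesses might look like:

leaks_detection_params:
  sampling_probability: 0.01
  max_stacks_cache_size_mb: 32
  num_access_records: 0
  max_stack_depth: 30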