
DataStax Enterprise 6.8 Documentation

    • Overview
      • Release notes
        • DSE release notes
        • Cass Operator release notes
        • Studio release notes
        • Bulk loader release notes
        • Kafka Connector release notes
    • Architecture
      • Architecture FAQ
      • Database architecture
        • Architecture in brief
        • Internode communications (gossip)
        • Data distribution and replication
          • Data distribution to nodes
          • Consistent hashing
          • Virtual nodes
          • Data replication
          • Partitioners
          • Snitches
            • Dynamic snitching
            • Types of snitches
        • Node repair
          • NodeSync: Continuous background repair
          • Hinted handoff: repair during write path
          • Read Repair: repair during read path
          • Anti-entropy repair
      • Component architecture
        • DSE Analytics
        • DSE Search
        • DSE Graph
          • When to use DSE Graph
          • OLTP and OLAP
          • Comparing DSE Graph and relational databases
          • Migrating to DSE Graph from a relational database
          • Migrating to DSE Graph from Apache Cassandra
      • Database internals
        • Storage engine
        • About reads and writes
          • How is data written?
          • How is data maintained?
          • How is data updated?
          • How is data deleted?
          • What are tombstones?
          • How are indexes stored and updated?
          • How is data read?
          • How do write patterns affect reads?
        • Data consistency
          • Read and write consistency
          • Differences between DSE and RDBMS transactions
          • Using lightweight transactions
          • Consistency level performance
          • Consistency level configuration
          • Configuring serial consistency
          • Read requests
            • Read consistency levels examples
          • Write requests
            • Multiple datacenter write requests
    • Planning
      • Initializing a cluster
        • Initializing datacenters
          • Initializing a single datacenter per workload type
          • Initializing multiple datacenters per workload type
        • Setting seed nodes for a single datacenter
        • Use cases for listen address
      • Initializing single-token architecture datacenters
        • Calculating tokens for single-token architecture nodes
    • Getting started
    • Installing
      • Which install method should I use?
      • Install on a single node
      • Installing supporting software
      • Installing a cluster using Lifecycle Manager 6.8
      • Installing from the Yum package
      • RedHat systemd configuration
      • Installing from the Debian package
      • Install from the tarball on any Linux distribution
      • Installing patch releases
      • Installing on cloud providers
      • Installing on Docker
      • Uninstalling DSE
      • Default DSE file locations
        • Package installations
        • Tarball installations
      • Installing DSE 6.8 Tools
        • Installing CQLSH
        • Installing DataStax Studio 6.8
        • Installing DSE Graph Loader
        • Installing DataStax Bulk Loader
        • Installing DataStax Apache Kafka Connector
      • Installing DSE OpsCenter 6.8
        • Installing from the RPM package
        • Installing from the Debian package
        • Installing from the tarball on any Linux distribution
        • Installing on Docker
        • Uninstalling OpsCenter
        • Installing DataStax Agents 6.8
          • Installing DataStax Agents automatically
          • Installing DataStax Agents manually
            • From the RPM package
            • From the Debian package
            • From a tarball
          • Setting Agent permissions to run as the DSE user
          • Configuring JAVA_HOME
    • Managing
      • Configuration
        • Recommended production settings
        • YAML and configuration properties
          • cassandra.yaml
          • dse.yaml
          • remote.yaml
          • cassandra-rackdc.properties
          • cassandra-topology.properties
        • Cloud provider snitches
          • Amazon EC2 single-region snitch
          • Amazon EC2 multi-region snitch
          • Google Cloud Platform
          • Apache CloudStack snitch
        • JVM system properties
          • Cassandra
          • JMX
          • DSE Search
          • TPC
          • LDAP
          • Kerberos
          • NodeSync
          • DSE Metrics Collector
        • Choosing a compaction strategy
        • NodeSync service
          • About NodeSync
          • Starting and stopping the NodeSync service
          • Enabling NodeSync validation
          • Tuning NodeSync validations
            • Setting the NodeSync rate
            • Setting the NodeSync deadline
          • Manually starting NodeSync validation
        • Using multiple network interfaces
        • Configuring gossip settings
        • Configuring the heap dump directory
        • Configuring Virtual Nodes
          • Virtual node (vnode) configuration
          • Enabling virtual nodes on an existing production cluster
        • Logging configuration
          • Changing logging locations
          • Configuring logging
          • Commit log archive configuration
          • Change Data Capture (CDC) logging
      • Tools
        • DSE Metrics Collector
        • nodetool
          • Get information
            • clientstats
            • describecluster
            • describering
            • getbatchlogreplaythrottle
            • getcachecapacity
            • getcachekeystosave
            • getconcurrentviewbuilders
            • getendpoints
            • getinterdcstreamthroughput
            • getlogginglevels
            • getseeds
            • getsstables
            • getstreamthroughput
            • gettimeout
            • gettraceprobability
            • help
            • info
            • inmemorystatus
            • rangekeysample
            • ring
            • status
            • version
          • Collect metrics
            • gcstats
            • netstats
            • proxyhistograms
            • tablehistograms
            • tablestats
            • toppartitions
            • tpstats
          • Perform operations
            • assassinate
            • bootstrap resume
            • decommission
            • disablebinary
            • disablegossip
            • drain
            • enablebinary
            • enablegossip
            • gossipinfo
            • invalidatecountercache
            • invalidatekeycache
            • invalidaterowcache
            • import
            • join
            • move
            • refresh
            • reloadtriggers
            • relocatesstables
            • removenode
            • replaybatchlog
            • sequence
            • sjk
            • statusbinary
            • statusgossip
            • stopdaemon
            • upgradesstables
          • Adjust settings
            • reloadseeds
            • setbatchlogreplaythrottle
            • setcachecapacity
            • setcachekeystosave
            • setconcurrentviewbuilders
            • setinterdcstreamthroughput
            • setlogginglevel
            • setstreamthroughput
            • settimeout
            • settraceprobability
          • Diagnose issues
            • failuredetector
            • leaksdetection
          • Manage backup commands
            • clearsnapshot
            • disablebackup
            • enablebackup
            • listsnapshots
            • snapshot
            • statusbackup
          • Ensure data consistency
            • abortrebuild
            • cleanup
            • flush
            • mark_unrepaired
            • rebuild
            • rebuild_index
            • rebuild_view
            • resetlocalschema
            • repair
            • scrub
            • verify
          • Manage compaction
            • compact
            • compactionhistory
            • compactionstats
            • disableautocompaction
            • enableautocompaction
            • garbagecollect
            • getcompactionthreshold
            • getcompactionthroughput
            • getconcurrentcompactors
            • setcompactionthreshold
            • setcompactionthroughput
            • setconcurrentcompactors
            • stop
          • Manage NodeSync service
            • nodesyncservice enable
            • nodesyncservice disable
            • nodesyncservice getrate
            • nodesyncservice ratesimulator
            • nodesyncservice setrate
            • nodesyncservice status
          • Manage hints
            • disablehandoff
            • disablehintsfordc
            • enablehandoff
            • enablehintsfordc
            • gethintedhandoffthrottlekb
            • getmaxhintwindow
            • handoffwindow
            • listendpointspendinghints
            • pausehandoff
            • resumehandoff
            • sethintedhandoffthrottlekb
            • setmaxhintwindow
            • statushandoff
            • truncatehints
        • dse commands
          • dse connection options
          • Perform routine DSE operations
            • add-node
            • cassandra
            • cassandra-stop
            • list-nodes
            • remove-node
            • -v
          • Manage Spark
            • exec
            • pyspark
            • spark
            • spark-class
            • spark-jobserver
            • spark-history-server
            • spark-sql
            • spark-sql-thriftserver
            • spark-submit
            • SparkR
          • Connect to development consoles
            • beeline
            • fs
            • gremlin-console
          • Connect external client to DSE node
            • dse client-tool help
            • client-tool connection options
            • cassandra
            • configuration export
            • configuration byos-export
            • configuration import
            • spark
            • alwayson-sql
            • graph-olap
          • Manage CQL nodesync
            • disable
            • enable
            • help
            • tracing
              • disable
              • enable
              • show
              • status
            • validation
      • dsefs shell commands
        • Get information
          • df
          • du
          • echo
          • ls
          • pwd
          • realpath
          • stat
        • Navigate DSEFS
          • cd
          • exit
        • Manage files
          • append
          • cat
          • cp
          • fsck
          • get
          • mkdir
          • mv
          • put
          • rename
          • rm
          • rmdir
          • truncate
          • umount
        • Manage permissions
          • chgrp
          • chmod
          • chown
      • dsetool
        • Connection options
        • Get information
          • help
          • inmemorystatus
          • list_subranges
          • listjt
          • node_health
          • partitioner
          • ring
          • status
          • tieredtablestats
        • Perform operations
          • infer_solr_schema
          • perf
          • sparkmaster cleanup
          • sparkworker restart
          • tsreload
        • Configure DSE Metrics Collector
          • insights_config
          • insights_filters
        • Manage security
          • createsystemkey
          • encryptconfigvalue
          • managekmip list
          • managekmip expirekey
          • managekmip revoke
          • managekmip destroy
        • Manage search index
          • core_indexing_status
          • create_core
          • get_core_config
          • get_core_schema
          • index_checks
          • list_index_files
          • list_core_properties
          • read_resource
          • rebuild_indexes
          • reload_core
          • set_core_property
          • stop_core_reindex
          • unload_core
          • upgrade_index_files
          • write_resource
      • SSTable tools
        • Get information
          • sstabledump
          • sstableexpiredblockers
          • sstablemetadata
          • sstablepartitions
          • sstableutil
        • Perform operations
          • sstabledowngrade
          • sstablelevelreset
          • sstableloader
          • sstableofflinerelevel
          • sstablesplit
          • sstableupgrade
        • Ensure data consistency
          • sstablerepairedset
          • sstablescrub
          • sstableverify
      • Preflight check tool
      • Compare yaml files
        • yaml_diff
        • cluster_check
      • Operations
        • Starting and stopping DSE
          • Starting as a service
          • Starting as a stand-alone process
          • Stopping a node
        • Adding or removing nodes, datacenters, or clusters
          • Adding nodes to vnode-enabled cluster
          • Adding a datacenter to a cluster using a designated datacenter as a data source
          • Replacing a dead node or dead seed node
          • Replacing a running node
            • Adding a node and then decommissioning the old node
            • Replacing a running node
          • Moving a node from one rack to another
          • Decommissioning a datacenter
          • Removing a node
          • Changing the IP address of a node
          • Switching snitches
          • Changing keyspace replication strategy
          • Migrating or renaming a cluster
          • Adding single-token nodes to a cluster
          • Adding a datacenter to a single-token architecture cluster
          • Replacing a dead node in a single-token architecture cluster
        • Backing up and restoring data using the DSE Backup and Restore Service
          • About the DSE Backup and Restore Service
          • Enabling and configuring the DSE Backup and Restore Service
          • Creating and managing backup stores
          • Creating and managing backup configurations
          • Managing backups
          • Restoring backups
          • Backup and Restore Service CQL command reference
            • ALTER BACKUP CONFIGURATION
            • ALTER BACKUP STORE
            • CANCEL BACKUP
            • CANCEL RESTORE
            • CLEAN BACKUPS
            • CREATE BACKUP CONFIGURATION
            • CREATE BACKUP STORE
            • DROP BACKUP CONFIGURATION
            • DROP BACKUP STORE
            • FORCE RESTORE
            • LIST BACKUP CONFIGURATIONS
            • LIST BACKUPS FROM KEYSPACE
            • LIST BACKUP STORES
            • RESTORE
            • RUN BACKUP
            • VERIFY BACKUP STORE
        • Backing up and restoring data using snapshots
          • About snapshots
          • Taking a snapshot
          • Deleting snapshot files
          • Enabling incremental snapshot backups
          • Restoring from a snapshot
          • Restoring a snapshot into a new cluster
          • Recovering from a single disk failure using JBOD
        • Repairing nodes
          • Manual repair: Anti-entropy repair
          • When to run anti-entropy repair
          • Changing repair strategies
            • Migrating to full repairs
            • Migrating to incremental repairs
        • Monitoring a DSE cluster
        • Tuning the database
          • Tuning Java Virtual Machine
            • Changing heap size parameters
            • Configuring the garbage collector
              • G1 MaxGCPauseMillis
              • CMS parameters
          • Tuning Bloom filters
          • Configuring memtable thresholds
        • Data caching
          • Configuring data caches
            • Enabling caching globally
            • Tips for efficient cache use
          • Monitoring and adjusting caching
        • Compacting and compressing
          • Configuring compaction
          • Compression
            • When to compress data
            • Configuring compression
          • Testing compaction and compression
        • Materialized views maintenance guidelines
        • Migrating data to DSE
        • Collecting node health and indexing scores
        • Clearing data from DSE
      • DSE Management Services
        • Performance Service
          • Performance Service
          • Configuring Performance Service replication strategy
          • Collecting data
            • Collecting slow queries
            • Collecting system level diagnostics
            • Collecting object I/O level diagnostics
            • Statistics gathered for objects
            • Collecting database summary diagnostics
            • Collecting cluster summary diagnostics
            • Collecting histogram diagnostics
            • Collecting user activity diagnostics
            • Statistics gathered for user activity
          • Collecting search data
            • Collecting slow search queries
            • Collecting Apache Solr performance statistics
            • Collecting cache statistics
            • Collecting index statistics
            • Collecting handler statistics
            • Collecting request handler metrics
          • Monitoring Spark with Spark Performance Objects
          • Diagnostic table reference
          • Solr diagnostic table reference
            • Frequently asked questions
            • Slow sub-query log for search
            • Indexing error log
            • Query latency snapshot
            • Update latency snapshot
            • Commit latency snapshot
            • Merge latency snapshot
            • Filter cache statistics
            • Query result cache statistics
            • Index statistics
            • Update handler statistics
            • Update request handler statistics
            • Search request handler statistics
      • DSE In-Memory
        • Creating or altering tables to use DSE In-Memory
        • Verifying table properties
        • Managing memory
        • Backing up and restoring data
      • DSE Tiered Storage
        • About DSE Tiered Storage
        • Configuring DSE Tiered Storage
        • Testing configurations
      • DSE Multi-Instance
        • About DSE Multi-Instance
        • DSE Multi-Instance architecture
        • Adding nodes to DSE Multi-Instance
        • DSE Multi-Instance commands
    • Securing
      • Security FAQ
      • Security checklists
      • Securing the environment
        • Securing ports
        • Securing the TMP directory
      • Authentication and authorization
        • Configuring authentication and authorization
          • About DSE Unified Authentication
            • Steps for new deployment
            • Steps for production environments
          • Configuring security keyspaces
          • Setting up Kerberos
            • Kerberos guidelines
            • Enabling JCE Unlimited
              • Removing AES-256
            • Preparing DSE nodes for Kerberos
              • DNS and NTP
              • krb5.conf
              • Principal
              • Keytab
          • Enabling authentication and authorization
            • Defining a Kerberos scheme
            • Defining an LDAP scheme
          • Configuring JMX authentication
          • Configuring cache settings
          • Securing schema information
        • Managing database access
          • About RBAC
          • Setting up logins and users
            • Adding a superuser login
            • Adding database users
            • LDAP users and groups
              • LDAP logins
              • LDAP groups
            • Kerberos principal logins
            • Setting up roles for applications
            • Binding a role to an authentication scheme
          • Assigning permissions
            • Database object permissions
              • Data resources
              • Functions and aggregate resources
              • Search indexes
              • Roles
              • Proxy login and execute
              • Authentication schemes
              • DSE Utilities (MBeans)
              • Analytic applications
              • Remote procedure calls
            • Separation of duties
            • Keyspaces and tables
            • Row Level Access Control (RLAC)
            • Search index permissions
            • DataStax Graph keyspace
            • Spark application permissions
            • DataStax Studio permissions
            • Remote procedure calls
            • DSE client-tool spark
            • JMX MBean permissions
            • Deny (denylist) db object permission
            • Restricting access to data
        • Providing credentials from DSE tools
          • About clients
          • Internal and LDAP authentication
            • Command line
            • File
            • Environment variables
            • Using CQLSH
          • Kerberos
            • JAAS configuration file location
            • Keytab
            • Ticket Cache
            • Spark jobs
            • SSTableLoader
            • Graph and gremlin-console
            • dsetool
            • CQLSH
          • Nodetool
          • JConsole
      • Auditing database activity
        • Enabling database auditing
        • Capturing DSE Search HTTP requests
        • Log formats
        • View events from DSE audit table
      • Transparent data encryption
        • About Transparent Data Encryption
        • Configuring local encryption
          • Setting up local encryption keys
          • Encrypting configuration file properties
          • Encrypting system resources
          • Encrypting tables
          • Rekeying existing data
          • Using tools with TDE-encrypted SSTables
          • Troubleshooting encryption key errors
        • Configuring KMIP encryption
        • Encrypting Search indexes
          • Encrypting new Search indexes
          • Encrypting existing Search indexes
          • Tuning encrypted Search indexes
        • Migrating encrypted tables from earlier versions
        • Bulk loading data between TDE-enabled clusters
      • Configuring SSL
        • Steps for configuring SSL
        • Creating SSL certificates, keystores, and truststores
          • Remote keystore provider
          • Local keystore files
        • Securing node-to-node connections
        • Securing client-to-node connections
          • Configuring JMX on the server side
          • nodetool, nodesync, dsetool, and Advanced Replication
          • JConsole (JMX)
          • SSTableloader
          • Connecting to SSL-enabled nodes using cqlsh
        • Enabling SSL encryption for DSEFS
        • Reference: SSL instruction variables
      • Securing Spark connections
    • Tooling Resources
      • Stress tools
        • cassandra-stress tool
          • About the cassandra-stress tool
          • Interpret output
          • counter_read
          • counter_write
          • help
          • legacy
          • mixed
          • print
          • read
          • user
          • version
          • write
      • fs-stress tool
      • OpsCenter services
        • Best Practice Service
        • Capacity Service
        • Repair Service
    • DSE Advanced Replication
      • About DSE Advanced Replication
      • Architecture
      • Traffic between the clusters
      • Terminology
      • Getting started
      • Keyspaces
      • Data types
      • Operations
      • CQL queries
      • Metrics
      • Managing invalid messages
      • Managing audit logs
      • Command line tool
        • connection options
        • channel create
        • channel update
        • channel delete
        • channel pause
        • channel resume
        • channel status
        • channel truncate
        • conf list
        • conf remove
        • conf update
        • destination create
        • destination update
        • destination delete
        • destination list
        • destination list-conf
        • destination remove-conf
        • help
        • metrics list
        • replog count
        • replog analyze-audit-log
    • DSE Analytics
      • Setting the replication factor for analytics keyspaces
      • DSE Analytics and Search integration
        • Using predicate push down on search indexes in Spark SQL
      • About DSE Analytics Solo
      • DSEFS (DataStax Enterprise file system)
        • About DSEFS
        • Enabling DSEFS
        • Disabling DSEFS
        • Configuring DSEFS
        • DSEFS commands
        • DSEFS compression
        • DSEFS authentication
        • DSEFS authorization
        • Using the DSEFS REST interface
        • Programmatic access to DSEFS
        • Hadoop FileSystem interface implemented by DseFileSystem
        • Using JMX to read DSEFS metrics
    • DSE Graph
      • About Graph
      • What’s new
      • Graph QuickStart
      • CQL as Graph
      • Convert CQL to Graph
      • Graph OLTP and OLAP
      • Graph data modeling
        • Data modeling introduction
        • Basic data modeling
        • Data modeling design
        • Advanced data modeling
      • Manage graph
        • Create a graph
        • Examine a graph
        • Drop a Graph
      • Manage schema
        • Create a Graph schema
        • Examine a schema
        • Create UDT schema
        • Create collection and tuple schema
        • Create vertex label schema
        • Create edge label schema
        • Indexing
        • Create index schema
        • Drop Graph schema
        • Vertex and edge IDs
      • Manage Graph data
        • Data formats
        • Insert data with Graph traversal API
        • DataStax Bulk Loader for Graph
          • Install DataStax Bulk Loader
          • DataStax Bulk Loader Examples
        • Load data with DseGraphFrames
        • Drop graph data
      • Discovering properties
      • Creating queries using traversals
        • Anatomy of a graph traversal
        • Use indexes
        • Use search indexes
        • Simple traversals
        • Geospatial traversals
        • Branching traversals
        • Recursive traversals
        • Path traversals
      • Graph analysis with DSE Analytics
        • DseGraphFrame overview
          • TinkerPop API support in DseGraphFrame
          • Mapping rules for DseGraphFrame
          • DseGraphFrame API reference
        • Export graphs to DSEFS
        • Import graphs
        • Northwind demo graph with Spark OLAP jobs
      • DSE Graph Operations
        • Configuring DSE Graph
          • Specifying DSE database and graph settings
          • Configuring security
        • Graph backup and restore
        • Graph import/export
        • Graph JMX metrics
      • Graph tools
      • Start Gremlin console
      • Graph Reference
        • Graph traversal API
          • addE
          • addV
          • io
          • property
          • with
        • Schema API
          • drop
          • describe
          • edgeLabel
          • type
          • vertexLabel
        • System API
          • Graph
          • GraphClassic
            • config
            • option
          • graphs
          • list
        • TinkerPop traversal API
          • TinkerPop framework
          • TinkerPop general information
          • TinkerPop predicates
            • eq
            • neq
            • lt
            • lte
            • gt
            • gte
            • inside
            • outside
            • between
            • within
            • without
            • Step-modulators
            • as
            • by
            • emit
            • from
            • option
            • times
            • to
            • until
            • Vertex step
            • out
            • in
            • both
            • outE
            • inE
            • bothE
            • outV
            • inV
            • bothV
            • otherV
            • addV
            • addE
            • property
            • mid-traversal V()
            • aggregate
            • and
            • barrier
            • branch
            • cap
            • choose
            • coalesce
            • constant
            • count
            • cyclicPath
            • dedup
            • drop
            • explain
          • fill
          • filter
          • flatMap
          • fold
          • group
          • groupCount
          • has
          • hasId
          • hasKey
          • hasLabel
          • hasNext
          • hasNot
          • hasValue
          • id
          • inject
          • is
          • key
          • label
          • limit
          • local
          • loops
          • map
          • match
          • math
          • max
          • mean
          • min
          • next
          • not
          • optional
          • or
          • order
          • pageRank
          • path
          • peerPressure
          • profile
          • project
          • properties
          • propertyMap
          • range
          • repeat
          • sack
          • sample
          • select
          • sideEffect
          • simplePath
          • skip
          • store
          • subGraph
          • sum
          • tail
          • timeLimit
          • toBulkSet
          • toList
          • toSet
          • tree
          • unfold
          • union
          • value
          • valueMap
          • values
          • where
        • DataStax Graph data types
        • Graph storage in Cassandra keyspace and table
    • DSE Search
      • About Search
        • Solr OSS differences
        • Unsupported search features
        • Solr Lucene limitations
      • Configuring Search
      • Search Reference
      • Search index configuration
      • Search index schema
      • Search config.yaml options
      • Adding/viewing index resources
      • Initial data migration
      • Shard routing for distributed queries
      • Deleting Solr data
      • Verifying index status
      • Backing up search indexes
      • Restoring a search node
      • Metrics (MBeans)
      • Uploading custom index resources
      • Solr admin UI configuration
      • Configuring Solr connector port
      • Required permissions for Solr admin UI
      • Changing Tomcat settings
      • Configuring Solr library path
      • Using the Solr HTTP API
      • Configuring HTTP for AJP
      • About the update request processor and field transformer
      • Field Input/Output Transformer (FIT) API
      • FIT class examples
      • Custom URP example
      • Interface custom field types
      • Deleting by query - best practice
      • Monitoring segments
      • Solr clients
      • Tutorials and demos
        • Creating a healthcare keyspace for tutorials
        • Multi-faceted search using healthcare data
        • Term and phrase searches using the wikipedia demo
        • Using secure cluster
        • Indexing and querying polygons
    • DSE Spark
      • About Spark
      • Using Spark with DataStax Enterprise
        • Starting Spark
        • Running Spark commands against a remote cluster
        • Accessing database data from Spark
          • Using the Spark session
          • Using the Spark context
          • Controlling automatic direct join optimizations in queries
          • Accessing the Spark session and context for applications running outside of DSE Analytics
          • Saving RDD data to DSE
          • Spark supported types
          • Loading external HDFS data into the database using Spark
        • Monitoring Spark with the web interface
        • Getting started with the Spark Cassandra Connector
        • Using DSE Spark with third party tools and integrations
      • Configuring Spark nodes
        • Automatic Spark Master election
        • Configuring Spark logging options
        • Running Spark processes as separate users
        • Configuring the Spark history server
        • Setting Spark Cassandra Connector-specific properties
        • Creating a DSE Analytics Solo datacenter
        • Spark JVMs and memory management
      • Using Spark modules with DataStax Enterprise
        • Getting started with Spark Streaming
          • Creating a Spark Structured Streaming sink using DSE
        • Using Spark SQL to query data
          • Querying database data using Spark SQL in Scala
          • Querying database data using Spark SQL in Java
          • Querying DSE Graph vertices and edges with Spark SQL
          • Using Spark predicate push down in Spark SQL queries
          • Supported syntax of Spark SQL
          • Inserting data into tables with static columns using Spark SQL
          • Running HiveQL queries using Spark SQL
          • Using the DataFrames API
          • Using the Spark SQL Thriftserver
        • Using SparkR with DataStax Enterprise
      • Using AlwaysOn SQL service
        • Enabling SSL for AlwaysOn SQL
        • Using authentication with AlwaysOn SQL
        • Simba JDBC Driver for Apache Spark
        • Simba ODBC Driver for Apache Spark
        • Connecting to AlwaysOn SQL server using Beeline
      • Accessing DataStax Enterprise data from external Spark clusters
        • Overview of BYOS support in DataStax Enterprise
        • Generating the BYOS configuration file
        • Connecting to DataStax Enterprise using the Spark shell on an external Spark cluster
        • Generating Spark SQL schema files
        • Starting Spark SQL Thrift Server with Kerberos
      • Using the Spark Jobserver
      • Spark examples
        • Portfolio Manager demo using Spark
        • Running the Weather Sensor demo
        • Running the Wikipedia demo with SearchAnalytics
        • Running the Spark MLlib demo application
        • Running the http_receiver demo
        • Using DSE geometric types in Spark
        • Importing a text file into a table
        • Running spark-submit job with internal authentication
      • DSE Spark Connector API documentation

Search index config

  • DataStax recommends using the CQL CREATE SEARCH INDEX and ALTER SEARCH INDEX CONFIG commands to manage search index configuration.

  • dsetool commands can also be used to manage search indexes.
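
For example, a sketch of the dsetool equivalent of reloading a search index (shown with the demo.health_data table used in the steps below; available options can vary by DSE version, so verify against dsetool help before use):

dsetool reload_core demo.health_data reindex=false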

Changing search index config

To create and make changes to the search index config, follow these basic steps:

  1. Create a search index. For example:

    CREATE SEARCH INDEX ON demo.health_data;
  2. Alter the search index. For example:

    ALTER SEARCH INDEX CONFIG ON demo.health_data SET autoCommitTime = 30000;
  3. Optionally view the XML of the pending search index. For example:

    DESCRIBE PENDING SEARCH INDEX CONFIG on demo.health_data;
  4. Make the pending changes active. For example:

    RELOAD SEARCH INDEX ON demo.health_data;

Sample search index config

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<config>
  <abortOnConfigurationError>${solr.abortOnConfigurationError:true}</abortOnConfigurationError>
  <luceneMatchVersion>LUCENE_6_0_0</luceneMatchVersion>
  <dseTypeMappingVersion>2</dseTypeMappingVersion>
  <directoryFactory class="solr.StandardDirectoryFactory" name="DirectoryFactory"/>
  <indexConfig>
    <rt>false</rt>
    <rtOffheapPostings>true</rtOffheapPostings>
    <useCompoundFile>false</useCompoundFile>
    <reopenReaders>true</reopenReaders>
    <deletionPolicy class="solr.SolrDeletionPolicy">
      <str name="maxCommitsToKeep">1</str>
      <str name="maxOptimizedCommitsToKeep">0</str>
    </deletionPolicy>
    <infoStream file="INFOSTREAM.txt">false</infoStream>
  </indexConfig>
  <jmx/>
  <updateHandler class="solr.DirectUpdateHandler2">
    <autoSoftCommit>
      <maxTime>10000</maxTime>
    </autoSoftCommit>
  </updateHandler>
  <query>
    <maxBooleanClauses>1024</maxBooleanClauses>
    <filterCache class="solr.SolrFilterCache" highWaterMarkMB="2048" lowWaterMarkMB="1024"/>
    <enableLazyFieldLoading>true</enableLazyFieldLoading>
    <useColdSearcher>true</useColdSearcher>
    <maxWarmingSearchers>16</maxWarmingSearchers>
  </query>
  <requestDispatcher handleSelect="true">
    <requestParsers enableRemoteStreaming="true" multipartUploadLimitInKB="2048000"/>
    <httpCaching never304="true"/>
  </requestDispatcher>
  <requestHandler class="solr.SearchHandler" default="true" name="search">
    <lst name="defaults">
      <int name="rows">10</int>
    </lst>
  </requestHandler>
  <requestHandler class="com.datastax.bdp.search.solr.handler.component.CqlSearchHandler" name="solr_query">
    <lst name="defaults">
      <int name="rows">10</int>
    </lst>
  </requestHandler>
  <requestHandler class="solr.UpdateRequestHandler" name="/update"/>
  <requestHandler class="solr.UpdateRequestHandler" name="/update/csv" startup="lazy"/>
  <requestHandler class="solr.UpdateRequestHandler" name="/update/json" startup="lazy"/>
  <requestHandler class="solr.FieldAnalysisRequestHandler" name="/analysis/field" startup="lazy"/>
  <requestHandler class="solr.DocumentAnalysisRequestHandler" name="/analysis/document" startup="lazy"/>
  <requestHandler class="solr.admin.AdminHandlers" name="/admin/"/>
  <requestHandler class="solr.PingRequestHandler" name="/admin/ping">
    <lst name="invariants">
      <str name="qt">search</str>
      <str name="q">solrpingquery</str>
    </lst>
    <lst name="defaults">
      <str name="echoParams">all</str>
    </lst>
  </requestHandler>
  <requestHandler class="solr.DumpRequestHandler" name="/debug/dump">
    <lst name="defaults">
      <str name="echoParams">explicit</str>
      <str name="echoHandler">true</str>
    </lst>
  </requestHandler>
  <admin>
    <defaultQuery>*:*</defaultQuery>
  </admin>
</config>

To manage the index configuration with CQL, use the configuration element shortcuts in CQL commands.

Configuration elements are listed alphabetically by shortcut. Each XML element is shown with its start tag; an ellipsis indicates that other elements or attributes are not shown.
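
For example, to inspect an index configuration (using the wiki.solr index from the examples below), DESCRIBE the active config for the live settings and the pending config for changes that have not yet been reloaded:

DESCRIBE ACTIVE SEARCH INDEX CONFIG ON wiki.solr;
DESCRIBE PENDING SEARCH INDEX CONFIG ON wiki.solr;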

autoCommitTime

Defines the time interval between updates to the search index with the most recent data after an INSERT, UPDATE, or DELETE. By default, changes are automatically committed every 10000 milliseconds. To change the time interval between updates:

  1. Set auto commit time on the pending search index:

    ALTER SEARCH INDEX CONFIG ON wiki.solr SET autoCommitTime = 30000;
  2. You can view the pending search config:

    DESCRIBE PENDING SEARCH INDEX CONFIG on wiki.solr;

    The resulting XML shows the maximum time between updates is 30000 milliseconds:

    <updateHandler class="solr.DirectUpdateHandler2">
        <autoSoftCommit>
          <maxTime>30000</maxTime>
        </autoSoftCommit>
      </updateHandler>
  3. To make the pending changes active, reload the search index:

    RELOAD SEARCH INDEX ON wiki.solr;

See Tuning search for maximum indexing throughput.

defaultQueryField

Name of the default field to query. No default is set. To set the field that is used when the query does not specify a field, see Setting up default query field.
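
For example, this sketch follows the same SET shortcut pattern as the other elements, using a hypothetical title field in the wiki.solr index:

ALTER SEARCH INDEX CONFIG ON wiki.solr SET defaultQueryField = 'title';
RELOAD SEARCH INDEX ON wiki.solr;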

directoryFactory

The directory factory to use for search indexes. Encryption is enabled per search index. To enable encryption for a search index, change the class for directoryFactory to EncryptedFSDirectoryFactory.

  1. Enable encryption on the pending search index:

    ALTER SEARCH INDEX CONFIG ON wiki.solr SET directoryFactory = EncryptedFSDirectoryFactory;
  2. You can view the pending search config:

    DESCRIBE PENDING SEARCH INDEX CONFIG on wiki.solr;

    The resulting XML shows that encryption is enabled:

    <directoryFactory class="solr.EncryptedFSDirectoryFactory" name="DirectoryFactory"/>
  3. To make the pending changes active, reload the search index:

    RELOAD SEARCH INDEX ON wiki.solr;

Even though additional properties are available to tune encryption, DataStax recommends using the default settings.

filterCacheLowWaterMark

Default is 1024 MB. See the description under filterCacheHighWaterMark.

filterCacheHighWaterMark

Default is 2048 MB.

The DSE Search configurable filter cache reliably bounds filter cache memory usage for a search index. This implementation contrasts with the default Solr implementation, which bounds filter cache usage per segment. SolrFilterCache works by evicting cache entries after the configured per-search-index (per-core) high watermark is reached, and stopping eviction after the configured low watermark is reached.

  • The filter cache is cleared when the search index is reloaded.

  • SolrFilterCache does not support auto-warming.

SolrFilterCache defaults to off-heap memory. In general, the larger the index, the larger the filter cache should be. A good default is 1 to 2 GB. For indexes of about 1 billion documents per node, set the high watermark to 4 to 5 GB.

  1. To change cache eviction for a large index, set the low and high values one at a time:

    ALTER SEARCH INDEX CONFIG ON wiki.solr SET filterCacheHighWaterMark = 5000;
    ALTER SEARCH INDEX CONFIG ON wiki.solr SET filterCacheLowWaterMark = 2000;
  2. View the pending search index config:

    <query>
    ...
        <filterCache class="solr.SolrFilterCache" highWaterMarkMB="5000" lowWaterMarkMB="2000"/>
    ...
    </query>
  3. To make the pending changes active, reload the search index:

    RELOAD SEARCH INDEX ON wiki.solr;
mergeFactor

When a new segment causes the number of lowest-level segments to exceed the merge factor value, then those segments are merged together to form a single large segment. When the merge factor is 10, each merge results in the creation of a single segment that is about ten times larger than each of its ten constituents. When there are 10 of these larger segments, then they in turn are merged into an even larger single segment. Default is 10.

  1. To change the number of segments to merge at one time:

    ALTER SEARCH INDEX CONFIG ON wiki.solr SET mergeFactor = 5;
  2. View the pending search index config:

    <indexConfig>
    ...
        <mergeFactor>5</mergeFactor>
    ...
      </indexConfig>
  3. To make the pending changes active, reload the search index:

    RELOAD SEARCH INDEX ON wiki.solr;
mergeMaxThreadCount

Must be configured together with mergeMaxMergeCount. The number of concurrent merges that Lucene can perform for the search index. The default mergeScheduler settings are set automatically. Do not adjust this setting.

Default: half the number of tpc_cores

mergeMaxMergeCount

Must be configured together with mergeMaxThreadCount. The number of pending merges (active and backlogged) that can accumulate before segment merging starts to throttle incoming writes. The default mergeScheduler settings are set automatically. Do not adjust this setting.

Default: 2 × mergeMaxThreadCount

ramBufferSize

The index RAM buffer size in megabytes (MB). The RAM buffer holds uncommitted documents. A larger RAM buffer reduces the number of flushes and produces larger segments when a flush occurs. Fewer flushes reduce I/O pressure, which is ideal for write-heavy workloads.

For example, adjust the ramBufferSize when you configure live indexing:

ALTER SEARCH INDEX CONFIG ON wiki.solr SET autoCommitTime = 100;
ALTER SEARCH INDEX CONFIG ON wiki.solr SET realtime = true;
ALTER SEARCH INDEX CONFIG ON wiki.solr SET ramBufferSize = 2048;
RELOAD SEARCH INDEX ON wiki.solr;

Default: 512

realtime

Enables live indexing to increase indexing throughput. Enable live indexing on only one search index per cluster. Live indexing, also called real-time (RT) indexing, supports searching directly against the Lucene RAM buffer and more frequent, cheaper soft commits, which provide earlier visibility of newly indexed data.

Live indexing requires a larger RAM buffer and more memory usage than an otherwise equivalent NRT setup. See Tune RT indexing.

Configuration elements without shortcuts

To specify configuration elements that do not have shortcuts, you can specify the XML path to the setting and separate child elements using a period.

deleteApplicationStrategy

Controls how deleted documents are retrieved when deletes are applied. seekexact is the safe default for most workloads; for slightly better performance in some cases, you can try seekceiling.

Valid case-insensitive values are:

  • seekexact

    Uses bloom filters to avoid reading from most segments. Use when memory is limited and the unique key field data does not fit into memory.

  • seekceiling

    More performant when documents are deleted/inserted into the database with sequential keys, because this strategy can stop reading from segments when it is known that terms can no longer appear.

Default: seekexact
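
There is no shortcut for this element, so the dotted XML path form described above applies. The sketch below assumes the element resides directly under indexConfig; verify the path against your index's XML (DESCRIBE PENDING SEARCH INDEX CONFIG) before applying:

ALTER SEARCH INDEX CONFIG ON wiki.solr SET indexConfig.deleteApplicationStrategy = 'seekceiling';
RELOAD SEARCH INDEX ON wiki.solr;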

mergePolicyFactory

The AutoExpungeDeletesTieredMergePolicy custom merge policy is based on TieredMergePolicy. This policy cleans up large segments by merging them when deletes reach the percentage threshold. Only one auto-expunge merge occurs at a time. Use it for large indexes where the largest segments are not being merged because of deletes. To determine whether this merge setting is appropriate for your workload, view the segments on the Solr Segment Info screen.

When set, the XML is described as:

<indexConfig>
  <mergePolicyFactory class="org.apache.solr.index.AutoExpungeDeletesTieredMergePolicyFactory">
    <int name="maxMergedSegmentMB">5000</int>
    <int name="forceMergeDeletesPctAllowed">25</int>
    <bool name="mergeSingleSegments">true</bool>
  </mergePolicyFactory>
</indexConfig>

To extend TieredMergePolicy to support automatic removal of deletes:

  1. To enable automatic removal of deletes, set the custom policy:

    ALTER SEARCH INDEX CONFIG ON wiki.solr SET indexConfig.mergePolicyFactory[@class='org.apache.solr.index.AutoExpungeDeletesTieredMergePolicyFactory'].bool[@name='mergeSingleSegments'] = true;
  2. Set the maximum segment size in MB:

    ALTER SEARCH INDEX CONFIG ON wiki.solr SET indexConfig.mergePolicyFactory[@class='org.apache.solr.index.AutoExpungeDeletesTieredMergePolicyFactory'].int[@name='maxMergedSegmentMB'] = 5000;
  3. Set the percentage threshold for deleting from the large segments:

    ALTER SEARCH INDEX CONFIG ON wiki.solr SET indexConfig.mergePolicyFactory[@class='org.apache.solr.index.AutoExpungeDeletesTieredMergePolicyFactory'].int[@name='forceMergeDeletesPctAllowed'] = 25;

    If mergeFactor is in the existing index config, you must drop it from the search index before you alter the table to support automatic removal of deletes:

    ALTER SEARCH INDEX CONFIG ON wiki.solr DROP indexConfig.mergePolicyFactory;
parallelDeleteTasks

Regulates how many tasks are created to apply deletes in parallel during soft/hard commits. Supported for RT and NRT indexing. Specify a number greater than 0.

Leave parallelDeleteTasks at the default value unless write load causes problems in a mixed read/write workload. If occasional spikes in write utilization degrade read performance, lower this value.

Default: the number of available processors
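
As with the other elements in this section, there is no shortcut, so the dotted XML path form applies. This sketch assumes parallelDeleteTasks sits under indexConfig; confirm the path against your index's XML before applying:

ALTER SEARCH INDEX CONFIG ON wiki.solr SET indexConfig.parallelDeleteTasks = 4;
RELOAD SEARCH INDEX ON wiki.solr;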

