Initializing multiple datacenters per workload type

In this scenario, a mixed-workload cluster has more than one datacenter for each type of workload. For example, the following ten-node cluster spans five datacenters, whereas a cluster with a single datacenter per workload type has only one datacenter for each node type.

  • DC1 = 2 DSE Analytics nodes

  • DC2 = 2 Transactional nodes

  • DC3 = 2 DSE Search nodes

  • DC4 = 2 DSE Analytics nodes

  • DC5 = 2 Transactional nodes

The ten-node cluster spans two racks across five datacenters. Applications in each datacenter use a default consistency level of LOCAL_QUORUM. One node per rack serves as a seed node.

Table 1. Node IP address, type, and seed

Node     IP address      Type            Seed   Rack
node0    110.82.155.0    Transactional   ✓      RAC1
node1    110.82.155.1    Transactional          RAC1
node2    110.54.125.1    Transactional          RAC2
node3    110.55.120.1    Transactional          RAC1
node4    110.54.125.2    Analytics              RAC1
node5    110.54.155.2    Analytics       ✓      RAC2
node6    110.82.155.3    Analytics              RAC1
node7    110.55.120.2    Analytics              RAC1
node8    110.54.125.3    Search                 RAC1
node9    110.82.155.4    Search                 RAC2

Where is the cassandra-rackdc.properties file?

The location of the cassandra-rackdc.properties file depends on the type of installation:

  • Package installations + Installer-Services installations: /etc/dse/cassandra/cassandra-rackdc.properties

  • Tarball installations + Installer-No Services installations: <installation_location>/resources/cassandra/conf/cassandra-rackdc.properties

Where is the cassandra-topology.properties file?

The location of the cassandra-topology.properties file depends on the type of installation:

  • Package installations + Installer-Services installations: /etc/dse/cassandra/cassandra-topology.properties

  • Tarball installations + Installer-No Services installations: <installation_location>/resources/cassandra/conf/cassandra-topology.properties

Where is the cassandra.yaml file?

The location of the cassandra.yaml file depends on the type of installation:

  • Package installations + Installer-Services installations: /etc/dse/cassandra/cassandra.yaml

  • Tarball installations + Installer-No Services installations: <installation_location>/resources/cassandra/conf/cassandra.yaml

Where is the dse.yaml file?

The location of the dse.yaml file depends on the type of installation:

  • Package installations + Installer-Services installations: /etc/dse/dse.yaml

  • Tarball installations + Installer-No Services installations: <installation_location>/resources/dse/conf/dse.yaml

Prerequisites

Complete the prerequisite tasks outlined in Initializing a DataStax Enterprise cluster to prepare the environment.

If the new datacenter uses existing nodes from another datacenter or cluster, complete the following steps to ensure that old data does not interfere with the new cluster:

  1. If the nodes are behind a firewall, open the required ports for internal/external communication.

  2. Decommission each node added to the new datacenter.

  3. Clear the data from DataStax Enterprise (DSE) to completely remove application directories.

  4. Install DSE on each node.
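
Steps 2 and 3 can be scripted. The following is a minimal sketch for one reused node, assuming a package installation with the default data directories; adjust paths and service commands for tarball installations:

    nodetool decommission    # remove the node from its current datacenter or cluster
    sudo service dse stop    # stop DSE before clearing data
    # clear old application data (default package locations)
    sudo rm -rf /var/lib/cassandra/data/* /var/lib/cassandra/commitlog/* \
        /var/lib/cassandra/saved_caches/* /var/lib/cassandra/hints/*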

Procedure

  1. Complete the following steps to prevent client applications from prematurely connecting to the new datacenter, and to ensure that the consistency level for reads or writes does not query the new datacenter:

    If client applications, including DSE Search and DSE Analytics, are not properly configured, they might connect to the new datacenter before it is online. Incorrect configuration results in connection exceptions, timeouts, and/or inconsistent data.

    1. Configure client applications to use the DCAwareRoundRobinPolicy.

    2. Direct clients to an existing datacenter. Otherwise, clients might try to access the new datacenter, which might not have any data.

    3. If using the QUORUM consistency level, change to LOCAL_QUORUM.

    4. If using the ONE consistency level, set to LOCAL_ONE.

    See the programming instructions for your driver.
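
    Driver configuration is language-specific; as a point of comparison, the equivalent settings for an interactive cqlsh session are sketched below, assuming a node in an existing datacenter from Table 1:

    cqlsh 110.82.155.0
    cqlsh> CONSISTENCY LOCAL_QUORUM;
    Consistency level set to LOCAL_QUORUM.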

  2. Configure every keyspace that currently uses SimpleStrategy to use the NetworkTopologyStrategy replication strategy instead, including (but not restricted to) the following keyspaces.

    NetworkTopologyStrategy is required so that replication can be controlled per datacenter; this change is only needed for keyspaces that were previously created with SimpleStrategy.

    1. Use ALTER KEYSPACE to change the keyspace replication strategy to NetworkTopologyStrategy for the following keyspaces.

      ALTER KEYSPACE keyspace_name WITH REPLICATION =
      {'class' : 'NetworkTopologyStrategy', 'ExistingDC1' : 3};
      • DSE security: system_auth, dse_security

      • DSE performance: dse_perf

      • DSE analytics: dse_leases, dsefs

      • System resources: system_traces, system_distributed

      • OpsCenter (if installed)

      • All keyspaces created by users
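
      To apply this change to each of the keyspaces above in one pass, a scripted sketch like the following can be used; the datacenter name ExistingDC1, the replication factor 3, and the keyspace list are placeholders to adapt to your cluster:

      for ks in system_auth dse_security dse_perf dse_leases dsefs \
                system_traces system_distributed; do
          cqlsh -e "ALTER KEYSPACE $ks WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'ExistingDC1': 3};"
      done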

    2. Use DESCRIBE SCHEMA to check the replication strategy of keyspaces in the cluster. Ensure that any existing keyspaces use the NetworkTopologyStrategy replication strategy.

      DESCRIBE SCHEMA;
      CREATE KEYSPACE dse_perf WITH replication =
      {'class': 'NetworkTopologyStrategy', 'DC1': '3'}  AND durable_writes = true;
      ...
      
      CREATE KEYSPACE dse_leases WITH replication =
      {'class': 'NetworkTopologyStrategy', 'DC1': '3'}  AND durable_writes = true;
      ...
      
      CREATE KEYSPACE dsefs WITH replication =
      {'class': 'NetworkTopologyStrategy', 'DC1': '3'}  AND durable_writes = true;
      ...
      
      CREATE KEYSPACE dse_security WITH replication =
      {'class': 'NetworkTopologyStrategy', 'DC1': '3'}  AND durable_writes = true;
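
      As a quick check, any keyspace that still uses SimpleStrategy can be found by filtering the schema output; a sketch (cqlsh connection options depend on your environment):

      cqlsh -e "DESCRIBE SCHEMA;" | grep SimpleStrategy
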
  3. In the new datacenter, install DSE on each new node. Do not start the service or restart the node.

    Use the same version of DSE on all nodes in the cluster.

  4. Configure properties in cassandra.yaml on each new node, following the configuration of the other nodes in the cluster.

    Use the yaml_diff tool to review and make appropriate changes to the cassandra.yaml and dse.yaml configuration files.

    1. Configure node properties:

      • -seeds: internal_IP_address of each seed node

        Include at least one seed node from each datacenter. DataStax recommends more than one seed node per datacenter, in more than one rack. Do not make all nodes seed nodes.

      • auto_bootstrap: true

        This setting has been removed from the default configuration, but, if present, should be set to true.

      • listen_address: empty

        If not set, DSE asks the system for the local address, which is associated with its host name. In some cases, DSE does not produce the correct address, which requires specifying the listen_address.

      • endpoint_snitch: snitch

        See endpoint_snitch and snitches.

        Do not use the DseSimpleSnitch (default). The DseSimpleSnitch is used only for single-datacenter deployments (or single-zone deployments in public clouds), and does not recognize datacenter or rack information.

        Table 2. Snitch configuration files

        Snitch                            Configuration file
        GossipingPropertyFileSnitch       cassandra-rackdc.properties
        Amazon EC2 single-region snitch   cassandra-rackdc.properties
        Amazon EC2 multi-region snitch    cassandra-rackdc.properties
        Google Cloud Platform snitch      cassandra-rackdc.properties
        PropertyFileSnitch                cassandra-topology.properties

      • If using a cassandra.yaml or dse.yaml file from a previous version, check the Upgrade Guide for removed settings.
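
      Pulled together, the corresponding cassandra.yaml entries on a new node might look like the following sketch, which assumes the two seed nodes from Table 1 and the GossipingPropertyFileSnitch:

      # cassandra.yaml (excerpt)
      seed_provider:
          - class_name: org.apache.cassandra.locator.SimpleSeedProvider
            parameters:
                - seeds: "110.82.155.0,110.54.155.2"
      auto_bootstrap: true
      listen_address:
      endpoint_snitch: GossipingPropertyFileSnitch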

    2. Configure node architecture (all nodes in the datacenter must use the same type):

      Virtual node (vnode) allocation algorithm settings
      • Set num_tokens to 8 (recommended).

      • Set allocate_tokens_for_local_replication_factor to the target replication factor for keyspaces in the new datacenter. If the keyspace RF varies, alternate the settings to use all the replication factors.

      • Comment out the initial_token property.

      DataStax recommends not using vnodes with DSE Search. However, if you decide to use vnodes with DSE Search, do not use more than 8 vnodes and ensure that the allocate_tokens_for_local_replication_factor option in cassandra.yaml is correctly configured for your environment.

      For more information, refer to Virtual node (vnode) configuration.
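
      For example, with a target replication factor of 3 for the keyspaces in the new datacenter (an assumed value for this sketch), the vnode settings in cassandra.yaml would be:

      # cassandra.yaml (excerpt) - vnode allocation settings
      num_tokens: 8
      allocate_tokens_for_local_replication_factor: 3
      # initial_token: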

      Single-token architecture settings
      • Generate the initial token for each node and set this value for the initial_token property.

      See Adding or replacing single-token nodes for more information.

      • Comment out both num_tokens and allocate_tokens_for_local_replication_factor.
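
      As a sketch, evenly spaced tokens for the default Murmur3Partitioner can be generated with a one-liner like the following (here for the two nodes of one datacenter in this example); each resulting value is then assigned to one node's initial_token:

      python -c "num=2; print([str(((2**64 // num) * i) - 2**63) for i in range(num)])"
      # -> ['-9223372036854775808', '0']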

  5. In the cassandra-rackdc.properties (GossipingPropertyFileSnitch) or cassandra-topology.properties (PropertyFileSnitch) file, assign datacenter and rack names to the IP addresses of each node, and assign a default datacenter name and rack name for unknown nodes.

    Migration information: The GossipingPropertyFileSnitch always loads cassandra-topology.properties when the file is present. Remove the file from each node on any new cluster, or any cluster migrated from the PropertyFileSnitch.

    # Transactional Node IP=Datacenter:Rack
    110.82.155.0=DC_Transactional:RAC1
    110.82.155.1=DC_Transactional:RAC1
    110.54.125.1=DC_Transactional:RAC2
    110.54.125.2=DC_Analytics:RAC1
    110.54.155.2=DC_Analytics:RAC2
    110.82.155.3=DC_Analytics:RAC1
    110.54.125.3=DC_Search:RAC1
    110.82.155.4=DC_Search:RAC2
    
    # default for unknown nodes
    default=DC1:RAC1

    After making any changes in the configuration files, you must restart the node for the changes to take effect.

  6. Make the following changes in the existing datacenters.

    1. On nodes in the existing datacenters, update the -seeds property in cassandra.yaml to include the seed nodes in the new datacenter.

    2. Add the new datacenter definition to the properties file for the type of snitch used in the cluster (cassandra-rackdc.properties or cassandra-topology.properties). If changing snitches, see Switching snitches.
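
    For example, on an existing node the updated seed list in cassandra.yaml might look like the following sketch, where new_DC_seed_IP is a placeholder for a seed node chosen in the new datacenter:

    # cassandra.yaml (excerpt) on existing nodes
    seed_provider:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              - seeds: "110.82.155.0,110.54.155.2,new_DC_seed_IP"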

  7. After you have installed and configured DSE on all nodes, start the seed nodes one at a time, and then start the rest of the nodes:

    • Package installations: Starting DataStax Enterprise as a service

    • Tarball installations: Starting DataStax Enterprise as a stand-alone process
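
    The corresponding commands are sketched below; <installation_location> is a placeholder for a tarball install directory:

    # package installations
    sudo service dse start

    # tarball installations
    <installation_location>/bin/dse cassandra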

  8. Continue starting DSE, rotating through the racks, until all the nodes are up.

  9. After all nodes are running in the cluster and the client applications are datacenter aware, use cqlsh to alter the keyspaces to add the desired replication in the new datacenter.

    ALTER KEYSPACE keyspace_name WITH REPLICATION =
    {'class' : 'NetworkTopologyStrategy', 'ExistingDC1' : 3, 'NewDC2' : 2};

    If client applications, including DSE Search and DSE Analytics, are not properly configured, they might connect to the new datacenter before it is online. Incorrect configuration results in connection exceptions, timeouts, and/or inconsistent data.

  10. Run nodetool rebuild on each node in the new datacenter, specifying the datacenter to rebuild from. This step replicates the data to the new datacenter in the cluster.

    nodetool rebuild -- datacenter_name

    You must specify an existing datacenter on the command line; otherwise, the new nodes appear to rebuild successfully but might not contain all of the expected data.

    Requests to the new datacenter with LOCAL_ONE or ONE consistency levels can fail if the existing datacenters are not completely in-sync.

    1. Run nodetool rebuild on one node at a time to reduce the impact on the existing cluster (see the sketch after this list).

    2. Alternatively, run the command on multiple nodes simultaneously if the cluster can handle the extra I/O and network pressure.
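
    As an illustrative sketch that follows the node addresses and datacenter names used above, rebuilding two new nodes one at a time from an existing datacenter named DC_Transactional:

    # requires remote JMX access; alternatively run nodetool rebuild locally on each node
    nodetool -h 110.54.125.3 rebuild -- DC_Transactional
    nodetool -h 110.82.155.4 rebuild -- DC_Transactional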

  11. Check that your cluster is up and running:

    dsetool status

    If the cluster has problems starting, search the Support Knowledge Center for DSE startup troubleshooting and related articles.

  12. Repeat step 3 through step 11 to add the remaining datacenters to the cluster.

Results

The datacenters in the cluster are now replicating with each other.

DC: Cassandra   Workload: Cassandra  Graph: no
==============================================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address         Load        Tokens    Owns    Host ID             Rack
UN 110.82.155.0    21.33 KB    256       50.2%   a9fa31c7-f3c0-...   RAC1
UN 110.82.155.1    21.33 KB    256       49.8%   f5bb416c-db51-...   RAC1

DC: Analytics
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address         Load        Owns      Host ID               Tokens         Rack
UN 110.54.125.2    28.44 KB    50.2%     e2451cdf-f070- ...    -922337....    RAC1
UN 110.54.155.2    44.47 KB    49.8%     f9fa427c-a2c5- ...    30745512...    RAC2

DC: Solr
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address         Load        Owns      Host ID               Tokens         Rack
UN 110.54.125.3    15.44 KB    50.2%     e2451cdf-f070- ...    9243578....    RAC1
UN 110.82.155.4    18.78 KB    49.8%     e2451cdf-f070- ...    10000          RAC2

DC: Cassandra2   Workload: Cassandra  Graph: no
==============================================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address         Load        Tokens    Owns    Host ID             Rack
UN 110.54.125.1    21.33 KB    256       16.7%   b836748f-c94f-...   RAC2
UN 110.55.120.1    21.33 KB    256       16.7%   b354798g-c94f-...   RAC2

DC: Analytics2
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address         Load        Owns      Host ID               Tokens         Rack
UN 110.82.155.3    54.33 KB    50.2%     b9fc31c7-3bc0- ...    45674488...    RAC1
UN 110.55.120.2    54.33 KB    49.8%     b8gd45e4-3bc0- ...    45674488...    RAC2

Next steps

  • Initializing single-token architecture datacenters

  • Setting security keyspaces replication factors
