Anti-patterns

The following implementation or design patterns are either ineffective or counterproductive, or both, in Apache Cassandra™ production installations. Correct patterns are suggested in most cases.

Storage Area Network (SAN)

DataStax strongly recommends against using traditional SAN storage, meaning aggregated storage that is external to the server, for on-premises deployments.

Storage in clouds works very differently and is not an issue. Virtual SAN storage, which is local to compute nodes, is less susceptible to the issues described below.

Although used frequently in Enterprise IT environments, traditional SAN storage has proven to be a difficult and expensive architecture to use with distributed databases for a variety of reasons, including:

  • Traditional SAN return on investment (ROI) does not scale along with that of DataStax databases with regard to capital expenses and engineering resources.

  • In distributed architectures, traditional SAN generally introduces a bottleneck and single point of failure because the database’s I/O frequently surpasses the ability of the array controller to keep pace.

  • External storage, even when used with a high-speed network and SSD, adds latency for all operations.

  • Heap pressure is increased because pending I/O operations take longer.

  • When the SAN transport shares operations with internal and external database traffic, it can saturate the network and lead to network availability problems.

Taken together, these factors can create problems that are difficult to resolve in production. In particular, new users deploying DataStax databases with an external SAN must first develop adequate methods and allocate sufficient engineering resources to identify these issues before they cause problems in production. For example, methods are needed to track all key scaling factors, such as operational rates and SAN fiber saturation.

If you still intend to use traditional SAN, contact the DataStax Services team or the DataStax Luna team for detailed information about why the use of SAN is not recommended.

Network Attached Storage (NAS)

DataStax does not recommend storing SSTables on a Network-Attached Storage (NAS) device. Using a NAS device often results in network-related bottlenecks caused by high levels of I/O wait time on reads and writes. The causes of these bottlenecks include:

  • Router latency.

  • The Network Interface Card (NIC) in the node.

  • The NIC in the NAS device.

If you are required to use NAS for your environment, contact the DataStax Services team or the DataStax Luna team.

CPU Frequency Scaling

Recent Linux systems include a feature called CPU frequency scaling or CPU speed scaling. It allows a server’s clock speed to be dynamically adjusted so that the server can run at lower clock speeds when the demand or load is low. This reduces the server’s power consumption and heat output (which significantly impacts cooling costs). Unfortunately, this behavior has a detrimental effect on servers running DataStax products because throughput can get capped at a lower rate.

For more information, see Disable CPU frequency scaling for DSE.
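
To verify the current behavior on a node, a minimal sketch like the following (assuming a Linux host that exposes the standard cpufreq sysfs interface) prints the active scaling governor for each CPU; servers running DataStax products should generally use the performance governor rather than an on-demand or powersave governor.

    # Minimal sketch: report the CPU frequency scaling governor for each CPU.
    # Assumes a Linux host exposing the standard cpufreq sysfs interface.
    import glob

    def report_scaling_governors():
        paths = sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor"))
        if not paths:
            print("cpufreq sysfs interface not found; frequency scaling may be disabled or unavailable")
            return
        for path in paths:
            cpu = path.split("/")[5]           # e.g. "cpu0"
            with open(path) as f:
                governor = f.read().strip()    # e.g. "powersave" or "performance"
            note = "" if governor == "performance" else "  <-- consider setting to 'performance'"
            print(f"{cpu}: {governor}{note}")

    if __name__ == "__main__":
        report_scaling_governors()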

Rack Feature in Single-Token Architecture Deployments

This information applies only to single-token architecture, not to virtual nodes. See About data distribution and replication for DSE, or "Data distribution and replication" for Apache Cassandra (3.x | 3.0 | 2.2 | 2.1).

Defining one rack for the entire cluster is the simplest and most common implementation. Multiple racks should be avoided for the following reasons:

  • The requirement that racks be organized in an alternating order is often ignored or forgotten. This ordering allows the data to be distributed safely and appropriately.

  • The rack information is often not used effectively. For example, clusters are sometimes set up with as many racks as nodes, or in similar configurations that provide no benefit.

  • Expanding a cluster when using racks can be tedious. The procedure typically involves several node moves and must ensure that racks are distributing data correctly and evenly. When clusters need immediate expansion, racks should be the last concern.

To set up racks correctly:

  • Use the same number of nodes in each rack.

  • Alternatively, start with a single rack so that cluster expansions remain quick and fully functional. Once the cluster is stable, you can swap nodes and make the appropriate moves to ensure that nodes are placed in the ring in an alternating fashion with respect to the racks.

SELECT ... IN or Index Lookups

SELECT ... IN queries and index lookups (formerly secondary indexes) should be avoided except for specific scenarios.
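
As an illustration, here is a hedged sketch with the DataStax Python driver (the keyspace, table, and column names are hypothetical). A multi-partition SELECT ... IN forces a single coordinator to gather data from many partitions in one request, while issuing one small asynchronous query per partition key lets a token-aware driver send each request directly to a replica that owns the data, and lets requests fail and retry independently.

    # Hedged sketch using the DataStax Python driver; names below are hypothetical.
    from cassandra.cluster import Cluster

    cluster = Cluster(["127.0.0.1"])
    session = cluster.connect("demo_ks")

    user_ids = [1, 2, 3, 4, 5]

    # Anti-pattern: one coordinator collects many partitions in a single request.
    rows = session.execute(
        "SELECT user_id, name FROM users WHERE user_id IN (1, 2, 3, 4, 5)"
    )

    # Preferred: one query per partition, issued asynchronously.
    stmt = session.prepare("SELECT user_id, name FROM users WHERE user_id = ?")
    futures = [session.execute_async(stmt, [uid]) for uid in user_ids]
    results = [future.result().one() for future in futures]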

Using the Byte Ordered Partitioner

The Byte Ordered Partitioner (BOP) is not recommended.

Use virtual nodes (vnodes) instead. Vnodes allow each node to own a large number of small token ranges distributed throughout the cluster, and they save you the effort of generating tokens and assigning them to your nodes. If you are not using vnodes, use a hash-based partitioner such as the Murmur3Partitioner (the default) or the RandomPartitioner. These partitioners place each write at the token derived from the hash of its partition key, so writes are spread evenly across the token ranges of the ring and the cluster distributes data evenly.
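
The following purely conceptual sketch mimics the MD5-based RandomPartitioner to show why hashing avoids hot spots; real token assignment is handled by the database, not by client code. Sequential keys hash to widely scattered positions on the ring, whereas a byte-ordered scheme would place them side by side on the same node.

    # Conceptual sketch only: an MD5-style hash (as used by RandomPartitioner)
    # scatters sequential keys across the token space instead of clustering them.
    import hashlib

    RING_SIZE = 2 ** 127  # RandomPartitioner tokens range from 0 to 2**127 - 1

    for key in (b"user:0001", b"user:0002", b"user:0003"):
        token = int.from_bytes(hashlib.md5(key).digest(), "big") % RING_SIZE
        print(key.decode(), "->", token)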

Reading Before Writing

Reads take time for every request because uncached reads typically require multiple disk hits. In workflows that require a read before every write, this small amount of latency can reduce overall throughput. All write I/O in the database is sequential, so writes perform consistently well regardless of data size or key distribution; whenever possible, design the data model so that writes do not depend on reading the current value first.
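
A hedged sketch of the difference, using the DataStax Python driver with hypothetical tables: a read-modify-write cycle pays for a read on every update, while a blind write, here a counter update, lets the database apply the change without the client reading anything first.

    # Hypothetical tables for illustration:
    #   CREATE TABLE demo_ks.page_views (page text PRIMARY KEY, views int);
    #   CREATE TABLE demo_ks.page_view_counts (page text PRIMARY KEY, views counter);
    from cassandra.cluster import Cluster

    cluster = Cluster(["127.0.0.1"])
    session = cluster.connect("demo_ks")

    # Anti-pattern: read the current value, then write it back (the read may hit disk).
    row = session.execute(
        "SELECT views FROM page_views WHERE page = %s", ("home",)
    ).one()
    session.execute(
        "UPDATE page_views SET views = %s WHERE page = %s",
        ((row.views if row else 0) + 1, "home"),
    )

    # Preferred: a blind write; counter updates never require a client-side read.
    session.execute(
        "UPDATE page_view_counts SET views = views + 1 WHERE page = %s", ("home",)
    )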

Load Balancers

The DataStax database was designed to avoid the need for load balancers. Putting load balancers between the database and clients is harmful to performance, cost, availability, debugging, testing, and scaling. All high-level clients, such as the Java and Python drivers, implement load balancing directly. For available drivers, see DataStax drivers.
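
For example, with the DataStax Python driver, load balancing is configured as a driver policy rather than as a separate appliance. This minimal sketch (contact points and the data center name are placeholders) uses a token-aware, datacenter-aware policy so that each request is routed directly to a replica in the local data center.

    # Driver-side load balancing with the DataStax Python driver.
    # Contact points and the data center name are placeholders.
    from cassandra.cluster import Cluster, ExecutionProfile, EXEC_PROFILE_DEFAULT
    from cassandra.policies import DCAwareRoundRobinPolicy, TokenAwarePolicy

    profile = ExecutionProfile(
        # Prefer replicas that own the requested data, restricted to the local DC.
        load_balancing_policy=TokenAwarePolicy(DCAwareRoundRobinPolicy(local_dc="dc1"))
    )
    cluster = Cluster(
        contact_points=["10.0.0.1", "10.0.0.2"],
        execution_profiles={EXEC_PROFILE_DEFAULT: profile},
    )
    session = cluster.connect()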

Insufficient Testing

Be sure to test at scale and with production loads. This is the best way to ensure that your system functions properly when your application goes live. The information you gather from testing is the best indicator of the throughput per node needed for future expansion calculations.

Too Many Tables

Having too many tables in a cluster can cause substantial performance degradation. Because performance is influenced by a range of other factors, it is difficult to determine a universal table threshold; however, DataStax provides the following guidelines:

  • Warning threshold: 200 tables

    The cluster may run smoothly above this threshold, but once your database reaches the warning threshold, it’s time to start monitoring and planning a re-architecture. If possible, remove unused and underutilized tables.

  • Failure threshold: 500 tables

    On a cluster that has exceeded 500 tables, expect problems and errors, including (but not limited to) issues related to high memory usage and compactions.

The table thresholds also depend on the JVM heap size and on how much memory each table consumes. Each table uses approximately 1 MB of memory. For each table being actively written to, there is a memtable representation in the JVM heap. Tables with large data models increase pressure on memory. Each keyspace also adds overhead in JVM memory; therefore, having many keyspaces may also reduce the table threshold. See this blog about various impacts of having too many tables.
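
A minimal monitoring sketch, assuming the DataStax Python driver and a cluster running Cassandra 3.0 or later (or DSE 5.0 or later) where table metadata lives in system_schema.tables, counts tables per keyspace and compares the total against the thresholds above.

    # Count tables per keyspace and compare the total against the guideline thresholds.
    from collections import Counter
    from cassandra.cluster import Cluster

    WARNING_THRESHOLD = 200
    FAILURE_THRESHOLD = 500

    cluster = Cluster(["127.0.0.1"])   # placeholder contact point
    session = cluster.connect()

    counts = Counter(
        row.keyspace_name
        for row in session.execute("SELECT keyspace_name FROM system_schema.tables")
    )
    total = sum(counts.values())

    for keyspace, table_count in sorted(counts.items()):
        print(f"{keyspace}: {table_count} tables")
    print(f"total: {total} tables")

    if total >= FAILURE_THRESHOLD:
        print("Above the failure threshold (500): expect memory and compaction problems.")
    elif total >= WARNING_THRESHOLD:
        print("Above the warning threshold (200): start monitoring and plan a re-architecture.")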

Lack of Familiarity with Linux

Linux has a great collection of tools. Familiarity with the built-in Linux tools greatly helps you and reduces the cost of routine operation and management. Learn these essential tools and techniques:

  • Parallel Secure Shell (SSH) and Cluster SSH: The pssh and cssh tools allow SSH access to multiple nodes. This is useful for inspections and cluster-wide changes.

  • Passwordless SSH: SSH authentication is carried out using public and private keys. This allows SSH connections to move easily from node to node without a password prompt. In cases where more security is required, you can implement a bastion host, a VPN, or both.

  • Useful common command-line tools include:

    • dstat: Shows all system resources instantly. For example, you can compare disk usage in combination with interrupts from your IDE controller, or compare the network bandwidth numbers directly with the disk throughput (in the same interval).

    • top: Provides an ongoing look at CPU processor activity in real time.

    • System performance tools: Tools such as iostat, mpstat, iftop, sar, lsof, netstat, htop, vmstat, and similar can collect and report a variety of metrics about the operation of the system.

    • vmstat: Reports information about processes, memory, paging, block I/O, traps, and CPU activity.

    • iftop: Shows a list of network connections. Connections are ordered by bandwidth usage, with the pair of hosts responsible for the most traffic at the top of list. This tool makes it easier to identify the hosts causing network congestion.

Be sure to use the recommended settings for your DataStax database. See Recommended production settings for DataStax Enterprise and Recommended Production Settings for Apache Cassandra versions 3.x | 3.0 | 2.2 | 2.1.

Using DSE Search on DSE Tiered Storage Tables

DataStax recommends against using DSE Search on DSE Tiered Storage tables, because the data sets used by DSE Tiered Storage can be very large.
