Testing Your Cluster Before Production

To ensure that your selections of memory, CPU, disks, number of nodes, and network settings meet your business needs, test your cluster prior to production. By simulating your production workload, you can avoid unpleasant surprises and ensure successful production deployment.

Match your testing workload to the production workload as closely as possible. Where the test workload does differ from production, account for those differences when you interpret the results.

Considerations if You Are Coming from a Relational Database World

Cassandra-based applications and clusters differ significantly from relational databases: the data model is based on the types of queries you run, not on modeling entities and relationships. In addition to learning a new database technology and API, writing an application that scales on a distributed database rather than a single machine requires a new way of thinking.

A common strategy new users employ when moving from a single machine to a distributed cluster is to make heavy use of materialized views, secondary indexes, and user-defined functions (UDFs).

Most applications at large scale do not rely heavily on these techniques; if yours does, allow for the performance penalty in your node sizing and SLAs.

Service-Level Agreement Targets

Before going into production, decide what response times count as fast and slow for your service-level agreements (SLAs), and how often slow responses are allowed to occur.

Basic Rules for Determining Request Percentiles

Percentile   Use case
100          Not recommended. The worst measurement is often irrelevant to testing and is driven by factors external to the database.
99.9         Aggressive. Use only for latency-sensitive use cases; be aware that hardware expenses will be high if you need to keep this percentile close to the average.
99           The sweet spot and the most common choice.
95           Loose. A good target to measure where latency is not easily noticed.

The following metrics are based on clusters with more data on disk than RAM. Measurements are 99th-percentile values from nodetool tablehistograms for DSE 6.8 or Cassandra; see the example after the table below.

Latency Metrics

Top-of-the-line high-CPU nodes with NVM flash storage: < 10 ms

Common SATA-based SSDs: < 150 ms. Factors such as the test itself, the controller, and contention can shift this measurement significantly; some hardware approaches NVM specifications.

Common SATA-based systems: 500 ms is common. This metric varies significantly with the controller, contention, the data model, and so on. Tuning can shift this figure by an order of magnitude.
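
For example, to read these percentiles on a node while your test load runs, use nodetool (a minimal sketch; my_keyspace and my_table are placeholder names):

    # Per-table read and write latency percentiles (including p99) as seen by this node.
    nodetool tablehistograms my_keyspace my_table

    # Coordinator-level read, write, and range latency percentiles, which include network hops.
    nodetool proxyhistograms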

Establish TPS Targets

Determine how many transactions per second (TPS) you expect your cluster to handle. Ideally, test with the same node count you expect to run in production. Alternatively, test with the same ratio of TPS to nodes to evaluate how the cluster behaves.
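
For example (hypothetical numbers), if production must sustain 60,000 TPS on 12 nodes and your test cluster has only 6 nodes, keep the per-node rate the same:

    60,000 TPS / 12 production nodes = 5,000 TPS per node
    5,000 TPS per node x 6 test nodes = 30,000 TPS test target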

If the cluster does not meet your SLA requirements, look for the root cause. Helpful questions include:

  • Does the cluster meet SLAs below that level?

  • At what TPS level do your SLAs fail?

Monitor the Important Metrics

For Cassandra, use JMX MBeans or the Linux iostat or dstat commands; example commands follow this list. Monitor the following metrics:

  • System hints

    Generated system hints should not be continually high. If a node is down or a datacenter link is down, expect this metric to accumulate (by design).

  • Pending compactions

    The number of acceptable pending compactions depends on the type of compaction strategy.

    Compaction strategy                    Pending compactions
    SizeTieredCompactionStrategy (STCS)    Should not exceed 100 for more than short periods.
    LeveledCompactionStrategy (LCS)        Not an accurate measure for LCS; staying over 100 generally indicates stress, and over 1,000 is pathological.
    TimeWindowCompactionStrategy (TWCS)

  • Pending writes

    Should not go over 10,000 writes for more than short periods. Otherwise, a tuning or sizing problem exists.

  • Heap usage

    If huge spikes with long pauses exist, adjust the heap.

  • I/O wait

    A value greater than 5 indicates that the CPU is waiting too long on I/O and is a sign that the node is I/O bound.

  • CPU Usage

    For CPUs using hyper-threading, it is helpful to remember that for each processor core that is physically present, the operating system addresses two virtual (logical) cores and shares the workload between them. This means that if half your threads are at 100% CPU usage, the CPU capacity is at maximum even if the tool reports 50% usage. To see the actual thread usage, use the ttop command.

  • Monitor your SLAs

    Monitor according to the targets you set in Service-Level Agreement Targets.

  • SSTable counts

    If SSTable counts go over 10,000, see CASSANDRA-11831.

    DateTieredCompactionStrategy (DTCS) is deprecated in Cassandra 3.0.8/3.8.

  • Log warnings and errors

    Look for the following:

    • SlabPoolCleaner

    • WARN

    • ERROR

    • GCInspector

    • hints
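
As a starting point, the following commands cover most of the metrics above from the command line (a sketch; my_keyspace is a placeholder name, and the log path shown is typical for package installations):

    # Thread pool statistics, including pending writes (MutationStage) and dropped messages.
    nodetool tpstats

    # Pending compactions across all tables.
    nodetool compactionstats

    # Per-table statistics, including SSTable counts.
    nodetool tablestats my_keyspace

    # Disk utilization and I/O wait (the %iowait column), sampled every 5 seconds.
    iostat -x 5

    # Scan the system log for warnings, errors, GC pauses, and hint activity.
    grep -E 'WARN|ERROR|GCInspector|SlabPoolCleaner|hint' /var/log/cassandra/system.log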

Simulate Real Outages

Simulating outages is key to ensuring that your cluster functions well in production. To simulate outages:

  1. Take one or two nodes offline (more is preferable, depending on the cluster size); see the command sketch after this list.

    • Do your queries start failing?

    • Do the nodes start working without any change when they come back online?

  2. Take a datacenter offline while taking load.

    How long can it be offline before system.hints build up to unmanageable levels?

  3. Run an expensive task that eats up CPU on a couple of nodes, such as a recursive filesystem search (for example, find / -name "a*").

    This scenario helps you prepare for the effect on your nodes if it happens in production.

  4. Do expensive range queries in CQLSH while taking load to see how the cluster behaves. For example:

    PAGING OFF;
    SELECT * FROM FOO LIMIT 100000000;

    This test ensures your cluster is ready for naive or unintended actions by users.
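
One way to script the node-outage portion of these steps is shown below (a sketch; it assumes a package installation where the service is named cassandra and hints are stored in the default directory; adjust for DSE or tarball installations):

    # On the node to take offline: stop accepting requests cleanly, then stop the service.
    nodetool drain
    sudo systemctl stop cassandra    # service name may differ, for example dse

    # From a live node: confirm the node is reported as down (DN) and watch hint buildup.
    nodetool status
    du -sh /var/lib/cassandra/hints

    # Bring the node back and verify it rejoins (UN) without manual intervention.
    sudo systemctl start cassandra
    nodetool status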

Simulate Production Operations

Do the following during load testing and repeat under outages as described above:

  1. Run repair to ensure that you have enough overhead to complete the repair (see the command sketch after this list).

  2. Bootstrap a new node into the cluster.

  3. Add a new datacenter. This exercise gives you an idea of the process when you need to do it in production.

  4. Change the schema by adding new tables, removing tables, and altering existing tables.

  5. Replace a failed node. Note how long it takes to bootstrap a new node.
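
A sketch of the corresponding commands (my_keyspace, my_table, and extra_column are placeholder names; exact repair options depend on your version and topology):

    # Full repair of one keyspace on this node, run while the test load is applied.
    nodetool repair my_keyspace

    # On a joining or replacement node: watch streaming progress and time how long it takes to reach UN.
    nodetool netstats
    nodetool status

    # Exercise schema changes under load.
    cqlsh -e "ALTER TABLE my_keyspace.my_table ADD extra_column text;"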

Run a Real Replication Factor on at Least 5 Nodes

Ideally, run the same number of nodes that you expect to run in production. Even if you plan to run only 3 nodes in production, test with 5. You may find that queries that were fast on 3 nodes are much slower on 5; this is how common query paths that rely on secondary indexes are exposed as slower at scale.

Be sure to use the same replication factor as in production.
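
For example, a test keyspace created with the production replication factor might look like the following (test_ks and DC1 are placeholder names; the datacenter name must match your cluster topology):

    cqlsh -e "CREATE KEYSPACE test_ks WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3};"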

Simulate the Production Datacenter Setup

Each datacenter adds write cost. If your production cluster has more datacenters than your test cluster, writes can be more expensive than you anticipate. For example, if you use RF 3 and test your cluster with 2 datacenters, each write goes to 4 nodes across the 2 datacenters. If your production cluster uses the same RF and has 5 datacenters, each write goes to 7 nodes across the 5 datacenters. Writes are now almost twice as expensive for the coordinator node, because in each remote datacenter the forwarding coordinator sends the data on to the other replicas in its own datacenter.
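
Put another way, with RF 3 in every datacenter, the coordinator sends one copy to each local replica plus one forwarded copy per remote datacenter:

    2 datacenters: 3 local replicas + 1 remote forward  = 4 messages per write
    5 datacenters: 3 local replicas + 4 remote forwards = 7 messages per write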

Simulate the Network Topology

Simulate your network topology with the same latency and bandwidth ("pipe"). Test with the expected TPS. Be aware that remote links, and their available bandwidth, can be a limiting factor for many use cases.
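
If you cannot test across real datacenters, one way to approximate a WAN link between test nodes is Linux traffic control (a sketch; eth0 and the 80 ms delay are assumptions, and netem delays all traffic on that interface):

    # Add 80 ms of one-way delay to mimic a cross-region link.
    sudo tc qdisc add dev eth0 root netem delay 80ms

    # Remove the artificial delay when testing is done.
    sudo tc qdisc del dev eth0 root netem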

Simulate Expected per Node Density

Generally, unless you are using SSDs, a high CPU count, and significant RAM, more than 2 TB per DSE node or more than 1 TB per Apache Cassandra® node is rarely a good idea. See Capacity per node recommendations for DSE and Capacity per node recommendations.

Once you’ve determined your target per-node density, add data to the test cluster until the target capacity is reached. Test for the following (example check commands follow this list):

  • What happens to your SLAs as the target density is approached?

  • What happens if you go over the target density?

  • How long does bootstrap take?

  • How long does repair take?
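
To track how close each node is to the target density while you load data (a sketch; the data directory shown is the default for package installations):

    # Per-node load and ownership as reported by the cluster.
    nodetool status

    # On-disk size of this node's data directory.
    du -sh /var/lib/cassandra/data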

Use the NoSQLBench Tool for Testing

NoSQLBench is a powerful open-source testing tool that emulates real application workloads in your development environment.

With this capability, you can fast-track performance, sizing, and data-model testing:

  • without having to write your own testing harness

  • before deploying your application workloads to production.

With NoSQLBench, you can:

  • run common testing workloads directly from the command line

  • generate virtual data sets of arbitrary size, with deterministic data and statistically shaped values

  • design custom workloads that emulate your application, contained in a single file, based on statement templates - no IDE or coding is required

  • plot your results in a Docker and Grafana stack on Linux with a single command-line option

  • customize the runtime behavior of NoSQLBench (if needed) for advanced testing, including the use of a JavaScript environment
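
For example, a first run from the command line might look like the following (a hypothetical invocation; workload names such as cql-keyvalue and parameter names vary by NoSQLBench version, so check the documentation for your release):

    # List the workloads bundled with this NoSQLBench release.
    ./nb --list-workloads

    # Hypothetical run of a bundled CQL workload against a test cluster.
    ./nb cql-keyvalue rampup-cycles=1000000 main-cycles=10000000 hosts=10.0.0.1 --progress console:10s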

To get started, see the NoSQLBench documentation.

For comprehensive examples, follow along with this NoSQLBench Workloads 101 tutorial.

Ready to install? See the download instructions in the NoSQLBench GitHub repo.

DSE Search Stress Demo

The solr_stress demo is a benchmark tool that simulates requests for reading and writing data on a DSE Search cluster, using a specified sequential or random set of iterations. See MBeans search demo.

The default location of the demos directory depends on the type of installation:

  • Package installations: /usr/share/dse/demos

  • Tarball installations: <installation_location>/demos

The hardware used for load testing should mimic the parameters and number of nodes in the production environment. Performing load testing on a single node, or in environments where the number of nodes is equal to the replication factor, results in incorrect measurements.
