HCD architecture
This page provides essential information for understanding and using Hyper-Converged Database (HCD). Because HCD differs from a relational database, you will save time by reading and understanding each section on this page.
How HCD works
HCD is designed to handle big data workloads across multiple nodes with no single point of failure. Its architecture is based on the understanding that system and hardware failures can and do occur.
HCD addresses the problem of failures by employing a peer-to-peer distributed system across homogeneous nodes where data is distributed among all nodes in the cluster. Each node frequently exchanges state information about itself and other nodes across the cluster using a peer-to-peer gossip communication protocol. A sequentially written commit log on each node captures write activity to ensure data durability. Data is then indexed and written to an in-memory structure, called a memtable, which resembles a write-back cache.
Each time the memtable is full, the data is written to disk in an SSTable data file. All writes are automatically partitioned and replicated throughout the cluster. HCD periodically consolidates SSTables using compaction, which discards obsolete data marked for deletion with a tombstone. A tombstone is a marker in a row that indicates a column will be deleted. During compaction, marked columns are deleted. To ensure all data across the cluster stays consistent, various repair mechanisms are employed.
The HCD database is a partitioned row store database, where rows are organized into tables with a (required) primary key. The database’s architecture allows any authorized user to connect to any node in any datacenter and access data using the CQL language. For ease of use, CQL uses a similar syntax to SQL and works with table data. Developers can access CQL through CQL shell (cqlsh) commands, and by using the drivers for application languages. Typically, a cluster has one keyspace per application composed of many different tables.
HCD APIs use the term keyspace to refer to both namespaces and keyspaces.
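For illustration, the following minimal sketch shows the SQL-like CQL syntax through cqlsh. The keyspace, table, and column names (cycling, cyclist_name, and so on) are hypothetical examples, not built-in HCD objects:

```cql
-- Create an application keyspace (production clusters normally use
-- NetworkTopologyStrategy; see the replication discussion later on this page).
CREATE KEYSPACE IF NOT EXISTS cycling
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};

-- Every table requires a primary key.
CREATE TABLE IF NOT EXISTS cycling.cyclist_name (
  id        uuid PRIMARY KEY,
  firstname text,
  lastname  text
);

-- Reads and writes use familiar SQL-like statements.
INSERT INTO cycling.cyclist_name (id, firstname, lastname)
  VALUES (5b6962dd-3f90-4c93-8f61-eabfa4a803e2, 'Marianne', 'VOS');

SELECT lastname FROM cycling.cyclist_name
  WHERE id = 5b6962dd-3f90-4c93-8f61-eabfa4a803e2;
```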
Client read or write requests can be sent to any node in the cluster. When a client connects to a node with a request, that node serves as the coordinator node for that particular client operation. The coordinator acts as a proxy between the client application and the nodes that own the data being requested. The coordinator determines which nodes in the ring should get the request based on how the cluster is configured.
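As a simple way to observe the coordinator role, cqlsh request tracing reports which node coordinated a request and which replicas it contacted. This sketch reuses the hypothetical table from the example above:

```cql
-- Enable tracing in cqlsh, run a query, then inspect the trace output:
-- it lists the coordinator node and the replica nodes it contacted.
TRACING ON;

SELECT lastname FROM cycling.cyclist_name
  WHERE id = 5b6962dd-3f90-4c93-8f61-eabfa4a803e2;

TRACING OFF;
```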
Key structures
The following components are essential to understanding the HCD architecture:
- Node
-
A node is where you store your data. It is the basic infrastructure component of the database.
- Cluster
-
A group of distributed nodes for storing data. A cluster can have a single node, single datacenter, or multiple datacenters.
- Datacenter
-
A group of related nodes configured together within a cluster for replication purposes. A datacenter can be a physical datacenter or virtual datacenter. Using separate datacenters prevents transactions from being impacted by other workloads and lowers latency. Depending on the replication factor, data can be written to multiple datacenters. Datacenters must never span physical locations.
- Replication
-
The process of storing copies of data on multiple nodes. Replication ensures reliability and fault tolerance. The number of copies is set by the replication factor.
- Commit log
-
All data is written first to the commit log for durability. After all of its data has been flushed to SSTables, the commit log can be archived, deleted, or recycled.
- SSTable
-
A sorted string table (SSTable) is an immutable data file to which the database writes memtables periodically. SSTables are append-only, stored sequentially on disk, and maintained for each database table.
- Tombstone
-
A marker in a row that indicates a column will be deleted. During compaction, marked columns are deleted. A deletion that writes a tombstone is shown in the sketch after this list.
- CQL Table
-
A collection of ordered columns fetched by table row. A table consists of columns and has a primary key.
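To make the tombstone entry above concrete, here is a minimal sketch that reuses the hypothetical cycling.cyclist_name table from the earlier example:

```cql
-- Deleting a column value writes a tombstone for that column; the data is
-- removed from disk later, when compaction processes the marked columns.
DELETE firstname FROM cycling.cyclist_name
  WHERE id = 5b6962dd-3f90-4c93-8f61-eabfa4a803e2;

-- Deleting the entire row writes a row-level tombstone.
DELETE FROM cycling.cyclist_name
  WHERE id = 5b6962dd-3f90-4c93-8f61-eabfa4a803e2;
```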
Key components to configure HCD
The following components are essential to configure HCD:
- Gossip
-
A peer-to-peer communication protocol to discover and share location and state information about the other nodes in an HCD cluster. Gossip information is persisted locally by each node to use immediately when a node restarts.
- Partitioner
-
A partitioner distributes data evenly across the nodes in the cluster for load balancing.
Specifically, a partitioner determines which node receives the first replica of a piece of data, and how to distribute other replicas across the other nodes in the cluster. Each row of data is uniquely identified by a primary key, which may be the same as its partition key, but which may also include other clustering columns. A partitioner is a hash function that derives a token from the primary key of a row. The partitioner uses the token value to determine which nodes in the cluster receive the replicas of that row. The Murmur3Partitioner is the default partitioning strategy for new HCD clusters and the right choice for new clusters in almost all cases.
- Replication factor
-
Replication is the process of storing copies of data on multiple nodes. Replication ensures reliability and fault tolerance. The number of copies is set by the replication factor.
A replication factor of 1 means that there is only one copy of each row on one node. A replication factor of 2 means two copies of each row, where each copy is on a different node. All replicas are equally important; there is no primary or master replica. You define the replication factor for each datacenter. Generally, set the replication factor greater than one, but no more than the number of nodes in the cluster.
- Replica placement strategy
-
A replication strategy determines which nodes to place replicas on. The first replica of data is simply the first copy; it is not unique in any sense. The NetworkTopologyStrategy is highly recommended for most deployments because it makes it much easier to expand to multiple datacenters when required. When creating a keyspace, you must define the replica placement strategy and the number of replicas you want; see the sketch after this list.
- Snitch
-
A snitch maps from the IP addresses of nodes to physical and virtual locations, such as racks and datacenters. Snitches inform the database about the network topology so that requests are routed efficiently, and they allow the database to distribute replicas by grouping machines into datacenters and racks.
You must configure a snitch when you create a cluster. All snitches use a dynamic snitch layer, which monitors performance and chooses the best replica for reading. The dynamic snitch is enabled by default and recommended for use in most deployments. Configure dynamic snitch thresholds for each node in the cassandra.yaml configuration file.
The default SimpleSnitch does not recognize datacenter or rack information. Use it for single-datacenter or single-zone deployments in public clouds. The GossipingPropertyFileSnitch is recommended for production. It defines a node’s datacenter and rack and uses gossip for propagating this information to other nodes.
- cassandra.yaml configuration file
-
The main configuration file for setting the initialization properties for a cluster, caching parameters for tables, properties for tuning and resource utilization, timeout settings, client connections, backups, and security. By default, a node is configured to store the data it manages in a directory set in the cassandra.yaml file. In a production cluster deployment, DataStax recommends changing the commitlog directory to a different disk drive from the data file directories.
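The following minimal sketch ties the partitioner, replication factor, and replica placement strategy together. The keyspace name and datacenter names (dc1, dc2) are hypothetical; datacenter names must match the names reported by your snitch:

```cql
-- NetworkTopologyStrategy with a per-datacenter replication factor:
-- three copies of each row in dc1 and two copies in dc2.
CREATE KEYSPACE IF NOT EXISTS cycling_multidc
  WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'dc1': 3,
    'dc2': 2
  };

-- The partitioner hashes each row's partition key into a token that
-- determines which nodes receive the row's replicas. The token() function
-- shows the token derived for each row.
SELECT token(id), id, lastname FROM cycling.cyclist_name;
```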
Basic concepts for data modeling
Data modeling in HCD is different from relational databases. Here are some basic concepts to keep in mind when modeling your data:
- Design the data model
-
The design of the data model is based on the queries you want to perform, not on modeling entities and relationships like you do for relational databases.
- Keyspace
-
The outermost grouping of data, similar to a schema in a relational database. All tables belong to a keyspace. A keyspace is the defining container for replication.
- Table
-
A table stores data based on a primary key, which consists of a partition key and optional clustering columns. Materialized views can also be added for high cardinality data.
-
A partition key defines the node on which the data is stored, and divides data into logical groups. Define partition keys that evenly distribute the data and also satisfy specific queries. Query and write requests across multiple partitions should be avoided if possible.
-
A clustering column defines the sort order of rows within a partition. When defining a clustering column, consider the purpose of the data. For example, you might retrieve the most recent transactions, sorted by date in descending order.
-
Materialized views are tables built from another table’s data with a new primary key and new properties. Queries are optimized by the primary key definition. Data in the materialized view is automatically updated by changes to the source table.
In earlier versions of Cassandra, a column family was synonymous in many respects with a table.
Table example
-
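The following is a minimal sketch of a query-driven table design, using hypothetical names (cycling.transactions_by_user). The partition key groups each user's transactions together, the clustering column sorts the most recent transactions first, and a materialized view supports a second query pattern from the same data:

```cql
-- Partition key: user_id places all of a user's transactions in one partition.
-- Clustering column: transaction_date orders rows within the partition, newest first.
CREATE TABLE IF NOT EXISTS cycling.transactions_by_user (
  user_id          uuid,
  transaction_date timestamp,
  amount           decimal,
  merchant         text,
  PRIMARY KEY ((user_id), transaction_date)
) WITH CLUSTERING ORDER BY (transaction_date DESC);

-- The query the table was designed for: a user's most recent transactions.
SELECT transaction_date, amount, merchant
  FROM cycling.transactions_by_user
  WHERE user_id = 5b6962dd-3f90-4c93-8f61-eabfa4a803e2
  LIMIT 10;

-- A materialized view built from the same data with a new primary key,
-- kept up to date automatically as the source table changes.
CREATE MATERIALIZED VIEW IF NOT EXISTS cycling.transactions_by_merchant AS
  SELECT * FROM cycling.transactions_by_user
  WHERE merchant IS NOT NULL AND user_id IS NOT NULL AND transaction_date IS NOT NULL
  PRIMARY KEY (merchant, user_id, transaction_date);
```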
See also
- Data modeling concepts in the CQL documentation.
- CQL reference in the CQL documentation.
- Get Started with User Profile Data Modeling whitepaper.
- Become a Super Modeler webinar.