Data modeling methodology for Cassandra-based databases

An effective data model is essential for building high-performance, scalable applications. A well-structured model aligns with business needs, supports key application workflows, and adapts to changing requirements over time.

There are five steps in the data modeling methodology for Cassandra-based databases:

  1. Create a conceptual data model.

  2. Create an application workflow.

  3. Use your conceptual data model and application workflow to create a logical data model.

  4. Add implementation details to the logical data model to create a physical data model.

  5. As application development continues, regularly optimize and tune your data model.

Data modeling methodology

To demonstrate this process, this guide creates a data model for a video-sharing application.

Conceptual data model

A conceptual data model is a high-level representation of the data and the relationships between different entities. This model should be business-centric and technology-agnostic:

  • Focus on how your business views the data, not how the data is stored or queried.

  • Reflect real-world business concepts and how they relate to each other, such as customers and products.

  • Avoid specific implementation details like data types or database systems.

Conceptual data models are often illustrated using an Entity-Relationship (ER) diagram, which visually represents entities, such as users or product categories, and the relationships between them.

The following ER diagram represents a simple conceptual data model for a video-sharing application:

ER diagram

From a business operations perspective, the application expects that users will upload videos. In the diagram, the user entity is connected to the video entity by an upload action. Relevant attributes are then attached to each of these three central nodes (see the plain-text sketch after this list). For example:

  • The user entity has user-specific data like email, name, and ID.

  • The upload action has timestamp data.

  • The video entity has video-specific data like title, ID, and description.
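
In plain text, the entities, relationship, and attributes from the diagram can be sketched as follows (the arrow notation simply mirrors the diagram and isn't a formal syntax):

User (id, name, email)  --uploads (timestamp)-->  Video (id, title, description)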

This is a simple example. A conceptual data model for a real-world application would include many more entities, relationships, and attributes.

Application workflow

The application workflow identifies the key queries that the application must support to deliver its functionality. These queries help define how users will interact with the system, and they guide the structure of your logical and physical data models.

You can write your application workflow queries in plain sentences, similar to user stories.

For a video-sharing application, typical queries might include the following:

  • Find all videos uploaded by a user.

  • Upload a video.

  • Modify a video description.

  • Find all uploads for a user within a specific time range, sorted by most recent upload first.

Logical data model

The logical data model combines the entities and relationships from the conceptual data model with the queries defined in the application workflow. It defines tables, key columns, user-defined types, and indexes.

For example, the following logical data model supports the query Find all uploads for a user within a specific time range, sorted by most recent upload first:

Logical data model

This model uses a table named videos_by_user. Because the query retrieves uploads for a single user, the user_id column is set as the partition key (K). Because the results are ordered by upload time, the upload column is set as a clustering column (C) sorted in descending order. The model also includes other video and user metadata columns, even though they don't directly serve the query. Including this data in the query results gives users the context they need to understand each upload.
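
In text form, the logical table for this query can be sketched as follows, where K marks the partition key and C↓ marks a clustering column sorted in descending order:

videos_by_user
  user_id       K
  upload        C↓
  video_id
  email
  first_name
  last_name
  title
  description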

Physical data model

The physical data model adds implementation-specific details to the logical model, including the data types needed to define tables and columns. You can translate the physical data model directly into CQL statements to create tables and indexes.

For example, the following physical data model builds on the logical data model for the query Find all uploads for a user within a specific time range, sorted by most recent upload first:

Physical data model

This model uses the same videos_by_user table as the corresponding logical data model, specifying data types for each column:

  • user_id: Use the uuid type to ensure each user is uniquely identified.

  • upload: Use the timestamp or timeuuid type to record the time of the upload. Assuming the application doesn’t allow concurrent uploads from the same user, a timestamp is sufficient.

  • video_id: Use the uuid type so that each video has a globally unique identifier.

  • email, first_name, last_name, title, and description: Use the text type to store non-unique user and video metadata.

After creating the physical data model, you have a mapping for the resulting CQL CREATE TABLE statement. For example:

CREATE TABLE videos_by_user (
  user_id uuid,
  upload timestamp,
  video_id uuid,
  email text,
  first_name text,
  last_name text,
  title text,
  description text,
  PRIMARY KEY ((user_id), upload)
) WITH CLUSTERING ORDER BY (upload DESC);
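
With this table in place, the application workflow queries map directly to CQL statements. The following is a minimal sketch of two of them, Upload a video and Find all uploads for a user within a specific time range; the literal values (user ID, email, names, title, and dates) are illustrative placeholders only:

-- Upload a video: the application supplies the user's details and
-- generates a new video ID.
INSERT INTO videos_by_user (
  user_id, upload, video_id, email, first_name, last_name, title, description
) VALUES (
  e7ae5cf3-d358-4d99-b900-85902fda9bb0,  -- existing user ID
  toTimestamp(now()),                    -- time of the upload
  uuid(),                                -- new video ID
  'otis@example.com', 'Otis', 'Wilde', 'My first video', 'A short test upload'
);

-- Find all uploads for a user within a specific time range, sorted by most
-- recent upload first. The descending clustering order on upload returns
-- rows newest first, and the range predicate is served within one partition.
SELECT video_id, title, upload
FROM videos_by_user
WHERE user_id = e7ae5cf3-d358-4d99-b900-85902fda9bb0
  AND upload >= '2025-01-01'
  AND upload < '2025-02-01';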

Optimization and tuning

The data modeling methodology is iterative. As your application evolves, you may need to revisit and refine the data model to maintain performance, scalability, and functionality. Common reasons for revisiting the model include:

  • Changing requirements: As the business grows or priorities shift, new features may demand changes to the data structure, such as additional fields or new types of queries.

  • Performance issues: If queries begin to slow down or resource usage increases, adjustments to partitioning, indexing, or data layout may be necessary to restore performance.

  • Imbalanced partitions: A poor partitioning strategy can lead to hot spots, where some nodes carry a heavier load than others. Redesigning the model to balance partitions, as in the sketch after this list, can improve reliability and throughput.

  • New queries or access patterns: When developers introduce new ways of accessing the data—especially if those queries weren’t part of the original application workflow—you may need to redesign or extend the data model to support them efficiently.
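
For example, the videos_by_user table places all of a user's uploads in a single partition, so a very active user could create an oversized partition. One common way to rebalance, shown here only as a hypothetical sketch (the videos_by_user_month table and its month bucket column are assumptions, not part of the model above), is to add a time bucket to the partition key:

-- Hypothetical variant that caps partition growth by bucketing uploads
-- per user and per month. The application computes the month value.
CREATE TABLE videos_by_user_month (
  user_id uuid,
  month text,          -- for example '2025-01'
  upload timestamp,
  video_id uuid,
  email text,
  first_name text,
  last_name text,
  title text,
  description text,
  PRIMARY KEY ((user_id, month), upload)
) WITH CLUSTERING ORDER BY (upload DESC);

With this layout, queries must supply both user_id and month, and the application iterates over month buckets when a time range spans several months.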
