Returns data from a table.

Returns data from a single table. A SELECT statement without a WHERE clause returns all rows from all partitions; this is not recommended.

Note: To retrieve data using an SSTable Attached Secondary Index (SASI), see Using SASI.


SELECT selectors | DISTINCT partition 
FROM [keyspace_name.] table_name 
[WHERE partition_conditions [ AND solr_query = 'search_expression' [LIMIT n] |
   [solr_query = 'search_expression'  [LIMIT n]]
   [AND clustering_conditions 
   [AND regular_column_conditions]]] 
[GROUP BY column_name]
[ORDER BY PK_column_name ASC|DESC] 
Syntax conventions Description
UPPERCASE Literal keyword.
Lowercase Not literal.
Italics Variable value. Replace with a user-defined value.
[] Optional. Square brackets ( [] ) surround optional command arguments. Do not type the square brackets.
( ) Group. Parentheses ( ( ) ) identify a group to choose from. Do not type the parentheses.
| Or. A vertical bar ( | ) separates alternative elements. Type any one of the elements. Do not type the vertical bar.
... Repeatable. An ellipsis ( ... ) indicates that you can repeat the syntax element as often as required.
'Literal string' Single quotation ( ' ) marks must surround literal strings in CQL statements. Use single quotation marks to preserve upper case.
{ key : value } Map collection. Braces ( { } ) enclose map collections or key value pairs. A colon separates the key and the value.
<datatype1,datatype2> Set, list, map, or tuple. Angle brackets ( < > ) enclose data types in a set, list, map, or tuple. Separate the data types with a comma.
cql_statement; End CQL statement. A semicolon ( ; ) terminates all CQL statements.
[--] Separate the command line options from the command arguments with two hyphens ( -- ). This syntax is useful when arguments might be mistaken for command line options.
' <schema> ... </schema> ' Search CQL only: Single quotation marks ( ' ) surround an entire XML schema declaration.
@xml_entity='xml_entity_type' Search CQL only: Identify the entity and literal value to overwrite the XML element in the schema and solrConfig files.


Determines the columns returned in the results set.
column_list | DISTINCT partition_key [AS column_name]
Restriction: Use either a column list or DISTINCT partition_key.
Determines the columns and column order returned in the result set. Specify a comma-separated list of columns, or use an asterisk (*) to return all columns in the stored order.
column_name, function_name(argument_list)
  • column_name: Includes a column in result set.
  • function_name(arguments): Executes a function on the specified arguments for each row in the result set. User-defined functions and native functions, including the timeuuid and blob conversion functions, are supported.
  • aggregate_name(arguments): Executes the aggregate on matching data and returns a single result. Use aggregates, such as count(column_name) or user-defined aggregates.
DISTINCT partition_key 

Returns unique values for the partition key. Use a comma-separated list for compound partition keys.

Tip: Run DESC TABLE table_name to get the PRIMARY KEY definition, then SELECT DISTINCT partition_key FROM table_name to list the table's partition key values.
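For example, a sketch against the cycling.rank_by_year_and_name table defined later in this topic, whose partition key is (race_year, race_name):

```sql
-- List only the unique partition key values
SELECT DISTINCT race_year, race_name
FROM cycling.rank_by_year_and_name;
```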
column_name AS alias_name
Renames the column in the result set to the alias name, for example count(id) AS cyclist_count.

functions and aggregates 

Use functions and aggregates in the column list to execute a system-defined or user-defined action on values in one or more columns. The result replaces or adds columns in the result set.
function_name(argument_list) | aggregate_name (argument_list)
Converts the data returned by the selector to a native CQL data type.
cast( selector AS to_type )
CAST strictly follows Java semantics; for details on the underlying types, see Table 1. Use only in SELECT statements.
Note: Silently ignores casting a column into its own data type.
Selector column type Output data type (to_type)
ascii text, varchar

bigint tinyint, smallint, int, varint, float, double, decimal, text, varchar

boolean text, varchar
counter tinyint, smallint, int, bigint, float, double, decimal, varint, text, varchar
date timestamp
decimal tinyint, smallint, int, bigint, float, double, varint, text, varchar
double tinyint, smallint, int, bigint, float, decimal, varint, text, varchar
float tinyint, smallint, int, bigint, double, decimal, varint, text, varchar
inet text, varchar
int tinyint, smallint, bigint, float, double, decimal, varint, text, varchar
smallint tinyint, int, bigint, float, double, decimal, varint, text, varchar
time text, varchar
timestamp date, text, varchar
timeuuid timestamp, date, text, varchar
tinyint smallint, int, bigint, float, double, decimal, varint, text, varchar
uuid text, varchar
varint tinyint, smallint, int, bigint, float, double, decimal, text, varchar
The Java types, from which most CQL types are derived, are obvious to Java programmers. The derivation of the following types, however, might not be obvious:
CQL types Java derivation
CQL type Java type
decimal java.math.BigDecimal
float java.lang.Float
double java.lang.Double
varint java.math.BigInteger
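For example, a minimal CAST sketch (assuming the cycling.rank_by_year_and_name table defined later in this topic) that converts an int column to text in the result set:

```sql
-- CAST only changes the type in the result set, not the stored data
SELECT CAST(race_year AS text) AS race_year_text, race_name
FROM cycling.rank_by_year_and_name;
```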
Converts the target column or literal (enclose strings in single quotes) from a blob to the specified to_type.
blobAsTo_type(column_name | literal)
  • Display blob columns as another data type in results of SELECT statements.
  • Convert raw blob data into another type for storage INSERT and UPDATE.
Converts the target column or literal (enclose strings in single quotes) from the specified from_type, where from_type corresponds to a valid CQL data type, into a blob.
from_typeAsBlob(column_name | literal )
Use in the following types of CQL statements:
  • SELECT selectors to return a value stored in another CQL type as a blob.
  • INSERT and UPDATE convert another CQL data type into a blob for storage.
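For example, a sketch using the timeline table from the alias example later in this topic; the content column is assumed to be a blob and event_id a uuid:

```sql
-- textAsBlob stores a text literal as a blob on write
INSERT INTO timeline (event_id, content)
VALUES (uuid(), textAsBlob('Some stuff'));

-- blobAsText displays the blob column as text on read
SELECT blobAsText(content)
FROM timeline;
```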
Computes a token for values in the target column.
token(column_name)
The exact signature and return type of the token function depend on the table and on the partitioner used by the cluster:
  • Murmur3Partitioner type is bigint
  • RandomPartitioner type is varint
  • ByteOrderedPartitioner type is blob
Use in the following CQL requests:
  • SELECT selector clause to return values in the target column as tokens; useful for manually paging through the data or to determine the physical location of a row or set of rows.
  • SELECT WHERE clause to return a set of records in a given token range. Allows comparison (greater than and less than) in the WHERE clause without the use of ALLOW FILTERING. Useful when dividing query workloads between clients, such as multiple Spark clients for analytics.
Note: UPDATE does not support the token function in the where clause.
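For example, a sketch against the cycling.cyclist_name table used elsewhere in this topic (id is assumed to be its partition key, on a cluster using the Murmur3Partitioner, so tokens are bigint):

```sql
-- Show the token for each partition key value
SELECT token(id), id
FROM cycling.cyclist_name;

-- Page through a token range without ALLOW FILTERING
SELECT * FROM cycling.cyclist_name
WHERE token(id) > -9223372036854775807
  AND token(id) <= -3074457345618258602;
```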
Computes the smallest fake timeuuid from the specified date ('yyyy-mm-dd') or timestamp ('yyyy-mm-dd [hh:mm:ss[.fff][+/-NNNN]]') formatted string.
minTimeuuid( date_string | timestamp_string )

Use in a SELECT statement WHERE clause to return rows in a date range from a timeuuid column, for example WHERE id >= minTimeuuid('2017-05-01').

Computes the largest fake timeuuid from the specified date ('yyyy-mm-dd') or timestamp ('yyyy-mm-dd [hh:mm:ss[.fff][+/-NNNN]]') formatted string.
maxTimeuuid( date_string | timestamp_string )

Use in a SELECT statement where clause to return rows in a date range from a timeuuid column, for example WHERE id <= maxTimeuuid('2017-05-01').
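For example, combine the two functions to bound a date range on a timeuuid column; the cycling.comments table and its columns here are hypothetical:

```sql
-- Return comments posted during May 2017; id is assumed to be the
-- partition key and created_at a timeuuid clustering column
SELECT * FROM cycling.comments
WHERE id = 'jsmith'
  AND created_at >= minTimeuuid('2017-05-01')
  AND created_at < maxTimeuuid('2017-05-31');
```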

Takes no arguments and generates, on the coordinator node, a new unique timeuuid at the time the statement executes.
now()

Use in INSERT or UPDATE statements to generate a new timeuuid value.

Although allowed in WHERE clauses, now() never matches existing values in a SELECT or UPDATE statement because the generated timeuuid is guaranteed to be unique.

Restriction: The SET clause of an UPDATE statement cannot change PRIMARY KEY fields; therefore a new timeuuid can be set only if the target field is not part of the PRIMARY KEY.
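For example, a sketch that generates a timeuuid at write time; the cycling.comments table is hypothetical:

```sql
-- now() is evaluated once, on the coordinator, at write time
INSERT INTO cycling.comments (id, created_at, comment)
VALUES ('jsmith', now(), 'Great climb today.');
```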
toDate(column_name)
Converts a timestamp value from milliseconds since epoch to days since epoch; for a timeuuid, extracts the timestamp and converts it to days since epoch.
Note: Dates and times display in string format, but are stored and compared as integers.
toTimestamp(column_name)
Converts a date value from days since epoch into milliseconds since epoch; for a timeuuid, extracts the timestamp.
Note: The time components (hh:mm:ss.ffffff+NNNN) of a date are all set to zero in the resulting timestamp.
toUnixTimestamp(column_name)
Converts the timeuuid, timestamp, or date column into a raw unix timestamp.
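For example, a sketch extracting the date and raw unix timestamp from a timeuuid column (the cycling.comments table and its created_at column are hypothetical):

```sql
-- Extract the day and the unix timestamp embedded in each timeuuid
SELECT toDate(created_at) AS day,
       toUnixTimestamp(created_at) AS unix_ts
FROM cycling.comments;
```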
uuid()
Takes no parameters and generates a random type 4 UUID.
Use to generate a value in INSERT and UPDATE statements.
Restriction: The SET clause of an UPDATE statement cannot change PRIMARY KEY fields; therefore a new UUID can be set only if the target field is not part of the PRIMARY KEY.
writetime(column_name)
Shows the unix timestamp, in microseconds since epoch, at which the data was written.
Use only in the selectors clause of SELECT statements, on non-partition key columns.
ttl(column_name)
Counts down, in seconds, until the value in the column expires and the data is automatically removed; null indicates that TTL for the column is not set (or the countdown has ended).
Use only in the selectors clause of SELECT statements, on non-partition key columns.

WHERE clause 

Improves the efficiency of the query by identifying the data location, and/or filters on secondary indexes and regular columns. Create logic statements that identify the PRIMARY KEY components: the data location (partition key) and the sort order (clustering columns).

Warning: Queries that span multiple partitions can impact performance. Querying across multiple partitions is not recommended.
For logic statements, use the syntax column_name operator value. Separate multiple statements with AND; rows that meet all the conditions are returned. Supported operators include:
Operator Description
= Equal
IN Equal to any value in a comma-separated list of values
>= Greater than or equal to
<= Less than or equal to
> Greater than
< Less than
CONTAINS Matches a value in any type of collection. Only use on indexed collections.
CONTAINS KEY Matches a key name in a map. Only use on maps with indexed keys.

Required (except when querying a secondary index). Provide logic statements for all the partition key columns. Partition keys define the node where the data is stored. Only the following operators are supported:

  • Equals: Use on any partition key column.
  • IN: Restricted. Use on the last column of the partition key to search multiple partitions.
Note: If the WHERE clause only contains logic statements for partition keys, the query returns all rows on the matching partitions.

The DSE Search query expression uses the syntax supported by the Solr q parameter. See DSE Search CQL query syntax and JSON query syntax.


Logic statements that identify the clustering segment. Clustering columns set the sort order of the stored data; the order is nested for tables with multiple clustering columns, so data is sorted by the first column, then the second, and so forth. Therefore, a query on a table with multiple clustering columns must use equals or IN statements for all clustering columns above the target; range operators (>=, <=, >, or <) are supported only on the lowest targeted column.

For example, to query a range in clustering column c2 of a table with PRIMARY KEY ((a, b), c1, c2, c3), create an equals or IN statement for c1 and then a range statement on c2. No statement is required for c3; but to filter on c3, both c1 and c2 require equals or IN statements.
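The rule above can be sketched against a hypothetical table with PRIMARY KEY ((a, b), c1, c2, c3):

```sql
-- Full partition key plus equals on c1 allows a range on c2;
-- no statement is needed for c3
SELECT * FROM ks.example_table
WHERE a = 1 AND b = 1
  AND c1 = 5
  AND c2 >= 10 AND c2 < 20;
```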

Use any operator except CONTAINS or CONTAINS KEY to filter regular columns. Requires definitions for all partition keys and clustering columns.

Additional options

Change the scope and order of the data returned by the query.

GROUP BY column_name
Returns unique values of the target column. When used with an aggregate function, the function is applied to each group of data in the result set.
ORDER BY column_name ASC|DESC
Sorts the result set in either ascending (ASC) or descending (DESC) order.
Note: When no order is specified, results are returned in the order in which they are stored.
ALLOW FILTERING
Enables filtering without logic statements that identify partition and clustering keys.
Note: For more information on when to use ALLOW FILTERING, see the ALLOW FILTERING documentation.
LIMIT n
Limits the number of records returned in the result set.
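For example, a GROUP BY sketch using the cycling.rank_by_year_and_name table defined later in this topic; grouping on the full partition key applies the aggregate once per partition:

```sql
-- Count the entries in each (race_year, race_name) partition
SELECT race_year, race_name, count(*)
FROM cycling.rank_by_year_and_name
GROUP BY race_year, race_name;
```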


Using a column alias

When your selection list includes functions or other complex expressions, use aliases to make the output more readable. This example applies aliases to the dateOf(created_at) and blobAsText(content) functions:

SELECT event_id, 
  dateOf(created_at) AS creation_date,
  blobAsText(content) AS content 
  FROM timeline;

The output labels these columns with more understandable names:

event_id                | creation_date            | content
 550e8400-e29b-41d4-a716 | 2013-07-26 10:44:33+0200 | Some stuff

Specifying the source table using FROM 

The following example SELECT statement returns the number of rows in the IndexInfo table in the system keyspace:

SELECT COUNT(*) 
FROM system.IndexInfo;

Controlling the number of rows returned using LIMIT 

The LIMIT option sets the maximum number of rows that the query returns:

SELECT lastname 
FROM cycling.cyclist_name 
LIMIT 50000;

Even if the query matches 105,291 rows, the database only returns the first 50,000.

The cqlsh shell has a default row limit of 10,000. The DataStax Enterprise server and native protocol do not limit the number of returned rows, but they apply a timeout to prevent malformed queries from causing system instability.

Selecting partitions 

Simple partition key, select a single partition:
partition_column = value
Simple partition key, select multiple partitions:
partition_column IN (value1, value2 [, ...])
For compound partition keys, create a condition for each key, separated by AND:
partition_column1 = value1 
AND partition_column2 = value2 [AND ...]
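For example, a sketch that selects a single partition of the cycling.rank_by_year_and_name table defined later in this topic, whose compound partition key is (race_year, race_name):

```sql
-- Both partition key columns are restricted with equality
SELECT * FROM cycling.rank_by_year_and_name
WHERE race_year = 2015
  AND race_name = 'Tour of Japan - Stage 4 - Minami > Shinshu';
```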

Controlling the number of rows returned using PER PARTITION LIMIT 

In DataStax Enterprise 5.1 and later, the PER PARTITION LIMIT option sets the maximum number of rows that the query returns from each partition. For example, create a table that will sort data into more than one partition.

CREATE TABLE cycling.rank_by_year_and_name ( 
     race_year int, 
     race_name text, 
     cyclist_name text, 
     rank int, 
     PRIMARY KEY ((race_year, race_name), rank) );

After inserting data, the table holds:

race_year | race_name                                  | rank | cyclist_name
      2014 |                        4th Tour of Beijing |    1 |    Phillippe GILBERT
      2014 |                        4th Tour of Beijing |    2 |        Daniel MARTIN
      2014 |                        4th Tour of Beijing |    3 | Johan Esteban CHAVES
      2015 |   Giro d'Italia - Stage 11 - Forli > Imola |    1 |        Ilnur ZAKARIN
      2015 |   Giro d'Italia - Stage 11 - Forli > Imola |    2 |      Carlos BETANCUR
      2015 | Tour of Japan - Stage 4 - Minami > Shinshu |    1 |      Benjamin PRADES
      2015 | Tour of Japan - Stage 4 - Minami > Shinshu |    2 |          Adam PHELAN
      2015 | Tour of Japan - Stage 4 - Minami > Shinshu |    3 |         Thomas LEBAS

To get the top two racers in every race year and race name, use the SELECT statement with PER PARTITION LIMIT 2.

SELECT * 
FROM cycling.rank_by_year_and_name 
PER PARTITION LIMIT 2;


race_year | race_name                                  | rank | cyclist_name
      2014 |                        4th Tour of Beijing |    1 | Phillippe GILBERT
      2014 |                        4th Tour of Beijing |    2 |     Daniel MARTIN
      2015 |   Giro d'Italia - Stage 11 - Forli > Imola |    1 |     Ilnur ZAKARIN
      2015 |   Giro d'Italia - Stage 11 - Forli > Imola |    2 |   Carlos BETANCUR
      2015 | Tour of Japan - Stage 4 - Minami > Shinshu |    1 |   Benjamin PRADES
      2015 | Tour of Japan - Stage 4 - Minami > Shinshu |    2 |       Adam PHELAN

Filtering data using WHERE 

The WHERE clause introduces one or more relations that filter the rows returned by SELECT.

The column specifications 

The column specification of the relation must be one of the following:
  • One or more members of the partition key of the table
  • A clustering column, only if the relation is preceded by other relations that specify all columns in the partition key
  • A column that is indexed using CREATE INDEX.
Restriction: In the WHERE clause, refer to a column using the actual name, not an alias.

Filtering on the partition key 

For example, the following table definition defines id as the table's partition key:
CREATE TABLE cycling.cyclist_career_teams ( 
   id UUID PRIMARY KEY, 
   lastname text, 
   teams set<text> );
In this example, the SELECT statement includes the partition key in the WHERE clause, using the id column:
SELECT id, lastname, teams 
FROM cycling.cyclist_career_teams 
WHERE id=5b6962dd-3f90-4c93-8f61-eabfa4a803e2;

Restriction: A relation that references the partition key can only use an equality operator (= or IN). For more details about the IN operator, see the examples below.

Filtering on a clustering column 

Use a relation on a clustering column only if it is preceded by relations that reference all the elements of the partition key.


CREATE TABLE cycling.cyclist_points (
   id UUID, 
   firstname text, 
   lastname text, 
   race_title text, 
   race_points int, 
   PRIMARY KEY (id, race_points ));
SELECT sum(race_points) 
FROM cycling.cyclist_points 
WHERE id=e3b19ec4-774a-4d1c-9e5a-decec1e30aac 
      AND race_points > 7;



(1 rows)

In DataStax Enterprise 5.1 and later, add ALLOW FILTERING to filter on a non-indexed clustering column. The table definition is included in this example to show that race_start_date is a clustering column without a secondary index.


CREATE TABLE cycling.calendar (
   race_id int, 
   race_name text, 
   race_start_date timestamp, 
   race_end_date timestamp, 
   PRIMARY KEY (race_id, race_start_date, race_end_date));
SELECT * 
FROM cycling.calendar 
WHERE race_start_date='2015-06-13' 
ALLOW FILTERING;


race_id | race_start_date                 | race_end_date                   | race_name
     102 | 2015-06-13 07:00:00.000000+0000 | 2015-06-13 07:00:00.000000+0000 | Tour de Suisse
     103 | 2015-06-13 07:00:00.000000+0000 | 2015-06-17 07:00:00.000000+0000 | Tour de France

It is possible to combine the partition key and a clustering column in a single relation. For details, see Comparing clustering columns.

Filtering on indexed columns 

A WHERE clause in a SELECT on an indexed table must include at least one equality relation to the indexed column. For details, see Indexing a column.

Using the IN operator 

Use IN, an equals condition operator, to list multiple possible values for a column. This example selects two columns, first_name and last_name, from three rows having employee IDs (primary key) 105, 107, or 104:

SELECT first_name, last_name 
FROM emp 
WHERE empID IN (105, 107, 104);

The list can consist of a range of column values separated by commas.

Using IN to filter on a compound or composite primary key 

Use an IN condition on the last column of the partition key only when it is preceded by equality conditions for all preceding columns of the partition key. For example:
CREATE TABLE parts (
   part_type text, 
   part_name text, 
   part_num int, 
   part_year text, 
   serial_num text, 
   PRIMARY KEY ((part_type, part_name), part_num, part_year));
SELECT * 
FROM parts 
WHERE part_type='alloy' AND part_name='hubcap' 
AND part_num=1249 AND part_year IN ('2010', '2015');
When using IN, you can omit the equality test for clustering columns other than the last. But this usage may require the use of ALLOW FILTERING, so its performance can be unpredictable. For example:
SELECT * 
FROM parts 
WHERE part_num=123456 AND part_year IN ('2010', '2015') 
ALLOW FILTERING;

CQL supports an empty list of values in the IN clause, useful in Java Driver applications when passing empty arrays as arguments for the IN clause.

When not to use IN 

Under most conditions, using IN in relations on the partition key is not recommended. To process a list of values, the SELECT may have to query many nodes, which degrades performance. For example, consider a single local datacenter cluster with 30 nodes, a replication factor of 3, and a consistency level of LOCAL_QUORUM. A query on a single partition key query goes out to two nodes. But if the SELECT uses the IN condition, the operation can involve more nodes — up to 20, depending on where the keys fall in the token range.

Using IN for clustering columns is safer. See Cassandra Query Patterns: Not using the “in” query for multiple partitions for additional logic about using IN.

Filtering on collections 

Your query can retrieve a collection in its entirety. It can also index the collection column, and then use the CONTAINS condition in the WHERE clause to filter the data for a particular value in the collection, or use CONTAINS KEY to filter by key. This example features a collection of tags in the playlists table. The query can index the tags, then filter on 'blues' in the tag set.

SELECT album, tags 
FROM playlists 
WHERE tags CONTAINS 'blues';

After indexing the music venue map, filter on map values, such as 'The Fillmore':
SELECT * 
FROM playlists 
WHERE venue 
CONTAINS 'The Fillmore';
After indexing the collection keys in the venues map, filter on map keys.
SELECT * 
FROM playlists 
WHERE venue CONTAINS KEY '2014-09-22 22:00:00-0700';

Filtering a map's entries 

Follow this example query to retrieve rows based on map entries. (This method only works for maps.)
CREATE INDEX blist_idx 
ON cycling.birthday_list (ENTRIES(blist));
This query finds all cyclists who are 23 years old based on their entry in the blist map of the table birthday_list.
SELECT * 
FROM cycling.birthday_list 
WHERE blist['age'] = '23';

Filtering a full frozen collection 

This example presents a query on a table containing a FROZEN collection (set, list, or map). The query retrieves rows that fully match the collection's values.
CREATE INDEX rnumbers_idx 
ON cycling.race_starts (FULL(rnumbers));
The following SELECT finds any cyclist who has 39 Pro wins, 7 Grand Tour starts, and 14 Classic starts in a frozen list.
SELECT * 
FROM cycling.race_starts 
WHERE rnumbers = [39,7,14];

Range relations

DataStax Enterprise supports greater-than and less-than comparisons, but for a given partition key, the conditions on the clustering column are restricted to the filters that allow selection of a contiguous set of rows.

For example:
CREATE TABLE ruling_stewards (
  steward_name text,
  king text,
  reign_start int,
  event text,
  PRIMARY KEY (steward_name, king, reign_start)
);
This query constructs a filter that selects data about stewards whose reign started in 2450 or later and before 2500. If king were not a component of the primary key, you would need to create an index on king to use this query:
SELECT * FROM ruling_stewards
WHERE king = 'Brego'
  AND reign_start >= 2450
  AND reign_start < 2500 
ALLOW FILTERING;
The output:
 steward_name | king  | reign_start | event
      Boromir | Brego |        2477 |   Attacks continue
       Cirion | Brego |        2489 | Defeat of Balchoth
(2 rows)

To allow selection of a contiguous set of rows, the WHERE clause must apply an equality condition to the king component of the primary key. The ALLOW FILTERING clause is also required. ALLOW FILTERING provides the capability to query the clustering columns using any condition.


Only use ALLOW FILTERING for development! When you attempt a potentially expensive query, such as searching a range of rows, DataStax Enterprise displays this message:

Bad Request: Cannot execute this query as it might involve data
filtering and thus may have unpredictable performance. If you want
to execute this query despite the performance unpredictability,
use ALLOW FILTERING

To run this type of query, use ALLOW FILTERING, and restrict the output to n rows using LIMIT n. For example:

SELECT * 
FROM ruling_stewards
WHERE king = 'none'
  AND reign_start >= 1500
  AND reign_start < 3000 
LIMIT 10 ALLOW FILTERING;

Using LIMIT does not prevent all problems caused by ALLOW FILTERING. In this example, if no entries have 'none' as the value of king, the SELECT scans the entire table, no matter what the LIMIT is.

It is not necessary to use LIMIT with ALLOW FILTERING, and LIMIT can be used by itself. But LIMIT can prevent a query from ranging over all partitions in a datacenter, or across multiple datacenters.

Comparing clustering columns 

The partition key and clustering columns can be grouped and compared to values for scanning a partition. For example:
SELECT * 
FROM ruling_stewards 
WHERE (steward_name, king) = ('Boromir', 'Brego');

The syntax used in the WHERE clause compares the steward_name and king columns of each record as a tuple against the ('Boromir', 'Brego') tuple.
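Clustering columns can also be compared as a tuple against a range, which is useful for paging within a partition; a sketch against the ruling_stewards table above:

```sql
-- With the partition key fixed, compare the clustering columns as a tuple
SELECT * FROM ruling_stewards
WHERE steward_name = 'Boromir'
  AND (king, reign_start) >= ('Brego', 2450)
  AND (king, reign_start) < ('Brego', 2500);
```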

Using compound primary keys and sorting results 

ORDER BY clauses can only work on a single column. That column must be the second column in a compound PRIMARY KEY. This also applies to tables with more than two column components in the primary key. Ordering can be done in ascending or descending order using the ASC or DESC keywords (default is ascending).

In the ORDER BY clause, refer to a column using the actual name, not an alias.

For example, set up the playlists table (which uses a compound primary key), and use this query to get information about a particular playlist, ordered by song_order. You do not need to include the ORDER BY column in the select expression.

SELECT * FROM playlists 
WHERE id = 62c36092-82a1-3a00-93d1-46196ee77204
ORDER BY song_order DESC;


Or, create an index on playlist artists, and use this query to get titles of Fu Manchu songs on the playlist:

CREATE INDEX ON playlists(artist);
SELECT album, title 
FROM playlists 
WHERE artist = 'Fu Manchu';


Computing aggregates 

DataStax Enterprise provides standard built-in functions that return aggregate values to SELECT statements.

Using COUNT() to get the non-NULL value count for a column 

A SELECT expression using COUNT(column_name) returns the number of non-NULL values in a column.

For example, count the number of last names in the cyclist_name table:

SELECT COUNT(lastname) 
FROM cycling.cyclist_name;

Getting the number of matching rows and aggregate values with COUNT() 

A SELECT expression using COUNT(*) returns the number of rows that matched the query. Use COUNT(1) to get the same result. COUNT(*) or COUNT(1) can be used in conjunction with other aggregate functions or columns.

This example returns the number of rows in the users table:

SELECT COUNT(*) 
FROM users;

This example counts the number of rows and calculates the maximum value for points in the users table:

SELECT name, max(points), COUNT(*) 
FROM users; 

Getting maximum and minimum values in a column 

A SELECT expression using MAX(column_name) returns the maximum value in a column. If the column's data type is numeric (bigint, decimal, double, float, int, smallint), this is the highest value.
SELECT MAX(points) 
FROM cycling.cyclist_category;


MIN returns the minimum value. If the query includes a WHERE clause, MAX or MIN returns the largest or smallest value from the rows that satisfy the WHERE condition.
SELECT category, MIN(points) 
FROM cycling.cyclist_category 
WHERE category = 'GC';


Note: If the column referenced by MAX or MIN has an ascii or text data type, these functions return the last or first item in an alphabetic sort of the column values. If the specified column has a date or timestamp data type, these functions return the most recent or least recent times/dates. If the specified column has null values, the MIN function ignores them.
Note: DataStax Enterprise does not return a null value as the MIN.

Getting the sum or average of a column of numbers 

DataStax Enterprise computes the sum or average of all values in a column when SUM or AVG is used in the SELECT statement.
Note: If any of the rows returned have a null value for the column referenced for AVG aggregation, DataStax Enterprise includes that row in the row count, but uses a zero value to calculate the average.
Note: The sum and avg functions do not work with text, uuid, or date fields.
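For example, a SUM and AVG sketch using the cycling.cyclist_points table defined earlier in this topic:

```sql
-- Sum and average the race points for a single cyclist
SELECT sum(race_points), avg(race_points)
FROM cycling.cyclist_points
WHERE id = e3b19ec4-774a-4d1c-9e5a-decec1e30aac;
```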

Retrieving the date/time a write occurred 

The WRITETIME function applied to a column returns the date/time in microseconds at which the column was written to the database.

For example, to retrieve the date/time of the write to the first_name column for the user whose last name is Jones:

SELECT WRITETIME (first_name) 
FROM users 
WHERE last_name = 'Jones';

The WRITETIME output in microseconds converts to November 15, 2012 at 12:16:34 GMT-8.

Retrieving the time-to-live of a column 

The time-to-live (TTL) value of a cell is the number of seconds before the cell is marked with a tombstone. Set the TTL with the USING TTL clause of an INSERT or UPDATE statement, for example:
INSERT INTO cycling.calendar (race_id, race_name, race_start_date, race_end_date) 
VALUES (200, 'placeholder', '2015-05-27', '2015-05-27') 
USING TTL 200;
UPDATE cycling.calendar USING TTL 300 
SET race_name = 'dummy' 
WHERE race_id = 200 
  AND race_start_date = '2015-05-27' 
  AND race_end_date = '2015-05-27';
After inserting the TTL, use a SELECT statement to check its current value:
SELECT TTL(race_name) 
FROM cycling.calendar 
WHERE race_id=200;

(1 rows)

Retrieving values in the JSON format 

This option is available in DataStax Enterprise 5.0 and later. For details, see Retrieval using JSON.
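For example, a minimal sketch against the cycling.cyclist_name table used earlier in this topic:

```sql
-- Each row is returned as a single JSON document in the [json] column
SELECT JSON lastname FROM cycling.cyclist_name;
```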