Imports and exports CSV (comma-separated values) data to and from Cassandra 1.1.3 and higher.

COPY table_name ( column, ...)
FROM ( 'file_name' | STDIN )
WITH option = 'value' AND ...

COPY table_name ( column , ... )
TO ( 'file_name' | STDOUT )
WITH option = 'value' AND ...

Synopsis Legend 

  • Uppercase means literal
  • Lowercase means not literal
  • Italics mean optional
  • The pipe (|) symbol means OR or AND/OR
  • Ellipsis (...) means repeatable
  • Parentheses ( and ) are not literal; they indicate scope

A semicolon that terminates CQL statements is not included in the synopsis.


Using the COPY options in a WITH clause, you can change the CSV format. This table describes these options:
COPY options

COPY option  Default value        Use to:
DELIMITER    comma (,)            Set the character that separates fields in the file.
QUOTE        quotation mark (")   Set the character that encloses field values.
ESCAPE       backslash (\)        Set the character that escapes literal uses of the QUOTE character.
HEADER       false                Set true to indicate that the first row of the file is a header.
ENCODING     UTF8                 Set the COPY TO command to output unicode strings.
NULL         an empty string      Represent the absence of a value.

The ENCODING option can be used only with COPY TO; it is not valid in COPY FROM. As the table shows, by default Cassandra expects the CSV data to consist of fields separated by commas (,), records separated by line separators (a newline, \r\n), and field values enclosed in double quotation marks (""). To avoid ambiguity, escape a literal double quotation mark with a backslash inside a string enclosed in double quotation marks ("\""). By default, Cassandra does not expect the first line of the CSV file to be a header record of column names. COPY TO includes the header in the output when HEADER=true, and COPY FROM ignores the first line when HEADER=true.
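As an illustrative sketch of these defaults (using plain Python's csv module, not cqlsh itself; the nickname field is hypothetical), note how the backslash escapes the literal quotation mark:

```python
import csv
import io

# Sketch: write one record using COPY's default CSV conventions
# (comma delimiter, double-quote quoting, backslash escape).
buf = io.StringIO()
writer = csv.writer(
    buf,
    delimiter=',',
    quotechar='"',
    escapechar='\\',
    doublequote=False,       # escape quotes with a backslash, not by doubling
    quoting=csv.QUOTE_ALL,
)
writer.writerow(['P38-Lightning', 'nicknamed "Fork-tailed Devil"', 1937])
print(buf.getvalue().strip())
# → "P38-Lightning","nicknamed \"Fork-tailed Devil\"","1937"
```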


By default, when you use the COPY FROM command, Cassandra expects every row in the CSV input to contain the same number of columns. The number of columns in the CSV input is the same as the number of columns in the Cassandra table metadata. Cassandra assigns fields in the respective order. To apply your input data to a particular set of columns, specify the column names in parentheses after the table name.

COPY FROM is intended for importing small datasets (a few million rows or less) into Cassandra. For importing larger datasets, use the Cassandra bulk loader or the sstable2json / json2sstable utilities.

COPY TO a CSV file 

For example, assume you have the following table in CQL:

cqlsh> SELECT * FROM test.airplanes;

 name          | mach | manufacturer | year
---------------+------+--------------+------
 P38-Lightning |  0.7 |     Lockheed | 1937

After inserting data into the table, you can copy the data to a CSV file in another order by specifying the column names in parentheses after the table name:

COPY airplanes
(name, mach, year, manufacturer)
 TO 'temp.csv';

Specifying the source or destination files 

Specify the source file of the CSV input or the destination file of the CSV output by a file path. Alternatively, you can use the STDIN or STDOUT keywords to import from standard input or export to standard output. When using STDIN, signal the end of the CSV data with a backslash and period ("\.") on a separate line. If the data is being imported into a table that already contains data, COPY FROM does not truncate the table beforehand.

You can copy only a partial set of columns. Specify the entire set or a subset of column names in parentheses after the table name, in the order you want to import or export them. By default, when you use the COPY TO command, Cassandra copies data to the CSV file in the order defined in the Cassandra table metadata. In version 1.1.6 and later, you can also omit the column list when you want to import or export all the columns in the order they appear in the source table or CSV file.
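As a rough sketch of the "\." terminator behavior (plain Python, not cqlsh internals; the loop shape is assumed for illustration), a reader consumes CSV records only until it sees the marker on its own line:

```python
import csv
import io

# Sketch of how a COPY FROM STDIN loop might consume rows until the
# "\." end-of-data marker (behavior assumed, not cqlsh internals).
raw = io.StringIO(
    '"F-14D Super Tomcat",Grumman,"1987","2.34"\n'
    '\\.\n'
    '"this line is never read"\n'
)
rows = []
for line in raw:
    if line.strip() == '\\.':    # end-of-data marker: stop reading
        break
    rows.append(next(csv.reader([line])))
print(rows)
# → [['F-14D Super Tomcat', 'Grumman', '1987', '2.34']]
```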


Copy a table to a CSV file.

  1. Using CQL 3, create a keyspace and a table named airplanes, insert a row of data, and copy the table to a CSV file.
    CREATE KEYSPACE test
      WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy',
      'datacenter1' : 3 };
    USE test;
    CREATE TABLE airplanes (
      name text PRIMARY KEY,
      manufacturer ascii,
      year int,
      mach float
    );
    INSERT INTO airplanes
      (name, manufacturer, year, mach)
      VALUES ('P38-Lightning', 'Lockheed', 1937, 0.7);
    COPY airplanes (name, manufacturer, year, mach) TO 'temp.csv';
    1 rows exported in 0.004 seconds.
  2. Clear the data from the airplanes table and import the data from the temp.csv file.
    TRUNCATE airplanes;
    COPY airplanes (name, manufacturer, year, mach) FROM 'temp.csv';
    1 rows imported in 0.087 seconds.
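The round trip above used the default HEADER=false. A minimal Python sketch (plain csv module, not cqlsh internals) of what HEADER=true changes on each side:

```python
import csv
import io

# Sketch of a HEADER=true round trip (illustrative, not cqlsh itself).
columns = ['name', 'manufacturer', 'year', 'mach']
rows = [['P38-Lightning', 'Lockheed', '1937', '0.7']]

# COPY TO ... WITH HEADER = 'true': emit the column names as the first record.
out = io.StringIO()
writer = csv.writer(out)
writer.writerow(columns)
writer.writerows(rows)

# COPY FROM ... WITH HEADER = 'true': ignore the first line on import.
reader = csv.reader(io.StringIO(out.getvalue()))
next(reader)                      # skip the header record
imported = list(reader)
print(imported)
# → [['P38-Lightning', 'Lockheed', '1937', '0.7']]
```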

Copy data from standard input to a table.

  1. Enter data directly during an interactive cqlsh session, using the COPY command defaults.
    COPY airplanes (name, manufacturer, year, mach) FROM STDIN;
  2. At the [copy] prompt, enter the following data:
    "F-14D Super Tomcat", Grumman,"1987", "2.34"
    "MiG-23 Flogger", Russian-made, "1964", "2.35"
    "Su-27 Flanker", U.S.S.R.,"1981", "2.35"
  3. Query the airplanes table to see data imported from STDIN:
    SELECT * FROM airplanes;

Output is:

 name               | manufacturer | year | mach
--------------------+--------------+------+------
 F-14D Super Tomcat |      Grumman | 1987 | 2.34
      P38-Lightning |     Lockheed | 1937 |  0.7
      Su-27 Flanker |     U.S.S.R. | 1981 | 2.35
     MiG-23 Flogger | Russian-made | 1964 | 2.35