Install DataStax Enterprise 6.9 using the binary tarball
You can install DataStax Enterprise (DSE) 6.9 on a bare metal or virtual machine (VM) environment using a binary tarball. This installation method lets you run DSE on Linux platforms as a standalone process, with or without root permissions. These instructions also describe how to store data and logs in either a default directory or a custom directory.
Prerequisites
Ensure that you have the following:
- For RHEL-compatible platforms, enable Extra Packages for Enterprise Linux (EPEL).
- A compatible version of Java 11:
  - Install the latest release of a Technology Compatibility Kit-certified version of OpenJDK 11 (recommended) or the Oracle Java SE 11.0.x JDK (supported).
  - If you run multiple Java runtime environments, set the JAVA_HOME environment variable to point to Java 11. See the example after this list.
- Python 3.8 through 3.11. Each listed version supports cqlsh, but DataStax recommends Python 3.11.
- For production installations, see Recommended production settings.
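For example, on a host with multiple JDKs you might point JAVA_HOME at the Java 11 installation before starting DSE. This is a minimal sketch; the JDK path is a placeholder, so substitute the actual location on your system:
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk   # placeholder path; adjust for your system
export PATH="$JAVA_HOME/bin:$PATH"
java -version                                   # should now report a Java 11 build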
Download and extract the tarball
By downloading DataStax Enterprise, you agree to the DataStax MSA, including the DSE Supplement.
- Open a terminal and verify that you have Java 11 installed:
  java -version
  Example result for OpenJDK:
  openjdk version "11.0.28" 2025-07-15 LTS
  OpenJDK Runtime Environment (build 11.0.28+6-LTS)
  OpenJDK 64-Bit Server VM (build 11.0.28+6-LTS, mixed mode)
  Example result for Oracle Java:
  java version "11.0.2" 2019-01-15 LTS
  Java(TM) SE Runtime Environment 18.9 (build 11.0.2+9-LTS)
  Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.2+9-LTS, mixed mode)
  Ensure that the output shows Java 11. If it does not, follow the instructions in Prerequisites to install a supported version.
- From a terminal window, install the libaio package that matches your environment:
  RHEL platform:
  sudo yum install libaio
  Debian platform:
  sudo apt-get install libaio1
- Download and extract the tarball file manually, or use curl:
  Manual download:
  - Download the DSE 6.9 tarball file from the DataStax Downloads website. Refer to the DSE 6.9 release notes for the latest patch version.
  - Extract the tarball file into the directory where you want to install DSE 6.9:
    sudo tar -xzvf dse-6.9.12-bin.tar.gz -C INSTALLATION_DIRECTORY
    The files expand into a DSE_DIRECTORY that contains the version number of the release, such as dse-6.9.12.
  curl download:
  - Use curl to download the tarball file:
    curl -L -O https://downloads.datastax.com/enterprise/dse-6.9.12-bin.tar.gz
  - Extract the tarball file into the directory where you want to install DSE 6.9:
    sudo tar -xzvf dse-6.9.12-bin.tar.gz -C INSTALLATION_DIRECTORY
    The files expand into a DSE_DIRECTORY that contains the version number of the release, such as dse-6.9.12.
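As an optional sanity check, you can list the extracted directory to confirm the expected layout. This assumes the default tarball layout; the exact contents can vary by release:
ls INSTALLATION_DIRECTORY/dse-6.9.12
# Expect subdirectories such as bin/ (dse, nodetool, cqlsh) and resources/ (configuration files).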
Configure data and log directories
- Use either the default DSE data and log directory locations, or define your own custom directory locations.
  Default directory locations:
  DSE stores its runtime data and logs in default locations unless otherwise configured in cassandra.yaml. Make sure these directories exist and have the correct ownership before starting DSE. Run the following commands to create and change ownership for the default DSE data and log directories:
  sudo mkdir -p /var/lib/cassandra /var/log/cassandra
  sudo chown -R cassandra:cassandra /var/lib/cassandra /var/log/cassandra
  If you plan to run DSE as a different user, replace cassandra:cassandra with that user and group.
  Custom directory locations:
  Follow these steps to store DSE data and logs in custom locations:
  - Create and change ownership for your custom DSE data and log directories. For example:
    sudo mkdir -p \
      CUSTOM_DIRECTORY/dse-data/commitlog \
      CUSTOM_DIRECTORY/dse-data/saved_caches \
      CUSTOM_DIRECTORY/dse-data/hints \
      CUSTOM_DIRECTORY/cdc_raw
    sudo chown -R cassandra:cassandra \
      CUSTOM_DIRECTORY/dse-data \
      CUSTOM_DIRECTORY/cdc_raw
    Replace CUSTOM_DIRECTORY with the absolute path to your custom directory locations. If you plan to run DSE as a different user, replace cassandra:cassandra with that user and group.
  - Update the following settings in the cassandra.yaml file to point to your custom directory locations:
    data_file_directories:
      - CUSTOM_DIRECTORY/dse-data
    commitlog_directory: CUSTOM_DIRECTORY/dse-data/commitlog
    saved_caches_directory: CUSTOM_DIRECTORY/dse-data/saved_caches
    hints_directory: CUSTOM_DIRECTORY/dse-data/hints
    cdc_raw_directory: CUSTOM_DIRECTORY/cdc_raw
    Replace CUSTOM_DIRECTORY with the absolute path to your custom directory locations.
    Where is the cassandra.yaml file? The location depends on the type of installation:
    - Package installations and Installer-Services installations: /etc/dse/cassandra/cassandra.yaml
    - Tarball installations and Installer-No Services installations: <installation_location>/resources/cassandra/conf/cassandra.yaml
- To store logs and data in the installation location, use the CASSANDRA_LOG_DIR environment variable to specify the location of the logs directory:
  cd dse-6.9.x
  CASSANDRA_LOG_DIR=$(pwd)/logs bin/dse cassandra
- Optional: If you plan to use DSE Analytics, use either the default Spark data and log directory locations, or define your own custom directory locations.
  Default directory locations:
  When you enable DSE Analytics, the Spark components write temporary data and logs to specific directories unless otherwise configured in spark-env.sh or dse.yaml. Make sure these directories exist and have the correct ownership before starting DSE. Run the following commands to create and change ownership for the default Spark data and log directories:
  sudo mkdir -p \
    /var/lib/dsefs \
    /var/lib/spark \
    /var/lib/spark/rdd \
    /var/lib/spark/worker \
    /var/log/spark \
    /var/log/spark/master \
    /var/log/spark/alwayson_sql
  sudo chown -R cassandra:cassandra /var/lib/dsefs /var/lib/spark /var/log/spark
  If you plan to run DSE as a different user, replace cassandra:cassandra with that user and group.
  Custom directory locations:
  Follow these steps to store Spark data and logs in custom locations:
  - Create and change ownership for your custom Spark data and log directories. For example:
    sudo mkdir -p \
      CUSTOM_DIRECTORY/dsefs \
      CUSTOM_DIRECTORY/spark/rdd \
      CUSTOM_DIRECTORY/spark/worker \
      CUSTOM_DIRECTORY/spark/log/worker \
      CUSTOM_DIRECTORY/spark/log/master \
      CUSTOM_DIRECTORY/spark/log/alwayson_sql
    sudo chown -R cassandra:cassandra \
      CUSTOM_DIRECTORY/dsefs \
      CUSTOM_DIRECTORY/spark
    Replace CUSTOM_DIRECTORY with the absolute path to your custom directory locations. If you plan to run DSE as a different user, replace cassandra:cassandra with that user and group.
  - Update the following environment variables in the spark-env.sh file to point to your custom directory locations:
    export SPARK_WORKER_DIR="CUSTOM_DIRECTORY/spark/worker"
    export SPARK_EXECUTOR_DIRS="CUSTOM_DIRECTORY/spark/rdd"
    export SPARK_WORKER_LOG_DIR="CUSTOM_DIRECTORY/spark/log/worker"
    export SPARK_MASTER_LOG_DIR="CUSTOM_DIRECTORY/spark/log/master"
    export ALWAYSON_SQL_LOG_DIR="CUSTOM_DIRECTORY/spark/log/alwayson_sql"
    Replace CUSTOM_DIRECTORY with the absolute path to your custom directory locations.
    Where is the spark-env.sh file? The default location depends on the type of installation:
    - Package installations and Installer-Services installations: /etc/dse/spark/spark-env.sh
    - Tarball installations and Installer-No Services installations: <installation_location>/resources/spark/conf/spark-env.sh
  - In the dse.yaml file, update the location of the DataStax Enterprise file system (DSEFS) work directory to point to your custom location:
    dsefs_options:
      work_dir: CUSTOM_DIRECTORY/dsefs
    Replace CUSTOM_DIRECTORY with the absolute path to your custom directory location.
    Where is the dse.yaml file? The location depends on the type of installation:
    - Package installations and Installer-Services installations: /etc/dse/dse.yaml
    - Tarball installations and Installer-No Services installations: <installation_location>/resources/dse/conf/dse.yaml
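Before starting DSE, you can quickly confirm that the directories created above are owned by the intended user. This is a generic filesystem check rather than a DSE command; substitute your CUSTOM_DIRECTORY paths if you chose custom locations:
ls -ld /var/lib/cassandra /var/log/cassandra
# If you also created the DSE Analytics directories, check those too, for example:
# ls -ld /var/lib/dsefs /var/lib/spark /var/log/spark
# The owner and group columns should show the user that will run DSE (cassandra:cassandra by default).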
Configure optimizations
To run stress or performance tests, you might need additional setup on the node that runs DSE. In particular, vector search and Storage-Attached Indexing (SAI) experiments require raising user resource limits for DSE. Configure the following settings:
- For tarball installations, make the resource limit changes permanent by adding the following entries to the /etc/security/limits.conf configuration file, where <cassandra_user> is the user that runs DSE:
  <cassandra_user> - memlock unlimited
  <cassandra_user> - nofile 1048576
  <cassandra_user> - nproc 32768
  <cassandra_user> - as unlimited
- Disable swap:
  sudo swapoff --all
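To confirm the limits are in effect, and optionally to keep swap disabled across reboots, you can use standard Linux checks. These are general administration steps, not DSE-specific commands:
# Log in (or su) as the user that runs DSE, then review the effective limits:
ulimit -a
# Optionally keep swap disabled after a reboot by commenting out swap entries in /etc/fstab
# (this edits /etc/fstab in place and writes a .bak backup):
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab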
Start DSE
- From the directory where you extracted the tarball files, run the following command to start DSE as a standalone process:
  DSE_DIRECTORY/bin/dse cassandra
  For other startup options, see Start DataStax Enterprise as a standalone process.
- Verify that DSE is running:
  DSE_DIRECTORY/bin/nodetool status
  Result:
  Datacenter: Cassandra
  =====================
  Status=Up/Down
  |/ State=Normal/Leaving/Joining/Moving
  --  Address    Load      Tokens  Owns  Host ID                               Rack
  UN  127.0.0.1  82.43 KB  128     ?     40725dc8-7843-43ae-9c98-7c532b1f517e  rack1
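Optionally, you can also confirm that the node accepts client connections with cqlsh from the same bin directory. This assumes the default native transport address and port:
DSE_DIRECTORY/bin/cqlsh 127.0.0.1
# Expect a cqlsh> prompt; type EXIT to quit.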
Next steps
- Configure startup options: service or standalone.
- If you are performing an upgrade, go to the next step in the Upgrade Guide.
- Configure DataStax Enterprise settings for DSE Advanced Security, DSE In-Memory, DSE Advanced Replication, DSE Multi-Instance, DSE Tiered Storage, and more.
- Change the logging locations after installation.
- Configure the heap dump directory to avoid server crashes. See the example below.
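As one possible approach (an assumption, not an official DSE instruction), heap dumps are commonly redirected by setting CASSANDRA_HEAPDUMP_DIR in cassandra-env.sh, which for tarball installations lives under <installation_location>/resources/cassandra/conf/. Verify that your DSE version honors this variable, and treat the path below as a placeholder:
# In resources/cassandra/conf/cassandra-env.sh (tarball installations); placeholder path below
export CASSANDRA_HEAPDUMP_DIR=/var/lib/cassandra/heapdumps
# Create the directory first and give it to the user that runs DSE:
#   sudo mkdir -p /var/lib/cassandra/heapdumps
#   sudo chown cassandra:cassandra /var/lib/cassandra/heapdumps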