Running Spark processes as separate users

Spark processes can be configured to run as separate operating system users.

By default, processes started by DSE run as the same OS user who started the DSE server process; this user is called the DSE service user. One consequence is that every application run on the cluster can access DSE data and configuration files, as well as the files of other applications.

You can delegate running Spark applications to runner processes and users by changing options in dse.yaml.

Overview of the run_as process runner

The run_as process runner allows you to run Spark applications as a different OS user than the DSE service user. When this feature is enabled and configured:

  • All simultaneously running applications deployed by a single DSE service user will be run as a single OS user.

  • Applications deployed by different DSE service users will be run by different OS users.

  • All applications will be run as a different OS user than the DSE service user.

This allows you to prevent an application from accessing DSE server private files, and prevent one application from accessing the private files of another application.

How the run_as process runner works

DSE uses sudo to run Spark application components (drivers and executors) as specific OS users. DSE doesn’t link a DSE service user with a particular OS user. Instead, a configurable number of spare user accounts, or slots, is used. When a request to run an executor or a driver is received, DSE finds an unused slot and locks it for that application. Until the application is finished, all of that application’s processes run as that slot user. When the application completes, the slot user is released and becomes available to other applications.

Since the number of slots is limited, a single slot is shared among all the simultaneously running applications run by the same DSE service user. Such a slot is released once all of that user’s applications have finished. When there are not enough slots to run an application, an error is logged and DSE tries to run the executor or driver on a different node. DSE does not limit the number of slots you can configure; if you need to run more applications simultaneously, create more slot users.

Slot assignment is done on a per-node basis. Executors of a single application may run as different slot users on different DSE nodes. When DSE is run on a fat node, different DSE instances running within the same OS should be configured with different sets of slot users. If they share the same slot users, a single OS user may run the applications of two different DSE service users.

When a slot is released, all directories that Spark normally manages for the application are removed. If the application has not finished, but all of its executors on a node are done and the slot user is about to be released, the application’s files have their ownership changed to the DSE service user with owner-only permissions. When a new executor for that application later runs on the node, the files are reassigned to the slot user assigned to the application.
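The per-node slot locking described above can be modeled with ordinary shell primitives. This is an illustrative sketch only, not DSE’s implementation; the slot names, the lock directory, and the function names are assumptions for the example.

```shell
# Illustrative sketch -- NOT DSE's implementation. Models per-node slot locking:
# the first free slot is claimed atomically, and a node with no free slot
# reports an error (DSE would then try to place the process on another node).
LOCKDIR="${LOCKDIR:-/tmp/dse-slots}"   # hypothetical per-node lock directory
mkdir -p "$LOCKDIR"

acquire_slot() {
  for slot in slot1 slot2; do
    # mkdir is atomic: it succeeds only if no other process holds this slot
    if mkdir "$LOCKDIR/$slot" 2>/dev/null; then
      echo "$slot"
      return 0
    fi
  done
  echo "no free slot on this node" >&2
  return 1
}

release_slot() {
  rmdir "$LOCKDIR/$1"
}
```

With only two slots configured, a third concurrent application on the same node fails to acquire a slot, which corresponds to the logged error described above.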

Configuring the run_as process runner

The administrator needs to prepare slot users in the OS before configuring DSE. The run_as process runner requires:

  • Each slot user has its own primary group, whose name is the same as the name of the slot user. This is typically the default behavior of the OS. For example, the slot1 user’s primary group is slot1.

  • The DSE service user is a member of each slot’s primary group. For example, if the DSE service user is cassandra, the cassandra user is a member of the slot1 group.

  • The DSE service user is a member of a group with the same name as the service user. For example, if the DSE service user is cassandra, the cassandra user is a member of the cassandra group.

  • sudo is configured so that the DSE service user can execute any command as any slot user without providing a password.
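The group-membership requirements above can be checked with a small helper script. This helper is hypothetical, not part of DSE; the user and slot names are the ones used in the examples on this page.

```shell
# Hypothetical helper (not part of DSE): verify that a service user is a member
# of each slot user's primary group, as the run_as runner requires.
check_slot_membership() {
  service_user="$1"; shift
  for slot in "$@"; do
    if id -nG "$service_user" | grep -qw "$slot"; then
      echo "OK: $service_user is in group $slot"
    else
      echo "MISSING: $service_user is not in group $slot"
    fi
  done
}

# Example: check_slot_membership cassandra slot1 slot2
```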

Override the umask setting to 007 for slot users so that files created by sub-processes will not be accessible by anyone else by default, and DSE configuration files are not visible to slot users.
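The effect of umask 007 can be seen directly in a shell: files created under it carry no permissions for “other” users, so nothing a slot user writes is readable by other slots by default.

```shell
# Demonstrates the umask 007 recommendation: newly created files get
# owner/group access only; other users (including other slots) get nothing.
umask 007
workdir=$(mktemp -d)
touch "$workdir/private.txt"
stat -c '%a' "$workdir/private.txt"   # prints 660: rw for owner and group, none for others
rm -r "$workdir"
```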

You may further secure the DSE server environment by modifying the OS’s limits.conf file to set resource limits, such as a maximum file size, for each slot user.
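For example, per-slot entries in limits.conf might look like the following. The values are illustrative assumptions; fsize caps the size of any single file a slot user may create, expressed in KB.

```
# /etc/security/limits.conf -- illustrative entries for slot users
# <domain>  <type>  <item>   <value>
slot1       hard    fsize    10485760    # max single file size: 10 GB (value in KB)
slot2       hard    fsize    10485760
```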

After adding the slot users and groups and configuring the OS, modify the dse.yaml file. In the spark_process_runner section enable the run_as process runner and set the list of slot users on each node.

    spark_process_runner:
        # Allowed options are: default, run_as
        runner_type: run_as

        run_as_runner_options:
            user_slots:
                - slot1
                - slot2

Example configuration for run_as process runner

In this example, two slot users, slot1 and slot2, are created and configured for DSE. The default DSE service user, cassandra, is used.

  1. Create the slot users.

    sudo useradd -r -s /bin/false slot1 &&
    sudo useradd -r -s /bin/false slot2
  2. Add the DSE service user to each slot user’s primary group.

    sudo usermod -a -G slot1,slot2 cassandra
  3. Make sure the DSE service user is a member of a group with the same name as the service user. For example, if the DSE service user is cassandra:

    groups cassandra
    cassandra : cassandra
  4. Log out and back in again to make the group changes take effect.

  5. Using visudo, modify the sudoers file to add the slot users.

    Runas_Alias     SLOTS = slot1, slot2
    Defaults>SLOTS  umask=007
    Defaults>SLOTS  umask_override
    cassandra       ALL=(SLOTS) NOPASSWD: ALL
  6. Modify dse.yaml to enable the run_as process runner and add the new slot users.

    # Configure the way that the driver and executor processes are created and managed.
    spark_process_runner:
        # Allowed options are: default, run_as
        runner_type: run_as
        # RunAs runner uses sudo to start Spark drivers and executors. A set of predefined
        # fake users, called slots, is used for this purpose. All drivers and executors
        # owned by some DSE user are run as some slot user x. At the same time drivers
        # and executors of any other DSE user use different slots.
        run_as_runner_options:
            user_slots:
                - slot1
                - slot2


© 2024 DataStax

