Running Spark commands against a remote cluster
To run Spark commands against a remote cluster, you must copy your Hadoop configuration files from one of the remote nodes to the local client machine.
The default location of the Hadoop configuration files depends on the type of installation:
| Installation type | Default Hadoop configuration location |
| --- | --- |
| Installer-Services and Package installations | /etc/dse/hadoop/ |
| Installer-No Services and Tarball installations | install_location/resources/hadoop/conf/ |
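For example, a minimal sketch of copying the configuration from a remote node to the client machine; the hostname dse-node1.example.com, the remote user, the package-installation path, and the local target directory are illustrative assumptions, not values from this document:

```
# Copy the Hadoop configuration directory from a remote node (package installation
# assumed, so the files live under /etc/dse/hadoop/). Hostname and user are hypothetical.
scp -r dse@dse-node1.example.com:/etc/dse/hadoop ./hadoop-conf

# Point the local client at the copied configuration (path is illustrative).
export HADOOP_CONF_DIR="$PWD/hadoop-conf"
```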
To run a driver application remotely, there must be full public network communication between the remote nodes and the client machine.
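As an illustrative sketch only, once the configuration is in place and the network path is open, a driver could be submitted from the client machine with spark-submit; the master URL, class name, and JAR below are placeholders, and your deployment may use a different submission command or master address:

```
# Hypothetical example: launch a driver from the local client against the remote cluster.
# Replace the master URL, class, and JAR with values for your environment.
spark-submit \
  --master spark://203.0.113.10:7077 \
  --class com.example.MyApp \
  my-app.jar
```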