Analytics node configuration for DSE Hadoop (deprecated)

Steps to configure analytic nodes for DSE Hadoop.

Note: Hadoop is deprecated for use with DataStax Enterprise. DSE Hadoop and BYOH (Bring Your Own Hadoop) are also deprecated.
Important configuration changes for DSE Hadoop, excluding those related to the Job Tracker, are described in the following sections.

Advanced users can also configure DataStax Enterprise to run jobs on a remote cluster, as described at the end of this topic.

Attention: DataStax does not recommend using virtual nodes (vnodes) for DSE Hadoop or BYOH nodes. If you want to use vnodes for Hadoop nodes, make sure that you understand the implications.

Setting the replication factor 

Change the default replication factor to a production-appropriate value of at least 3.
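For example, a minimal CQL sketch (the keyspace name cfs and the datacenter name Analytics are assumptions; substitute the keyspace and datacenter names used in your cluster):

    ALTER KEYSPACE cfs
      WITH replication = {'class': 'NetworkTopologyStrategy', 'Analytics': 3};

After increasing the replication factor, run nodetool repair so that existing data is replicated to the additional replicas.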

Configuring the verbosity of log messages 

To adjust the verbosity of log messages for Hadoop map/reduce tasks, add the following settings to the logback.xml file on each analytic node:

logback.logger.org.apache.hadoop.mapred=WARN 
logback.logger.org.apache.hadoop.filecache=WARN
The location of the logback.xml file depends on the type of installation:
Installer-Services and Package installations: /etc/dse/cassandra/logback.xml
Installer-No Services and Tarball installations: install_location/resources/cassandra/conf/logback.xml
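Because logback.xml is an XML file, these settings take the form of logger elements inside the existing <configuration> element, as in this sketch:

<logger name="org.apache.hadoop.mapred" level="WARN"/>
<logger name="org.apache.hadoop.filecache" level="WARN"/>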

Connecting to non-standard Cassandra native port 

If the Cassandra native protocol port was changed from the default port 9042, you must change the cassandra.input.native.port configuration setting so that Hive and Hadoop use the non-default port. The following examples change Cassandra native protocol connections to use port 9999.
  • After starting the DataStax Enterprise Hive shell, set the port inside the shell:
    dse hive
    hive> set cassandra.input.native.port=9999; 
  • For general Hive use, add the cassandra.input.native.port property to the hive-site.xml file:
    There are two instances of the hive-site.xml file.

    For use with Spark, the default location of the hive-site.xml file is:

    Installer-Services and Package installations: /etc/dse/spark/hive-site.xml
    Installer-No Services and Tarball installations: install_location/resources/spark/conf/hive-site.xml

    For use with Hive, the default location of the hive-site.xml file is:

    Installer-Services and Package installations: /etc/dse/hive/hive-site.xml
    Installer-No Services and Tarball installations: install_location/resources/hive/conf/hive-site.xml
    <property> 
        <name>cassandra.input.native.port</name>
        <value>9999</value> 
    </property> 
  • For Hadoop, add cassandra.input.native.port to the core-site.xml file:
    The default location of the core-site.xml file depends on the type of installation:
    Installer-Services and Package installations: /etc/dse/hadoop/conf/core-site.xml
    Installer-No Services and Tarball installations: install_location/resources/hadoop/conf/core-site.xml
    <property> 
        <name>cassandra.input.native.port</name> 
        <value>9999</value> 
    </property>
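These client-side properties must match the port that the cluster actually listens on, which is controlled by the native_transport_port setting in cassandra.yaml on each node. A minimal sketch, using the example port above:

    # cassandra.yaml
    native_transport_port: 9999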

Configuration for running jobs on a remote cluster 

This information is intended for advanced users.

Procedure

To connect to external addresses:

  1. Make sure that hostname resolution for the remote cluster nodes works properly on the local host.
  2. Copy the dse-core-default.xml and dse-mapred-default.xml files from any working remote cluster node to your local Hadoop conf directory.
  3. Run the job using dse hadoop.
  4. To override the Job Tracker location, or if DataStax Enterprise cannot automatically detect it, define the HADOOP_JT environment variable before running the job:
    export HADOOP_JT=jobtracker_host:jobtracker_port
    dse hadoop jar ...
  5. If you need to connect to many different remote clusters from the same host:
    1. Before starting the job, copy each remote cluster's Hadoop conf directory to a separate location on the local node.
    2. Select the appropriate location by defining HADOOP_CONF_DIR, as in the sketch after this procedure.
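For example, a minimal sketch (the conf directory paths and job JAR name are hypothetical):

    # Use the conf directory copied from remote cluster A (hypothetical path)
    export HADOOP_CONF_DIR=/opt/remote-confs/cluster-a
    dse hadoop jar my-job.jar ...

    # Switch to remote cluster B for the next job
    export HADOOP_CONF_DIR=/opt/remote-confs/cluster-b
    dse hadoop jar my-job.jar ...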