
Celonis Product Documentation

Hive: How to install and configure Apache Hadoop and Hive on Ubuntu

New data integration content available

You're currently viewing a data integration topic that has been replaced and will eventually be removed from our documentation.

For the latest data integration content, see: Data Integration.

Steps to set up a database Data Connection

Step 1: Install and configure Apache Hadoop
  1. Apache Hadoop 3.x requires Java 8, so install OpenJDK 8:

    $ sudo apt install openjdk-8-jdk -y    
  2. Install OpenSSH on Ubuntu:

    $ sudo apt install openssh-server openssh-client -y
  3. Set up a non-root user for Hadoop and switch to the new user:

    $ sudo adduser hdoop
    $ su - hdoop
  4. Enable Passwordless SSH for Hadoop User:

    $ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
    $ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    $ chmod 0600 ~/.ssh/authorized_keys
    $ ssh localhost
  5. Download and Install Hadoop on Ubuntu:

    $ wget <URL of the hadoop-3.2.1.tar.gz release>
    $ tar xzf hadoop-3.2.1.tar.gz
  6. Configure Hadoop Environment Variables (bashrc):

    #Edit the .bashrc shell configuration file using a text editor of your choice
    $ sudo nano .bashrc
    #Copy the following Hadoop Related Options
    export HADOOP_HOME=/home/hdoop/hadoop-3.2.1
    export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
    export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
    #Apply the changes to the current running environment
    $ source ~/.bashrc
  7. Edit the hadoop-env.sh file:

    $ sudo nano $HADOOP_HOME/etc/hadoop/hadoop-env.sh
    #Uncomment the $JAVA_HOME variable and add the full path to the OpenJDK installation on your system
    export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
  8. Edit core-site.xml File to specify the URL for your NameNode, and the temporary directory:

    $ sudo nano $HADOOP_HOME/etc/hadoop/core-site.xml
    #Copy the following to the file 
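For a single-node setup, a minimal core-site.xml typically looks like the following (the temporary directory path and the NameNode URL and port are assumptions; adjust them to your environment):

```xml
<configuration>
  <!-- Base for temporary directories; also used as the default for HDFS storage paths -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hdoop/tmpdata</value>
  </property>
  <!-- URL of the default file system, i.e. the NameNode -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://127.0.0.1:9000</value>
  </property>
</configuration>
```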
  9. Create the temporary directory added in the previous step.

  10. Edit the hdfs-site.xml file to define the NameNode and DataNode storage directories:

    $ sudo nano $HADOOP_HOME/etc/hadoop/hdfs-site.xml
    #Add the following configuration to the file and, if needed, adjust the NameNode and DataNode directories to your custom locations.
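A minimal single-node example, assuming storage under the hdoop home directory (the directory paths and the replication factor of 1 are illustrative):

```xml
<configuration>
  <!-- Where the NameNode stores its metadata -->
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hdoop/dfsdata/namenode</value>
  </property>
  <!-- Where the DataNode stores its blocks -->
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hdoop/dfsdata/datanode</value>
  </property>
  <!-- Single-node setup: no replication -->
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```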
  11. Create the specific directories you defined for the values in the previous step.

  12. Edit the mapred-site.xml file to define MapReduce values.

    $ sudo nano $HADOOP_HOME/etc/hadoop/mapred-site.xml
    #Add the following to the file
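For Hadoop 3.x, MapReduce is typically configured to run on YARN:

```xml
<configuration>
  <!-- Run MapReduce jobs on the YARN resource manager -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```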
  13. Edit the yarn-site.xml file to define settings relevant to YARN. It contains configurations for the Node Manager, Resource Manager, Containers, and Application Master.

    $ sudo nano $HADOOP_HOME/etc/hadoop/yarn-site.xml
    #Append the following configuration to the file
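A typical single-node YARN configuration looks like the following (the hostname value is an assumption for a local setup):

```xml
<configuration>
  <!-- Auxiliary service needed for MapReduce shuffle -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <!-- Resource Manager runs locally on a single-node setup -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>127.0.0.1</value>
  </property>
</configuration>
```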
  14. Format HDFS NameNode:

    #It is important to format the NameNode before starting Hadoop services for the first time
    $ hdfs namenode -format
  15. Start Hadoop Cluster:

    #Navigate to the directory containing the startup scripts
    $ cd hadoop-3.2.1/sbin
    $ ./start-dfs.sh
    $ ./start-yarn.sh
  16. Now you can verify and access the NameNode, DataNodes, and YARN Resource Manager in the browser:

    #Hadoop NameNode UI (default for Hadoop 3.x: http://localhost:9870)
    #YARN Resource Manager (default: http://localhost:8088)
Step 2: Install and configure Apache Hive

Apache Hive is an enterprise data warehouse system used to query, manage, and analyze data stored in the Hadoop Distributed File System.

The Hive Query Language (HiveQL) facilitates queries in a Hive command-line interface shell. Hadoop can use HiveQL as a bridge to communicate with relational database management systems and perform tasks based on SQL-like commands.

Prerequisites: Apache Hive is based on Hadoop and requires a fully functional Hadoop framework.

  1. Download and Untar Hive:

    $ wget <URL of the apache-hive-3.1.2-bin.tar.gz release>
    $ tar xzf apache-hive-3.1.2-bin.tar.gz
  2. Configure Hive Environment Variables (bashrc):

    $ sudo nano .bashrc
    #Append following to the file
    export HIVE_HOME=/home/hdoop/apache-hive-3.1.2-bin
    export PATH=$PATH:$HIVE_HOME/bin
    $ source ~/.bashrc
  3. Edit the hive-config.sh file to add the HADOOP_HOME directory:

    $ sudo nano $HIVE_HOME/bin/hive-config.sh
    export HADOOP_HOME=/home/hdoop/hadoop-3.2.1
  4. Create Hive Directories in HDFS. Create two separate directories to store data in the HDFS layer:

    #The temporary /tmp directory stores the intermediate results of Hive processes.
    $ hdfs dfs -mkdir /tmp
    $ hdfs dfs -chmod g+w /tmp
    #The warehouse directory stores the Hive-related tables.
    $ hdfs dfs -mkdir -p /user/hive/warehouse
    $ hdfs dfs -chmod g+w /user/hive/warehouse
  5. Configure the hive-site.xml file:

    $ cd $HIVE_HOME/conf
    $ cp hive-default.xml.template hive-site.xml
    #Optionally, if the metastore warehouse directory location differs from the default, update it in hive-site.xml.
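For reference, the relevant property is hive.metastore.warehouse.dir; its default value matches the warehouse directory created in the previous step:

```xml
<!-- Location of the Hive warehouse in HDFS -->
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
</property>
```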
  6. Initialize the Derby database. Derby is the default metadata store for Hive:

    $ $HIVE_HOME/bin/schematool -initSchema -dbType derby
  7. Special steps:

    #Fix the guava incompatibility error in Hive. The guava version has to be the same as in Hadoop.
    $ rm $HIVE_HOME/lib/guava-19.0.jar
    $ cp $HADOOP_HOME/share/hadoop/hdfs/lib/guava-27.0-jre.jar $HIVE_HOME/lib/
    #Remember to run the schematool command once again to initialize the Derby database:
    $ $HIVE_HOME/bin/schematool -initSchema -dbType derby
  8. Launch Hive Client Shell to issue SQL-like commands and directly interact with HDFS:

    $ cd $HIVE_HOME/bin
    $ hive
  9. Boot and connect to HiveServer2:

    #Boot the hive server
    $ $HIVE_HOME/bin/hiveserver2
    #Connect to Hiveserver
    $ $HIVE_HOME/bin/beeline -u jdbc:hive2://localhost:10000
    # You can also access HiveServer2 Web UI via localhost:10002
Step 3: Install and configure Tez execution engine
  1. Download and install Tez:

    $ wget <URL of the apache-tez-0.9.2-bin.tar.gz release>
    $ tar zxvf apache-tez-0.9.2-bin.tar.gz
  2. Copy tarball on HDFS:

    $ $HADOOP_HOME/bin/hadoop fs -mkdir -p /apps/tez-0.9.2
    $ cd apache-tez-0.9.2-bin
    $ $HADOOP_HOME/bin/hadoop fs -copyFromLocal share/tez.tar.gz /apps/tez-0.9.2
  3. Mitigation for a possible issue:

    $ sudo nano ${HADOOP_HOME}/etc/hadoop/hdfs-site.xml
    #Append the following <property> element to the file
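A commonly reported issue when running Tez 0.9.x (which bundles older Hadoop client libraries) against Hadoop 3.x is a NumberFormatException caused by the time-suffixed default value "30s" of dfs.client.datanode-restart.timeout. Assuming that is the issue meant here, overriding the property with a plain numeric value mitigates it:

```xml
<!-- Older HDFS client libraries bundled with Tez cannot parse the "30s" default -->
<property>
  <name>dfs.client.datanode-restart.timeout</name>
  <value>30</value>
</property>
```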
  4. Create tez-site.xml in ${TEZ_HOME}/conf:

    $ cd ${TEZ_HOME}/conf
    $ cp tez-default-template.xml tez-site.xml
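At minimum, tez-site.xml needs tez.lib.uris pointing at the tarball uploaded to HDFS in the previous step (the URI below assumes the default file system configured in core-site.xml):

```xml
<!-- HDFS location of the Tez libraries copied in step 2 -->
<property>
  <name>tez.lib.uris</name>
  <value>${fs.defaultFS}/apps/tez-0.9.2/tez.tar.gz</value>
</property>
```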
  5. Edit hive-env.sh to add the Tez directories to HADOOP_CLASSPATH

    $ sudo nano ${HIVE_HOME}/conf/hive-env.sh
    #Append to the file
    export HADOOP_CLASSPATH=${TEZ_HOME}/conf:${TEZ_HOME}/*:${TEZ_HOME}/lib/*
  6. Edit hive-site.xml to change the execution engine property to Tez:

    $ sudo nano ${HIVE_HOME}/conf/hive-site.xml
    #Update the following configuration property to use tez as the execution engine instead of mr (i.e. MapReduce)
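The property in question is hive.execution.engine, whose default value is mr:

```xml
<!-- Switch the Hive execution engine from MapReduce to Tez -->
<property>
  <name>hive.execution.engine</name>
  <value>tez</value>
</property>
```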
  7. Restart the Hive Server.

Step 4: Setup and configure Data Connection
  1. Download all required hive-jdbc dependencies matching your specific Hive version:

  2. Clone and build celonis-hive-driver. To do so, contact Celonis customer support and mention the exact Hive version you are using.

  3. Add all the jars from steps 1 and 2 to the classpath of connector-jdbc.

  4. Configure an uplink connection as described in Uplink connectors.

  5. Create a new Hive database Data Connection with the custom driver class 'cloud.celonis.connector.jdbc.hive.CelonisHiveDriver'.