Celonis Product Documentation

Connecting to Apache Hive / Apache Hadoop

You can connect your Apache Hive and Apache Hadoop data warehouse, running on Ubuntu, to the Celonis Platform.

While we provide the steps for installing and configuring both Hive and Hadoop here, we recommend consulting their official documentation for the latest updates.

Step 1: Installing and configuring Apache Hadoop
  1. Apache Hadoop 3.x requires Java 8, so install it first:

    $ sudo apt install openjdk-8-jdk -y    
  2. Install OpenSSH on Ubuntu:

    $ sudo apt install openssh-server openssh-client -y
  3. Set up a non-root user for Hadoop and switch to the new user:

    $ sudo adduser hdoop
    $ su - hdoop
  4. Enable Passwordless SSH for Hadoop User:

    $ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
    $ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    $ chmod 0600 ~/.ssh/authorized_keys
    $ ssh localhost
  5. Download and Install Hadoop on Ubuntu:

    $ wget
    $ tar xzf hadoop-3.2.1.tar.gz
  6. Configure Hadoop Environment Variables (bashrc):

    #Edit the .bashrc shell configuration file using a text editor of your choice
    $ sudo nano .bashrc
    #Copy the following Hadoop Related Options
    export HADOOP_HOME=/home/hdoop/hadoop-3.2.1
    export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
    export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
    #Apply the changes to the current running environment
    $ source ~/.bashrc
  7. Edit the hadoop-env.sh file to set the Java path:

    $ sudo nano $HADOOP_HOME/etc/hadoop/hadoop-env.sh
    #Uncomment the $JAVA_HOME variable and add the full path to the OpenJDK installation on your system
    export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
  8. Edit the core-site.xml file to specify the URL for your NameNode and the temporary directory:

    $ sudo nano $HADOOP_HOME/etc/hadoop/core-site.xml
    #Copy the following to the file 
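    As a sketch of what core-site.xml typically contains for a single-node setup (the temporary directory path and NameNode address below are illustrative assumptions, not mandated values):

    ```xml
    <configuration>
      <!-- Illustrative temporary directory; adjust to your environment -->
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hdoop/tmpdata</value>
      </property>
      <!-- Default filesystem URI for the NameNode; port 9000 is a common choice -->
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://127.0.0.1:9000</value>
      </property>
    </configuration>
    ```

    With these example values, the temporary directory created in the next step would be /home/hdoop/tmpdata.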
  9. Create the temporary directory added in the previous step.

  10. Edit the hdfs-site.xml file to define the NameNode and DataNode storage directories:

    $ sudo nano $HADOOP_HOME/etc/hadoop/hdfs-site.xml
    #Add the following configuration to the file and, if needed, adjust the NameNode and DataNode directories to your custom locations.
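    A minimal sketch of this configuration for a single-node setup (the storage paths are illustrative assumptions):

    ```xml
    <configuration>
      <!-- Illustrative NameNode and DataNode storage directories -->
      <property>
        <name>dfs.namenode.name.dir</name>
        <value>/home/hdoop/dfsdata/namenode</value>
      </property>
      <property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/hdoop/dfsdata/datanode</value>
      </property>
      <!-- Single-node setup: keep only one replica of each block -->
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
    </configuration>
    ```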
  11. Create the NameNode and DataNode directories you defined in the previous step.

  12. Edit the mapred-site.xml file to define MapReduce values.

    $ sudo nano $HADOOP_HOME/etc/hadoop/mapred-site.xml
    #Add the following to the file
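    At minimum, this typically sets YARN as the MapReduce framework; a sketch:

    ```xml
    <configuration>
      <!-- Run MapReduce jobs on YARN rather than the local runner -->
      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
      </property>
    </configuration>
    ```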
  13. Edit the yarn-site.xml file to define settings relevant to YARN. It contains configurations for the Node Manager, Resource Manager, Containers, and Application Master.

    $ sudo nano $HADOOP_HOME/etc/hadoop/yarn-site.xml
    #Append the following configuration to the file
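    A minimal sketch of a single-node YARN configuration (the ResourceManager hostname is an illustrative assumption):

    ```xml
    <configuration>
      <!-- Enable the shuffle service that MapReduce needs on each NodeManager -->
      <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
      </property>
      <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
      </property>
      <!-- Single-node setup: the ResourceManager runs on this host -->
      <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>127.0.0.1</value>
      </property>
    </configuration>
    ```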
  14. Format HDFS NameNode:

    #It is important to format the NameNode before starting Hadoop services for the first time
    $ hdfs namenode -format
  15. Start Hadoop Cluster:

    #Navigate to the directory containing the startup scripts
    $ cd hadoop-3.2.1/sbin
    $ ./start-dfs.sh
    $ ./start-yarn.sh
  16. Now you can verify and access the NameNode, DataNodes, and YARN Resource Manager in your browser:

    #Hadoop NameNode UI (default: http://localhost:9870)
    #YARN Resource Manager UI (default: http://localhost:8088)

Step 2: Installing and configuring Apache Hive

Apache Hive is an enterprise data warehouse system used to query, manage, and analyze data stored in the Hadoop Distributed File System.

The Hive Query Language (HiveQL) facilitates queries in a Hive command-line interface shell. Hadoop can use HiveQL as a bridge to communicate with relational database management systems and perform tasks based on SQL-like commands.

Prerequisites: Apache Hive is based on Hadoop and requires a fully functional Hadoop framework.

  1. Download and Untar Hive:

    $ wget
    $ tar xzf apache-hive-3.1.2-bin.tar.gz
  2. Configure Hive Environment Variables (bashrc):

    $ sudo nano .bashrc
    #Append the following to the file
    export HIVE_HOME=/home/hdoop/apache-hive-3.1.2-bin
    export PATH=$PATH:$HIVE_HOME/bin
    $ source ~/.bashrc
  3. Edit the hive-config.sh file to add the HADOOP_HOME directory:

    $ sudo nano $HIVE_HOME/bin/hive-config.sh
    export HADOOP_HOME=/home/hdoop/hadoop-3.2.1
  4. Create Hive Directories in HDFS. Create two separate directories to store data in the HDFS layer:

    #The temporary /tmp directory will store the intermediate results of Hive processes.
    $ hdfs dfs -mkdir /tmp
    $ hdfs dfs -chmod g+w /tmp
    #The warehouse directory will store the Hive-related tables.
    $ hdfs dfs -mkdir -p /user/hive/warehouse
    $ hdfs dfs -chmod g+w /user/hive/warehouse
  5. Configure the hive-site.xml file:

    $ cd $HIVE_HOME/conf
    $ cp hive-default.xml.template hive-site.xml
    #Optionally, if the metastore warehouse directory location differs from the default, update it here.
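    The relevant property in hive-site.xml, matching the warehouse directory created in step 4, looks like:

    ```xml
    <property>
      <name>hive.metastore.warehouse.dir</name>
      <value>/user/hive/warehouse</value>
    </property>
    ```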
  6. Initiate Derby Database. Derby is the default metadata store for Hive:

    $ $HIVE_HOME/bin/schematool -initSchema -dbType derby
  7. Special steps:

    #Fix the Guava incompatibility error in Hive. The Guava version must match the one bundled with Hadoop.
    $ rm $HIVE_HOME/lib/guava-19.0.jar
    $ cp $HADOOP_HOME/share/hadoop/hdfs/lib/guava-27.0-jre.jar $HIVE_HOME/lib/
    #Remember to use the schematool command once again to initiate the Derby database:
    $ $HIVE_HOME/bin/schematool -initSchema -dbType derby
  8. Launch Hive Client Shell to issue SQL-like commands and directly interact with HDFS:

    $ cd $HIVE_HOME/bin
    $ hive
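    Once the shell is up, you can issue HiveQL statements directly; in the sketch below, the table and column names are made up for illustration:

    ```sql
    -- List existing databases
    SHOW DATABASES;
    -- Create a demo table (name and columns are illustrative)
    CREATE TABLE IF NOT EXISTS demo_events (id INT, event STRING);
    -- Insert and query a row
    INSERT INTO demo_events VALUES (1, 'started');
    SELECT * FROM demo_events;
    ```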
  9. Start and connect to HiveServer2:

    #Boot the hive server
    $ $HIVE_HOME/bin/hiveserver2
    #Connect to Hiveserver
    $ $HIVE_HOME/bin/beeline -u jdbc:hive2://localhost:10000
    # You can also access HiveServer2 Web UI via localhost:10002
Step 3: Installing and configuring Apache Tez

  1. Download and install Tez:

    $ wget
    $ tar zxvf apache-tez-0.9.2-bin.tar.gz
  2. Copy tarball on HDFS:

    $ $HADOOP_HOME/bin/hadoop fs -mkdir -p /apps/tez-0.9.2
    $ cd apache-tez-0.9.2-bin
    $ $HADOOP_HOME/bin/hadoop fs -copyFromLocal share/tez.tar.gz /apps/tez-0.9.2
  3. Mitigation for a possible issue:

    $ sudo nano ${HADOOP_HOME}/etc/hadoop/hdfs-site.xml
    #Append the following property to the file
  4. Create tez-site.xml in ${TEZ_HOME}/conf:

    $ cd ${TEZ_HOME}/conf
    $ cp tez-default-template.xml tez-site.xml
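    At minimum, tez-site.xml needs to point tez.lib.uris at the tarball uploaded to HDFS in step 2; a sketch:

    ```xml
    <property>
      <name>tez.lib.uris</name>
      <value>${fs.defaultFS}/apps/tez-0.9.2/tez.tar.gz</value>
    </property>
    ```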
  5. Edit the hive-env.sh file to add the Tez directories to HADOOP_CLASSPATH:

    $ sudo nano ${HIVE_HOME}/conf/hive-env.sh
    #Append to the file
    export HADOOP_CLASSPATH=${TEZ_HOME}/conf:${TEZ_HOME}/*:${TEZ_HOME}/lib/*
  6. Edit hive-site.xml to change the execution engine property to Tez:

    $ sudo nano ${HIVE_HOME}/conf/hive-site.xml
    #Update the following configuration property to use tez as the execution engine instead of mr (MapReduce)
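    The property in question looks like this once switched to Tez:

    ```xml
    <property>
      <name>hive.execution.engine</name>
      <value>tez</value>
    </property>
    ```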
  7. Restart the Hive Server.

Step 4: Connecting Apache Hive to the Celonis Platform

  1. Download all required hive-jdbc dependencies for your specific Hive version.

  2. Clone and build celonis-hive-driver. To do so, contact Celonis customer support, mentioning the exact Hive version you are using.

  3. Add all the JARs from steps 1 and 2 to the classpath of connector-jdbc.

  4. Configure an uplink connection for your Apache Hive connection. For more information, see: On-premise extractors.

  5. From your data pool diagram in the Celonis Platform, click Data Connections.

  6. Click Add Data Connection and select Connect to Data Source.

  7. Click On-Premise - Database and select your uplink connection from step 4.

  8. Configure the following connection details:

    • Name: An internal reference for this data connection.

    • Database type: Select Hive.

    • Configuration type: Select Custom JDBC String.

    • JDBC connection string: This depends on your connection configuration:

      • When using SSL:

      • When using SSL and a certificate file: Add the certificate to the JDK keystore and then add the following string:

      • When using SSL and a .jks file: Change the SSLTrustStore parameter to the directory of the .jks file containing the SSL key and set the SSLTrustStorePwd to the correct password. Then add the following string:

        jdbc:impala://yourImpalaHostname:yourImpalaPort;AuthMech=4;SSLTrustStore=<path>/<to>/<file>/cacerts.jks; SSLTrustStorePwd=changeit;tsasltransportbufsize=1000;usenativequery=0;useonlysspi=1;usesqlunicodetypes=0;
    • Driver class: Enter the following:

    • Schema name: The schema you want to use.

    • Additional properties: Not needed.

    • Username and password: Enter the details from your Apache Hive account.

  9. Click Test Connection and correct any highlighted issues.

  10. Click Save.

    The connection between your Apache Hive account and the Celonis Platform is established. You can manage this connection at any time by clicking Options.