Course Details
Course Overview

This five-day big data analyst course is for anyone who wants to access, manipulate, transform, and analyze massive data sets in a Hadoop cluster using SQL and familiar scripting languages. The training teaches you to apply traditional data analytics and business intelligence skills to big data, and presents the tools data professionals need to work with complex data sets.

Target Audience

This course is designed for data analysts, business intelligence specialists, developers, system architects, and database administrators.

Course Prerequisites

SQL knowledge and basic Linux knowledge are required. Prior knowledge of Apache Hadoop is recommended but not mandatory.

Expected Accomplishments
You Will Learn:
  • How the open source ecosystem of big data tools addresses challenges not met by traditional RDBMSs
  • How to use Apache Hive and Apache Impala to provide SQL access to data
  • Hive and Impala syntax and data formats, including functions and subqueries
  • How to create, modify, and delete tables, views, and databases; load data; and store query results
  • How to create and use partitions and different file formats
  • How to combine two or more datasets using JOIN or UNION, as appropriate
  • What analytic and windowing functions are, and how to use them
  • How to store and query complex or nested data structures
  • How to process and analyze semi-structured and unstructured data
  • Techniques for optimizing Hive and Impala queries
  • How to extend the capabilities of Hive and Impala using parameters, custom file formats and SerDes, and external scripts
  • How to determine whether Hive, Impala, an RDBMS, or a mix of these is best for a given task
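To give a flavor of the table, view, and query skills listed above: Hive and Impala follow largely standard SQL for this core workflow. The sketch below illustrates the same pattern using Python's built-in SQLite module as a stand-in for a Hive/Impala warehouse (the `orders` table and its columns are invented for the example; Hive-specific storage clauses are omitted).

```python
import sqlite3

# In-memory database standing in for a data warehouse (illustration only).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Create a table and load a few rows -- the same CREATE TABLE / INSERT /
# SELECT pattern (modulo data types and storage clauses) applies in Hive
# and Impala.
cur.execute("CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL)")
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, "acme", 250.0), (2, "globex", 75.5), (3, "acme", 120.0)])

# Simplify repeated analysis with a view, then query it and store the result.
cur.execute("CREATE VIEW big_orders AS SELECT * FROM orders WHERE amount > 100")
rows = cur.execute(
    "SELECT customer, COUNT(*) FROM big_orders "
    "GROUP BY customer ORDER BY customer").fetchall()
print(rows)  # [('acme', 2)]
```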
Course Outline
Apache Hadoop Fundamentals
•The Motivation for Hadoop
•Hadoop Overview
•Data Storage: HDFS
•Distributed Data Processing: YARN, MapReduce, and Spark
•Data Processing and Analysis: Pig, Hive, and Impala
•Database Integration: Sqoop
•Other Hadoop Data Tools
•Exercise Scenario Explanation
Introduction to Apache Hive and Impala
•What Is Hive?
•What Is Impala?
•Why Use Hive and Impala?
•Schema and Data Storage
•Comparing Hive and Impala to Traditional Databases
•Use Cases
Querying with Apache Hive and Impala
•Databases and Tables
•Basic Hive and Impala Query Language Syntax
•Data Types
•Using Hue to Execute Queries
•Using Beeline (Hive's Shell)
•Using the Impala Shell
Common Operators and Built-In Functions
•Scalar Functions
•Aggregate Functions
Data Management
•Data Storage
•Creating Databases and Tables
•Loading Data
•Altering Databases and Tables
•Simplifying Queries with Views
•Storing Query Results
Data Storage and Performance
•Partitioning Tables
•Loading Data into Partitioned Tables
•When to Use Partitioning
•Choosing a File Format
•Using Avro and Parquet File Formats
Working with Multiple Datasets
•UNION and Joins
•Handling NULL Values in Joins
•Advanced Joins
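As a taste of the NULL-handling topic in this module: an outer join keeps unmatched rows and fills their columns with NULL, which you then typically neutralize with COALESCE. The sketch below shows the pattern in standard SQL via Python's built-in SQLite module (table and column names are invented for the example); the same query shape works in Hive and Impala.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
cur.execute("CREATE TABLE orders (customer_id INTEGER, amount REAL)")
cur.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "acme"), (2, "globex")])
cur.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 250.0), (1, 120.0)])

# A LEFT OUTER JOIN keeps customers with no orders; their order columns are
# NULL, so SUM over them is NULL. COALESCE turns that into a usable default.
rows = cur.execute("""
    SELECT c.name, COALESCE(SUM(o.amount), 0) AS total
    FROM customers c
    LEFT JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY c.name
""").fetchall()
print(rows)  # [('acme', 370.0), ('globex', 0)]
```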
Analytic Functions and Windowing
•Using Common Analytic Functions
•Other Analytic Functions
•Sliding Windows 
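To illustrate the sliding-window idea covered here: a windowed aggregate computes a value per row over a frame of neighboring rows, such as a moving average. The sketch below uses standard SQL window syntax via Python's built-in SQLite module (requires SQLite 3.25+; the `sales` table is invented for the example), and the same OVER (... ROWS BETWEEN ...) clause is what Hive and Impala use.

```python
import sqlite3  # the bundled SQLite supports standard window syntax since 3.25

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales (day INTEGER, amount REAL)")
cur.executemany("INSERT INTO sales VALUES (?, ?)",
                [(1, 10.0), (2, 20.0), (3, 30.0), (4, 40.0)])

# A sliding window: each row's average over itself and the preceding row.
rows = cur.execute("""
    SELECT day,
           AVG(amount) OVER (ORDER BY day
                             ROWS BETWEEN 1 PRECEDING AND CURRENT ROW) AS moving_avg
    FROM sales
""").fetchall()
print(rows)  # [(1, 10.0), (2, 15.0), (3, 25.0), (4, 35.0)]
```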
Complex Data
•Complex Data with Hive
•Complex Data with Impala
Analyzing Text
•Using Regular Expressions with Hive and Impala
•Processing Text Data with SerDes in Hive
•Sentiment Analysis and n-grams
Apache Hive Optimization
•Understanding Query Performance
•Hive on Spark
Apache Impala Optimization
•How Impala Executes Queries
•Improving Impala Performance
Extending Apache Hive and Impala
•Custom SerDes and File Formats in Hive
•Data Transformation with Custom Scripts in Hive
•User-Defined Functions
•Parameterized Queries
Choosing the Best Tool for the Job
•Comparing Hive, Impala, and Relational Databases
•Which to Choose?