Narasimha Rao

  • Hyderabad


Professional Summary

  • High-performing, self-motivated, and passionate professional with 2+ years of experience in storage, querying, processing, and analysis within the Big Data Hadoop framework and its ecosystem.
  • Extensive experience with Hadoop and its components, including HDFS, MapReduce, Apache Pig, Hive, Sqoop, and HBase.
  • Hands-on experience in core Java programming.
  • Good knowledge of Hadoop architecture and components such as HDFS, Job Tracker, Task Tracker, Name Node, Data Node, and MapReduce concepts.
  • Good knowledge of writing Hive Query Language and debugging Hive issues.
  • Good knowledge of writing Pig scripts and debugging Pig issues.
  • Experience setting up Hadoop clusters in test and production environments.
  • Developed big data workflows to process very large datasets.
  • Experience with storage and processing in Hue, covering all Hadoop ecosystem components.
  • Implemented proofs of concept (POCs) on the Hadoop stack and various Big Data analytics tools.
  • Developed data pipelines using Sqoop to import and export structured datasets into and out of the Hadoop ecosystem.
  • Experienced in loading data into the cluster from RDBMS using Sqoop, as well as from the local file system into Hadoop.
  • Experienced in installing Apache Hadoop.
  • Knowledge of Flume and NoSQL.
  • Excellent communication, interpersonal, and analytical skills, with a strong ability to perform as part of a team.
  • Exceptional ability to learn new concepts.

Work History


Jun 2013 - Present

Working as a Hadoop Developer at Invesco Private Limited, Hyderabad

Nov 2013 - Present


Project Name: Sensors Data Analytics

Client: Celestica

Environment: Hadoop, Apache Pig, Hive, SQOOP, Java, Linux, MySQL

Duration: Nov 2013 to Present

Role: Hadoop Developer

Description: The project helps the client track sensor data. With the number of sensor-embedded intelligent devices increasing exponentially, it is a struggle to effectively manage the voluminous sensor data they generate. Different sensors produce different data formats, which are difficult to correlate. Creating analytical models for this varied Big Data to provide real-time alerts to end users is an increasingly challenging task. In today's globalized market, leveraging this sensor data to identify strategic insights is essential to sustaining competitive advantage. This project addresses these challenges by enabling the client to collect, process, store, and analyze the voluminous sensor data.

Roles and Responsibilities:

  • Loaded all data generated from various sensors into HDFS for further processing.
  • Wrote Apache Pig scripts to process the HDFS data.
  • Created external Hive tables to store the processed results in tabular format.
  • Developed Sqoop scripts to enable interaction between the MySQL database and Hadoop.
  • Involved in requirements gathering, design, development, and testing.
  • Wrote script files for processing data and loading it into HDFS.
  • Wrote CLI commands using HDFS.
  • Fully involved in the requirements analysis phase.
  • Analyzed requirements to set up a cluster.
  • Configured passwordless access for Hadoop.
  • Wrote Hive queries to extract data per business requirements.
  • Performed unit testing to validate results and prepared unit test documents.
  • Developed a proof of concept (POC).

Project Name: Retail Pricing

Jun 2013 - Present

Hadoop Developer



2009 - 2013