Sai Harish Addanki - Lead Data Engineer, AWS Certified Professional

Sai Harish Addanki is a Lead Data Engineer at NIKE Inc. with over 9 years of experience in data engineering. He has extensive experience designing and developing complex real-time and batch data ingestion systems. At NIKE, he leads a team of 5 data engineers and has built solutions for demand forecasting, supply planning, and inventory management. He also has experience building recommendation algorithms and real-time analytics pipelines.

Sai Harish Addanki

Lead Data Engineer | AWS Certified Professional

Profile Details

Lead Engineer with extensive experience designing and developing complex end-to-end real-time and batch ingestion use cases. 9+ years of experience with Retail, Banking, and Health domain data; deep domain expertise in supply chain (inventory, supply, and material planning), consumer behaviour, and privacy and security; and 7+ years in the Big Data ecosystem. With a Master's in Computer Science and a quick learning ability, I can apply myself to any use case.

Contact
7037 NE Ridge Dr
Hillsboro, OR 97124
2347389933
[email protected]

Links
LinkedIn

Skills

Data Processing: Apache Spark, Apache Pig, Databricks Delta
Streaming: Spark Streaming (Kinesis, Kafka, Scala, HBase)
SQL: Hive, Athena, Impala, Presto
NoSQL: HBase, Druid, Elasticsearch
Data Warehousing: Snowflake
Programming: Python, Scala, Java
Build and Versioning: CI/CD (Jenkins), Git
Job Scheduler: Airflow, Oozie, Autosys
ML Algorithms: Matrix factorization and collaborative filtering
BI & Visualization: Tableau, Kibana
Compute: EC2, EMR
Databases: Oracle, MySQL, Teradata

Employment History

Lead Data Engineer at NIKE Inc., Beaverton
January 2017 — Present
• Leading a team of 5 data engineers in identifying and building analytical solutions for Nike's global geos' demand, supply, and inventory planning.
• Designed and developed the real-time ingestion process for the Audience Builder tool, which performs real-time analytics on the workbench used by Member Insights.
• Built a dynamic Python ETL framework that ingests data from RDBMS sources via JDBC and loads it into Snowflake.
• Developed a streaming framework in Scala for Audience Builder and fine-tuned the performance of HBase and Elasticsearch tables sourced from Kinesis.
• Built utilities for Audience Builder covering data quality and HBase throughput efficiency.
• Proposed and implemented a config-file-driven automation of the data ingestion processes, greatly reducing the time needed to add new attributes to the Workbench.
• Built a privacy solution for Nike's DPA (Dutch Privacy Act) compliance by building encryption, decryption, and pseudonymization UDFs.
• Data product telemetry: conceptualized, designed, and architected a telemetry system to gain insight into usage across different environments.
• Azure cube design: built dynamic Azure runbooks that authenticate via REST API to build a data cube in the Azure environment.

Sr. BigData Engineer at Bank Of America, Charlotte
September 2015 — January 2017
• Migrated all Hive ETL code to Spark ETL using Spark RDDs and PySpark.
• Developed UDFs for Hive and Pig to support extra functionality provided by Teradata.
• Generated dynamic CASE statements in Python from Excel files provided by the business.
• Worked extensively with PySpark, Hive, Pig, and Sqoop for sourcing and transformations.
• Performed performance optimizations that reduced ETL run time by 50%.
• Worked with Avro and Parquet file formats with Snappy compression.
• Used Autosys for scheduling the Oozie workflows.
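The dynamic CASE-statement generation mentioned above could be sketched as follows. This is a hedged illustration: the column name, status codes, and `build_case_statement` helper are hypothetical, and the real mapping rows came from business-provided Excel files (readable with e.g. openpyxl) rather than an inline list.

```python
# Sketch: build a SQL CASE expression from business-provided mapping rows.
# A plain list of tuples stands in for rows that would really be read
# from an Excel sheet.

def build_case_statement(column, mappings, default="'UNKNOWN'"):
    """Render (source_value, target_value) pairs as a SQL CASE expression."""
    clauses = " ".join(
        f"WHEN {column} = '{src}' THEN '{dst}'" for src, dst in mappings
    )
    return f"CASE {clauses} ELSE {default} END"

# Hypothetical mapping, as it might appear in the Excel sheet.
rules = [("01", "ACTIVE"), ("02", "CLOSED"), ("03", "SUSPENDED")]
sql = build_case_statement("acct_status_cd", rules)
print(sql)
```

The generated string can then be spliced into a Hive or Teradata query; quoting, escaping, and type handling are omitted for brevity.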

Sr. BigData Engineer at Lowes, Mooresville
March 2015 — September 2015
• Developed recommendation algorithms (matrix factorization and collaborative filtering) that are currently live.
• Developed item-preference datasets as input to the matrix factorization algorithm in MapReduce and Hive.
• Wrote MapReduce programs for transforming clickstream data from HDFS to Hive tables.
• Partnered with data scientists in building recommendation algorithms, providing data insights and predictive models of consumer behaviour.
• Developed Falcon jobs for clickstream transformation and transaction data preparation for the algorithm.
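The matrix-factorization technique named above can be illustrated with a minimal, pure-Python SGD sketch. Everything here is toy material: the ratings, dimensions, and hyperparameters are made up, and the production version ran as MapReduce/Hive jobs over clickstream data rather than in-memory Python.

```python
import random

def factorize(ratings, n_users, n_items, k=2, epochs=2000, lr=0.01, reg=0.02):
    """Learn user/item latent factors from (user, item, rating) triples via
    stochastic gradient descent on squared error with L2 regularization."""
    random.seed(0)  # deterministic toy run
    P = [[random.random() for _ in range(k)] for _ in range(n_users)]
    Q = [[random.random() for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - sum(P[u][f] * Q[i][f] for f in range(k))
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (err * qi - reg * pu)
                Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

# Toy (user, item, rating) interactions.
ratings = [(0, 0, 5), (0, 1, 3), (1, 0, 4), (2, 1, 1), (2, 2, 5)]
P, Q = factorize(ratings, n_users=3, n_items=3)
# Score an unseen (user, item) pair from the learned factors.
score = sum(P[1][f] * Q[2][f] for f in range(2))
```

Item-item collaborative filtering would instead compare columns of the interaction matrix directly; the factor model tends to generalize better on sparse clickstream data.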

Sr. BigData Engineer at Fidelity Investments, New Hampshire
August 2014 — March 2015
• Involved in training, analysis, architectural design, development (code and common modules), code reviews, and testing phases.
• Built data solutions for various retail clients spanning domains such as finance, marketing, store operations, and supply chain.
• Migrated a legacy mainframe solution to a Spark-based data solution.

Asst. Systems Engineer at TCS, Chennai
June 2010 — August 2012
• Developed SQL queries using MySQL and established connectivity.
• Wrote and tested JUnit test classes.
• Provided support to client applications in production and other environments.

Education
Master's in Computer Science, University of Akron, Akron, OH
September 2012 — July 2014
GPA: 3.8/4.0
