
ALAJANGI KISHOR KUMAR

Python ETL Developer / Talend Developer

+91-8919438590
[email protected]






Business Domains Worked On
• Health Care
• Banking
Technical Skills
• Talend, SQL, Hive, Spark, Python, Hadoop ecosystem
• Databases – Oracle, MySQL, MSSQL
• Scheduling tools – TAC, TMC, Jira
Specialities
• Roles performed: ETL Consultant, Talend Developer, Python ETL Developer










4+ years of experience in Talend and Python ETL development, with expertise across multiple tool sets and a demonstrated history of working in the information technology and services industry. Skilled in Talend, Hive, the Hadoop ecosystem, Spark, Python, Jira, and SQL.


Profile:
• 4.3 years of total experience in Talend development and production support, spanning data warehousing and ETL testing.
• Worked on various data-driven business intelligence projects using Agile Scrum methodology, including designing and developing ETL jobs with Talend.
• Expertise with tools in the Hadoop ecosystem, including Pig, Hive, HDFS, MapReduce, Spark, and YARN.
• Used Python, XML, JSON, and delimited files for developing web applications; good experience with NoSQL databases such as Apache Cassandra.
• Python scripting with a focus on DevOps tools, CI/CD, and hands-on engineering.
• Experience with Talend for design, development, and admin activities.
• Used PySpark scripts in jobs to improve performance.
• Wrote Hive and Impala queries in Talend big data jobs to meet requirements.
• Deployed Talend jobs on the server.
• Developed ETL (standard and big data batch) jobs with Talend Big Data Platform.
• Created Hive tables, loaded them with data, and wrote Hive queries that run internally as MapReduce jobs.
• Implemented partitioning, dynamic partitioning, and bucketing in Hive.
• Developed Hive queries to process data and generate data cubes for visualization.
• Wrote Hive UDFs to sort struct fields and return complex data types.
• Created reusable components using joblets and developed reusable jobs using the COPY command to load data into a Redshift database.
• Experience implementing SCD Type-1, Type-2, and Type-3 dimensional tables.
• Interacted with onshore/offshore teams and clients to understand and refine business needs.
• Developed and implemented performance-tuning strategies.
• Worked closely with clients on requirements and deliverables.
• Knowledge of database migration/integration, data warehousing, and OLAP.
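The SCD Type-2 work mentioned above (in the project, built with Talend components) can be sketched in plain Python to show the technique: instead of overwriting a changed dimension row, the current version is expired and a new current version is appended. The field names (customer_id, city, start_date, end_date, is_current) are illustrative assumptions, not the actual project schema.

```python
from datetime import date

def scd2_upsert(dim_rows, incoming, today=None):
    """Apply an SCD Type-2 change for one incoming record.

    Expires the current row for the business key when a tracked
    attribute changed, then appends a new current row. Field names
    are illustrative, not the real project schema.
    """
    today = today or date.today()
    out, changed = [], False
    for row in dim_rows:
        if (row["customer_id"] == incoming["customer_id"]
                and row["is_current"]
                and row["city"] != incoming["city"]):
            # Expire the old version instead of overwriting it.
            out.append(dict(row, end_date=today, is_current=False))
            changed = True
        else:
            out.append(row)
    if changed:
        out.append({"customer_id": incoming["customer_id"],
                    "city": incoming["city"],
                    "start_date": today,
                    "end_date": None,
                    "is_current": True})
    return out
```

In Talend the same pattern is typically expressed through lookup/update components rather than hand-written code; the sketch only illustrates the Type-2 semantics.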


Work Experience:
• Worked as an ETL Developer at YASH Technologies, Hyderabad, from August 2019 to January 2022.
• Worked as a Python ETL Developer at Tata Consultancy Services, Hyderabad, from January 2022 to December 2022.

Key Projects:
Client : Tata Consultancy Services (TCS)
Project : RBI CIMS
Environment : Talend 7.3, SQL Server, HDFS, Hive, Python, Hadoop ecosystem
Role : Python/Talend ETL Developer


Responsibilities:

* Used Talend Hive components to extract data, apply business logic, and move the data into different file systems and databases.
* Developed ETL (standard and big data batch) jobs with Talend Big Data Platform.
* Designed and developed mappings for data loads.
* Deployed Talend jobs on the server.
* Created job conductors and execution plans in Talend Administration Center (TAC).
* Scheduled and monitored execution plans.
* Used PySpark scripts in jobs to improve performance.
* Wrote Hive and Impala queries in Talend big data jobs to meet requirements.
* Tuned jobs for better performance.
* Translated business processes into Talend jobs using various components.
* Involved in the end-to-end development, implementation, and production support process from dev to prod environments.
* Involved in testing that the ETL process meets business requirements.
* Used the tRunJob component to run a child job from a parent job and to pass parameters from parent to child.
* Created context variables and groups to run Talend jobs against different environments.
* Used Python, XML, JSON, and delimited files for developing web applications; good experience with NoSQL databases such as Apache Cassandra.
* Python scripting with a focus on DevOps tools, CI/CD, and hands-on engineering.
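The delimited-file and JSON handling mentioned above can be sketched with the Python standard library. The pipe delimiter and column names here are illustrative assumptions, not the project's actual extract layout.

```python
import csv
import io
import json

def delimited_to_json(text, delimiter="|"):
    """Parse a delimited extract into a JSON array of records.

    The pipe delimiter and the column names in the sample below
    are illustrative assumptions, not real project data.
    """
    reader = csv.DictReader(io.StringIO(text), delimiter=delimiter)
    return json.dumps(list(reader))

sample = "id|name\n1|Kishor\n2|Raj\n"
# prints [{"id": "1", "name": "Kishor"}, {"id": "2", "name": "Raj"}]
print(delimited_to_json(sample))
```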

Environments: Talend, SQL Server, Python, Spark, TAC, Jira, WinSCP, PuTTY

Caterpillar – Caterpillar Inc. is the world's leading manufacturer of construction and mining equipment, diesel and natural gas engines, industrial turbines, and diesel-electric locomotives, headquartered in Irving, Texas, United States.

Client : YASH Technologies
Role : Software Developer
Duration : Aug 2019 to Jan 2022
Position : Team Member
Environment : MSSQL, Talend Data Integration, Oracle

Responsibilities:

• Member of the warehouse design team; assisted in creating fact and dimension tables based on specifications provided by the manager.
• Developed jobs using Talend for data loading.
• Imported source/target tables from the respective databases.
• Understood existing business models and client requirements.
• Extensively involved in developing jobs using various Talend transformations according to business logic.
• Worked as an ETL developer in the data-loading phase from Excel and CSV files.
• Translated business processes into Talend jobs using various components.
• Prepared design documents, use-case documents, and test-case documents.
• Prepared unit test plans, unit test runs, and checklists.
• Tuned jobs for better performance.
• Implemented error handling to provide detailed error messages.
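The error-handling bullet above (detailed messages instead of silent failures) can be sketched in Python; the required columns and row structure are illustrative assumptions, not the project's actual validation rules.

```python
def load_rows(rows):
    """Validate rows before loading; collect row-level error messages
    rather than aborting on the first bad record.

    The required columns ("id", "amount") are illustrative
    assumptions, not the real project schema.
    """
    loaded, errors = [], []
    for i, row in enumerate(rows, start=1):
        missing = [col for col in ("id", "amount") if not row.get(col)]
        if missing:
            errors.append(f"row {i}: missing {', '.join(missing)}")
        else:
            loaded.append(row)
    return loaded, errors
```

In the Talend jobs this role describes, the equivalent is usually a reject flow routed to an error table or log; the sketch only shows the detailed-message idea.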


Environments: Talend, Oracle, S3, MySQL, TAC