Pranjal Patidar
About Pranjal Patidar
Pranjal Patidar is a Data Engineer with an educational background in Data Science and Management from the Indian Institute of Technology, Indore and the Indian Institute of Management, Indore. He has experience developing ETL pipelines, managing data workflows, and working with cloud technologies, and he currently works at Brillio in Pune.
Work at Brillio
Pranjal Patidar has served as a Data Engineer at Brillio since 2022, based in Pune, Maharashtra, India. In this role he develops and maintains data transformation jobs using PySpark and Airflow, building scalable data management platforms and optimizing data workflows to support data-driven decision-making.
Education and Expertise
Pranjal Patidar has an academic background in Data Science and Management. From 2023 to 2024 he pursued a Master of Science (MS) in that field, studying at both the Indian Institute of Technology, Indore and the Indian Institute of Management, Indore. Prior to this, he earned a Bachelor of Engineering in Computer Science from Rajiv Gandhi Proudyogiki Vishwavidyalaya from 2016 to 2020.
Background
Before joining Brillio, Pranjal Patidar worked at Cognizant in two roles. He started as a Programmer Analyst Trainee from 2020 to 2021 in Chennai, Tamil Nadu, India, where he gained foundational experience, then moved into a Full Stack Engineer role from 2021 to 2022 in Bengaluru, Karnataka, India, where he deepened his technical skills.
Technical Skills and Projects
Pranjal Patidar has demonstrated data engineering expertise across several projects. He revamped ETL pipelines to ensure reliable data flow, implemented search-console ETL workflows on cloud infrastructure, collaborated with cross-functional teams to process large volumes of hotel booking data, and built a scalable data management platform on AWS. His technical skills include extracting data from Hive tables, designing complex data transformation pipelines with Airflow and PySpark, and deploying PySpark jobs on Amazon EMR.
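The kind of pipeline described above, a PySpark transformation over Hive data orchestrated by an Airflow DAG, might be sketched roughly as follows. This is an illustrative configuration sketch only: the table name, S3 path, DAG id, and schedule are hypothetical assumptions, not details of his actual projects, and running it requires Airflow and Spark (e.g. on an EMR cluster).

```python
# Illustrative sketch: all names, paths, and the schedule are hypothetical.
# Assumes Apache Airflow 2.x and PySpark are available on the cluster.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def transform_bookings():
    """Extract hotel booking data from a Hive table, aggregate it with
    PySpark, and write the result to S3 as Parquet."""
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (
        SparkSession.builder
        .appName("hotel_bookings_etl")  # hypothetical job name
        .enableHiveSupport()            # read managed Hive tables
        .getOrCreate()
    )

    bookings = spark.table("analytics.hotel_bookings")  # hypothetical Hive table
    daily = (
        bookings
        .groupBy("hotel_id", F.to_date("booked_at").alias("booking_date"))
        .agg(F.count("*").alias("bookings"),
             F.sum("amount").alias("revenue"))
    )
    # Hypothetical destination bucket for the transformed output.
    daily.write.mode("overwrite").parquet("s3://example-bucket/bookings_daily/")
    spark.stop()


with DAG(
    dag_id="hotel_bookings_etl",   # hypothetical DAG id
    start_date=datetime(2023, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="transform_bookings",
        python_callable=transform_bookings,
    )
```

Separating the Spark logic into a plain callable keeps the DAG file lightweight; on EMR the same function could instead be packaged and submitted as a Spark step rather than run inside the Airflow worker.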