Sweshika Reddy
About Sweshika Reddy
Sweshika Reddy is a Senior Data Engineer at Asurion with extensive experience in data engineering and software development across multiple companies in India and the US.
Current Position at Asurion
As of 2022, Sweshika Reddy holds the position of Senior Data Engineer at Asurion in the United States. In this role, she is responsible for developing and managing complex data pipelines, drawing on her extensive experience with the Hadoop ecosystem and database management.
Past Experience at Kotak Mahindra Bank
From 2019 to 2021, Sweshika Reddy worked as a Data Engineer at Kotak Mahindra Bank in Maharashtra, India. During her tenure, she focused on building and maintaining data pipelines, managing large datasets, and integrating various data sources for analysis and reporting.
Role at CRED
Sweshika Reddy served as an Azure Developer at CRED in Bengaluru, Karnataka, India, from 2018 to 2019. Her responsibilities included creating secure and scalable applications using Microsoft's Azure platform, optimizing cloud solutions, and ensuring the reliability of data services.
Experience at Groww
From 2017 to 2018, Sweshika Reddy worked as a Software Developer at Groww in Karnataka, India. In this role, she developed and maintained software applications, optimized database performance, and contributed to the overall system architecture of the company's financial platforms.
Technical Skills and Contributions
Sweshika Reddy has extensive technical expertise in data engineering. She has developed data pipelines using tools such as Flume, Sqoop, Pig, and Java MapReduce, and has installed, configured, and maintained components of the Hadoop ecosystem, including Hive, HBase, ZooKeeper, and Sqoop. She created Java MapReduce jobs for data cleaning and wrote Hive queries. Her additional contributions include designing high-availability and disaster-recovery solutions, creating Hive UDFs, automating data imports with Oozie, handling data with PySpark, and leveraging NoSQL databases such as HBase for database design and development. She has also developed ETL processes using Amazon S3, EMR, and Spark, and built Spark streaming applications.
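To illustrate the kind of ETL work described above, the following is a minimal PySpark sketch of a batch pipeline that reads raw data from Amazon S3, cleans it, and writes it back for downstream analysis. It is a generic, hedged example rather than a description of her actual pipelines; the bucket, paths, and column names are hypothetical placeholders.

```python
# Minimal PySpark ETL sketch: read raw CSV data from S3, clean it,
# and write the result back as Parquet. All paths and column names
# are hypothetical and used only for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-etl-sketch").getOrCreate()

# Read raw CSV files from a (hypothetical) S3 bucket.
raw = spark.read.option("header", "true").csv("s3a://example-bucket/raw/records/")

# Basic cleaning: drop rows missing the key field, normalize the date
# column, and deduplicate on the record identifier.
cleaned = (
    raw.dropna(subset=["record_id"])
       .withColumn("record_date", F.to_date("record_date", "yyyy-MM-dd"))
       .dropDuplicates(["record_id"])
)

# Write the cleaned dataset back to S3 as Parquet, partitioned by date,
# ready for Hive or Spark SQL queries downstream.
cleaned.write.mode("overwrite").partitionBy("record_date").parquet(
    "s3a://example-bucket/cleaned/records/"
)

spark.stop()
```

In practice a job like this would typically be scheduled by an orchestrator such as Oozie and run on an EMR cluster, matching the automation and AWS tooling mentioned above.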