Kalyan Trivendrum
About Kalyan Trivendrum
Kalyan Trivendrum is a Big Data Data Scientist at Miracle Software Systems, Inc. in Michigan, where he has worked since 2018. He has a strong background in data analytics and development, with previous roles at Tech Mahindra and AT&T, and holds an MBA and a Bachelor's degree in Computer Applications from Sri Venkateswara University.
Current Role at Miracle Software Systems
Kalyan Trivendrum currently serves as a Big Data Data Scientist at Miracle Software Systems, Inc. in Michigan. He has held this position since 2018, contributing to various projects that leverage big data technologies. His role involves utilizing advanced data processing techniques and tools to derive insights from large datasets.
Previous Experience at Miracle Software Systems
Kalyan previously worked at Miracle Software Systems, Inc. as a Big Data - Spark Developer for four months in 2018. During this time, he focused on implementing big data solutions and enhancing the organization's data processing capabilities.
Professional Background in Data Analysis
Kalyan has a solid background in data analysis. He worked at Tech Mahindra as a Data Analyst from 2012 to 2015, where he analyzed data to support business decisions and delivered actionable insights. He then worked at AT&T as a Hadoop Developer from 2015 to 2018, developing and optimizing data processing workflows.
Educational Qualifications
Kalyan Trivendrum earned a Master of Business Administration (MBA) in Business Administration and Management from Sri Venkateswara University, studying from 2002 to 2004. He also earned a Bachelor's degree in Computer Applications from the same university, from 1998 to 2001. This educational background provides a strong foundation for his work in data science and analytics.
Technical Skills and Expertise
Kalyan possesses expertise in a range of big data technologies and tools. He has experience collecting and integrating log data into HDFS using Flume and Kafka. He is skilled in writing ad-hoc queries and computing metrics with Hive and Pig, and has implemented advanced procedures such as text analytics with Apache Spark. His technical proficiency also includes using Sqoop for data transfer between relational databases and Hadoop, and developing custom Hive UDFs for application-specific logic.
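To illustrate the kind of text analytics mentioned above, the classic example is a word-frequency count over log lines. The sketch below is illustrative only (not Kalyan's actual code) and runs in plain Python; the comments note how the same steps would map onto Spark's `flatMap`/`map`/`reduceByKey` operations in a distributed job.

```python
from collections import Counter

def word_count(lines):
    """Tokenize lines and tally word frequencies.

    In a Spark job the equivalent pipeline would be roughly:
        rdd.flatMap(lambda l: l.lower().split())
           .map(lambda w: (w, 1))
           .reduceByKey(lambda a, b: a + b)
    Here the same logic runs locally as a plain-Python sketch.
    """
    counts = Counter()
    for line in lines:
        # flatMap step: split each line into lowercase tokens
        for word in line.lower().split():
            # map + reduceByKey step: accumulate a count per token
            counts[word] += 1
    return dict(counts)

# Hypothetical sample log lines, for illustration only
log_lines = [
    "ERROR disk full",
    "INFO job started",
    "ERROR disk full",
]
print(word_count(log_lines))
```

In a real Spark deployment the input would come from HDFS (for example, data landed there by Flume or Kafka) and the aggregation would be distributed across the cluster rather than computed in a single process.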