Sanjeet Attili
About Sanjeet Attili
Sanjeet Attili is an Associate Data Scientist at Uniphore in Bengaluru, India, where he has worked since 2022. His background in data science and machine learning was built through internships, freelance engagements, and an analyst role before joining Uniphore.
Work at Uniphore
Sanjeet Attili has been employed at Uniphore as an Associate Data Scientist since 2022. His role is based in Bengaluru, Karnataka, India, and follows a hybrid work model. In this position, he has contributed to projects including multilingual post-ASR text correction, where his optimization work significantly reduced model inference time.
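Post-ASR correction is typically framed as a sequence-to-sequence task: the model reads a noisy recognizer transcript and generates a cleaned-up version. A minimal illustration of that framing with the Hugging Face transformers library follows; the checkpoint name is hypothetical, since Uniphore's production model is not public.

    # Minimal sketch of post-ASR correction as a sequence-to-sequence task.
    # The checkpoint name is hypothetical; the production model is not public.
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("org/multilingual-asr-corrector")
    model = AutoModelForSeq2SeqLM.from_pretrained("org/multilingual-asr-corrector")

    def correct(asr_hypothesis: str) -> str:
        # Encode the noisy transcript and decode a corrected version.
        inputs = tokenizer(asr_hypothesis, return_tensors="pt", truncation=True)
        output_ids = model.generate(**inputs, max_new_tokens=128, num_beams=4)
        return tokenizer.decode(output_ids[0], skip_special_tokens=True)

    print(correct("i wood like to no my account ballance"))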
Previous Experience
Before joining Uniphore, Sanjeet Attili held several positions in the tech industry. In 2022, he worked as a Freelance Engineer on Upwork for five months, as an NLP Freelancer on Freelancer.com for one month, and as an Analyst at KPMG India for six months. His earlier roles included internships at The Neuro Labs, SOIL (School of Innovation and Leadership), and Bytelearn, where he gained hands-on experience in data science and machine learning.
Education and Expertise
Sanjeet Attili earned a Bachelor of Technology (BTech) in Computer Science from IIIT-Naya Raipur, completing his studies from 2018 to 2022. Before that, he attended Delhi Public School in Visakhapatnam and FIITJEE, where he built his foundation in science and engineering.
Technical Contributions
Sanjeet has made notable contributions in the field of natural language processing and machine learning. He deployed large language models such as FLAN-T5 XXL and FLAN-UL2 for summarization and named entity recognition tasks using AWS SageMaker. He also optimized T5-based SetFit models and developed a model-agnostic Flask inference API that serves models from multiple machine learning frameworks behind a single interface.
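For context, hosting a public checkpoint such as FLAN-T5 XXL on a SageMaker real-time endpoint usually follows the pattern below, shown with the sagemaker Python SDK. This is a generic sketch rather than his actual setup; the version pins and instance type are illustrative.

    # Generic sketch of hosting FLAN-T5 XXL on a SageMaker real-time endpoint.
    # Version pins and instance type are illustrative, not his actual setup.
    import sagemaker
    from sagemaker.huggingface import HuggingFaceModel

    role = sagemaker.get_execution_role()

    # Pull the public checkpoint from the Hugging Face Hub and set the task.
    hub_env = {"HF_MODEL_ID": "google/flan-t5-xxl", "HF_TASK": "summarization"}

    model = HuggingFaceModel(
        env=hub_env,
        role=role,
        transformers_version="4.26",
        pytorch_version="1.13",
        py_version="py39",
    )

    # An 11B-parameter model needs a large GPU instance.
    predictor = model.deploy(initial_instance_count=1, instance_type="ml.g5.12xlarge")
    print(predictor.predict({"inputs": "Summarize: the call covered pricing, renewals, and churn."}))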
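A model-agnostic serving layer typically hides each framework behind a common callable and routes requests by model name. The sketch below illustrates the idea; the registry and route names are hypothetical.

    # Minimal sketch of a model-agnostic inference API: every model is registered
    # behind a plain callable, so the route never depends on the framework.
    # Registry and route names are hypothetical.
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    MODEL_REGISTRY = {}  # model name -> callable taking a list of texts

    def register_model(name, predict_fn):
        MODEL_REGISTRY[name] = predict_fn

    @app.route("/predict/<name>", methods=["POST"])
    def predict(name):
        if name not in MODEL_REGISTRY:
            return jsonify({"error": f"unknown model '{name}'"}), 404
        texts = request.get_json(force=True).get("texts", [])
        return jsonify({"predictions": MODEL_REGISTRY[name](texts)})

    # Any framework plugs in the same way, e.g. a PyTorch or ONNX model wrapped
    # in a function. Here a trivial echo model stands in.
    register_model("echo", lambda texts: texts)

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)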
Projects and Innovations
Sanjeet Attili has worked on several innovative projects, including fine-tuning the IndicBART model on multilingual datasets in languages such as Hindi and Tamil. He explored different fine-tuning approaches, including PEFT-based LoRA and regular full fine-tuning. To cut latency, he converted models to ONNX format and applied graph optimizations.
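In PEFT-based LoRA, small low-rank adapter matrices are injected into the attention layers and only those adapters are trained, whereas regular fine-tuning updates every weight. A minimal sketch with the peft library follows; the rank, alpha, and target modules are illustrative choices, not his reported configuration.

    # Minimal sketch of PEFT/LoRA fine-tuning on IndicBART. Rank, alpha, and
    # target modules are illustrative, not his reported configuration.
    from peft import LoraConfig, TaskType, get_peft_model
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(
        "ai4bharat/IndicBART", do_lower_case=False, use_fast=False, keep_accents=True
    )
    model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/IndicBART")

    # Inject low-rank adapters into the attention projections; only these
    # adapter weights are trained, while the base model stays frozen.
    lora_config = LoraConfig(
        task_type=TaskType.SEQ_2_SEQ_LM,
        r=16,
        lora_alpha=32,
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # adapters are a tiny fraction of all weights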
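A common route to the latency reduction described above is exporting the model to ONNX and letting ONNX Runtime apply graph-level optimizations such as node fusion and constant folding. The sketch below uses Hugging Face Optimum; the exact toolchain he used is not stated, and the save paths are illustrative.

    # Sketch of ONNX export plus graph optimization with Hugging Face Optimum,
    # one common route to lower seq2seq latency; paths are illustrative.
    from optimum.onnxruntime import ORTModelForSeq2SeqLM, ORTOptimizer
    from optimum.onnxruntime.configuration import OptimizationConfig

    # Export the PyTorch checkpoint to ONNX.
    ort_model = ORTModelForSeq2SeqLM.from_pretrained("ai4bharat/IndicBART", export=True)
    ort_model.save_pretrained("indicbart-onnx")

    # Apply graph-level optimizations such as node fusion and constant folding.
    optimizer = ORTOptimizer.from_pretrained(ort_model)
    optimizer.optimize(
        save_dir="indicbart-onnx-optimized",
        optimization_config=OptimizationConfig(optimization_level=2),
    )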