Kandan Ramakrishnan
About Kandan Ramakrishnan
Kandan Ramakrishnan is a Research Assistant Professor at Baylor College of Medicine, with a strong background in computer science and brain-constrained learning. Earlier in his career he was a Research Intern at Inria and a Postdoctoral Associate at MIT, and he has contributed to research projects including methods that regularize the training of networks with biological data and the integration of large language models with diffusion models.
Work at Inria
Kandan Ramakrishnan was a Research Intern at Inria in Sophia Antipolis, starting in 2011. There he worked on methods that regularize the training of networks using biological data, with the aim of improving the performance and reliability of machine learning models.
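As an illustration only, the minimal sketch below shows one common way such biological regularization can be set up in PyTorch: an auxiliary representational-similarity (RSA-style) loss nudges an intermediate layer toward the similarity structure of recorded neural responses. The network, tensor shapes, loss weight, and similarity measure are all assumptions made for the example, not a description of Ramakrishnan's actual method.

    import torch
    import torch.nn as nn

    def rsa_matrix(z):
        # Center and row-normalize, then take pairwise cosine similarity
        # between samples (a simple representational-similarity matrix).
        z = z - z.mean(dim=1, keepdim=True)
        z = z / (z.norm(dim=1, keepdim=True) + 1e-8)
        return z @ z.t()

    class SmallNet(nn.Module):
        def __init__(self, n_classes=10, feat_dim=64):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Flatten(), nn.Linear(3 * 32 * 32, feat_dim), nn.ReLU())
            self.head = nn.Linear(feat_dim, n_classes)

        def forward(self, x):
            feats = self.backbone(x)  # intermediate representation to regularize
            return self.head(feats), feats

    model = SmallNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    task_loss = nn.CrossEntropyLoss()
    lam = 0.1  # weight of the biological regularizer (assumed value)

    # Synthetic stand-ins: images, labels, and neural recordings for the
    # same stimuli (e.g., 100 recorded units per image).
    x = torch.randn(16, 3, 32, 32)
    y = torch.randint(0, 10, (16,))
    neural = torch.randn(16, 100)

    logits, feats = model(x)
    # RSA-style penalty: make the model's sample-by-sample similarity
    # structure match that of the neural responses.
    reg = ((rsa_matrix(feats) - rsa_matrix(neural)) ** 2).mean()
    loss = task_loss(logits, y) + lam * reg

    opt.zero_grad()
    loss.backward()
    opt.step()

In a setup like this, the regularizer acts as a second teacher: the classification loss drives task accuracy while the similarity term constrains the learned representation to stay brain-like.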
Current Role at Baylor College of Medicine
Since 2020, Kandan Ramakrishnan has been a Research Assistant Professor at Baylor College of Medicine in Houston, Texas, where he applies his expertise in computer science and machine learning to research projects in biomedical applications.
Previous Experience at MIT
Kandan Ramakrishnan was a Postdoctoral Associate at the Massachusetts Institute of Technology from 2017 to 2019, where his research focused on machine learning techniques and their applications in domains including computer vision.
Educational Background
Kandan Ramakrishnan earned his PhD in Computer Science from the University of Amsterdam (2012-2017), after completing a Master's degree at the University of Minnesota (2008-2011). This academic background laid the foundation for his research interests in machine learning and computer vision.
Research Focus and Contributions
Kandan Ramakrishnan's research spans several areas: brain-constrained learning to improve object detection accuracy, modeling visual dynamics with world models, and self-supervised pre-training techniques for one-shot learning. He also works on integrating large language models with diffusion models for computer vision applications, as sketched below.
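As a rough illustration of what integrating a language model with a diffusion model can look like, the toy sketch below conditions a denoising network on a pooled text embedding during a single diffusion training step. The encoder stand-in, dimensions, and noise schedule are hypothetical simplifications, not the systems described in his work.

    import torch
    import torch.nn as nn

    class ConditionedDenoiser(nn.Module):
        def __init__(self, img_dim=256, text_dim=64, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(img_dim + text_dim + 1, hidden), nn.ReLU(),
                nn.Linear(hidden, img_dim))

        def forward(self, noisy, t, text_emb):
            # Condition the noise prediction on the timestep and the
            # language-model embedding of the prompt.
            t = t.float().unsqueeze(-1) / 1000.0
            return self.net(torch.cat([noisy, t, text_emb], dim=-1))

    text_encoder = nn.Embedding(1000, 64)  # stand-in for a frozen LLM encoder
    denoiser = ConditionedDenoiser()
    opt = torch.optim.Adam(denoiser.parameters(), lr=1e-4)

    # One simplified training step: corrupt the data at a random timestep
    # and train the denoiser to recover the injected noise.
    x0 = torch.randn(8, 256)                 # "images" flattened to vectors
    tokens = torch.randint(0, 1000, (8, 4))  # toy token ids for the prompts
    text_emb = text_encoder(tokens).mean(dim=1)  # pooled prompt embedding

    t = torch.randint(0, 1000, (8,))
    alpha = 1.0 - t.float().unsqueeze(-1) / 1000.0  # toy linear noise schedule
    noise = torch.randn_like(x0)
    noisy = alpha.sqrt() * x0 + (1.0 - alpha).sqrt() * noise

    pred = denoiser(noisy, t, text_emb)
    loss = ((pred - noise) ** 2).mean()

    opt.zero_grad()
    loss.backward()
    opt.step()

The key design point the sketch captures is that the language model's output enters the diffusion process purely as a conditioning signal, so image generation can be steered by text without retraining the text encoder.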