Panagiotis Tzirakis
About Panagiotis Tzirakis
Panagiotis Tzirakis is a Staff AI Research Scientist with a Ph.D. from Imperial College London, specializing in multimodal emotion recognition.
Title: Staff AI Research Scientist
Panagiotis Tzirakis holds the title of Staff AI Research Scientist. In this role he advances research in artificial intelligence, with a particular focus on emotion recognition, speech enhancement, and facial motion synthesis.
Education and Expertise: Ph.D. from Imperial College London
Panagiotis Tzirakis earned his Ph.D. with the Intelligent Behaviour Understanding Group (iBUG) at Imperial College London, where his doctoral research focused on multimodal emotion recognition. This training gave him deep expertise in building AI systems that interpret human emotions from multiple input modalities.
Research and Publications in Top Outlets
Panagiotis Tzirakis has an extensive publication record in prestigious journals and conferences. His work has appeared in Information Fusion, the International Journal of Computer Vision, and various IEEE conference proceedings. These publications reflect his contributions to AI and machine learning, particularly in emotion recognition, speech enhancement, and facial motion synthesis.
Research Topics: 3D Facial Motion Synthesis and Speech Enhancement
His research spans a variety of topics, including 3D facial motion synthesis, multi-channel speech enhancement, and emotion recognition. He has also explored the detection of gibbon calls. This range highlights his expertise in applying AI to understanding both human and animal behavior.
Focus on Multimodal Emotion Recognition
During his Ph.D. studies, Panagiotis Tzirakis focused extensively on multimodal emotion recognition, developing systems that accurately interpret human emotions from multiple input sources such as audio and video. This work is foundational for enhancing human-computer interaction and building empathetic AI systems.