Jirka Lhotka
About Jirka Lhotka
Jirka Lhotka is an AI Engineer specializing in Natural Language Processing and Large Language Models, currently employed at Nexthink in Switzerland. He studied computer science at the University of Cambridge and EPFL, and has previously worked at DeepMind, The Boston Consulting Group, Creative Dock, and Deepnote.
Current Role at Nexthink
Jirka Lhotka currently serves as an AI Engineer specializing in Natural Language Processing (NLP) and Large Language Models (LLMs) at Nexthink, a position he has held since 2023. His work focuses on improving the effectiveness of large language models, including through retrieval-augmented generation and fine-tuning.
Previous Experience in Data Science and AI
Before joining Nexthink, Jirka Lhotka held several roles in data science and AI. He was a Visiting Associate at The Boston Consulting Group for three months in 2017 and a Data Scientist at Creative Dock in Prague for five months in 2018. From 2018 to 2021 he was a Founding Engineer at Deepnote, and in 2022 he interned for three months as a Research Engineer at DeepMind in London.
Educational Background in Computer Science
Jirka Lhotka studied Computer Science at the University of Cambridge, completing his Bachelor's degree from 2015 to 2018. He then earned a Master of Science (MS) in Computer Science at EPFL (École polytechnique fédérale de Lausanne) from 2021 to 2023.
Participation in Y Combinator Program
In 2019, Jirka Lhotka took part in the Y Combinator program with Deepnote, where he was a Founding Engineer, gaining first-hand experience of building an early-stage startup in data science and AI tooling.
Projects in Natural Language Processing
Jirka Lhotka's projects in Natural Language Processing include developing techniques for Retrieval-Augmented Generation (RAG) and implementing fine-tuning methods for large language models.
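To make the RAG idea concrete, the sketch below shows the basic retrieve-then-prompt pattern in miniature. It is illustrative only and does not represent Lhotka's or Nexthink's actual implementation: the corpus, query, and helper names (retrieve, build_prompt) are invented for the example, and a production system would typically use dense embeddings, a vector store, and a real LLM call rather than TF-IDF and a printed prompt.

```python
# Illustrative only: a toy retrieval-augmented generation (RAG) loop.
# Assumptions: a tiny in-memory corpus, TF-IDF retrieval, and a prompt
# template standing in for the call to a large language model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Retrieval-Augmented Generation grounds LLM answers in retrieved documents.",
    "Fine-tuning adapts a pretrained language model to a narrower task.",
    "TF-IDF is a simple lexical method for scoring document relevance.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by TF-IDF cosine similarity and return the top k."""
    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(docs)
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    ranked = sorted(zip(scores, docs), key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in ranked[:k]]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble the augmented prompt that would be sent to an LLM."""
    context_block = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {query}"
    )

if __name__ == "__main__":
    question = "What does RAG do for large language models?"
    prompt = build_prompt(question, retrieve(question, documents))
    print(prompt)  # In a real pipeline, this prompt would be passed to an LLM.
```

The design point the sketch captures is that retrieval and generation are decoupled: the retriever narrows the corpus to a few relevant passages, and the generation step only ever sees the query plus that retrieved context.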