Adrian Wesek
About Adrian Wesek
Background Summary: Machine/deep learning engineer with a primary focus on state-of-the-art practices and applications in natural language processing and understanding. Over two years of research and implementation experience with cutting-edge neural network applications, including: Language Modelling, Conditional Text Generation, Text-To-Speech, Voice Cloning, Question Answering, Abstractive Text Summarization, and Text Classification. Strong background in several neural network architectures, namely Transformer encoder-decoders, attention mechanisms, LSTMs, GANs, and CNNs. Confident in Python and Java programming, with a strong grasp of the object-oriented paradigm gained through 4+ years of practical experience and 2+ years of teaching experience at university. Confident in designing neural networks with packages such as PyTorch, TensorFlow, Transformers, and scikit-learn, and in handling and web-scraping unstructured data. Familiar with version control practices and with setting up model training environments on Linux-based GPU servers.