Responsibilities
Develop and fine-tune transformer-based models specifically for the Azerbaijani language.
Work with Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) systems to enhance AI-driven capabilities.
Contribute to the development of AI applications, improving the accuracy and efficiency of language models and related technologies.
Develop and optimize RAG systems to improve information retrieval and response generation.
Design and optimize algorithms for entity recognition, text summarization, and semantic search.
Requirements
2+ years of professional experience in data science.
Experience developing and implementing NLP models for the Azerbaijani language, including text classification, named entity recognition, sentiment analysis, text generation, and machine translation.
Proficiency in applying machine learning and deep learning techniques using frameworks such as TensorFlow, PyTorch, and Hugging Face.
Experience in fine-tuning and optimizing transformer-based models such as BERT, GPT, T5, LLaMA, and Claude.
Strong understanding of prompt engineering and in-context learning.
Experience in building and evaluating language models for domain-specific applications.
Hands-on experience in RAG pipeline development, indexing strategies, and integrating vector databases for NLP tasks.
Familiarity with FAISS, Pinecone, Weaviate, and other vector search solutions.
Experience with RNNs, LSTMs, Transformers, and Foundation Models.
Knowledge of LoRA, quantization, and distillation techniques for efficient LLM deployment.
Experience working with large-scale datasets, ensuring efficient training and inference.