Data Scientist / Machine Learning Engineer vacancies
- Python
- TensorFlow
- PyTorch
- scikit-learn
- spaCy
- NLTK
- SQL
- .NET
- AWS
Ciklum is looking for a Senior Data Scientist to join our team full-time in Ukraine.
As a Senior Data Scientist, you will become part of a cross-functional development team engineering the experiences of tomorrow. Specializing in Generative AI, you will be instrumental in designing, developing, and optimizing the data-driven components of our proactive speech-to-speech platform. You'll dive deep into the nuances of conversational AI, leveraging generative models to create natural, engaging, and effective partner interactions. Your expertise in machine learning, natural language processing, and analytics will be critical to the success of this innovative solution.
We're at the forefront of transforming partner communication through the power of Generative AI. Our mission is to build an intelligent, proactive speech-to-speech solution that fosters stronger relationships and drives impactful interactions. We're seeking a highly insightful and driven Senior Data Scientist to spearhead the data strategy and model development that will fuel this exciting product.
Responsibilities
- Data Analysis: Analyze large datasets to identify patterns, extract insights, and inform model development and optimization
- Prompt Engineering & Optimization: Develop and refine effective prompts to guide generative AI models in producing desired conversational outcomes
- Dialogue Flow Design & Evaluation: Contribute to the design of natural and effective dialogue flows, and develop metrics and methodologies for evaluating their performance
- Personalization & Contextualization: Develop models and techniques to personalize conversations and ensure contextually relevant interactions with partners
- Model Evaluation & Monitoring: Design and implement robust evaluation frameworks to assess the performance of generative AI models, and establish monitoring systems to track their effectiveness in production
- Experimentation & Iteration: Design and execute experiments to test hypotheses, evaluate model performance, and drive continuous improvement of the system
- Ethical Considerations: Ensure responsible AI development practices, considering fairness, bias detection, and ethical implications in our models
Requirements
- Master's or Ph.D. in Data Science, Machine Learning, Natural Language Processing, Statistics, or a related quantitative field
- 5+ years of experience as a Data Scientist, with a strong focus on machine learning and natural language processing
- Proven experience developing and deploying generative AI models (e.g., Transformers, GANs, VAEs)
- Experience working with large datasets
- Strong programming skills in Python and experience with relevant libraries (TensorFlow, PyTorch, scikit-learn, spaCy, NLTK)
- Familiarity with prompt engineering techniques
- Solid understanding of statistical modeling and experimental design
- Excellent communication and collaboration skills, with the ability to clearly explain data-driven insights and results to stakeholders with varying levels of data literacy
- A rigorous approach to evaluating the impact of new algorithms through offline evaluation and online experimentation
- A knack for bringing clarity to ambiguous problems and resourcefulness in dealing with incomplete and messy data
- A strong passion for leveraging AI to solve real-world problems
Desirable
- Data Strategy & Architecture: Defining and implementing data strategy, including data collection, storage, processing, and governance
- Collaboration with Engineering: Work closely with software engineers to integrate data science models and pipelines into the .NET framework and AWS infrastructure
- Performance Optimization: Identify and address performance bottlenecks in data pipelines and machine learning models
- Excellent SQL skills and experience navigating large and complex data lakes and warehouses
- Experience with cloud platforms
About the company Ciklum
Employee benefits
- Team buildings
- English-speaking environment
- Accounting support
- Home office compensation
- Laptop provided
- Educational programs, courses
- TensorFlow
- PyTorch
- Hugging Face Transformers
- AI
- LLM
- Vue.js
- React
- Agile
- Scrum
- AWS
- GCP
- Azure
- Figma
- Adobe XD
We’re looking for a Senior AI Engineer to join our client, a prominent Japanese company that produces software solutions for enterprises facing digital transformation, from content management systems to process and task mining to full automation with factual analysis and expert support.
As a Senior AI Engineer, you will play a key role in building an AI-powered platform to convert Figma design files into high-quality front-end code (HTML, CSS, JavaScript, Vue.js/React).
This remote position is ideal for candidates located in Ukraine or Asian countries who can start work at 08:00 AM EEST to ensure time zone overlap, and it is perfect for professionals with a deep technical background in machine learning, deep learning, and natural language processing applied to software development automation.
About the project
The client is a digital transformation solutions provider specializing in CMS, CXM, RPA, process mining, task mining, and 3D-VR solutions. With a strong presence in Japan and a parent company in the U.S., the client delivers innovative automation and analytics platforms to enhance enterprise efficiency.
We are building an AI-powered platform that converts Figma design files into dynamic, high-quality front-end code. This initiative focuses on AI-assisted software development automation, enabling faster and more accurate front-end generation. Our work integrates machine learning, deep learning, and NLP with front-end engineering to bridge the gap between design and code.
Our approach prioritizes pragmatic trade-offs, balancing speed, quality, and cost to ensure rapid development cycles while maintaining performance and scalability.
Tech Stack: Python, TensorFlow, PyTorch, Hugging Face Transformers, Vue.js, React, HTML, CSS, JavaScript, Deep learning models for UI/UX interpretation
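For illustration only (not part of the vacancy): a minimal Python sketch of how a design-to-code prompt might be assembled. The node structure is a simplified, hypothetical stand-in for real Figma JSON, and `call_llm` is a placeholder that returns a canned string so the example runs offline instead of calling a real model.

```python
import json

def build_prompt(figma_node: dict) -> str:
    """Turn a simplified Figma-like node into an LLM prompt asking for a Vue 3 component."""
    return (
        "Convert the following UI description into a Vue 3 single-file component.\n"
        "Use semantic HTML and scoped CSS.\n\n"
        f"Design JSON:\n{json.dumps(figma_node, indent=2)}"
    )

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call (hosted LLM or local pipeline)."""
    return '<template>\n  <button class="cta">Sign up</button>\n</template>'

# Hypothetical, heavily simplified design node
node = {"type": "FRAME", "name": "SignupButton",
        "children": [{"type": "TEXT", "characters": "Sign up"}]}
print(call_llm(build_prompt(node)))
```

In practice the generated code would still be validated (linted, rendered, compared against the design) before being accepted.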
Must-have for the position
- 5+ years with Python and frameworks such as TensorFlow, PyTorch, or Hugging Face Transformers;
- Strong background in Machine Learning, Deep Learning, and NLP;
- Experience in building and optimizing AI models for code generation or related applications;
- Familiarity with computer vision techniques for UI/UX design interpretation;
- Knowledge of Large Language Models (LLMs) and prompt engineering;
- Hands-on experience with frontend development (Vue.js, React, or similar frameworks);
- Familiarity with Agile/SCRUM environments;
- Strong problem-solving skills and ability to work in a fast-paced environment;
- English Level: Upper-Intermediate English or higher.
Will be a strong plus
- Prior experience in developing AI-powered low-code or no-code platforms;
- Knowledge of reinforcement learning and optimization techniques for AI models;
- Understanding of software engineering best practices and DevOps for AI model deployment;
- Experience with cloud platforms such as AWS, GCP, or Azure;
- Experience with Figma, Adobe XD, and other UI design tools;
- Bachelor’s Degree or above, preferably in IT or related subjects.
Responsibilities
- Design, develop, and optimize AI models for converting UI designs into front-end code;
- Implement deep learning and NLP techniques to improve the accuracy and efficiency of code generation;
- Work with computer vision models to interpret design files;
- Train, fine-tune, and deploy AI models for code generation;
- Collaborate with frontend and backend engineers to integrate AI functionalities into the platform;
- Research and implement state-of-the-art techniques in AI-assisted coding;
- Optimize AI models for performance and scalability.
About the company KitRUM
Employee benefits
- English Courses
- Team buildings
- Work-life balance
- Flexible working hours
- Long-term projects
- Paid sick leave
- Educational programs, courses
- NLP
- Python
- SQL
- NoSQL
- Docker
- Kubernetes
- GraphQL
- Elasticsearch
- MLOps
- AWS
- GCP
- Azure
Today you have the opportunity to join our team of professionals who are confidently moving toward a common goal and are united by shared interests and life values, because we are looking for a Machine Learning Engineer.
Key responsibilities:
- Building recommendation systems for job seekers and employers and improving their relevance;
- Improving the quality of job search;
- Working with text (NLP);
- Other ML and engineering tasks that help develop our product.
Key requirements:
- Higher education in computer science or a related technical field;
- 2 years of experience working with machine learning models in production;
- Experience with NLP tasks and recommendation systems;
- Ability to write good Python code that others can understand;
- Experience with relational SQL and NoSQL databases, ability to write SQL queries;
- Experience with Docker, Kubernetes, GraphQL (desirable), Elasticsearch;
- MLOps experience (desirable);
- Experience with cloud services (AWS, GCP, Azure).
About the company robota ua
Employee benefits
- Team buildings
- Sports expenses compensation
- Training compensation
- Paid sick leave
- Scrum
- Python
- SQL
- Generative AI
- AWS
- Kubernetes
- Docker
- MLOps
We seek a passionate Senior Machine Learning Engineer with Python programming skills to develop robust solutions to real-world, large-scale problems that have a direct impact on the business. As a Senior Machine Learning Engineer, you will be expected to own projects end-to-end - from conception to operationalization, demonstrating an understanding of the full ML model lifecycle. As a result, you will be expected to provide technical solutions and collaborate with your teammates; therefore, strong communication skills are critical in this role. With teammates in Portland, Boston, China, and Poland, you’ll join a global organization working to solve machine learning problems at scale.
Your tasks
- Work with the Artificial Intelligence and Machine Learning (AI/ML) team
- Design and implement scalable applications that leverage predictive models and optimization programs to deliver data-driven decisions that result in incredible business impact
- Write robust, maintainable, and extendable code in Python
- Contribute to core advanced analytics and machine learning platforms and tools to enable both prediction and optimization model development
Requirements
- At least 5 years of relevant professional experience
- Experience in ML Engineering and working on a product model using Scrum
- Ability to write robust, maintainable, and extendable code in Python
- Proficiency in SQL and knowledge of the GenAI area
- Knowledge of applied data science methods and machine learning algorithms
- Data wrangling experience, with structured and unstructured data
- Fluent English
Nice to have
- Experience with cloud architecture and technologies (preferably AWS, Kubernetes, and Docker) and MLOps concepts would be an asset
About the company Sii Ukraine
Employee benefits
- English Courses
- Flexible working hours
- Long-term projects
- Regular salary reviews
- Python
- R
- SQL
- AWS Cloud Services
- AWS Redshift
- Scala
- Cython
If you are a talented ML Engineer who wants to deliver large-scale projects and create world-class games, join the VOKI team!
We are simply made for each other if you:
- Have 3+ years of experience in applied data analysis using machine learning methods
- Program in Python/R
- Have a strong mathematical background (probability theory, applied statistics, and machine learning)
- Know SQL and databases
- Write clean, understandable code that is easy to maintain
- Can build reliable services that need minimal maintenance
- Quickly find practical solutions, focusing on business objectives and results
- Like to take responsibility for results and to act under uncertainty
Will be an advantage:
- Experience with AWS cloud resources: processing data in a Data Lake via Databricks (Hive/Delta) and using the Redshift warehouse
- Knowledge of Scala and/or Cython
What you will do:
- Develop and train ML models, and design the architecture for several areas within the projects
- Build and maintain data collection pipelines, and advise teams during automation
- Support data analysts, product teams, and game development teams in adopting ML-based tools
About the company VOKI Games
Employee benefits
- Free lunch
- Flexible working hours
- Psychotherapist support
- Health insurance
- Paid sick leave
- Regular salary reviews
- LLM
- Generative AI
- Python
- TensorFlow
- PyTorch
We are seeking a Senior AI Engineer to join the AI team on the client’s side. In this role, you will apply your expertise in LLMs (Large Language Models) and Generative AI to help enhance and optimize the learning platform. You will be responsible for developing and implementing cutting-edge AI solutions that enhance the Academy’s capabilities, providing a seamless learning experience for users.
Project Overview:
Our client is a large public cyber charter school in the USA, working on an exciting initiative to enhance learning and training opportunities for their employees. The school has partnered with top specialists to create an innovative learning platform, the Academy, which offers employees convenient access to a wide range of training materials anytime, anywhere. The Academy is designed to provide a flexible, self-paced digital learning environment where students receive support from certified teachers. This comprehensive system aims to facilitate role-specific courses and promote continuous development for all employees.
Key Responsibilities:
- Develop and implement AI-driven solutions to enhance the learning platform, leveraging LLMs and Generative AI techniques;
- Collaborate with cross-functional teams to understand and address AI-related challenges;
- Optimize AI models for performance, scalability, and user experience;
- Continuously research and apply the latest advancements in AI/ML to improve platform features;
- Work closely with the AI team on the client’s side to deliver impactful AI solutions.
Requirements:
- 5+ years of commercial experience;
- Proven experience in AI/ML development, with a focus on LLMs and Generative AI;
- Strong proficiency in Python and hands-on experience with frameworks such as TensorFlow, PyTorch, or similar tools;
- Experience in building and deploying machine learning models in production environments;
- Strong understanding of AI/ML algorithms and techniques, especially in natural language processing;
- Excellent problem-solving skills and ability to work collaboratively in a fast-paced, dynamic environment;
- Experience in the EdTech domain or previous work on education platforms or related projects will be a plus.
About the company Abto Software
Employee benefits
- English Courses
- Fitness Zone
- Gaming room
- Bicycle parking
- Flexible working hours
- Training compensation
- Health insurance
- Paid sick leave
- Educational programs, courses
- MLOps
- AWS
- Kubeflow
- MLflow
- DataRobot
- Apache Airflow
- Python
- Linux
- scikit-learn
- Keras
- PyTorch
- TensorFlow
- OpenAI
- Amazon Bedrock
- LangChain
- LlamaIndex
- Docker
- Kubernetes
- OpenSearch
- Qdrant
- Weaviate
- LanceDB
- Snowflake
- BigQuery
- DataBricks
- GCP
- Azure
Automat-it is an all-in AWS Premier partner that empowers startups with DevOps and FinOps expertise and hands-on services. We guide and support hundreds of startups leveraging AWS smarter throughout their growth journey. We save our customers significant time to market and optimize their cloud performance and cost-effectiveness.
Automat-it operates in more than seven countries across EMEA. We provide a dynamic environment where you can expand your professional expertise and make a significant impact.
We seek an experienced AI/ML Team Leader to join our innovative team. You will lead a group of highly skilled MLOps Engineers, develop robust MLOps pipelines, create cutting-edge Generative AI solutions, engage effectively with customers to understand their requirements, and ensure project success by overseeing projects from inception to completion.
Responsibilities
- Manage a team of highly professional MLOps Engineers, focusing on their growth and execution excellence
- Lead and take responsibility for on-time, high-quality project deliveries
- Develop MLOps pipelines leveraging the Amazon SageMaker platform and its features
- Develop Generative AI solutions and POCs, utilizing the latest architectures and technologies
- Deliver end-to-end ML products: model performance development, training, validation, testing, and version control
- Provision ML AWS resources and infrastructure
- Develop ML-oriented CI/CD pipelines using GitHub Actions, BitBucket, or similar tools
- Deploy Machine Learning models in production
- Help customers tackle challenges at scale using distributed training frameworks
- Use and write Terraform libraries for infrastructure deployment
- Develop CI/CD pipelines for projects of various scales and tech stacks
- Maintain infrastructures and environments of all types, from dev to production
- Monitor and administer security
Requirements
- Proven leadership experience with a track record of managing and developing technical teams.
- Excellent customer-facing skills to understand and address client needs effectively.
- Ability to design and implement cloud solutions and to build MLOps pipelines on AWS
- Experience with one or more MLOps frameworks like Kubeflow, MLFlow, DataRobot, Airflow, etc.
- Fluency in Python, a good understanding of Linux, and knowledge of frameworks such as scikit-learn, Keras, PyTorch, Tensorflow, etc.
- Ability to understand tools used by data scientists and experience with software development and test automation
- Experience with one or more LLM frameworks, such as the OpenAI SDK, Amazon Bedrock, LangChain, or LlamaIndex
- Experience with Docker and Kubernetes
- Fluent written and verbal communication skills in English
- Working knowledge of some Vector Databases such as OpenSearch, Qdrant, Weaviate, LanceDB, etc. – an advantage
- Working knowledge of Snowflake, BigQuery, and/or Databricks – an advantage
- GCP or Azure knowledge (DevOps/MLOps) – an advantage
- ML certification (AWS ML Specialty or similar) – an advantage
About the company Automat-IT
Employee benefits
- English Courses
- Team buildings
- Sports expenses compensation
- Training compensation
- Health insurance
- Laptop provided
- Coworking space compensation
- Paid sick leave
- Educational programs, courses
- RAG
- OpenSearch
- Elasticsearch
- Python
- LlamaIndex
- LangChain
- Pinecone
- Qdrant
- FAISS
- LLM
- AWS
- GCP
- Azure
- Docker
- Kubernetes
Our mission at Geniusee is to help businesses thrive through tech partnership and strengthen the engineering community by sharing knowledge and creating opportunities. Our values are Continuous Growth, Team Synergy, Taking Responsibility, Conscious Openness and Result Driven. We offer a safe, inclusive and productive environment for all team members, and we’re always open to feedback. If you want to work from home or work in the city center of Kyiv, great – apply right now.
Generative AI is transforming the way we interact with information, but it also poses an existential challenge to journalism and content creators. By scraping proprietary material without compensation, generative AI undermines key revenue streams, such as ad revenue, subscriptions, and licensing agreements, that sustain high-quality journalism and original content creation. Without proper attribution and compensation, publishers risk losing their audiences and the economic foundation needed to sustain high-quality, human-grounded journalism and content creation. The company's mission is to ensure that generative AI platforms fairly credit and compensate content owners for their contributions. Our groundbreaking technology enables generative AI platforms to attribute sources and share revenue on a per-use basis, protecting creators, fostering sustainable journalism, and promoting the integrity of AI-generated content.
About the Role
We’re looking for a Senior Software Engineer to join our Inference Team, where you’ll lead the design and development of our Retrieval-Augmented Generation (RAG) infrastructure. In this role, you will work closely with ML engineers, research scientists, and product teams to power both web search and API-based experiences for millions of users with fast, accurate, and context-aware responses. You will architect scalable systems that combine LLMs and vector retrieval, optimizing for relevance, recall, latency, and cost. This is a high-impact role focused on AI/ML inference and retrieval performance, with significant ownership of both technical decision-making and long-term architecture.
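For illustration only: a minimal sketch of the retrieval step in a RAG pipeline, assuming FAISS for the vector index. The toy `embed` function and the document snippets are stand-ins for a real embedding model and corpus, not the company's actual production stack.

```python
import numpy as np
import faiss  # pip install faiss-cpu

def embed(texts, dim=64):
    """Toy embedding: seeded random unit vectors per text, standing in for a real
    embedding model. Only the retrieval mechanics matter in this sketch."""
    vectors = []
    for t in texts:
        rng = np.random.default_rng(abs(hash(t)) % (2**32))
        v = rng.standard_normal(dim).astype("float32")
        vectors.append(v / np.linalg.norm(v))
    return np.stack(vectors)

docs = ["Publisher A article on climate policy",
        "Publisher B analysis of energy markets",
        "Publisher C coverage of local elections"]
index = faiss.IndexFlatIP(64)           # inner product ~ cosine on unit vectors
index.add(embed(docs))

query = "energy market analysis"
scores, ids = index.search(embed([query]), 2)          # top-2 nearest documents
context = "\n".join(docs[i] for i in ids[0])
prompt = f"Answer using only these sources, and cite them:\n{context}\n\nQ: {query}"
print(prompt)
```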
Requirements
- 8+ years of experience in software engineering, with a focus on AI/ML systems or distributed systems,
- Hands-on experience building and deploying retrieval-augmented generation (RAG) systems,
- Deep knowledge of OpenSearch, Elasticsearch, or similar search engines,
- Strong coding skills in Python,
- Experience with frameworks like LlamaIndex or LangChain,
- Familiarity with vector databases such as Pinecone, Qdrant, or FAISS,
- Exposure to LLM fine-tuning, semantic search, embeddings, and prompt engineering,
- Previous work on systems handling millions of users or queries per day,
- Familiarity with cloud infrastructure (AWS, GCP, or Azure) and containerization tools (Docker, Kubernetes),
- Experience with vector search, embedding pipelines, and dense retrieval techniques,
- Proven ability to optimize inference stacks for latency, reliability, and scalability,
- Excellent problem-solving, analytical, and debugging skills,
- Strong sense of ownership, ability to work independently, and a self-starter mindset in fast-paced environments,
- Passion for building impactful technology aligned with our mission,
- Bachelor’s degree in Computer Science or related field, or equivalent practical experience.
What you will do
- Design, build and scale a production-grade inference stack for RAG-based applications,
- Develop efficient retrieval pipelines using OpenSearch or similar vector databases, with a focus on high recall and response relevance,
- Optimize performance and latency for both real-time and batch queries,
- Identify and address bottlenecks in the inference stack to improve response times and system efficiency,
- Ensure high reliability, observability, and monitoring of deployed systems,
- Collaborate with cross-functional teams to integrate LLMs and retrieval components into user-facing applications,
- Evaluate and integrate modern RAG frameworks and tools to accelerate development,
- Guide architectural decisions, mentor team members, and uphold engineering excellence.
About the company Geniusee
Employee benefits
- English Courses
- Team buildings
- No bureaucracy
- Flexible working hours
- Sports expenses compensation
- Training compensation
- Health insurance
- Laptop provided
- Coworking space compensation
- Paid sick leave
- Regular salary reviews
- R
- Python
Binariks is looking for a highly motivated Middle Data Scientist with a biological background to analyze experimental data from cell-based pharmaceutical research. The candidate will work with medium-sized datasets from high-content microscopy, performing statistical analysis including clustering, correlation analysis, and variance control.
The client is a UK-based biotech company specializing in drug discovery for neurodegenerative diseases, particularly Alzheimer's disease. Utilizing innovative human stem cell models, they focus on rapid target identification and validation to develop disease-modifying treatments. The company collaborates with various pharmaceutical firms. It offers access to its proprietary induced pluripotent stem cell (iPSC) platform for research and development partnerships.
What We’re Looking For:
- 3+ years of experience as a Data Scientist
- Statistical analysis experience (correlation, clustering, ANOVA)
- Proficiency in R programming
- Ability to analyze multi-parametric data (5-10 parameters)
- Experience creating data visualizations for scientific reports
- Excellent problem-solving skills and attention to detail
- Upper-Intermediate English for daily communication
Your Responsibilities:
- Using R (with ggplot2) and GraphPad Prism to process CSV data
- Creating visualizations and generating reports for project leaders
- Effective communication of the data analysis process, observations, and findings (an illustrative sketch of this kind of analysis follows below)
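The project itself uses R and GraphPad Prism; purely as an illustration of the statistics mentioned above (correlation, one-way ANOVA across treatment groups), here is a small Python sketch on made-up imaging readouts. The column names, group means, and group sizes are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical high-content imaging readouts for three treatment groups
df = pd.DataFrame({
    "treatment": np.repeat(["control", "dose_low", "dose_high"], 30),
    "neurite_length": np.concatenate([rng.normal(m, 5, 30) for m in (50, 55, 62)]),
    "cell_count": rng.normal(200, 20, 90),
})

print(df[["neurite_length", "cell_count"]].corr())       # correlation matrix
groups = [g["neurite_length"].values for _, g in df.groupby("treatment")]
f_stat, p_value = stats.f_oneway(*groups)                 # one-way ANOVA across groups
print(f"ANOVA: F={f_stat:.2f}, p={p_value:.4f}")
```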
Will be a plus:
- Experience with biological data
- Experience with Python
About the company Binariks
Employee benefits
- English Courses
- Gaming room
- Accounting support
- Training compensation
- Health insurance
- Paid sick leave
- Educational programs, courses
- NLP
- LLM
- Python
- PyTorch
- spaCy
- NLTK
- SciPy
- RAG
- LangChain
- LangGraph
- VectorDB
Imagine working on an AI-powered macOS assistant that feels truly intuitive, smart, and seamlessly integrated into users' daily workflows. At MacPaw, we’re pushing LLM-powered agents to the next level, and we need a Senior Data Scientist to help us get there.
In this role, you won’t just build models – you’ll shape the next generation of on-device AI, tuning LLMs to achieve SOTA performance, building RAG pipelines, and creating intelligent agents for real-world applications. You’ll work with top Engineers and Scientists to bring cutting-edge NLP and ML capabilities directly to macOS users.
If you’re passionate about LLMs, AI agents, and making AI more accessible and powerful, this is your chance to be at the forefront of a game-changing innovation.
In this role, you will:
- Fine-tune LLMs
- Develop and fine-tune LLM-based agents for local task execution
- Design and build advanced RAG (Retrieval-Augmented Generation) pipelines for improved information retrieval
- Lead general deep-learning training and experimental initiatives. Cooperate with and mentor students from top Ukrainian universities
- Apply classical NLP techniques (e.g., NER, POS tagging); a minimal illustrative sketch follows this list
- Set up a testing framework to evaluate agent performance, measure key metrics, and identify optimization opportunities
- Supervise macOS Engineers to integrate models on device
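For illustration only: a minimal spaCy sketch of the classical NLP techniques mentioned in the list above (NER and POS tagging). The sample sentence and the choice of the small English pipeline are assumptions, not MacPaw's actual setup.

```python
import spacy

# Assumes the small English pipeline is installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")
doc = nlp("Remind me to email Sarah about the Berlin offsite next Tuesday at 10am.")

for ent in doc.ents:        # named entities an assistant could act on
    print(ent.text, ent.label_)
for token in doc:           # part-of-speech tags
    print(token.text, token.pos_)
```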
Skills you’ll need to bring:
- Advanced experience in NLP and LLMs
- Proven ability to train custom NLP models and fine-tune pre-trained LLMs
- Strong background in Transformer architectures and their applications
- Proficiency in classical ML tasks (classification, regression, clustering)
- Experience with Python and NLP/ML frameworks (PyTorch, spaCy, NLTK, SciPy)
- Hands-on experience building LLM agents (RAG, ReAct, Plan&Execute, multi-agent systems)
- Strong expertise in creating competitive advantage through LLM fine-tuning and RAG optimization
- Familiarity with LLM agent frameworks (LangChain, LangGraph, Vector Databases)
- Experience implementing metrics frameworks that align AI performance with business outcomes
- At least an Intermediate level of English
As a plus:
- Knowledge of running and optimizing open-source LLMs
- Background in conversational AI and dialog systems
- Familiarity with iOS/macOS ML frameworks for local deployment
- Experience with edge deployment of ML models
- Expertise with LangFuse or similar observability tools
About the company MacPaw
- Python
- SQL
- PyTorch
- TensorFlow
- Keras
- AWS SageMaker
- Vertex AI
- MLOps
- NLP
We invite you to join our Data Center of Excellence, part of Sigma Software’s complex organizational structure that combines collaboration with diverse clients, challenging projects, and continuous opportunities to enhance your expertise in a collaborative and innovative environment.
Customer
We operate across various business domains and work with some of the world’s top clients (those without NDAs can be found here: sigma.software/case-studies).
Our team highly supports employees’ freedom, independence in decision making, and the desire to deeply understand the client’s requests and identify the root of the problem. We believe that people who strive for this mindset are bound to succeed as recognized professionals and drivers of Big Data development in Ukraine.
Project
Data Center of Excellence is a place where we collect the best engineering practices and unite them into one knowledge base to provide the best technical services to our clients.
We’re not just a team that comes together to write code. We are all willing to contribute to the field, either by participating in the life of the Data Center of Excellence or by constantly developing our own skills. In our unit, you can be the Data Engineer/Team Lead/Architect, or you can become a mentor, a person behind all the new technologies in the team, or an active listener, if you will. Whatever you decide, know that you’re not alone. Whether it’s a difficult task, an unordinary request from the client, or your next project choice, you’ll always have a mentor and teammates with the same mindset to come up with the best solution.
We hire people not for the specific project, but to join our team. It gives us a chance to get to know you better and ensures that we’ll provide the perfect match between the client’s needs and your professional interests.
If you’re ready to join the leading Data community, take this opportunity – let’s shape the future together!
Requirements
- Advanced knowledge of Python and SQL
- Hands-on experience in model implementation and tuning using frameworks such as PyTorch, TensorFlow, or Keras
- Strong understanding of end-to-end machine learning development and deployment processes
- Proven experience with ML cloud platforms (e.g., AWS SageMaker, Vertex AI) and their ecosystem tools (e.g., SageMaker Pipelines)
- Experience in computer vision development
- Practical knowledge of MLOps principles and best practices
- Ability to design, build, and optimize ML pipelines for the end-to-end machine learning development cycle
- Experience of working with deep learning models for signal/speech processing and NLP
- Experience in scaling model training using multi-GPU infrastructure
- Familiarity with DevOps practices for AI/ML systems
- Track record of building and deploying production-grade AI solutions
- Proficiency in statistical analysis and data mining with the ability to apply these techniques effectively
- At least an Upper-Intermediate level of English
Responsibilities
- Work in teams that handle a variety of tasks for clients and projects related to data and machine learning
- Collaborate closely with clients and their teams, maintaining clear and professional communication
- Take ownership of delivering key solution features
- Participate in requirements gathering and clarification, proposing optimal machine learning strategies, and leading ML implementation
- Represent the team as an ML technical expert in presale activities
- Design and develop core modules and functions, ensuring solutions are scalable and cost-effective
- Conduct code reviews and implement unit/integration tests to ensure quality
- Enhance distributed systems and infrastructure to improve scalability
- Support the client’s research team in implementing, training, testing, and fine-tuning deep learning models
- Work with researchers to identify best-fit models and evaluate open-source alternatives for language/signal processing, optimizing performance
About the company Sigma Software
Employee benefits
- Work-life balance
- Flexible working hours
- Health insurance
- Educational programs, courses
- Legal support
- Generative AI
- Python
- R
- TensorFlow
- PyTorch
- NLP
- BERT
- GPT
- Azure
- AWS
- Docker
- Kubernetes
- Git
We are seeking a highly skilled and experienced Senior Data Scientist with a minimum of 4 years of experience in Data Science and Machine Learning, preferably with experience in NLP, Generative AI, LLMs, MLOps, optimization techniques, and AI solution architecture. In this role, you will play a key part in the development and implementation of AI solutions, leveraging your technical expertise. The ideal candidate should have a deep understanding of AI technologies and experience in designing and implementing cutting-edge AI models and systems. Additionally, expertise in data engineering, DevOps, and MLOps practices will be valuable in this role.
To qualify for the role, you must have
- Minimum 4 years of experience in Data Science and Machine Learning.
- In-depth knowledge of machine learning, deep learning, and generative AI techniques.
- Knowledge and experience in Generative AI.
- Proficiency in programming languages such as Python and R, and in frameworks like TensorFlow or PyTorch.
- Strong understanding of NLP techniques and frameworks such as BERT, GPT, or Transformer models.
- Familiarity with computer vision techniques for image recognition, object detection, or image generation.
- Experience with cloud platforms such as Azure or AWS.
- Expertise in data engineering, including data curation, cleaning, and preprocessing.
- Knowledge of trusted AI practices, ensuring fairness, transparency, and accountability in AI models and systems.
- Strong collaboration with software engineering and operations teams to ensure seamless integration and deployment of AI models.
- Excellent problem-solving and analytical skills, with the ability to translate business requirements into technical solutions.
- Strong communication and interpersonal skills, with the ability to collaborate effectively with stakeholders at various levels.
- Track record of driving innovation and staying updated with the latest AI research and advancements.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Ideally, you’ll also have
- Experience applying trusted AI practices to ensure fairness, transparency, and accountability in AI models and systems
- Experience driving DevOps and MLOps practices, covering continuous integration, deployment, and monitoring of AI models
- Hands-on use of tools such as Docker, Kubernetes, and Git to build and manage AI pipelines
- Experience implementing monitoring and logging tools to ensure AI model performance and reliability
- A record of seamless collaboration with software engineering and operations teams for efficient AI model integration and deployment
- Familiarity with DevOps and MLOps practices, including continuous integration, deployment, and monitoring of AI models.
About the company Ernst & Young
Employee benefits
- Team buildings
- Large stable company
- Flexible working hours
- Health insurance
- Educational programs, courses
- Regular salary reviews
- Python
- R
- PySpark
- SQL
- MLOps
- API
- Azure
- Agile
- Scrum
- Kanban
We are looking for top-notch, technology-savvy specialists willing to move our projects onto a new track! You will use the most advanced technology stack and have an opportunity to implement new solutions while working with top leaders in their industries. As part of our global team, you will participate in various international projects.
Your key responsibilities
The responsibility of this role is to customize and implement solutions centered around Artificial Intelligence technologies, specifically natural language processing and information extraction. You will support AI solution development tasks and provide feedback to the AI lab researchers.
Skills and attributes for success
- Understanding business needs and fulfilling them with the right solution.
- Participation in cross-functional initiatives and collaboration across various domains.
- Interacting with data engineers, product/project managers from all over the world.
To qualify for the role, you must have
- 2+ years of experience in the field
- Programming experience in one or more of Python, R, PySpark
- Machine learning domain knowledge in any of the following: classification, regression, clustering, time series, Bayes, vision, natural language processing, large language models, genAI
- AI / ML models evaluation
- Excellent communication skills
- Good command of English
Ideally, you’ll also have
- Basic SQL knowledge, visualization skills, or data modelling
- Experience in MLOps or APIs
- Experience with one of the Cloud platforms, preferably Azure
- Experience in using Agile methodologies (Scrum, Kanban, etc.)
About the company Ernst & Young
Employee benefits
- Team buildings
- Large stable company
- Flexible working hours
- Health insurance
- Educational programs, courses
- Regular salary reviews
- Python
- PyTorch
- TensorFlow
- PySpark
- Pandas
- Git
- ruff
- MyPy
- Docker
- CI/CD
- Big data
- ETL
- MLOps
Prom.ua is the largest marketplace in Ukraine, selling more than 200 million products from tens of thousands of entrepreneurs across the country.
About the team:
We optimize different parts of the product using data and machine learning algorithms. In parallel, we are building AI systems that give the company a strategic advantage and help it follow its vision of creating the commerce of the future.
The team's areas of work:
- Product recommendations and personalization
- Grouping listings into product models
- Automatic quality assessment and moderation of product content: categorization, machine translation, attribute autocompletion, content generation (an illustrative sketch of this kind of categorization follows the tech stack list below)
- Search and ML ranking
- Tag generation and linking for SEO
What working in the team looks like:
- we actively immerse ourselves in the product environment and work closely with other teams -> little research ends up shelved, many models make it to production
- we understand the goals we are given and focus on results -> the models do what is needed and do not do what is not needed
- minimal bureaucracy, you can pick the tasks you like most; we encourage initiative but also expect ownership of the results
- we focus on building infrastructure to make solutions more reliable, automate routine work, and make it easier to ship models from scratch
- most projects are versioned and documented, which makes it comfortable for several specialists to work on them
- we work a lot as a team, believe in feedback and support, and crack jokes
- we share experience: we run our own courses, present our projects, and hold brainstorming sessions
We build close ties with the development, testing, and analytics teams.
For day-to-day work there is a JupyterHub server where you can configure the working environment you need, and you can also work on a local machine. We have our own GPU servers for training and deploying models.
The projects from the technical side:
- Programming language: Python
- Data analysis and processing: Jupyter Notebook, Pandas, NumPy
- Machine Learning and Deep Learning: Scikit-learn, TensorFlow, PyTorch, FAISS, XGBoost
- LLM APIs: Gemini, ChatGPT
- Data visualization and monitoring: Matplotlib, Seaborn, Plotly, Bokeh, Tableau, Grafana
- Databases: Postgres
- Big Data and distributed computing: Apache Spark, Hadoop
- MLOps: MLflow, DVC, TensorFlow Serving, Python packaging, FastAPI, uv
- DAGs: Airflow
- Data queues: Kafka
- Search: Elasticsearch
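For illustration only: a minimal scikit-learn sketch of the product-title categorization mentioned in the team's areas of work. The titles and category labels are made up; a real pipeline would be trained on far more data and categories.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical product titles and category labels
titles = ["Bosch cordless drill 18V", "Samsung Galaxy smartphone case",
          "LEGO City fire truck set", "Makita angle grinder 125mm",
          "iPhone tempered glass screen protector", "Playmobil pirate ship toy"]
categories = ["tools", "phone accessories", "toys",
              "tools", "phone accessories", "toys"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(titles, categories)
print(clf.predict(["cordless screwdriver with battery"]))   # expected: tools
```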
What matters for this role:
- experience with text classification/segmentation/generation using both classical methods and deep learning
- experience with neural network frameworks (PyTorch/TensorFlow)
- fluency with the machine learning workflow: problem framing, data collection and exploration (PySpark, pandas), model training, evaluation of results, analysis of model behavior, preparation for deployment;
- experience deploying and maintaining models in production and improving existing models
- ability to write reliable, clean Python code; understanding and use of various data structures and OOP; command of version control (Git etc.); use of linters (ruff) and type checkers (mypy)
- experience with Docker, package managers (uv, pdm), and CI/CD
- readiness to dive deep into business problems and turn them into technical solutions (architecture, loss functions, metrics, etc.)
- responsibility and attention to detail
- ability to test your solutions and keep them running reliably
What will be a plus:
- experience training models on data that exceeds the available RAM, experience with high-load systems, Big Data, and distributed computing
- experience building ETL processes, in particular those related to regular model retraining
- experience applying MLOps practices: version control for code, data, and models; automated deployment; monitoring and logging; model testing; model retraining
- experience with embeddings and ANN (approximate nearest neighbour search)
Possible tasks:
- developing the product classification system;
- scaling the matching of listings to product models
- improving the product ranking model in search results
- exploring new areas where machine learning can solve business problems
About the company EVO
Employee benefits
- Fitness Zone
- Bicycle parking
- Psychotherapist support
- Coffee, fruit, snacks
- Health insurance
- Paid sick leave
- Car parking
- SQL
- Python
- NumPy
- SciPy
- Pandas
- scikit-learn
- XGBoost
- Generative AI
What you will do:
- EDA
- Processing, cleaning, and verifying the integrity of the data used for analysis
- Building automated anomaly detection systems and monitoring their operation (a minimal illustrative sketch follows this list)
- Building and deploying ML models for tabular and unstructured data (text, images, audio)
- Customer base segmentation
- Preparing documentation on model development and deployment
- Presenting results to customers and business unit representatives
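For illustration only: a minimal anomaly-detection sketch using scikit-learn's IsolationForest on synthetic transaction-like features; the feature choice, contamination rate, and data are assumptions, not the bank's actual system.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Hypothetical transaction features: amount and hour of day
normal = np.column_stack([rng.lognormal(5, 0.5, 1000), rng.integers(8, 22, 1000)])
odd = np.array([[250000.0, 3], [180000.0, 4]])        # unusually large, at night
X = np.vstack([normal, odd])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = model.predict(X)                              # -1 = anomaly, 1 = normal
print("flagged rows:", np.where(labels == -1)[0])
```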
Required knowledge, experience, and personal qualities:
- Higher education
- 2+ years of experience as a Data Scientist
- Experience with SQL and Python
- Experience building and deploying ML models through the full cycle (from research to production)
- Experience with common DS toolkits (numpy, scipy, pandas, sklearn, xgboost, etc.)
- Excellent understanding of machine learning methods and algorithms
- Ability to work independently and with team members from diverse backgrounds
- Ability to create and maintain documentation for the models and processes you build
- Understanding of the concepts of statistics / probability theory, data analysis, and machine learning
- Knowledge of BI tools and the ability to build dashboards for monitoring model performance
- Knowledge of SQL and Python
- Ability to build meaningful visualizations of results
An additional advantage would be:
- Experience with recommendation systems, segmentation tasks, GenAI
About the company PrivatBank
Employee benefits
- English Courses
- Fitness Zone
- Gaming room
- Relaxation room
- Coffee, fruit, snacks
- Training compensation
- Health insurance
- ML
- Generative AI
- Scala
- Apache Spark
- Python
- A/B testing
- Hadoop
Svitla Systems Inc. is looking for a Senior Data Scientist for a full-time position (40 hours per week) in Ukraine. Our client is the largest airport television network, with 2,500+ screens in 90 commercial airports and 58 private/FBO airports across North America. They leverage various data and technology to enable targeted messaging, offer shoppability, and allow viewers to continue watching TV content on their mobile devices. The client works with brands like Belvedere, Netflix, Sony, TikTok, Hilton, and Bose to innovate and create the best opportunities to connect with the world’s tastemakers.
This role is ideal for someone passionate about pushing the boundaries of location-data analysis and understanding its connection to real-world behavior. The team is building regression models, classification algorithms, data visualizations, and geospatial clustering within the location-data platform, which processes terabytes of data daily to power location-based products. The system integrates billions of data points into a comprehensive dataset covering hundreds of millions of places across the US and Canada, from restaurants and stores to parks, hotels, and colleges, representing every type of interest across North America. The approaches include exploratory data analysis, hypothesis testing, creation of heuristics, and machine learning techniques. They use a mixture of commercial, open-source, and home-grown solutions. Your findings will be built into the core products.
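For illustration only: a minimal sketch of the kind of geospatial clustering described above, using DBSCAN with a haversine metric on synthetic visit pings. The coordinates, radius, and minimum cluster size are assumptions, not the platform's real parameters.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical visit pings (lat, lon in degrees) around two venues plus scattered noise
rng = np.random.default_rng(1)
venue_a = rng.normal([45.5231, -122.6765], 0.0005, size=(40, 2))   # near Portland
venue_b = rng.normal([42.3601, -71.0589], 0.0005, size=(40, 2))    # near Boston
noise = np.column_stack([rng.uniform(25, 49, 10), rng.uniform(-124, -67, 10)])
points = np.vstack([venue_a, venue_b, noise])

kms_per_radian = 6371.0
db = DBSCAN(eps=0.2 / kms_per_radian, min_samples=5,               # ~200 m radius
            metric="haversine", algorithm="ball_tree")
labels = db.fit_predict(np.radians(points))                         # -1 marks noise
print("clusters found:", set(labels) - {-1}, "| noise points:", (labels == -1).sum())
```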
Requirements
- PhD in a quantitative field or a Master’s degree with 5+ years of experience in data science.
- Experience with ML, GenAI, deep learning, probabilistic graphical models, active learning, anomaly detection, and image classification.
- Knowledge of Scala, Spark, Python, A/B testing, the Hadoop ecosystem, Bayesian statistics, graphical data structures, and functional programming.
- Extensive knowledge in math or statistics.
- Experience owning multiple large projects/data products.
- Expertise in efficiently completing all research/design from the initial question to proof of concept ready for production.
- Understanding and documenting all findings, processes, and use cases for technical and non-technical audiences.
Nice to have
- Experience with horizontally scaling statistical algorithms.
Responsibilities
- Generate and implement ML and Gen AI algorithms.
- Design, prototype, and test scalable algorithms that turn hundreds of TBs of raw location data into valuable products.
- Work with extensive data sets, designing and developing highly memory- and CPU-efficient methods.
- Assess the quality of new datasets for downstream processing.
- Create functional and technical documentation of analyses, findings, and prototypes.
- Collaborate with engineering to produce prototypes.
- Communicate results to teams within the organization and to customers.
- Find creative solutions to business problems and develop them into products.
- Own and drive projects with minimal guidance.
About the company Svitla
Employee benefits
- English Courses
- Pet-friendly
- Team buildings
- Work-life balance
- Parental leave
- Flexible working hours
- Coffee, fruit, snacks
- Sports expenses compensation
- Training compensation
- Health insurance
- Paid public holidays
- Paid sick leave
- Regular salary reviews
- Python
- DataBricks
- Azure DevOps
- Apache Spark
- CI/CD
- Generative AI
- ChatGPT
- Databricks Assistant
GlobalLogic is seeking an experienced Big Data Scientist to help us improve and develop a new, modern, personalized digital platform. Every engineer on our team has an opportunity to make a tremendous impact. We are looking for innovative and passionate engineers who want to take ownership of features and projects and collaborate with other engineers and product managers to evaluate, design, and implement from top to bottom. We will provide opportunities to work with cutting-edge technology that helps millions of people save their money.
We seek creative and motivated engineers to work on our digital coupon solution. There are a wide variety of challenges that have been and still need to be solved. You will have opportunities for growth and development. You will be part of a team that works on services that have served over 1 billion coupons.
The project serves a customer base in the U.S. and is shaping the way CPG brands operate in the market. Work with us using the latest technology and influence the way over 50 million shoppers act each day.
Requirements
- 4 to 8 years of hands-on experience in machine learning, data science, and optimization, with a track record of building and deploying AI-driven solutions in a production environment.
- Expert proficiency in Python and Databricks, with demonstrated experience in Azure DevOps, Spark, and cloud-based ML operations for scalable data processing and model deployment.
- Strong foundation in exploratory data analysis (EDA), statistical modelling, and feature engineering, with the ability to uncover patterns, detect anomalies, and optimize model inputs.
- Hands-on experience developing predictive, optimization, and generative AI models, ensuring business relevance through measurable outcomes and iterative improvements.
- Proven ability to implement and manage CI/CD pipelines for machine learning models, automating deployment, monitoring, and retraining to enhance efficiency and reliability.
- Experience leveraging GenAI tools (e.g., ChatGPT, Databricks Assistant, or equivalent) to accelerate solution development, debug complex issues, and optimize coding workflows.
- Commitment to continuous learning and staying ahead of industry advancements, actively researching and integrating emerging machine learning, AI, and data science trends.
Job responsibilities
As a Senior Data Scientist, you will design, develop, and deploy advanced machine learning and optimization solutions that drive business impact. This role requires a mix of technical expertise, problem-solving, and business acumen to build scalable pipelines, optimize models, and leverage GenAI tools for efficiency. You’ll work across the full data science lifecycle, collaborating with cross-functional teams to translate complex challenges into actionable insights. Success in this role means delivering high-quality solutions with minimal supervision while continuously innovating and staying ahead of industry advancements.
- Build and optimize end-to-end machine learning and optimization pipelines in Databricks using Python and Azure DevOps, ensuring efficiency, scalability, and alignment with business objectives.
- Perform in-depth exploratory data analysis (EDA) to detect anomalies, uncover patterns, and generate actionable insights, using statistical methods to inform data science solutions.
- Design and implement feature engineering strategies to enhance model accuracy and reliability, leveraging domain expertise and automation tools.
- Develop, train, and optimize predictive, optimization, and generative AI models, ensuring measurable business impact through continuous improvement.
- Implement CI/CD workflows for machine learning models, automating deployment, monitoring, and retraining processes for efficiency and consistency.
- Leverage GenAI tools (e.g., ChatGPT, Databricks Assistant) to accelerate solution development, enhance debugging, and optimize coding efficiency.
- Write modular, maintainable, and well-documented code, following best practices for scalability, version control, and secure coding standards.
- Absorb and apply business concepts and stakeholder language, ensuring data science solutions align with organizational goals and decision-making.
- Stay ahead of industry advancements, continuously researching new methodologies, tools, and frameworks to integrate state-of-the-art techniques.
- Collaborate effectively with cross-functional teams, including engineers, project managers, and business stakeholders, to drive successful implementations.
- Demonstrate problem-solving resilience, proactively addressing technical blockers and seeking the right resources or alternative solutions to maintain project momentum.
- Track and analyze model performance using key metrics, refining solutions to enhance real-world business impact and decision support.
About the company GlobalLogic
Employee benefits
- Relocation assistance
- Beauty services
- Psychotherapist support
- Sports expenses compensation
- Health insurance
- Educational programs, courses
- Python
- SQL
- Microservices
- ETL
- Apache Airflow
- dbt
- Snowflake
- Kafka
- AWS Kinesis
- scikit-learn
- TensorFlow
- Keras
- Torch
- PyTorch
N-iX is looking for a passionate and motivated Data ML Engineer to join our team.
Our customer is a technology company that powers one of the world's largest two-sided marketplaces for high-quality photos, illustrations, videos and music used by individuals, businesses, marketing agencies and media organizations of all sizes. In this role, you will be working with highly motivated and talented data scientists and engineers who are primarily focused on developing next generation AI/ML systems that transform the way people find content they enjoy.
Responsibilities:
- Partner with product managers, data scientists and other technical stakeholders to understand requirements and collaborate on providing solutions at scale.
- Create end-to-end machine learning solutions from start to finish, contribute to complex machine learning operations and inference pipelines.
- Build and maintain a modern serverless ML-powered GraphQL stack that will support rapid iteration and drive delivery of AI-powered features.
- Support development, productionalization and deployment of machine learning models.
- Manipulate large datasets quickly and efficiently in both relational and “big” data stores.
- Day-to-day operational support of engineering infrastructure, products, and services, including on-call routine.
Requirements:
- Industry experience in building, deploying and maintaining high performing, resilient, and scalable real-time machine learning systems.
- Expert knowledge of Python and experience in building micro-services that can scale effectively.
- Knowledge of SQL, data modelling, and dimensional modelling concepts
- Familiarity with modern ETL stack (Airflow, DBT, Snowflake) and stream processing frameworks (Kafka, Kinesis).
- Good understanding of machine learning operation practices and procedures.
- Familiarity with machine learning principles, hands-on experience with common machine learning tools like Scikit-Learn and deep learning frameworks like TensorFlow, Keras, Torch/PyTorch
- Experience with Cloud infrastructure provisioning and integration patterns
- Excitement to be part of the team, with the ability to communicate clearly and develop close working relationships with other team members and stakeholders.
- Ability to innovate and identify new approaches to improve current practices.
About the company N-iX
Employee benefits
- English Courses
- Flexible working hours
- Sports expenses compensation
- Training compensation
- Health insurance
- Python
- Pandas
- SciPy
- scikit-learn
- PyTorch
- TensorFlow
- FastAPI
- Streamlit
- asyncio
- OpenAI
- LangChain
- LlamaIndex
- Java
- C++
- C#
Are you looking for a challenging yet rewarding project that will leverage the best of your decision-making and engineering excellence?
At DIAL we strive to maintain a delicate balance between the constantly evolving LLM landscape, the demands and scales of enterprise customers, and the restrictions that come with developing an open-source multimodal LLM and Application Orchestration platform under the Apache 2 license, all while ensuring ease of use and deployment and constantly delivering new features.
Using DIAL as a foundation, we build a variety of practical solutions that can be customized for specific needs, such as StatGPT, a talk-to-your-data platform adopted by the International Monetary Fund and the World Bank. Imagine a future where accessing and understanding complex datasets is as easy as asking a question. With Project StatGPT, that future is now. We're on a mission to democratize data access through the power of Large Language Models (LLMs) based on Natural Language Processing (NLP). StatGPT is designed to bridge the gap between vast data repositories and the business people who need insights from them, using intuitive, conversational interfaces.
Your role:
We see the candidate for this position as a mix of mathematician, algorithm specialist, analyst, and programmer, capable of developing domain expertise and implementing production-ready solutions in the most scientifically challenging areas. All of the above is achieved either by doing comprehensive research or by discovering an efficient way of implementing ideas obtained from research, analytics, or scientific articles.
Here we provide a list of skills and technologies we are interested in. See for yourself whether any of them are in your scope of expertise or interest. You do not have to master all of them. Tell us what your strong side is, and we will find the right task for you. We value the thirst for knowledge and self-development within our team.
Responsibilities
- Design, develop, and maintain core functionalities using Python
- Collaborate with cross-functional teams to define, design, and ship new features
- Ensure the performance, quality, and responsiveness of applications
- Identify and correct bottlenecks and fix bugs
- Help maintain code quality, organization, and automation
- Conduct system and test engineering activities as required
Requirements
- Strong knowledge in common Computer Science and Programming Languages theory
- Good Knowledge of Python
- Understanding of requirements development. Desire and ability to understand how things work in a specific domain
- Advanced algorithms and data structures
Nice to have
- Knowledge of Python libraries: Pandas, SciPy, scikit-learn, PyTorch, TensorFlow, FastAPI, Streamlit, asyncio, and others
- Experience with OpenAI / LangChain / LlamaIndex / other LLM-related tooling
- Experience in Java / C++ / C#
- Experience with Data analytics
- Math, Statistics, Machine learning
About the company EPAM
Employee benefits
- English Courses
- Relocation assistance
- Flexible working hours
- Psychotherapist support
- Home office compensation
- Training compensation
- Health insurance
- Paid sick leave
- Educational programs, courses
- Python
- Azure Databricks
- MLflow
The successful candidate will be part of a growing Analytics Team, focusing on various Big Data technologies and providing projects and services to international clients and teams from across the globe. He/she will participate in end-to-end project implementations delivered in a dynamic and young environment that encourages innovative solutions. We offer an opportunity to develop in both the technology and business areas.
Responsibilities
- Build and maintain data models that support large-scale predictive analytics.
- Work with Microsoft Azure Databricks to process, analyze, and model large datasets.
- Implement and manage MLflow for model tracking, versioning, and deployment (a minimal illustrative sketch follows this list).
- Automate model training and deployment workflows to ensure production stability and scalability.
- Collaborate with engineers, analysts, and business stakeholders to align forecasting models with business needs.
- Continuously research and apply the latest advancements in machine learning and time series forecasting.
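For illustration only: a minimal MLflow tracking sketch (a parameter, a metric, and a logged model artifact) on a synthetic regression task. The run name, model choice, and features are placeholders rather than the project's real forecasting setup; by default the run is written to a local ./mlruns directory.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error

X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)
alpha = 1.0

with mlflow.start_run(run_name="demand-forecast-sketch"):
    model = Ridge(alpha=alpha).fit(X, y)
    mlflow.log_param("alpha", alpha)                                   # tracked hyperparameter
    mlflow.log_metric("train_mae", mean_absolute_error(y, model.predict(X)))
    mlflow.sklearn.log_model(model, "model")                           # versioned model artifact
```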
Must have skills
- Proficiency in Python for data analysis, modeling, and automation.
- Hands-on experience with Microsoft Azure Databricks and cloud-based machine learning workflows.
- Knowledge of MLflow for model lifecycle management.
- Solid understanding of data modeling techniques and best practices.
- Experience with production automation of machine learning models.
- Strong analytical and problem-solving skills, with the ability to translate business needs into technical solutions.
Nice to have
- Experience in manufacturing or supply chain forecasting.
- Familiarity with deep learning approaches for time series forecasting.
- Exposure to cloud-based MLOps practices.
About the company Luxoft
Employee benefits
- Relocation assistance
- Team buildings
- Multinational team
- Large stable company
- Educational programs, courses