Data Engineer / Big Data Architect vacancies
- ETL
- ELT
- SQL
- T-SQL
- Python
- Java
- Azure
- AWS
- GCP
Innovecs is a global digital services company with a presence in the US, the UK, the EU, Israel, Australia, and Ukraine. Specializing in software solutions, the Innovecs team has experience in Supply Chain, Healthtech, Collaboration Tech, and Gaming.
For the fifth year in a row, Innovecs is included in the Inc. 5000 list of the fastest-growing private companies in the US and in IAOP's ranking of the best global outsourcing service providers. Recently, Innovecs was honored with the prestigious Global Good Awards for Employee Engagement & Wellbeing, won gold at the Employer Brand Management Awards, and was included in the Global Top 100 Inspiring Workplaces Ranking.
Requirements
- 5+ years of experience as a Data Engineer or in a similar backend engineering role.
- Proven experience building production-grade data pipelines (ETL/ELT).
- Strong proficiency in SQL and T-SQL, combined with Python or Java.
- Solid understanding of cloud platforms (e.g., Azure, AWS, or GCP).
- Strong grasp of data modeling, data architecture, and data governance principles.
- Excellent problem-solving skills and the ability to work independently in a fast-paced environment.
- Strong knowledge of database normalization, indexing strategies, and execution plans.
- Experience in the Supply Chain & Logistics domain is preferred.
Responsibilities
- Design and implement robust, scalable, and high-performance data pipelines to ingest, process, and serve structured and unstructured data from various internal and external sources (e.g., TMS, WMS, ERP, telematics, IoT).
- Optimize system performance and scalability for large logistics and operational data volumes.
- Build and maintain data models, data marts, and data warehouses/lakes that support analytics and decision-making.
- Partner with the team to translate business requirements into technical data solutions.
- Implement data quality, lineage, and observability practices.
- Drive ETL/ELT best practices, automation, and operational excellence in data workflows.
- Ensure security, compliance, and privacy standards in data processing.
- Take ownership of production support.
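For illustration, a minimal Python sketch of the kind of ETL step the responsibilities above describe: extract records from a hypothetical TMS export, normalize them, and load them into a warehouse table. The file, table, and column names are assumptions, not details from the vacancy.

```python
import csv
import sqlite3
from datetime import datetime

# Hypothetical example: load daily shipment events exported from a TMS (CSV)
# into a warehouse table, normalizing the timestamp column along the way.
def load_shipments(csv_path: str, conn: sqlite3.Connection) -> int:
    conn.execute(
        """CREATE TABLE IF NOT EXISTS fact_shipments (
               shipment_id TEXT PRIMARY KEY,
               status TEXT,
               event_ts TEXT
           )"""
    )
    rows = []
    with open(csv_path, newline="") as f:
        for rec in csv.DictReader(f):
            # Transform: normalize the source timestamp to ISO 8601.
            ts = datetime.strptime(rec["event_time"], "%d.%m.%Y %H:%M")
            rows.append((rec["shipment_id"], rec["status"].upper(), ts.isoformat()))
    conn.executemany("INSERT OR REPLACE INTO fact_shipments VALUES (?, ?, ?)", rows)
    conn.commit()
    return len(rows)

if __name__ == "__main__":
    with sqlite3.connect("warehouse.db") as conn:
        print(load_shipments("tms_shipments.csv", conn))
```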
About the company Innovecs
Employee benefits
- Team buildings
- Free branded merch
- Flexible working hours
- Psychotherapist support
- Health insurance
- Laptop provided
- Paid sick leave
- Educational programs, courses
- Java
- Go
- Spring
- Kubernetes
- Terraform
- Helm
- Argo CD
We're looking for an experienced Backend Developer to join our growing data platform team.
As a Backend Developer, you'll work on a massive data processing pipeline, ingesting over a billion daily events from multiple sources. You'll also create the next-generation pipeline and help us scale from a billion events a day to tens of billions of events.
Responsibilities
- Own projects from initial discussions to release, including data exploration, architecture design, benchmarking new technologies, and product feedback.
- Work with massive amounts of data from different sources using state-of-the-art technology to make big data accessible in real-time.
- Develop and deploy real-time and batch data processing infrastructures.
- Manage the development of distributed data pipelines and complex software designs to support high data rates (millions of daily active users) using cloud-based tools.
- Work closely with company stakeholders on data-related issues.
- Develop unit, integration, end-to-end (e2e), and load tests.
Requirements
- 4+ years of experience as a Software Engineer, including design & development.
- Proven experience with Java or Go.
- Experience in the design and development of scalable big data solutions.
- Experience working in a cloud-based environment.
- Passionate about technologies, frameworks, and best practices.
- Ability to work in a fast-paced environment.
Advantages
- Experience with Spring / Kubernetes.
- Experience with Terraform / Helm / Argo.
About the company Moon Active
Employee benefits
- Relocation assistance
- Team buildings
- Free lunch
- Parental leave
- Coffee, fruit, snacks
- Sports compensation
- Training compensation
- Health insurance
- Paid sick leave
- Educational programs, courses
- Car parking
- Regular salary review
- Apache Airflow
- Python
- SQL
- dbt
- Snowflake
- AWS
- Docker
We are seeking a Senior Data Engineer to join our growing team and help design, build, and maintain robust data infrastructure for our client — a leader in digital banking solutions. This role is ideal for someone who is passionate about building scalable and efficient data pipelines and enjoys working with modern data tools in a cloud environment.
This is the job
As a Senior Data Engineer, you will play a key role in the development of a modern data platform. You will collaborate closely with data scientists, analysts, and other engineers to ensure high data availability, quality, and performance.
This is you
- You have proven experience with Airflow, Python programming, and SQL / dbt.
- You’ve worked with Snowflake or similar cloud-based data warehouses.
- You have a solid understanding of AWS services and Docker.
- You hold a Bachelor’s degree in Mathematics, Computer Science, or another relevant quantitative field.
- You approach problems with strong analytical and problem-solving skills.
- You’re familiar with Data Engineering best practices, including Data Quality and Monitoring/Observability.
- You’re comfortable working in a dynamic, fast-paced environment, and take ownership of your work.
- You have a growth mindset and are eager to learn and grow through hands-on experience.
This is your role
- Design and develop scalable data pipelines to efficiently process and analyze large volumes of data, utilizing Snowflake, Looker, Airflow, and dbt.
- Collaborate with stakeholders to translate their requirements into technical steps and coordinate the projects you drive with them.
- Monitor and improve the health of our data pipelines.
- Promote knowledge sharing within the team to foster collaboration and continuous learning, and mentor junior colleagues.
- Stay updated on emerging technologies and best practices in data engineering, and bring new ideas to enhance the technical setup.
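As a rough sketch of how the Airflow and dbt pieces mentioned above typically fit together (Airflow 2.x API; the project path, target, and schedule are assumptions):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical daily pipeline: run dbt models against the warehouse, then run
# dbt tests. The project path, profile target, and schedule are placeholders.
with DAG(
    dag_id="daily_dbt_refresh",
    start_date=datetime(2024, 1, 1),
    schedule="0 5 * * *",   # Airflow 2.4+ keyword; older versions use schedule_interval
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/analytics && dbt run --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt/analytics && dbt test --target prod",
    )
    dbt_run >> dbt_test
```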
About the company Avenga
Employee benefits
- English Courses
- Paid overtime
- Flexible working hours
- Sports compensation
- Health insurance
- Paid sick leave
- Educational programs, courses
- Java
- Scala
- Python
- Go
- C++
- Rust
- Kafka
- Apache Flink
- Apache Spark
- Apache Beam
- NoSQL
- Cassandra
- MongoDB
- OLAP
- ClickHouse
- StarRocks
- Doris
- SQL
- Kubernetes
- Helm
- ArgoCD
- Iceberg
- Delta Lake
- Apache Hudi
- GCP
- AWS
- Azure
We are seeking an experienced developer to create a high-performance, scalable, and flexible behavioral analytics engine platform.
You will be a key member of our team, responsible for the architecture, development, and optimization of core components for processing and analyzing large volumes of data [terabytes].
Required professional experience:
- 7+ years of experience in developing analytics platforms or big data processing systems.
- Deep knowledge of programming languages such as Java, Scala, Python, Go, C++, or Rust.
- Experience with distributed systems and big data technologies [Kafka, Flink, Spark, Apache Beam].
- Understanding of scalable system design principles and architectures for real-time data processing.
- Experience with NoSQL databases [Cassandra, MongoDB].
- Experience with OLAP databases [ClickHouse, StarRocks, Doris].
- Knowledge of SQL.
- Understanding of statistical methods and principles of data analysis.
- Experience with Kubernetes [Helm, ArgoCD].
Desired Skills:
- Experience with open table formats [Apache Iceberg/Delta Lake/Hudi].
- Experience with cloud platforms [Google Cloud, AWS, Azure].
- Knowledge of data security methods and compliance with regulatory requirements [GDPR, CCPA].
Key Responsibilities:
- Design and develop the architecture of a behavioral analytics platform for real-time big data processing.
- Implement key engine systems [data collection, event processing, aggregation, data preparation for visualization].
- Optimize the platform performance and scalability for handling large data volumes.
- Develop tools for user behavior analysis and product metrics.
- Collaborate with data analysts and product managers to integrate the engine into analytics projects.
- Research and implement new technologies and methods in data analysis.
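A toy, dependency-free Python sketch of the aggregation step named above, counting behavioral events per user, event type, and minute; the event schema is assumed, and a production engine would run this on the streaming stack listed in the requirements.

```python
from collections import Counter
from datetime import datetime, timezone

# Toy aggregation: count behavioral events per (user, event_type, minute).
# In production this would run on a stream processor (Flink/Spark/Beam);
# the event fields below are assumed purely for illustration.
def aggregate_events(events: list[dict]) -> Counter:
    counts: Counter = Counter()
    for e in events:
        minute = datetime.fromtimestamp(e["ts"], tz=timezone.utc).strftime("%Y-%m-%dT%H:%M")
        counts[(e["user_id"], e["event_type"], minute)] += 1
    return counts

if __name__ == "__main__":
    sample = [
        {"user_id": "u1", "event_type": "level_start", "ts": 1_700_000_000},
        {"user_id": "u1", "event_type": "level_start", "ts": 1_700_000_020},
        {"user_id": "u2", "event_type": "purchase", "ts": 1_700_000_045},
    ]
    for key, n in aggregate_events(sample).items():
        print(key, n)
```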
About the company Burny Games
Employee benefits
- Flexible working hours
- Health insurance
- Paid sick leave
- Paid vacation
- Educational programs, courses
- Regular salary review
- Python
- SQL
- Apache Spark
- AWS Glue
- Athena
- Apache Airflow
- ETL
- ELT
- Amazon S3
- AWS Lambda
- AWS RDS
- Amazon API Gateway
- CI/CD
- FastAPI
- Great Expectations
Our client is a leading SaaS company offering pricing optimization solutions for e-commerce businesses. Its advanced technology utilizes big data, machine learning, and AI to assist customers in optimizing their pricing strategies and maximizing their profits.
About the Role:
As a Data Engineer, you'll have end-to-end ownership – from system architecture and software development to operational excellence.
Key Responsibilities:
- Design and implement scalable machine learning pipelines with Airflow, enabling efficient parallel execution.
- Enhance our data infrastructure by refining database schemas, developing and improving APIs for internal systems, overseeing schema migrations, managing data lifecycles, optimizing query performance, and maintaining large-scale data pipelines.
- Implement monitoring and observability, using AWS Athena and QuickSight to track performance, model accuracy, operational KPIs and alerts.
- Build and maintain data validation pipelines to ensure incoming data quality and proactively detect anomalies or drift.
- Collaborate closely with software architects, DevOps engineers, and product teams to deliver resilient, scalable, production-grade machine learning pipelines.
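To make the data-validation responsibility concrete, a small hand-rolled Python check of the kind a framework such as Great Expectations formalizes; the column names and thresholds are assumptions.

```python
# Hand-rolled data-quality checks of the kind the role describes; in practice
# these would live in a framework such as Great Expectations. Column names
# and thresholds are assumed for illustration only.
def validate_batch(rows: list[dict]) -> list[str]:
    issues = []
    if not rows:
        return ["batch is empty"]

    null_prices = sum(1 for r in rows if r.get("price") is None)
    if null_prices / len(rows) > 0.01:  # allow at most 1% missing prices
        issues.append(f"null rate for 'price' too high: {null_prices}/{len(rows)}")

    out_of_range = [
        r for r in rows
        if r.get("price") is not None and not (0 < r["price"] < 100_000)
    ]
    if out_of_range:
        issues.append(f"{len(out_of_range)} rows with 'price' outside (0, 100000)")

    return issues

if __name__ == "__main__":
    batch = [{"price": 19.99}, {"price": None}, {"price": -5.0}]
    print(validate_batch(batch) or "batch passed all checks")
```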
Required Competence and Skills:
To excel in this role, candidates should possess the following qualifications and experiences:
- A Bachelor’s or higher in Computer Science, Software Engineering or a closely related technical field, demonstrating strong analytical and coding skills.
- At least 5 years of experience as a data engineer, software engineer, or in a similar role, using data to drive business results.
- At least 5 years of experience with Python, building modular, testable, and production-ready code.
- Solid understanding of SQL, including indexing best practices, and hands-on experience working with large-scale data systems (e.g., Spark, Glue, Athena).
- Practical experience with Airflow or similar orchestration frameworks, including designing, scheduling, maintaining, troubleshooting, and optimizing data workflows (DAGs).
- A solid understanding of data engineering principles: ETL/ELT design, data integrity, schema evolution, and performance optimization.
- Familiarity with AWS cloud services, including S3, Lambda, Glue, RDS, and API Gateway.
Nice-to-Have:
- Experience with MLOps practices such as CI/CD, model and data versioning, observability, and deployment.
- Familiarity with API development frameworks (e.g., FastAPI).
- Knowledge of data validation techniques and tools (e.g., Great Expectations, data drift detection).
- Exposure to AI/ML system design, including pipelines, model evaluation metrics, and production deployment.
About the company Adaptiq
Employee benefits
- English Courses
- Educational programs, courses
- Python
- Kafka
- ClickHouse
- Data lake
- Argo Workflows
- Apache Airflow
- Prefect
- CI/CD
- Docker
- Kubernetes
Boosty Labs is one of the most prominent outsourcing companies in the blockchain domain. Among our clients are such well-known companies as Ledger, Consensys, Storj, Animoca brands, Walletconnect, Coinspaid, Paraswap, and others.
About the project: Advanced blockchain analytics and on-the-ground intelligence to empower financial institutions, governments & regulators in the fight against cryptocurrency crime.
Requirements:
- 3+ years of experience in data engineering or a similar role
- Strong programming skills in Python
- Solid hands-on experience with Apache Kafka for real-time data streaming
- Experience working with ClickHouse or other columnar databases
- Understanding of Data Lake architecture and cloud data storage solutions
- Familiarity with Argo Workflows or similar workflow orchestration tools (e.g., Airflow, Prefect)
- Experience with CI/CD processes and containerization (Docker, Kubernetes) is a plus
- Strong problem-solving skills and the ability to work independently
Responsibilities:
- Design and implement scalable, efficient, and reliable data pipelines
- Work with real-time and batch data processing using Kafka and ClickHouse
- Develop and maintain ETL/ELT processes using Python
- Manage and optimize data storage in cloud-based Data Lake environments
- Use Argo Workflows to orchestrate complex data workflows
- Collaborate with data scientists, analysts, and engineering teams to support their data needs
- Ensure data quality, consistency, and governance throughout the pipeline
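A minimal sketch of the Kafka-based ingestion described above, using the kafka-python client; the topic, broker address, consumer group, and event schema are assumptions, and the ClickHouse write is left as a stub.

```python
import json

from kafka import KafkaConsumer  # kafka-python client

# Minimal real-time ingestion sketch: consume JSON transaction events and
# buffer them for a batch insert into an analytical store (e.g. ClickHouse).
# Topic name, broker address, and event fields are assumed for illustration.
def write_batch(batch: list[dict]) -> None:
    # Stub: a real pipeline would perform a bulk INSERT into ClickHouse here.
    print(f"flushing {len(batch)} events")

def run(batch_size: int = 500) -> None:
    consumer = KafkaConsumer(
        "onchain-transactions",
        bootstrap_servers="localhost:9092",
        group_id="clickhouse-loader",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        auto_offset_reset="earliest",
        enable_auto_commit=False,
    )
    batch: list[dict] = []
    for message in consumer:
        batch.append(message.value)
        if len(batch) >= batch_size:
            write_batch(batch)
            consumer.commit()  # commit offsets only after a successful flush
            batch.clear()

if __name__ == "__main__":
    run()
```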
About the company Boosty Labs
Employee benefits
- English Courses
- Work-life balance
- No bureaucracy
- Flexible working hours
- Psychotherapist support
- Training compensation
- Paid sick leave
- Paid vacation
- Python
- SQL
- Palantir Foundry
- PostgreSQL
- Redis
- Snowflake
- Azure
- DataBricks
We are seeking a skilled and adaptable Data Engineer who is passionate about data infrastructure and long-term career growth. This role offers an opportunity to build and maintain scalable data solutions while developing expertise in Palantir Foundry and other modern data tools. We value individuals who are excited to expand their technical capabilities over time, work on multiple accounts, and contribute to a dynamic and growing team.
You will play a pivotal role in transforming raw data from various sources into structured, high-quality data products that drive business decisions. The ideal candidate should be motivated to learn and grow within the organization, actively collaborating with experienced engineers to strengthen our data capabilities over time.
About the project
This project focuses on building a centralized data platform for a leading investment firm that supports data-driven decision-making for high-growth companies. Currently, data is sourced from multiple locations, including Excel files, third-party tools, and custom applications, managed within separate systems. This decentralized approach creates inefficiencies and introduces the potential for data inaccuracies.
The objective is to integrate these data sources into a single, unified platform that streamlines access and reduces manual errors. By transforming financial, legal, and operational data into structured data marts, the platform will enable advanced analytics and real-time visualization through BI tools on both web and mobile interfaces.
Skills & Experience
- Bachelor’s degree in Computer Science, Software Engineering, or equivalent experience.
- Minimum 3 years of experience in Python, SQL, and data engineering processes.
- Experience with Palantir Foundry or a strong willingness to learn and develop expertise in it.
- Proficiency in multiple database systems, such as PostgreSQL, Redis, and a data warehouse like Snowflake, including query optimization.
- Hands-on experience with Microsoft Azure services.
- Strong problem-solving skills and experience with data pipeline development.
- Familiarity with testing methodologies (unit and integration testing).
- Docker experience for containerized data applications.
- Collaborative mindset, capable of working across multiple teams and adapting to new projects over time.
- Fluent in English (written & verbal communication).
- Curiosity and enthusiasm for finance-related domains (personal & corporate finance, investment concepts).
Nice to have
- Experience with Databricks.
- Experience with Snowflake.
- Background in wealth management, investment analytics, or financial modeling.
- Contributions to open-source projects or personal projects showcasing data engineering skills.
Responsibilities
- Design and maintain scalable data pipelines to ingest, transform, and optimize data.
- Collaborate with cross-functional teams (engineering, product, and business) to develop solutions that address key data challenges.
- Support data governance, data quality, and security best practices.
- Optimize data querying and processing for efficiency and cost-effectiveness.
- Work with evolving technologies to ensure our data architecture remains modern and adaptable.
- Contribute to a culture of learning and knowledge sharing, supporting newer team members in building their skills.
- Grow into new roles within the company by expanding your technical expertise and working on diverse projects over time.
About the company Proxet
Employee benefits
- English Courses
- Team buildings
- Parental leave
- Flexible working hours
- Psychotherapist support
- Sports compensation
- Health insurance
- Laptop provided
- Paid sick leave
- Educational programs, courses
- AWS
- Snowflake
- Salesforce
- Workato
- Microsoft Power BI
- Python
We are seeking a Senior Data Engineer to lead the design and implementation of a robust data pipeline and warehouse architecture leveraging Snowflake on AWS. This role will focus on ingesting and transforming data primarily from Salesforce (SFDC) and potentially other marketing and sales systems, enabling advanced analytics and reporting capabilities. The candidate will play a key advisory role in defining and implementing best practices for data architecture, ingestion, transformation, and reporting.
About the project
Our client is a global real estate services company specializing in the management and development of commercial properties. Over the past several years, the organization has made significant strides in systematizing and standardizing its reporting infrastructure and capabilities. Due to the increased demand for reporting, the organization is seeking a dedicated team to expand capacity and free up existing resources.
Skills & Experience
- 5+ years of experience in data architecture, data engineering, or related roles.
- Proven expertise in designing and implementing data pipelines on AWS.
- Hands-on experience with Snowflake (ingestion, transformation, and data modeling).
- Strong understanding of Salesforce (SFDC) data structures and integrations.
- Deep knowledge of data warehouse architectures, including Medallion architecture and data governance.
- Good to know: Workato (or similar integration tools) and Power BI for dashboards and reporting.
- Experience in data validation, cleansing, and optimization techniques.
- Exceptional communication and stakeholder management skills.
- Ability to work independently and deliver results in a fast-paced environment.
Responsibilities
- Design and implement scalable data pipelines on AWS to ingest and transform data from Salesforce (SFDC) and other sources into Snowflake.
- Integrate SFDC data using tools like Workato (or propose alternative solutions).
- Provide advisory services on Snowflake architecture and implement best practices for ingestion, validation, cleansing, and transformation.
- Develop the initial set of data products (analytics, dashboards, and reporting) in Power BI.
- Guide the creation of a semantic layer and optimize data governance using Medallion architecture.
- Ensure scalability, efficiency, and performance of the data infrastructure.
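A hedged sketch of a typical Snowflake bronze-layer load for staged Salesforce extracts, using snowflake-connector-python and a COPY INTO statement; the account, warehouse, stage, and table names are placeholders, not details from this project.

```python
import snowflake.connector

# Hypothetical bronze-layer load: copy staged Salesforce (SFDC) extracts into
# a raw table. Account, warehouse, stage, and table names are placeholders.
COPY_SQL = """
COPY INTO raw.sfdc_opportunity
FROM @raw.sfdc_stage/opportunity/
FILE_FORMAT = (TYPE = PARQUET)
MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
"""

def load_bronze() -> None:
    conn = snowflake.connector.connect(
        account="xy12345.eu-west-1",
        user="ETL_SERVICE",
        password="***",
        warehouse="LOAD_WH",
        database="ANALYTICS",
    )
    try:
        cur = conn.cursor()
        cur.execute(COPY_SQL)
        print(cur.fetchall())  # per-file load results returned by COPY
    finally:
        conn.close()

if __name__ == "__main__":
    load_bronze()
```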
About the company Proxet
Employee benefits
- English Courses
- Team buildings
- Parental leave
- Flexible working hours
- Psychotherapist support
- Sports compensation
- Health insurance
- Laptop provided
- Paid sick leave
- Educational programs, courses
- SQL
- Python
- Hadoop
- Yarn
- MapReduce
- Hive
- HDFS
- Apache Spark
- Kafka
- AWS
- EMR
- Amazon S3
- AWS Lambda
- AWS Redshift
- PySpark
EPAM is looking for a Senior/Lead Big Data Software Engineer to develop and maintain internal data warehouse systems for our client's products and applications.
What you will do in this role
- Develop and maintain complex data warehouse systems for processing large volumes of data
- Design and optimize database architecture and data schemas
- Develop and implement efficient SQL queries for data analysis and processing
- Develop and implement software solutions using the Hadoop ecosystem (Yarn, MapReduce, Hive, HDFS)
- Use Spark and Kafka for real-time data processing and transfer
- Configure and manage AWS cloud services (EMR, S3, Lambda)
- Optimize existing solutions to reduce time and resource consumption
- Collaborate with other engineers and teams to build scalable and efficient solutions
- Perform data analysis and produce reports and recommendations for improving the systems
- Define and implement best practices for data security and integrity
- Participate in preparing technical documentation and maintain user-facing documentation
Skills
- 4+ years of experience in data warehouse development and database design
- SQL expert with at least 2 years of hands-on experience
- Good Python development skills
- Development of software solutions using Hadoop ecosystem components (Yarn, MapReduce, Hive, HDFS)
- Hands-on experience with Spark and Kafka
- Experience with AWS cloud services (EMR, S3, Lambda)
- Excellent analytical and complex problem-solving skills
- Good communication and presentation skills
- Attention to detail
- Ability to meet tight deadlines and prioritize tasks
Would be a plus
- Database administration experience
- Experience with AWS Redshift
- Hands-on experience with PySpark
- Understanding of approaches to scaling services/applications
About the company EPAM
Employee benefits
- English Courses
- Relocation assistance
- Flexible working hours
- Psychotherapist support
- Home office compensation
- Training compensation
- Health insurance
- Paid sick leave
- Educational programs, courses
- Apache Spark
- Presto
- Hive
- Apache Flink
- Apache Beam
- AWS
- Kubernetes
- Java
- Python
- Ray
- ML
- AI
We are looking for engineers well versed in cloud data infrastructure to advise customers by building solutions using diverse technologies like Spark, Flink, Kafka, Ray, and others. As part of this group, you will educate customers on the value proposition of the cloud data platform, address their challenges, and deliver solutions that go from proof of concept to production. You will be expected to partner effectively with cross-functional engineering teams and customers.
Essential functions
- Must have:
- AWS experience with Kubernetes operating knowledge
- Excellent communication skills
- Big data experience with Spark or Flink
- Experience with automation for testing, monitoring, and CI/CD
Qualifications
- 8+ YOE, with 5+ years of experience working with big data technologies and cloud environments
- Hands-on experience on batch processing (Spark, Presto, Hive) or streaming (Flink, Beam, Spark Streaming)
- Experience in AWS and knowledge of its ecosystem; experience in scaling and operating Kubernetes
- Excellent communication skills are a must, including experience working directly with customers to explain how they would use the infrastructure to build solutions that meet their business goals
- Proven ability to work in an agile environment, flexible to adapt to changes
- Able to work independently and research possible solutions to unblock customers
- Programming experience in Java or Python
- Fast learner; experience with other common big data open source technologies is a big plus
- Knowledge of machine learning is a nice-to-have
Would be a plus
- Experience working in a customer-facing or consulting role
- Programming experience in Java and Python
- Knowledge of Ray
- Knowledge of Machine Learning and AI
About the company Grid Dynamics
Employee benefits
- English Courses
- Relocation assistance
- Flexible working hours
- Childcare for employees
- Sports compensation
- Training compensation
- Health insurance
- Educational programs, courses
- Apache Spark
- Scala
- Hadoop
- Kafka
- Oracle
- PostgreSQL
- Teradata
- Cassandra
We are building scalable data pipelines and infrastructure to generate reports based on massive datasets. Our team is responsible for designing, developing, and validating jobs using the latest versions of Scala and Apache Spark, ensuring accurate statistical results for our stakeholders.
This is a distributed team environment, offering an exciting opportunity to collaborate with top Big Data engineers across Europe and overseas.
While this role primarily focuses on Big Data engineering, experience with CI/CD and DevOps practices is a strong advantage, as infrastructure-related tasks will also be part of the job.
Essential functions
- Design and Develop Scalable Data Pipelines
- Implement and Validate Big Data Solutions
- Integrate and Manage Infrastructure
- Collaborate in a Distributed Environment
Qualifications
- Strong expertise in Spark and Scala
- Hands-on experience with Hadoop
- Proficiency in processing and computation frameworks: Kafka, Spark
- Experience with database engines: Oracle, PostgreSQL, Teradata, Cassandra
- Understanding of distributed computing technologies, approaches, and patterns
Would be a plus
- Experience with Data Lakes, Data Warehousing, or analytics systems
About the company Grid Dynamics
Employee benefits
- English Courses
- Relocation assistance
- Flexible working hours
- Childcare for employees
- Sports compensation
- Training compensation
- Health insurance
- Educational programs, courses
- AWS services
- AWS Glue
- Athena
- EMR
- EC2
- IAM
- MWAA
- Python
- PySpark
- Django
- Great Expectations
- Soda
The client is the largest pan-European online car market, with around 1.5 million listings and more than 43,000 car dealer partners, offering inspiring solutions and empowering services. We amaze our customers by delivering real value.
Details on tech stack:
- Expertise in AWS services, especially Glue, Athena, EMR, EC2, IAM, MWAA
- Proficiency in Python and PySpark
Key requirements to the candidate:
- AWS Services: Expertise in AWS services, especially Glue, S3, Athena, EMR, EC2, and MWAA.
- Programming Languages: Proficiency in Python, PySpark, SQL, and/or Scala.
- Big Data Technologies: Hands-on experience with Spark, Trino, and Presto
- Data Platforms: Experience in building data platforms, not just using them
Qualifications
- Expertise in AWS services, especially Glue, Athena, EMR, EC2, IAM, MWAA
- Proficiency in Python and PySpark
- Experience in building data platforms, not just using them
- Proficiency in data modeling techniques and best practices
- Experience in implementing data contracts
- Experience in applying data governance policies
- Experience with data quality frameworks (Great Expectations, Soda)
- Familiarity with the data mesh architecture and its principles
- Django experience
- Important: Strong Python knowledge.
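For context on the PySpark side of this stack, a minimal job sketch that reads raw listing events from S3, filters them, and writes a partitioned table back; bucket and column names are assumptions, and on Glue the same logic would sit inside a Glue job script.

```python
from pyspark.sql import SparkSession, functions as F

# Minimal PySpark job sketch: read raw listing events from S3, keep active
# listings, and write them back partitioned by date. Bucket and column names
# are placeholders for illustration.
def main() -> None:
    spark = SparkSession.builder.appName("listings_daily").getOrCreate()

    raw = spark.read.parquet("s3://example-raw-bucket/listings/")
    active = (
        raw.filter(F.col("status") == "ACTIVE")
           .withColumn("ingest_date", F.to_date("event_ts"))
    )
    (
        active.write.mode("overwrite")
              .partitionBy("ingest_date")
              .parquet("s3://example-curated-bucket/listings_active/")
    )
    spark.stop()

if __name__ == "__main__":
    main()
```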
About the company Grid Dynamics
Employee benefits
- English Courses
- Relocation assistance
- Flexible working hours
- Childcare for employees
- Sports compensation
- Training compensation
- Health insurance
- Educational programs, courses
- MySQL
- PostgreSQL
- AWS
- AWS RDS
- Aurora
- DynamoDB
- Indigo Data Services
- Prometheus
- Grafana
- CloudWatch
- SQL
- Python
- Bash
- Snowflake
- NoSQL
- MongoDB
- ClickHouse
- BigQuery
- Kubernetes
- Docker
- CI/CD
We are seeking an experienced Senior Data Engineer (MySQL, PostgreSQL) to play an essential role in implementing and maintaining a data warehouse. A career in Exadel means you will work alongside exceptional colleagues and be empowered to reach your professional and personal goals.
About the Customer
The world’s largest publisher of investment research. For over two decades, it has connected the world’s leading asset and wealth managers with nearly 1,000 research firms in more than 50 countries, and it serves internal teams across multinational corporations from its offices in Durham (HQ), New York, London, Edinburgh, and Timisoara.
The client facilitates the equitable exchange of critical investment insights by improving efficiency, collaboration, and security across the complete information lifecycle. The ecosystem is designed to meet users’ bespoke needs, from compliance tracking to interactive publishing, by removing friction from the publication, dissemination, consumption, and application of investment research content.
Requirements
- 5+ years of background in working with MySQL and PostgreSQL in production environments
- Strong expertise in query optimization, indexing strategies, execution plan analysis, and performance tuning
- Competency in database replication, failover strategies, and high-availability architectures
- Hands-on experience with AWS database services, including RDS, Aurora, and DynamoDB
- Proficiency in troubleshooting database performance issues and implementing tuning strategies
- Familiarity with database integration challenges and experience working with Indigo Data Service or similar solutions
- Skills in monitoring and alerting tools like Prometheus, Grafana, or CloudWatch for real-time database performance tracking
- Practice in SQL, Python, or Bash for automation of database operations
- Familiarity with schema design, normalization, and data modeling best practices
- English level – Upper-Intermediate
Nice to Have
- Skills in working with Snowflake and cloud-based data warehouses
- Knowledge of NoSQL databases such as MongoDB, ClickHouse, or BigQuery
- Exposure to containerized database deployments using Kubernetes and Docker
- Experience with CI/CD pipelines for database schema changes and migrations
Responsibilities
- Optimize query performance and fine-tune database configurations for MySQL to enhance system efficiency
- Resolve database integration issues by decoupling dependencies and leveraging the Indigo Data Service for improved failover strategies
- Analyze and mitigate single points of failure within the current database architecture, ensuring high availability and fault tolerance
- Improve indexing strategies, caching mechanisms, and partitioning techniques to reduce query execution times
- Monitor and troubleshoot performance bottlenecks, identifying and addressing slow queries, deadlocks, and resource contention
- Implement database scaling strategies, including read replicas, sharding, and horizontal scaling in AWS environments
- Enhance database security and compliance by implementing best practices for access control, encryption, and backup strategies
- Work closely with software engineers, DevOps, and data teams to ensure seamless database integration and high-performance data access
- Deploy and manage database solutions on AWS, optimizing for cost efficiency and scalability
- Implement automation for database maintenance tasks, including schema migrations, backups, and failover management
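To illustrate the execution-plan workflow in the requirements, a self-contained example of the tuning loop: inspect the plan for a slow filter, add an index, and re-check. SQLite's EXPLAIN QUERY PLAN is used only so the example runs anywhere; the same approach applies to EXPLAIN in MySQL and PostgreSQL, and the table and index are hypothetical.

```python
import sqlite3

# Self-contained illustration of the tuning loop: inspect the execution plan
# for a slow filter, add an index, and confirm the plan now uses it. SQLite
# stands in here so the example runs anywhere; the workflow mirrors EXPLAIN
# in MySQL and PostgreSQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, created_at TEXT)")
conn.executemany(
    "INSERT INTO orders (customer_id, created_at) VALUES (?, ?)",
    [(i % 1000, f"2024-01-{i % 28 + 1:02d}") for i in range(10_000)],
)

query = "SELECT COUNT(*) FROM orders WHERE customer_id = 42"

print("before index:")
for row in conn.execute(f"EXPLAIN QUERY PLAN {query}"):
    print(" ", row)          # full table scan

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

print("after index:")
for row in conn.execute(f"EXPLAIN QUERY PLAN {query}"):
    print(" ", row)          # search using idx_orders_customer
```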
About the company Exadel
Employee benefits
- Team buildings
- Work-life balance
- Psychotherapist support
- Sports compensation
- Training compensation
- Health insurance
- SQL
- RDBMS
- T-SQL
- PostgreSQL
- MySQL
- ETL
- Microsoft Fabric
- Microsoft Power BI
- Azure
- AWS
- Git
- Cypress
- Snowflake
- Apache Airflow
- AWS Redshift
We are looking for a SQL & Data Engineer who will be responsible for building scalable ETL pipelines, optimizing SQL queries, and developing efficient data models to power business intelligence and reporting. This role places a strong emphasis on database management, data processing, and ETL automation, while also integrating Microsoft Fabric and Power BI for visualization and reporting.
This role is for you if you:
- Have deep expertise in SQL (T-SQL, PostgreSQL, MySQL) and query optimization.
- Have experience designing and maintaining ETL pipelines for large-scale data processing.
- Can structure and model data efficiently for BI and reporting purposes.
- Have experience working with Microsoft Fabric, Power BI, and cloud data solutions.
- Are skilled in data warehousing, data integration, and performance tuning.
About the client:
Real-time 3D rendering and animation software used for creating high-quality visuals, product renderings, and animations. It is popular in industrial design, engineering, and marketing due to its ease of use and fast rendering capabilities. The software offers real-time rendering, physically accurate materials and lighting, and supports both CPU and GPU acceleration. It integrates with CAD programs like SolidWorks, Rhino, and Autodesk Inventor, making it a go-to choice for designers and engineers who need quick yet photorealistic concept visualizations.
About the project:
Join a leading software development company specializing in real-time 3D rendering and animation solutions. Their intuitive tools empower professionals in product design, engineering, marketing, and architecture to create photorealistic visuals with ease. By supporting a wide range of 3D formats and offering advanced features like lighting simulation and environment controls, the company streamlines creative workflows. As a BI Developer, you will play a crucial role in leveraging data to drive business decisions, ensuring seamless data integration, and developing impactful Power BI dashboards that enhance operational efficiency.
Requirements:
- Strong SQL (T-SQL, PostgreSQL, MySQL) expertise, including query optimization and indexing.
- Hands-on experience with ETL processes and workflow automation.
- Data modeling expertise, including normalization, denormalization, and performance tuning.
- Experience with Microsoft Fabric and Power BI for BI and reporting.
- Knowledge of data warehousing concepts and best practices.
- Familiarity with cloud platforms (Azure, AWS) is a plus.
- Strong problem-solving skills and ability to work with large datasets.
- Ability to work independently and as part of a team.
- Bachelor’s degree in Computer Science, Information Technology, or a related field is a plus.
Responsibilities:
- Develop and optimize SQL queries, stored procedures, and data models to support data processing and analytics.
- Design, build, and maintain ETL pipelines to integrate data from multiple sources.
- Structure and clean raw data to enable efficient BI reporting and decision-making.
- Implement and manage Microsoft Fabric and Power BI dashboards for real-time analytics.
- Monitor and optimize database performance, ensuring reliability and scalability.
- Collaborate with finance, product, and business teams to define data requirements and reporting needs.
About the company FlexMade
Employee benefits
- English-speaking environment
- Multinational team
- Flexible working hours
- Educational programs, courses
- Java
- Scala
- JMM
- UML
- HBase
- Cassandra
- Kafka
SE Ranking is the company behind a powerful and intuitive SEO platform that has been trusted by over a million businesses, agencies, and SEO professionals since 2013. We are looking for a highly motivated and proactive Senior Big Data / Java Engineer who is excited to take on new challenges.
Responsibilities:
- Design and develop a large distributed system with multiple nodes.
- Create complex MapReduce and Spark pipelines for processing data volumes reaching hundreds of terabytes.
- Optimize and refine existing Spark pipelines.
- Develop ETL pipelines for OLAP databases; write and optimize SQL queries.
- Maintain system stability and respond promptly to emerging issues.
Requirements:
- Experience: At least 7 years of development experience in Java/Scala.
- Expert proficiency in Java, including a deep understanding of multi-threaded and concurrent development.
- Knowledge of JMM (Java Memory Model) and its capabilities in multi-threading.
- Database experience: Schema design, understanding of key concepts (views, joins, transactions, transaction isolation levels, locks, etc.).
- Performance optimization skills for Java applications.
- Experience with distributed systems, including independent design of libraries and subsystems.
- Familiarity with UML and the ability to document architectural decisions.
- Strong OOP understanding and practical application of design patterns.
- Task management skills: Ability to assess complexity, decompose tasks, and prioritize execution.
- Technical documentation skills: Clear and structured documentation of solutions and recommendations.
Personal Qualities:
- Ability to make decisions and take responsibility for them.
- Capability to foresee potential issues at the design stage.
- Willingness to acknowledge mistakes, analyze, and correct them.
- Attention to detail and commitment to delivering high-quality solutions.
Would be an advantage:
- Experience with large-scale data storage and processing systems (HBase, Cassandra, MR Jobs, Flow computations, Kafka, etc.).
- Proficiency in Scala (if Java is the primary language).
About the company SE Ranking
Employee benefits
- English Courses
- Team buildings
- Accounting support
- Flexible working hours
- Training compensation
- Health insurance
- SQL
- Apache Airflow
- Redshift
- Python
- CI/CD
We are seeking a highly skilled Senior Data Engineer to design, develop, and optimize our Data Warehouse solutions. The ideal candidate will have extensive experience in ETL/ELT development, data modeling, and big data technologies, ensuring efficient data processing and analytics. This role requires strong collaboration with Data Analysts, Data Scientists, and Business Stakeholders to drive data-driven decision-making.
Does this relate to you?
- 5+ years of experience in Data Engineering or a related field.
- Strong expertise in SQL and data modeling concepts.
- Hands-on experience with Airflow.
- Experience working with Redshift.
- Proficiency in Python for data processing.
- Strong understanding of data governance, security, and compliance.
- Experience in implementing CI/CD pipelines for data workflows.
- Ability to work independently and collaboratively in an agile environment.
- Excellent problem-solving and analytical skills.
A new team member will be in charge of:
- Design, develop, and maintain scalable data warehouse solutions.
- Build and optimize ETL/ELT pipelines for efficient data integration.
- Design and implement data models to support analytical and reporting needs.
- Ensure data integrity, quality, and security across all pipelines.
- Optimize data performance and scalability using best practices.
- Work with big data technologies such as Redshift.
- Collaborate with cross-functional teams to understand business requirements and translate them into data solutions.
- Implement CI/CD pipelines for data workflows.
- Monitor, troubleshoot, and improve data processes and system performance.
- Stay updated with industry trends and emerging technologies in data engineering.
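A hedged sketch of the kind of Redshift load step the responsibilities above imply: a COPY from S3 into a staging table over the standard PostgreSQL protocol, followed by an upsert into the target. The cluster endpoint, bucket, IAM role, and table names are placeholders.

```python
import psycopg2

# Hypothetical ELT load step: bulk-load staged files from S3 into a Redshift
# staging table via COPY, then upsert into the target table. Cluster endpoint,
# bucket, IAM role, and table names are placeholders.
COPY_SQL = """
COPY staging.orders
FROM 's3://example-bucket/exports/orders/'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
FORMAT AS PARQUET
"""

DELETE_SQL = """
DELETE FROM analytics.orders USING staging.orders
WHERE analytics.orders.order_id = staging.orders.order_id
"""

INSERT_SQL = "INSERT INTO analytics.orders SELECT * FROM staging.orders"

def load_orders() -> None:
    conn = psycopg2.connect(
        host="example-cluster.abc123.eu-west-1.redshift.amazonaws.com",
        port=5439,
        dbname="analytics",
        user="etl_user",
        password="***",
    )
    try:
        with conn.cursor() as cur:
            cur.execute(COPY_SQL)
            cur.execute(DELETE_SQL)   # remove rows about to be replaced
            cur.execute(INSERT_SQL)   # insert the fresh batch
        conn.commit()
    finally:
        conn.close()

if __name__ == "__main__":
    load_orders()
```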
About the company Glorium Technologies
Employee benefits
- English Courses
- Accounting support
- Flexible working hours
- Sports compensation
- Health insurance
- Educational programs, courses
- Regular salary review
- Python
- FastAPI
- Flask
- Django
- CI/CD
- Docker
- Apache Airflow
- SQL
- NoSQL
- Snowflake
- DynamoDB
- TensorFlow
- PyTorch
- scikit-learn
- Keras
- Apache Spark
- Apache Flink
- AWS Kinesis
Pwrteams are seeking a Senior Data Engineer to join the TUI Musement Data Science & Analytics Team. As a Data & ML Engineer, you will support the other data roles at TUI Musement by building the tooling used for analytical and operational Data and ML/AI products, helping them deliver value to the business, and managing the platform used by all Data and ML/AI products in the Data area.
Responsibilities:
- Develop and maintain cutting-edge Data and ML/AI solutions for TUI Musement that meet our Service Level Agreements (SLAs) and fit within our budget.
- Build and optimize Data and ML/AI tools to empower your fellow data professionals, helping them unlock the full potential of our platform.
- Design and manage ETL pipelines and workflows using technologies like SQL and AWS Big Data services, ensuring that data flows seamlessly from source to insight.
- Create robust Machine Learning pipelines that handle the lifecycle of ML models from development to deployment, monitoring, and optimization.
- Implement and manage APIs for model inference, supporting Data Scientists in deploying their solutions at scale.
- Monitor and enhance ML Data products, fine-tuning them for performance, accuracy, and scalability.
- Lead efforts to automate and optimize internal processes – whether it’s reducing manual work, improving data delivery, or re-designing infrastructure for greater efficiency.
- Collaborate with your team in an agile environment, following operational and control procedures to deliver top-tier results.
- Contribute to the design and modeling of a scalable Enterprise Data Platform within the cloud (AWS), ensuring that our data architecture remains future-proof.
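One of the responsibilities above mentions APIs for model inference; a minimal FastAPI sketch of such an endpoint, with a hand-written score standing in for a real model. The route and field names are assumptions.

```python
from fastapi import FastAPI
from pydantic import BaseModel

# Minimal model-inference API sketch (as mentioned in the responsibilities):
# a dummy scoring function stands in for a real ML model; field names and the
# route are assumptions for illustration.
app = FastAPI(title="demo-inference-api")

class TourFeatures(BaseModel):
    price: float
    rating: float
    reviews: int

@app.post("/predict")
def predict(features: TourFeatures) -> dict:
    # Placeholder "model": a hand-written score instead of a trained artifact.
    score = 0.5 * features.rating + 0.1 * min(features.reviews, 100) / 100
    return {"conversion_score": round(score, 3)}

# Run locally with: uvicorn inference_api:app --reload
```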
Qualifications:
- Proficient in Python, with a commitment to writing clean, maintainable, and testable code.
- Experience with API frameworks like FastAPI, Flask, or Django.
- Demonstrated expertise in Data and ML pipelines, with hands-on experience in CI/CD frameworks.
- Comfortable building containerized solutions with Docker and using orchestration tools like Airflow.
- Proficient in working with both SQL and NoSQL databases: Our current stack includes technologies like Snowflake and DynamoDB, among others.
- A strong understanding of data transformation, metadata management, and workflow orchestration.
- Familiarity with Machine Learning frameworks and libraries is highly valued.
- Experience with AWS cloud services and stream-processing systems like Apache Spark, Apache Flink, or Kinesis is a plus.
- You have an analytical mindset and a passion for continuous learning, embracing challenges as opportunities.
- Fluent in English, you’re able to communicate both technical details and big-picture concepts with ease, whether you’re speaking to senior stakeholders or peers in the technical team.
- You’re passionate about data, machine learning, and finding solutions to complex challenges, but you also thrive in a collaborative environment.
- You see data not just as numbers but as the foundation for meaningful change, and you’re excited about the opportunity to help revolutionize the travel experience.
About the company Pwrteams
Employee benefits
- English Courses
- Relocation assistance
- Work-life balance
- Coffee, fruit, snacks
- Sports compensation
- Training compensation
- Health insurance
- Paid sick leave
- Paid vacation
- Regular salary review
- Snowflake
- Redshift
- BigQuery
- dbt
- SQL
- Python
- Java
- Scala
- C++
- Apache Airflow
- Luigi
- Azkaban
- Microsoft Power BI
- Tableau
Pwrteams are seeking a Senior Analytics Engineer to join the TUI Musement Data Science & Analytics Team. As an Analytics Engineer, you will support the business experts at TUI Musement in defining requirements for the Data ecosystem, ensure the design, construction, and testing of solutions, and model the data that Data Engineers provide to the Data ecosystem.
Imagine being at the forefront of how data transforms the travel industry!
Responsibilities:
- Delivering high-quality Data solutions for TUI MM in accordance with the agreed SLA´s and within the budget.
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional/non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Follow and maintain operational and control procedures as a part of an agile team.
- Help in the design and modeling of the Data in an Enterprise Data Platform and provide effective solutions in a cloud environment.
Qualifications:
- 5+ years of experience building processes supporting data transformation, data structures, metadata, dependency and workload management.
- 5+ years of experience working on data warehousing platforms (Snowflake, Redshift, BigQuery, etc.), including data transformation, data model design, and query optimization strategies.
- Solid understanding of data modeling and database design principles.
- Experience with DBT is highly valuable.
- Proficiency with SQL and experience with object-oriented/object function scripting languages (Python, Java, C++, Scala) is highly valuable.
- Experience with data pipeline and workflow management tools: Airflow, Azkaban, Luigi is valuable.
- Experience with BI tools like PowerBI, Tableau.
- Analytical, conceptual, and implementation skills. Ability to analyze business requirements in various business functional areas and translate them into conceptual, logical, and physical data models.
- Passion for learning without fear of failure in a passionate and agile team.
- Excellent communicator, both verbal and written, in English. Comfortable communicating high level concepts to senior stakeholders whilst also being able to delve into the detail of complex changes when required.
About the company Pwrteams
Employee benefits
- English Courses
- Relocation assistance
- Work-life balance
- Coffee, fruit, snacks
- Sports compensation
- Training compensation
- Health insurance
- Paid sick leave
- Paid vacation
- Regular salary review
- JavaScript
- RESTful API
- ETL
- Agile
- Scrum
- Jira
- Salesforce
- Adobe
- GDPR
- Tableau
- Microsoft Power BI
- Twilio
- Braze
NBS LVIV is seeking a highly motivated and experienced Twilio Segment CDP and Braze Developer to join our Digital Marketing and Data Integration team. This role will focus on maintaining and implementing integrations for Twilio Segment CDP and Braze platforms to support our marketing efforts across Europe. The ideal candidate will work closely with campaign managers and other team members to ensure seamless data flow and efficient campaign execution.
Key Responsibilities:
- Design, develop, and maintain integrations between Twilio Segment CDP, Braze, and other internal and external systems.
- Collaborate with campaign managers to configure and optimize data flow for marketing campaigns.
- Monitor and troubleshoot data integration issues, ensuring data accuracy and timely delivery.
- Develop and maintain documentation for data flows, system integrations, and processes.
- Participate in code reviews and provide constructive feedback to team members.
- Stay current on industry trends, best practices, and new features of Twilio Segment CDP and Braze.
- Collaborate with cross-functional teams to identify opportunities for improvement and drive continuous integration and deployment practices.
Required Skills and Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 3+ years of experience in software development, with a focus on data integration and API development.
- Strong proficiency in programming languages such as JavaScript.
- Experience working with RESTful APIs and web services.
- Knowledge of data modeling, data warehousing, and ETL processes.
- Familiarity with Agile/Scrum methodologies and project management tools such as JIRA.
- Excellent problem-solving skills and attention to detail.
- Proficient in English, both written and spoken.
Desired Skills:
- Experience with other marketing automation and customer data platforms (e.g., Salesforce, Adobe, etc.).
- Knowledge of GDPR and other data privacy regulations.
- Familiarity with data visualization tools (e.g., Tableau, Power BI).
- Experience working in a distributed team or remote work environment.
- Hands-on experience with Twilio Segment CDP and Braze platform integrations.
Soft Skills:
- Strong communication and interpersonal skills.
- Ability to work effectively in a team environment.
- Adaptability and willingness to learn new technologies.
- Self-motivated and able to work independently with minimal supervision.
- Strong time management and prioritization skills.
About the company Nestle Integrated Business Services Lviv
Employee benefits
- Relocation assistance
- Health insurance
- Paid sick leave
- Paid vacation
- Educational programs, courses
- Regular salary review
- Cisco CCNP
- VMware
- AWS
- Python
- Docker
- Kubernetes
- Citrix
- Fortinet
What you will do:
- Participate in the delivery of data center networking projects: developing solutions, creating a work plan, implementation, producing technical documentation, presenting the solution, and instructing the customer.
Professional knowledge and skills:
- Knowledge at the level of the following certifications:
- Cisco (CCNP DC);
- VMWare (VCP-NV/VCAP-NV);
- AWS (ANS).
- Knowledge of and experience with the following solutions:
- Cisco ACI, NDO/NDFC;
- VMWare NSX.
- A high level of technical knowledge in the relevant area.
- Strong command of the technologies and functionality of the main products, and knowledge of key vendors' products.
- Relevant certificates and competencies from vendors or other specialized institutions.
- Practical experience in creating architectural solutions and implementations in the relevant technical area.
- A broad outlook and continuous professional development, confirmed by successful projects and positive customer feedback.
- Command of a wide range of tools for creating and implementing technical solutions in your area.
- Ability to implement technical solutions quickly and to a high standard within your competencies.
- Ability to develop and write instructions and create as-built documentation.
- Ability to argue your point convincingly and defend technical solutions.
Nice to have:
- NGFW – Fortinet/Cisco FTD/Paloalto;
- ADC – Citrix/F5/A10;
- *nix;
- docker/k8s;
- Python.
About the company Netwave
Employee benefits
- Team buildings
- Work-life balance
- Coffee, fruit, snacks
- Paid sick leave
- Paid vacation
- Educational programs, courses
- Regular salary review