Data Engineer / Big Data Architect Vacancies

Outstaff, Consulting / Integrator
Experience: 5+ years · Senior · Full-time · English: any · Test task included · Remote · Ukraine, Bulgaria, Portugal, Romania, Poland
05.02.2025
  • Golang
  • SQL
  • NoSQL
  • Kafka
  • Apache Pulsar
  • ELK
  • Redis
  • Apache Spark
  • Apache Flink
  • Kubernetes

About the Product:

The product of our client stands at the forefront of advanced threat detection and response, pioneering innovative solutions to safeguard businesses against evolving cybersecurity risks. It is a comprehensive platform that streamlines security operations, empowering organizations to swiftly detect, prevent, and automate responses to advanced threats with unparalleled precision and efficiency.

About the Role:

We are looking for a proactive, innovative, and responsible Senior Big Data Engineer with extensive knowledge of and experience with GoLang, streaming and batch processing, and building a DWH from scratch. Join our high-performance team to work with cutting-edge technologies in a dynamic and agile environment.

Key Responsibilities:

  • Design & Development: Architect, develop, and maintain robust distributed systems with complex requirements, ensuring scalability and performance.
  • Collaboration: Work closely with cross-functional teams to ensure the seamless integration and functionality of software components.
  • System Optimization: Implement and optimize scalable server systems, utilizing parallel processing, microservices architecture, and security development principles.
  • Database Management: Effectively utilize SQL, NoSQL, Kafka/Pulsar, ELK, Redis and column store databases in system design and development.
  • Big Data Tools: Leverage big data tools such as Spark or Flink to enhance system performance and scalability (experience with these tools is advantageous).
  • Deployment & Management: Demonstrate proficiency in Kubernetes (K8S) and familiarity with GTP tools to ensure efficient deployment and management of applications.

Required Competence and Skills:

  • At least 5 years of experience in Data Engineering domain.
  • At least 2 years of experience with GoLang.
  • Proficiency in SQL, NoSQL, Kafka/Pulsar, ELK, Redis and column store databases.
  • Experienced with big data tools such as Spark or Flink to enhance system performance and scalability.
  • Proven experience with Kubernetes (K8S) and familiarity with GTP tools to ensure efficient deployment and management of applications.
  • Ability to work effectively in a collaborative team environment.
  • Excellent communication skills and a proactive approach to learning and development.

Advantages:

  • Experience in data cybersecurity domain.
  • Experience in startup growing product.

About the company: Adaptiq

Adaptiq is a technology consulting company specializing in building and scaling R&D teams for top-tier, fast-growing product companies across various industries.
Founded: 2020
Employees: 51-100
Website: adaptiq.co

Experience: any · Middle, Senior · Full-time · English: any · Test task included · Office, Remote · Ukraine (Kyiv, Lviv, Kharkiv, Dnipro, Odesa), Brazil, Bulgaria, Georgia, Poland
04.02.2025
  • SQL
  • Azure Data Factory
  • Microsoft Fabric
  • Python
  • Apache Spark
  • PySpark
  • Microsoft Azure
  • Azure Portal
  • MSSQL
  • PostgreSQL
  • Redshift
  • Java
  • Scala
  • Microsoft Power BI

We are seeking an Azure Data Engineer to join our team. In this role, you will be responsible for creating Microsoft Fabric pipelines and organizing ETL/ELT processes for a leading investment management company.
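
To make the scope concrete, here is a minimal sketch of the kind of ELT step such a pipeline might run in a Fabric Spark notebook; the landing path, column names, and target table are illustrative assumptions, not details from the vacancy:

```python
# Minimal ELT sketch for a Fabric / Spark notebook (all names are assumptions).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("fabric-elt-sketch").getOrCreate()

# Extract: read a raw file from the assumed lakehouse landing zone.
raw = spark.read.option("header", "true").csv("Files/raw/trades.csv")

# Transform: enforce types and de-duplicate on a business key.
clean = (raw
         .withColumn("trade_date", F.to_date("trade_date", "yyyy-MM-dd"))
         .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
         .dropDuplicates(["trade_id"]))

# Load: persist as a Delta table that downstream reports can query.
clean.write.mode("overwrite").format("delta").saveAsTable("cleaned_trades")
```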

Requirements

  • Excellent command of SQL
  • Practical experience with Azure data services (especially Azure Data Factory) or MS Fabric
  • Practical experience with Python
  • Good knowledge level and practical experience with Spark, especially PySpark
  • Experience with Azure Security services and Azure Portal
  • Knowledge and practical experience with data warehousing, data governance and management

Nice to have

  • Practical experience with MS SQL
  • Practical experience with PostgreSQL
  • Practical experience with Redshift
  • Practical experience with Java and Scala
  • Practical experience with PowerBI

About the company: DataArt

DataArt's specialists help clients develop custom software that improves their operations and expands their market reach, drawing on teams of highly skilled engineers located around the world, a deep understanding of industry sectors, and continuous research into technology. DataArt works with clients at any scale and on any platform, adapting alongside them as they change, which allows the company to provide reliable, high-quality solutions and long-term relationships.
Founded: 1997
Employees: 1001-5000
Website: dataart.team

Experience: any · Middle, Senior · Full-time · English: Upper-Intermediate / B2 · Test task included · Office, Remote · Ukraine, Kyiv
04.02.2025
  • SQL
  • MySQL
  • PostgreSQL
  • Microservices
  • AWS
  • Snowflake
  • Redshift
  • Kafka
  • AWS Kinesis
  • NoSQL
  • MongoDB
  • DynamoDB

We are seeking a highly knowledgeable Data Specialist to take the lead in managing and centralizing data from our various microservices and third-party systems. Your expertise will guide us in creating a unified, queryable data layer to benefit our global humanitarian network. You will be the go-to authority for all data-related strategies and implementations.

Key Responsibilities:

  • Collect, clean, and centralize data from multiple microservices and third-party systems;
  • Design and implement a centralized data repository or warehouse;
  • Develop and optimize SQL queries for data retrieval and analytics;
  • Work closely with software engineers and data analysts to fulfill data requirements;
  • Ensure data integrity, security, and compliance with relevant regulations.

Required Skills:

  • Strong expertise in SQL databases like MySQL and PostgreSQL;
  • Experience with data integration tools and platforms;
  • Familiarity with microservices architecture;
  • Proficient in data modeling and schema design;
  • Strong knowledge of data security and compliance measures.

Nice-to-Have:

  • Experience with data warehousing solutions like Snowflake or Redshift;
  • Familiarity with data pipeline tools like Apache Kafka or AWS Kinesis;
  • Knowledge of NoSQL databases like MongoDB or DynamoDB.

Personal Qualities and Soft Skills:

  • Highly autonomous and capable of managing tasks efficiently;
  • Excellent problem-solving capabilities;
  • Strong communication skills, especially in a remote work environment.

About the company: Matoffo

Matoffo is an international IT company specializing in cloud solutions for technology products. The company is made up of highly skilled engineers who develop scalable cloud applications, offering AI, Cloud, DevOps, and Data & Software Engineering services that accelerate innovation and product rollout, and building reliable, secure cloud infrastructure for clients in the United States, Canada, and Europe.
Founded: 2017
Employees: 11-50
Website: matoffo.com

Experience: 3+ years · Middle, Senior · Full-time · English: Upper-Intermediate / B2 · Test task included · Office, Remote · Ukraine, Kyiv
04.02.2025
  • AWS
  • ETL
  • PySpark
  • SQL
  • NoSQL
  • DataBricks
  • Agile
  • Unit testing

We are seeking a skilled Data Engineer to join our team. The Data Engineer will be responsible for designing, developing, and maintaining our data infrastructure to enable efficient data processing, analysis, and reporting. The ideal candidate must have strong experience with Databricks, AWS, Python, PySpark, SQL, and NoSQL databases.

Key Responsibilities:

  • Design, develop, and maintain data infrastructure on AWS cloud platform;
  • Develop ETL pipelines to extract, transform, and load data from various sources to data warehouse using PySpark and SQL;
  • Develop automated data quality checks and validation processes to ensure data integrity (see the sketch after this list);
  • Develop data models and schemas to support data analysis and reporting;
  • Collaborate with data analysts and scientists to design and implement data solutions that meet business requirements;
  • Monitor and maintain the performance, availability, and scalability of the data infrastructure.
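
A rough sketch of the automated data-quality gate mentioned in the responsibilities; the S3 path, column names, and rules are illustrative assumptions:

```python
# PySpark data-quality gate sketch (paths and columns are assumptions).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-gate-sketch").getOrCreate()
df = spark.read.parquet("s3://example-bucket/staging/orders/")

total = df.count()
null_ids = df.filter(F.col("order_id").isNull()).count()
dup_ids = total - df.dropDuplicates(["order_id"]).count()

# Fail the pipeline run loudly if integrity rules are violated.
if null_ids > 0 or dup_ids > 0:
    raise ValueError(
        f"Data quality check failed: {null_ids} null ids, {dup_ids} duplicate ids"
    )
```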

Requirements:

  • Minimum of 3 years of experience as a Data Engineer;
  • Strong experience with AWS cloud services;
  • Strong experience with Spark (PySpark) for data processing and transformation;
  • Strong experience with SQL and NoSQL databases;
  • Experience with data modeling and schema design;
  • Strong problem-solving and analytical skills;
  • Excellent communication in English and collaboration skills;
  • Ability to work in a fast-paced environment and handle multiple projects simultaneously.

Preferred Qualifications:

  • Experience with Databricks;
  • Experience with software development best practices, such as Agile methodology and unit testing.

About the company: Matoffo

Matoffo is an international IT company specializing in cloud solutions for technology products. The company is made up of highly skilled engineers who develop scalable cloud applications, offering AI, Cloud, DevOps, and Data & Software Engineering services that accelerate innovation and product rollout, and building reliable, secure cloud infrastructure for clients in the United States, Canada, and Europe.
Founded: 2017
Employees: 11-50
Website: matoffo.com

Product / Startup, Outsource
Experience: 3+ years · Middle, Senior · Full-time · English: Upper-Intermediate / B2 · Test task included · Office, Hybrid · Kyiv
30.01.2025
  • AWS
  • ETL
  • ELT
  • Amazon S3
  • AWS Lambda
  • AWS Glue
  • Athena
  • AWS SQS
  • CloudWatch
  • EC2
  • SQL
  • Python
  • Apache Spark
  • C#
  • CI/CD
  • Git
  • Shell
  • Terraform
  • YAML
  • MSSQL
  • T-SQL
  • SSIS
  • SSAS
  • SSRS
  • Microsoft Power BI
  • DAX
  • Visual Studio
  • Agile

Are you passionate about healthcare and technology? Join our IT department as a Data Engineer.

What you will do:

  • ETL/ELT development: design, develop, and maintain ETL/ELT routines on AWS cloud data lakes and MS SQL on-prem DWH (SSIS)
  • Reporting: design, develop, and maintain reports using Power BI and SSRS (AWS QuickSight experience is a plus)
  • Data cube management: maintain existing data cubes using multidimensional and tabular models from various data sources (SSAS) and migrate data cubes to AWS data lakes
  • Data modeling: design and develop scalable and extendable data models, including logical/physical layers and ETL processes (on-premise/on-cloud)
  • Technical analysis: conduct technical analysis of existing solutions and prepare supporting technical documentation
  • Data analysis: analyze and organize raw data, evaluate business needs, and prepare data for analysis
  • Self-service support: assist business users with Power BI self-service, including teaching, data sourcing, DAX formula building, and optimization
  • Collaboration: work closely with data engineers, architects, data scientists, and business stakeholders to deliver efficient data engineering solutions
  • Maintenance: maintain existing solutions, resolve incidents, and optimize data infrastructure
  • Automation: develop and automate data workflows, including data ingestion, aggregation, and ETL processing
  • Data preparation: turn raw data in data lakes/data warehouses into consumable datasets for stakeholders (see the sketch after this list)
  • DevOps integration: collaborate with the DevOps team to automate deployment processes using PowerShell and Azure DevOps
  • Data quality: improve data quality and efficiency across the data infrastructure.
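
As one illustrative example of the AWS side of this stack, a data-lake query through Athena with boto3 might look roughly like this; the region, database, table, and bucket names are invented for the example:

```python
# Athena query sketch via boto3 (all resource names are assumptions).
import boto3

athena = boto3.client("athena", region_name="eu-west-1")

response = athena.start_query_execution(
    QueryString="SELECT region, SUM(amount) AS total FROM sales GROUP BY region",
    QueryExecutionContext={"Database": "analytics_lake"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
# The query runs asynchronously; poll get_query_execution() with this id.
print(response["QueryExecutionId"])
```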

Your profile:

  • Education: Degree in computer science, IT, or a similar field; a master’s degree is a plus;
  • Data engineering certification is a plus.

Hard skills:

  • 3-5 years of experience with AWS cloud and AWS services (S3 Buckets, Lambda, Glue, Athena, SQS queues, CloudWatch, EC2);
  • Proficiency in programming languages: SQL, Python, Spark, C#;
  • Familiarity with software DevOps CI/CD tools: Git, Shell Script, Terraform, YAML;
  • 5+ years of experience with MS SQL (T-SQL, SSIS, SSAS, SSRS);
  • Experience with Power BI and DAX;
  • Proficiency with MS Visual Studio and SQL Management Studio;
  • Technical expertise in data models and data analysis techniques (AI and ML techniques are a plus);
  • Solid understanding of database schema design;
  • Comprehensive understanding of privacy and security development best practices.

Soft skills:

  • Analytical and numerical thinking;
  • Strong teamwork and communication skills;
  • Experience working directly with technical and business teams;
  • Ability to learn quickly, be organized, and be detail-oriented;
  • Excellent problem-solving skills and ability to troubleshoot complex data pipeline issues;
  • Strong collaboration skills with the ability to work effectively in a cross-functional team environment;
  • Experience with Agile development methodologies is a plus.

Preferred skills:

  • Knowledge of other databases such as Sybase, MySQL, DynamoDB, and MongoDB;
  • Knowledge of other reporting tools: Tableau, AWS QuickSight;
  • Technical expertise in data models and data analysis techniques (AI and ML techniques).

About the company: Materialise

Materialise is an international high-tech company known for its extensive work in industrial and medical prototyping, and a provider of 3D printing software and CAD software. The company delivers pioneering solutions that enable flexible industrial manufacturing and mass personalization in the aerospace, automotive, and MedTech fields, builds a diverse range of software, and provides services to support projects.
Founded: 1990
Employees: 1001-5000
Website: materialise.com

Product / Startup
Experience: 1+ year · Junior, Middle · Full-time · English: Advanced / Fluent / C1 · Test task included · Remote, Hybrid · Kyiv
17.01.2025
  • Azure Databricks
  • Apache Spark
  • Delta Lake
  • Python
  • PySpark
  • SQL
  • Github
  • GitHub Actions
  • Terraform

We are looking for a commercially minded, highly skilled Data Engineer to have an important part in implementing 3Shape’s company-wide Customer Data Strategy. As part of our BI team in Ukraine, you’ll collaborate closely with the Customer Data Strategy team in Denmark, working to shape and develop initiatives that are a cornerstone of our overall business strategy.
The Customer Data Strategy is a high-priority initiative at 3Shape and holds significant potential and buy-in from senior management. The team, which currently includes a Data Analyst, Data Engineer, Data Architect, and Manager, is in an exciting phase of expansion – and this is your opportunity to contribute to its future.

Key responsibilities:

  • Develop, implement, maintain and optimize Azure Databricks in collaboration and alignment with cross-functional teams to enable a ‘one-stop-shop’ for analytical data in 3Shape (see the sketch after this list)
  • Translate customer-focused commercial needs into concrete data products, incl. detailing of technical requirements by facilitating workshops with non-technical stakeholders
  • Build data products to unlock commercial value, and help integrate systems, create pipelines etc. to automate and optimize key commercial processes
  • Coordinate and facilitate technical alignment meetings between functions, incl. ensuring that what is being built is aligned with the Data Architect, business needs & best practices, i.e. IT security and data compliance
  • Act as customer data ambassador towards a broad range of stakeholders to improve ‘data literacy’, ensure best practice and convey technical concepts to non-technical stakeholders.
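
For a sense of what one pipeline step here might look like, below is a minimal Delta Lake upsert sketch using the delta-spark package on Databricks; the source path, table, and key column are illustrative assumptions:

```python
# Idempotent Delta Lake upsert sketch (table and key names are assumptions).
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
updates = spark.read.json("/mnt/raw/customers/")  # assumed incoming feed

target = DeltaTable.forName(spark, "analytics.customers")
(target.alias("t")
 .merge(updates.alias("s"), "t.customer_id = s.customer_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```

A MERGE like this keeps reruns safe: reprocessing the same feed updates existing rows instead of duplicating them.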

What we are hoping you have/are:

  • A master’s degree in Computer Science, Data Engineering or equivalent and 1-2 years of experience working with data engineering in a larger organization, a tech start-up or as an external consultant
  • Extensive experience with Azure Databricks, Apache Spark, and Delta Lake
  • Experience integrating various IT platforms and systems into Databricks
  • Proficiency in Python, PySpark and SQL, and a previous experience with optimizing and automating data engineering processes
  • Good understanding of the importance of quality and what it means to contribute with high-quality code
  • Familiarity with GitHub and GitHub Actions for CI/CD processes
  • Knowledge of Terraform for infrastructure provisioning and management is beneficial
  • Technical flair and curiosity – we want you to be comfortable asking questions and we encourage a ‘dig deeper’ mindset
  • Good problem-solving and communication skills and ability to work in a collaborative team environment and engage effectively with a broad group of stakeholders, incl. non-technical stakeholders and senior management
  • A ‘go-do’ attitude & deep motivation for making a successful digital transformation journey
  • Full professional proficiency in English.

Join one of the most exciting Danish tech companies in the medical device industry and make an impact. With us you will be able to work on solutions used by thousands of dental professionals worldwide.

Информация о компании 3Shape

Компания 3Shape специализируется на разработке 3D-сканеров и программных решений, с помощью которых специалисты в области стоматологии и слухопротезирования могут предоставлять более качественные услуги широкому кругу пациентов. Продукты компании – это высокотехнологичные инновационные сканеры и программные CAD-решения, существенно улучшающие жизнь пациентов и врачей-стоматологов во всем мире.
Год основания: 2000
Количество сотрудников: 1001-5000
Сайт: 3shape.com

Product / Startup
Experience: any · Senior · Full-time · English: any · Test task included · Office · Kyiv
17.01.2025
  • Azure Databricks
  • ETL
  • ELT
  • OLTP
  • OLAP

We are looking for an experienced Data Engineer to work in our Kyiv office, who will crunch the data and become wiser on our business, our global install base, and millions of dental cases.

In your daily work you will:

  • define and adapt the data collection strategies across our products and align them across R&D
  • introduce and monitor the data quality metrics
  • sanity check and validate usage data
  • do data analysis with the Product Management
  • make relevant reports and dashboards with the Product Management
  • align with main stakeholders in the R&D Management, Development Teams, Product Management, Finance, BI team, and corporate IT

You are expected to possess the following personal skills:

  • An analytical mindset
  • An ability to apply critical sense to analysis results
  • An ability to collaborate with stakeholders
  • Proficient communication skills
  • A detail-oriented work approach

You will be able to utilize the following experience:

  • Data Engineering (e.g., with Azure Databricks)
  • Database design and modeling
  • Data integration and ETL/ELT (Extract, Transform, Load) processes
  • Ensuring data quality and governance: e.g., implementing best practices for data validation, consistency, and security to maintain high data quality
  • Knowledge about OLTP/OLAP (online transaction/analytical processing)
  • Programming experience is beneficial

About the company: 3Shape

3Shape specializes in developing 3D scanners and software solutions that help specialists in dentistry and hearing care deliver better services to a wider range of patients. The company's products are high-tech, innovative scanners and CAD software solutions that significantly improve the lives of patients and dental practitioners around the world.
Founded: 2000
Employees: 1001-5000
Website: 3shape.com

Consulting / Integrator
Experience: 3+ years · Middle · Full-time · English: Upper-Intermediate / B2 · Test task included · Office, Hybrid · Kyiv, Lviv
09.01.2025
  • API
  • Git
  • Python
  • R
  • Apache Spark
  • Amazon S3
  • AWS Glue
  • Athena
  • Amazon Redshift
  • AWS SageMaker

Itera is looking for an engaged, curious, and collaborative Data Engineer with solid technical background skills and excellent communication skills to become part of the development team.
We are looking for an open-minded person who embraces continuous self-development and can solve challenging tasks with guidance/support or on their own. This position is ideal for those looking for the opportunity to work in an international, self-motivated, and distributed team with modern technologies.

Tasks and responsibilities:

  • Build, maintain, design as well as optimize and tune data processing pipelines.
  • Work together with data analysts and data scientists on the deployment and monitoring of our solution
  • Help define our development environment and communicate the best development practices within the organization (i.e. code reviews, testing, etc.).
  • Work with the product management team to find the best solutions to meet our customers’ needs.
  • Ensure compliance with data governance and security policies.
  • Enable teams and local sites across the organization to develop data-driven products and services through cross-team initiatives and collaboration.

Professional requirements:

  • A minimum of a Master’s degree in a subject such as computer science
  • Developer/programming experience;
  • Good understanding of API and deployment of the applications;
  • Good knowledge of version control and git is a plus;
  • Experience with Python, R or similar in work setting;
  • Ability to build data pipelines in Python & Spark;
  • Experience with building data solutions in the cloud (S3, Glue, Athena, Redshift, SageMaker, or similar);
  • Good knowledge of various types of data sources (files, databases, streams) and data formats (CSV, Parquet, Avro, etc.);
  • Good understanding and experience solving complex problems and challenges;
  • Own research experience or relevant experience from research environments is an advantage;
  • Good English, oral and written.

Preferred qualifications:

  • Analytical and structured;
  • Result oriented;
  • Approachable, professional, takes initiative and outgoing;
  • Open-minded and enjoys trying out new technology;
  • Customer oriented;
  • Team player.

About the company: Itera

Itera is an international technology and digital communications company providing a full range of services that enable clients to accelerate digital transformation and seize new business opportunities. The company serves clients worldwide, offering solutions and services that promote sustainable development in sectors such as energy, industry, banking, insurance, healthcare, and public services.
Founded: 1993
Employees: 501-1000
Diia.City resident
Website: itera.com

Outsource, Consulting / Integrator
Experience: 3+ years · Middle, Senior · Full-time · English: any · Test task included · Office, Remote · Ukraine (Kyiv), Poland (Warszawa)
20.12.2024
  • Spotfire
  • Microsoft Power BI
  • T-SQL
  • SSIS
  • PL/SQL
  • Azure Data Factory
  • SQL
  • IronPython
  • HTML
  • CSS
  • Oracle Database
  • Microsoft Azure
  • ETL
  • JavaScript

Intego Group's Biometric Department is looking for a TIBCO Spotfire BI Developer.
We're looking for bright individuals to join our growing teams in our Poland and Ukraine offices.
As part of existing teams, you will focus on creating interactive dashboards/reports in Spotfire (using IronPython, HTML, CSS) to cover the business needs of a life science enterprise, bringing together best-in-class science, technology, and service to drive superior clinical outcomes.

Key Responsibilities:

  • Interactive dashboards/reports creation in Spotfire (using IronPython, HTML, CSS) to cover business needs of the enterprise
  • Creating dynamic Spotfire dashboards, information links, and data source handling
  • Developing dashboards using Custom expression, IronPython, JavaScript, Data function, Property control, Data limiting, etc.

General Requirements:

  • BI/Data Engineer with at least 3 years of Spotfire development experience that includes reports creation in BI tools, Data Warehouse construction, and ETL processes development;
  • Proven applied experience in BI area, Data and Business Analysis;
  • Ability to optimize dashboards with massive data volumes;
  • Experience in Data Visualization (Spotfire, MS Power BI);
  • Experience in Data Integration (T-SQL, MS SSIS, Azure Data Factory, PL/SQL);
  • Extensive experience in Database Development;
  • Strong SQL skills on large scale databases;
  • Ability to understand data models;
  • Writing technical specifications and design documents;
  • Ability to massage data to make it accessible to Spotfire, and to do source-system analysis and write queries to create input tables for Spotfire;
  • Required tools and technologies: TIBCO Spotfire, IronPython, HTML, CSS, Oracle Database, Microsoft Azure;
  • Experience in a life science industry is a plus.

About the company: Intego Group

Intego Group is an international technology consulting company specializing in full-lifecycle software development and the delivery of high-quality solutions across a range of technologies, helping leading science-driven companies unlock the power of clinical data across a broad spectrum of therapeutic areas. The company applies modern machine learning algorithms, advanced statistical methods, and data visualization techniques to help industry leaders find data-driven solutions to the complex challenges their industries face.
Founded: 2007
Employees: 101-250
Website: integogroup.com

Outsource, Consulting / Integrator
Experience: 4+ years · Senior · Full-time · English: Upper-Intermediate / B2 · Test task included · Office, Remote · Ukraine (Kyiv), Poland (Warszawa)
20.12.2024
  • SAS
  • Unix
  • Oracle Clinical

Intego Group is looking for experienced Senior Clinical Statistical Programmers to join the global team working from our offices or remotely.
As a clinical project team member, the Senior Clinical Statistical Programmer applies advanced-level programming techniques and leadership to reporting and analysis of clinical trials.
The Senior Clinical Statistical Programmer will be responsible for overall timelines, significant project milestones, and overall project quality, integrity, and productivity. They may serve on or lead departmental initiatives, specialized projects, and working groups, and can serve as a project team leader. This person should also be a Subject Matter Expert (SME) and technology troubleshooter.

Job responsibilities:

  • Responsible for processing the clinical data required for the analysis of Phase 1-4 clinical trials.
  • Develop SAS code and table templates for preparing, processing, and analyzing clinical data.
  • Generate and QC summary tables, data listings, and graphs for in-house analyses of study data or publications using SAS standard coding practices.
  • Create/acquire tools to improve programming efficiency or quality.
  • Validate the work of other programmers/analysts.
  • Create/review the programming plan, specifications for datasets, and TLFs.

General requirements:

  • A minimum of a Bachelor’s degree in Computer Science, Mathematics, Statistics, Pharmaceuticals Sciences, Life Sciences, and related areas is required. A Master’s is preferred;
  • A minimum of 4 years of hands-on relevant career experience in the pharmaceutical or biotechnology industry;
  • Have excellent knowledge of SAS programming and associated features and their applications in the pharmaceuticals industry environment, particularly clinical trial data sets;
  • Familiar with CDISC conventions, i.e., SDTM and ADaM models, and hands-on experience implementing these models;
  • Strong understanding of clinical trial data and hands-on in data manipulations, analysis, and reporting of analysis results;
  • Must understand the role of all the functional areas in the clinical trial process;
  • Must have a basic understanding of the FDA/ICH guidelines, the software development life cycle, 21 CFR Part 11, and other relevant FDA regulations;
  • Must possess the basic knowledge of statistics such as p-values, confidence intervals, linear regression analysis, advanced general linear models, frequencies, survival analysis, non-parametric analysis, and randomization software, and demonstrate proficiency in implementing new ideas in clear, efficient SAS code for the purposes of data analysis and reporting;
  • Must demonstrate intermediate knowledge of UNIX, Oracle Clinical or an equivalent clinical DM system, and relational database theory;
  • Track record of generating new ideas and solutions to data analysis;
  • Excellent application development skills;
  • Thorough understanding of relational database components and theory;
  • SAS certification is an advantage;
  • It would be an asset to have had experience working on FDA submissions;
  • The candidate should demonstrate clear and timely written and verbal communication with peers, customers, and management;
  • Should be able to present effective presentations to small groups such as Project Teams or during Statistical Programming meetings.

About the company: Intego Group

Intego Group is an international technology consulting company specializing in full-lifecycle software development and the delivery of high-quality solutions across a range of technologies, helping leading science-driven companies unlock the power of clinical data across a broad spectrum of therapeutic areas. The company applies modern machine learning algorithms, advanced statistical methods, and data visualization techniques to help industry leaders find data-driven solutions to the complex challenges their industries face.
Founded: 2007
Employees: 101-250
Website: integogroup.com

Outsource, Consulting / Integrator
Experience: 3+ years · Middle · Full-time · English: any · Test task included · Remote · Ukraine
13.12.2024
  • SQL
  • Looker Studio
  • Python
  • BigQuery
  • GCP

We are looking for a talented Data Engineer with 3+ years of commercial experience to join our new project from Dubai.

Product

This solution revolutionizes businesses by seamlessly integrating payment collection, corporate cards, and expense management into one sleek, user-friendly platform. Trusted by over 1,000 companies, it enhances revenue collection, streamlines spending control, reduces costs, and automates financial processes.

Must have

  • 3+ years of professional experience with analysis tools and recommendation systems
  • Extensive experience in SQL
  • Hands-on experience with Looker Studio or similar tools
  • Solid experience in Python and data analysis libraries such as pandas, numpy, matplotlib, scikit-learn, etc. (see the sketch after this list)
  • Experience in using analytical concepts and statistical techniques: hypothesis development, designing tests/experiments, analyzing data, drawing conclusions, and developing actionable recommendations for business units
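
A minimal sketch of the kind of experiment analysis this describes, using pandas and SciPy; the file and column names are illustrative assumptions:

```python
# A/B-test analysis sketch (file and column names are assumptions).
import pandas as pd
from scipy import stats

df = pd.read_csv("payments_experiment.csv")  # columns: variant, converted (0/1)

control = df[df["variant"] == "control"]["converted"]
test = df[df["variant"] == "test"]["converted"]

# Welch's t-test on conversion indicators as a simple significance check.
t_stat, p_value = stats.ttest_ind(control, test, equal_var=False)
print(f"uplift: {test.mean() - control.mean():.4f}, p-value: {p_value:.4f}")
```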

Will be a plus

  • Payments, Fraud, Risk, E-Commerce or Finance background
  • Experience with BigQuery and other GCP services

Responsibilities

  • Analyze large-scale structured and unstructured datasets using analytical, statistical, machine learning, or deep learning techniques to address a wide range of complex issues
  • Collaborate with stakeholders from various departments to comprehend their business requirements and obstacles, design and develop analytics solutions to achieve business goals, and facilitate decision-making
  • Partner with cross-functional teams to provide strategies based on data-driven insights across product, marketing, compliance, and other areas
  • Identify, understand, and evaluate external/internal opportunities to enhance our products and services
  • Determine and assess the success of product initiatives through goal setting, forecasting, and monitoring of key product metrics
  • Create data models, data automation systems, performance metrics, and reporting frameworks, and monitor impact over time
  • Present results and business impacts of insight initiatives to stakeholders within and outside of the organization

About the company: TechMagic

TechMagic is a full-cycle development company providing end-to-end software development services to companies of various sizes. It delivers software products from concept to deployment, or takes on and evolves existing ones.
Founded: 2014
Employees: 251-500
Diia.City resident
Website: techmagic.co

Product / Startup, Education
Experience: any · Senior · Full-time · English: any · Test task included · Office · Berlin
11.12.2024
  • GDPR
  • CCPA

As a Senior Data Manager within our Central Data team, you will play a key role in developing and implementing effective data management and governance to increase data quality. You will work closely with business data owners and users, domain product owners, data producers, data consumers, legal, infosec and senior leadership to define the data ecosystem, data governance policies, procedures, and best practices that ensure the discoverability, compliance, quality and interoperability of our data products.

You will:

  • Create and enforce standards and best practices across domains, promoting data awareness for effective data management and fostering a culture of data-driven decision-making
  • Run the change management process to ensure successful adoption of data governance and our new data catalog
  • Formulate techniques for quality data collection to ensure adequacy, accuracy and legitimacy of data
  • Develop and implement data processes and practices, including data lineage tracking, data quality management, metadata management, data issue resolution, etc.
  • Establish rules and procedures for data sharing
  • Collaborate with domain data product owners, data engineers and relevant stakeholders to assess data governance needs, introduce necessary standards and data quality controls, and ensure compliance
  • Establish metrics, monitor and analyze information and data systems, and evaluate their performance to discover ways of enhancing them
  • Establish metrics and monitor adoption and adherence to data operations policies and procedures
  • Collaborate with Platform teams to ensure data governance principles are integrated into data systems, applications, and infrastructure

You have:

  • Proven experience as a data manager or in a relevant area
  • Past success with defining and leading company-wide governance initiatives
  • Past success in implementing and rolling out a data catalog org-wide
  • Experience crafting communication and engagement strategies for securing buy-in at operational and executive leadership levels
  • Strong change management and program management experience
  • Excellent understanding of data management, quality and data governance functions
  • Strong knowledge and understanding of data governance principles, concepts, policies, procedures, and guidelines, and proficiency with data governance tools and technologies
  • Experience working with a Data Mesh architecture or similar decentralized data governance models is a plus
  • Familiarity with data privacy, security, and data protection regulations (e.g., GDPR, CCPA) and their implications on data governance
  • A self-starting and ownership mentality with the technical ability to proactively onboard yourself using documentation and training resources to new data tools (e.g. data catalogs, observability tools, data platforms)
  • Excellent communication and interpersonal skills, with the ability to collaborate effectively with cross-functional teams and stakeholders at all levels

About the company: Babbel

Babbel is a company that builds language-learning products. Alongside self-paced lessons, it offers live classes led by professional instructors through Babbel Live, Babbel for Business, and a range of podcasts and magazines, with lessons built around real-life situations and real people.
Founded: 2007
Employees: 1001-5000
Website: ua.babbel.com

Experience: any · Senior · Full-time · English: any · Test task included · Office · Ireland, Dublin
11.12.2024
  • DataBricks
  • Snowflake
  • Apache Spark
  • PySpark
  • Delta Lake
  • MLflow
  • AWS
  • Azure
  • Python
  • SQL
  • Scala
  • Github
  • Azure DevOps
  • GitHub Actions
  • Apache Airflow
  • Jenkins
  • Tableau
  • Alteryx
  • Azure Synapse
  • Netezza

We are seeking an accomplished Senior Data Engineer to join our Dublin-based team. This role provides an exciting opportunity to influence our data architecture, working with innovative cloud technologies to drive impactful, data-centric projects. The ideal candidate will have in-depth experience with Databricks, Snowflake, AWS, and MLOps to support and enhance the deployment and scalability of machine learning models. You’ll play a pivotal role in ensuring data accessibility, optimising data sourcing pipelines, and enhancing the performance of large-scale data solutions.

Key responsibilities & duties include:

  • Design and Implement Cloud-Native Data Solutions: Develop scalable, resilient data platforms using cloud-native technologies, data mesh frameworks, and integration across diverse data sources
  • Build and Maintain MLOps Pipelines: Use tools like MLflow to create reliable, efficient pipelines for deploying machine learning models in production environments (see the sketch after this list)
  • Establish Data Governance and Quality Standards: Develop and uphold data governance practices, ensuring robust data quality and compliance using tools like Databricks Unity Catalog
  • Oversee Data Integration and Migration: Lead migration projects from legacy data systems to modern cloud platforms, focusing on optimising operational costs and efficiencies
  • Performance Optimisation and Tuning: Use tools such as Snowflake and Delta Lake to enhance data accessibility, reliability, and performance, delivering robust, high-quality data products
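
As one illustration of an MLOps building block named above, MLflow experiment tracking might look roughly like this; the experiment name, model, and metric are invented for the example:

```python
# MLflow tracking sketch (experiment name, model, and metric are assumptions).
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=42)
mlflow.set_experiment("demand-model-sketch")

with mlflow.start_run():
    model = LogisticRegression(max_iter=200).fit(X, y)
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, "model")  # artifact for later deployment
```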

Key Projects/Deliverables:

  • Data Mesh Architecture: Design and deploy data mesh frameworks to support seamless data integration and scalability across business functions
  • Operationalise MLOps Pipelines: Develop and manage MLOps pipelines to streamline machine learning model deployment and operational efficiency
  • Data Migration and Cost Optimisation: Lead large-scale data migrations to Azure and AWS, focusing on business-critical data sources and substantial cost reductions
  • Data Governance Applications: Create applications to reinforce data governance, quality, and enterprise standards, ensuring a secure and compliant production environment

Required Experience:

  • Data Engineering Expertise: Proven experience in architecting and delivering large-scale, cloud-native data solutions
  • Advanced Knowledge in Databricks and Snowflake: Hands-on experience in Databricks, Spark, PySpark, and Delta Lake, with strong skills in data warehousing and lakehouse solutions
  • MLOps Skills: Practical experience in MLOps, ideally with MLflow for model management and deployment
  • Cloud Proficiency: Strong knowledge of AWS, with additional experience in Azure advantageous for multi-cloud setups
  • Programming Proficiency: Advanced coding abilities in Python, SQL and Scala
  • Tooling Competence: Familiarity with version control (GitHub), CI/CD tools (Azure DevOps, GitHub Actions), orchestration tools (Airflow, Jenkins), and dashboarding tools (Tableau, Alteryx)

Desirable Skills:

  • Experience with Synapse Analytics, Netezza, and legacy data systems
  • Knowledge of data governance best practices and tools
  • Excellent problem-solving skills, with the ability to work both autonomously and collaboratively in cross-functional teams

About the company: Amach Software

Amach Software is a company specializing in automated testing and deployment. The team helps build and integrate custom automation systems that align an organization's testing and operational practices, creates fast, scalable, easy-to-maintain frameworks, and integrates seamlessly with any build server and source-control repository.
Founded: 2013
Employees: 251-500
Website: amach.com

Outsource, Consulting / Integrator
Experience: 3+ years · Middle · Full-time · English: Upper-Intermediate / B2 · Test task included · Office, Remote · Poland
11.12.2024
  • Confluent Kafka
  • Python
  • Microsoft SQL Server
  • PostgreSQl
  • Oracle
  • Git
  • ETL
  • SSIS
  • RDBMS

In this position you will:

  • participate in Data Management systems implementation projects: Data Lakehouse, Data Streaming, Metadata management, Reference Data Management,
  • apply data engineering and development best practices to current delivery process (like: CI/CD, Code Management, Testing, Knowledge Management, Documentation etc.),
  • participate in and drive Data Management systems implementation projects,
  • develop data pipelines to bring new data to the Enterprise Data Fabric,
  • ensure that all Data Policies are met within the Enterprise Data Fabric,
  • ensure that implemented systems correspond with the target Data Architecture,
  • establish and maintain an agile delivery process based on one of the following frameworks: Kanban or Scrum.

We are looking for you, if you:

  • have min. 3 years of experience as a Data Engineer,
  • are familiar with and proficient in the use of Kafka (Confluent) and Python (see the sketch after this list),
  • have experience with MS SQL Server, PostgreSQL, Oracle,
  • have worked with Git, ETL (SSIS), and RDBMS (MS SQL Server, PostgreSQL, Oracle),
  • have knowledge of English (min. B2+),
  • have experience working in an international environment (Middle East client).
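
A minimal sketch of the Kafka side of such a pipeline, using the confluent-kafka Python client; the broker address, group id, and topic are illustrative assumptions:

```python
# Confluent Kafka consumer sketch (broker, group, and topic are assumptions).
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker:9092",
    "group.id": "data-fabric-ingest",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["reference-data"])

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        # Hand the record off to the downstream pipeline here.
        print(msg.value().decode("utf-8"))
finally:
    consumer.close()
```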

About the company: Altkom Software

Altkom Software is a custom software development company working with some of the most recognizable brands, including international corporations, fast-growing companies, and startups.
Founded: 1998
Employees: 251-500
Website: altkomsoftware.com

$5500 – 7900
Outsource
Experience: 5+ years · Senior · Full-time · English: any · Test task included · B2B / FOP contract · Remote
11.12.2024
  • AWS
  • Apache Spark
  • AWS Glue

Hi there! If you’re looking for a high-impact position in an ambitious software house, we have a match for you!
Our customer aims to build scalable data pipelines to ingest, transform, orchestrate, and publish our index data from over 200 suppliers.
We are looking for a proficient Data Engineer to upgrade our platform utilizing contemporary technologies while ensuring dependable and high-performing operations.

Your main responsibilities for this position will be:

  • Extract and convert various data sources into practical insights to support business decision-making.
  • Implement industry-leading transformation and modeling methods, ensuring the output is validated for accuracy and reliability by downstream data teams and end users (see the sketch after this list).
  • Lead development projects to enhance the platform and its operations.
  • Collaborate closely with business stakeholders to fine-tune requirements, iterate and complete designs, produce working proofs of concept, and develop final data solutions
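
As a rough sketch of what one ingest-and-transform step could look like in an AWS Glue PySpark job; the catalog database, table, and bucket names are illustrative assumptions:

```python
# AWS Glue PySpark job sketch (catalog and bucket names are assumptions).
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

# Read a supplier feed registered in the Glue Data Catalog.
suppliers = glue_context.create_dynamic_frame.from_catalog(
    database="index_raw", table_name="supplier_feed"
)

# Convert to a Spark DataFrame for standard transformations.
df = (suppliers.toDF()
      .dropDuplicates(["record_id"])
      .filter("amount IS NOT NULL"))

df.write.mode("overwrite").parquet("s3://example-index-curated/supplier_feed/")
```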

This offer will be a perfect match for you if you have:

  • 5+ years experience in Data Engineering.
  • Comprehensive understanding of the data sources, formats, and data processing challenges at scale.
  • Bachelor's Degree in Computer Science, Math, or a related field.

Информация о компании Acaisoft

Acaisoft специализируется на разработке приложений в облаке и превращении устаревших сред в облаке. Компания предоставляет сквозные программные решения, от бизнес-анализа, через оценку проекта до дизайна и внедрения UI/UX, Frontend и Backend. Команда интегрирует лучшие методы контроля качества вручную и автоматизированно, чтобы убедиться, что конечный продукт является первоклассным.
Год основания: 2014
Количество сотрудников: 251-500
Сайт: acaisoft.com

$5500 – 7900
Outsource
Experience: 5+ years · Senior · Full-time · English: any · Test task included · Remote
11.12.2024
  • Apache Airflow
  • ETL
  • AWS
  • Iceberg
  • Athena
  • Dermio
  • Python

Hi! We are looking for a Data Engineer to work with our client – a UK company offering digital music licensing for independent labels.
When you join our team, you will have the opportunity to work with data at a scale of around 4 billion rows per month! Because we need to produce huge analytic tables, we use Iceberg.

Your main responsibilities for this position will be:

  • designing, building, and maintaining the data infrastructure necessary for optimal extraction, transformation, and loading of data from a variety of data sources.
  • developing and implementing data collection systems that integrate a variety of sources such as proprietary company data.
  • working on AWS-based infrastructure.

This offer will be a perfect match for you if you have:

  • at least 5 years of relevant data experience.
  • an understanding of data orchestration tools like Airflow, enough to extend it with new data pipelines and troubleshoot its operational issues.
  • proven experience in building and maintaining ETL pipelines
  • familiarity with cloud computing services, particularly AWS.

It would be nice if you have:

  • knowledge of tools like Iceberg, Athena, and Dremio.
  • degree in Computer Science or related field.

Информация о компании Acaisoft

Acaisoft специализируется на разработке приложений в облаке и превращении устаревших сред в облаке. Компания предоставляет сквозные программные решения, от бизнес-анализа, через оценку проекта до дизайна и внедрения UI/UX, Frontend и Backend. Команда интегрирует лучшие методы контроля качества вручную и автоматизированно, чтобы убедиться, что конечный продукт является первоклассным.
Год основания: 2014
Количество сотрудников: 251-500
Сайт: acaisoft.com

Experience: 3+ years · Senior · Full-time · English: any · Test task included · Office · Serbia, Belgrade, Novi Sad
09.12.2024
  • AWS
  • AWS Glue
  • Amazon S3
  • Athena
  • EMR
  • EC2
  • MWAA
  • Python
  • PySpark
  • SQL
  • Scala
  • Apache Spark
  • Trino
  • Presto
  • ETL
  • Great Expectations
  • Soda

Grid Dynamics, a global software services company driving enterprise-level digital transformation solutions for Fortune 1000 corporations, is looking for a Senior Data Engineer.

Essential functions

  • Design and optimize data ingestion systems to ensure a consistent flow of high-quality data into the platform, creating a solid foundation for developing data products and supporting comprehensive KPI tracking and analysis.
  • Demonstrate expertise in Infrastructure as Code (IaC), utilizing tools like Terraform or CloudFormation to automate infrastructure deployment and management.
  • Translate business requirements into robust technical architecture, including designing physical schema and logical data models.
  • Engage with stakeholders to understand their needs, providing technical guidance for current and future data platform projects.
  • Analyze user interaction with the data platform, focusing on patterns of use to identify areas for improvement and optimize user engagement over time.
  • Implement and maintain data governance frameworks, emphasizing automated processes for data quality checks, compliance adherence, and secure data handling, while collaborating with engineering teams to integrate governance protocols into data pipelines and platform architecture.
  • Participate actively in all Data Platform Engineering team meetings and knowledge-sharing sessions, contributing to team learning and process improvement.

Qualifications

  • AWS Services: Expertise in AWS services, especially Glue, S3, Athena, EMR, EC2, and MWAA.
  • Programming Languages: Proficiency in Python, PySpark, SQL, and/or Scala.
  • Big Data Technologies: Hands-on experience with Spark, Trino, and Presto (see the sketch after this list)
  • Data Platforms: Experience in building data platforms, not just using them. (MUST HAVE)
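
A minimal sketch of querying such a platform through Trino with the trino Python client; the host, catalog, and schema are illustrative assumptions:

```python
# Trino query sketch (host, catalog, and schema are assumptions).
import trino

conn = trino.dbapi.connect(
    host="trino.example.internal",
    port=8080,
    user="data-platform",
    catalog="hive",
    schema="analytics",
)
cursor = conn.cursor()
cursor.execute("SELECT event_date, COUNT(*) FROM events GROUP BY event_date")
for row in cursor.fetchall():
    print(row)
```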

Would be a plus

  • Data Engineering: 3+ years of work experience as a Data Engineer in a cloud-based environment, with a focus on AWS.
  • ETL and Data Modeling: Advanced understanding of ETL processes, data modeling techniques, and best practices.
  • Data Governance and Quality: Experience in applying data governance policies, implementing data contracts, and using data quality frameworks like Great Expectations and Soda.
  • Data Architecture: Strong understanding of data architecture to enable ML and data analytics.
  • Data Mesh: Familiarity with data mesh architecture and principles, with an appreciation for decentralized data management and shared data ownership.

Информация о компании Grid Dynamics

Grid Dynamics является ведущим поставщиком технологического консалтинга, масштабируемых инженерных услуг и услуг по обработке данных для корпораций, находящихся на этапе цифровой трансформации. Компания тесно сотрудничает с клиентами над инициативами цифровой трансформации, которые охватывают стратегический консалтинг, первые прототипы и внедрение новых цифровых платформ в масштабе предприятия и помогают организациям стать более гибкими и создавать инновационные цифровые продукты и опыт, используя глубокий опыт в новейших технологиях, лучших мировых инженерных талантов, практики разработки экономного программного обеспечения и культуру высокопроизводительных продуктов
Год основания: 2006
Количество сотрудников: 1001-5000
Сайт: griddynamics.com

Outsource, Consulting / Integrator
Experience: any · Middle · Full-time · English: Intermediate / B1 · Test task included · Remote
08.12.2024
  • AWS
  • Apache Airflow
  • Amazon S3
  • Athena
  • AWS Glue
  • IAM
  • AWS Redshift
  • ETL
  • SQL
  • Git
  • CI/CD
  • Terraform
  • Kafka
  • dbt

Growe welcomes those who are excited to:

  • Create end-to-end ETL workflows using Apache Airflow, define task dependencies and scheduling to ensure timely and accurate data processing (see the sketch after this list);
  • Establish best practices for organizing data within S3 buckets, including folder structures, partitioning strategies, and object tagging for improved data discoverability and management;
  • Provide access controls and permissions management on S3 buckets and Athena tables using IAM policies, ensure compliance with regulatory requirements and data governance standards;
  • Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver scalable and reliable solutions;
  • Document data storage architectures, ETL pipelines, and query optimization techniques; maintain clear and up-to-date documentation for reference and knowledge sharing within the team.
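
A minimal sketch of such an Airflow DAG, assuming a recent Airflow 2.x; the task bodies and names are illustrative, not the team's actual pipeline:

```python
# Airflow DAG sketch with explicit dependencies (task names are assumptions).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    pass  # e.g. land raw files in S3

def transform():
    pass  # e.g. run a Glue/Athena transformation

with DAG(
    dag_id="etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    # transform runs only after extract succeeds.
    extract_task >> transform_task
```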

We need your professional experience:

  • Proven experience working as a data engineer with a focus on AWS services;
  • Strong proficiency in building data pipelines using Apache Airflow;
  • Expertise in AWS services such as S3, Athena, Glue, IAM roles, Redshift;
  • Solid understanding of ETL processes and data modeling concepts;
  • Proficiency in SQL and experience working with large datasets;
  • Familiarity with version control systems (e.g., Git) and CI/CD pipelines;
  • Experience with Terraform;
  • Excellent problem-solving and troubleshooting skills;
  • Understanding of MLOps will be a plus;
  • Experience with Kafka and DBT will be a plus;
  • Level of English: Intermediate/Upper-Intermediate.

We appreciate if you have those personal features:

  • Strong communication and presentation skills;
  • Attention to detail;
  • Proactive and result-oriented mindset.

We are seeking those who align with our core values:

  • GROWE TOGETHER: Our team is our main asset. We work together and support each other to achieve our common goals;
  • DRIVE RESULT OVER PROCESS: We set ambitious, clear, measurable goals in line with our strategy, driving Growe to success;
  • BE READY FOR CHANGE: We see challenges as opportunities to grow and evolve. We adapt today to win tomorrow.

Информация о компании Growe

Growe – международная компания, работающая в индустрии iGaming & Entertainment, объединяя бренды со всего мира и собирая опыт с разных рынков. Компания специализируется на внедрении передовых технологических платформ и платежных решений, способствуя расширению и постоянному росту активов, а также предлагает широкие возможности от запуска новых iGaming-брендов в Азии, Африке и Латинской Америке до предоставления эксклюзивных условий игрокам и чрезвычайных шансов на победу.
Год основания: 2019
Количество сотрудников: 501-1000
Сайт: growe.com

Outsource, Consulting / Integrator
Experience: 5+ years · Senior · Full-time · English: Advanced / Fluent / C1 · Test task included · Office, Remote · Vinnytsia
04.12.2024
  • Python
  • SQL
  • NoSQL
  • Firestore
  • BigQuery
  • Bigtable
  • Redis
  • Kafka
  • OLAP
  • Snowflake
  • ClickHouse
  • Apache Airflow
  • Google Cloud Dataflow
  • Hadoop
  • Apache Spark
  • TDD
  • Machine learning
  • Docker
  • Kubernetes
  • GCP
  • IaC
  • Scala

The company is the first Customer-Led Marketing Platform. Its solutions ensure that marketing always starts with the customer instead of a campaign or product.
It is powered by the combination of:

  1. rich historical, real-time, and predictive customer data;
  2. AI-led multichannel journey orchestration;
  3. statistically credible multitouch attribution of every marketing action.

Requirements:

  • At least 5 years of experience with Python.
  • At least 3 years of experience in processing structured terabyte-scale data (i.e., processing structured datasets of several hundred gigabytes).
  • Solid experience in SQL and NoSQL (ideally GCP storage services: Firestore, BigQuery, Bigtable, and/or Redis, Kafka).
  • Hands-on experience with OLAP storage (at least one of Snowflake, BigQuery, ClickHouse, etc.; see the sketch after this list).
  • Deep understanding of data processing services (Apache Airflow, GCP Dataflow, Hadoop, Apache Spark).
  • Proven experience in DevOps.
  • Experience in automated test creation (TDD).
  • Fluent spoken English.
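
A minimal sketch of an analytical query against BigQuery with the google-cloud-bigquery client; the project, dataset, and table names are illustrative assumptions:

```python
# BigQuery query sketch (project, dataset, and table names are assumptions).
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

query = """
    SELECT channel, COUNT(DISTINCT customer_id) AS customers
    FROM `example-project.marketing.events`
    GROUP BY channel
"""
for row in client.query(query).result():
    print(row["channel"], row["customers"])
```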

Advantages:

  • Not being afraid of mathematical algorithms (part of our team's responsibility is developing ML models for data analysis; although ML knowledge is not required for this position, it would be awesome if you felt some passion for algorithms).
  • Experience in any OOP language.
  • Familiarity with Docker and Kubernetes.
  • Experience with GCP services would be a plus.
  • Experience with IaC would be a plus.
  • Experience in Scala.

Информация о компании Gemicle

Gemicle – инновационная, высокотехнологичная компания, специализирующаяся на разработке современных решений, призванных сделать жизнь людей более удобной, безопасной и комфортной. Компания занимается разработкой на заказ, предлагает широкий спектр услуг от создания игр, приложений для социальных сетей и проектов электронной коммерции до комплексных решений B2B и возможностей в Big data. Квалифицированные команды разработчиков, дизайнеров, инженеров, специалистов контроля качества и аниматоров предлагают отличные продукты и решения компаниям любого размера.
Год основания: 2012
Количество сотрудников: 51-100
Сайт: gemicle.com

Outsource, Consulting / Integrator
Experience: 3+ years · Senior · Full-time · English: Advanced / Fluent / C1 · Test task included · Remote
11.11.2024
  • Python
  • Snowflake
  • SAP HANA
  • SQL

We are looking for a seasoned Senior Data Engineer to help us shape Emergn’s exciting future and play an important role in our growth.

We want you to:

  • Work with stakeholders including data, design, product and executive teams and assist with data-related technical issues.
  • Identify, design and implement process improvements including infrastructure re-design for greater scalability, data delivery optimization, and automation of manual processes.
  • Build and support required infrastructure for optimal extraction, transformation and loading of data from various data sources.
  • Design and assemble large, complex sets of data that meet non-functional and functional business requirements.
  • Continuously learn and share your learnings with others.

This job might be for you if you have:

  • 3+ years of experience in Data Engineering or Data Science;
  • Experience with Python;
  • Experience with Snowflake or SAP HANA (see the sketch after this list);
  • Experience with SQL;
  • Highly experienced and skilled at collaborating with business clients;
  • An analytical and creative approach to design and problem-solving;
  • Excellent communication skills, including presentation skills;
  • Must be fluent in English.
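
A minimal sketch of pulling query results from Snowflake with the snowflake-connector-python package; the account, credentials, and table names are illustrative assumptions:

```python
# Snowflake query sketch (account, credentials, and table are assumptions).
import snowflake.connector

conn = snowflake.connector.connect(
    account="example-account",
    user="DATA_ENGINEER",
    password="***",  # use a secrets manager in practice
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
cur = conn.cursor()
try:
    cur.execute("SELECT region, SUM(revenue) FROM sales GROUP BY region")
    for region, revenue in cur.fetchall():
        print(region, revenue)
finally:
    cur.close()
    conn.close()
```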

About the company: Emergn

Emergn is an international company specializing in digital business transformation and custom software development, helping clients turn their promising ideas into valuable products and customer experiences faster. The company develops consulting offerings, provides a full range of top-tier services to enterprises in both the private and public sectors, and helps continuously improve the quality of products and services.
Founded: 2009
Employees: 501-1000
Website: emergn.com
