Data Engineer / Big Data Architect Vacancies
- CRM
- Talend
We are looking for a Data Researcher to join our outbound sales team. This person will be responsible for researching and collecting decision-maker contacts based on our ICP to help the team grow and meet its KPIs.
Responsibilities:
- Research and collect contact information of decision-makers according to the ICP.
- Work with various online tools and databases to enrich leads.
- Record all lead details in the database/CRM.
- Track and improve weekly/monthly KPIs.
Requirements:
- 1+ year of experience in data research;
- Attentive to detail and accurate;
- Fast learner;
- Proactive;
- Result-oriented;
- Deadline and quota-driven;
- Confident Intermediate English speaker;
- Knowledge of research and validation tools.
About DigitalSuits
Employee benefits
- Flexible working hours
- Training compensation
- Laptop provided
- Paid sick leave
- Paid vacation
- Regular salary reviews
- RDBMS
- C++
- Linux
- Debugging
- Database testing
- ClickHouse
Altinity is looking for a great server internals engineer to work on contributions to ClickHouse. As a ClickHouse Open Source Developer, you’ll be responsible for designing, implementing, and supporting features of ClickHouse ranging from encryption to storage to query processing. We’re looking for imaginative engineers with a background in database internals and in high-performance languages like C++.
We have lots of exciting projects underway as we help the community adapt ClickHouse to the cloud and Kubernetes.
Our ideal candidate has:
- Proven experience designing, implementing, and testing high-performance DBMS features in a complex C++ codebase.
- Excellent background in database internals including query languages, access methods, storage, and/or connectivity.
- Demonstrated ability to read and write good C++.
- Good understanding of networking and I/O on Linux.
- Familiarity with performance optimization techniques and tools.
- History of getting pull requests vetted and merged in rapidly evolving open-source projects.
- Sound knowledge of database testing, debugging, and low-level performance optimization.
- Enthusiasm to learn more about database technology and data-related applications.
- Good English language reading and writing skills.
- Eagerness to work with a friendly, distributed team following open-source dev practices.
- MAJOR PLUS: previous development experience on ClickHouse.
A day in your life as a ClickHouse server engineer may include any or all of the following:
- Write good task-specific C++ code and solidify it with tests.
- Debug issues reported by users, fix them and add tests to make sure they won’t happen again.
- Profile existing code and make it faster (by applying smarter algorithms, adding vectorized intrinsics, or implementing other low-level tricks), and add performance tests (a sketch of such a timing check follows this list).
- Submit your own pull requests and review pull requests from others.
- Help the Support Team investigate customer problems running ClickHouse.
- Help new community members contribute to ClickHouse.
- Attend meetups and make presentations on open-source development.
- Write blog articles and share information about ClickHouse.
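The profiling and performance-testing work described above can be illustrated with a small timing harness. This is a minimal sketch, assuming a local ClickHouse server and the clickhouse-driver Python package; the query is a synthetic stand-in, and real contributions would use ClickHouse's own in-repo performance-test framework instead.

```python
# Minimal sketch of timing a query against a local ClickHouse server.
# Assumes the clickhouse-driver package and a server on localhost;
# the query is a synthetic stand-in for a feature under test.
import time

from clickhouse_driver import Client

client = Client(host="localhost")

QUERY = "SELECT count() FROM numbers(10000000) WHERE number % 7 = 0"

def best_time(sql: str, runs: int = 5) -> float:
    """Run the query several times and return the best wall-clock time."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        client.execute(sql)
        best = min(best, time.perf_counter() - start)
    return best

print(f"best of 5 runs: {best_time(QUERY):.4f}s")
```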
About Altinity
Employee benefits
- Team buildings
- English-speaking environment
- Flexible working hours
- Educational programs and courses
- Python
- Pandas
- Spark SQL
- PySpark
- Azure Data Factory
- Azure Databricks
- Microsoft Azure
- Azure Data Lake
- Azure Data Lake Storage
- Machine learning
- TensorFlow
- scikit-learn
Impressit is looking for a highly skilled Senior Data Engineer with expertise in Python and Big Data technologies for our British customer, one of the world's largest oil and gas companies. The ideal candidate will have a strong track record in data warehousing, extensive experience with Azure cloud services, and proficiency in working with large datasets. The role will primarily focus on designing, developing, and maintaining data pipelines to support our analytical and operational needs.
What you will do:
- Design, develop, and maintain scalable data pipelines using Spark SQL, PySpark, Azure Data Factory, and Azure Databricks (a minimal pipeline sketch follows this list).
- Use Delta Live Tables on Spark to enable real-time data processing and analysis.
- Collaborate with cross-functional teams to understand data requirements and design optimal solutions.
- Implement efficient data ingestion, transformation, and storage processes to ensure data quality and integrity.
- Optimize performance and reliability of data pipelines to meet SLAs and business objectives.
- Develop and maintain API frameworks for Python to facilitate data access and integration.
- Stay abreast of emerging technologies and best practices in data engineering and contribute to continuous improvement initiatives.
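As context for the pipeline work above, here is a minimal PySpark sketch of the ingest-transform-write pattern. Paths and column names are hypothetical; it assumes a Databricks runtime (or a local Spark session with the delta-spark package configured), not the customer's actual environment.

```python
# Minimal pipeline sketch: read raw CSV, apply a quality gate, and append
# to a Delta table. Paths and columns are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest-demo").getOrCreate()

raw = spark.read.option("header", "true").csv("/mnt/raw/sensor_readings/")

clean = (
    raw.withColumn("reading", F.col("reading").cast("double"))
       .filter(F.col("reading").isNotNull())  # drop rows that fail the cast
)

(clean.write.format("delta")
      .mode("append")
      .save("/mnt/curated/sensor_readings/"))
```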
What we expect:
- 5+ years of hands-on experience as a Data Engineer with a focus on data warehousing and cloud technologies.
- Strong proficiency in Python programming and experience with pandas for data manipulation.
- Extensive hands-on experience with Spark SQL, PySpark, Azure Data Factory, and Azure Databricks.
- Solid understanding of Big Data concepts and architectures, including distributed computing and parallel processing.
- Experience working with large datasets and implementing scalable solutions.
- Familiarity with Azure cloud services such as Azure Data Lake and Azure Data Lake Storage.
- Excellent problem-solving skills and ability to troubleshoot complex data issues.
- Strong communication and collaboration skills with the ability to work effectively in a team environment.
- Upper-Intermediate or higher level of English.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
As a plus:
- Certification in Azure cloud services or Big Data technologies.
- Knowledge of machine learning concepts and frameworks.
About Impressit
Employee benefits
- English Courses
- Relocation assistance
- Team buildings
- Accounting support
- Training compensation
- Paid sick leave
- Data lake
- ETL
- ELT
- Microsoft Azure
- Azure Synapse
- Snowflake
- Azure Databricks
- CI/CD
The EY Belgium Modern Data Platform team supports our clients in defining and rolling out the right data architecture, data platform, and infrastructure for their needs; implementing and maintaining automated data pipelines; and infusing data through business intelligence and analytics. We are currently looking for a Data Engineer to lead the team and drive business development together with other team leaders.
Key Responsibilities
As Data Engineering lead, you will have the following responsibilities:
- Lead client discussions, from defining the problem and advising on the solution (data architecture, best-practice guidance) to supporting implementation.
- Lead the design, development, and maintenance of efficient data pipelines.
- Create robust data products that leverage cloud data platforms such as Azure Synapse, Snowflake, and Databricks.
- Use cloud services to build scalable, reliable, and efficient data solutions.
- Help translate data and analytics requirements into data solutions based on the approved technical designs.
- Define the technology stack, best practices, and standards for data engineering and analytics.
- Enable test automation and ensure CI/CD pipelines are in good health (an illustrative data-quality check follows this list).
- Implement monitoring of data applications and track product quality, performance, and stability.
- Carry out effective technical design reviews to ensure that the right architecture patterns are used by the team.
- Optimize and fine-tune data processing workflows for performance and reliability.
- Identify and resolve issues within organizations and processes.
- Educate clients and partners about the data analytics landscape.
- Provide guidance on industry trends, emerging technologies, and best practices.
- Lead a team of Data Engineers to thrive by promoting teamwork, providing guidance, and encouraging skill enhancement.
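To make the test-automation and monitoring responsibilities above concrete, here is an illustrative data-quality check of the kind that can run as a pytest step in a CI/CD pipeline. The table, rules, and loader are hypothetical; in practice the loader would query Synapse, Snowflake, or Databricks.

```python
# Illustrative data-quality checks runnable with pytest in a CI pipeline.
# load_orders() is a hypothetical stand-in for a warehouse query.
import pandas as pd

def load_orders() -> pd.DataFrame:
    # Stand-in for e.g. "SELECT order_id, amount FROM orders".
    return pd.DataFrame({"order_id": [1, 2, 3], "amount": [9.99, 24.50, 5.00]})

def test_orders_not_empty():
    assert len(load_orders()) > 0

def test_order_ids_unique():
    assert load_orders()["order_id"].is_unique

def test_amounts_positive():
    assert (load_orders()["amount"] > 0).all()
```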
Skills and Attributes for Success
- Master’s degree in computer science, engineering, mathematics, or another relevant subject
- 5+ years’ experience in data engineering
- Experience with batch and real-time processing frameworks.
- Expertise in proactive performance monitoring, conducting data quality testing, and troubleshooting performance issues of complex warehouse data tables.
- Experience in assembling data from multiple sources and analyzing and modeling complex datasets.
- Hands-on experience with production cloud/DevOps environments and with data lakes, data transformation, ETL/ELT, and other data concepts
- Practical knowledge of cloud possibilities and limitations in areas like distributed systems, load balancing, networking, massive data storage, and security.
- Solid experience leveraging the Microsoft Azure ecosystem to manage the development and maintenance of cloud platform operations.
- Understanding of data architecture concepts such as data modeling, Big Data storage, Lambda architecture, data vault, and dimensional modeling; familiarity with Data Fabric and Data Mesh is nice to have.
- Excellent analytical and problem-solving skills
- Strong communicator who understands team dynamics, able to support where needed and lead when asked.
- Experience in leading and coaching technical teams.
- Fluent in Dutch and/or French, with proficient business English
About Ernst & Young
Employee benefits
- Team buildings
- Large, stable company
- Flexible working hours
- Health insurance
- Educational programs and courses
- Regular salary reviews
- Azure Data Lake
- Azure Data Factory
- Azure Synapse
- Tableau
- Microsoft Power BI
- RDBMS
- SQL
- NoSQL
- C#
- Python
- Azure DevOps
- TensorFlow
- Machine learning
Requirements:
- 3+ years as a data warehouse architect;
- 1+ year experience with Azure Data Lake, Azure Data Factory, Azure Synapse;
- Ability to create low-level technical requirements based on business or high-level technical requirements;
- 3+ years experience with BI tools: Power BI/Tableau;
- Relational and non-relational DB - 5+ years;
- Basic knowledge of C#;
- English: B2+.
Be a Plus:
- Azure DevOps, Python, TensorFlow;
- Knowledge of data science and machine learning algorithms;
- Experience with other technologies and frameworks.
Responsibilities:
- Analyze business requirements;
- Propose solution architecture;
- ELT data from different sources into Azure;
- Create APIs for further data consumption (see the endpoint sketch after this list);
- Create reports with BI tools, etc.
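As an illustration of the API work above, here is a hypothetical sketch of a small read-only endpoint exposing warehouse data to consumers, using FastAPI; the route, names, and in-memory data are placeholders for a real service backed by Azure storage.

```python
# Hypothetical read-only API for downstream data consumption (FastAPI).
# REPORTS stands in for data that would really come from the warehouse.
from fastapi import FastAPI, HTTPException

app = FastAPI()

REPORTS = {"daily_sales": {"rows": 1523, "updated": "2024-01-01"}}

@app.get("/reports/{name}")
def get_report(name: str):
    if name not in REPORTS:
        raise HTTPException(status_code=404, detail="unknown report")
    return REPORTS[name]
```

Run locally with, for example, `uvicorn main:app` (assuming the file is named main.py).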
About Chudovo
Employee benefits
- Multinational team
- No bureaucracy
- Flexible working hours
- Long-term projects
- SQL
- Oracle
- Java
- Python
- NoSQL
- HBase
- Elasticsearch
- Redis
- MongoDB
- Apache Spark
- Hadoop
- Vertica
- Oracle Exadata
- Kafka
- Unix
- Shell
To expand our team, we are looking for a Big Data Engineer.
Key tasks in this position:
- Identify external and internal Big Data sources for analytical work
- Build efficient processes for acquiring and pipelining large volumes of data from heterogeneous data sources; develop the methods and procedures for the necessary ETL processes (a minimal ingestion sketch follows this list)
- Develop data models adapted to Big Data technologies
- Develop the required analytical reports, slices, and aggregates over Big Data
- Assess how well datasets fit the subject domain and the goals of the analytical work
- Manage the data lifecycle (acquisition, placement, storage, distribution, migration, archiving, and deletion of Big Data)
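As one illustration of such an ingestion pipeline, here is a minimal sketch of consuming a Kafka topic as the acquisition stage, assuming the kafka-python package; the topic, broker address, and message layout are hypothetical.

```python
# Minimal Kafka ingestion sketch (kafka-python). Topic, broker, and
# message fields are hypothetical placeholders.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "events",                              # hypothetical topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    record = message.value
    # Stand-in for the transform/load stage of the ETL process.
    print(record.get("event_id"), record.get("payload"))
```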
Required skills:
- Experience building data warehouses and data extraction, processing, and transformation processes on an MPP architecture
- Experience in database schema design and dimensional data modeling
- Knowledge of and extensive experience with SQL and Oracle
- Knowledge of Java or Python as a stack for process automation
- Experience with NoSQL databases such as HBase and Elasticsearch
- Knowledge of Redis and MongoDB would be useful
- Understanding of distributed clusters such as Spark, Hadoop, and others
- Experience with DBMSs for Big Data analytics, such as Vertica and Exadata, would be a plus
- Knowledge and understanding of OLAP technologies
- Understanding of and experience with data governance tools
- Experience building and using data analytics and data science workbenches
- Knowledge of UNIX with experience using Shell for automation tasks
About IT SmartFlex
Employee benefits
- Team buildings
- Flexible working hours
- Health insurance
- Paid sick leave
- Educational programs and courses
- Regular salary reviews
- SQL
- Python
- AWS
- ETL
- Snowflake
- Databricks
Our client is the UK’s leading independent automotive retailer of lightly loved vehicles. Our mission is to bring affordable transportation at fair prices with excellent quality and remarkable experiences.
As a Data Engineer, you will be at the heart of our technological transformation, crafting the foundation of our new data environment. This role is pivotal in developing a robust, scalable data platform that will serve as the core single source of truth for all operations, supporting advanced optimization and automation models, including machine learning applications like price intelligence, demand forecasting, and allocation optimization.
Responsibilities:
- Design, build, and maintain an advanced data environment on AWS using MySQL, Python, and other essential tools and languages suggested by industry trends and project requirements (see the extraction sketch after this list).
- Develop and implement data models, database designs, and ETL processes that support machine learning and automation initiatives, focusing primarily on data infrastructure over ML applications.
- Collaborate with cross-functional teams to ensure seamless integration of the new data platform with existing systems, drive modular migrations, and maintain operational excellence.
- Engage in projects geared towards optimizing car servicing through computer vision and enhancing customer understanding through segmentation, classification, recommendation models, and price point estimation.
- Stay ahead of data protection regulations, ensuring compliance while managing the fast-paced expansion and innovation.
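As a sketch of the kind of extraction step involved, the snippet below pulls a daily batch from MySQL into a DataFrame for downstream loading. The connection string, table, and columns are hypothetical; it assumes SQLAlchemy and the PyMySQL driver are installed.

```python
# Illustrative extraction step: MySQL batch into a pandas DataFrame.
# Connection string and schema are hypothetical placeholders.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mysql+pymysql://user:password@localhost/vehicles")

df = pd.read_sql(
    "SELECT vin, model, list_price FROM stock WHERE updated_at >= CURDATE()",
    engine,
)
print(df.head())
```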
Qualifications:
- Proven experience in data engineering with a senior-level understanding of data architecture, database design, and data modeling.
- Strong proficiency in SQL and Python and familiarity with AWS. Openness to learning and incorporating additional languages or tools as necessary.
- Experience with ETL processes, data integration, and automation.
- An enthusiasm for self-development, with a keen interest in exploring new technologies and methodologies.
- Excellent collaboration and communication skills, with the ability to work effectively in a fast-paced, innovative environment.
- Proven experience as a Data Scientist, Data Engineer, or Data Analyst.
- Strong expertise in data modeling and architectures (data lake, data hub, data vault).
- Experience with database design, ETL/ELT processes, and data pipelines.
- Experience with data warehousing, cloud-based data platforms such as Snowflake, and Databricks running on AWS or Azure.
- Problem-solving aptitude.
- BSc/BA in Computer Science, STEM field, or significant experience in place of degree; graduate degree in Data Science or other quantitative field is preferred.
About Qubit Labs
Employee benefits
- English Courses
- Flexible working hours
- Health insurance
- Paid sick leave
- SQL
- Graylog
- SoapUI
- Postman
- Swagger
- Fiddler
- DWH
- API testing
- CI/CD
- TeamCity
- Octopus Deploy
- Jenkins
Wireless Standard POS (Point-of-Sale) is our retail management solution for the telecom market. It provides thousands of retailers with the features and functionality they need to run their businesses effectively, with full visibility and control over every aspect of sales and operations. It is simple to learn and easy to use, and as the operation grows, more features can be added. Our system can optimize and simplify all retail processes in this business area.
As a member of B2B Soft’s QA Team you will collaborate closely with our great Development teams and work with several different testing directions including embedded, web, desktop and mobile.
You will have the opportunity to learn and help improve the quality of our deep and complex product architecture by diving into customer needs, working with our Product, Business Analytics, and Architecture teams, and communicating with customer representatives.
What you’ll be doing:
- Provide functional, regression, and smoke manual testing of B2B Soft Reporting products (general reports, BI reports, APIs, subscription services) and data analysis (an API smoke-test sketch follows this list);
- Investigate complex solutions and find root causes;
- Create and maintain testing documentation;
- Participate in the analysis and estimation of product requirements;
- Provide full coverage of QA processes in a Scrum-based environment;
- Conduct demos for the Product Owner;
- Report to the QA Lead/Delivery Manager.
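For the API-testing portion of the role, a smoke test of a reporting endpoint might look like the sketch below, using the requests library; the URL and response field are illustrative stand-ins, not the product's real API.

```python
# Hypothetical smoke test for a reporting API endpoint (requests + pytest).
# BASE_URL and the "rows" field are illustrative placeholders.
import requests

BASE_URL = "https://api.example.com"

def test_sales_report_endpoint():
    resp = requests.get(f"{BASE_URL}/reports/sales", timeout=10)
    assert resp.status_code == 200
    assert "rows" in resp.json()
```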
Requirements:
- 3+ years of experience in QA;
- Experience testing web apps;
- Good understanding of QA methodologies & practices;
- Strong SQL skills (queries, tracing, executing stored procedures);
- Experience with log management systems (e.g., Graylog);
- Experience testing client-server applications;
- Experience testing web services with SoapUI, Postman, Swagger, Fiddler, etc.;
- Good written and spoken English skills (intermediate +).
Would be a plus:
- Experience in DWH testing
- Technical education; understanding of accounting;
- Experience testing complex applications that include integrations with various third parties (APIs, etc.);
- Experience with CI/CD infrastructure (e.g. TeamCity, Octopus, Jenkins).
Soft skills needed:
- Mature, self-organized and responsible person;
- Cooperative and flexible team player;
- Analytical skills;
- Proactive and result-oriented person.
About B2B Soft
Employee benefits
- Health insurance
- Regular salary reviews