DevOps Engineer vacancies
We are building a cutting-edge Data Science team in our bank, focusing on integrating Data Science/Machine learning technologies into the Bank's services.
Our mission is to revolutionize the banking experience by leveraging Data Science/Machine learning for enhanced security, personalized customer interactions, and streamlined operations.
We utilize a range of modern technologies and methodologies to ensure our systems are robust, scalable, and secure.
We follow DevOps principles to maintain a seamless integration and deployment process, ensuring continuous delivery and continuous integration (CI/CD).
Our development practices are based on Agile methodology, specifically Scrum, to foster a collaborative and iterative approach to problem-solving.
As part of this innovative project, you will be at the forefront of integrating Data Science/Machine learning with financial services, addressing challenges in information security, fraud detection, customer service automation, and more.
This role offers the opportunity to work with a diverse range of technologies and contribute to a transformative initiative within the banking sector.
Required skills:
- 3+ years of experience as a DevOps engineer.
- Strong experience with Ansible and Terraform.
- 1+ years of experience with AWS (including maintenance of RDS).
- 1+ years of experience with Docker and Kubernetes.
- 1+ years of experience with RabbitMQ, PostgreSQL, MongoDB, ELK, Redis, and nginx.
- Strong knowledge of Linux (especially CentOS), the OSI model, load balancing, clustering, virtualization, and VMware.
- Experience in building CI/CD processes (experience with Git & Jenkins).
- Higher Education in a relevant field.
- Enthusiasm for exploring and implementing new technologies.
- Self-learning capabilities and a proactive attitude.
- Good communication and collaboration skills.
- Experience with AI/ML model deployment and monitoring.
- Knowledge of AI frameworks such as TensorFlow, PyTorch, or similar.
- Familiarity with data pipelines and tools like Apache Kafka and Spark.
- Understanding of MLOps practices and tools.
Desired skills:
- Experience with Vault, Consul, KeyCloak, Apiman, Prometheus.
- Experience with MLOps tools: Feast, DVC, MLflow
Scope of work:
- Creating and maintaining a high-load platform;
- Implementation, configuration, and deployment using Ansible;
- Working closely with Development to deploy new platform components and improve the existing system;
- Responding reliably to on-call issues;
- Managing the non-production integration infrastructure and providing it as a service;
- Driving the automation of multiple parts of the infrastructure and deployment systems, striving to improve and shorten processes so that engineering and operations teams can work smarter and faster with high quality;
- Setting up and maintaining a working DEV environment for the collective work of the Data Science team;
- Developing and setting up CI/CD pipelines for machine learning models (see the sketch after this list);
- Participating in the operationalization/integration of developed machine learning models into the Bank's systems.
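To give a flavor of the ML-pipeline work mentioned above, here is a minimal, hedged sketch of a CI step that registers a trained model in an MLflow registry. The tracking URI, run ID, and model name are hypothetical placeholders, not details of the Bank's setup.

```python
# Minimal sketch of a CI step that promotes a trained model to the MLflow
# Model Registry. Tracking URI, run ID, and model name are placeholders.
import mlflow
from mlflow.tracking import MlflowClient

mlflow.set_tracking_uri("http://mlflow.example.internal:5000")  # assumed registry endpoint

RUN_ID = "abc123def456"        # run produced by the training pipeline (placeholder)
MODEL_NAME = "fraud-detector"  # registry name (placeholder)

# Register the model artifact logged under that run.
version = mlflow.register_model(f"runs:/{RUN_ID}/model", MODEL_NAME)

# Tag the new version so a downstream deployment job can pick it up.
client = MlflowClient()
client.set_model_version_tag(MODEL_NAME, version.version, "ci_status", "passed")
print(f"Registered {MODEL_NAME} version {version.version}")
```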
Why ПУМБ?
- Remote work/Comfortable work environment (your choice);
- Continuous professional competencies development and professional growth opportunities;
- Annual paid vacation;
- Medical insurance;
- Friendly team of young and talented professionals.
About the company ПУМБ
Employee benefits
- Team buildings
- No dress code
- Flexible working hours
- Medical insurance
- Linux
- Bash
- Python
- Puppet
- Ansible
- Chef
- TCP/IP
- HTTP
- HTTPS
- DNS
- SSH
- FTP
- Nagios
- Prometheus
- ELK
- SQL
- MySQL
- PostgreSQL
- NoSQL
- Docker
- Kubernetes
- Docker Swarm
- AWS
- Azure
- GCP
- Scrum
- Kanban
- Jenkins
- Terraform
- CircleCI
We are the Hosting team, and we are currently building new infrastructure for Shared Hosting as part of a company-wide rebuild initiative.
We need a teammate to share our passion for building and improving hosting infrastructure, making it secure, manageable, and smart.
Within the Scrum / Kanban frameworks we use to build working processes, you can take your time for a task, plan your work, and dig deep into it without being bothered by external alerts. We do more R&D and less operations work.
You will be part of a team engaged in a common effort; we will support each other, discuss, and challenge ideas.
Your expertise:
- 2+ years of DevOps / Linux SysAdmin work experience
- Proficient in Linux / Unix administration
- Scripting and Automation proficiency (Bash, Python) to automate operations
- Configuration Management Tools (Puppet preferred; Ansible, Chef, or similar is fine)
- Virtualization. Understanding of virtualization technologies (preferably KVM)
- Networking Knowledge. Understanding of network protocols (TCP/IP, HTTP / HTTPS, DNS, SSH, FTP) and network security practices
- Monitoring and Logging Tools. Experience with monitoring and logging solutions such as Nagios, Prometheus, the ELK Stack (Elasticsearch, Logstash, Kibana), or similar tools to monitor system health and diagnose issues (see the sketch after this list)
- Database Administration. Familiarity with database management, preferably SQL (e.g., MySQL, PostgreSQL); NoSQL databases would be a plus
- Understanding of Containerization and Orchestration (any of Docker, Kubernetes, Docker Swarm) concepts. This includes creating, managing, and deploying containers
- Security Practices. Knowledge of security best practices, including securing servers and applications, managing firewalls and intrusion detection systems, and conducting security audits
- Disaster Recovery and Backup Solutions
- Ability to work as part of a team
- A strong drive for self-development
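To make the monitoring expectation above concrete, here is a minimal sketch that pulls a health signal from a Prometheus server over its HTTP API. The server URL and query are illustrative assumptions, not part of the vacancy.

```python
# Minimal sketch: ask a Prometheus server which scrape targets are down.
# The Prometheus URL is an assumed placeholder.
import requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"  # placeholder

resp = requests.get(
    f"{PROMETHEUS_URL}/api/v1/query",
    params={"query": "up == 0"},  # instances whose "up" metric is zero
    timeout=10,
)
resp.raise_for_status()
for result in resp.json()["data"]["result"]:
    labels = result["metric"]
    print(f"DOWN: job={labels.get('job')} instance={labels.get('instance')}")
```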
Will definitely be a plus:
- Experience with CloudLinux OS (LVE, CageFS, PHP Selector, etc.)
- Experience with cPanel Shared Hosting administration
- Experience with Scrum / Kanban processes
- Cloud Services Proficiency (any of AWS, Azure, or Google Cloud)
- CI / CD Pipeline Creation and Management (any of Jenkins, GitLab CI, or CircleCI)
- Infrastructure as Code (IaC) tools to manage infrastructure entities (like with Terraform)
You will be involved in:
- Manage and maintain our hosting infrastructure, ensuring high availability, performance, and security of our web services. This role involves working closely with both Developers and Admins on:
- optimization of service configuration (stability, performance, manageability, etc.)
- automation of manual tasks
- security enhancement
- design, development, and implementation of new approaches
- Document research and development results to be shared with other teams
- Participate in building new Hosting Infrastructure for new products
- Be an active part of the team to improve technical and soft processes
About the company ZONE3000
Employee benefits
- English Courses
- Gaming room
- Bicycle parking
- Flexible working hours
- Relaxation room
- Coffee, fruit, snacks
- Paid sick leave
- Educational programs and courses
- PostgreSQL
- Redis
- Elasticsearch
- ClickHouse
- AWS
- CI/CD
- GitHub Actions
- Grafana
- MongoDB
- Docker Swarm
JOIDY is looking for a Senior DevOps Engineer to join our team and manage our infrastructure, deployment pipelines, and system performance.
Tech Stack:
- Containers & Orchestration: Docker (Docker Swarm);
- Databases: PostgreSQL, Redis, MongoDB, ClickHouse (important!), checked in the connectivity sketch after this list;
- Data Tools: Superset, AirByte;
- Monitoring & Logging: Prometheus, Grafana, Loki, Tempo, Sentry;
- Cloud: AWS EC2, AWS S3;
- CI/CD: GitHub Actions.
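Below is a hedged sketch of a quick connectivity probe against the data stores named in this stack. Hostnames and credentials are placeholders, and it assumes the psycopg2-binary, redis, and clickhouse-driver packages are installed.

```python
# Minimal connectivity probe for the data stores named in the stack.
# Hostnames/credentials are placeholders, not the company's real endpoints.
import psycopg2
import redis
from clickhouse_driver import Client as ClickHouseClient

def check_postgres():
    conn = psycopg2.connect(host="db.example.internal", dbname="app",
                            user="monitor", password="secret")  # placeholder creds
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
        assert cur.fetchone() == (1,)
    conn.close()

def check_redis():
    assert redis.Redis(host="redis.example.internal", port=6379).ping()

def check_clickhouse():
    assert ClickHouseClient(host="clickhouse.example.internal").execute("SELECT 1") == [(1,)]

for name, check in [("postgres", check_postgres), ("redis", check_redis),
                    ("clickhouse", check_clickhouse)]:
    try:
        check()
        print(f"{name}: OK")
    except Exception as exc:  # a real probe would alert instead of printing
        print(f"{name}: FAILED ({exc})")
```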
Required years of experience per technology:
- PostgreSQL - 5y+;
- Redis - 5y+;
- Elasticsearch - 2y+;
- ClickHouse - 2y+;
- One technology from the list may be missing.
Responsibilities:
- Implement and maintain scalable infrastructure;
- Automate deployment pipelines;
- Monitor and troubleshoot system performance.
Requirements:
- Experience with our tech stack;
- Strong cloud and automation skills (AWS preferred);
- Familiarity with CI/CD pipelines (GitHub Actions a plus).
About the company JOIDY
Employee benefits
- English Courses
- Team buildings
- Flexible working hours
- Regular salary reviews
- Linux
- Kubernetes
- TCP/IP
- VCS
- Git
- Kafka
- CI/CD
- Jenkins
- TeamCity
- ArgoCD
- Ansible
- Hadoop
- Kubeflow
- Apache Airflow
- Golang
Responsibilities:
- Deep understanding of the company applications and IT infrastructure;
- Support and maintain the existing data infrastructure, ensuring reliability and efficiency;
- Collaborate in the design and implementation of new infrastructure components;
- Being a focal point for R&D teams, providing expert guidance and support;
- Lead and participate in incident investigations;
- Deliver solutions and provide support for critical IT environments, ensuring high availability and resilience;
- Coordinate with external vendors on new deployments and technical support.
Requirements:
- At least 4+ years of practical experience in Linux administration;
- Experience with containerization and container orchestration systems (e.g. Kubernetes);
- Understanding of TCP/IP network stack;
- Experience with VCS (preferably Git);
- Good troubleshooting skills and basic application debugging;
- Understanding/experience with message brokers (e.g. Apache Kafka);
- Experience with CI/CD systems such as Jenkins, TeamCity, ArgoCD;
- Experience in managing systems with configuration management (Ansible);
- Familiarity with databases.
Nice to have:
- Experience with Hadoop ecosystem (HDFS, YARN, Spark, Hive);
- Experience with private clouds, on-premise data centers, on-premise kubernetes.
- Experience with DAG orchestration tools such as Kubeflow, Airflow.
- Experience in developing Kubernetes-operators (Golang).
- Experience with GPUs in production environments.
About the company Playtika
- Linux
- CentOS
- Red Hat
- Ansible
- Puppet
- Chef
- Terraform
- AWS CloudFormation
- Bash
- Python
- JavaScript
- Nginx
- AWS
- Groovy
- CI/CD
- GCP
- Azure
What you will do:
- Operate the bank's information systems and ensure their stable functioning
- Operate the Data Warehouse (Big Data)
Required knowledge, experience, and personal qualities:
- 2+ years of experience administering Linux (CentOS / RedHat, etc.)
- Experience with automation using configuration management systems (Ansible, Puppet, Chef, etc.)
- Experience with orchestration systems (Terraform, AWS CloudFormation)
- Ability to use scripting languages (Bash, Python, JScript, etc.)
- Experience supporting Dockerized applications; experience with AWS cloud services
- Experience with monitoring systems; understanding of the basic principles of Nginx
An additional advantage would be:
- Experience with AWS management services, Python, Groovy, and CI/CD tools
- Experience with Java applications; AWS certifications; knowledge of and experience with cloud providers (GCP, Azure)
About the company ПриватБанк
Employee benefits
- English Courses
- Fitness Zone
- Gaming room
- Relaxation room
- Coffee, fruit, snacks
- Training compensation
- Medical insurance
- AWS
- Terraform
- GCP
- Azure
- Docker
- Git
- Github
- GitLab
- CI/CD
- Jenkins
- Linux
- Oracle
- PostgreSQL
- PHP
- Shell
- Kibana
- Grafana
- SonarQube
- ETL
- Apache Airflow
- Python
- Erlang
- Zabbix
What you will do:
- Automate all software deployment processes
- Prepare the operational environment for changes
- Support and standardize the test environment
- Create process monitoring tools
- Control application performance
- Respond online to issues during software deployment
Required knowledge, experience, and personal qualities:
- Higher education (preferably technical)
- 2+ years of experience in a DevOps role
- Experience with AWS (EC2 instances, Auto Scaling groups, etc.)
- Experience building cloud infrastructure with Terraform
- Experience with any cloud providers (Google Cloud, Microsoft Azure)
- Experience with containerization (Docker)
- Understanding of and experience with version control systems (Git, GitHub, GitLab)
- Self-organization for remote work in a team
- Building CI/CD pipelines (Jenkins, GitLab)
- Knowledge of Linux
- English at Elementary (A1) level
An additional advantage would be:
- Experience with Oracle, PostgreSQL, PHP, Linux, Shell
- Experience integrating Kibana, Grafana, SonarQube, etc.
- Practical skills with ETL (Airflow), Python, or Erlang (see the sketch after this list)
- Understanding of and experience with Zabbix
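As a rough illustration of the ETL/Airflow skill mentioned above, here is a minimal Airflow DAG sketch. The DAG id, schedule, and extract step are placeholders, and it assumes Airflow 2.4+ where the schedule argument is available.

```python
# Minimal Airflow DAG sketch (illustrative only; dag_id, schedule, and the
# extract step are placeholders, not part of the vacancy).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling data from the source system")  # placeholder ETL step

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="extract", python_callable=extract)
```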
About the company ПриватБанк
Employee benefits
- English Courses
- Fitness Zone
- Gaming room
- Relaxation room
- Coffee, fruit, snacks
- Training compensation
- Medical insurance
- AWS
- Azure
- GCP
- Datadog
- Bash
- PowerShell
- Python
- Splunk
- Prometheus
- Zabbix
- IaC
- Terraform
- Azure ARM templates
- Azure CLI
- Ansible
- Kubernetes
- CI/CD
Svitla Systems Inc. is looking for a Senior DevOps Engineer (Azure/Datadog) for a full-time position (40 hours per week) in Ukraine and Europe.
Our client is a leading expert network connecting business and government professionals with industry experts to support informed decision-making. They provide a research enablement platform powered by real-time data, innovative technology, and specialized expertise. Through calls, conferences, surveys, and workshops, the platform enables clients to gain insights across industries like healthcare, finance, consumer goods, energy, technology, and legal sectors. Since 2003, the company has partnered with top consulting firms, hedge funds, and Fortune-ranked companies, helping them turn insights into action.
You will join the CloudOps team and help drive the optimization and performance of the infrastructure monitoring and observability practices.
You will be responsible for managing, maintaining, and optimizing Datadog for comprehensive monitoring and observability across our Azure infrastructure, Kubernetes environments, and application services.
By leveraging Datadog’s tools for monitoring, alerting, and automated remediation, you will play a key role in ensuring the high availability, reliability, and performance of cloud-based systems.
Requirements
- 5+ years of experience with clouds (AWS, GCP, Azure) in general.
- 3+ years of experience with Datadog as a monitoring and observability platform.
- At least 2-3 years of hands-on experience managing and optimizing Azure cloud infrastructure.
- Experience in automation using Datadog’s integration features (alerts, monitoring dashboards, and automated remediation).
- 2+ years of experience in Azure cost management (FinOps) and cloud cost optimization practices.
- Understanding of scripting with Bash, PowerShell, Python, or similar languages.
- Strong troubleshooting and debugging capabilities in an agile software development environment.
- Experience with other monitoring tools (e.g., Splunk, Prometheus, Zabbix) is a plus.
- Strong problem-solving skills and a proactive approach to system monitoring and issue resolution.
- Proven experience managing projects and meeting deadlines while maintaining high-quality standards.
- The ability to prioritize tasks effectively and exhibit good judgment when managing resources.
- Excellent interpersonal and communication skills for cross-team collaboration.
- Independent and self-motivated individual with the ability to drive tasks to completion.
- A team-oriented person with a collaborative mindset who can work in a fast-paced, agile environment.
- Strategic thinking with the ability to balance operational needs with long-term goals.
- The ability to take ownership of tasks and a strong sense of accountability.
- At least upper-intermediate English level.
Overlap till 8 pm Kyiv time/7 pm CET is a must. The client’s team is in the EST time zone.
Nice to have
- Experience with other monitoring tools (e.g., Splunk, Prometheus, Zabbix).
- Knowledge of Infrastructure as Code using tools like Terraform, ARM templates, or Azure CLI.
- Azure Solutions Architect Expert certification or equivalent.
- Azure Security Engineer certification (Associate level).
- Familiarity with Ansible for automation and configuration management.
- Advanced knowledge of Kubernetes and container orchestration best practices.
- Experience with CI/CD pipelines and integrating Datadog with DevOps processes.
Responsibilities
- Datadog Implementation & Management: Take complete ownership of Datadog to monitor infrastructure, services, and applications across multiple environments (production, development, test). Ensure optimal configurations for observability and alerting.
- Performance & Health Monitoring: Using Datadog, monitor infrastructure and application performance, identify potential issues, and create automated remediation workflows to resolve them.
- Cost Management: Optimize and monitor Azure cloud costs using Datadog and other cloud tools, tracking and improving resource usage and cost efficiency.
- Automation & Remediation: Leverage Datadog’s alerting system and integrations to automate the remediation of common infrastructure and application issues (see the sketch after this list).
- Kubernetes & Cloud Infrastructure: Collaborate with CloudOps and Engineering teams to monitor and optimize Kubernetes environments, ensuring containers, pods, and services run efficiently.
- Collaboration: Work closely with the Engineering, AppOps, and CloudOps teams to address complex infrastructure challenges and ensure smooth deployments and high availability.
- Security & Compliance: Ensure security and compliance best practices are followed for monitoring and logging, participating in security audits and incident response activities as required.
- Infrastructure as Code: Support the automation and deployment of infrastructure using tools like Terraform and Azure Resource Manager (ARM).
- FinOps: Contribute to FinOps activities by tracking resource usage and optimizing cloud costs, providing data-driven insights into cost-saving opportunities.
- Best Practices & Optimization: Continuously review and improve monitoring configurations, workflows, and processes for maximum efficiency, performance, and security.
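As one hedged example of the Datadog automation described above, the sketch below creates a metric monitor through Datadog's public REST API from Python. The query, threshold, notification handle, and environment variable names are assumptions, not the client's actual configuration.

```python
# Minimal sketch: create a Datadog metric monitor via the public v1 API.
# Query, threshold, and env-var names are illustrative assumptions.
import os
import requests

headers = {
    "DD-API-KEY": os.environ["DD_API_KEY"],
    "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
    "Content-Type": "application/json",
}

monitor = {
    "name": "High CPU on production nodes",
    "type": "metric alert",
    "query": "avg(last_5m):avg:system.cpu.user{env:production} > 80",
    "message": "CPU above 80% for 5 minutes. @slack-cloudops",  # placeholder notification
    "options": {"thresholds": {"critical": 80}},
}

resp = requests.post("https://api.datadoghq.com/api/v1/monitor",
                     json=monitor, headers=headers, timeout=10)
resp.raise_for_status()
print("Created monitor id:", resp.json()["id"])
```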
About the company Svitla
Employee benefits
- English Courses
- Pet-friendly
- Team buildings
- Work-life balance
- Parental leave
- Flexible working hours
- Coffee, fruit, snacks
- Sports expenses compensation
- Training compensation
- Medical insurance
- Paid public holidays
- Paid sick leave
- Regular salary reviews
- New Relic
- YAML
- Terraform
- IaC
- Puppet
- PowerShell
- Python
Join a dynamic and collaborative team supporting a UK-based company specializing in financial and administrative services.
Project overview
This long-term project focuses on the strategic migration of the client’s infrastructure from on-premises Windows servers to a modern, AWS-based solution.
You will work closely with the UK-based development team and technical leads to ensure a smooth transition to the cloud while building and automating robust DevOps pipelines and infrastructure.
The project employs Agile methodologies and operates under the Scrum framework to maintain transparency, adaptability, and efficiency throughout development cycles.
Position overview
This is an exciting opportunity to contribute to a cutting-edge migration while honing your expertise in cloud platforms, automation, and DevOps best practices within a supportive, collaborative environment.
Requirements
- Experience with New Relic (NRQL) for monitoring and performance analysis (see the sketch after this list)
- Knowledge of YAML for configuration management
- Hands-on experience with Terraform and Infrastructure as Code (IaC)
- Expertise in Puppet for configuration management
- Proficiency in PowerShell and Python for scripting and automation
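To illustrate the NRQL requirement, here is a rough sketch that runs an NRQL query through New Relic's NerdGraph GraphQL endpoint from Python. The account id, API key variable, and query are placeholders, and the exact GraphQL shape should be checked against New Relic's current documentation before relying on it.

```python
# Rough sketch: run an NRQL query via New Relic's NerdGraph GraphQL API.
# Account id, API key env var, and the NRQL itself are placeholders.
import os
import requests

ACCOUNT_ID = 1234567  # placeholder
query = """
{
  actor {
    account(id: %d) {
      nrql(query: "SELECT average(duration) FROM Transaction SINCE 30 minutes ago") {
        results
      }
    }
  }
}
""" % ACCOUNT_ID

resp = requests.post(
    "https://api.newrelic.com/graphql",
    json={"query": query},
    headers={"API-Key": os.environ["NEW_RELIC_API_KEY"]},
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["data"]["actor"]["account"]["nrql"]["results"])
```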
About the company DataArt
Employee benefits
- English Courses
- Fitness Zone
- Gaming room
- Paid overtime
- Team buildings
- Work-life balance
- No dress code
- Parental leave
- Large, stable company
- Bicycle parking
- Flexible working hours
- Long-term projects
- Relaxation room
- Coffee, fruit, snacks
- Medical insurance
- Paid sick leave
- Educational programs and courses
- Kubernetes
- Linux
- AWS
- Terraform
- Ansible
- Chef
- Python
- Bash
- Prometheus
- Grafana
We are Wix’s Infrastructure Guild. We’re at the core of building and maintaining the production and CI/CD environment. Our mission is to improve production performance and resilience to ensure 99.99% uptime for millions of users and drive our developer velocity. We also design, develop, and operate cutting-edge services and robust tools for the entire developer community – both Frontend and Backend.
Job Description
As a DevOps Engineer at Wix, you’ll play a pivotal role in our cloud infrastructure group, driving the development, optimization, and maintenance of our cloud-based infrastructure. You will:
- Design, implement, and maintain scalable and reliable cloud infrastructure solutions
- Collaborate with cross-functional teams to understand business requirements and translate them into effective cloud solutions
- Manage Kubernetes infrastructure of different flavors on top of the AWS cloud at scale (see the sketch after this list)
- Use your expertise in Linux and networking fundamentals to enhance system performance, security, and reliability
- Implement and maintain Infrastructure as Code (IaC) practices to ensure consistency and efficiency in deployment processes
- Troubleshoot and resolve complex issues related to cloud infrastructure and services
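For a flavor of the cluster-administration work described above, here is a minimal sketch that uses the official Kubernetes Python client to list pods that are not healthy. It assumes a local kubeconfig and is illustrative only, not Wix's tooling.

```python
# Minimal sketch: list pods that are not in the Running/Succeeded phase,
# using the official Kubernetes Python client and a local kubeconfig.
from kubernetes import client, config

config.load_kube_config()  # assumes ~/.kube/config is present
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    if pod.status.phase not in ("Running", "Succeeded"):
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
```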
Qualifications:
- You have at least 5 years of experience working in live production systems
- You have hands-on experience with Kubernetes as a cluster administrator.
- You have experience in Linux and are well-versed in networking fundamentals
- You have proven experience with AWS
- You have strong knowledge of provisioning and configuration management systems (e.g., Terraform, Ansible, Chef, etc.)
- You’re familiar with scripting languages like Python and Bash
- You have experience with monitoring tools (Prometheus, Grafana)
- You have strong communication and collaboration skills
About the company Wix
Employee benefits
- Fitness Zone
- Coffee, fruit, snacks
- Azure
- Microsoft IIS
- Nginx
- Apache
- MySQL
- Microsoft SQL Server
- Azure SQL
- Oracle
- PostgreSQL
- Bash
- Python
- SQL
- PowerShell
- Kubernetes
- Docker
- Helm
- Terraform
- GitLab
- Jenkins
- Groovy
- Linux
- Windows
As part of our Core Business Unit, you will work with various product lines across the company. The Core BU is the backbone, ensuring seamless collaboration among business units, departments, and teams – acting as the glue that keeps everything running smoothly.
Your future team of professionals
You will join the IT Department, which brings together System Administrators, DevOps, and Security Engineers to ensure the stability, scalability, and security of our infrastructure. The team develops and maintains cutting-edge solutions, optimizes processes, and safeguards the company’s digital assets.
Technical stack
Azure, Web Servers (IIS, Apache, nginx), Databases (MySQL, SQL Server, AzureSQL, PostgreSQL, Oracle), scripting languages (Bash, Python, SQL, PowerShell, Groovy for Jenkins), Terraform, containerization tools (Docker, Kubernetes, Helm).
Your future challenges
The DevOps Engineer drives seamless system integration and deployment at Devart Core, enhancing system performance, optimizing resources, and ensuring high-quality software delivery. Focused on automation, scalability, and security, this role also leads efforts to refine infrastructure and processes.
Key responsibilities include managing cloud services (Azure), server administration, CI/CD pipeline automation, containerization (Kubernetes, Docker), and scripting. Success in this role demands a proactive approach to solving complex technical challenges, strong collaboration skills, and a commitment to continuous improvement and innovation. This role directly impacts the operational efficiency and strategic goals of the unit.
Impact you will make
- Manage Azure cloud infrastructure by configuring services, ensuring security, performing diagnostics and optimization, as well as setting up test environments and migrating data.
- Develop, implement, improve, and maintain continuous integration and delivery (CI/CD) processes to enhance deployment workflows.
- Administer servers by supporting company websites, ensuring security, and maintaining fault tolerance for uninterrupted operations.
- Oversee database administration, support general information security, introduce new technologies, provide L2 and L3 technical support, and participate in internal department projects.
Skills we are looking for
- 5+ years of professional experience, well-versed in Azure services.
- Strong knowledge of web servers (IIS, nginx, Apache) and proficiency in managing databases such as MySQL, SQL Server, Azure SQL, Oracle, and PostgreSQL.
- Proficiency in scripting languages including Bash, Python, SQL, and PowerShell.
- Experience with containerization and orchestration tools (Kubernetes, Docker, Helm) and infrastructure tools like Terraform, GitLab, and Jenkins (including Groovy).
- Solid understanding of Linux and Windows environments, with knowledge of network architecture, security, threat mitigation, and load balancing.
About the company Devart
Employee benefits
- English Courses
- No overtime
- Team buildings
- Flexible working hours
- Medical insurance
- Paid sick leave
- AWS
- Terraform
- Python
- Node.js
- CI/CD
Binariks is looking for a highly motivated and skilled Senior DevOps Engineer to join the team
About the project:
Our client leads the digital transformation of surgery. Their first-of-its-kind healthcare platform brings together telepresence, content management, and data insights to enable hospitals, surgeons, and device companies to share information and expertise.
What We’re Looking For
- 5+ years of DevOps experience required
- Extensive experience with AWS
- Experience with Terraform
- Comfortable in writing Python and Node scripts to support CI/CD developments
- Excellent written and verbal communication skills with customers
- Upper-intermediate level of spoken and written English
Your Responsibilities
- Working closely with the Quality Engineering team to support the changes to implement to speed up CI and CD
- Managing infrastructure running on AWS
- Supporting development processes
- Improving application performance
About the company Binariks
Employee benefits
- English Courses
- Gaming room
- Accounting support
- Training compensation
- Medical insurance
- Paid sick leave
- Educational programs and courses
- Microsoft Azure
- CI/CD
- Azure DevOps
- YAML
- IaC
- ARM
- Azure Bicep
- Terraform
- Azure CLI
- PowerShell
- Docker
- AKS
- RBAC
- Azure Key Vault
- Git
- Github
- Python
- Bash
- AWS
- GCP
We are looking for a skilled Full-Time Azure DevOps Engineer to support projects, including internal initiatives and potentially new applications.
This role is ideal for a DevOps professional with strong experience in Microsoft Azure who is motivated to automate processes, enhance performance, and ensure the security and reliability of our systems.
Requirements:
- Azure Core Services:
- Proficiency in managing Azure resources (Virtual Machines, Storage, Virtual Networks).
- Experience with Azure Compute (AKS, Azure Container Instances) and Azure Functions.
- Knowledge of Azure Networking (Virtual Networks, Load Balancers, NSGs) and Storage Solutions (Blob Storage, Azure Files).
- Continuous Integration & Deployment (CI/CD):
- Experience with Azure DevOps for CI/CD pipeline creation and management.
- Knowledge of YAML-based pipelines and automation of build, test, and deployment processes.
- Experience managing artifacts and dependency feeds.
- Infrastructure as Code (IaC):
- Hands-on experience with ARM templates, Bicep, and Terraform for Azure resource provisioning.
- Familiarity with Azure CLI & PowerShell for automation.
- Containerization & Orchestration:
- Experience creating and managing Docker containers and Azure Kubernetes Service (AKS).
- Knowledge of Kubernetes basics (pods, deployments, services) and Helm for Kubernetes management.
- Monitoring, Logging & Security:
- Proficient with Azure Monitor, Log Analytics, and Application Insights.
- Strong understanding of Role-Based Access Control (RBAC) and security policies.
- Familiarity with secrets management using Azure Key Vault.
- Version Control & Collaboration:
- Strong knowledge of Git and experience with Azure Repos or GitHub.
- Familiarity with Git branching strategies and collaborative tools like Azure Boards.
- Automation & Configuration Management:
- Experience with Azure Automation, Runbooks, and scripting with PowerShell or Python (see the sketch below).
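As a small, hedged example of Azure automation from Python (an alternative to the PowerShell route named above), the sketch below enumerates resource groups with the Azure SDK. The subscription id environment variable name is a placeholder assumption.

```python
# Minimal sketch: enumerate resource groups with the Azure SDK for Python.
# Requires azure-identity and azure-mgmt-resource; the subscription id
# environment variable name is a placeholder assumption.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()  # env vars, managed identity, or az login
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]

client = ResourceManagementClient(credential, subscription_id)
for rg in client.resource_groups.list():
    print(rg.name, rg.location)
```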
Nice to Have:
- Advanced scripting skills in Python or Bash.
- Familiarity with multi-cloud environments (AWS, GCP).
- Knowledge of advanced CI/CD practices (Blue-Green Deployments, Feature Flags).
Responsibilities:
- Lead and manage multiple technical projects from initiation to completion, ensuring they meet scope, timeline, and budget requirements.
- Collaborate with internal and external stakeholders, maintaining transparent and continuous communication.
- Coordinate cross-functional teams of developers, engineers, designers, and analysts to ensure efficient resource allocation.
- Develop and maintain detailed project plans, timelines, and technical documentation.
- Identify and mitigate risks proactively to avoid project delays or budget overruns.
- Implement best practices for project delivery and continuous process improvement.
- Manage changes to project scope, schedule, and budget, aligning with business objectives.
- Ensure deliverables meet defined quality standards and client expectations.
About the company Hygge Software
Employee benefits
- Paid overtime
- Flexible working hours
- Training compensation
- Educational programs and courses
- Regular salary reviews
- AWS
- EC2
- AWS RDS
- Amazon S3
- CI/CD
- Kubernetes
MWDN company is looking for a self-motivated and goal-oriented skilled DevOps Engineer to join our client's team and help them optimize, manage, and scale their AWS infrastructure to meet growing demands.
What makes this project exciting?
This company revolutionizes the roofing industry with a cutting-edge software platform that empowers contractors and property owners. As a team member, you’ll be part of an innovative environment that uses advanced satellite, aerial, and drone technology to streamline every aspect of roofing projects.
Imagine working on tools that allow contractors to measure roofs with precision, visualize clients' new roofs, and generate instant estimates—all of which save time, increase accuracy, and improve client satisfaction.
By applying to this position, you’ll contribute to a company that improves the industry and helps businesses grow by integrating cutting-edge technology into everyday workflows.
They are committed to fostering innovation and efficiency and invite passionate individuals eager to make a lasting impact in the tech-driven construction world to join their journey.
What makes you a great fit
- Proven Experience: 3+ years of experience as a DevOps Engineer, with a focus on AWS infrastructure.
- AWS Expertise: Deep understanding of AWS services (EC2, RDS, S3, etc.).
- Database Management: Strong knowledge of database optimization, replication, and scaling.
- CI/CD: Proficiency with continuous integration and deployment pipelines.
- Scaling Solutions: Experience in scaling cloud infrastructure to support high-performance services.
- Monitoring & Alerts: Proficient in setting up and using monitoring tools.
- Level of spoken English: at least an upper-intermediate.
Nice to Have:
- Scripting & Automation: Hands-on experience with automation tools and scripting languages
- Familiarity with Kubernetes or container orchestration platforms.
- Security Best Practices: Knowledge of AWS security protocols and best practices, including access control, encryption, and incident response.
Your day-to-day in this position
- AWS Infrastructure Management: Optimize and manage AWS services, including EC2 instances and RDS databases.
- Database Optimization: Optimize RDS performance, explore database scaling opportunities, and introduce database replication and distribution of requests (e.g. using ProxySQL).
- Automated Deployment: Design and implement automated deployment pipelines for a variety of microservices built on different technologies.
- Initial Deployment Support: Assist in setting up server infrastructure for new microservices, reducing time spent on manual setup processes.
- Scaling Solutions: Explore and implement scaling strategies for EC2 instances, ensuring AI and ML services are running smoothly even under heavy loads.
- Monitoring & Alerts: Enhance our existing monitoring systems to provide more in-depth performance metrics and timely notifications (see the sketch after this list).
- Backup Management: Develop and manage comprehensive backup strategies for all critical systems, beyond the current RDS backups.
- Access Management: Set up and manage external access controls for servers and internal user management, ensuring secure and reliable operations.
- Security Compliance: Resolve AWS and server-side security issues, ensuring the infrastructure is secure and compliant with best practices.
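As a hedged illustration of the monitoring-and-alerts work above, this sketch creates a CloudWatch alarm on RDS CPU utilization with boto3. The region, DB instance identifier, threshold, and SNS topic ARN are placeholders, not the client's real resources.

```python
# Minimal sketch: create a CloudWatch alarm on RDS CPU utilization with boto3.
# Region, DB instance identifier, SNS topic ARN, and threshold are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # assumed region

cloudwatch.put_metric_alarm(
    AlarmName="rds-high-cpu",
    Namespace="AWS/RDS",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "app-db"}],  # placeholder
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder topic
)
print("Alarm created")
```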
About the company MWDN
Employee benefits
- English Courses
- Team buildings
- English-speaking environment
- No bureaucracy
- Accounting support
- Flexible working hours
- Business trips abroad
- Coffee, fruit, snacks
- Paid sick leave
- Paid vacation
- Educational programs and courses
- Legal support
- CI/CD
- Jenkins
- GitLab
- CircleCI
- AWS
- GCP
- Microsoft Azure
- Docker
- Kubernetes
- Bash
- Python
- Prometheus
- Grafana
- ELK
- AWS Lambda
- Helm
- ISO 27001
- Terraform
- Ansible
- SOC2
We are seeking a DevOps Engineer for a part-time, on-demand engagement to support our development and operations teams. This role is ideal for a skilled professional who excels in building, automating, and managing scalable infrastructure while being available to contribute as needed.
What you’ll be doing:
- Design, implement, and maintain CI/CD pipelines to automate application development and deployment.
- Optimize and manage cloud infrastructure (AWS, Azure, or Google Cloud) for scalability and cost efficiency.
- Monitor system performance, troubleshoot issues, and ensure high availability of applications and services.
- Collaborate with development teams to streamline the software delivery lifecycle.
- Implement and maintain infrastructure as code using tools like Terraform or Ansible.
- Enhance system security by applying best practices for access control, encryption, and compliance.
- Provide on-demand support for deployments, system maintenance, and incident resolution.
What you’ll need:
- 5+ years of experience in a DevOps or similar role.
- Hands-on experience with CI/CD tools like Jenkins, GitLab CI/CD, or CircleCI.
- Proficiency in cloud platforms (AWS, Google Cloud, Azure) and containerization tools (Docker, Kubernetes).
- Strong scripting skills in Bash, Python, or similar languages.
- Experience with monitoring tools such as Prometheus, Grafana, or ELK Stack.
- Knowledge of networking, system administration, and database management.
- Strong problem-solving skills and ability to work independently.
Nice-to-Have Skills:
- Familiarity with serverless architectures and tools like AWS Lambda.
- Experience with Kubernetes orchestration and Helm charts.
- Knowledge of compliance and security frameworks (e.g., SOC 2, ISO 27001).
- Background in supporting agile development teams in a startup environment.
About the company Honeycomb Software
Employee benefits
- Educational programs and courses
- Regular salary reviews
- CI/CD
- GitLab
- Docker
- AWS CDK
- Terraform
- Kubernetes
- ECS
- JVM
- Java
- Kotlin
- Scala
- Prometheus
- Alertmanager
- Datadog
- Cognito
- Auth0
- Athena
- Amazon Redshift
- Okta
- MQTT
- Kafka
- AWS IoT Core
We invite an experienced, autonomous, and motivated DevOps Engineer to join our team. You’ll leverage your knowledge and expertise to help build innovative applications tailored for Point of Sale (POS) systems, with a focus on (but not limited to) Ingenico products. Our client is an independent consulting company specializing in digital payment and digital health that provides end-to-end consulting services and innovative solutions to clients worldwide.
Is that you?
- 5+ years of commercial DevOps experience
- Strong experience in setting up CI/CD pipelines with GitLab CI/CD; knowledge of release patterns
- Hands-on expertise with Docker
- Proficiency in IaC tools (CDK, Terraform)
- Skills in Kubernetes and ECS orchestration
- Solid background in JVM languages (Java, Kotlin, Scala)
- Familiarity with monitoring and alerting tools (Prometheus, Alertmanager, Datadog)
- Experience with Cognito or Auth0 and access control tools
- Proficiency in managing VPCs, Route 53, and Load Balancers and maintaining PCI-DSS and ISO27001 standard compliance
- Upper-Intermediate+ English level
Desirable:
- Experience with tools like Amazon Athena or Redshift
- Familiarity with Okta for identity and access management
- Knowledge of AWS IoT, MQTT, and Kafka for event-driven architectures
Key responsibilities and your contribution
In this role, you will help enhance the reliability, scalability, and performance of our systems by developing efficient CI/CD pipelines, implementing cloud infrastructure, and ensuring security and compliance.
- Develop and maintain efficient CI/CD pipelines for reliable software delivery across environments
- Implement and manage cloud infrastructure through code for scalable and secure deployment practices
- Set up and refine monitoring systems and timely incident alerts
- Integrate modern authentication and authorization solutions with secure access control protocols
- Configure network components like VPCs and load balancers while ensuring security and compliance
- Use containerization to improve scalability and optimize deployment strategies across production environments (see the sketch after this list)
- Work closely with developers on troubleshooting to ensure quick issue resolutions and continuous improvement
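Below is a small sketch of the kind of container workflow this role touches, using the Docker SDK for Python to build and run an image. The build context, tag, and port mapping are illustrative assumptions.

```python
# Minimal sketch: build a local image and run it with the Docker SDK for Python.
# The build context, tag, and port mapping are illustrative placeholders.
import docker

client = docker.from_env()

# Build an image from the Dockerfile in the current directory.
image, _build_logs = client.images.build(path=".", tag="pos-service:dev")

# Run it detached, mapping container port 8080 to host port 8080.
container = client.containers.run(
    "pos-service:dev", detach=True, ports={"8080/tcp": 8080}, name="pos-service-dev"
)
print("Started container:", container.short_id)
```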
About the company DevPro
Employee benefits
- English Courses
- Team buildings
- Flexible working hours
- Medical insurance
- Paid sick leave
- Educational programs and courses
- Microsoft Azure
- Datadog
- IaC
- C#
- TypeScript
The company’s main web application is hosted in the Azure cloud ecosystem. The web front end of this system is built with React and HTML/CSS, while the back end is built in Java and uses Hibernate and Postgres databases. The Java stack is hosted in Docker containers running on a Kubernetes cluster. This product is the hub for numerous related services and integrations, with an extensive microservices architecture. The team is constantly growing, learning, and experimenting with other technologies, including 3D modeling.
The role is all about monitoring. You will be creating dashboards, setting up alerts in Azure, and doing whatever is needed to keep the apps running smoothly. That’s the DevOps side of the job. On top of that, you should be able to work on the app code (C#/TypeScript) to add anything needed to collect more data for monitoring. That’s the developer part of the role.
Requirements
- 5+ years of commercial experience;
- Extensive experience with Azure, including provisioning, managing, and optimizing cloud resources for reliability and scalability;
- Experience with monitoring and analytics tools, especially DataDog;
- Proficiency in using IaC tools for automating cloud infrastructure management and deployment;
- Ability to code using C# and/or TypeScript;
- Proactive approach to identifying problems, performance bottlenecks and areas for improvement.
Responsibilities
- Run the production environment by monitoring availability and taking a holistic view of system health;
- Measure and optimize system performance, with an eye toward pushing our capabilities forward, getting ahead of customer needs, and innovating for continual improvement;
- Provide primary operational support and engineering for multiple large-scale distributed software applications;
- Gather and analyze metrics from operating systems as well as applications to assist in performance tuning and fault finding;
- Partner with dev teams to improve services through rigorous testing and release procedures.
English Level – Upper-Intermediate / Advanced. The candidate should be able to clearly communicate and deliver their ideas.
Work Schedule and Timezone – Flexible 8 hr/day, Mon – Fri. A minimum of 2-3 hours of overlap with CST (Chicago) time.
About the company EchoGlobal
Employee benefits
- Work-life balance
- Accounting support
- Training compensation
- Laptop provided
- Paid public holidays
- Paid sick leave
- Legal support
- Docker
- Kubernetes
- Terraform
- IaC
- AWS
- Bash
- Python
- Node.js
- Jenkins
- GitHub Actions
- GitLab
- Ansible
- Argo Rollouts
- Helm
- Prometheus
- Datadog
- Grafana
- ELK
- Agile
- MySQL
- Aurora
- CI/CD
It is a service automation company delivering software for organizations that want to get more done. The SaaS ITSM (IT Service Management) and Asset Management AI solutions serve more than 5,000 customers across the globe. The company does the heavy lifting for IT and anyone delivering services in the digital workspace, by leveraging automation, analytics, and AI. As we continue our expedited growth, we're looking for a Senior DevOps Engineer.
Responsibilities:
- Design, implement, and manage scalable, reliable, and secure infrastructure using Kubernetes, Terraform, and AWS.
- Monitor and optimize the monitoring, logging, and alerting systems to ensure the health and performance of applications and infrastructure.
- Develop and maintain CI/CD pipelines to ensure smooth and efficient deployment processes.
- Collaborate with software engineering teams to understand their requirements and provide the necessary infrastructure and tooling support.
- Stay updated with the latest industry trends, tools, and technologies to continuously improve our infrastructure and processes.
- Provide technical guidance, innovation, and best practices to the DevOps team.
Qualifications:
- Knowledge of containerization technologies such as Docker and Kubernetes (EKS).
- 5+ years of professional experience as a DevOps/Infrastructure/Site Reliability Engineer.
- Proficient in Terraform for infrastructure as code (IaC) implementation and management.
- 4+ years of professional experience with Amazon Web Services (AWS).
- Proficiency in scripting languages including Bash, Python, and NodeJS.
- Familiarity with continuous integration tools such as Jenkins, GitHub Actions, and GitLab.
- Experience with infrastructure-as-code tools (like Terraform, Ansible, Argo Rollouts, and Helm).
- Bachelor’s or Master's degree in Computer Science, Engineering, or related field - meaningful advantage.
- Strong collaboration and documentation skills.
- Excellent judgment, analytical thinking, and problem-solving skills.
Preferred Qualifications:
- AWS certification (e.g., AWS Certified DevOps Engineer, AWS Certified Solutions Architect).
- Knowledge of monitoring and logging tools (e.g., Prometheus, DataDog, Grafana, ELK stack).
- Familiarity with agile development methodologies and practices.
- Experience with MySQL (Aurora MySQL is a plus).
About the company Gemicle
Employee benefits
- English Courses
- Team buildings
- Flexible working hours
- Coffee, fruit, snacks
- Paid sick leave
- Paid vacation
- Regular salary reviews
- AWS
- EC2
- RDS
- Amazon S3
- IAM
- Pulumi
- Python
- Ruby on Rails
- Docker
- Docker Compose
- GitHub Actions
- Elasticsearch
- PostgreSQL
- Redis
- Microservices
We seek a DevOps Engineer to enhance our infrastructure using DevOps and AWS best practices. You will help build and maintain scalable, secure systems, manage version control, and develop deployment pipelines using GitHub. We seek someone proactive, detail-oriented, and with solid automation skills.
What you’ll be doing
- AWS Infrastructure: Maintain cloud infrastructure collaboratively, ensuring scalability, security, and performance.
- Infrastructure as Code: Develop infrastructure using Pulumi (Python); see the sketch after this list.
- Security & Continuity: Implement audit trails, logging, and business continuity measures.
- Alert Systems: Build and improve alert systems for issue resolution.
- Kubernetes: Assist with Kubernetes cluster management (knowledge is a plus).
- Container Management: Use Docker and Docker Compose for containerized apps.
- Developer Support: Assist developers with Docker Compose issues.
- CI/CD: Improve CI/CD workflows using GitHub Actions.
- Data Management: Use Elasticsearch, RDS Postgres, and Redis to manage data storage. Ensure backups and auditability.
- System Reliability: Maintain monitoring, logging, and alerting systems.
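Here is a minimal Pulumi (Python) sketch of the infrastructure-as-code work mentioned above. The bucket name and tags are placeholders, and it assumes the pulumi and pulumi-aws packages plus configured AWS credentials.

```python
# Minimal Pulumi (Python) sketch: a versioned, private S3 bucket for logs.
# Resource names and tags are placeholders; requires pulumi + pulumi-aws
# and configured AWS credentials.
import pulumi
import pulumi_aws as aws

logs_bucket = aws.s3.Bucket(
    "audit-logs",
    acl="private",
    versioning=aws.s3.BucketVersioningArgs(enabled=True),
    tags={"managed-by": "pulumi", "purpose": "audit-trail"},
)

pulumi.export("logs_bucket_name", logs_bucket.id)
```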
What you’ll need
- Experience: 5+ years in DevOps, SRE, or similar roles.
- AWS Skills: Strong experience with AWS (EC2, RDS, S3, IAM).
- Infrastructure as Code: Pulumi with Python.
- Experience with Ruby on Rails back-end development.
- Containerization: Strong knowledge of Docker and Docker Compose.
- CI/CD Tools: Experience with GitHub Actions.
- Data Stores: Familiarity with Elasticsearch, PostgreSQL, Redis.
- Best Practices: Knowledge of data protection and business continuity.
Nice to have
- Familiarity with microservices.
About the company Syndicode
Employee benefits
- Team buildings
- Paid public holidays
- Paid sick leave
- Educational programs and courses
- AWS
- IAM
- SSO
- VPC
- AWS Transit Gateway
- Amazon S3
- EC2
- RDS
- ELB
- AWS CloudTrail
- AWS Config
- AWS Inspector
- Amazon GuardDuty
- AWS WAF
- IaC
- Terraform
- Microsoft Azure
- Oracle Cloud
Intetics Inc. is a leading American technology company providing custom software application development, distributed professional teams creation, software product quality assessment, and “all-things-digital” solutions, is looking for a Senior DevOps Engineer to enrich its team with a skilled professional.
Responsibilities:
- Design and build resilient Cloud infrastructures that are protected against security threats
- Develop and assess Cloud security solutions to secure systems, databases, and networks
- Conduct assessment and make recommendations to ensure that appropriate controls are in place
- Gain insight into security incidents and threats by monitoring/analyzing logs and performing vulnerability assessments
- Participate in efforts that shape the company's security policies, procedures, and standards for use in Cloud environments
- Create technical and managerial level security reports for Cloud-based applications and infrastructure
- Implement and test network and security Disaster Recovery procedures to ensure business continuity
- Monitor the use of sensitive data and regulate access to safeguard information
- Ensure the confidentiality and integrity of data during transmission, storage, and processing
- Review violations of security procedures and discuss procedures with violators to ensure they are not repeated
- Provide support to end users regarding network and security-related issues
Requirements
- BSc/MSc in Information Security or another related field; at least 5 years of general experience in the DevOps field
- Minimum 2 years of working experience in Information Security, with a proven focus on Cloud Security
- Technical knowledge of Amazon Web Services (AWS)
- Proficiency with AWS services such as IAM, Organizations, SSO, VPC, Transit Gateway, S3, EC2, RDS, ELB, CloudTrail, Config, Inspector, GuardDuty, WAF, etc.
- Clear understanding of current threats to Cloud infrastructure and advanced knowledge of securing such environments
- Expertise in DevSecOps methodologies is considered a plus
- Background building and deploying applications to the cloud (AWS, Azure, etc.) using Infrastructure as Code
- Ability to work autonomously with minimum supervision and to integrate well within a team
- Capability to quickly learn new technologies in depth
- Availability to work in CET timezone
Nice to Have:
- Infrastructure-as-code tools such as Terraform
- Expertise in container security
- Proficiency with Microsoft Azure and Oracle Cloud
About the company Intetics
Employee benefits
- English Courses
- Relocation assistance
- Team buildings
- Work-life balance
- Medical insurance
- Educational programs and courses
- Terraform
- Azure Resource Manager
- Microsoft Azure
- Docker
- Kubernetes
- AKS
- Azure App Service
- ACI
- Azure DevOps
- GitLab CI
- GitHub Actions
- Jenkins
- Redis
- MSSQL
- Azure SQL
- MySQL
- CircleCI
- MongoDB
- PostgreSQL
- Grafana
- AWS
The project is connected with the Travel domain. The client is a publicly traded tourism company. Product: a microservice aggregator based on API solutions for integrating GDSs with OTAs (Online Travel Agencies).
You Have
- Infrastructure and configuration management:
- Terraform (azurerm and aws providers)
- Azure Resource Manager
- Cloud providers:
- Microsoft Azure
- Docker and Kubernetes services
- Azure Cloud objects
- Containerization and orchestration services (AKS, ACI, Azure App service)
- Azure disaster recovery service (Recovery Service vault, Site Recovery, Recovery Plans)
- Azure SQL management (Elastic Pool, Failover groups, Geo-Replication)
- Azure Automation Accounts
- Azure B2C service administrations
- Azure Active Directory administrations
- Azure VM, KeyVaults, Application insights, Storage account, Azure container registry, Traffic manager
- Azure Virtual Networks, Subnets, Local network gateway, Azure VPN, Private endpoint
- CI/CD tools:
- Azure DevOps
- GitLab CI
- GitHub Actions
- CircleCI
- Jenkins
- Databases and open source technologies:
- Redis, Azure Cache for Redis
- MS SQL, Azure SQL, MySQL, MongoDB, PostgreSQL
- Grafana
- Git:
- installation and configuration of a local Git setup
- creating branches and merging
- git-lfs and its usage
Would be a plus
- AWS experience
You Are Going To
- Transitioning services to efficient CI/CD processes
- Replacing manual deployments by moving to the Azure cloud and handling security issues
- Project maintenance
About the company AltexSoft
Employee benefits
- Work-life balance
- Training compensation
- Medical insurance
- Educational programs and courses