
Optum

Lead DevOps Engineer

Posted 3 Days Ago
In-Office
Noida, Gautam Buddha Nagar, Uttar Pradesh
Expert/Leader
Requisition Number: 2338210
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
We are seeking a seasoned, forward-thinking Infrastructure & DevOps leader with deep expertise in designing scalable, resilient, and automated infrastructure solutions. This high-performing lead engineer will be responsible for building and maintaining modern DevOps ecosystems, leveraging Infrastructure as Code (IaC), container orchestration, CI/CD pipelines, and cloud-native technologies. The ideal candidate combines strong technical acumen with the leadership capability to drive innovation, operational excellence, and continuous improvement across the infrastructure landscape.
Primary Responsibilities:
  • Design and implement Infrastructure as Code (IaC) using tools like Terraform
  • Automate operational tasks using Python and other scripting languages
  • Leverage AI tools and AIOps to optimize operations and reduce manual overhead
  • Build and maintain CI/CD pipelines using Jenkins, GitHub Actions, GitLab CI, or Azure DevOps
  • Implement Agentic DevOps practices to enhance automation and decision-making
  • Monitor infrastructure using Prometheus, Grafana, and Datadog; lead incident response and root cause analysis
  • Deploy and manage containerized applications using Docker and Kubernetes
  • Use Ansible or Chef for configuration management and provisioning
  • Manage cloud infrastructure across AWS, Azure, and Google Cloud, ensuring cost-efficiency and scalability
  • Automate machine image creation using Packer for consistent environments
  • Collaborate with development and QA teams to ensure seamless software delivery
  • Mentor junior engineers and promote best practices across the team
  • Lead system integration projects and consult with stakeholders to align infrastructure with business goals
  • Provide training and support on new systems and technologies
  • Stay current with industry trends and emerging technologies
  • Adhere to company policies and demonstrate flexibility in adapting to evolving business needs
  • Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
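By way of illustration only (not part of the role description), the kind of Python-based operational automation described above might look like the following sketch, which audits cloud resource records for missing cost-allocation tags. The resource data, tag names, and function are invented for this example:

```python
# Illustrative sketch: audit (hypothetical) cloud resource records for
# missing cost-allocation tags. All IDs, tag names, and data are invented.

REQUIRED_TAGS = {"owner", "cost-center", "environment"}


def find_untagged(resources):
    """Return {resource_id: [missing tags]} for resources lacking required tags."""
    report = {}
    for res in resources:
        missing = REQUIRED_TAGS - set(res.get("tags", {}))
        if missing:
            report[res["id"]] = sorted(missing)
    return report


if __name__ == "__main__":
    sample = [
        {"id": "i-0abc", "tags": {"owner": "platform", "cost-center": "42", "environment": "prod"}},
        {"id": "i-0def", "tags": {"owner": "platform"}},
    ]
    print(find_untagged(sample))
```

In practice such a check would pull live resource data from a cloud provider's API or Terraform state rather than an in-memory list.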

Required Qualifications:
  • Bachelor's degree in Computer Science, Engineering, or a related field
  • Certifications in Kubernetes, Terraform, or any public cloud platform like AWS, Azure, or GCP
  • 10+ years of experience in a DevOps role or similar
  • CI/CD Pipelines: Hands-on experience with CI/CD tools (Jenkins, GitLab CI, GitHub Actions, etc.)
  • Experience with distributed systems and microservices architecture
  • Version Control: Experience with Git and branching strategies
  • DevOps Tools: Solid experience with tools like Terraform, Kubernetes, Docker, Packer, and Consul
  • System Implementation & Integration: Proven experience in system implementation and integration projects
  • Monitoring & Logging: Knowledge of monitoring tools (e.g., Prometheus, Grafana, Datadog) and logging tools (e.g., ELK Stack)
  • Cloud Platforms: Familiarity with AWS, Azure, or GCP services; proven experience implementing public cloud services using Terraform within Terraform Enterprise or HCP Terraform
  • Programming Languages: Proficiency in Python and experience with other scripting languages (e.g., Bash)
  • Infrastructure as Code: Proficiency in tools like Terraform and Ansible; proven experience authoring Terraform code and shared Terraform modules
  • Consulting Skills: Proven ability to consult with clients and stakeholders to understand their needs and provide expert advice
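The CI/CD experience called for above ultimately comes down to modeling pipeline stages and their dependencies so that stages run in a valid order. A purely illustrative Python sketch of that idea follows; the stage names are invented, and a real pipeline would of course be defined in Jenkins, GitLab CI, or GitHub Actions configuration rather than code like this:

```python
# Illustrative sketch: resolve a valid execution order for CI/CD pipeline
# stages from their declared dependencies. Stage names are invented.
from graphlib import TopologicalSorter

STAGES = {
    "build": set(),          # no prerequisites
    "test": {"build"},       # runs after build
    "package": {"test"},     # runs after test
    "deploy": {"package"},   # runs after package
}


def run_order(stages):
    """Return a dependency-respecting execution order for the given stages."""
    return list(TopologicalSorter(stages).static_order())


if __name__ == "__main__":
    print(run_order(STAGES))  # build, then test, then package, then deploy
```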

Soft Skills:
  • Solid analytical and problem-solving skills
  • Proven excellent communication and collaboration abilities
  • Ability to work in an agile and fast-paced environment

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.

Top Skills

Ansible
AWS
Azure
Azure DevOps
Chef
Datadog
Docker
GitHub Actions
GitLab CI
GCP
Grafana
Jenkins
Kubernetes
Packer
Prometheus
Python
Terraform


