Design, build, and operate AI-driven SRE tooling and dashboards. Implement observability, SLOs/SLIs, CI/CD, and automated LLM/agent pipelines to improve reliability, reduce MTTD/MTTR, and automate incident response and remediation across cloud infrastructure.
Requisition Number: 2341668
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities:
- AI/ML skill set:
- Develop automation and tooling, primarily in Python, for SRE gap analysis, incident response and remediation, telemetry processing and feature extraction, runbook automation, post-mortem automation, and AI/ML model pipelines
- Design and build intelligent, data-driven dashboards that go beyond static visualization
- Focus on combining observability data, business metrics, and AI/ML insights to produce dashboards that are predictive, actionable, and dynamically adaptive based on data patterns and user context
- Apply machine learning and intelligent automation to improve system reliability, reduce operational noise, and proactively prevent incidents, and translate complex AI results into clear, interpretable visuals
- Ensure dashboards support proactive decision making, not just monitoring
- Work at the intersection of data visualization, AI/ML, observability, and user experience, translating complex telemetry and model outputs into intuitive dashboards used by Engineering, Leadership, Product and Executive teams
- Highlight anomalies, trends, and predictive insights, and surface intelligent recommendations and alerts
- Optimize conversational flows for accuracy, latency, and user experience
- SRE & Observability:
- Ensure the availability, performance, stability and resiliency of critical systems by implementing best practices in site reliability engineering
- Define and operationalize SLOs/SLIs and error budgets (see the error-budget sketch after this list)
- Innovate in monitoring, distributed tracing, and logging strategies to provide deep visibility into system behavior
- Apply DevOps best practices across pipelines and infrastructure
- Track compliance for certificates, secrets, vulnerabilities, risk records, etc.
- Manage and optimize cloud infrastructure (AWS / Azure / GCP)
- Manage containerized applications using Docker and Kubernetes
- Collaborate with development and data engineering teams
- Apply solid Bash and Python scripting skills for automation and backend support
- Design, build, and operate autonomous, safe, and reliable agents that augment incident response, observability, and operational workflows at scale
- Be well versed with Copilot, Power Automate, GPT, and in-house tooling
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
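The error-budget responsibility above can be made concrete with a small calculation. The sketch below is a minimal illustration, assuming availability is measured as good requests over total requests in a rolling window; the 99.9% target and the request counts are illustrative values, not figures from this posting.

```python
# Minimal error-budget sketch (assumed model: availability = good / total
# over a rolling window; the SLO target and request counts are illustrative).

def error_budget_remaining(slo_target: float, good: int, total: int) -> float:
    """Fraction of the error budget still unspent; negative means overspent."""
    if total == 0:
        return 1.0
    allowed_failure = 1.0 - slo_target        # e.g. 0.001 for a 99.9% SLO
    actual_failure = 1.0 - good / total
    return 1.0 - actual_failure / allowed_failure

if __name__ == "__main__":
    # 30-day window: 12,500 failed requests out of 10,000,000 served.
    remaining = error_budget_remaining(0.999, good=9_987_500, total=10_000_000)
    print(f"Error budget remaining: {remaining:.1%}")  # -25.0% => overspent
```

A negative result means the window's budget is already exhausted, which under a typical error-budget policy would pause risky changes until reliability recovers.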
Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience)
- 6+ years of experience in SRE, observability, DevOps, and CI/CD processes, with 2+ years of hands-on experience building LLM/AI agent solutions
- Hands-on experience with CI/CD tools (Jenkins, GitHub Actions, GitLab CI, Azure DevOps)
- Experience with RAG pipelines, vector databases (FAISS, Pinecone, OpenSearch, etc.)
- Understanding of prompt engineering, retrieval-augmented generation (RAG), and context management
- Deep understanding of distributed systems, Linux, and networking, with the ability to identify and proactively mitigate systemic risks
- Understanding of strategies to improve SRE and observability maturity and reduce MTTD and MTTR
- Familiarity with ML frameworks such as TensorFlow, PyTorch, or Hugging Face
- Proficiency in Python and Bash scripting
- Proven expertise in OpenTelemetry (OTEL) and other modern observability and alerting tools such as Dynatrace, Elastic, Splunk, and Grafana
- Proven ability to translate incidents into long-term reliability improvements and telemetry into meaningful operational signals, and to continuously evaluate and improve SRE and observability maturity while reducing MTTD and MTTR
- Proven ability to work across DevOps, frontend, backend, and data teams
- Proven, solid Python and Bash skills (automation, APIs, SDKs, async, testing), plus scripting for ops tasks
- Hands-on experience with LLMs and agent frameworks (e.g., Semantic Kernel, LangChain, AutoGen) to automate triage, runbooks, RCA assistance, and proactive remediation (see the retrieval sketch after this list)
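For the RAG and agent-framework qualifications above, the retrieval step can be sketched with FAISS, one of the vector databases named in this posting. This is a minimal sketch under stated assumptions: the embed() helper is a hypothetical stand-in for a real embedding model, and the runbook snippets are invented examples.

```python
# Minimal RAG retrieval sketch using FAISS. embed() is a hypothetical
# placeholder; a real pipeline would call an embedding model instead.
import numpy as np
import faiss

DIM = 384  # illustrative embedding dimensionality

def embed(texts: list[str]) -> np.ndarray:
    """Hypothetical embedder: returns one DIM-dimensional vector per text."""
    rng = np.random.default_rng(0)
    return rng.random((len(texts), DIM), dtype=np.float32)

runbook_chunks = [
    "Restart the ingestion pod if queue depth exceeds 10k.",
    "Rotate the service certificate before the 30-day expiry alert fires.",
    "Check OTEL collector health when traces stop arriving.",
]

index = faiss.IndexFlatL2(DIM)        # exact L2 vector index
index.add(embed(runbook_chunks))      # index the runbook chunks

query = embed(["traces missing from the dashboard"])
_, ids = index.search(query, 1)       # nearest runbook chunk
context = runbook_chunks[ids[0][0]]   # would be injected into the LLM prompt
print(context)
```

In a production pipeline, the retrieved chunk is placed into the LLM prompt (the RAG pattern) so that an agent's triage or remediation suggestion is grounded in the team's own runbooks rather than the model's memory.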
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone-of every race, gender, sexuality, age, location and income-deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Top Skills
Python, Bash, Docker, Kubernetes, AWS, Azure, GCP, Jenkins, GitHub Actions, GitLab CI, Azure DevOps, FAISS, Pinecone, OpenSearch, TensorFlow, PyTorch, Hugging Face, OpenTelemetry, Dynatrace, Elastic, Splunk, Grafana, LangChain, Semantic Kernel, AutoGen, GitHub Copilot, Power Automate, GPT