Design, build, and deploy AI/ML and multimodal generative systems. Develop full-stack applications integrating LLMs, implement RAG, manage data pipelines, containerize and orchestrate services, track experiments and deployments, and mentor engineers while collaborating with stakeholders and infrastructure teams.
Requisition Number: 2330030
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities:
- Build, fine-tune, and optimize machine learning and deep learning models using frameworks like PyTorch, TensorFlow, and Hugging Face
- Develop GPT-based architectures and multimodal AI applications for text, image, and audio generation
- Develop full-stack solutions with React.js or Next.js for the frontend and FastAPI/Django for backend services integrating AI models
- Implement Retrieval-Augmented Generation workflows using LangChain, Haystack, and LlamaIndex with vector databases like Pinecone or Weaviate
- Utilize MLflow, Weights & Biases, and Azure Machine Learning for experiment tracking, versioning, and deployment
- Containerize and orchestrate applications using Docker and Kubernetes; implement CI/CD pipelines for automated deployment
- Leverage Azure OpenAI, AWS SageMaker, and GCP Vertex AI for cloud-based solutions; optimize inference for edge devices using ONNX Runtime and TensorRT
- Use Robot Framework, PyTest, and AI-driven tools like TestGPT for unit test generation and intelligent test coverage
- Build ETL workflows and orchestrate data processing using Apache Spark and Airflow; manage structured and unstructured data in SQL/NoSQL systems
- Implement document intelligence, semantic search, recommendation systems, and AI-driven analytics dashboards using Plotly, Dash, or Power BI
- Work with stakeholders to define and manage project scope and approach
- Define, use and communicate design patterns and best practices
- Present and evaluate design solutions objectively
- Work closely with business teams, onshore partners, and deployment and infrastructure teams
- Coach and mentor other developers in the team
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
Required Qualifications:
- B.E./B.Tech/MCA/M.Tech/Bachelor's degree in Computer Science, Computer Engineering, or another related discipline (16+ years of education; correspondence courses are not relevant)
- 8+ years of experience with programming and scripting languages such as Java, Node.js, React, or Angular
- 8+ years of experience with APIs / micro-services development
- 4+ years of experience with container technologies (Kubernetes, Docker, etc.)
- 5+ years of overall hands-on AI/ML project development using Python programming
- 5+ years of deep learning using PyTorch
- 5+ years of classical machine learning (e.g., sklearn, boosting, NLP)
- 5+ years of data analysis using SQL
- 2+ years of working with large datasets using PySpark, Databricks, Pandas, NumPy, etc.
- 2+ years of experience with Transformers, Hugging Face, LLMs, Generative AI, LangChain
- 2+ years of proficiency with Azure AI, Azure ML, OpenAI
- 2+ years of ML model deployment (e.g., MLflow, Databricks, Azure)
- 1+ years of healthcare domain and system knowledge
- 1+ years of familiarity with Google Cloud Platform (e.g., Dialogflow, Vertex AI)
- Experience with event streaming platforms such as Kafka
- Experience with cloud platforms (AWS, Azure) and GitHub Actions
- Experience with DevOps, CI / CD
- Experience with automated testing frameworks
- Experience with RDBMS, Snowflake, Databricks, and SQL Server
- Experience with monitoring tools like Splunk
- Experience with Azure DevOps
- Hands-on experience with Generative AI applications and building conversational agents
- Solid understanding of data engineering principles, including ETL pipelines and data workflows
- Development experience using Agile methodologies
- Proven ability to proactively leverage AI tools throughout the software development lifecycle to drive faster iteration, reduce manual effort, and boost overall engineering productivity
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Top Skills
Python, PyTorch, TensorFlow, Hugging Face, Transformers, LLMs, Generative AI, LangChain, Haystack, LlamaIndex, Pinecone, Weaviate, React.js, Next.js, FastAPI, Django, Java, Node.js, Angular, Docker, Kubernetes, CI/CD, GitHub Actions, Azure OpenAI, Azure ML, Azure AI, AWS SageMaker, GCP Vertex AI, ONNX Runtime, TensorRT, MLflow, Weights & Biases, Databricks, PySpark, Pandas, NumPy, Spark, Airflow, SQL, NoSQL, Snowflake, SQL Server, Kafka, Splunk, Robot Framework, PyTest, TestGPT, Plotly, Dash, Power BI