Medtronic
Senior IT Architect (Senior Data Operations Engineer)
At Medtronic you can begin a life-long career of exploration and innovation, while helping champion healthcare access and equity for all. You’ll lead with purpose, breaking down barriers to innovation in a more connected, compassionate world.
A Day in the Life
Our Global Diabetes Capability Center in Pune is expanding to serve more people living with diabetes globally. Our state-of-the-art facility is dedicated to transforming diabetes management through innovative solutions and technologies that reduce the burden of living with diabetes. We’re a mission-driven leader in medical technology and solutions with a legacy of integrity and innovation. Join our new MiniMed India Hub as Senior Data Operations Engineer. We are seeking an experienced Senior DataOps Engineer to join our team. The ideal candidate will have a strong background in DevOps, DataOps, or Cloud Engineering practices, with extensive experience automating CI/CD pipelines and working with modern data stack technologies.
This role offers a dynamic opportunity to join Medtronic's Diabetes business. Medtronic has announced its intention to separate the Diabetes division to promote future growth and innovation within the business and reallocate investments and resources across Medtronic, subject to applicable information and consultation requirements. While you will start your employment with Medtronic, upon establishment of SpinCo or the transition of the Diabetes business to another company, your employment may transfer to either SpinCo or the other company, at Medtronic's discretion and subject to any applicable information and consultation requirements in your jurisdiction.
Responsibilities may include the following and other duties may be assigned:
- Develop and maintain robust, scalable data pipelines and infrastructure automation workflows using GitHub, AWS, and Databricks.
- Implement and manage CI/CD pipelines using GitHub Actions and GitLab CI/CD for automated infrastructure deployment, testing, and validation.
- Deploy and manage Databricks LLM Runtime or custom Hugging Face models within Databricks notebooks and model serving endpoints.
- Manage and optimize cloud infrastructure costs, usage, and performance through tagging policies, right-sizing of EC2 instances, storage tiering strategies, and auto-scaling.
- Set up infrastructure observability and performance dashboards using AWS CloudWatch for real-time insights into cloud resources and data pipelines.
- Develop and manage Terraform or CloudFormation modules to automate infrastructure provisioning across AWS accounts and environments.
- Implement and enforce cloud security policies, IAM roles, encryption mechanisms (KMS), and compliance configurations.
- Administer Databricks workspaces, clusters, access controls, and integrations with cloud storage and identity providers.
- Enforce DevSecOps practices for infrastructure-as-code, ensuring all changes are peer-reviewed, tested, and compliant with internal security policies.
- Coordinate cloud software releases, patching schedules, and vulnerability remediation using Systems Manager Patch Manager.
- Automate AWS housekeeping and operational tasks (a minimal sketch follows this list), such as:
- Cleanup of unused EBS volumes, snapshots, and old AMIs
- Rotation of secrets and credentials using Secrets Manager
- Log retention enforcement using S3 lifecycle policies and CloudWatch log groups
- Perform incident response, disaster recovery planning, and post-mortem analysis for operational outages.
- Collaborate with cross-functional teams, including Data Scientists, Data Engineers, and other stakeholders, to gather and implement infrastructure and data requirements.
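For illustration only, here is a minimal sketch of the kind of AWS housekeeping automation listed above, written in Python with boto3. The 30-day retention window and the report-only behavior are assumptions for the example, not a prescribed standard:

```python
"""Illustrative housekeeping sketch: report unattached EBS volumes and
snapshots older than an assumed retention window."""
import boto3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30  # assumed retention window for the example
ec2 = boto3.client("ec2")

def unattached_volumes():
    """Return IDs of EBS volumes in the 'available' (unattached) state."""
    paginator = ec2.get_paginator("describe_volumes")
    pages = paginator.paginate(Filters=[{"Name": "status", "Values": ["available"]}])
    return [v["VolumeId"] for page in pages for v in page["Volumes"]]

def stale_snapshots(days=RETENTION_DAYS):
    """Return IDs of snapshots owned by this account older than `days` days."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    paginator = ec2.get_paginator("describe_snapshots")
    pages = paginator.paginate(OwnerIds=["self"])
    return [s["SnapshotId"]
            for page in pages
            for s in page["Snapshots"]
            if s["StartTime"] < cutoff]

if __name__ == "__main__":
    # Report only: actual deletion (ec2.delete_volume / ec2.delete_snapshot)
    # would be gated behind review and tagging checks in a real pipeline.
    print("Unattached EBS volumes:", unattached_volumes())
    print(f"Snapshots older than {RETENTION_DAYS} days:", stale_snapshots())
```

In practice a job like this would typically run on a schedule (for example, via Lambda or EventBridge), with any deletions gated behind review rather than executed automatically.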
Required Knowledge and Experience:
- 8+ years of experience in DataOps / CloudOps / DevOps roles, with strong focus on infrastructure automation, data pipeline operations, observability, and cloud administration.
- Strong proficiency in at least one scripting language (e.g., Python, Bash) and one infrastructure-as-code tool (e.g., Terraform, CloudFormation) for building automation scripts for AWS resource cleanup, tagging enforcement, monitoring, and backups (a tagging-enforcement sketch follows this list).
- Hands-on experience integrating and operationalizing LLMs in production pipelines, including prompt management, caching, token-tracking, and post-processing.
- Deep hands-on experience with AWS services, including:
- Core: EC2, S3, RDS, CloudWatch, IAM, Lambda, VPC
- Data Services: Athena, Glue, MSK, Redshift
- Security: KMS, IAM, Config, CloudTrail, Secrets Manager
- Operational: Auto Scaling, Systems Manager, CloudFormation/Terraform
- Machine Learning/AI: Bedrock, SageMaker, OpenSearch Serverless
- Working knowledge of Databricks, including:
- Cluster and workspace management, job orchestration
- Integration with AWS Storage and identity (IAM passthrough)
- Experience deploying and managing CI/CD workflows using GitHub Actions, GitLab CI, or AWS CodePipeline.
- Strong understanding of cloud networking, including VPC Peering, Transit Gateway, security groups, and private link setup.
- Familiarity with container orchestration platforms (e.g., Kubernetes, ECS) for deploying platform tools and services.
- Strong understanding of data modeling, data warehousing concepts, and AI/ML lifecycle management.
- Knowledge of cost optimization strategies across compute, storage, and network layers.
- Experience with data governance, logging, and compliance practices in cloud environments (e.g., SOC 2, HIPAA, GDPR).
- Bonus: Exposure to LangChain, Prompt Engineering frameworks, Retrieval Augmented Generation (RAG), and vector database integration (AWS OpenSearch, Pinecone, Milvus, etc.).
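Purely as an illustration of the automation scripting described above, the sketch below uses Python and boto3 to report running EC2 instances that are missing required cost-allocation tags. The tag keys used here are hypothetical, not an actual tag policy:

```python
"""Illustrative tagging-enforcement sketch: report EC2 instances missing
required cost-allocation tags."""
import boto3

# Hypothetical required tag keys; a real tag policy would differ.
REQUIRED_TAGS = {"CostCenter", "Owner", "Environment"}
ec2 = boto3.client("ec2")

def untagged_instances(required=REQUIRED_TAGS):
    """Yield (instance_id, missing_tag_keys) for running EC2 instances."""
    paginator = ec2.get_paginator("describe_instances")
    filters = [{"Name": "instance-state-name", "Values": ["running"]}]
    for page in paginator.paginate(Filters=filters):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                missing = required - tags
                if missing:
                    yield instance["InstanceId"], sorted(missing)

if __name__ == "__main__":
    # Report only; enforcement (re-tagging or stopping resources) would be a
    # separate, reviewed step.
    for instance_id, missing in untagged_instances():
        print(f"{instance_id}: missing tags {missing}")
```

The same report-then-enforce pattern extends to other resource types, for example via the Resource Groups Tagging API.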
Preferred Qualifications:
- AWS Certified Solutions Architect, DevOps Engineer, or SysOps Administrator certifications.
- Hands-on experience with multi-cloud environments, particularly Azure or GCP, in addition to AWS.
- Experience with infrastructure cost management tools such as AWS Cost Explorer or FinOps dashboards.
- Ability to write clean, production-grade Python code for automation scripts, operational tooling, and custom CloudOps utilities.
- Prior experience in supporting high-availability production environments with disaster recovery and failover architectures.
- Understanding of Zero Trust architecture and security best practices in cloud-native environments.
- Experience with automated cloud resource cleanup, tagging enforcement, and compliance-as-code using tools like Terraform Sentinel.
- Familiarity with Databricks Unity Catalog, access control frameworks, and workspace governance.
- Strong communication skills and experience working in agile cross-functional teams, ideally with Data Product or Platform Engineering teams.
Physical Job Requirements
The above statements are intended to describe the general nature and level of work being performed by employees assigned to this position, but they are not an exhaustive list of all the required responsibilities and skills of this position.
Medtronic offers a competitive salary and flexible benefits package.
A commitment to our employees lives at the core of our values. We recognize their contributions. They share in the success they help to create. We offer a wide range of benefits, resources, and competitive compensation plans designed to support you at every career and life stage.
We lead global healthcare technology and boldly attack the most challenging health problems facing humanity by searching out and finding solutions.
Our Mission — to alleviate pain, restore health, and extend life — unites a global team of 95,000+ passionate people.
We are engineers at heart — putting ambitious ideas to work to generate real solutions for real people. From the R&D lab, to the factory floor, to the conference room, every one of us experiments, creates, builds, improves and solves. We have the talent, diverse perspectives, and guts to engineer the extraordinary.
Learn more about our business, mission, and our commitment to diversity here