The Big Data Engineer II is responsible for designing and maintaining data pipelines in cloud and on-premises environments, ensuring data quality and compliance, collaborating with stakeholders, and utilizing big data frameworks and cloud platforms.
Description and Requirements
The GG10.2 Data Engineer plays a critical role in the data and analytics life cycle and contributes significantly to production-grade data and analytics solutions. The role requires demonstrated Big Data, Engineering, and Cloud expertise. It is an individual contributor role, expected to function independently.

Experience and education:
• 5-8+ years of relevant experience
• Bachelor's degree in computer science, information technology, or an equivalent educational qualification

Responsibilities:
• Design, build, and maintain robust ETL/ELT pipelines on cloud (Azure) or on-premises to collect, ingest, and store large volumes of structured and unstructured data for batch and real-time processing
• Monitor, optimize, and troubleshoot data pipelines to ensure reliability, scalability, and performance
• Ensure data processing, quality, security, and compliance guidelines, policies, and standards are followed
• Collaborate with multiple partners across Business, Technology, Operations, and D&A capabilities (Data Governance, Data Quality, Data Modeling, Data Architecture, Data Science, DevOps, BI & Insights)

Skills:
• SQL, Python/Scala
• NoSQL and distributed databases (HBase, Cosmos DB)
• ETL pipeline development
• Big Data frameworks: Apache Spark, Hadoop, Hive
• Cloud platforms: Azure Data Factory, Event Hub, Azure Functions, Synapse, Databricks
• Data warehouses, data marts, data lakes
• Medallion architecture
• Performance tuning, optimization, and data quality validation
• Real-time and batch data processing, streaming pipelines with Spark
• Communication skills, analytical skills, structured problem-solving skills
• Partner and stakeholder engagement experience
• DevOps practices: Git, Azure DevOps, CI/CD pipelines
• Unix shell scripting, MongoDB, NiFi
• Exposure to Gen AI technology and tools
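The skills list names Medallion architecture and data quality validation. As illustrative context only, here is a minimal, framework-free Python sketch of the bronze-to-silver idea: raw records land unaltered in a "bronze" layer, then are validated and standardized into a "silver" layer with rejects split out. All names and data below are hypothetical; a production pipeline on this team would use Spark or Databricks rather than the standard library.

```python
import csv
import io

# Hypothetical raw feed; in practice this would be a file or stream in a lake.
RAW_CSV = """policy_id,premium,region
P-001,120.50,US
P-002,,EU
P-003,99.00,APAC
"""

def ingest_bronze(raw_text):
    """Extract: parse raw CSV into dicts without altering values (bronze layer)."""
    return list(csv.DictReader(io.StringIO(raw_text)))

def refine_silver(bronze_rows):
    """Transform: apply a data-quality rule (premium must be present),
    standardize types, and route failing records to a reject list (silver layer)."""
    good, rejects = [], []
    for row in bronze_rows:
        if row["premium"]:
            good.append({**row, "premium": float(row["premium"])})
        else:
            rejects.append(row)
    return good, rejects

bronze = ingest_bronze(RAW_CSV)
silver, rejected = refine_silver(bronze)
print(len(silver), len(rejected))  # → 2 1
```

The same extract/validate/standardize shape carries over to Spark DataFrames; only the execution engine and storage layers change.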
About MetLife
Recognized on Fortune magazine's list of the "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits, and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East.
Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by our core values of Win Together, Do the Right Thing, Deliver Impact Over Activity, and Think Ahead, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
#BI-Hybrid
Top Skills
Spark
Azure Data Factory
Azure Functions
Cosmos DB
Databricks
Event Hub
Hadoop
HBase
Hive
MongoDB
NiFi
NoSQL
Python
Scala
SQL
Synapse
Unix

