Alight Solutions

Senior Data Engineer

In-Office
2 Locations
Senior level

Our story

At Alight, we believe a company’s success starts with its people. At our core, we Champion People, help our colleagues Grow with Purpose and true to our name we encourage colleagues to “Be Alight.”

Our Values:

Champion People – be empathetic and help create a place where everyone belongs.

Grow with Purpose – be inspired by our higher calling of improving lives.

Be Alight – act with integrity, be real and empower others.

It’s why we’re so driven to connect passion with purpose. Alight helps clients gain a benefits advantage while building a healthy and financially secure workforce by unifying the benefits ecosystem across health, wealth, wellbeing, absence management and navigation.

With a comprehensive total rewards package, continuing education and training, and tremendous potential with a growing global organization, Alight is the perfect place to put your passion to work.

Join our team if you Champion People, want to Grow with Purpose through acting with integrity and if you embody the meaning of Be Alight.

Role: Senior Data Engineer

Summary

Senior Data Engineer with strong expertise across traditional big‑data platforms (Hadoop ecosystem) and modern cloud-native architectures (AWS). Responsible for building scalable, secure, and high‑performance data pipelines that span Hadoop clusters and AWS cloud services. Leverages deep knowledge of distributed systems, Spark optimization, cloud automation, and big‑data management to support analytics, BI, ML, and AI use cases across the enterprise. Ensures reliability, governance, cost-efficiency, and operational excellence across hybrid data platforms.
The associate should be self-driven, able to work with minimal guidance, and able to guide the team technically.

Core Responsibilities

  • Design, build, and maintain high‑volume ETL/ELT pipelines across Hadoop (HDFS, Hive, Spark, Kafka) and AWS (Glue, EMR, Lambda, Step Functions, Redshift).
  • Develop distributed data processing solutions using PySpark, Spark SQL, and scalable cloud serverless patterns.
  • Implement reusable data ingestion frameworks for batch (Sqoop, Hive, Spark) and streaming (Kafka, Kinesis).
  • Optimize data workflows using partitioning, bucketing, compression, file formats (Parquet/ORC).
  • Understand hybrid data lake architectures using S3 + HDFS, ensuring governance consistency (Atlas, Ranger, Lake Formation).
  • Understand reporting requirements, perform data profiling, and create designs for the same.
  • Create data flow diagrams and perform data modelling.
  • Orchestrate jobs using Airflow, Control‑M, Step Functions, or event-driven triggers.
  • Understand auto-scaling, capacity planning, and performance tuning on EMR and Spark clusters.
  • Ensure data is protected and compliant with regulatory standards.
  • Work closely with business stakeholders to enable high‑quality datasets.
  • Provide technical leadership in architecture decisions, code reviews, and best-practice adoption, and offer technical guidance to peers and junior team members.
  • Improve reliability, scalability, and performance through automation, autoscaling, and capacity planning.
  • Own deployment, incident response, and post-incident reviews for production environments, troubleshooting Spark performance issues, job failures, and cluster bottlenecks.
  • Understand security best practices (IAM, KMS, security groups, WAF, parameter/secret management).
  • Optimize cost and usage of AWS resources and recommend architecture improvements.
  • Collaborate closely with developers, QA, and product teams to streamline release processes.
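
The partitioning and file-layout concepts in the responsibilities above can be sketched in plain Python. This is an illustrative toy only — the function names, the `load_date` column, and the record shape are invented for the example, not Alight's actual pipeline code; in a real pipeline a Spark writer (e.g. `DataFrame.write.partitionBy(...).parquet(...)`) produces the equivalent Hive-style directory layout.

```python
from collections import defaultdict
from datetime import date

def partition_key(record, column="load_date"):
    """Build a Hive-style partition path segment, e.g. 'load_date=2024-01-15'."""
    return f"{column}={record[column].isoformat()}"

def bucket_records(records, partition_column="load_date"):
    """Group records by partition key, mirroring how a partitioned Parquet
    writer lays out directories under a table root."""
    buckets = defaultdict(list)
    for rec in records:
        buckets[partition_key(rec, partition_column)].append(rec)
    return dict(buckets)

events = [
    {"id": 1, "load_date": date(2024, 1, 15)},
    {"id": 2, "load_date": date(2024, 1, 15)},
    {"id": 3, "load_date": date(2024, 1, 16)},
]
layout = bucket_records(events)
# layout maps 'load_date=2024-01-15' to two records and
# 'load_date=2024-01-16' to one, matching a date-partitioned table layout.
```

Partition pruning is why this layout matters: a query filtered on `load_date` can skip every directory whose key does not match, which is the core of the optimization work the bullet describes.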

Requirements

Technical Skills

  • 5–8 years of strong experience with the Hadoop ecosystem (HDFS, Hive, Spark, YARN, Kafka).
  • Strong hands-on expertise in Scala, PySpark, Spark optimization techniques, HiveQL, and distributed computing.
  • Good working experience with SQL in Hive and Impala.
  • Good understanding of AWS data stack (S3, Glue, EMR, Lambda, Kinesis, Redshift, Step Functions).
  • Proficiency in at least one scripting/programming language: Python, Shell scripting.
  • Strong experience with CI/CD, GitHub, Git commands.
  • Expertise in ETL, data warehousing, and cloud concepts.
  • Good understanding of data modelling (star/snowflake), partitioning strategies, and schema evolution.
  • Expertise in data profiling and decision making.
  • Able to understand, design, and create data flow diagrams and perform data modelling (knowledge of Miro is an added advantage).
  • Able to understand the architecture and design end-to-end data flow.
  • Hands-on experience with Airflow, Control‑M, or other orchestrators.
  • Monitor and support BAU and year-end activities as needed.
  • Well versed in security and compliance aspects of the cloud.
  • Good understanding of AWS networking (VPC, subnets, routing, SGs, NACLs).
  • Familiarity with serverless patterns and containerization (Docker, ECS/EKS).
  • Experience with monitoring/logging tools and incident management practices.
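
The orchestration tools listed above (Airflow, Control‑M, Step Functions) all reduce to one idea: run tasks in dependency order. A minimal sketch of that idea using only the Python standard library — the three task names and the DAG edges are hypothetical, standing in for real Spark/Glue job submissions:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline steps; real tasks would submit Spark or Glue jobs.
def extract():
    return "raw"

def transform():
    return "clean"

def load():
    return "warehouse"

TASKS = {"extract": extract, "transform": transform, "load": load}

# Edges mirror an Airflow-style DAG: each key maps to the set of
# tasks it depends on (transform needs extract, load needs transform).
DAG = {"extract": set(), "transform": {"extract"}, "load": {"transform"}}

def run_pipeline(dag, tasks):
    """Execute tasks in topological (dependency) order, like a
    bare-bones orchestrator, and collect each task's result."""
    order = list(TopologicalSorter(dag).static_order())
    results = {name: tasks[name]() for name in order}
    return order, results

order, results = run_pipeline(DAG, TASKS)
```

A production orchestrator adds what this sketch omits: retries, scheduling, backfills, and alerting, which is where the monitoring and incident-management experience above comes in.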

Other Requirements

  • Strong logical, analytical, problem-solving, and communication skills.
  • Communicate effectively and concisely with multiple stakeholders, and coordinate and collaborate with cross-functional teams.
  • Ability to support both legacy Hadoop workloads and cloud-first architectures.
  • AWS certifications (Data Engineer, Solutions Architect, or Developer) are a plus.
  • Healthcare domain knowledge is a plus.

Benefits

We offer programs and plans for a healthy mind, body, wallet and life because it’s important our benefits care for the whole person. Options include a variety of health coverage options, wellbeing and support programs, retirement, vacation and sick leave, maternity, paternity & adoption leave, continuing education and training as well as several voluntary benefit options. 

By applying for a position with Alight, you understand that, should you be made an offer, it will be contingent on your undergoing and successfully completing a background check consistent with Alight’s employment policies. Background checks may include some or all of the following based on the nature of the position: SSN/SIN validation, education verification, employment verification, criminal check, search against global sanctions and government watch lists, credit check, and/or drug test. You will be notified during the hiring process which checks are required by the position.

Our commitment to Inclusion

We celebrate differences and believe in fostering an environment where everyone feels valued, respected, and supported. We know that diverse teams are stronger, more innovative, and more successful.

At Alight, we welcome and embrace all individuals, regardless of their background, and are dedicated to creating a culture that enables every employee to thrive. Join us in building a brighter, more inclusive future.

As part of this commitment, Alight will ensure that persons with disabilities are provided reasonable accommodations for the hiring process. If reasonable accommodation is needed, please contact [email protected].

Equal Opportunity Policy Statement

Alight is an Equal Employment Opportunity employer and does not discriminate against anyone based on sex, race, color, religion, creed, national origin, ancestry, age, physical or mental disability, medical condition, pregnancy, marital or domestic partner status, citizenship, military or veteran status, sexual orientation, gender, gender identity or expression, genetic information, or any other legally protected characteristics or conduct covered by federal, state, or local law. In addition, we take affirmative action to employ disabled persons, disabled veterans, and other covered veterans.

Alight provides reasonable accommodations to the known limitations of otherwise qualified employees and applicants for employment with disabilities and sincerely held religious beliefs, practices and observances, unless doing so would result in undue hardship. Applicants for employment may request a reasonable accommodation/modification by contacting their recruiter.

Authorization to work in the Employing Country

Applicants for employment in the country in which they are applying (Employing Country) must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the Employing Country and with Alight.

Note, this job description does not restrict management's right to assign or reassign duties and responsibilities of this job to other entities; including but not limited to subsidiaries, partners, or purchasers of Alight business units.


Top Skills

Airflow
AWS
CI/CD
EMR
Git
Glue
Hadoop
HDFS
Hive
Kafka
Lambda
PySpark
Python
Redshift
S3
Shell Scripting
Spark
Step Functions


