
MediaRadar

Senior Data Engineer - Databricks

Posted Yesterday
In-Office or Remote
Hiring Remotely in Vadodara, Gujarat
Senior level
The Senior Data Engineer will design and maintain scalable ETL pipelines, work with large datasets, ensure data quality, and mentor junior engineers.

MediaRadar, an industry leader in marketing intelligence that now includes the data and capabilities of Vivvix, powers the mission-critical marketing and sales decisions that drive competitive advantage. Our next-generation marketing intelligence platform enables clients to achieve peak performance with always-on insights that span the media, creative, and business strategies of 5 million brands across 30+ media channels and $275 billion in media spend. By bringing the advertising past, present, and future into focus, we help our clients rapidly act on the competitive moves and emerging advertising trends impacting their business.

About the Role

We are looking for an experienced and strategic Senior Data Engineer to join our data team. In this role, you will be responsible for building and maintaining scalable, high-performance data solutions using Azure Databricks, Apache Spark, AKS, Airflow, Postgres, and modern data lakehouse architectures. You’ll play a key role in the full software development lifecycle—from design and implementation to deployment and documentation—while collaborating cross-functionally to support analytics, reporting, and operational data needs. This is an exciting opportunity to work alongside a great team of data engineers, with demanding technologies and an engaging work environment, and to help shape our data engineering best practices.


Requirements

What You’ll Do:

  • Design, develop, and maintain scalable ETL/ELT pipelines on Azure Databricks using Apache Spark (PySpark/Spark SQL).
  • Design and implement both batch and real-time data ingestion and transformation processes.
  • Build and manage Delta Lake tables, schemas, and data models to support efficient querying and analytics.
  • Consolidate and process large-scale datasets from various structured and semi-structured sources (e.g., JSON, Parquet, Avro).
  • Write optimized SQL queries for large datasets using Spark SQL and PostgreSQL.
  • Develop, schedule, and monitor workflows using Databricks Workflows, Airflow, or similar orchestration tools.
  • Design, build, and deploy cloud-native, containerized applications on Azure Kubernetes Service (AKS) and integrate with Azure services.
  • Ensure data quality, governance, and compliance through validation, documentation, and secure practices.
  • Collaborate with data analysts, data architects, and business stakeholders to translate requirements into technical solutions.
  • Contribute to and enforce best practices in data engineering, including version control (Git), CI/CD pipelines, and coding standards.
  • Continuously enhance data systems for improved performance, reliability, and scalability.
  • Mentor junior engineers and help evolve team practices and documentation.
  • Stay up to date on emerging trends, technologies, and best practices in the data engineering space.
  • Work effectively within an agile, cross-functional project team.
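To make the data-quality bullet above concrete, here is a minimal, self-contained sketch of row-level validation for semi-structured (JSON) records before they are loaded into a lakehouse table. The field names and rules are illustrative assumptions for this example, not MediaRadar's actual schema; in a real Databricks pipeline the same checks would typically run over a PySpark DataFrame.

```python
# Hypothetical row-level validation for semi-structured (JSON) ingest.
# Field names (brand_id, channel, spend_usd) are invented for illustration.
import json

REQUIRED_FIELDS = {"brand_id", "channel", "spend_usd"}

def validate_record(raw: str):
    """Parse one JSON record; return (record, None) on success or (None, error)."""
    try:
        rec = json.loads(raw)
    except json.JSONDecodeError as exc:
        return None, f"malformed JSON: {exc}"
    missing = REQUIRED_FIELDS - rec.keys()
    if missing:
        return None, f"missing fields: {sorted(missing)}"
    if not isinstance(rec["spend_usd"], (int, float)) or rec["spend_usd"] < 0:
        return None, "spend_usd must be a non-negative number"
    return rec, None

def partition_records(lines):
    """Split raw lines into (valid, rejected); rejects go to a quarantine sink."""
    valid, rejected = [], []
    for line in lines:
        rec, err = validate_record(line)
        if err is None:
            valid.append(rec)
        else:
            rejected.append((line, err))
    return valid, rejected
```

Routing rejects to a quarantine location rather than dropping them silently is a common pattern, since it preserves an audit trail for the governance and compliance work the role describes.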

What You’ve Done:

  • Proven experience as a Data Engineer, with a strong focus on Azure Databricks and Apache Spark.
  • Proficiency in Python, PySpark, Spark SQL, and working with large-scale datasets in different data formats.
  • Strong experience designing and building ETL/ELT workflows in both batch and streaming environments.
  • Solid understanding of data lakehouse architectures and Delta Lake.
  • Experience in Azure Kubernetes Service (AKS) is desired.
  • Proficient in SQL and experience with PostgreSQL or similar relational databases.
  • Experience with workflow orchestration tools (e.g., Databricks Workflows, Airflow, Azure Data Factory).
  • Familiarity with data governance, quality control, and security best practices.
  • Strong problem-solving skills and attention to detail.
  • Excellent communication and collaboration skills, with a track record of working cross-functionally.
  • Experience mentoring junior engineers and leading by example.
  • Comfortable working in agile development environments and using tools like Git, CI/CD, and issue trackers (e.g., Jira).
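As a concrete sample of the analytical SQL this role calls for, the sketch below ranks brands by spend within each channel using a window function. SQLite stands in for PostgreSQL/Spark SQL so the example runs anywhere without external services; the table and column names are invented for illustration, and the same query shape carries over to Spark SQL with minimal change.

```python
# Window-function query: top-spending brand per channel.
# SQLite is used only to keep the sketch self-contained; the SQL pattern
# (RANK() OVER (PARTITION BY ...)) is the same in PostgreSQL and Spark SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ad_spend (brand TEXT, channel TEXT, spend REAL);
INSERT INTO ad_spend VALUES
  ('acme', 'tv', 500.0), ('globex', 'tv', 300.0),
  ('acme', 'web', 120.0), ('initech', 'web', 450.0);
""")

top_per_channel = conn.execute("""
SELECT channel, brand, spend FROM (
  SELECT channel, brand, spend,
         RANK() OVER (PARTITION BY channel ORDER BY spend DESC) AS rnk
  FROM ad_spend
)
WHERE rnk = 1
ORDER BY channel
""").fetchall()
```

On large datasets, pushing this ranking into the engine as a single windowed query (rather than pulling rows out and grouping in application code) is usually the optimization the "write optimized SQL queries for large datasets" bullet is after.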

Benefits

At MediaRadar, we are committed to creating an inclusive and accessible workplace where everyone can thrive. We believe that diversity of backgrounds, perspectives, and experiences makes us stronger and more innovative. We are proud to be an Equal Opportunity Employer and make employment decisions without regard to race, color, religion, sex (including pregnancy, sexual orientation, or gender identity), national origin, age, disability, genetic information, or any other legally protected status. This is a full-time exempt role with base salary plus benefits. Final compensation will depend on location, skill level, and experience.

Top Skills

Airflow
Apache Spark
Avro
Azure Databricks
Azure Kubernetes Service (AKS)
CI/CD
Delta Lake
Git
JSON
Parquet
PostgreSQL
Python
Spark SQL


