
Synechron

Specialist Data Engineer

Remote
Hiring Remotely in Hinjawadi, Pune, Mahārāshtra
Mid level
Design, implement and optimize large-scale Databricks-based ETL/ELT pipelines on Azure. Transform data with PySpark/Spark and SQL, integrate multiple sources, ensure performance and cost optimization, enforce governance with Unity Catalog, and deploy/manage Databricks artifacts using Asset Bundles and GitLab.

Job Summary:

We are seeking a highly skilled Azure Data Engineer with strong expertise in Databricks to join our data team. The ideal candidate will design, implement, and optimize large-scale data pipelines, ensuring scalability, reliability, and performance. This role involves working closely with multiple teams and business stakeholders to deliver cutting-edge data solutions.

Key Responsibilities:

Data Pipeline Development:

  • Build and maintain scalable ETL/ELT pipelines using Databricks.
  • Leverage PySpark/Spark and SQL to transform and process large datasets.
  • Integrate data from multiple sources including Azure Blob Storage, ADLS and other relational/non-relational systems.
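For illustration only (not part of the role description), a minimal PySpark sketch of such an ETL flow might look like the following; the ADLS paths, column names, and table names are hypothetical placeholders:

# Minimal illustrative PySpark ETL sketch (paths, columns, and table names are hypothetical).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw CSV files landed in ADLS.
raw = (
    spark.read
    .option("header", "true")
    .csv("abfss://raw@examplestorage.dfs.core.windows.net/orders/")
)

# Transform: type the columns, drop duplicates, and filter out bad records.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
)

# Load: write a Delta table partitioned by date for downstream BI queries.
(
    orders.withColumn("order_date", F.to_date("order_ts"))
    .write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("analytics.orders_clean")
)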

Collaboration & Analysis:

  • Work closely with multiple teams to prepare data for dashboards and BI tools.
  • Collaborate with cross-functional teams to understand business requirements and deliver tailored data solutions.

Performance & Optimization:

  • Optimize Databricks workloads for cost efficiency and performance.
  • Monitor and troubleshoot data pipelines to ensure reliability and accuracy.
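As one hedged illustration of this kind of optimization work, the snippet below compacts and Z-orders a Delta table and removes stale data files; the table name and Z-order column are hypothetical:

# Illustrative Delta table maintenance for performance and storage cost (names are hypothetical).
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Compact small files and co-locate rows on a frequently filtered column.
spark.sql("OPTIMIZE analytics.orders_clean ZORDER BY (customer_id)")

# Remove data files no longer referenced by the table (subject to the retention period).
spark.sql("VACUUM analytics.orders_clean")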

Governance & Security:

  • Implement and manage data security, access controls and governance standards using Unity Catalog.
  • Ensure compliance with organizational and regulatory data policies.
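Purely as an illustration, Unity Catalog access controls are commonly expressed as SQL grants; the catalog, schema, table, and group names below are hypothetical:

# Illustrative Unity Catalog grants issued through Spark SQL (object and group names are hypothetical).
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Allow an analyst group to discover and query one schema only.
spark.sql("GRANT USE CATALOG ON CATALOG main TO `analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA main.analytics TO `analysts`")
spark.sql("GRANT SELECT ON TABLE main.analytics.orders_clean TO `analysts`")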

Deployment:

  • Leverage Databricks Asset Bundles for seamless deployment of Databricks jobs, notebooks, and configurations across environments (see the sketch after this list).
  • Manage version control for Databricks artifacts and collaborate with the team to maintain development best practices.
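As a rough sketch only, such a deployment could be scripted around the Databricks CLI, for example from a GitLab CI job; this assumes the CLI is configured and a databricks.yml bundle definition already exists in the repository, and the target name is hypothetical:

# Illustrative wrapper around the Databricks CLI for bundle deployment
# (assumes a configured CLI and a databricks.yml in the repository root).
import subprocess

def deploy_bundle(target: str = "dev") -> None:
    # Validate the bundle definition before deploying anything.
    subprocess.run(["databricks", "bundle", "validate", "-t", target], check=True)
    # Deploy jobs, notebooks, and configuration to the chosen target workspace.
    subprocess.run(["databricks", "bundle", "deploy", "-t", target], check=True)

if __name__ == "__main__":
    deploy_bundle("dev")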

Technical Skills:

  • Strong expertise in Databricks (Delta Lake, Unity Catalog, Lakehouse architecture, table triggers, Delta Live Tables pipelines, Databricks Runtime, etc.).
  • Proficiency in Azure cloud services.
  • Solid understanding of Spark and PySpark for big data processing.
  • Experience with relational databases.
  • Knowledge of Databricks Asset Bundles and GitLab.

Preferred Experience:

  • Familiarity with Databricks Runtimes and advanced configurations.
  • Knowledge of streaming frameworks like Spark Streaming.
  • Experience in developing real-time data solutions.
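For context on the streaming experience mentioned above, a minimal Spark Structured Streaming sketch could look like the following; the source path, checkpoint location, and target table are hypothetical:

# Minimal Structured Streaming sketch: ingest arriving JSON files into a Delta table
# (paths, schema, and table name are hypothetical).
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read newly arriving JSON files as an unbounded stream.
events = (
    spark.readStream
    .format("json")
    .schema("event_id STRING, event_ts TIMESTAMP, payload STRING")
    .load("abfss://raw@examplestorage.dfs.core.windows.net/events/")
)

# Append into a Delta table; the checkpoint provides exactly-once progress tracking.
query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "abfss://checkpoints@examplestorage.dfs.core.windows.net/events/")
    .outputMode("append")
    .toTable("analytics.events_raw")
)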

Certifications:

  • Azure Data Engineer Associate or Databricks Certified Data Engineer Associate certification (optional).

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT

Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture, promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps us, as a global company, build stronger, more successful businesses. We encourage applicants across all backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, and disability statuses to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.

All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disability or veteran status, or any other characteristic protected by law.


Top Skills

Databricks, Delta Lake, Unity Catalog, Lakehouse Architecture, Table Triggers, Delta Live Tables, Databricks Runtime, PySpark, Spark, SQL, Azure Blob Storage, ADLS, Azure, Databricks Asset Bundles, GitLab, Spark Streaming, Relational Databases
