
DATAMAXIS, Inc

Sr. Databricks Data Engineer - I

India
Senior level
This role involves designing and developing enterprise data solutions and robust data pipelines using Databricks, PySpark, and SQL, focusing on optimization and continuous improvement.

Job Title: Databricks Data Engineer - I
Experience: 5+ years
Location: Remote
Job Type: Full-time with AB2

We are seeking an experienced Databricks Data Engineer to play a crucial role in our Fintech data lake project.

What You Bring
• 5+ years of experience working in data warehousing systems
• 3+ years of strong hands-on programming expertise in the Databricks landscape, including SparkSQL and Workflows, for data processing and pipeline development
• 3+ years of strong hands-on data transformation/ETL skills using Spark SQL, PySpark, and Unity Catalog, working in the Databricks Medallion architecture (a brief sketch follows the Qualifications list below)
• 2+ years of work experience in one of the cloud platforms: Azure, AWS, or GCP
• Experience using Git version control, and well versed in CI/CD best practices to automate the deployment and management of data pipelines and infrastructure
• Nice to have: hands-on experience building data ingestion pipelines from ERP systems (preferably Oracle Fusion) to a Databricks environment, using Fivetran or an alternative data connector
• Experience in a fast-paced, ever-changing, and growing environment
• Understanding of metadata management, data lineage, and data glossaries is a plus
• Must have report development experience using Power BI, SplashBI, or another enterprise reporting tool

What You'll Do
• Participate in the design and development of enterprise data solutions in Databricks, from ideation to deployment, ensuring robustness and scalability
• Work with the Data Architect to build and maintain robust, scalable data pipeline architectures on Databricks using PySpark and SQL
• Assemble and process large, complex ERP datasets to meet diverse functional and non-functional requirements
• Participate in continuous optimization efforts, implementing testing and tooling techniques to enhance data solution quality
• Focus on improving the performance, reliability, and maintainability of data pipelines
• Implement and maintain PySpark and Databricks SQL workflows for querying and analyzing large datasets
• Participate in release management using Git and CI/CD practices
• Develop business reports using the SplashBI reporting tool, leveraging data from the Databricks gold layer

Qualifications
• Bachelor's degree in Computer Science, Engineering, Finance, or equivalent experience
• Good communication skills
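
For readers unfamiliar with the term, the Medallion architecture referenced above organizes a lakehouse into bronze (raw), silver (cleansed), and gold (business-ready) layers. Below is a minimal, illustrative PySpark sketch of a bronze-to-silver step on Databricks; the catalog, table, and column names (fintech.bronze.erp_invoices, invoice_id, amount, and so on) are hypothetical placeholders, not details from this posting.

# Minimal bronze-to-silver sketch for a Databricks Medallion pipeline.
# All table and column names are hypothetical; adapt to your Unity Catalog layout.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks notebooks

# Read raw ERP records from the bronze layer (Unity Catalog three-part name).
bronze = spark.read.table("fintech.bronze.erp_invoices")

# Cleanse: deduplicate on the business key, normalize types, drop invalid rows.
silver = (
    bronze.dropDuplicates(["invoice_id"])
    .withColumn("invoice_date", F.to_date("invoice_date", "yyyy-MM-dd"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .filter(F.col("amount").isNotNull())
)

# Persist the cleansed table to the silver layer as Delta (the Databricks default).
silver.write.mode("overwrite").saveAsTable("fintech.silver.erp_invoices")

A gold-layer step would typically aggregate such silver tables into business-facing marts that reporting tools like SplashBI or Power BI consume.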

Top Skills

AWS
Azure
CI/CD
Databricks
GCP
Git
Power BI
PySpark
SparkSQL
SplashBI
Unity Catalog
Workflows

