
DATAMAXIS, Inc

Sr. Databricks Data Engineer - I

India
Senior level
The role involves designing and developing data solutions using Databricks, focusing on data pipeline architecture, optimization, and reporting.

Job Title: Databricks Data Engineer - I
Experience: 5+ years
Location: Remote
Job Type: Full-time with AB2

We are seeking an experienced Databricks Data Engineer who can play a crucial role in our Fintech data lake project.

What You Bring
• 5+ years of experience working in data warehousing systems
• 3+ years of strong hands-on programming expertise in the Databricks landscape, including SparkSQL and Workflows, for data processing and pipeline development
• 3+ years of strong hands-on data transformation/ETL skills using Spark SQL, PySpark, and Unity Catalog, working in the Databricks Medallion architecture
• 2+ years of work experience in one of the cloud platforms: Azure, AWS, or GCP
• Experience using Git version control, and well versed in CI/CD best practices to automate the deployment and management of data pipelines and infrastructure
• Nice to have: hands-on experience building data ingestion pipelines from ERP systems (preferably Oracle Fusion) to a Databricks environment, using Fivetran or alternative data connectors
• Experience in a fast-paced, ever-changing, and growing environment
• Understanding of metadata management, data lineage, and data glossaries is a plus
• Must have report development experience using Power BI, SplashBI, or any enterprise reporting tool

What You'll Do
• Participate in the design and development of enterprise data solutions in Databricks, from ideation to deployment, ensuring robustness and scalability
• Work with the Data Architect to build and maintain robust, scalable data pipeline architectures on Databricks using PySpark and SQL
• Assemble and process large, complex ERP datasets to meet diverse functional and non-functional requirements
• Participate in continuous optimization efforts, implementing testing and tooling techniques to enhance data solution quality
• Focus on improving the performance, reliability, and maintainability of data pipelines
• Implement and maintain PySpark and Databricks SQL workflows for querying and analyzing large datasets
• Participate in release management using Git and CI/CD practices
• Develop business reports using the SplashBI reporting tool, leveraging data from the Databricks gold layer

Qualifications
• Bachelor's degree in Computer Science, Engineering, Finance, or equivalent experience
• Good communication skills


Top Skills

AWS
Azure
CI/CD
Databricks
GCP
Git
Power BI
PySpark
SparkSQL
SplashBI
Unity Catalog


