Inivos (www.inivosglobal.com) leads innovation in technology, specializing in Enterprise Solutions, Product Development, and Staff Augmentation. Our team of over 180 professionals, including consultants, developers, and quality engineers, delivers cutting-edge solutions that set new industry benchmarks. Within just five years, we’ve established a robust client base across North America, the UK, Scandinavia, South Africa, UAE, Sri Lanka, Bangladesh, Singapore, Netherlands and Australia.
Job Description
We are seeking a highly skilled Associate Lead - Data Engineer. The ideal candidate will play a pivotal role in migrating data, performing complex transformations, and ensuring seamless integration. As an Associate Lead - Data Engineer, you will:
- Lead the design, development, and optimization of ETL/ELT pipelines using Azure Data Factory and other Azure services.
- Architect and implement data lakehouse and data warehouse solutions (Azure Synapse, Delta Lake, Databricks, etc.).
- Develop and maintain Power BI dashboards, reports, and data models to provide actionable insights for stakeholders.
- Collaborate with cross-functional teams to gather business requirements and translate them into technical solutions.
- Implement and manage Microsoft Fabric-based solutions for data integration, scalability, and performance.
- Ensure best practices in data quality, governance, and security across all data platforms.
- Mentor and guide junior data engineers, fostering knowledge-sharing and best practices.
- Participate in agile ceremonies, contributing to sprint planning, reviews, and retrospectives.
Requirements
- Bachelor’s degree in Computer Science, Data Engineering, or related discipline.
- 5+ years of proven experience in data engineering with strong expertise in:
- Azure Data Factory (pipeline design, orchestration, data flows).
- Microsoft Fabric and other Azure cloud services (Synapse, Databricks, Storage, Analysis Services).
- Power BI (DAX, Power Query, dashboard/report development).
- Strong experience in SQL, Python, and Spark for data transformation and processing.
- Solid understanding of data warehousing concepts, performance optimization, and data modelling.
- Hands-on experience with big data and distributed systems (e.g., Databricks, Snowflake, Redshift) is a plus.
- Familiarity with CI/CD, version control (Git/Bitbucket), and agile methodologies.
- Excellent problem-solving, communication, and leadership skills.
Benefits
- Competitive compensation.
- Recognition & appreciation.
- International exposure.
- Open work culture.
- Remote working model.
- Medical & insurance entitlement.
- Recreational activities and events.
- Bonus entitlement.