As a Data Architect, you'll design and model data structures, develop efficient ETL/ELT pipelines, and collaborate across teams to keep the data architecture aligned with business goals.
Project Role: Data Architect
Project Role Description: Define the data requirements and structure for the application. Model and design the application data structure, storage, and integration.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As a Data Architect, you will define the data requirements and structure for the application. Your typical day will involve modeling and designing the application data structure, storage, and integration, ensuring that the data architecture aligns with the overall business objectives and technical specifications. You will collaborate with various teams to ensure that the data architecture is robust, scalable, and efficient, while also addressing any challenges that arise during development. Your role will be pivotal in shaping the organization's data landscape, enabling data-driven decision-making and fostering innovation through effective data management practices.

Responsibilities:
- Develop high-quality, scalable ETL/ELT pipelines using Databricks technologies, including Delta Lake, Auto Loader, and Delta Live Tables (DLT).
- Bring excellent programming and debugging skills in Python, with strong hands-on PySpark experience for building efficient data transformation and validation logic.
- Be proficient in at least one cloud platform: AWS, GCP, or Azure.
- Create modular DBX functions for transformation, PII masking, and validation logic that are reusable across DLT and notebook pipelines.
- Implement ingestion patterns using Auto Loader with checkpointing and schema evolution for structured and semi-structured data.
- Build secure and observable DLT pipelines with DLT Expectations, supporting Bronze/Silver/Gold medallion layering.
- Configure Unity Catalog: set up catalogs, schemas, and user/group access; enable audit logging; and define masking for PII fields.
- Enable secure data access across domains and workspaces via Unity Catalog External Locations, Volumes, and lineage tracking.
- Access and utilize data assets from the Databricks Marketplace to support enrichment, model training, or benchmarking.
- Collaborate with data-sharing stakeholders to implement Delta Sharing, both internally and externally.
- Integrate Power BI, Tableau, or Looker with Databricks using optimized connectors (ODBC/JDBC) and Unity Catalog security controls.
- Build stakeholder-facing SQL dashboards within Databricks to monitor KPIs, data pipeline health, and operational SLAs.
- Prepare GenAI-compatible datasets: manage vector embeddings, index with Databricks Vector Search, and use Feature Store with MLflow.
- Package and deploy pipelines using Databricks Asset Bundles through CI/CD pipelines in GitHub or GitLab.
- Troubleshoot, tune, and optimize jobs using the Photon engine and serverless compute, ensuring cost efficiency and SLA reliability.
- Bring experience with cloud-based services relevant to data engineering: data storage, data processing, data warehousing, real-time streaming, and serverless computing.
- Apply performance optimization techniques hands-on; an understanding of data modeling and data warehousing principles is essential.
Illustrative sketches of the Auto Loader ingestion, DLT expectations, and Unity Catalog masking patterns named above appear after the posting details below.

Nice to Have:
1. Certifications: Databricks Certified Professional or similar certifications.
2. Machine Learning: knowledge of machine learning concepts and experience with popular ML libraries.
3. Big data processing: Spark, Hadoop, Hive, Kafka.
4. Data orchestration: Apache Airflow.
5. CI/CD pipelines and DevOps practices in a cloud environment.
6. Experience with ETL tools such as Informatica, Talend, Matillion, or Fivetran.
7. Familiarity with dbt (Data Build Tool).

Additional Information:
- The candidate should have a minimum of 7.5 years of experience with the Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.

Educational Qualification: 15 years of full-time education is required.
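As a hedged illustration of the Auto Loader ingestion responsibility above (checkpointing plus schema evolution into a Bronze table), the PySpark sketch below uses hypothetical catalog, schema, and path names; treat it as a pattern sketch, not project code.

```python
# Minimal sketch: Auto Loader ingestion into a Bronze Delta table.
# All paths and the table name (main.bronze.sales_raw) are illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # ambient session on Databricks

(spark.readStream
    .format("cloudFiles")                                    # Auto Loader source
    .option("cloudFiles.format", "json")                     # semi-structured input files
    .option("cloudFiles.schemaLocation",
            "/Volumes/main/bronze/_schemas/sales_raw")       # schema inference + evolution state
    .load("/Volumes/main/landing/sales_raw/")
    .writeStream
    .option("checkpointLocation",
            "/Volumes/main/bronze/_checkpoints/sales_raw")   # incremental progress tracking
    .option("mergeSchema", "true")                           # accept evolving columns on write
    .trigger(availableNow=True)                              # drain the backlog, then stop
    .toTable("main.bronze.sales_raw"))
```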
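The DLT Expectations responsibility could look like the sketch below: a Silver-layer Delta Live Tables definition reading the hypothetical Bronze table from the previous sketch. Rule names and columns are assumptions.

```python
# Minimal sketch: Silver-layer DLT table with expectations.
# Runs inside a DLT pipeline, not as a standalone script; names are illustrative.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Validated sales records (Silver layer)")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")  # drop rows that fail
@dlt.expect("non_negative_amount", "amount >= 0")              # log violations, keep rows
def sales_silver():
    return (
        dlt.read_stream("sales_raw")                           # stream from the Bronze table
           .withColumn("processed_at", F.current_timestamp())
    )
```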
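For the Unity Catalog PII-masking responsibility, one common approach is a SQL column mask bound to a table column. The sketch below assumes a hypothetical main.silver.customers table and a pii_readers account group.

```python
# Minimal sketch: Unity Catalog column mask for a PII field.
# Catalog, schema, table, column, and group names are illustrative assumptions.
# `spark` is the ambient session in a Databricks notebook.

spark.sql("""
    CREATE OR REPLACE FUNCTION main.silver.mask_email(email STRING)
    RETURN CASE
        WHEN is_account_group_member('pii_readers') THEN email  -- privileged group sees the value
        ELSE '***MASKED***'                                     -- everyone else sees a redaction
    END
""")

spark.sql("""
    ALTER TABLE main.silver.customers
        ALTER COLUMN email SET MASK main.silver.mask_email
""")
```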
About Accenture
Accenture is a leading global professional services company that helps the world’s leading businesses, governments and other organizations build their digital core, optimize their operations, accelerate revenue growth and enhance citizen services—creating tangible value at speed and scale. We are a talent- and innovation-led company with approximately 791,000 people serving clients in more than 120 countries. Technology is at the core of change today, and we are one of the world’s leaders in helping drive that change, with strong ecosystem relationships. We combine our strength in technology and leadership in cloud, data and AI with unmatched industry experience, functional expertise and global delivery capability. Our broad range of services, solutions and assets across Strategy & Consulting, Technology, Operations, Industry X and Song, together with our culture of shared success and commitment to creating 360° value, enable us to help our clients reinvent and build trusted, lasting relationships. We measure our success by the 360° value we create for our clients, each other, our shareholders, partners and communities. Visit us at www.accenture.com
Equal Employment Opportunity Statement
We believe that no one should be discriminated against because of their differences. All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, military veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by applicable law. Our rich diversity makes us more innovative, more competitive, and more creative, which helps us better serve our clients and our communities.
Top Skills
Apache Airflow
Auto Loader
AWS
Azure
CI/CD
Databricks
Delta Lake
DLT
GCP
JDBC
Looker
ODBC
Power BI
PySpark
Python
Tableau