Lead development of big data solutions, focusing on text data processing, data pipelines, and cloud services, primarily in AWS and GCP.
Company Description
At DemandMatrix, our vision is to disrupt the $100 billion sales and marketing intelligence industry using domain knowledge, machine learning, and AI. Fortune 100 companies like Microsoft, Google, Adobe, Amazon, and IBM trust us to identify their next customer.
Job Description
What will you do?
- To help us go to the next level, we are looking to onboard a hands-on SME in leveraging big data technology to solve our most complex data problems. You will spend almost half of your time on hands-on coding.
- The work involves large-scale text data processing, event-driven data pipelines, in-memory computation, and optimization that weighs CPU cores against network I/O and disk I/O.
- You will use cloud-native services in AWS and GCP.
Who Are You?
- A solid grounding in computer engineering, Unix, data structures, and algorithms will enable you to meet this challenge.
- You have designed and built multiple big data modules and data pipelines that process large volumes of data.
- You are genuinely excited about technology and have built projects from scratch.
Must have:
- 7+ years of hands-on experience in Software Development with a focus on big data and large data pipelines.
- Minimum 3 years of experience building services and pipelines in Python.
- Expertise with a variety of data processing systems, including streaming, event-driven, and batch (Spark, Hadoop/MapReduce).
- Understanding of at least one NoSQL store, such as MongoDB, Elasticsearch, or HBase.
- Understanding of data models, sharding, and data placement strategies for distributed data stores in large-scale, high-throughput, high-availability environments, and their effect on unstructured text data processing.
- Experience running scalable, highly available systems on AWS or GCP.
Good to have:
- Experience with Docker / Kubernetes
- Exposure with CI/CD
- Knowledge of Crawling/Scraping
Perks:
- Fully work from home
- Birthday leave
- Remote work
Top Skills
AWS
Docker
Elasticsearch
GCP
Hadoop
Hbase
Kubernetes
MongoDB
Python
Spark