HighLevel

Staff Engineer - Database Platform

Posted 17 Days Ago
Remote - Hiring Remotely in Delhi, Connaught Place, New Delhi, Delhi
Senior level

About HighLevel:

HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. We proudly support a global and growing community of over 2 million businesses, from marketing agencies to entrepreneurs to small businesses and beyond, helping users across industries streamline operations and achieve their goals.

HighLevel processes over 15 billion API hits and handles more than 2.5 billion message events every day. Our platform manages 470 terabytes of data distributed across five databases, operates with a network of over 250 micro-services, and supports over 1 million domain names.


Our People

With over 1,500 team members across 15+ countries, we operate in a global, remote-first environment. We are building more than software; we are building a global community rooted in creativity, collaboration, and impact. We take pride in cultivating a culture where innovation thrives, ideas are celebrated, and people come first, no matter where they call home.


Our Impact

Every month, our platform powers over 1.5 billion messages, helps generate over 200 million leads, and facilitates over 20 million conversations for the more than 2 million businesses we serve. Behind those numbers are real people growing their companies, connecting with customers, and making their mark - and we get to help make that happen.


Learn more about us on our YouTube channel or blog.


About the Role:

We are seeking a highly skilled Staff Engineer for our Database Platform team, with expertise in ClickHouse and other columnar databases. The ideal candidate will have a deep understanding of database performance optimization, query tuning, data modeling, and large-scale data processing. You will be responsible for designing, implementing, and maintaining high-performance analytical databases that support both real-time and batch processing.

Responsibilities:

  • Design, optimize, and maintain ClickHouse databases to support high-throughput analytical workloads
  • Develop and implement efficient data models for fast query performance and storage optimization
  • Monitor and troubleshoot database performance issues, ensuring minimal downtime and optimal query execution
  • Work closely with data engineers, software developers, and DevOps teams to integrate ClickHouse with data pipelines
  • Optimize data ingestion processes, ensuring efficient storage and retrieval of structured and semi-structured data
  • Implement partitioning, sharding, and indexing strategies for large-scale data processing
  • Evaluate and benchmark ClickHouse against other columnar databases such as Apache Druid, Apache Pinot, or Snowflake
  • Establish best practices for backup, replication, high availability, and disaster recovery
  • Automate database deployment, schema migrations, and performance monitoring using infrastructure-as-code approaches
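
In day-to-day terms, the partitioning and indexing work above usually starts with MergeTree schema design. A minimal sketch, assuming a hypothetical `events` table (all names, types, and retention values here are illustrative, not taken from this posting):

```sql
-- Hypothetical high-throughput events table.
-- PARTITION BY bounds how much data a time-ranged query touches and
-- makes retention (TTL) cheap to enforce; ORDER BY defines the sparse
-- primary index, so it should lead with the most common filter columns.
CREATE TABLE events
(
    event_date  Date,
    tenant_id   UInt64,
    event_type  LowCardinality(String),
    payload     String
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(event_date)
ORDER BY (tenant_id, event_type, event_date)
TTL event_date + INTERVAL 12 MONTH;
```

Sharding would typically layer on top of this via a `Distributed` table over a cluster; the single-node schema above is the unit being sharded.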

Requirements:

  • 5+ years of experience working with high-performance databases, with a focus on ClickHouse or similar columnar databases
  • Strong knowledge of SQL, query optimization techniques, and database internals
  • Experience handling large-scale data (TBs to PBs) and optimizing data storage & retrieval
  • Hands-on experience with ETL/ELT pipelines, streaming data ingestion, and batch processing
  • Proficiency in at least two scripting/programming languages, such as Node.js, Python, Go, or Java, for database automation
  • Familiarity with Kafka, Apache Spark, or Flink for real-time data processing
  • Experience in Kubernetes, Docker, Terraform, or Ansible for database deployment & orchestration is a plus
  • Strong understanding of columnar storage formats (Parquet, ORC, Avro) and their impact on performance
  • Knowledge of cloud-based ClickHouse deployments (AWS, GCP, or Azure) is a plus
  • Excellent problem-solving skills, ability to work in a fast-paced environment, and a passion for performance tuning
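
On the query-optimization side, one common ClickHouse technique this kind of role leans on is pre-aggregation with a materialized view, so dashboards read a small rollup instead of scanning raw data. A sketch under the same assumption of a hypothetical `events` table:

```sql
-- Hypothetical rollup: maintains per-tenant daily event counts at
-- insert time, so reads hit a tiny SummingMergeTree table instead of
-- aggregating the raw events on every query.
CREATE MATERIALIZED VIEW events_daily_mv
ENGINE = SummingMergeTree
ORDER BY (tenant_id, event_date)
AS
SELECT
    tenant_id,
    event_date,
    count() AS events
FROM events
GROUP BY tenant_id, event_date;
```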

Preferred Skills:

  • Experience with alternative columnar databases like Apache Druid, Apache Pinot, or Snowflake
  • Background in big data analytics, time-series databases, or high-performance data warehousing
  • Prior experience working with distributed systems and high-availability architectures

EEO Statement:

The company is an Equal Opportunity Employer. As an employer subject to affirmative action regulations, we invite you to voluntarily provide the following demographic information. This information is used solely for compliance with government recordkeeping, reporting, and other legal requirements. Providing this information is voluntary and refusal to do so will not affect your application status. This data will be kept separate from your application and will not be used in the hiring decision.


