Data Engineer

Location: Tel Aviv, Israel

Description

About Explorium

Explorium is a leading provider of B2B data foundations for AI agents. We offer go-to-market data and infrastructure designed to power context-aware AI products and strategies. Our platform harmonizes diverse data sources to deliver high-quality, structured, and trustworthy insights—empowering businesses to build intelligent systems that drive real growth.

About the Role

We are looking for a talented and motivated Data Engineer to join our growing Data Products team. You will design, implement, test, deploy, and maintain production-grade data products—mainly focused on data pipelines, transformation layers, and real-time data systems. You’ll work with modern tools such as DBT (on Spark) and Databricks, and leverage technologies like LLMs and GenAI to create scalable and innovative solutions that address real-world business problems.

This role is ideal for someone who combines strong engineering capabilities with a keen understanding of data-driven business use cases.

Key Responsibilities

  • Own the full lifecycle of data product development—from concept and design to deployment and maintenance.
  • Develop and maintain production ETL/ELT pipelines using DBT (on Spark) and orchestrated workflows in Databricks.
  • Apply best practices in Python and SQL to create scalable, maintainable, and efficient data transformations.
  • Build monitoring, alerting, and testing pipelines to ensure reliability and performance in production.
  • Utilize LLMs and Generative AI technologies to enhance data workflows and feature engineering.
  • Collaborate with leading industry data providers to assess and integrate third-party data assets, enhancing the quality and performance of Explorium’s data products.

Requirements

Must-Haves

  • 4+ years of experience in production-level data engineering, data product development, or related roles.
  • Deep proficiency in SQL, Python, and working with large-scale data processing systems.
  • Proven track record of owning and scaling production-grade data pipelines, including versioning, testing, and monitoring.
  • Strong understanding of data modeling, normalization/denormalization trade-offs, and data quality management.
  • Experience working with modern data stack tools: DBT, Databricks, Spark, Airflow, Delta Lake, etc.
  • Strong analytical and experimentation skills, including the ability to design and evaluate data-driven hypotheses and KPIs.

Nice-to-Haves

  • Hands-on experience with DBT.
  • Hands-on experience with Databricks or similar data lakehouse platforms.
  • Familiarity with data modeling techniques and working with both SQL and NoSQL databases.
  • Familiarity with GenAI and LLM applications—particularly in extracting structure from unstructured data at scale.
  • Experience working with a wide variety of external data sources and vendors.
  • Familiarity with cloud-native data platforms (e.g., AWS, Azure, or GCP).
  • BSc/BA in Computer Science, Engineering, or a related technical field—or graduation from a top-tier IDF tech unit.
