Data Engineer

Tel Aviv · Full-time

About The Position

Explorium is a cutting-edge data science company that recently closed a Series C round, bringing its total funding to $127 million.

Explorium offers a first-of-its-kind data science platform powered by augmented data discovery and feature engineering. By automatically connecting to thousands of external data sources and leveraging machine learning to distill the most impactful signals, the Explorium platform empowers data scientists and business leaders to drive decision-making by eliminating the barriers to acquiring the right data and enabling superior predictive power.

We are looking for a talented Data Engineer to join our team and revolutionize the world of external data! As a crucial member of our team, you'll be designing, implementing, testing, deploying, and maintaining data pipelines and ETLs across various data sources, industries, and use cases.

As a Data Engineer, you will engineer features, transform data into analytical insights, and create unique business indicators using machine learning techniques. You'll work with a range of tools and translate business needs into technical requirements, driving the development of innovative data products.


Requirements

  • 1-2 years of professional experience in a relevant role such as Data Scientist, Data Analyst, or Data Engineer.
  • 2+ years of production-level development in Python & SQL.
  • Bachelor’s degree in Computer Science, Data Science, or a related field, or an alumnus of an IDF technology unit.
  • Demonstrated experience in the data field, particularly in the development of production ETLs and data pipelines.
  • Experience with feature engineering and data analysis.
  • Experience with cloud services, preferably AWS (Lambda, S3, etc.).
  • Basic knowledge of machine learning techniques.
  • Excellent problem-solving skills and the ability to translate business requirements into technical solutions.


Advantages

  • Experience in software engineering, particularly backend or full-stack development, with practical knowledge of API development, building RESTful applications, working with microservices and Docker, etc.
  • Experience with infrastructure development, including setting up CI/CD pipelines and working with tools such as GitHub Actions, Databricks, and Elementary.
  • Advanced machine learning knowledge.
  • Proficiency in the Spark and dbt frameworks.
  • Familiarity with additional programming languages or data-related tools.
  • Master's degree in a related field or additional certifications.

Apply for this role