Senior Data Engineer

Explorium is a cutting-edge data science company that has recently closed a Series B round, bringing its total funding to $50 million.

Explorium offers a first-of-its-kind data science platform powered by augmented data discovery and feature engineering. By automatically connecting to thousands of external data sources and leveraging machine learning to distill the most impactful signals, the Explorium platform empowers data scientists and business leaders to drive decision-making, eliminating the barrier to acquiring the right data and enabling superior predictive power.

Job Description

We are looking for a talented Data Engineer with a passion for data and complex problems.

As a Data Engineer, you will join a diverse engineering group of Data Engineers, Machine Learning Engineers, and Algorithms Engineers. You will work on data pipelines spanning everything from high-resolution geospatial data to stock market data, implementing both infrastructure and serving. You will play a key role in Explorium's Data Organization, which is responsible for collecting, integrating, and serving high-quality features for machine learning models.

At Explorium we believe strongly in personal and professional development, and we constantly research new technologies and methodologies.


Responsibilities

  • Work closely with business and research teams to deliver high-quality results to customers and partners.
  • Design and implement complex end-to-end data pipelines, including data extraction, feature engineering, data quality, and data serving.
  • Work with Data Scientists to deliver high-scale, high-quality features for machine learning models.
  • Take ownership of the project from POC to production.
  • Contribute to a wide variety of projects using a range of technologies and tools.

If you are someone who thrives in a fast-paced environment where being self-directed, creative, and determined are requirements, we would love for you to join us.


Requirements

  • BSc in CS, Mathematics, Statistics, Physics, or EE, or equivalent experience.
  • 2+ years of industry experience working with large data sets using Spark.
  • 3+ years of hands-on experience programming in Scala and Python.
  • Experience working with complex data sets.
  • Experience with databases (SQL and NoSQL) and data modeling.
  • Experience with data visualization and analytics.
  • Experience working with cloud compute and storage services on AWS/GCP.
  • Hacker mindset - deliver results fast using creative solutions.
  • A team player with excellent collaboration skills.
  • Kafka, Airflow, and Kubernetes (K8s) - advantage.
  • Experience with building Machine Learning models - advantage.
  • Experience with designing data lakes - advantage.
  • Deep knowledge of Spark internals and tuning - advantage.

Apply for this role