Explorium is a cutting-edge data science company that recently closed a Series C round, bringing its total funding to $120 million.
Explorium offers a first-of-its-kind data science platform powered by augmented data discovery and feature engineering. By automatically connecting to thousands of external data sources and leveraging machine learning to distill the most impactful signals, the Explorium platform empowers data scientists and business leaders to drive decision-making, eliminating the barriers to acquiring the right data and enabling superior predictive power.
We are looking for a talented Data Engineer with a passion for data and complex problems.
As a Data Engineer, you will join a diverse engineering group consisting of Data Engineers, Machine Learning Engineers, and Algorithms Engineers. You will work on data pipelines ranging from high-resolution geospatial data to stock market data, implementing both infrastructure and serving. You will play a key role in Explorium’s Data Organization, which is responsible for collecting, integrating, and serving high-quality features for machine learning models.
At Explorium we believe strongly in personal and professional development, and we constantly research new technologies and methodologies.
- Work closely with business and research teams to deliver high-quality results to customers and partners.
- Design and implement complex end-to-end data pipelines, including data extraction, feature engineering, data quality, and data serving.
- Work with Data Scientists to deliver high-scale, high-quality features for machine learning models.
- Take ownership of projects from POC to production.
- Contribute to a wide variety of projects using a range of technologies and tools.
- If you are someone who thrives in a fast-paced environment where being self-directed, creative, and determined is a requirement, we would love for you to join us.
- 5+ years of industry experience building data-intensive platforms.
- 3+ years of hands-on experience programming in Scala and Python.
- Experience working with complex data sets.
- Experience with SQL and NoSQL databases and with data modeling.
- Experience working with cloud compute and storage services on AWS/GCP.
- Kafka, Airflow, and Kubernetes (K8s) - advantage.
- Experience building Machine Learning models - advantage.
- Experience with designing data lakes - advantage.