Explorium is a cutting-edge data science company that has recently closed a Series B round, bringing its total funding to $50 million.
Explorium offers a first-of-its-kind data science platform powered by augmented data discovery and feature engineering. By automatically connecting to thousands of external data sources and leveraging machine learning to distill the most impactful signals, the Explorium platform empowers data scientists and business leaders to drive decision-making by eliminating the barriers to acquiring the right data and enabling superior predictive power.
We are looking for a talented, self-driven DevOps engineer to join the Explorium DevOps team.
In this role, you will take an active part in building and deploying Explorium’s on-prem offering, as well as enhancing its cloud platform. You will be involved in direct communication with customers, support customer installations and lead issues to resolution.
The ideal candidate possesses strong knowledge of cloud and on-premises environments, the ability to work in a distributed team, and experience in ongoing communication with customers and partners, along with a solid grasp of Site Reliability Engineering and DevOps methodologies related to delivery solutions and platform automation.
- Build, deploy and maintain Explorium's modern on-prem offering.
- Closely communicate with customers and partners, fully owning deployment of Explorium’s platform for enterprise customers.
- Dig into and troubleshoot customer issues, leading support and delivery issues to resolution.
- Proactively monitor production systems to quickly identify emerging application and infrastructure issues.
- Troubleshoot server performance and availability issues.
- Experience in deploying and supporting on-prem solutions in customer environments, including close, direct engagement with enterprise customers.
- Strong expertise (3+ years) in Linux (Ubuntu, RedHat, CentOS) and Bash scripting.
- Experience with CI/CD tools (e.g. Jenkins, GitLab CI, Drone).
- Proficiency with the major public cloud providers (AWS, GCP, Azure); AWS is a must.
- Strong Docker experience.
- Familiarity with container orchestration frameworks (Kubernetes, OpenShift, or others).
- Experience using cloud system monitoring tools (e.g. Prometheus, Grafana, Logz.io).
- Experience with relational DB administration (e.g. PostgreSQL).
- Strong sense of urgency and ownership when dealing with customer issues.
- Understanding of Python backend development and scripting.
- Experience in configuration automation using Ansible, Chef, Puppet, or similar tools.
- Experience administering and tuning web servers (e.g. Nginx) and application servers.
- Infrastructure-as-code experience (Terraform, CloudFormation).