Senior DevOps Engineer

Explorium is a cutting-edge data science company that recently closed a Series B round, bringing its total funding to $50 million.

Explorium offers a first-of-its-kind data science platform powered by augmented data discovery and feature engineering. By automatically connecting to thousands of external data sources and leveraging machine learning to distill the most impactful signals, the Explorium platform empowers data scientists and business leaders to drive decision-making by eliminating the barriers to acquiring the right data and enabling superior predictive power.

We are looking for a Senior DevOps Engineer with experience in infrastructure best practices and an excellent understanding of both the reliability and continuous delivery domains. In this critical role you'll build the foundations that will support the company's rocket launch into the hyper-growth stage.

We are looking for you if you have a passion for clean, reproducible infrastructure and an appetite for constantly evaluating and implementing best practices while inventing our own solutions when we venture into uncharted territory. We expect you to be a real team player with a strong can-do attitude, great communication skills, and the ability to learn on your own.

Responsibilities:

  • Take ownership of the company's production environment
  • Respond to and resolve emergent service problems, and build tools and automation to proactively prevent the next issue
  • Manage the availability, latency, scalability, and efficiency of our services by engineering reliability into software and systems
  • Review and influence new and evolving design, architecture, standards, and methods for our infrastructure
Requirements:

  • A strong sense of ownership and accountability, and a passion for cultivating knowledge sharing and empowering your peers
  • At least 7 years of experience architecting, building, and administering large, complex Linux-based systems
  • At least 3 years of experience building highly available systems and supporting SaaS cloud-based infrastructure, preferably in a startup environment
  • Experience with observability tools and best practices (Grafana, Prometheus, the Elastic Stack, tracing, and APM)
  • Deep knowledge of the Linux ecosystem, containers, and container orchestration
  • Hands-on experience with cloud vendor architectures and services (AWS, GCP, Azure)
  • Experience with CI/CD/GitOps tooling and flows (Jenkins, GitHub Actions)
  • Experience with Infrastructure-as-Code tools and practices
  • Ability to effectively assess, evaluate, and implement new tools and technologies

Bonus points:

  • Experience with DevSecOps
  • Python Dev / Ops background
  • Data Ops / ML Ops background

Apply for this role
