
Senior Software Engineer, AI Gateway

Docker

Software Engineering, Data Science
Alberta, Canada · British Columbia, Canada · Ontario, Canada
USD 189,600-260,700 / year + Equity
Posted on Aug 29, 2025

Location

Canada (Alberta, British Columbia, Ontario), United States

Employment Type

Full time

Department

Engineering

Compensation

  • US Salary Range: $189.6K – $260.7K • Offers Equity

The salary range is a guideline and actual starting compensation will be determined by location, level, skills, and experience.

At Docker, we make app development easier so developers can focus on what matters. Our remote-first team spans the globe, united by a passion for innovation and great developer experiences. With over 20 million monthly users and 20 billion image pulls, Docker is the #1 tool for building, sharing, and running apps—trusted by startups and Fortune 100s alike. We’re growing fast and just getting started. Come join us for a whale of a ride!

Docker AI Gateway is our answer to the complexity of taking AI agents from prototype to production. It’s a powerful, intelligent, and secure control point that eliminates the toil of model orchestration, tool management, observability, and governance—so developers can focus on building incredible AI agents, not gluing together infrastructure.

The Gateway sits at the center of modern AI applications, offering:

  • A model and tool routing layer with built-in security and cost optimization

  • A familiar OpenAI-compatible interface and MCP server

  • Unified observability and policy enforcement

  • Auto-RAG, tool injection, session summarization, and more
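To make the routing layer above concrete, here is a minimal sketch of cost-based model selection with health-check-driven failover. This is illustrative only, not Docker's implementation; the model names, prices, and the `PickModel` function are all hypothetical.

```go
package main

import "fmt"

// Route describes one candidate model behind the gateway.
// All names and prices here are hypothetical.
type Route struct {
	Model         string
	ContextWindow int     // max tokens the model accepts
	CostPer1K     float64 // USD per 1K input tokens
	Healthy       bool    // flipped false by health checks to trigger failover
}

// PickModel returns the cheapest healthy model whose context window
// fits the request, or "" if none qualifies.
func PickModel(routes []Route, promptTokens int) string {
	best := ""
	bestCost := 0.0
	for _, r := range routes {
		if !r.Healthy || r.ContextWindow < promptTokens {
			continue
		}
		if best == "" || r.CostPer1K < bestCost {
			best, bestCost = r.Model, r.CostPer1K
		}
	}
	return best
}

func main() {
	routes := []Route{
		{"small-model", 8192, 0.15, true},
		{"large-model", 128000, 2.50, true},
	}
	// Short prompt: the cheap model wins; long prompt: only the large one fits.
	fmt.Println(PickModel(routes, 1000))  // small-model
	fmt.Println(PickModel(routes, 50000)) // large-model
}
```

A production router would also weigh latency, output pricing, and per-tenant policy, but the shape is the same: filter by constraints, then optimize over what remains.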

We’re just getting started—and we need exceptional engineers to help us build the backbone of the future of agent-based development.

Responsibilities

  • Design and implement core systems powering the AI Gateway, including the model router, MCP gateway, and control plane

  • Build infrastructure that supports dynamic model selection, auto-failover, cost-based routing, and policy enforcement

  • Own critical capabilities such as secure credential storage, session summarization, caching, and rate limiting

  • Develop APIs for developers building with OpenAI-compatible interfaces and the Model Context Protocol

  • Build the underlying infrastructure to support evaluation, telemetry, replay, and backtesting for agents and LLM workflows

  • Lead architectural decisions and mentor engineers as the team scales

  • Collaborate with product and design to create delightful experiences in our control plane UI

  • Contribute to roadmap planning, technical strategy, and cross-functional alignment
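Rate limiting, mentioned among the capabilities above, is the kind of gateway primitive this role owns. Below is a minimal token-bucket sketch, deterministic for illustration because the clock is passed in explicitly; the `Bucket` type and its parameters are hypothetical, not Docker's API.

```go
package main

import "fmt"

// Bucket is a minimal token-bucket rate limiter. Time is passed in
// explicitly (nowSec) to keep the sketch deterministic; a real gateway
// would use a monotonic clock and keep one bucket per tenant or key.
type Bucket struct {
	Capacity   float64 // burst size
	RefillRate float64 // tokens added per second
	tokens     float64
	lastSec    float64
}

func NewBucket(capacity, refillRate float64) *Bucket {
	return &Bucket{Capacity: capacity, RefillRate: refillRate, tokens: capacity}
}

// Allow reports whether one request may proceed at time nowSec.
func (b *Bucket) Allow(nowSec float64) bool {
	b.tokens += (nowSec - b.lastSec) * b.RefillRate
	if b.tokens > b.Capacity {
		b.tokens = b.Capacity
	}
	b.lastSec = nowSec
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	b := NewBucket(2, 1) // burst of 2, refills 1 token/sec
	fmt.Println(b.Allow(0), b.Allow(0), b.Allow(0)) // true true false
	fmt.Println(b.Allow(1))                         // true: one token refilled
}
```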

Key Problems You’ll Help Solve

  • Build a unified abstraction layer across diverse model and tool providers (OpenAI, Anthropic, Google, AWS Bedrock)

  • Implement secure and scalable identity and credential vaulting for tool and model access

  • Create infrastructure to support real-time and historical analytics of AI agent behavior

  • Ensure policy enforcement and logging work end-to-end, from prompt to tool to response

  • Develop seamless developer experiences through intuitive APIs and first-class observability

Qualifications

  • 6+ years of backend engineering experience with production-grade systems

  • Deep knowledge of distributed and highly scalable systems, cloud-native infrastructure, and API design

  • Experience building secure, high-throughput services (e.g., gateways, proxies, load balancers, policy engines)

  • Fluency in Go and/or Rust (both preferred)

  • Familiarity with AI/ML platforms or model serving infrastructure

  • A strong product mindset—you're excited about building developer-facing tools

  • Ownership mentality with a bias for shipping, learning, and iterating

Bonus Qualifications

  • Prior experience with OpenAI, Anthropic, or similar LLM APIs

  • Familiarity with RAG architectures, vector databases, or agent frameworks (e.g., LangChain, AutoGPT, CrewAI)

  • Experience with policy engines (e.g., OPA), observability frameworks (e.g., OpenTelemetry), or API gateways

  • Understanding of OAuth 2.1, secret management, and cloud IAM systems

  • Experience with Kubernetes, Docker, and microservices architecture

We use Covey as part of our hiring and/or promotion process for jobs in NYC, and certain features may qualify it as an AEDT. As part of the evaluation process, we provide Covey with job requirements and candidate-submitted applications. We began using Covey Scout for Inbound on April 13, 2024.

Please see the independent bias audit report covering our use of Covey here.

Perks

  • Freedom & flexibility; fit your work around your life

  • Designated quarterly Whaleness Days

  • Home office setup; we want you comfortable while you work

  • 16 weeks of paid parental leave

  • Technology stipend equivalent to $100 net/month

  • PTO plan that encourages you to take time to do the things you enjoy

  • Quarterly, company-wide hackathons

  • Training stipend for conferences, courses and classes

  • Equity; we are a growing start-up and want all employees to have a share in the success of the company

  • Docker Swag

  • Medical benefits, retirement and holidays vary by country

Docker embraces diversity and equal opportunity. We are committed to building a team that represents a variety of backgrounds, perspectives, and skills. The more inclusive we are, the better our company will be.

Due to the remote nature of this role, we are unable to provide visa sponsorship.

#LI-REMOTE
