Celebrating Women in AI: 3 Questions with Cecilia Liu on Leading Docker’s MCP Strategy

https://www.docker.com/blog/women-in-ai-cecilia-liu-docker-mcp-strategy/ | March 6, 2026

To celebrate International Women’s Day, we sat down with Cecilia Liu, Senior Product Manager at Docker, for three questions about the vision and strategy behind Docker’s MCP solutions. From shaping product direction to driving AI innovation, Cecilia plays a key role in defining how Docker enables secure, scalable AI tooling.


Cecilia leads product management for Docker’s MCP Catalog and Toolkit, our solution for running MCP servers securely and at scale through containerization. She drives Docker’s AI strategy across both enterprise and developer ecosystems, helping organizations deploy MCP infrastructure with confidence while empowering individual developers to seamlessly discover, integrate, and use MCP in their workflows. With a technical background in AI frameworks and an MBA from NYU Stern, Cecilia bridges the worlds of AI infrastructure and developer tools, turning complex challenges into practical, developer-first solutions.

What products are you responsible for?

I own Docker’s MCP solution. At its core, it’s about solving the problems that anyone working with MCP runs into: how do you find the right MCP servers, how do you actually use them without a steep learning curve, and how do you deploy and manage them reliably across a team or organization.

How does Docker’s MCP solution benefit developers and enterprise customers?

Dev productivity is where my heart is. I want to build something that meaningfully helps developers at every stage of their development cycle — and that’s exactly how I think about Docker’s MCP solution.

For end-user developers and vibe coders, the goal is simple: you shouldn’t need to understand the underlying infrastructure to get value from MCP. As long as you’re working with AI, we make it easy to discover, configure, and start using MCP servers without any of the usual setup headaches. One thing I kept hearing in user feedback was that people couldn’t even tell if their setup was actually working. That pushed us to ship in-product setup instructions that walk you through not just configuration, but how to verify everything is running correctly. It sounds small, but it made a real difference.

For developers building MCP servers and integrating them into agents, I’m focused on giving them the right creation and testing tools so they can ship faster and with more confidence. That’s a big part of where we’re headed.

And for security and enterprise admins, we’re solving real deployment pain, making it faster and cheaper to roll out and manage MCP across an entire organization. Custom catalogs, role-based access controls, audit logging, policy enforcement. The goal is to give teams the visibility and control they need to adopt AI tooling confidently at scale.

Customers love us for all of the above, and there’s one more thing that ties it together: the security that comes built-in with Docker. That trust doesn’t happen overnight, and it’s something we take seriously across everything we ship.

What are you excited about when it comes to the future of MCP?

What excites me most is honestly the pace of change itself. The AI landscape is shifting constantly, and with every new tool that makes AI more powerful, there’s a whole new set of developers who need a way to actually use it productively. That’s a massive opportunity.

MCP is where that’s happening right now, and the adoption we’re seeing tells me the need is real. But what gets me out of bed is knowing the problems we’re solving: discoverability, usability, deployment. They are all going to matter just as much for whatever comes next. We’re not just building for today’s tools. We’re building the foundation that developers will reach for every time something new emerges.

Cecilia is speaking about scaling MCP for enterprises at the MCP Dev Summit in NYC on April 3, 2026. If you’re attending, be sure to stop by Docker’s booth (D/P9).

Learn more

Get Started with the Atlassian Rovo MCP Server Using Docker

https://www.docker.com/blog/atlassian-remote-mcp-server-getting-started-with-docker/ | February 4, 2026

We’re excited to announce that the remote Atlassian Rovo MCP server is now available in Docker’s MCP Catalog and Toolkit, making it easier than ever to connect AI assistants to Jira and Confluence. With just a few clicks, technical teams can use their favorite AI agents to create and update Jira issues, epics, and Confluence pages without complex setup or manual integrations.

In this post, we’ll show you how to get started with the Atlassian remote MCP server in minutes and how to use it to automate everyday workflows for product and engineering teams.


Figure 1: Discover 300+ MCP servers, including the remote Atlassian MCP server, in the Docker MCP Catalog.

What is the Atlassian Rovo MCP Server?

Like many teams, we rely heavily on Atlassian tools, especially Jira, to plan, track, and ship product and engineering work. The Atlassian Rovo MCP server enables AI assistants and agents to interact directly with Jira and Confluence, closing the gap between where work happens and how teams want to use AI.

With the Atlassian Rovo MCP server, you can:

  • Create and update Jira issues and epics
  • Generate and edit Confluence pages
  • Use your preferred AI assistant or agent to automate everyday workflows

Traditionally, setting up and configuring MCP servers can be time-consuming and complex. Docker removes that friction, making it easy to get up and running securely in minutes.

Enable the Atlassian Rovo MCP Server with One Click

Docker’s MCP Catalog is a curated collection of 300+ MCP servers, including both local and remote options. It provides a reliable starting point for developers building with MCP so you don’t have to wire everything together yourself.

Prerequisites

To get started with the Atlassian remote MCP server:

  1. Open Docker Desktop and click on the MCP Toolkit tab.
  2. Navigate to the Docker MCP Catalog.
  3. Search for the Atlassian Rovo MCP server.
  4. Select the remote version (cloud icon).
  5. Enable it with a single click.

That’s it. No manual installs. No dependency wrangling.
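
If you prefer the terminal, the MCP Toolkit CLI can do the same thing. A minimal sketch, assuming the server’s catalog name is atlassian (the exact name is an assumption; confirm it in the Catalog):

docker mcp server enable atlassian   # hypothetical server name; check the Catalog listing
docker mcp server ls                 # verify it now shows as enabled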

Why use the Atlassian Rovo MCP server with Docker

Demo by Cecilia Liu: Set up the Atlassian Rovo MCP server with Docker with just a few clicks and use it to generate Jira epics with Claude Desktop

Seamless Authentication with Built-in OAuth

The Atlassian Rovo MCP server uses Docker’s built-in OAuth, so authorization is seamless. Docker securely manages your credentials and allows you to reuse them across multiple MCP clients. You authenticate once, and you’re good to go.

Behind the scenes, this frictionless experience is powered by the MCP Toolkit, which handles environment setup and dependency management for you.

Works with Your Favorite AI Agent

Once the Atlassian Rovo MCP server is enabled, you can connect it to any MCP-compatible client.

For popular clients like Claude Desktop, Claude Code, Codex, or Gemini CLI, connecting takes one click: click Connect, restart Claude Desktop, and you’re ready to go.

From there, we can ask Claude to:

  • Write a short PRD about MCP
  • Turn that PRD into Jira epics and stories
  • Review the generated epics and confirm they’re correct

And just like that, Jira is updated.

One Setup, Any MCP Client

Sometimes AI assistants have hiccups. Maybe you hit a daily usage limit in one tool. That’s not a blocker here.

Because the Atlassian Rovo MCP server is connected through the Docker MCP Toolkit, the setup is completely client-agnostic. Switching to another assistant like Gemini CLI or Cursor is as simple as clicking Connect. No need for reconfiguration or additional setup!
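
The same hookup is also available from the command line. A sketch using the general form the Toolkit CLI prints (docker mcp client connect <client-name>); the client name below is an assumption:

docker mcp client connect claude   # assumed client name; cursor, gemini, etc. follow the same form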

Now we can ask any connected AI assistant, such as Gemini CLI, to check all new unassigned Jira tickets, for example. It just works.

Coming Soon: Share Atlassian-Based Workflows Across Teams

We’re working on new enhancements that will make Atlassian-powered workflows even more powerful and easy to share. Soon, you’ll be able to package complete workflows that combine MCP servers, clients, and configurations. Imagine a workflow that turns customer feedback into Jira tickets using Atlassian and Confluence, then shares that entire setup instantly with your team or across projects. That’s where we’re headed.

Frequently Asked Questions (FAQ)

What is the Atlassian Rovo MCP server?

The Atlassian Rovo MCP server enables AI assistants and agents to securely interact with Jira and Confluence. It allows AI tools to create and update Jira issues and epics, generate and edit Confluence pages, and automate everyday workflows for product and engineering teams.

How do I use the Atlassian Rovo MCP server with Docker? 

You can enable the Atlassian Rovo MCP server directly from Docker Desktop or CLI. Simply open the MCP Toolkit tab, search for the Atlassian MCP server, select the remote version, and enable it with one click. Connect to any MCP-compatible client. For popular tools like Claude Code, Codex, and Gemini, setup is even easier with one-click integration. 

Why use Docker to run the Atlassian Rovo MCP server?

Using Docker to run the Atlassian Rovo MCP server removes the complexity of setup, authentication, and client integration. Docker provides one-click enablement through the MCP Catalog, built-in OAuth for secure credential management, and a client-agnostic MCP Toolkit that lets teams connect any AI assistant or agent without reconfiguration so you can focus on automating Jira and Confluence workflows instead of managing infrastructure.

Less Setup. Less Context Switching. More Work Shipped.

That’s how easy it is to set up and use the Atlassian Rovo MCP server with Docker. By combining the MCP Catalog and Toolkit, Docker removes the friction from connecting AI agents to the tools teams already rely on.

Learn more

Using MCP Servers: From Quick Tools to Multi-Agent Systems

https://www.docker.com/blog/mcp-servers-docker-toolkit-cagent-gateway/ | January 22, 2026

Model Context Protocol (MCP) servers are a spec for exposing tools, models, or services to language models through a common interface. Think of them as smart adapters: they sit between a tool and the LLM, speaking a predictable protocol that lets the model interact with things like APIs, databases, and agents without needing to know implementation details.

But like most good ideas, the devil’s in the details.

The Promise—and the Problems of Running MCP Servers

Running an MCP sounds simple: spin up a Python or Node server that exposes your tool. Done, right? Not quite.

You run into problems fast:

  • Runtime friction: If an MCP is written in Python, your environment needs Python (plus dependencies, plus maybe a virtualenv strategy, plus maybe GPU drivers). Same goes for Node. This multiplies fast when you’re managing many MCPs or deploying them across teams.
  • Secrets management: MCPs often need credentials (API keys, tokens, etc.). You need a secure way to store and inject those secrets into your MCP runtime. That gets tricky when different teams, tools, or clouds are involved.
  • N×N integration pain: Let’s say you’ve got three clients that want to consume MCPs, and five MCPs to serve up. Now you’re looking at 15 individual integrations. No thanks.

To make MCPs practical, you need to solve these three core problems: runtime complexity, secret injection, and client-to-server wiring. 

If you’re wondering where I’m going with all this, take a look at those problems. We already have a technology that has been used by developers for over a decade that helps solve them: Docker containers.
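
To make that concrete: each server in the MCP Catalog ships as a prebuilt image, so the runtime problem collapses into a single docker run. A minimal sketch, assuming the DuckDuckGo server’s image is published as mcp/duckduckgo:

docker run -i --rm mcp/duckduckgo   # speaks MCP over stdio; no Python or Node toolchain needed on the host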

In the rest of this blog I’ll walk through three different approaches, going from least complex to most complex, for integrating MCP servers into your developer experience. 

Option 1 — Docker MCP Toolkit & Catalog

For the developer who already uses containers and wants a low-friction way to start with MCP.

If you’re already comfortable with Docker but just getting your feet wet with MCP, this is the sweet spot. In the raw MCP world, you’d clone Python/Node servers, manage runtimes, inject secrets yourself, and hand-wire connections to every client. That’s exactly the pain Docker’s MCP ecosystem set out to solve.

Docker’s MCP Catalog is a curated, containerized registry of MCP servers. Each entry is a prebuilt container with everything you need to run the MCP server. 

The MCP Toolkit (available via Docker Desktop) is your control panel: search the catalog, launch servers with secure defaults, and connect them to clients.

How it helps:

  • No language runtimes to install
  • Built-in secrets management
  • One-click enablement via Docker Desktop
  • Easily wire the MCPs to your existing agents (Claude Desktop, Copilot in VS Code, etc.)
  • Centralized access via the MCP Gateway

Figure 1: Docker MCP Catalog: Browse hundreds of MCP servers with filters for local or remote and clear distinctions between official and community servers

A Note on the MCP Gateway
One important piece working behind the scenes in both the MCP Toolkit and cagent (a framework for easily building multi-agent applications that we cover below) is the MCP Gateway, an open-source project from Docker that acts as a centralized frontend for all your MCP servers. Whether you’re using a GUI to start containers or defining agents in YAML, the Gateway handles all the routing, authentication, and translation between clients and tools. It also exposes a single endpoint that custom apps or agent frameworks can call directly, making it a clean bridge between GUI-based workflows and programmatic agent development.

Moving on: Using MCP servers alongside existing AI agents is often the first step for many developers. You wire up a couple tools, maybe connect to a calendar or a search API, and use them in something like Claude, ChatGPT, or a small custom agent. For step-by-step tutorials on automating dev workflows with Docker’s MCP Catalog and Toolkit in popular clients, check out these guides for ChatGPT, Claude Desktop, Codex, Gemini CLI, and Claude Code.
Once that pattern clicks, the next logical step is to use those same MCP servers as tools inside a multi-agent system.

Option 2 — cagent: Declarative Multi-Agent Apps

For the developer who wants to build custom multi-agent applications but isn’t steeped in traditional agentic frameworks.

If you’re past simple MCP servers and want agents that can delegate, coordinate, and reason together, cagent is your next step. It’s Docker’s open-source, YAML-first framework for defining and running multi-agent systems—without needing to dive into complex agent SDKs or LLM loop logic.

cagent lets you describe:

  • The agents themselves (model, role, instructions)
  • Who delegates to whom
  • What tools each agent can access (via MCP or local capabilities)

Below is an example of a pirate-flavored chatbot, along with the command to run it:

agents:
  root:
    description: An agent that talks like a pirate
    instruction: Always answer by talking like a pirate.
    welcome_message: |
      Ahoy! I be yer pirate guide, ready to set sail on the seas o' knowledge! What be yer quest? 
    model: auto


cagent run agents.yaml
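
To hand an agent MCP tools, you add a toolsets block. A sketch modeled on the examples in the cagent repository; the duckduckgo server reference is an assumption, so substitute whichever catalog server you’ve enabled:

agents:
  root:
    description: A research agent with web search
    instruction: Answer questions, searching the web when needed.
    model: auto
    toolsets:
      - type: mcp
        ref: docker:duckduckgo   # assumed catalog server, resolved through the MCP Gateway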

You don’t write orchestration code. You describe what you want, and cagent runs the system.

Why it works:

  • Tools are scoped per agent
  • Delegation is explicit
  • Uses the MCP Gateway behind the scenes
  • Ideal for building agent systems without writing Python

If you’d like to give cagent a try, we have a ton of examples in the project’s GitHub repository. Check out this guide on building multi-agent systems in 5 minutes. 

Option 3 — Traditional Agent Frameworks (LangGraph, CrewAI, ADK)

For developers building complex, custom, fully programmatic agent systems.

Traditional agent frameworks like LangGraph, CrewAI, or Google’s Agent Development Kit (ADK) let you define, control, and orchestrate agent behavior directly in code. You get full control over logic, state, memory, tools, and workflows.

They shine when you need:

  • Complex branching logic
  • Error recovery, retries, and persistence
  • Custom memory or storage layers
  • Tight integration with existing backend code

Example: LangGraph + MCP via Gateway


import requests
from langgraph.graph import StateGraph, MessagesState, END
from langchain_core.tools import Tool
from langchain_openai import ChatOpenAI

# Discover the MCP endpoint from the Gateway
resp = requests.get("http://localhost:6600/v1/servers")
servers = resp.json()["servers"]
duck_url = next(s["url"] for s in servers if s["name"] == "duckduckgo")

# Define a callable tool that proxies to the MCP server
def mcp_search(query: str) -> str:
    return requests.post(duck_url, json={"input": query}).json()["output"]

search_tool = Tool(name="web_search", func=mcp_search, description="Search via MCP")

# Wire it into a LangGraph loop: one agent node backed by a tool-calling model
llm = ChatOpenAI(model="gpt-4").bind_tools([search_tool])

def agent(state: MessagesState):
    return {"messages": [llm.invoke(state["messages"])]}

graph = StateGraph(MessagesState)
graph.add_node("agent", agent)
graph.set_entry_point("agent")
graph.add_edge("agent", END)  # a production graph would add a ToolNode to execute the tool calls

app = graph.compile()
app.invoke({"messages": [("user", "What’s the latest in EU AI regulation?")]})

In this setup, you decide which tools are available. The agent chooses when to use them based on context, but you’ve defined the menu.
And yes, this is still true in the Docker MCP Toolkit: you decide what to enable. The LLM can’t call what you haven’t made visible.


Choosing the Right Approach

| Approach | Best For | You Manage | You Get |
|---|---|---|---|
| Docker MCP Toolkit + Catalog | Devs new to MCP, already using containers | Tool selection | One-click setup, built-in secrets, Gateway integration |
| cagent | YAML-based multi-agent apps without custom code | Roles & tool access | Declarative orchestration, multi-agent workflows |
| LangGraph / CrewAI / ADK | Complex, production-grade agent systems | Full orchestration | Max control over logic, memory, tools, and flow |

Wrapping Up
Whether you’re just connecting a tool to Claude, designing a custom multi-agent system, or building production workflows by hand, Docker’s MCP tooling helps you get started easily and securely. 

Check out the Docker MCP Toolkit, cagent, and MCP Gateway for example code, docs, and more ways to get started.

Highlights from AWS re:Invent: Supercharging Kiro with Docker Sandboxes and MCP Catalog

https://www.docker.com/blog/aws-reinvent-kiro-docker-sandboxes-mcp-catalog/ | December 12, 2025

At the recent AWS re:Invent, Docker focused on a very real developer problem: how to run AI agents locally without giving them access to your machine, credentials, or filesystem.

With AWS introducing Kiro, Docker demonstrated how Docker Sandboxes and MCP Toolkit allow developers to run agents inside isolated containers, keeping host environments and secrets out of reach. The result is a practical setup where agents can write code, run tests, and use tools safely, while you stay focused on building, not cleaning up accidental damage.

Local AI Agents, Isolation, and Docker at AWS re:Invent

Two weeks ago, a Reddit user posted about how their filesystem was accidentally deleted by Google Antigravity. And the top comment?

Alright no more antigravity outside of a container

Another user’s home directory was wiped by Claude Code just this past week. And yet again, a top comment:

That’s exactly why Claude code should be used only inside an isolated container or vm

We agree that this should never happen and that containers provide the proper isolation and segmentation.

At AWS re:Invent 2025, we were able to show off this vision with Kiro running in our new Docker Sandboxes, using MCP servers provided by the Docker MCP Toolkit.

If you weren’t able to attend or visit us at the booth, fear not! I’ll share the demo with you.


Jim Clark, one of Docker’s Principal Engineers, giving a demo of a secure AI development environment built on Docker Sandboxes and the MCP Toolkit

Giving Kiro safety guardrails

Docker Sandboxes provide the ability to run an agent inside an isolated environment using containers. In this environment, the agent has no access to credentials stored on the host and can only access the files of the specified project directory.

As an example, I have some demo AWS credentials on my machine:

> cat ~/.aws/credentials
[default]
aws_access_key_id=demo_access_key
aws_secret_access_key=demo_secret_key

Now, I’m going to clone the Catalog Service demo project and start a sandbox using Kiro:

git clone https://github.com/dockersamples/catalog-service-node.git
cd catalog-service-node
docker sandbox run --mount-docker-socket kiro

The --mount-docker-socket flag is added to give the sandbox the Docker socket, which will allow the agent to run my integration tests that use Testcontainers.

On the first launch, I will be required to authenticate. After that’s done, I will ask Kiro to tell me about the AWS credentials it has access to:

    [Kiro prints its ASCII-art banner here]
Model: Auto (/model to change) | Plan: KIRO FREE (/usage for more detail)

!> Tell me about the AWS credentials you have access to

From here, Kiro will search the typical places AWS credentials are configured. But finally, it reaches the following conclusion:

Currently, there are no AWS credentials configured on your system

And why is this? The credentials on the host are not accessible inside the sandbox environment. The agent is in the isolated environment and only has access to the current project directory.

Giving Kiro secure tools with the MCP Toolkit

If we take a step back and think about it, the only credential an agent should have access to is the one it uses to authenticate with the model provider. All other credentials belong to the tools (or MCP servers) around the agent.

And that’s where the MCP Toolkit comes in!

Sandboxes don’t yet have an automatic way to connect to the MCP Toolkit (it’s coming soon!). Until that’s available, I’ll start an MCP Gateway with the following command:

docker mcp gateway run --transport=streaming

There are a variety of ways to configure Kiro with MCP servers, but the project-level configuration provides an easy way that also works with sandboxes.

In the project, I will create a .kiro/settings/mcp.json file with the following contents:

{
  "mcpServers": {
    "docker-mcp-toolkit": {
      "type": "http",
      "url": "http://host.docker.internal:8811/"
    }
  }
}
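
Before restarting Kiro, it’s worth sanity-checking from the host that the Gateway is listening. The “GET requires an active session” reply (the same one shown in the ChatGPT walkthrough later in this feed) is the expected response to a plain GET:

curl http://localhost:8811/mcp
# GET requires an active session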

After restarting Kiro, I can ask it about the available tools:

/tools

The output then shows the following tools coming from the MCP Gateway:

docker-mcp-toolkit (MCP)
- code-mode             trusted
- mcp-add               trusted
- mcp-config-set        trusted
- mcp-create-profile    trusted
- mcp-exec              trusted
- mcp-find              trusted
- mcp-remove            trusted

These tools highlight the ability to dynamically add and remove MCP servers through the Gateway. 

By using an AGENTS.md file, I can tell the agent which MCP servers to use and give it an additional nudge to remove those servers when they’re no longer needed (which will reduce my context size and costs). This is what my file looks like:

# Special instructions

- When info is needed from GitHub, use the `github-official` MCP server. This will ensure proper auth tokens are used.
- When adding MCP servers, be sure to activate all tools.
- When you have the required info from any MCP server, remove the MCP server to reduce the number of tools in the context.

Before Kiro can use the GitHub MCP server, I need to ensure I’ve configured the MCP server with my OAuth credentials.


Screenshot of the Docker Desktop dashboard showing the GitHub Official MCP server listing within the MCP Toolkit. Specifically, it shows the OAuth configuration has been completed.

Putting it all together

With all of the setup completed, I’m ready to have Kiro do some work for me. I’ll simply ask it to complete one of the issues on GitHub:

> Can you help me complete the work for issue #64?

Watching the output, I’ll first see Kiro add the github-official MCP server by using the mcp-add tool.

Running tool mcp-add with the param (from mcp server: docker-mcp-toolkit)
 ⋮  {
 ⋮    "name": "github-official",
 ⋮    "activate": true
 ⋮  }
 - Completed in 0.609s

From there, I’ll see that Kiro will check the git remotes to determine the GitHub organization and repository being used:

> I need to determine the repository owner and name. Let me check the current directory for git information:
I will run the following command: git remote -v (using tool: shell)
Purpose: Get git remote information to identify the repository

origin  https://github.com/dockersamples/catalog-service-node.git (fetch)
origin  https://github.com/dockersamples/catalog-service-node.git (push)
 - Completed in 0.6s

Then, it will use the GitHub MCP server to get the issue details:

> Now I can get the issue details:
Running tool issue_read with the param (from mcp server: docker-mcp-toolkit)
 ⋮  {
 ⋮    "method": "get",
 ⋮    "owner": "dockersamples",
 ⋮    "repo": "catalog-service-node",
 ⋮    "issue_number": 64
 ⋮  }
 - Completed in 0.613s

> Perfect! Issue #64 is about adding product metadata to S3 objects.

From here, Kiro will go and make the changes to my source code, update the integration tests, and even run the integration tests to ensure it all works. After it’s completed, I have the following output:

I've successfully completed the work for issue #64. Here's what was implemented:

Changes Made:

1. StorageService.js: 
   - Added Tagging parameter to S3 PutObjectCommand with product_id=${id}
   - Added Metadata parameter with product_name: productName
   - Added new AWS SDK imports for GetObjectTaggingCommand and HeadObjectCommand
   - Created helper functions getObjectTags() and getObjectMetadata()

2. ProductService.js:
   - Modified uploadProductImage() to fetch product name from database and pass it to uploadFile()
   - Exposed new functions getProductImageTags() and getProductImageMetadata()

3. Integration Test:
   - Added validation to verify S3 objects have correct tags (product_id with the product ID)
   - Added validation to verify S3 objects have correct metadata (product_name with the product name)

Test Results:
All integration tests pass, confirming that:
- S3 objects are tagged with the product identifier
- S3 objects include metadata with the product name
- The existing functionality remains intact

And notably, right before that, I see that Kiro automatically removed the GitHub MCP server to drop its tools from my context:

Let me clean up by removing the GitHub MCP server since we're done with it:
Running tool mcp-remove with the param (from mcp server: docker-mcp-toolkit)
 ⋮  {
 ⋮    "name": "github-official"
 ⋮  }
 - Completed in 0.2s

With those tools removed from the context, the model has less to tokenize and process, which means faster responses and lower cost.

Highlighting what’s important

Taking a step back to look at what we ran, we have the following:

  • An agent in an isolated environment. With the agent running in a container, it’s unable to access and leak credentials stored on my host machine. And rogue requests to delete my filesystem are limited to the containerized environment where it’s running as a non-root user.
  • Isolated and containerized MCP servers. Each MCP server runs in its own isolated container, preventing host access. In addition, I don’t have to spend any time worrying about runtime environments or configuration. With a container, “it just works!”
  • API credentials only where they’re needed. The only component that needs access to my GitHub credential is the GitHub MCP server, where it is securely injected. This approach further prevents potential leaks and exposures.

In other words, we have a microservices-style architecture where each component runs in its own container and follows least privilege, with access only to the things it needs.


Looking forward

Here at Docker, we’re quite excited about this architecture and there’s still a lot to do. Two items I’m excited about include:

  • A network boundary for agentic workloads. This boundary would limit network access to only authorized hostnames. Then, if a prompt injection tries to send sensitive information to evildomain.com, that request is blocked.
  • Governance and control for organizations. With this, your organization can authorize the MCP servers that are used and even create its own custom catalogs and rule sets.

If you want to try out Sandboxes, you can do so by enabling the Experimental Feature in Docker Desktop 4.50+. We’d love to hear your feedback and thoughts!

Learn more 

How to Add MCP Servers to ChatGPT with Docker MCP Toolkit

https://www.docker.com/blog/add-mcp-server-to-chatgpt/ | December 12, 2025

ChatGPT is great at answering questions and generating code. But here’s what it can’t do: execute that code, query your actual database, create a GitHub repo with your project, or scrape live data from websites. It’s like having a brilliant advisor who can only talk, never act.

Docker MCP Toolkit changes this completely. 

Here’s what that looks like in practice: You ask ChatGPT to check MacBook Air prices across Amazon, Walmart, and Best Buy. If competitor prices are lower than yours, it doesn’t just tell you, it acts: automatically adjusting your Stripe product price to stay competitive, logging the repricing decision to SQLite, and pushing the audit trail to GitHub. All through natural conversation. No manual coding. No copy-pasting scripts. Real execution.

“But wait,” you might say, “ChatGPT already has a shopping research feature.” True. But ChatGPT’s native shopping can only look up prices. Only MCP can execute: creating payment links, generating invoices, storing data in your database, and pushing to your GitHub. That’s the difference between an advisor and an actor.

By the end of this guide, you’ll build exactly this: a Competitive Repricing Agent that checks competitor prices on demand, compares them to yours, and automatically adjusts your Stripe product prices when competitors are undercutting you.

Here’s how the pieces fit together:

  • ChatGPT provides the intelligence: understanding your requests and determining what needs to happen
  • Docker MCP Gateway acts as the secure bridge: routing requests to the right tools
  • MCP Servers are the hands: executing actual tasks in isolated Docker containers

The result? ChatGPT can query your SQL database, manage GitHub repositories, scrape websites, process payments, run tests, and more—all while Docker’s security model keeps everything contained and safe.

In this guide, you’ll learn how to add seven MCP servers to ChatGPT by connecting to Docker MCP Toolkit. We’ll use a handful of must-have MCP servers: Firecrawl for web scraping, SQLite for data persistence, GitHub for version control, Stripe for payment processing, Node.js Sandbox for calculations, Sequential Thinking for complex reasoning, and Context7 for documentation. Then, you’ll build the Competitive Repricing Agent shown above, all through conversation.

Caption: YouTube video where Ajeet demonstrates building the competitive repricing agent

What is Model Context Protocol (MCP)?

Before we dive into the setup, let’s clarify what MCP actually is.

Model Context Protocol (MCP) is the standardized way AI agents like ChatGPT and Claude connect to tools, APIs, and services. It’s what lets ChatGPT go beyond conversation and perform real-world actions like querying databases, deploying containers, analyzing datasets, or managing GitHub repositories.

In short: MCP is the bridge between ChatGPT’s reasoning and your developer stack. And Docker? Docker provides the guardrails that make it safe.

Why Use Docker MCP Toolkit with ChatGPT?

I’ve been working with AI tools for a while now, and this Docker MCP integration stands out for one reason: it actually makes ChatGPT productive.

Most AI integrations feel like toys: impressive demos that break in production. Docker MCP Toolkit is different. It creates a secure, containerized environment where ChatGPT can execute real tasks without touching your local machine or production systems.

Every action happens in an isolated container. Every MCP server runs in its own security boundary. When you’re done, containers are destroyed. No residue, no security debt, complete reproducibility across your entire team.

What ChatGPT Can and Can’t Do Without MCP

Let’s be clear about what changes when you add MCP.

Without MCP

You ask ChatGPT to build a system to regularly scrape product prices and store them in a database. ChatGPT responds with Python code, maybe 50 lines using BeautifulSoup and SQLite. Then you must copy the code, install dependencies, create the database schema, run the script manually, and set up a scheduler if you want it to run regularly.

Yes, ChatGPT remembers your conversation and can store memories about you. But those memories live on OpenAI’s servers—not in a database you control.

With MCP

You ask ChatGPT the same thing. Within seconds, it calls Firecrawl MCP to actually scrape the website. It calls SQLite MCP to create a database on your machine and store the data. It calls GitHub MCP to save a report to your repository. The entire workflow executes in under a minute.

Real data gets stored in a real database on your infrastructure. Real commits appear in your GitHub repository. Close ChatGPT, come back tomorrow, and ask “Show me the price trends.” ChatGPT queries your SQLite database and returns results instantly because the data lives in a database you own and control, not in ChatGPT’s conversation memory.

The data persists in your systems, ready to query anytime; no manual script execution required.

Why This Is Different from ChatGPT’s Native Shopping

ChatGPT recently released a shopping research feature that can track prices and make recommendations. Here’s what it can and cannot do:

What ChatGPT Shopping Research can do:

  • Track prices across retailers
  • Remember price history in conversation memory
  • Provide comparisons and recommendations

What ChatGPT Shopping Research cannot do:

  • Automatically update your product prices in Stripe
  • Execute repricing logic based on competitor changes
  • Store pricing data in your database (not OpenAI’s servers)
  • Push audit trails to your GitHub repository
  • Create automated competitive response workflows

With Docker MCP Toolkit, ChatGPT becomes a competitive pricing execution system. When you ask it to check prices and competitors are undercutting you, it doesn’t just inform you, it acts: updating your Stripe prices to match or beat competitors, logging decisions to your database, and pushing audit records to GitHub. The data lives in your infrastructure, not OpenAI’s servers.

Setting Up ChatGPT with Docker MCP Toolkit

Prerequisites

Before you begin, ensure you have:

  • A machine with at least 8 GB of RAM (16 GB ideal)
  • Docker Desktop installed
  • A ChatGPT Plus, Pro, Business, or Enterprise account
  • An ngrok account (free tier works) for exposing the Gateway publicly

Step 1. Enable ChatGPT developer mode

  • Head over to ChatGPT and sign in to your account.
  • Click on your profile icon at the top left corner of the ChatGPT page and select “Settings”. Select “Apps and Connectors” and scroll down to the end of the page to select “Advanced Settings.”

Settings → Apps & Connectors → Advanced → Developer Mode (ON)


ChatGPT Developer Mode provides full Model Context Protocol (MCP) client support for all tools, both read and write operations. This feature was announced in the first week of September 2025, marking a significant milestone in AI-developer integration. ChatGPT can perform write actions—creating repositories, updating databases, modifying files—all with proper confirmation modals for safety.

Key capabilities:

  • Full read/write MCP tool support
  • Custom connector creation
  • OAuth and authentication support
  • Explicit confirmations for write operations
  • Available on Plus, Pro, Business, Enterprise, and Edu plans

Step 2. Create MCP Gateway

This creates and starts the MCP Gateway container that ChatGPT will connect to.

docker mcp server init --template=chatgpt-app-basic test-chatgpt-app

Successfully initialized MCP server project in test-chatgpt-app (template: chatgpt-app-basic)
Next steps:
  cd test-chatgpt-app
  docker build -t test-chatgpt-app:latest .

Step 3. List out all the project files

ls -la
total 64
drwxr-xr-x@   9 ajeetsraina  staff   288 16 Nov 16:53 .
drwxr-x---+ 311 ajeetsraina  staff  9952 16 Nov 16:54 ..
-rw-r--r--@   1 ajeetsraina  staff   165 16 Nov 16:53 catalog.yaml
-rw-r--r--@   1 ajeetsraina  staff   371 16 Nov 16:53 compose.yaml
-rw-r--r--@   1 ajeetsraina  staff   480 16 Nov 16:53 Dockerfile
-rw-r--r--@   1 ajeetsraina  staff    88 16 Nov 16:53 go.mod
-rw-r--r--@   1 ajeetsraina  staff  2576 16 Nov 16:53 main.go
-rw-r--r--@   1 ajeetsraina  staff  2254 16 Nov 16:53 README.md
-rw-r--r--@   1 ajeetsraina  staff  6234 16 Nov 16:53 ui.html

Step 4. Examine the Compose file

services:
  gateway:
    image: docker/mcp-gateway                # Official Docker MCP Gateway image
    command:
      - --servers=test-chatgpt-app           # Name of the MCP server to expose
      - --catalog=/mcp/catalog.yaml          # Path to server catalog configuration
      - --transport=streaming                # Use streaming transport for real-time responses
      - --port=8811                           # Port the gateway listens on
    environment:
      - DOCKER_MCP_IN_CONTAINER=1            # Tells gateway it's running inside a container
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # Allows gateway to spawn sibling containers
      - ./catalog.yaml:/mcp/catalog.yaml           # Mount local catalog into container
    ports:
      - "8811:8811"                           # Expose gateway port to host


Step 5. Bringing up the compose services

docker compose up -d
[+] Running 2/2
 ✔ Network test-chatgpt-app_default      Created                                            0.0s
 ✔ Container test-chatgpt-app-gateway-1  Started  

docker ps | grep test-chatgpt-app
eb22b958e09c   docker/mcp-gateway   "/docker-mcp gateway…"   21 seconds ago   Up 20 seconds   0.0.0.0:8811->8811/tcp, [::]:8811->8811/tcp   test-chatgpt-app-gateway-1

Step 6. Verify the MCP session

curl http://localhost:8811/mcp
GET requires an active session
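
That reply is expected: the streaming transport speaks JSON-RPC over POST, and a session begins with an initialize request. A minimal handshake sketch (the protocol version and client info values are illustrative):

curl -s http://localhost:8811/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"curl","version":"0.0"}}}'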

Step 7. Expose with Ngrok

Install ngrok and expose your local gateway. You will need to sign up for an ngrok account to obtain an auth token.

brew install ngrok
ngrok config add-authtoken <your_token_id>
ngrok http 8811

Note the public URL (like https://91288b24dc98.ngrok-free.app). Keep this terminal open.

Step 8. Connect ChatGPT

In ChatGPT, go to Settings → Apps & Connectors → Create.


Step 9. Create connector:

Settings → Apps & Connectors → Create

- Name: Test MCP Server
- Description: Testing Docker MCP Toolkit integration
- Connector URL: https://[YOUR_NGROK_URL]/mcp
- Authentication: None
- Click "Create"

Test it by asking ChatGPT to call the greet tool. If it responds, your connection works. Here’s how it looks:


Real-World Demo: Competitive Repricing Agent

Now that you’ve connected ChatGPT to Docker MCP Toolkit, let’s build something that showcases what only MCP can do—something ChatGPT’s native shopping feature cannot replicate.

We’ll create a Competitive Repricing Agent that checks competitor prices on demand, and when competitors are undercutting you, automatically adjusts your Stripe product prices to stay competitive, logs the repricing decision to SQLite, and pushes audit records to GitHub.

Time to build: 15 minutes  

Monthly cost: Free Stripe (test mode) + $1.50-$15 (Firecrawl API)

Infrastructure: $0 (SQLite is free)

The Challenge

E-commerce businesses face a constant dilemma:

  • Manual price checking across multiple retailers is time-consuming and error-prone
  • Comparing competitor prices and calculating optimal repricing requires multiple tools
  • Executing price changes across your payment infrastructure requires context-switching
  • Historical trend data is scattered across spreadsheets
  • Strategic insights require manual analysis and interpretation

Result: Missed opportunities, delayed reactions, and losing sales to competitors with better prices.

The Solution: On-Demand Competitive Repricing Agent

Docker MCP Toolkit transforms ChatGPT from an advisor into an autonomous agent that can actually execute. The architecture routes your requests through a secure MCP Gateway that orchestrates specialized tools: Firecrawl scrapes live prices, Stripe creates payment links and invoices, SQLite stores data on your infrastructure, and GitHub maintains your audit trail. Each tool runs in an isolated Docker container: secure, reproducible, and under your control.

The 7 MCP Servers We’ll Use

| Server | Purpose | Why It Matters |
|---|---|---|
| Firecrawl | Web scraping | Extracts live prices from any website |
| SQLite | Data persistence | Stores 30+ days of price history |
| Stripe | Payment management | Updates your product prices to match or beat competitors |
| GitHub | Version control | Audit trail for all reports |
| Sequential Thinking | Complex reasoning | Multi-step strategic analysis |
| Context7 | Documentation | Up-to-date library docs for code generation |
| Node.js Sandbox | Calculations | Statistical analysis in isolated containers |

The Complete MCP Workflow (Executes in under 3 minutes)


Step 1: Scrape and Store (30 seconds)

  • Agent scrapes live prices from Amazon, Walmart, and Best Buy 
  • Compares against your current Stripe product price

Step 2: Compare Against Your Price (15 seconds) 

  • Best Buy drops to $509.99—undercutting your $549.99
  • Agent calculates optimal repricing strategy
  • Determines new competitive price point

Step 3: Execute Repricing (30 seconds)

  • Updates your Stripe product with the new competitive price
  • Logs repricing decision to SQLite with full audit trail
  • Pushes pricing change report to GitHub

Step 4: Stay Competitive (instant)

  • Your product now priced competitively
  • Complete audit trail in your systems
  • Historical data ready for trend analysis

The Demo Setup: Enable Docker MCP Toolkit

Open Docker Desktop and enable the MCP Toolkit from the Settings menu.

To enable:

  1. Open Docker Desktop
  2. Go to Settings → Beta Features
  3. Toggle Docker MCP Toolkit ON
  4. Click Apply

Click MCP Toolkit in the Docker Desktop sidebar, then select Catalog to explore available servers.

For this demonstration, we’ll use seven MCP servers:

  • SQLite – RDBMS with advanced analytics, text and vector search, geospatial capabilities, and intelligent workflow automation
  • Stripe – Updates your product prices to match or beat competitors for automated repricing workflows
  • GitHub – Handles version control and deployment
  • Firecrawl – Web scraping and content extraction
  • Node.js Sandbox – Runs tests, installs dependencies, validates code (in isolated containers)
  • Sequential Thinking – Debugs failing tests and optimizes code
  • Context7 – Provides code documentation for LLMs and AI code editors

Let’s configure each one step by step.

1. Configure SQLite MCP Server

The SQLite MCP Server requires no external database setup. It manages database creation and queries through its 25 built-in tools.

To setup the SQLite MCP Server, follow these steps:

  1. Open Docker Desktop → access MCP Toolkit → Catalog
  2. Search “SQLite”
  3. Click + Add
  4. No configuration needed, just click Start MCP Server
docker mcp server ls
# Should show sqlite-mcp-server as enabled

That’s it. ChatGPT can now create databases, tables, and run queries through conversation.

2. Configure Stripe MCP Server

The Stripe MCP server gives ChatGPT full access to payment infrastructure—listing products, managing prices, and updating your catalog to stay competitive.

Get Stripe API Key

  1. Go to dashboard.stripe.com
  2. Navigate to Developers → API Keys
  3. Copy your Secret Key:
    • Use sk_test_... for sandbox/testing
    • Use sk_live_... for production

Configure in Docker Desktop

  1. Open Docker Desktop → MCP Toolkit → Catalog
  2. Search for “Stripe”
  3. Click + Add
  4. Go to the Configuration tab
  5. Add your API key:
    • Field: stripe.api_key
    • Value: Your Stripe secret key
  6. Click Save and Start Server

Or via CLI:

docker mcp secret set STRIPE.API_KEY="sk_test_your_key_here"
docker mcp server enable stripe

3. Configure GitHub Official MCP Server

The GitHub MCP server lets ChatGPT create repositories, manage issues, review pull requests, and more.

Option 1: OAuth Authentication (Recommended)

OAuth is the easiest and most secure method:

  1. In MCP Toolkit → Catalog, search “GitHub Official”
  2. Click + Add
  3. Go to the OAuth tab in Docker Desktop
  4. Find the GitHub entry
  5. Click “Authorize”
  6. Your browser opens GitHub’s authorization page
  7. Click “Authorize Docker” on GitHub
  8. You’re redirected back to Docker Desktop
  9. Return to the Catalog tab, find GitHub Official
  10. Click Start Server

Advantage: No manual token creation. Authorization happens through GitHub’s secure OAuth flow with automatic token refresh.

Option 2: Personal Access Token

If you prefer manual control or need specific scopes:

Step 1: Create GitHub Personal Access Token

  1. Go to https://github.com and sign in
  2. Click your profile picture → Settings
  3. Scroll to “Developer settings” in the left sidebar
  4. Click “Personal access tokens” → “Tokens (classic)”
  5. Click “Generate new token” → “Generate new token (classic)”
  6. Name it: “Docker MCP ChatGPT”
  7. Select scopes:
    • repo (Full control of repositories)
    • workflow (Update GitHub Actions workflows)
    • read:org (Read organization data)
  8. Click “Generate token”
  9. Copy the token immediately (you won’t see it again!)

Step 2: Configure in Docker Desktop

In MCP Toolkit → Catalog, find GitHub Official:

  1. Click + Add (if not already added)
  2. Go to the Configuration tab
  3. Select “Personal Access Token” as the authentication method
  4. Paste your token
  5. Click Start Server

Or via CLI:

docker mcp secret set GITHUB.PERSONAL_ACCESS_TOKEN="github_pat_YOUR_TOKEN_HERE"

Verify GitHub Connection

docker mcp server ls

# Should show github as enabled

4. Configure Firecrawl MCP Server

The Firecrawl MCP server gives ChatGPT powerful web scraping and search capabilities.

Get Firecrawl API Key

  1. Go to https://www.firecrawl.dev
  2. Create an account (or sign in)
  3. Navigate to API Keys in the sidebar
  4. Click “Create New API Key”
  5. Copy the API key

Configure in Docker Desktop

  1. Open Docker Desktop → MCP Toolkit → Catalog
  2. Search for “Firecrawl”
  3. Find Firecrawl in the results
  4. Click + Add
  5. Go to the Configuration tab
  6. Add your API key:
    • Field: firecrawl.api_key
    • Value: Your Firecrawl API key
  7. Leave all other entries blank
  8. Click Save and Add Server

Or via CLI:

docker mcp secret set FIRECRAWL.API_KEY="fc-your-api-key-here"
docker mcp server enable firecrawl

What You Get

6+ Firecrawl tools, including:

  • firecrawl_scrape – Scrape content from a single URL
  • firecrawl_crawl – Crawl entire websites and extract content
  • firecrawl_map – Discover all indexed URLs on a site
  • firecrawl_search – Search the web and extract content
  • firecrawl_extract – Extract structured data using LLM capabilities
  • firecrawl_check_crawl_status – Check crawl job status

5. Configure Node.js Sandbox MCP Server

The Node.js Sandbox enables ChatGPT to execute JavaScript in isolated Docker containers.

Note: This server requires special configuration because it uses Docker-out-of-Docker (DooD) to spawn containers.

Understanding the Architecture

The Node.js Sandbox implements the Docker-out-of-Docker (DooD) pattern by mounting /var/run/docker.sock. This gives the sandbox container access to the Docker daemon, allowing it to spawn ephemeral sibling containers for code execution.

When ChatGPT requests JavaScript execution:

  1. Sandbox container makes Docker API calls
  2. Creates temporary Node.js containers (with resource limits)
  3. Executes code in complete isolation
  4. Returns results
  5. Auto-removes the container

Security Note: Docker socket access is a privilege escalation vector (effectively granting root-level host access). This is acceptable for local development but requires careful consideration for production use.
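To see the pattern (and the warning) in action, the one-liner below reproduces it with the official docker:cli image: a container handed the host’s Docker socket can list, and launch, the host’s containers.

docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli docker ps
# prints the HOST's containers: evidence the socket grants daemon-level access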

Add Via Docker Desktop

  1. MCP Toolkit → Catalog
  2. Search “Node.js Sandbox”
  3. Click + Add

Unfortunately, the Node.js Sandbox requires manual configuration that can’t be done entirely through the Docker Desktop UI. We’ll need to configure ChatGPT’s connector settings directly.

Prepare Output Directory

Create a directory for sandbox output:

# macOS/Linux
mkdir -p ~/Desktop/sandbox-output

# Windows
mkdir %USERPROFILE%\Desktop\sandbox-output

Configure Docker File Sharing

Ensure this directory is accessible to Docker:

  1. Docker Desktop → Settings → Resources → File Sharing
  2. Add ~/Desktop/sandbox-output (or your Windows equivalent)
  3. Click Apply & Restart

6. Configure Sequential Thinking MCP Server

The Sequential Thinking MCP server gives ChatGPT the ability to do dynamic and reflective problem-solving through thought sequences. Adding it is straightforward: it doesn’t require an API key. Just search for Sequential Thinking in the Catalog and add it to your MCP server list.

In Docker Desktop:

  1. Open Docker Desktop → MCP Toolkit → Catalog
  2. Search for “Sequential Thinking”
  3. Find Sequential Thinking in the results
  4. Click “Add MCP Server” to add without any configuration

The Sequential Thinking MCP server should now appear under “My Servers” in Docker MCP Toolkit.

What you get:

  • A single Sequential Thinking tool that includes:
    • sequentialthinking – A detailed tool for dynamic and reflective problem-solving through thoughts. This tool helps analyze problems through a flexible thinking process that can adapt and evolve. Each thought can build on, question, or revise previous insights as understanding deepens.

7. Configure Context7 MCP Server

The Context7 MCP server enables ChatGPT to access up-to-date code documentation for LLMs and AI code editors. Adding it is straightforward: it doesn’t require an API key. Just search for Context7 in the Catalog and add it to your MCP server list.

In Docker Desktop:

  1. Open Docker Desktop → MCP Toolkit → Catalog
  2. Search for “Context7”
  3. Find Context7 in the results
  4. Click “Add MCP Server” to add without any configuration

The Context7 MCP server should now appear under “My Servers” in Docker MCP Toolkit.

What you get:

  • 2 Context7 tools including:
    • get-library-docs – Fetches up-to-date documentation for a library.
    • resolve-library-id – Resolves a package/product name to a Context7-compatible library ID and returns a list of matching libraries. 

Verify that all the MCP servers are available and running:

docker mcp server ls

MCP Servers (7 enabled)

NAME                   OAUTH        SECRETS      CONFIG       DESCRIPTION
------------------------------------------------------------------------------------------------
context7               -            -            -            Context7 MCP Server -- Up-to-da...
fetch                  -            -            -            Fetches a URL from the internet...
firecrawl              -            ✓ done       partial    Official Firecrawl MCP Server...
github-official        ✓ done       ✓ done       -            Official GitHub MCP Server, by ...
node-code-sandbox      -            -            -            A Node.js–based Model Context P...
sequentialthinking     -            -            -            Dynamic and reflective problem-...
sqlite-mcp-server      -            -            -            The SQLite MCP Server transform...
stripe                 -            ✓ done       -            Interact with Stripe services o...

Tip: To use these servers, connect to a client (IE: claude/cursor) with docker mcp client connect <client-name>

Configuring ChatGPT App and Connector

Use the following compose file to let ChatGPT discover all the tools you’ve enabled from the Docker MCP Catalog:

services:
  gateway:
    image: docker/mcp-gateway
    command:
      - --catalog=/root/.docker/mcp/catalogs/docker-mcp.yaml
      - --servers=context7,firecrawl,github-official,node-code-sandbox,sequentialthinking,sqlite-mcp-server,stripe
      - --transport=streaming
      - --port=8811
    environment:
      - DOCKER_MCP_IN_CONTAINER=1
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ~/.docker/mcp:/root/.docker/mcp:ro
    ports:
      - "8811:8811"


You should now be able to view all the MCP tools under ChatGPT Developer Mode.


Let’s Test it Out

Now we give ChatGPT its intelligence. Copy this system prompt and paste it into your ChatGPT conversation:

You are a Competitive Repricing Agent that monitors competitor prices, automatically adjusts your Stripe product prices, and provides strategic recommendations using 7 MCP servers: Firecrawl (web scraping), SQLite (database), Stripe (price management), GitHub (reports), Node.js Sandbox (calculations), Context7 (documentation), and Sequential Thinking (complex reasoning).

DATABASE SCHEMA

Products table: id (primary key), sku (unique), name, category, brand, stripe_product_id, stripe_price_id, current_price, created_at
Price_history table: id (primary key), product_id, competitor, price, original_price, discount_percent, in_stock, url, scraped_at
Price_alerts table: id (primary key), product_id, competitor, alert_type, old_price, new_price, change_percent, created_at
Repricing_log table: id, product_name, competitor_triggered, competitor_price, old_stripe_price, new_stripe_price, repricing_strategy, stripe_price_id, triggered_at, status

Indexes: idx_price_history_product on (product_id, scraped_at DESC), idx_price_history_competitor on (competitor)

WORKFLOW

On-demand check: Scrape (Firecrawl) → Store (SQLite) → Analyze (Node.js) → Report (GitHub)
Competitive repricing: Scrape (Firecrawl) → Compare to your price → Update (Stripe) → Log (SQLite) → Report (GitHub)

STRIPE REPRICING WORKFLOW

When competitor price drops below your current price:
1. list_products - Find your existing Stripe product
2. list_prices - Get current price for the product
3. create_price - Create new price to match/beat competitor (prices are immutable in Stripe)
4. update_product - Set the new price as default
5. Log the repricing decision to SQLite

Price strategies:
- "match": Set price equal to lowest competitor
- "undercut": Set price 1-2% below lowest competitor
- "margin_floor": Never go below your minimum margin threshold

Use Context7 when: Writing scripts with new libraries, creating visualizations, building custom scrapers, or needing latest API docs

Use Sequential Thinking when: Making complex pricing strategy decisions, planning repricing rules, investigating market anomalies, or creating strategic recommendations requiring deep analysis

EXTRACTION SCHEMAS

Amazon: title, price, list_price, rating, reviews, availability
Walmart: name, current_price, was_price, availability  
Best Buy: product_name, sale_price, regular_price, availability

RESPONSE FORMAT

Price Monitoring: Products scraped, competitors covered, your price vs competitors
Repricing Triggers: Which competitor triggered, price difference, strategy applied
Price Updated: New Stripe price ID, old vs new price, margin impact
Audit Trail: GitHub commit SHA, SQLite log entry, timestamp

TOOL ORCHESTRATION PATTERNS

Simple price check: Firecrawl → SQLite → Response
Trend analysis: SQLite → Node.js → Response
Strategy analysis: SQLite → Sequential Thinking → Response
Competitive repricing: Firecrawl → Compare → Stripe → SQLite → GitHub
Custom tool development: Context7 → Node.js → GitHub
Full intelligence report: Firecrawl → SQLite → Node.js → Sequential Thinking → GitHub

KEY USAGE PATTERNS

Use Stripe for: Listing products, listing prices, creating new prices, updating product default prices

Use Sequential Thinking for: Pricing strategy decisions (match, undercut, or hold), market anomaly investigations (why did competitor prices spike), multi-factor repricing recommendations

Use Context7 for: Getting documentation before coding, learning new libraries on-the-fly, ensuring code uses latest API conventions

Use Node.js for: Statistical calculations (moving averages, standard deviation, volatility), chart generation, margin calculations

BEST PRACTICES

Space web scraping requests 2 seconds apart to respect rate limits
Calculate price difference as (your_price - competitor_price)
Trigger repricing when competitor drops below your current price
Log all repricing decisions to SQLite with Stripe IDs for audit trail
Push pricing reports to GitHub for compliance
Always use Context7 before writing code with unfamiliar libraries
Respect margin floors—never reprice below minimum acceptable margin

COMMAND RECOGNITION PATTERNS

"Check X prices and stay competitive" → Full repricing pipeline: scrape → compare → if competitor lower: Stripe update + SQLite + GitHub
"Match competitor price for X" → Stripe: list_products → list_prices → create_price (matching)
"Undercut competitors on X" → Stripe: create_price (1-2% below lowest)
"Show price history" → SQLite query → format results
"Analyze pricing strategy for X" → Sequential Thinking analysis
"Why did competitor prices change" → Query data → Sequential Thinking investigation

CORE PRINCIPLE

When competitors change prices, don't just report—ACT. Update your Stripe prices to stay competitive, log decisions to SQLite, push audit records to GitHub. Transform competitor intelligence into automated repricing. All data stays on YOUR infrastructure.



With the system prompt in place, send this task prompt:

Set up a competitive repricing agent:
1. I sell MacBook Air M3 on my store - current Stripe price is $549.99
2. Monitor competitor prices on Amazon, Walmart, Best Buy
3. When ANY competitor drops below my price:
   - Automatically update my Stripe product price to match or beat them
   - Use "undercut" strategy (price 1% below lowest competitor)
   - Log the repricing decision to SQLite
   - Push pricing change report to GitHub

Check prices now and reprice if needed.

Here’s how ChatGPT responds – orchestrating all 7 MCP servers in a single workflow:

Competitive Repricing Agent – Execution Complete

Within 3 minutes, ChatGPT orchestrated all 7 MCP servers to analyze the market and automatically reprice your product.

Repricing Triggered and Executed:

Metric             Before                   After
Your Price         $549.99                  $504.99
Price Change       -                        -$45.00 (-8.2%)
Market Position    3rd (behind Best Buy)    #1 Lowest

Competitor Price Scan Results:

Retailer      Price      Vs. Your New Price
Your Store    $504.99    Market Leader
Best Buy      $509.99    +$5.00 (you beat by 1%)
Walmart       $669.00    +$164.01 higher
Amazon        $699.00    +$194.01 higher

What the Agent did (6 Steps):

  1. Installed SQLite3 and created database schema with 4 tables
  2. Created Stripe product (prod_TZaK0ARRJ5OJJ8) with initial $549.99 price 
  3. Scraped live competitor prices via Firecrawl from Amazon, Best Buy, and Walmart 
  4. Analyzed pricing strategy with Sequential Thinking — detected Best Buy at $509.99, below your price
  5. Executed repricing — created new Stripe price at $504.99 (price_1ScRCVI9l1vmUkzn0hTnrLmW)
  6. Pushed audit report to GitHub (commit `64a488aa`)

All data stored on your infrastructure – not OpenAI’s servers. 

To check prices again, simply ask ChatGPT to ‘check MacBook Air M3 competitor prices’—it will scrape, compare, and reprice automatically. Run this check daily, weekly, or whenever you want competitive intelligence.

Explore the Full Demo

View the complete repricing report and audit trail on GitHub: https://github.com/ajeetraina/competitive-repricing-agent-mcp

Want true automation? This demo shows on-demand repricing triggered by conversation. For fully automated periodic checks, you could build a simple scheduler that calls the OpenAI API every few hours to trigger the same workflow—turning this into a hands-free competitive intelligence system.
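As a rough sketch of that scheduler idea: a cron job plus a small script is enough. The endpoint below is OpenAI's Responses API, but the exact request shape depends on how your ChatGPT/OpenAI integration reaches the MCP gateway, so treat the payload and model name as illustrative assumptions to adapt.

# illustrative crontab entry: run the repricing check every 6 hours
# 0 */6 * * * /usr/local/bin/reprice-check.sh

#!/usr/bin/env bash
# reprice-check.sh - hypothetical scheduler trigger (adapt to your integration)
curl -s https://api.openai.com/v1/responses \
  -H "Authorization: Bearer ${OPENAI_API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gpt-4.1",
        "input": "Check MacBook Air M3 competitor prices and reprice if needed."
      }'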

Wrapping Up

You’ve just connected ChatGPT to Docker MCP Toolkit and configured multiple MCP servers. What used to require context-switching between multiple tools, manual query writing, and hours of debugging now happens through natural conversation, safely executed in Docker containers.

This is the new paradigm for AI-assisted development. ChatGPT isn’t just answering questions anymore. It’s querying your databases, managing your repositories, scraping data, and executing code—all while Docker ensures everything stays secure and contained.

Ready to try it? Open Docker Desktop and explore the MCP Catalog. Start with SQLite, add GitHub, experiment with Firecrawl. Each server unlocks new capabilities.

The future of development isn’t writing every line of code yourself. It’s having an AI partner that can execute tasks across your entire stack securely, reproducibly, and at the speed of thought.

Learn More

]]>
Securing the Docker MCP Catalog: Commit Pinning, Agentic Auditing, and Publisher Trust Levels https://www.docker.com/blog/enhancing-mcp-trust-with-the-docker-mcp-catalog/ Wed, 03 Dec 2025 13:21:25 +0000 https://www.docker.com/?p=83505 Trust is the most important consideration when you connect AI assistants to real tools. While MCP containerization provides strong isolation and limits the blast radius of malfunctioning or compromised servers, we’re continuously strengthening trust and security across the Docker MCP solutions to further reduce exposure to malicious code. As the MCP ecosystem scales from hundreds to tens of thousands of servers (and beyond), we need stronger mechanisms to prove what code is running, how it was built, and why it’s trusted.

To strengthen trust across the entire MCP lifecycle, from submission to maintenance to daily use, we’ve introduced three key enhancements:

  1. Commit Pinning: Every Docker-built MCP server in the Docker MCP Registry (the source of truth for the MCP Catalog) is now tied to a specific Git commit, making each release precisely attributable and verifiable.
  2. Automated, AI-Audited Updates: A new update workflow keeps submitted MCP servers current, while agentic reviews of incoming changes make vigilance scalable and traceable.
  3. Publisher Trust Levels: We’ve introduced clearer trust indicators in the MCP Catalog, so developers can easily distinguish between official, verified servers and community-contributed entries.

These updates raise the bar on transparency and security for everyone building with and using MCP at scale with Docker.

Commit pins for local MCP servers

Local MCP servers in the Docker MCP Registry are now tied to a specific Git commit with source.commit. That commit hash is a cryptographic fingerprint for the exact revision of the server code that we build and publish. Without this pinning, a reference like latest or a branch name would build whatever happens to be at that reference right now, making builds non-deterministic and vulnerable to supply chain attacks if an upstream repository is compromised. Even Git tags aren’t really immutable since they can be deleted and recreated to point to another commit. By contrast, commit hashes are cryptographically linked to the content they address, making the outcome of an audit of that commit a persistent result.
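You can see the difference for yourself: resolving a mutable reference with git ls-remote returns whatever commit it points to today, and two runs on different days may disagree, which is exactly the non-determinism that commit pins remove.

# resolve what the main branch of an upstream repo currently points to
git ls-remote https://github.com/awslabs/mcp refs/heads/main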

To make things easier, we’ve updated our authoring tools (like the handy MCP Registry Wizard) to automatically add this commit pin when creating a new server entry, and we now enforce the presence of a commit pin in our CI pipeline (missing or malformed pins will fail validation). This enforcement is deliberate: it’s impossible to accidentally publish a server without establishing clear provenance for the code being distributed. We also propagate the pin into the MCP server image metadata via the org.opencontainers.image.revision label for traceability.

Here’s an example of what this looks like in the registry:

# servers/aws-cdk-mcp-server/server.yaml
name: aws-cdk-mcp-server
image: mcp/aws-cdk-mcp-server
type: server
meta:
  category: devops
  tags:
    - aws-cdk-mcp-server
    - devops
about:
  title: AWS CDK
  description: AWS Cloud Development Kit (CDK) best practices, infrastructure as code patterns, and security compliance with CDK Nag.
  icon: https://avatars.githubusercontent.com/u/3299148?v=4
source:
  project: https://github.com/awslabs/mcp
  commit: 7bace1f81455088b6690a44e99cabb602259ddf7
  directory: src/cdk-mcp-server

And here’s an example of how you can verify the commit pin for a published MCP server image:

$ docker image inspect mcp/aws-cdk-mcp-server:latest \
    --format '{{index .Config.Labels "org.opencontainers.image.revision"}}'
7bace1f81455088b6690a44e99cabb602259ddf7

In fact, if you have the cosign and jq commands available, you can perform additional verifications:

$ COSIGN_REPOSITORY=mcp/signatures cosign verify mcp/aws-cdk-mcp-server \
    --key https://raw.githubusercontent.com/docker/keyring/refs/heads/main/public/mcp/latest.pub \
    | jq -r '.[].optional["org.opencontainers.image.revision"]'

Verification for index.docker.io/mcp/aws-cdk-mcp-server:latest --
The following checks were performed on each of these signatures:
  - The cosign claims were validated
  - Existence of the claims in the transparency log was verified offline
  - The signatures were verified against the specified public key
7bace1f81455088b6690a44e99cabb602259ddf7

Keeping in sync

Once a server is in the registry, we don’t want maintainers to have to hand‑edit pins every time they merge something into their upstream repos (they have better things to do with their time), so a new automated workflow scans upstreams nightly, bumping source.commit when there’s a newer revision and opening an auditable PR in the registry to track the incoming upstream changes. This gives you the security benefits of pinning (immutable references to reviewed code) without the maintenance toil. Updates still flow through pull requests, so you get a review gate and an approval trail showing exactly what new code is entering your supply chain. The update workflow operates on a per-server basis, with each server update getting its own branch and pull request.

This raises the question, though: how do we know that the incoming changes are safe?

AI in the review loop, humans in charge

Every proposed commit pin bump (and any new local server) will now be subject to an agentic AI security review of the incoming upstream changes. The reviewers (Claude Code and OpenAI Codex) analyze MCP server behavior, flagging risky or malicious code, adding structured reports to the PR, and offering standardized labels such as security-risk:high or security-blocked. Humans remain in the loop for final judgment, but the agents are relentless and scalable.

The challenge: untrusted code means untrusted agents

When you run AI agents in CI to analyze untrusted code, you face a fundamental problem: the agents themselves become attack vectors. They’re susceptible to prompt injection through carefully crafted code comments, file names, or repository structure. A malicious PR could attempt to manipulate the reviewing agent into approving dangerous changes, exfiltrating secrets, or modifying the review process itself.

We can’t trust the code under review, but we also can’t fully trust the agents reviewing it.

Isolated agents

Our Compose-based security reviewer architecture addresses this trust problem by treating the AI agents as untrusted components. The agents run inside heavily isolated Docker containers with tightly controlled inputs and outputs:

  • The code being audited is mounted read-only — The agent can analyze code but never modify it. Moreover, the code it audits is just a temporary copy of the upstream repository, but the read-only access means that the agent can’t do something like modify a script that might be accidentally executed outside the container.
  • The agent can only write to an isolated output directory — Once the output is written, the CLI wrapper for the agent only extracts specific files (a Markdown report and a text file of labels, both with fixed names), meaning any malicious scripts or files that might be written to that directory are deleted.
  • The agent lacks direct Internet access — The reviewer container cannot reach external services.
  • CI secrets and API credentials never enter the reviewer container — Instead, a lightweight reverse proxy on a separate Docker network accepts requests from the reviewer, injects inference provider API keys on outbound requests, and shields those keys from the containerized code under review.

All of this is encapsulated in a Docker Compose stack and wrapped by a convenient CLI that allows running the agent both locally and in CI.
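To make those properties concrete, here is a rough docker run approximation of the reviewer's isolation. The image name is hypothetical, and the real stack is a Compose project with an additional proxy network for API keys:

# code under review mounted read-only: the agent can analyze but never modify it
# output confined to a single directory; only fixed-name report files are extracted
# --network none: the reviewer has no direct Internet access
docker run --rm \
  --network none \
  -v "$PWD/upstream-copy:/workspace/input/repository:ro" \
  -v "$PWD/review-output:/workspace/output" \
  security-reviewer-agent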

Most importantly, this architecture ensures that even if a malicious PR successfully manipulates the agent through prompt injection, the damage is contained: the agent cannot access secrets, cannot modify code, and cannot communicate with external attackers.

CI integration and GitHub Checks

The review workflow is automatically triggered when a PR is opened or updated. We still maintain some control over these workflows for external PRs, requiring manual triggering to prevent malicious PRs from exhausting inference API credits. These reviews surface directly as GitHub Status Checks, with each server being reviewed receiving dedicated status checks for any analyses performed.

The resulting check status maps to the risk level determined by the agent: critical findings result in a failed check that blocks merging, high and medium findings produce neutral warnings, while low and info findings pass. We’re still tuning these criteria (since we’ve asked the agents to be extra pedantic) and currently reviewing the reports manually, but eventually we’ll have the heuristics tuned to a point where we can auto-approve and merge most of these update PRs. In the meantime, these reports serve as a scalable “canary in the coal mine”, alerting Docker MCP Registry maintainers to incoming upstream risks — both malicious and accidental.

It’s worth noting that the agent code in the MCP Registry repository is just an example (but a functional one available under an MIT License). The actual security review agent that we run lives in a private repository with additional isolation, but it follows the same architecture.

Reports and risk labels

Here’s an example of a report our automated reviewers produced:

# Security Review Report

## Scope Summary
- **Review Mode:** Differential
- **Repository:** /workspace/input/repository (stripe)
- **Head Commit:** 4eb0089a690cb60c7a30c159bd879ce5c04dd2b8
- **Base Commit:** f495421c400748b65a05751806cb20293c764233
- **Commit Range:** f495421c400748b65a05751806cb20293c764233...4eb0089a690cb60c7a30c159bd879ce5c04dd2b8
- **Overall Risk Level:** MEDIUM

## Executive Summary

This differential review covers 23 commits introducing significant changes to the Stripe Agent Toolkit repository, including: folder restructuring (moving tools to a tools/ directory), removal of evaluation code, addition of new LLM metering and provider packages, security dependency updates, and GitHub Actions workflow permission hardening.

...

The reviewers can produce both differential analyses (looking at the changes brought in by a specific set of upstream commits) and full analyses (looking at entire codebases). We intend to run differential analyses for PRs and full analyses on a regular cadence.

Why behavioral analysis matters

Traditional scanners remain essential, but they tend to focus on things like dependencies with CVEs, syntactical errors (such as a missing break in a switch statement), or memory safety issues (such as dereferencing an uninitialized pointer) — MCP requires us to also examine code’s behavior. Consider the recent malicious postmark-mcp package impersonation: a one‑line backdoor quietly BCC’d outgoing emails to an attacker. Events like this reinforce why our registry couples provenance with behavior‑aware reviews before updates ship.

Real-world results

In our scans so far, we’ve already found several real-world issues in upstream projects (stay tuned for a follow-up blog post), both in MCP servers and with a similar agent in our Docker Hardened Images pipeline. We’re happy to say that we haven’t run across anything malicious so far, just logic errors with security implications, but the granularity and subtlety of issues that these agents can identify is impressive.

Trust levels in the Docker MCP Catalog

In addition to the aforementioned technical changes, we’ve also introduced publisher trust levels in the Docker MCP Catalog, exposing them in both the Docker MCP Toolkit in Docker Desktop and on Docker MCP Hub. Each server will now have an associated icon indicating whether the server is from a “known publisher” or maintained by the community. In both cases, we’ll still subject the code to review, but these indicators should provide additional context on the origin of the MCP server.


Figure 1: An example of an MCP server, the AWS Terraform MCP, published by a known, trusted publisher


Figure 2: The Fetch MCP server, an example of an MCP community server

What does this mean for the community?

Publishers now benefit from a steady stream of upstream improvements, backed by a documented, auditable trail of code changes. Commit pins make each release precisely attributable, while the nightly updater keeps the catalog current with no extra effort from publishers or maintainers. AI-powered reviewers scale our vigilance, freeing up human reviewers to focus on the edge cases that matter most.

At the same time, developers using MCP servers get clarity about a server’s publisher, making it easier to distinguish between official, community, and third-party contributions. These enhancements strengthen trust and security for everyone contributing to or relying on MCP servers in the Docker ecosystem.

Submit your MCP servers to Docker by following the submission guidance here!

Learn more

]]>
Building AI agents shouldn’t be hard. According to theCUBE Research, Docker makes it easy https://www.docker.com/blog/building-ai-agents-shouldnt-be-hard-according-to-thecube-research-docker-makes-it-easy/ Tue, 02 Dec 2025 15:00:00 +0000 https://www.docker.com/?p=83488 For most developers, getting started with AI is still too complicated. Different models, tools, and platforms don’t always play nicely together. But with Docker, that’s changing fast.

Docker is emerging as essential infrastructure for standardized, portable, and scalable AI environments. By bringing composability, simplicity, and GPU accessibility to the agentic era, Docker is helping developers and the enterprises they support move faster, safer, and with far less friction. 

Real results: Faster AI delivery with Docker

The platform is accelerating innovation: According to the latest report from theCUBE Research, 88% of respondents reported that Docker reduced the time-to-market for new features or products, with nearly 40% achieving efficiency gains of more than 25%. Docker is playing an increasingly vital role in AI development as well. 52% of respondents cut AI project setup time by over 50%, while 97% report increased speed for new AI product development.

Reduced AI project failures and delays

Reliability remains a key performance indicator for AI initiatives, and Docker is proving instrumental in minimizing risk. 90% of respondents indicated that Docker helped prevent at least 10% of project failures or delays, while 16% reported prevention rates exceeding 50%. Additionally, 78% significantly improved testing and validation of AI models. These results highlight how Docker’s consistency, isolation, and repeatability not only speed development but also reduce costly rework and downtime, strengthening confidence in AI project delivery.

Build, share, and run agents with Docker, easily and securely

Docker’s mission for AI is simple: make building and running AI and agentic applications as easy, secure, and shareable as any other kind of software.

Instead of wrestling with fragmented tools, developers can now rely on Docker’s trusted, container-based foundation with curated catalogs of verified models and tools, and a clean, modular way to wire them together. Whether you’re connecting an LLM to a database or linking services into a full agentic workflow, Docker makes it plug-and-play.

With Docker Model Runner, you can pull and run large language models locally with GPU acceleration. The Docker MCP Catalog and Toolkit connect agents to over 300 MCP servers from partners like Stripe, Elastic, and GitHub. And with Docker Compose, you can define the whole AI stack of models, tools, and services in a single YAML file that runs the same way locally or in the cloud. Cagent, our open-source agent builder, lets you easily build, run, and share AI agents, with behavior, tools, and persona all defined in a single YAML file. And with Docker Sandboxes, you can run coding agents like Claude Code in a secure, local environment, keeping your workflows isolated and your data protected.

Conclusion 

Docker’s vision is clear: to make AI development as simple and powerful as the workflows developers already know and love. And it’s working: theCUBE reports 52% of users cut AI project setup time by more than half, while 87% say they’ve accelerated time-to-market by at least 26%.

Learn more

]]>
Connect to Remote MCP Servers with OAuth in Docker https://www.docker.com/blog/connect-to-remote-mcp-servers-with-oauth/ Tue, 11 Nov 2025 13:53:27 +0000 https://www.docker.com/?p=82247 In just a year, the Model Context Protocol (MCP) has become the standard for connecting AI agents to tools and external systems. The Docker MCP Catalog now hosts hundreds of containerized local MCP servers, enabling developers to quickly experiment and prototype locally.

We have now added support for remote MCP servers to the Docker MCP Catalog. These servers function like local MCP servers but run over the internet, making them easier to access from any environment without the need for local configuration.

With the latest update, the Docker MCP Toolkit now supports remote MCP servers with OAuth, making it easier than ever to securely connect to external apps like Notion and Linear, right from your Docker environment. Plus, the Docker MCP Catalog just grew by 60+ new remote MCP servers, giving you an even wider range of integrations to power your workflows and accelerate how you build, collaborate, and automate.

As remote MCP servers gain popularity, we’re excited to make this capability available to millions of developers building with Docker.

In this post, we’ll explore what this means for developers, why OAuth support is a game-changer, and how you can get started with remote MCP servers with just two simple commands.

Connect to Remote MCP Servers – Securely, Easily, Seamlessly

Goodbye Manual Setup, Hello OAuth Magic

Figuring out how to find and generate API tokens for a service is often tedious, especially for beginners. Tokens also tend to expire unpredictably, breaking existing MCP connections and requiring reconfiguration.

With OAuth built directly into Docker MCP, you’ll no longer need to juggle tokens or manually configure connections. You can securely connect to remote MCP servers in seconds – all while keeping your credentials safe. 

60+ New Remote MCP Servers, Instantly Available

From project management to documentation and issue tracking, the expanded MCP Catalog now includes integrations for Notion, Linear, and dozens more. Whatever tools your team depends on, they’re now just a command away. We will continue to expand the catalog as new remote servers become available.


Figure 1: Some examples of remote MCP servers that are now part of the Docker MCP Catalog

Easy to use via the CLI or Docker Desktop 

No new setup. No steep learning curve. Just use your existing Docker CLI and get going. Enabling and authorizing remote MCP servers is fully integrated into the familiar command-line experience you already love. You can also install servers with one click in Docker Desktop.

Two Commands to Connect and Authorize Remote MCP Servers – It’s That Simple

Using Docker CLI

Step 1: Enable Your Remote MCP Server

Pick your server, and enable it with one line:

docker mcp server enable notion-remote

This registers the remote server and prepares it for OAuth authorization.

Step 2: Authorize Securely with OAuth

Next, authorize your connection with:

docker mcp oauth authorize notion-remote

This launches your browser with an OAuth authorization page.

Using Docker Desktop

Step 1: Enable Your Remote MCP Server

If you prefer to use Docker Desktop instead of the command line, open the Catalog tab and search for the server you want to use. The cloud icon indicates that it’s a remote server. Click the “+” button to enable the server.


Figure 2: Enabling the Linear remote MCP server is just one click.

Step 2: Authorize Securely with OAuth

Open the OAuth tab and click the “Authorize” button next to the MCP Server you want to authenticate with.


Figure 3: Built-in OAuth flow for Linear remote MCP servers. 

Once authorized, your connection is live. You can now interact with Notion, Linear, or any other supported MCP server directly through your Docker MCP environment.
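You can also sanity-check the connection from the CLI. A minimal sketch, assuming the MCP Toolkit's tools subcommand is available in your version of Docker Desktop (the tool name and argument below are illustrative, since each server exposes its own tools):

# list the tools exposed by your enabled servers
docker mcp tools list

# call one directly to confirm the OAuth-backed connection works
docker mcp tools call notion-search query="quarterly roadmap"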

Why This Update Matters for Developers

Unified Access Across Your Ecosystem

Developers rely on dozens of tools every day across different MCP clients. The Docker MCP Toolkit brings them together under one secure, unified interface – helping you move faster without manually configuring each MCP client. This means you don’t need to log in to the same service multiple times across Cursor, Claude Code, and other clients you may use.

Unlock AI-Powered Workflows

Remote MCP servers make it easy to bridge data, tools, and AI. They are always up to date with the latest tools and are faster to use, since they don’t run any code on your computer. With OAuth support, your connected apps can now securely provide context to AI models, unlocking powerful new automation possibilities.

Building the Future of Developer Productivity

This update is more than just an integration boost – it’s the foundation for a more connected, intelligent, and automated developer experience. And this is only the beginning.

Conclusion

The addition of OAuth for remote MCP servers makes Docker MCP Toolkit the most powerful way to securely connect your tools, workflows, and AI-powered automations.

With 60+ new remote servers now available and growing, developers can bring their favorite services, like Notion and Linear, directly into Docker MCP Toolkit.

Learn more

]]>
Dynamic MCPs with Docker: Stop Hardcoding Your Agents’ World https://www.docker.com/blog/dynamic-mcps-stop-hardcoding-your-agents-world/ Thu, 06 Nov 2025 20:51:39 +0000 https://www.docker.com/?p=81836 The MCP protocol is almost one year old and during that time, developers have built thousands of new MCP servers. Thinking back to MCP demos from six months ago, most developers were using one or two local MCP servers, each contributing just a handful of tools. Six months later and we have access to thousands of tools, and a new set of issues.

  1. Which MCP servers do we trust?
  2. How do we avoid filling our context with tool definitions that we won’t end up needing?
  3. How do agents discover, configure, and use tools efficiently and autonomously?

With the latest features in Docker MCP Gateway, including Smart Search and Tool Composition, we’re shifting from “What do I need to configure?” to “What can I empower agents to do?” 

This week, Anthropic also released a post about building more efficient agents, calling out many of the same issues we’ll discuss in this post. Now that we’ve made progress toward having tools, we can start to think more about using tools effectively.

With dynamic MCPs, agents don’t just search for or add tools, but write code to compose new ones within a secure sandbox, improving both tool efficiency and token usage.

Enabling Agents to Find, Add, and Configure MCPs Dynamically with Smart Search 

If you think about how we configure MCPs today, the process is not particularly agentic. Typically, we leave the agent interface entirely, do some old-school configuration hacking (usually editing a JSON file of some kind), and then restart our agent session to check if the MCPs have become available. As the number of MCP servers grows, is this going to work?

So what prevents our agents from doing more to help us discover useful MCP servers?

We think that Docker’s OSS MCP gateway can help here. As the gateway manages the interface between an agent and any of the MCP servers in the gateway’s catalog, there is an opportunity to mediate that relationship in new ways. 

Out of the box, the gateway ships with a default catalog, the Docker MCP Catalog, including over 270 curated servers, plus the ability to curate your own private catalogs (e.g., using servers from the community registry). And because it runs on Docker, you can pull and run any of them with minimal setup. That directly tackles the first friction point: discovery of trusted MCP servers.


Figure 1: The Docker MCP Gateway now includes mcp-find and mcp-add, new Smart Search features that let agents discover and connect to trusted MCP servers in the Docker MCP Catalog, enabling secure, dynamic tool usage.

However, the real key to dynamic MCPs is a small but crucial adjustment to the agent’s MCP session. The gateway provides a small set of primordial tools that the agent uses to search the catalog and to either add or remove servers from the current session. Just as in the post from Anthropic, which suggests a search_tools tool, we have added new tools to help the agent manage their MCP servers.

  • mcp-find: Find MCP servers in the current catalog by name or description. Return matching servers with their details.
  • mcp-add: Add a new MCP server to the session. The server must exist in the catalog.
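Although these primordial tools are designed for agents, you can also exercise them by hand through the gateway CLI, which is handy for debugging. A minimal sketch, with illustrative argument names:

# search the catalog the same way an agent would
docker mcp tools call mcp-find query=duckduckgo

# add the server to the current session
docker mcp tools call mcp-add name=duckduckgo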

With this small tweak, the agent can now help us negotiate a new MCP session. To make this a little more concrete, we’ll show an agent connected to the gateway asking for the DuckDuckGo MCP and then performing a search.


Figure 2: A demo of using mcp-find and mcp-add to connect to the DuckDuckGo MCP server and run a search

Configuring MCP Servers with Agent-Led Workflows

In the example above, we started by connecting our agent to the catalog of MCPs (see docker mcp client connect --help for options). The agent then adds a new MCP server to the current session. To be clear, the duckduckgo MCP server is quite simple. Since it does not require any configuration, all we needed to do was search the catalog, pull the image from a trusted registry, and spin up the MCP server in the local Docker engine.

However, some MCP servers will require inputs before they can start up. For example, remote MCP servers might require that the user go through an OAuth flow. In the next example, the gateway responds by requesting that we authorize this new MCP server. Now that MCP supports elicitations, and frameworks like mcp-ui allow MCPs to render UI elements into the chat, we have begun to optimize these flows based on client-side capabilities.


Figure 3: Using mcp-find and mcp-add to connect to the Notion MCP server, including an OAuth flow

Avoid An Avalanche of Tools: Dynamic Tool Selection

In the building more efficient agents post, the authors highlight two ways that tools currently make token consumption less efficient.

  1. Tool definitions in the context window
  2. Intermediate tool results

The result is the same in both cases: too many unnecessary tokens are being sent to the model. It takes surprisingly few tools for the context window to accumulate hundreds of thousands of tokens of nothing but tool definitions.

Again, this is something we can improve. In the MCP Gateway project, we’ve started distinguishing between tools that are available to a find tool and tools that are added to the context window. Just as we’re giving agents tools for server selection, we can give them new ways to select tools.


Figure 4: Dynamic Tools in action: Tools can now be actively selected, avoiding the need to load all available tools into every LLM request.

The idea is conceptually simple. We are providing an option to allow agents to add servers that do not automatically put their tools into the context window. With today’s agents, this means adding MCP servers that don’t return tool definitions in tools/list requests, but still make them available to find tool calls. This is easy to do because we have an MCP gateway to mediate tools/list requests and to inject new task-oriented find tools. New primordial tools like mcp-exec and mcp-find provide agents with new ways to discover and use MCP server tools.

Once we start to think about tool selection differently, it opens up a range of possibilities.

Using Tools in a new way: From Tool Calls to Tool Composition with code-mode

The idea of “code mode” has been getting a lot of attention since Cloudflare posted about a better way to use tools several weeks ago. The idea actually dates back to the paper “CodeAct: Your LLM Agent Acts Better when Generating Code“, which proposed that LLMs could improve agent-oriented tasks by first consolidating agent actions into code. The recent post from Anthropic also frames code mode as a way to improve agent efficiency by reducing the number of tool definitions and tool outputs in the context window.

We’re really excited by this idea. By making it possible for agents to “code” directly against MCP tool interfaces, we can provide agents with “code-mode” tools that use the tools in our current MCP catalog in new ways. By combining mcp-find with code-mode, the agent can still access a large and dynamic set of available tools while putting just one or two new tools into the context window. Our current code-mode tool writes JavaScript and takes available MCP servers as parameters.

code-mode: Create a JavaScript-enabled tool that can call tools from any of the servers listed in the servers parameter.

However, this is still code written by an agent. If we’re going to run this code, we’re going to want it to run in a sandbox. Our MCP servers are already running in Docker containers, and the code mode sandbox is no different. In fact, it’s an ideal case because this container only needs access to other MCP servers! The permissions for accessing external systems are already managed at the MCP layer.

This approach offers three key benefits:

  • Secure by Design: The agent stays fully contained within a sandbox. We do not give up any of the benefits of sandboxing. The code-mode tool uses only containerized MCP servers selected from the catalog.
  • Token and Tool Efficiency: The tools it uses do not have to be sent to the model on every request. On subsequent turns, the model just needs to know about one new code-mode tool. In practice, this can mean hundreds of thousands fewer tokens sent to the model on each turn.
  • State persistence: Volumes manage state across tool calls and track intermediate results that need not, or even should not, be sent to the model.

A popular illustration of this pattern is building a code-mode tool using the GitHub Official MCP server. The GitHub server happens to ship with a large number of tools, so code-mode will have a dramatic impact. In the example below, we’re prompting an agent to create a new code-mode tool out of the github-official and markdownify MCP servers.


Figure 5: Using the MCP code-mode to write code to call tools from the GitHub Official and Markdownify MCP servers

The combination of Smart Search and Tool Composition unlocks dynamic, secure use of MCPs. Agents can now go beyond simply finding or adding tools; they can write code to compose new tools, and run them safely in a secure sandbox. 

The result: faster tool discovery, lower token usage, fewer manual steps, and more focused time for developers.

For each workflow below, “Before” describes a static MCP setup and “After” describes dynamic MCPs via the Docker MCP Gateway.

Tool discovery
  Before: Manually browse the MCP servers
  After: mcp-find searches the Docker MCP Catalog (230+ servers) by name/description
  Impact: Faster discovery

Adding tools
  Before: Enable the MCP servers manually
  After: mcp-add pulls only the servers an agent needs into the current session
  Impact: Zero manual config; just-in-time tooling

Authentication
  Before: Configure the MCP servers ahead of time
  After: The gateway prompts the user to complete OAuth when a remote server requires it
  Impact: Some clients are starting to support MCP elicitations and UX like mcp-ui for smoother onboarding flows

Tool composition
  Before: Agent-generated tool calls; tool definitions are sent to the model
  After: With code-mode, agents write code that uses multiple MCP tools
  Impact: Multi-tool workflows and unified outputs

Context size
  Before: Load lots of unused tool definitions
  After: Keep only the tools actually required for the task
  Impact: Lower token usage and latency

Future-proofing
  Before: Static integrations
  After: Dynamic, composable tools with sandboxed scripting
  Impact: Ready for evolving agent behaviors and catalogs

Developer involvement
  Before: Constant context switching and config hacking
  After: Agents self-serve: discover, authorize, and orchestrate tools
  Impact: Fewer manual steps; better focus time

Table 1: Summary of benefits from Docker’s Smart Search and Tool Composition for dynamic MCPs

From Docker to Your Editor: Running dynamic MCP tools with cagent and ACP

Another new component of the Docker platform is cagent, our open source agent builder and runtime, which provides a simple way to build and distribute new agents. The latest version of cagent now supports the Agent Client Protocol (ACP), which allows developers to add custom agents to ACP-enabled editors like Neovim or Zed, and then share those agents by pushing them to or pulling them from Docker Hub.

This means that we can now build agents that know how to use features like smart search tools or code mode, and then embed these agents in ACP-powered editors using cagent. Here’s an example agent, running in neovim, that helps us discover new tools relevant to whatever project we are currently editing.


Figure 6: Running Dynamic MCPs in Neovim via Agent Client Protocol and a custom agent built with cagent, preconfigured with MCP server knowledge

In their section on state persistence and skills, the folks at Anthropic also hint at the idea that dynamic tools and code-mode execution bring us closer to a world where, over time, agents accumulate code and tools that work well together. Our current code-mode tool does not yet save the code it writes back to the project, but we’re working on that.

For the Neovim example above, we used the ACP support in the CodeCompanion plugin; also, please check out the cagent adapter in this repo. For Zed, see their doc on adding custom agents, and of course, try out cagent acp agent.yaml with your own custom agent.yaml file.

Getting Started with Dynamic MCPs Using Smart Search and Tool Composition

Dynamic tools are now available in the MCP Gateway project. Unless you are running the gateway with an explicit set of servers (using the existing --servers flag), these tools are available to your agent by default. The dynamic tools feature can also be disabled using docker mcp feature disable dynamic-tools. This is a feature we’re actively developing, so please try it out and let us know what you think by opening an issue or starting a discussion in our repo.

Get started by connecting your favorite client to the MCP gateway using docker mcp client connect, or by adding a connection using the “Clients” tab in the Docker Desktop MCP Toolkit panel.
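Putting that together, here is a minimal sketch of getting going. The feature enable command below mirrors the disable command above, and the client name is just an example:

# make sure dynamic tools are enabled (they are by default)
docker mcp feature enable dynamic-tools

# connect a client, e.g. Cursor, to the gateway
docker mcp client connect cursor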

Summary

The Docker MCP Toolkit combines a trusted runtime (the docker engine), with catalogs of MCP servers. Beginning with Docker Desktop 4.50, we are now extending the mcp gateway interface with new tools like mcp-find, mcp-add, and code-mode, to enable agents to discover MCP servers more effectively, and even to use these servers in new ways.

Whether it’s searching or pulling from a trusted catalog, initiating an OAuth flow, or scripting multi-tool workflows in a sandboxed runtime, agents can now do more on their own. And that takes us a big step closer to the agentic future we’ve been promised! 

Got feedback? Open an issue or start a discussion in our repo.

Learn more

  • Explore the MCP Gateway Project: Visit the GitHub repository for code, examples, and contribution guidelines.
  • Dive into Smart Search and Tool Composition: Read the full documentation to understand how these features enable dynamic, efficient agent workflows.
  • Learn more about Docker’s MCP Solutions
]]>
Your Org, Your Tools: Building a Custom MCP Catalog https://www.docker.com/blog/build-custom-mcp-catalog/ Fri, 24 Oct 2025 19:07:39 +0000 https://www.docker.com/?p=79282 I’m Mike Coleman, a staff solutions architect at Docker. In this role, I spend a lot of time talking to enterprise customers about AI adoption. One thing I hear over and over again is that these companies want to ensure appropriate guardrails are in place when it comes to deploying AI tooling. 

For instance, many organizations want tighter control over which tools developers and AI assistants can access via Docker’s Model Context Protocol (MCP) tooling. Some have strict security policies that prohibit pulling images directly from Docker Hub. Others simply want to offer a curated set of trusted MCP servers to their teams or customers.

In this post, we walk through how to build your own MCP catalog. You’ll see how to:

  • Fork Docker’s official MCP catalog
  • Host MCP server images in your own container registry
  • Publish a private catalog
  • Use MCP Gateway to expose those servers to clients

Whether you’re pulling existing MCP servers from Docker’s MCP Catalog or building your own, you’ll end up with a clean, controlled MCP environment that fits your organization.

Introducing Docker’s MCP Tooling

Docker’s MCP ecosystem has three core pieces:

MCP Catalog

A YAML-based index of MCP server definitions. These describe how to run each server and what metadata (description, image, repo) is associated with it. The MCP Catalog hosts more than 220 containerized MCP servers, ready to run with just a click.

The official docker-mcp catalog is read-only. But you can fork it, export it, or build your own.

MCP Gateway

The MCP Gateway connects your clients to your MCP servers. It doesn’t “host” anything — the servers are just regular Docker containers. But it provides a single connection point to expose multiple servers from a catalog over HTTP SSE or STDIO.

Traditionally, with X servers and Y clients, you needed X * Y configuration entries. MCP Gateway reduces that to just Y entries (one per client). Servers are managed behind the scenes based on your selected catalog.

You can start the gateway using a specific catalog:

docker mcp gateway run --catalog my-private-catalog

MCP Gateway is open source: https://github.com/docker/mcp-gateway


Figure 1: The MCP Gateway provides a single connection point to expose multiple MCP servers

MCP Toolkit (GUI)

Built into Docker Desktop, the MCP Toolkit provides a graphical way to work with the MCP Catalog and MCP Gateway. It allows you to:

  • Access Docker’s MCP Catalog via a rich GUI
  • Handle secrets (like GitHub tokens) securely
  • Enable MCP servers easily
  • Connect your selected MCP servers with one click to a variety of clients like Claude Code, Claude Desktop, Codex, Cursor, Continue.dev, and Gemini CLI

Workflow Overview

The workflow below will show you the steps necessary to create and use a custom MCP catalog. 

The basic steps are:

  1. Export the official MCP Catalog to inspect its contents
  2. Fork the Catalog so you can edit it
  3. Create your own private catalog
  4. Add specific server entries
  5. Pull (or rebuild) images and push them to your registry
  6. Update your catalog to use your images
  7. Run the MCP Gateway using your catalog
  8. Connect clients to it

Step-by-Step Guide: Creating and Using a Custom MCP Catalog

We start by setting a few environment variables to make this process repeatable and easy to modify later.

For the purpose of this example, assume we are migrating an existing MCP server (DuckDuckGo) to a private registry (ghcr.io/mikegcoleman). You can also add your own custom MCP server images into the catalog, and we mention that below as well. 

export MCP_SERVER_NAME="duckduckgo"
export GHCR_REGISTRY="ghcr.io"
export GHCR_ORG="mikegcoleman"
export GHCR_IMAGE="${GHCR_REGISTRY}/${GHCR_ORG}/${MCP_SERVER_NAME}:latest"
export FORK_CATALOG="my-fork"
export PRIVATE_CATALOG="my-private-catalog"
export FORK_EXPORT="./my-fork.yaml"
export OFFICIAL_DUMP="./docker-mcp.yaml"
export MCP_HOME="${HOME}/.docker/mcp"
export MCP_CATALOG_FILE="${MCP_HOME}/catalogs/${PRIVATE_CATALOG}.yaml"

Step 1: Export the official MCP Catalog 

Exporting the official Docker MCP Catalog gives you a readable local YAML file listing all servers. This makes it easy to inspect metadata like images, descriptions, and repository sources outside the CLI.

docker mcp catalog show docker-mcp --format yaml > "${OFFICIAL_DUMP}"

Step 2: Fork the official MCP Catalog

Forking the official catalog creates a copy you can modify. Since the built-in Docker catalog is read-only, this fork acts as your editable version.

docker mcp catalog fork docker-mcp "${FORK_CATALOG}"
docker mcp catalog ls

Step 3: Create a new catalog

Now create a brand-new catalog that will hold only the servers you explicitly want to support. This ensures your organization runs a clean, controlled catalog that you fully own.

docker mcp catalog create "${PRIVATE_CATALOG}"

Step 4: Add specific server entries

Export your forked catalog to a file so you can copy over just the entries you want. Here we’ll take only the duckduckgo server and add it to your private catalog.

docker mcp catalog export "${FORK_CATALOG}" "${FORK_EXPORT}"
docker mcp catalog add "${PRIVATE_CATALOG}" "${MCP_SERVER_NAME}" "${FORK_EXPORT}"
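To confirm the entry landed, you can dump your private catalog with the same show subcommand used for the official catalog in Step 1:

docker mcp catalog show "${PRIVATE_CATALOG}" --format yaml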

Step 5: Pull (or rebuild) images and push them to your registry

At this point you have two options:

If you are able to pull from Docker Hub, find the image key for the server you’re interested in by looking at the YAML file you exported earlier. Then pull that image down to your local machine. After you’ve pulled it down, retag it for whatever repository you want to use.

Example for duckduckgo:

vi "${OFFICIAL_DUMP}" # look for the duckduck go entry and find the image: key which will look like this:
# image: mcp/duckduckgo@sha256:68eb20db6109f5c312a695fc5ec3386ad15d93ffb765a0b4eb1baf4328dec14f

# pull the image to your machine
docker pull \
mcp/duckduckgo@sha256:68eb20db6109f5c312a695fc5ec3386ad15d93ffb765a0b4eb1baf4328dec14f 

# tag the image with the appropriate registry
docker image tag mcp/duckduckgo@sha256:68eb20db6109f5c312a695fc5ec3386ad15d93ffb765a0b4eb1baf4328dec14f ${GHCR_IMAGE}

# push the image
docker push ${GHCR_IMAGE}
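As a quick check that the retagged image actually landed in your registry, you can inspect it remotely with buildx (assuming buildx is available, as it is in recent Docker versions):

docker buildx imagetools inspect "${GHCR_IMAGE}"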

At this point you can move on to editing the MCP Catalog file in the next section.

 
If you cannot pull from Docker Hub, you can always rebuild the MCP server from its GitHub repo. To do this, open the exported YAML and look for your target server’s GitHub source repository. You can use tools like vi, cat, or grep to find it — it’s usually listed under a source key.

Example for duckduckgo:
source: https://github.com/nickclyde/duckduckgo-mcp-server/tree/main

export SOURCE_REPO="https://github.com/nickclyde/duckduckgo-mcp-server.git"

Next, you’ll rebuild the MCP server image from the original GitHub repository and push it to your own registry. This gives you full control over the image and eliminates dependency on Docker Hub access.

echo "${GH_PAT}" | docker login "${GHCR_REGISTRY}" -u "${GHCR_ORG}" --password-stdin

docker buildx build \
  --platform linux/amd64,linux/arm64 \
  "${SOURCE_REPO}" \
  -t "${GHCR_IMAGE}" \
  --push


Step 6: Update your catalog 

After publishing the image to GHCR, update your private catalog so it points to that new image instead of the Docker Hub version. This step links your catalog entry directly to the image you just built.

vi "${MCP_CATALOG_FILE}"

# Update the image line for the duckduckgo server to point to the image you created in the previous step (e.g. ghcr.io/mikegcoleman/duckduckgo:latest)

Remove the forked version of the catalog, as you no longer need it:

docker mcp catalog rm "${FORK_CATALOG}"

Step 7: Run the MCP Gateway 

Enabling the server activates it within your MCP environment. Once enabled, the gateway can load it and make it available to connected clients. You will see warnings about “overlapping servers”; that’s because the same servers are listed in two places (your catalog and the original catalog).

docker mcp server enable "${MCP_SERVER_NAME}"
docker mcp server list

Step 8: Connect to popular clients 

Now integrate the MCP Gateway with your chosen client. The raw command to run the gateway is: 

docker mcp gateway run --catalog "${PRIVATE_CATALOG}"

But that just runs an instance on your local machine; what you probably want is to integrate it with a client application.

To do this you need to format the raw command so that it works for the client you wish to use. For example, with VS Code you’d want to update the mcp.json as follows:

"servers": {
    "docker-mcp-gateway-private": {
        "type": "stdio",
        "command": "docker",
        "args": [
            "mcp",
           "gateway",
            "run",
            "--catalog",
            "my-private-catalog"
        ]
    }
}

Finally, verify that the gateway is using your new GHCR image and that the server is properly enabled. This quick check confirms everything is configured as expected before connecting clients.

docker mcp server inspect "${MCP_SERVER_NAME}" | grep -E 'name|image'

Summary of Key Commands

You might find the following CLI commands handy:

docker mcp catalog show docker-mcp --format yaml > ./docker-mcp.yaml
docker mcp catalog fork docker-mcp my-fork
docker mcp catalog export my-fork ./my-fork.yaml
docker mcp catalog create my-private-catalog
docker mcp catalog add my-private-catalog duckduckgo ./my-fork.yaml
docker buildx build --platform linux/amd64,linux/arm64 https://github.com/nickclyde/duckduckgo-mcp-server.git \
  -t ghcr.io/mikegcoleman/duckduckgo:latest --push
docker mcp server enable duckduckgo
docker mcp gateway run --catalog my-private-catalog

Conclusion

By using Docker’s MCP Toolkit, Catalog, and Gateway, you can fully control the tools available to your developers, customers, or AI agents. No more one-off setups, scattered images, or cross-client connection headaches.

Your next steps:

  • Add more servers to your catalog
  • Set up CI to rebuild and publish new server images
  • Share your catalog internally or with customers


Happy curating. 

We’re working on some exciting enhancements to make creating custom catalogs even easier. Stay tuned for updates!

Learn more

]]>