Hardened Images Are Free. Now What? https://www.docker.com/blog/hardened-images-free-now-what/ Tue, 10 Feb 2026 14:00:00 +0000

Docker Hardened Images are now free, covering Alpine, Debian, and over 1,000 images including databases, runtimes, and message buses. For security teams, this changes the economics of container vulnerability management.

DHI includes security fixes from Docker’s security team, which simplifies security response. Platform teams can pull the patched base image and redeploy quickly. But free hardened images raise a question: how should this change your security practice? Here’s how our thinking is evolving at Docker.

What Changes (and What Doesn’t)

DHI gives you a security “waterline.” Below the waterline, Docker owns vulnerability management. Above it, you do. When a scanner flags something in a DHI layer, it’s not actionable by your team. Everything above the DHI boundary remains yours.

The scope depends on which DHI images you use. A hardened Python image covers the OS and runtime, shrinking your surface to application code and direct dependencies. A hardened base image with your own runtime on top sets the boundary lower. The goal is to push your waterline as high as possible.

Vulnerabilities don’t disappear. Below the waterline, you need to pull patched DHI images promptly. Above it, you still own application code, dependencies, and anything you’ve layered on top.

Supply Chain Isolation

DHI provides supply chain isolation beyond CVE remediation.

Community images like python:3.11 carry implicit trust assumptions: no compromised maintainer credentials, no malicious layer injection via tag overwrite, no tampering since your last pull. The Shai Hulud campaign(s) demonstrated the consequences when attackers exploit stolen PATs and tag mutability to propagate through the ecosystem.

DHI images come from a controlled namespace where Docker rebuilds from source with review processes and cooldown periods. Supply chain attacks that burn through community images stop at the DHI boundary. You’re not immune to all supply chain risk, but you’ve eliminated exposure to attacks that exploit community image trust models.

This is a different value proposition than CVE reduction. It’s isolation from an entire class of increasingly sophisticated attacks.

The Container Image as the Unit of Assessment

Security scanning is fragmented. Dependency scanning, SAST, and SCA all run in different contexts, and none has full visibility into how everything fits together at deployment time.

The container image is where all of this converges. It’s the actual deployment artifact, which makes it the checkpoint where you can guarantee uniform enforcement from developer workstation to production. The same evaluation criteria you run locally after docker build can be identical to what runs in CI and what gates production deployments.

This doesn’t need to replace earlier pipeline scanning altogether. It means the image is where you enforce policy consistency and build a coherent audit trail that maps directly to what you’re deploying.
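To make this concrete, the same scan a developer runs locally after docker build can also gate the pipeline. Here is a sketch in GitHub Actions syntax; the image name, scanner choice (Grype), and severity threshold are all placeholder assumptions, not prescriptions:

```yaml
# Hypothetical CI job: the identical Grype invocation used locally
# also gates the pipeline, so policy is enforced uniformly.
jobs:
  image-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t example-app:${{ github.sha }} .
      - name: Scan with shared policy
        run: grype example-app:${{ github.sha }} --fail-on high
```

Because the evaluation criteria live in one place, the result on a workstation and the result in CI can only diverge if the image itself differs.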

Policy-Driven Automation

Every enterprise has a vulnerability management policy. The gap is usually between policy (PDFs and wikis) and practice (spreadsheets and Jira tickets).

DHI makes that gap easier to close by dramatically reducing the volume of findings that require policy decisions in the first place. When your scanner returns 50 CVEs instead of 500, even basic severity filtering becomes a workable triage system rather than an overwhelming backlog.

A simple, achievable policy might include the following:

  • High and critical severity vulnerabilities require remediation or documented exception
  • Medium and lower severity issues are accepted with periodic review
  • CISA KEV vulnerabilities are always in scope
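To illustrate how little logic such a policy needs, here is a sketch of a severity gate in shell. The one-finding-per-line "CVE-ID severity" report format is an assumption made purely for the example; real scanners emit richer JSON and support thresholds natively, so treat this as the triage logic only:

```shell
# Sketch of a severity-threshold gate, assuming a simplified report
# format of one "CVE-ID severity" pair per line (an assumption for
# illustration only).
policy_gate() {
  report="$1"
  # Count findings at or above the blocking threshold.
  blocking=$(grep -Ec '(high|critical)$' "$report" || true)
  if [ "${blocking:-0}" -gt 0 ]; then
    echo "policy gate: $blocking blocking finding(s)"
    return 1
  fi
  echo "policy gate: passed"
}
```

Medium and lower findings fall through for periodic review rather than blocking the build, matching the policy sketched above.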

Most scanning platforms support this level of filtering natively, including Grype, Trivy, Snyk, Wiz, Prisma Cloud, Aqua, and Docker Scout. You define your severity thresholds, apply them automatically, and surface only what requires human judgment.
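With Grype, for example, the threshold and documented exceptions can live in a .grype.yaml committed next to the code. This is a sketch assuming Grype's configuration keys; the ignored CVE and reason are hypothetical:

```yaml
# Fail the scan only at or above the blocking threshold.
fail-on-severity: high

# Documented exceptions live in version control alongside the code.
ignore:
  - vulnerability: CVE-2024-1234
    reason: "Vulnerable function not reachable; reviewed 2026-02-01"
```

Checking this file in means the exception record travels with the repository instead of living in a spreadsheet.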

For teams wanting tighter integration with DHI coverage data, Docker Scout evaluates policies against DHI status directly. Third-party scanners can achieve similar results through pipeline scripting or by exporting DHI coverage information for comparison.

The goal isn’t perfect automation but rather reducing noise enough that your existing policy becomes enforceable without burning out your engineers.

VEX: What You Can Do Today

Docker Hardened Images ship with VEX attestations that suppress CVEs Docker has assessed as not exploitable in context. The natural extension is for your teams to add their own VEX statements for application-layer findings.

Here’s what your security team can do today:

Consume DHI VEX data. Grype (v0.65+), Trivy, Wiz, and Docker Scout all ingest DHI VEX attestations automatically or via flags. Scanners without VEX support can still use extracted attestations to inform manual triage.

Write your own VEX statements. OpenVEX provides the JSON format. Use vexctl to generate and sign statements.
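A minimal OpenVEX document looks roughly like the following; the author, timestamp, CVE, and product identifier are placeholders:

```json
{
  "@context": "https://openvex.dev/ns/v0.2.0",
  "@id": "https://example.com/vex/cve-2024-1234",
  "author": "Example Security Team",
  "timestamp": "2026-02-10T12:00:00Z",
  "version": 1,
  "statements": [
    {
      "vulnerability": { "name": "CVE-2024-1234" },
      "products": [ { "@id": "pkg:oci/example-app" } ],
      "status": "not_affected",
      "justification": "vulnerable_code_not_in_execute_path"
    }
  ]
}
```

The status and justification values come from the OpenVEX vocabulary, so scanners that understand VEX can act on them mechanically.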

Attach VEX to images. Docker recommends docker scout attestation add for attaching VEX to images already in a registry:

docker scout attestation add \
  --file ./cve-2024-1234.vex.json \
  --predicate-type https://openvex.dev/ns/v0.2.0 \
  <image>

Alternatively, COPY VEX documents into the image filesystem at build time, though this prevents updates without rebuilding.

Configure scanner VEX ingestion. The workflow: scan, identify investigated findings, document as VEX, feed back into scanner config. Future scans automatically suppress assessed vulnerabilities.

Compliance: What DHI Actually Provides

Compliance frameworks such as ISO 27001, SOC 2, and the EU Cyber Resilience Act require systematic, auditable vulnerability management. DHI addresses specific control requirements:

Vulnerability management documentation (ISO 27001 A.8.8, SOC 2 CC7.1). The waterline model provides a defensible answer to “how do you handle base image vulnerabilities?” Point to DHI, explain the attestation model, show policy for everything above the waterline.

Continuous monitoring evidence. DHI images rebuild and re-scan on a defined cadence. New digests mean current assessments. Combined with your scanner’s continuous monitoring, you demonstrate ongoing evaluation rather than point-in-time checks.

Remediation traceability. VEX attestations create machine-readable records of how each CVE was handled. When auditors ask about specific CVEs in specific deployments, you have answers tied to image digests and timestamps.

CRA alignment. The Cyber Resilience Act requires “due diligence” vulnerability handling and SBOMs. DHI images include SBOM attestations, and VEX aligns with CRA expectations for exploitability documentation.

This won’t satisfy every audit question, but it provides the foundation most organizations lack.

What to Do After You Read This Post

  1. Identify high-volume base images. Check Docker Hub’s Hardened Images catalog (My Hub → Hardened Images → Catalog) for coverage of your most-used images (Python, Node, Go, Alpine, Debian).
  2. Swap one image. Pick a non-critical service, change the FROM line to DHI equivalent, rebuild, scan, compare results.
  3. Configure policy-based filtering. Set up your scanner to distinguish DHI-covered vulnerabilities from application-layer findings. Use Docker Scout or Wiz for native VEX integration, or configure Grype/Trivy ignore policies based on extracted VEX data.
  4. Document your waterline. Write down what DHI covers and what remains your responsibility. This becomes your policy reference and audit documentation.
  5. Start a VEX practice. Convert one informally-documented vulnerability assessment into a VEX statement and attach it to the relevant image.
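For step 3, one lightweight approach with Trivy is to derive a .trivyignore file from the extracted VEX data, so findings Docker has already assessed are suppressed on the next scan. The CVE IDs below are hypothetical:

```
# .trivyignore — CVEs covered by DHI VEX attestations, extracted 2026-02-10
CVE-2024-1234
CVE-2024-5678
```

Trivy reads this file from the working directory by default, so the suppression list can be regenerated and committed whenever DHI coverage changes.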

DHI solves specific, expensive problems around base image vulnerabilities and supply chain trust. The opportunity is building a practice around it that scales.

The Bigger Picture

DHI coverage is expanding. Today it might cover your OS layer; tomorrow it may extend through runtimes and into hardened libraries. Build your framework to be agnostic to where the boundary sits. The question is always the same: what has Docker attested to, and what remains yours to assess?

The methodology Docker uses for DHI (policy-driven assessment, VEX attestations, auditable decisions) extends into your application layer. We can’t own your custom code, but we can provide the framework for consistent practices above the waterline. Whether you use Scout, Wiz, Grype, Trivy, or another scanner, the pattern is the same. You can let DHI handle what it covers, automate policy for what remains, and document decisions in formats that travel with artifacts.

At Docker, we’re using DHI internally to build this vulnerability management model. The framework stays constant regardless of how much of our stack is hardened today versus a year from now. Only the boundary moves.

The hardened images are free. The VEX attestations are included. What’s left is integrating these pieces into a coherent security practice where the container is the unit of truth, policy drives automation, and every vulnerability decision is documented by default.

For organizations that require stronger guarantees, FIPS-enabled and STIG-ready images, and customizations, DHI Enterprise is tailor-made for those use cases. Get in touch with the Docker team if you would like a demo. If you’re still exploring, take a look at the catalog (no signup needed) or take DHI Enterprise for a spin with a free trial.

2025 Recap: The Year Software Development Changed Shape https://www.docker.com/blog/2025-recap-the-year-software-development-changed-shape/ Sat, 20 Dec 2025 13:45:00 +0000

2025 was the year software teams stopped optimizing models and started optimizing systems.

By December, a few truths were impossible to ignore.

1. Developer Productivity Became the Real Competitive Advantage

By mid-year, every major AI lab had cleared the “good enough reasoning” bar. With model quality converging, the differentiator was no longer raw intelligence. It was how fast teams could ship.

The fastest teams used systems that were:

  • Declarative: automation defined in YAML and config, not code
  • Composable: agents calling tools with minimal glue
  • Observable: evaluated, traced, and versioned 
  • Reproducible: identical behavior every run

Productivity became a platform problem, not a talent problem.

2. Security Went From “Filters” to “Blast Radius”

The real problem wasn’t what models say. It was what they could do.

Once agents can act, blast radius matters more than the prompt.

Production incidents across the industry made it clear:

  • Agents leaking internal data within minutes
  • Malicious plugins shipping ransomware
  • Supply-chain bugs in AI tooling 
  • Agents deleting repos or months of work

Smart teams adopted the same guardrails they use for privileged system processes:

  • Sandboxed runtimes
  • Containerized toolchains
  • Signed artifacts
  • Policies in front of tool calls
  • Hardened bases and reproducible builds

The industry stopped trying to filter danger out of the model and focused on containing it.

3. Agents Stopped Being Demos 

Agents became good enough to do real jobs.

At the start of the year, “agent” meant a clever prototype. By the end, agents were doing operational work: updating infrastructure, resolving customer issues, moving money, managing SaaS tools.

Two shifts unlocked this:

  • Reasoning took a leap.
    OpenAI’s o3 solved 25% of FrontierMath, problems that take researchers hours or days. DeepSeek made waves with its R1 model, showing that the frontier had moved from model size to compute at inference time.
  • Tools became standardized.
    MCP became the USB-C port of AI, a universal way for agents to safely access tools, data, and workflows. Once the ecosystem aligned on a common port, everything accelerated.

4. Containers Quietly Remained the Execution Layer for Everything

Containers continued doing the quiet work of powering every stack.

More than 90% of companies used containers as the default environment where:

  • Applications run
  • Build systems operate
  • Agents execute real tasks
  • Infrastructure is tested before hitting production

Even in an agent-driven world, developers need environments that act the same way every time. Containers remained a universal, stable execution surface.

5. Hardened Images Became the New Starting Point

You can’t trust the system if you don’t trust the base image. 

Docker Hardened Images (DHI) solved the first question every team had to ask: “What are we actually running?” 

DHI answered that with:

  • A known, verified base image
  • A transparent bill of materials
  • Reproducible builds
  • Signed artifacts

When hardened images became free, the cost of doing the right thing dropped to zero. Teams no longer layered security patches on top of unknown upstream risk. They began from a secure, trusted baseline.

What’s Next for 2026

The race for raw model intelligence is over. What separates winners in 2026 will be everything around the model.

  1. Agents become a standard runtime target. Versioned and deployed like services.
  2. Security frameworks treat agents as users. With permissions, onboarding, and monitoring.
  3. Ecosystem gravity increases. MCP is the start. The dominant tool interface becomes the center of the agent economy.
  4. Trust becomes infrastructure. Signed models to verified tools to hardened bases. Winners will have the smallest blast radius when things break.
  5. The term “AI engineer” fades. What remains is what has always been: software engineers who build secure, governable, and resilient systems.

In 2023, we learned to talk to models. In 2024, we learned to chain them. In 2025, we gave them real power.

2026 will be about earning the right to keep it.

Highlights from AWS re:Invent: Supercharging Kiro with Docker Sandboxes and MCP Catalog https://www.docker.com/blog/aws-reinvent-kiro-docker-sandboxes-mcp-catalog/ Fri, 12 Dec 2025 20:22:05 +0000

At the recent AWS re:Invent, Docker focused on a very real developer problem: how to run AI agents locally without giving them access to your machine, credentials, or filesystem.

With AWS introducing Kiro, Docker demonstrated how Docker Sandboxes and MCP Toolkit allow developers to run agents inside isolated containers, keeping host environments and secrets out of reach. The result is a practical setup where agents can write code, run tests, and use tools safely, while you stay focused on building, not cleaning up accidental damage.

Local AI Agents, Isolation, and Docker at AWS re:Invent

Two weeks ago, a Reddit user posted that their filesystem was accidentally deleted by Google Antigravity. And the top comment?

Alright no more antigravity outside of a container

Another user’s home directory was wiped by Claude Code this past week. And again, the top comment:

That’s exactly why Claude code should be used only inside an isolated container or vm

We agree that this should never happen and that containers provide the proper isolation and segmentation.

At AWS re:Invent 2025, we were able to show off this vision using Kiro running in our new Docker sandboxes, using MCP servers provided by the Docker MCP Toolkit. 

If you weren’t able to attend or visit us at the booth, fear not! I’ll share the demo with you.


Jim Clark, one of Docker’s Principal Engineers, providing a demo of a secure AI development environment using Docker’s sandboxes and MCP Toolkit

Giving Kiro safety guardrails

Docker Sandboxes provide the ability to run an agent inside an isolated environment using containers. In this environment, the agent has no access to credentials stored on the host and can only access the files of the specified project directory.

As an example, I have some demo AWS credentials on my machine:

> cat ~/.aws/credentials
[default]
aws_access_key_id=demo_access_key
aws_secret_access_key=demo_secret_key

Now, I’m going to clone the Catalog Service demo project and start a sandbox using Kiro:

git clone https://github.com/dockersamples/catalog-service-node.git
cd catalog-service-node
docker sandbox run --mount-docker-socket kiro

The --mount-docker-socket flag is added to give the sandbox the Docker socket, which will allow the agent to run my integration tests that use Testcontainers.

On the first launch, I will be required to authenticate. After that’s done, I will ask Kiro to tell me about the AWS credentials it has access to:

     ⢀⣴⣶⣶⣦⡀⠀⠀⠀⢀⣴⣶⣦⣄⡀⠀⠀⢀⣴⣶⣶⣦⡀⠀⠀⢀⣴⣶⣶⣶⣶⣶⣶⣶⣶⣶⣦⣄⡀⠀⠀⠀⠀⠀⠀⢀⣠⣴⣶⣶⣶⣶⣶⣦⣄⡀⠀⠀⠀
    ⢰⣿⠋⠁⠈⠙⣿⡆⠀⢀⣾⡿⠁⠀⠈⢻⡆⢰⣿⠋⠁⠈⠙⣿⡆⢰⣿⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠈⠙⠻⣦⠀⠀⠀⠀⣴⡿⠟⠋⠁⠀⠀⠀⠈⠙⠻⢿⣦⠀⠀
    ⢸⣿⠀⠀⠀⠀⣿⣇⣴⡿⠋⠀⠀⠀⢀⣼⠇⢸⣿⠀⠀⠀⠀⣿⡇⢸⣿⠀⠀⠀⢠⣤⣤⣤⣤⣄⠀⠀⠀⠀⣿⡆⠀⠀⣼⡟⠀⠀⠀⠀⣀⣀⣀⠀⠀⠀⠀⢻⣧⠀
    ⢸⣿⠀⠀⠀⠀⣿⡿⠋⠀⠀⠀⢀⣾⡿⠁⠀⢸⣿⠀⠀⠀⠀⣿⡇⢸⣿⠀⠀⠀⢸⣿⠉⠉⠉⣿⡇⠀⠀⠀⣿⡇⠀⣼⡟⠀⠀⠀⣰⡿⠟⠛⠻⢿⣆⠀⠀⠀⢻⣧
    ⢸⣿⠀⠀⠀⠀⠙⠁⠀⠀⢀⣼⡟⠁⠀⠀⠀⢸⣿⠀⠀⠀⠀⣿⡇⢸⣿⠀⠀⠀⢸⣿⣶⣶⡶⠋⠀⠀⠀⠀⣿⠇⢰⣿⠀⠀⠀⢰⣿⠀⠀⠀⠀⠀⣿⡆⠀⠀⠀⣿⡆
    ⢸⣿⠀⠀⠀⠀⠀⠀⠀⠀⠹⣷⡀⠀⠀⠀⠀⢸⣿⠀⠀⠀⠀⣿⡇⢸⣿⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣠⣼⠟⠀⢸⣿⠀⠀⠀⢸⣿⠀⠀⠀⠀⠀⣿⡇⠀⠀⠀⣿⡇
    ⢸⣿⠀⠀⠀⠀⠀⣠⡀⠀⠀⠹⣷⡄⠀⠀⠀⢸⣿⠀⠀⠀⠀⣿⡇⢸⣿⠀⠀⠀⠀⣤⣄⠀⠀⠀⠀⠹⣿⡅⠀⠀⠸⣿⠀⠀⠀⠸⣿⠀⠀⠀⠀⠀⣿⠇⠀⠀⠀⣿⠇
    ⢸⣿⠀⠀⠀⠀⣾⡟⣷⡀⠀⠀⠘⣿⣆⠀⠀⢸⣿⠀⠀⠀⠀⣿⡇⢸⣿⠀⠀⠀⠀⣿⡟⣷⡀⠀⠀⠀⠘⣿⣆⠀⠀⢻⣧⠀⠀⠀⠹⣷⣦⣤⣤⣾⠏⠀⠀⠀⣼⡟
    ⢸⣿⠀⠀⠀⠀⣿⡇⠹⣷⡀⠀⠀⠈⢻⡇⠀⢸⣿⠀⠀⠀⠀⣿⡇⢸⣿⠀⠀⠀⠀⣿⡇⠹⣷⡀⠀⠀⠀⠈⢻⡇⠀⠀⢻⣧⠀⠀⠀⠀⠉⠉⠉⠀⠀⠀⠀⣼⡟
    ⠸⣿⣄⡀⢀⣠⣿⠇⠀⠙⣷⡀⠀⢀⣼⠇⠀⠸⣿⣄⡀⢀⣠⣿⠇⠸⣿⣄⡀⢀⣠⣿⠇⠀⠙⣷⡀⠀⠀⢀⣼⠇⠀⠀⠀⠻⣷⣦⣄⡀⠀⠀⠀⢀⣠⣴⣾⠟
    ⠀⠈⠻⠿⠿⠟⠁⠀⠀⠀⠈⠻⠿⠿⠟⠁⠀⠀⠈⠻⠿⠿⠟⠁⠀⠀⠈⠻⠿⠿⠟⠁⠀⠀⠀⠈⠻⠿⠿⠟⠁⠀⠀⠀⠀⠀⠈⠙⠻⠿⠿⠿⠿⠟⠋⠁
Model: Auto (/model to change) | Plan: KIRO FREE (/usage for more detail)

!> Tell me about the AWS credentials you have access to

From here, Kiro will search the typical places AWS credentials are configured. But ultimately, it reaches the following conclusion:

Currently, there are no AWS credentials configured on your system

And why is this? The credentials on the host are not accessible inside the sandbox environment. The agent is in the isolated environment and only has access to the current project directory.

Giving Kiro secure tools with the MCP Toolkit

If we take a step back and think about it, the only credential an agent should have access to is to authenticate with the model provider. All other credentials belong to the tools (or MCP servers) around the agent.

And that’s where the MCP Toolkit comes in!

Sandboxes don’t yet have an automatic way to connect to the MCP Toolkit (it’s coming soon!). Until that’s available, I’ll start an MCP Gateway with the following command:

docker mcp gateway run --transport=streaming

There are a variety of ways to configure Kiro with MCP servers, but the project-level configuration provides an easy way that also works with sandboxes.

In the project, I will create a .kiro/settings/mcp.json file with the following contents:

{
  "mcpServers": {
    "docker-mcp-toolkit": {
      "type": "http",
      "url": "http://host.docker.internal:8811/"
    }
  }
}

After restarting Kiro, I can ask it about the available tools:

/tools

The output then shows the following tools coming from the MCP Gateway:

docker-mcp-toolkit (MCP)
- code-mode             trusted
- mcp-add               trusted
- mcp-config-set        trusted
- mcp-create-profile    trusted
- mcp-exec              trusted
- mcp-find              trusted
- mcp-remove            trusted

These tools highlight the ability to dynamically add and remove MCP servers through the Gateway. 

By using an AGENTS.md file, I can tell the agent which MCP servers to use and give it an additional nudge to remove those servers when they’re no longer needed (which will reduce my context size and costs). This is what my file looks like:

# Special instructions

- When info is needed from GitHub, use the `github-official` MCP server. This will ensure proper auth tokens are used.
- When adding MCP servers, be sure to activate all tools.
- When you have the required info from any MCP server, remove the MCP server to reduce the number of tools in the context.

Before Kiro can use the GitHub MCP server, I need to ensure I’ve configured the MCP server with my OAuth credentials.


Screenshot of the Docker Desktop dashboard showing the GitHub Official MCP server listing within the MCP Toolkit. Specifically, it shows the OAuth configuration has been completed.

Putting it all together

With all of the setup completed, I’m ready to have Kiro do some work for me. I’ll ask it to complete one of the issues on GitHub:

> Can you help me complete the work for issue #64?

Watching the output, I’ll first see Kiro add the github-official MCP server by using the mcp-add tool.

Running tool mcp-add with the param (from mcp server: docker-mcp-toolkit)
 ⋮  {
 ⋮    "name": "github-official",
 ⋮    "activate": true
 ⋮  }
 - Completed in 0.609s

From there, I’ll see that Kiro will check the git remotes to determine the GitHub organization and repository being used:

> I need to determine the repository owner and name. Let me check the current directory for git information:
I will run the following command: git remote -v (using tool: shell)
Purpose: Get git remote information to identify the repository

origin  https://github.com/dockersamples/catalog-service-node.git (fetch)
origin  https://github.com/dockersamples/catalog-service-node.git (push)
 - Completed in 0.6s

Then, it will use the GitHub MCP server to get the issue details:

> Now I can get the issue details:
Running tool issue_read with the param (from mcp server: docker-mcp-toolkit)
 ⋮  {
 ⋮    "method": "get",
 ⋮    "owner": "dockersamples",
 ⋮    "repo": "catalog-service-node",
 ⋮    "issue_number": 64
 ⋮  }
 - Completed in 0.613s

> Perfect! Issue #64 is about adding product metadata to S3 objects.

From here, Kiro will go and make the changes to my source code, update the integration tests, and even run the integration tests to ensure it all works. After it’s completed, I have the following output:

I've successfully completed the work for issue #64. Here's what was implemented:

Changes Made:

1. StorageService.js: 
   - Added Tagging parameter to S3 PutObjectCommand with product_id=${id}
   - Added Metadata parameter with product_name: productName
   - Added new AWS SDK imports for GetObjectTaggingCommand and HeadObjectCommand
   - Created helper functions getObjectTags() and getObjectMetadata()

2. ProductService.js:
   - Modified uploadProductImage() to fetch product name from database and pass it to uploadFile()
   - Exposed new functions getProductImageTags() and getProductImageMetadata()

3. Integration Test:
   - Added validation to verify S3 objects have correct tags (product_id with the product ID)
   - Added validation to verify S3 objects have correct metadata (product_name with the product name)

Test Results:
All integration tests pass, confirming that:
- S3 objects are tagged with the product identifier
- S3 objects include metadata with the product name
- The existing functionality remains intact

And notably, right before that, I see that Kiro automatically removed the GitHub MCP server to remove its tools from my context:

Let me clean up by removing the GitHub MCP server since we're done with it:
Running tool mcp-remove with the param (from mcp server: docker-mcp-toolkit)
 ⋮  {
 ⋮    "name": "github-official"
 ⋮  }
 - Completed in 0.2s

With those tools removed from the context, the model has less to tokenize and process, which means faster responses and lower cost.

Highlighting what’s important

Taking a step back, here’s what we ran:

  • An agent in an isolated environment. With the agent running in a container, it’s unable to access and leak credentials stored on my host machine. And rogue requests to delete my filesystem are limited to the containerized environment where it’s running as a non-root user.
  • Isolated and containerized MCP servers. Each MCP server runs in its isolated container, preventing host access. In addition, I don’t have to spend any time worrying about runtime environments or configuration. With a container, “it just works!”
  • API credentials only where they’re needed. The only component that needs access to my GitHub credential is the GitHub MCP server, where it is securely injected. This approach further prevents potential leaks and exposures.

In other words, we have a microservice-style architecture where each component runs in its own container and follows the principle of least privilege, with access only to what it needs.


Looking forward

Here at Docker, we’re quite excited about this architecture and there’s still a lot to do. Two items I’m excited about include:

  • A network boundary for agentic workloads. This boundary would limit network access to only authorized hostnames. Then, if a prompt injection tries to send sensitive information to evildomain.com, that request is blocked.
  • Governance and control for organizations. With this, your organization can authorize the MCP servers that are used and even create its own custom catalogs and rule sets.

If you want to try out Sandboxes, you can do so by enabling the Experimental Feature in Docker Desktop 4.50+. We’d love to hear your feedback and thoughts!

Learn more 

How Docker Hardened Images Patch Vulnerabilities in 24 Hours https://www.docker.com/blog/how-docker-hardened-images-patch-cves-in-24-hours/ Fri, 21 Nov 2025 18:40:39 +0000

On November 19, 2025, the Golang project published two Common Vulnerabilities and Exposures (CVEs) affecting the widely-used golang.org/x/crypto/ssh package. While neither vulnerability received a critical CVSS score, both presented real risks to applications using SSH functionality in Go-based containers.


CVE-2025-58181 affects SSH servers parsing GSSAPI authentication requests. The vulnerability allows attackers to trigger unbounded memory consumption by exploiting the server’s failure to validate the number of mechanisms specified in authentication requests. CVE-2025-47914 impacts SSH Agent servers that fail to validate message sizes when processing identity requests, potentially causing system panics when malformed messages arrive. (These two vulnerabilities came just days after CVE-2025-47913, a high-severity vulnerability affecting the same Golang component, which Docker also quickly patched.)

For teams running Go applications with SSH functionality in their containers, leaving these vulnerabilities unpatched creates exposure to denial-of-service attacks and potential system instability.

How Docker achieves lightning fast vulnerability response


When these CVEs hit the Golang project’s security feed, Docker Hardened Images customers had patched versions available in less than 24 hours. This rapid response stems from Docker Scout’s continuous monitoring architecture and DHI’s automated remediation pipeline.

Here’s how it works:

Continuous CVE ingestion: Unlike vulnerability scanning that runs on batch schedules, Docker Scout continuously ingests CVE information from upstream sources including GitHub security advisories, the National Vulnerability Database, and project-specific feeds. The moment CVE data becomes available, Scout begins analysis.

Instant impact assessment: Within seconds of CVE ingestion, Scout identifies which Docker Hardened Images are affected based on its comprehensive SBOM database. This immediate notification allows the remediation process to start without delay.

Automated patching workflow: Depending on the vulnerability and package, Docker either patches automatically or triggers a manual review process for complex changes. For these Golang SSH vulnerabilities, the team initiated builds immediately after upstream patches became available.

Cascading builds: Once the patched Golang package builds successfully, the system automatically triggers rebuilds of all dependent packages and images. Every Docker Hardened Image containing the affected golang.org/x/crypto/ssh package gets rebuilt with the security fix.

The entire process, from CVE disclosure to patched images available to customers, was completed in under 24 hours. Customers using Docker Scout received immediate notifications about the vulnerabilities and the availability of patched versions.

Why Docker’s Security Response Is Different

One of Docker’s key differentiators is its continuous, real-time monitoring, rather than periodic batch scanning. Traditional vulnerability management relies on daily or weekly scans, leaving containers exposed to known vulnerabilities for hours or even days.

With Docker Scout’s real-time CVE ingestion, detection starts the moment a vulnerability is published, enabling remediation within seconds and minimizing exposure.

This foundation powers Docker Hardened Images (DHI), where packages and dependencies are continuously tracked and automatically updated when issues arise. For example, when vulnerabilities were found in the golang.org/x/crypto library, all affected images were rebuilt and released within a day. Customers simply pull the latest tags to stay secure, no manual patching, emergency maintenance, or impact triage required.

But continuous monitoring is just the foundation. What truly sets Docker apart is how that real-time intelligence flows into an automated, transparent, and trusted remediation pipeline, built on over a decade of experience securing and maintaining the Docker Official Images program. These are the same images trusted and used by millions of developers and organizations worldwide, forming the foundation of countless production environments. That long-standing operational experience in continuously maintaining, rebuilding, and distributing secure images at global scale gives Docker a proven track record of reliability, consistency, and trust that few others can match.

Beyond automation, Docker’s AI guardrails add yet another layer of protection. Purpose-built for the Hardened Images pipeline, these AI systems continuously analyze upstream code changes, flag risky patterns, and prevent flawed dependencies from entering the supply chain. Unlike standard coding assistants, Docker’s AI guardrails are informed by manual, project-specific reviews, blending human expertise with adaptive intelligence. When the system detects a high-confidence issue such as an inverted error check, ignored failure, or resource mismanagement, it halts the release until a Docker engineer verifies and applies the fix. This human-in-the-loop model ensures vulnerabilities are caught long before they can reach customers, turning AI into a force multiplier for safety, not a replacement for human judgment.

Another critical differentiator is complete transparency. Consider what happens when a security scanner still flags a vulnerability even after you’ve pulled a patched image. With DHI, every image includes a comprehensive and accurate Software Bill of Materials (SBOM) that provides definitive visibility into what’s actually inside your container. When a scanner reports a supposedly remediated image as vulnerable, teams can verify the exact package versions and patch status directly from the SBOM instead of relying on scanner heuristics.

This transparency also extends to how Docker Scout handles CVE data. Docker relies entirely on independent, third-party sources for vulnerability decisions and prioritization, including the National Vulnerability Database (NVD), GitHub Security Advisories, and upstream project maintainers. This approach is essential because traditional scanners often depend on pattern matching and heuristics that can produce false positives. They may miss vendor-specific patches, overlook backported fixes, or flag vulnerabilities that have already been remediated due to database lag. In some cases, even vendor-recommended scanners fail to detect unpatched vulnerabilities, creating a false sense of security.

Without an accurate SBOM and objective CVE data, teams waste valuable time chasing phantom vulnerabilities or debating false positives with compliance auditors. Docker’s approach eliminates that uncertainty. Because the SBOM is generated directly from the build process, not inferred after the fact, it provides definitive evidence of what’s inside each image and why certain CVEs do or don’t apply. This transforms vulnerability management from guesswork and debate into objective, verifiable security assurance, backed by transparent, third-party data.

CVEs don’t have to disrupt your week

Managing vulnerabilities consumes significant engineering time. When critical CVEs drop, teams rush to assess impact, test patches, and coordinate deployments. Docker Hardened Images eliminate this overhead: base images are updated continuously, with complete transparency into their contents and rapid turnarounds that shrink your exposure window.

If you’re tired of vulnerability whack-a-mole disrupting your team’s roadmap, Docker Hardened Images offer a better path forward. Learn more about how Docker Scout and Hardened Images can reduce your vulnerability management burden, or contact our team to discuss your specific security requirements.

]]>
Docker Engine v29: Foundational Updates for the Future https://www.docker.com/blog/docker-engine-version-29/ Tue, 11 Nov 2025 15:16:19 +0000 https://www.docker.com/?p=81944 This post is for Linux users running Docker Engine (Community Edition) directly on their hosts. Docker Desktop users don’t need to take any action — Engine updates are included automatically in future Desktop releases.

Docker Engine v29 is a foundational release that sets the stage for the future of the Docker platform. While it may not come with flashy new features, it introduces a few significant under-the-hood changes that simplify our architecture and improve ecosystem alignment:

  • Minimum API version update
  • Containerd image store becomes the default for new installations
  • Migration to Go modules
  • Experimental support for nftables

These changes improve maintainability, developer experience, and interoperability across the container ecosystem.

Minimum API Version Update

Docker versions older than v25 are now end of life, so we have increased the minimum API version to 1.44 (Moby v25).

If you are getting the following error, you will need to update to a newer client or follow the mitigation steps below to override the minimum version:

Error response from daemon: client version 1.43 is too old.
Minimum supported API version is 1.44, please upgrade your client to a newer version

Override the minimum API version

There are two methods to launch dockerd with a lower minimum API version. Additional information can be found on docs.docker.com

Using an environment variable when starting dockerd

Launch dockerd with the DOCKER_MIN_API_VERSION environment variable set to the previous value. For example:

DOCKER_MIN_API_VERSION=1.24 dockerd

Using a JSON configuration file — daemon.json

Set min-api-version in your daemon.json file.

{
  "min-api-version": "1.24"
}

Containerd Image Store Becomes the Default

Why We Made This Change

The Containerd runtime originated as a core component of Docker Engine and was later split out and donated to the Cloud Native Computing Foundation (CNCF). It now serves as the industry-standard container runtime, powering Kubernetes and many other platforms.

While Docker introduced containerd for container execution years ago, we continued using the graph driver storage backend for managing image layers. Meanwhile, containerd evolved its own image content store and snapshotter framework, designed for modularity, performance, and ecosystem alignment.

To ensure stability, Docker has been gradually migrating to the containerd image store over time. Docker Desktop has already used the containerd image store as the default for most of the past year. With Docker Engine v29, this migration takes the next step by becoming the default in the Moby engine.

What it is

  • As of Docker Engine v29, the containerd image store becomes the default for image layer and content management for new installs.
  • Legacy graph drivers are still available, but are now deprecated. New installs can still opt out of the containerd image store if they run into issues.

Why This Matters

  • Simplified architecture: Both execution and storage now use containerd, reducing duplication and internal complexity
  • Unlock new feature possibilities, such as:
    • Snapshotter innovations
    • Lazy pulling of image content
    • Remote content stores
    • Peer-to-peer distribution
  • Ecosystem alignment: Brings Docker Engine in sync with containerd-based platforms, like Kubernetes, improving interoperability.
  • Future-proofing: Enables faster innovation in image layer handling and runtime behaviour

We appreciate that this change may cause some disruption, as the Containerd image store takes a different approach to content and layer management compared to the existing storage drivers.

However, this shift is a positive one. It enables a more consistent, modular, and predictable container experience.

Migration Path

To be clear, these changes only impact new installs; existing users will not be forced onto the containerd image store. However, you can opt in and start your migration now.
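On an existing install, opting in is a one-line change using the documented containerd-snapshotter feature flag (assuming your daemon configuration lives at the default /etc/docker/daemon.json; restart the daemon afterwards):

{
  "features": {
    "containerd-snapshotter": true
  }
}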

We are working on a migration guide to help teams transition and move their existing content to the containerd image store.

What’s next

  • The graph driver backend will be removed in a future release.
  • Docker will continue evolving the image store experience, leveraging the full capabilities of containerd’s ecosystem.
  • Expect to see enhanced content management, multi-snapshotter support, and faster pull/push workflows in the future.

Moby Migrates to Go Modules

Why We Made This Change

Go modules have been the community standard since 2019, but until now, the Moby project used a legacy vendoring system. Avoiding Go modules was creating:

  • Constant maintenance churn to work around tooling assumptions
  • Confusing workflows for contributors
  • Compatibility issues with newer Go tools and ecosystem practices

Simply put, continuing to resist Go modules was making life harder for everyone.

What It Is

  • The Moby codebase is now fully module-aware using go.mod.
  • This means cleaner dependency management and better interoperability for tools and contributors.
  • External clients, API libraries, and SDKs will find the Moby codebase easier to consume and integrate with.

What It’s Not

  • This is not a user-facing feature—you won’t see a UI or command change.
  • However, it does affect developers who consume Docker’s Go APIs.

Important for Go Developers

If you’re consuming the Docker client or API packages in your own Go projects:

  • The old module path github.com/docker/docker will no longer receive updates.
  • To stay current with Docker Engine releases, you must switch to importing from github.com/moby/moby.
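As a sketch, most of the mechanical work is a path rewrite plus a go get. The demo below runs the rewrite on a throwaway file so it is self-contained; the exact module layout and tags under github.com/moby/moby are assumptions, so check the v29 release notes before applying this to a real project.

```shell
# Create a throwaway Go file that uses the old import path.
mkdir -p /tmp/moby-migrate-demo
cat > /tmp/moby-migrate-demo/main.go <<'EOF'
package main

import client "github.com/docker/docker/client"
EOF

# Rewrite the old module path to the new one.
sed -i 's#github.com/docker/docker#github.com/moby/moby#g' /tmp/moby-migrate-demo/main.go

# In a real project you would then refresh dependencies, e.g.:
#   go get github.com/moby/moby@latest && go mod tidy
grep 'github.com/moby/moby' /tmp/moby-migrate-demo/main.go
```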

Experimental support for nftables

Why We Made This Change

For bridge and overlay networks on Linux, Docker Engine currently creates firewall rules using “iptables” and “ip6tables”.

In most cases, these commands are linked to “iptables-nft” and “ip6tables-nft”. So, Docker’s rules are translated to nftables behind the scenes.

However, OS distributions are beginning to deprecate support for iptables. It’s past time for Docker Engine to create its own nftables rules directly.

What It Is

Opt-in support for creating nftables rules instead of iptables.

The rules are functionally equivalent, but there are some differences to be aware of, particularly if you make use of the “DOCKER-USER” chain in iptables.

On a host that uses “firewalld”, iptables rules are created via firewalld’s deprecated “direct” interface. That’s not necessary for nftables because rules are organised into separate tables, each with its own base chains. Docker will still set up firewalld zones and policies for its devices, but it creates nftables rules directly, just as it does on a host without firewalld.

What It’s Not

In this initial version, nftables support is “experimental”. Please be cautious about deploying it in a production environment.

Swarm support is planned for a future release. At present, it’s not possible to enable Docker Engine’s nftables support on a node with Swarm enabled.

In a future release, nftables will become the default firewall backend and iptables support will be deprecated.

Future Work

In addition to adding planned Swarm support, there’s scope for efficiency improvements.

For example, the rules themselves could make more use of nftables features, particularly sets of ports.

These changes will be prioritised based on the feedback received. If you would like to contribute, do let us know!

Try It Out

Start dockerd with the option --firewall-backend=nftables to enable nftables support.

After a reboot, you may find you need to enable IP forwarding on the host. If you’re using the “DOCKER-USER” iptables chain, it will need to be migrated. For more information, see https://docs.docker.com/engine/network/firewall-nftables

We’re looking for feedback. If you find issues, let us know at https://github.com/moby/moby/issues.
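The backend can also be set persistently in daemon.json. The key name here is assumed to mirror the CLI flag; verify it against the daemon configuration reference before relying on it:

{
  "firewall-backend": "nftables"
}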

Getting Started with Engine v29

As mentioned, this post is for Linux users running Docker Engine (Community Edition) directly on their hosts. Docker Desktop users don’t need to take any action — Engine updates are included automatically in the upcoming Desktop releases.

To install Docker Engine on your host or update an existing installation, please follow the guide for your specific OS.

For additional information about this release:

]]>
Help Define the Future of Development – Take the Docker State of Application Development Survey 2025 https://www.docker.com/blog/state-of-application-development-survey-2025/ Mon, 10 Nov 2025 14:00:00 +0000 https://www.docker.com/?p=81847 Welcome to the fourth annual Docker State of Application Development survey!

Please help us better understand and serve the application development community with just 20 minutes of your time. We want to know where you’re focused, what you’re working on, and what is most important to you. Your thoughts and feedback will help us build the best products and experiences for you.

image1

We believe insights should be shared. That’s why we published the 2024 State of Application Development Survey report. Thanks to the community’s engagement, we revealed the challenges developers face, the tools they love, and the trends they’re excited about. We’re using those insights to guide the features we build.

Take the Docker State of Application Development survey now!

By taking part in the survey, you’ll be entered into a raffle for a chance to win* one of the following prizes:

  • 1 laptop computer (Apple M4 Macbook Pro 16″)
  • 1 PlayStation 5 game console with VR headset
  • 5 $300 Amazon.com gift cards
  • 50 exclusive Docker swag sets

In addition, the first 300 respondents to complete the survey will receive an exclusive pair of Docker socks!

The survey is open from November 10th, 2025 (7AM PST) to December 10th, 2025 (11:59PM PST).

We’ll choose the winners randomly in accordance with the promotion’s official rules.** Winners will be notified via email by February 1, 2026.

The Docker State of Application Development Survey only takes about 20 minutes to complete. We appreciate your time and insights – every contribution makes a difference!




*Data will be reported publicly only in aggregate and without personally identifying information. 
** Docker State of Application Development Promotion Official Rules.

]]>
theCUBE Research economic validation of Docker’s development platform https://www.docker.com/blog/thecube-research-economic-validation-of-docker-development-platform/ Thu, 30 Oct 2025 11:46:28 +0000 https://www.docker.com/?p=79874 Docker’s ROI and impact on agentic AI, security, and developer productivity.

theCUBE Research surveyed ~400 IT and AppDev professionals at leading global enterprises to investigate Docker’s ROI and impact on agentic AI development, software supply chain security, and developer productivity. The industry context is that enterprise developers face mounting pressure to rapidly ship features, build agentic AI applications, and maintain security, all while navigating a fragmented array of development tools and open source code that require engineering cycles and introduce security risks. Docker transformed software development through containers and DevSecOps workflows, and is now doing the same for agentic AI development and software supply chain security. theCUBE Research quantified Docker’s impact: teams build agentic AI apps faster, achieve near-zero CVEs, remediate vulnerabilities before exploits, ship modern cloud-native applications, save developer hours, and generate financial returns.

Keep reading for key highlights and analysis. Download theCUBE Research report and ebook to take a deep dive.

Agentic AI development streamlined using familiar technologies

Developers can build, run, and share agents and compose agentic systems using familiar Docker container workflows. To do this, developers can build agents safely using Docker MCP Gateway, Catalog, and Toolkit; run agents securely with Docker Sandboxes; and run models with Docker Model Runner. These capabilities align with theCUBE Research findings that 87% of organizations reduced AI setup time by over 25% and 80% report accelerating AI time-to-market by at least 26%. Using Docker’s modern and secure software delivery practices, development teams can implement AI feature experiments faster, testing agentic AI capabilities in days that previously took months. Nearly 78% of developers experienced significant improvement in the standardization and streamlining of AI development workflows, enabling better testing and validation of AI models. Docker helps enterprises generate business advantages by deploying new customer experiences that leverage agentic AI applications. This is phenomenal, given the nascent stage of agentic AI development in enterprises.

Software supply chain security and innovation can move in lockstep

Security engineering and vulnerability remediation can slow development to a crawl. Furthermore, checkpoints or controls may be applied too late in the software development cycle, or after dangerous exploits, creating compounded friction between security teams seeking to mitigate vulnerabilities and developers seeking to rapidly ship features. Docker embeds security directly into development workflows through vulnerability analysis and continuously-patched certified container images. theCUBE Research analysis supports these Docker security capabilities: 79% of organizations find Docker extremely or very effective at maintaining security & compliance, while 95% of respondents reported that Docker improved their ability to identify and remediate vulnerabilities. By making it very simple for developers to use secure images as a default, Docker enables engineering teams to plan, build, and deploy securely without sacrificing feature velocity or creating deployment bottlenecks. Security and innovation can move in lockstep because Docker concurrently secures software supply chains and eliminates vulnerabilities.

Developer productivity becomes a competitive advantage

Consistent container environments eliminate friction, accelerate software delivery cycles, and enable teams to focus on building features rather than overcoming infrastructure challenges. When developers spend less time on environment setup and troubleshooting, they ship more features. Application features that previously took months now reach customers in weeks. The research demonstrates Docker’s ability to increase developer productivity. 72% of organizations reported significant productivity gains in development workflows, while 75% have transformed or adopted DevOps practices when using Docker. Furthermore, when it comes to AI and supply chain security, the findings mentioned above further support how Docker unlocks developer productivity.

Financial returns exceed expectations

CFOs demand quantifiable returns for technology investments, and Docker delivers them. 95% of organizations reported substantial annual savings, with 43% reporting $50,000-$250,000 in cost reductions from infrastructure efficiency, reduced rework, and faster time-to-market. The ROI story is equally compelling: 69% of organizations report ROI exceeding 101%, with many achieving ROI above 500%. When factoring in faster feature delivery, improved developer satisfaction, and reduced security incidents, the business case for Docker becomes even more tangible. The direct costs of a security breach can surpass $500 million, so mitigating even a fraction of this cost provides a compelling financial justification for enterprises to deploy Docker to every developer.

Modernization and cloud native apps remain top of mind

For enterprises that maintain extensive legacy systems, Docker serves as a proven catalyst for cloud-native transformation at scale. Results show that nearly nine in ten organizations (88%) report Docker has enabled modernization of at least 10% of their applications, with half achieving modernization across 31-60% of workloads and another 20% modernizing over 60%. Docker accelerates the shift from monolithic architectures to modern containerized cloud-native environments while also delivering substantial business value. For example, 37% of organizations report 26% to >50% faster product time-to-market, and 72% report annual cost savings ranging from $50,000 to over $1 million.

Learn more about Docker’s impact on enterprise software development

Docker has evolved from a containerization suite into a development platform for testing, building, securing, and deploying modern software, including agentic AI applications. Docker enables enterprises to apply proven containerization and DevSecOps practices to agentic AI development and software supply chain security.

Download (below) the full report and the ebook from theCUBE Research analysis to learn Docker’s impact on developer productivity, software supply chain security, agentic AI application development, CI/CD and DevSecOps, modernization, cost savings, and ROI.  Learn how enterprises leverage Docker to transform application development and win in markets where speed and innovation determine success.

theCUBE Research economic validation of Docker’s development platform

> Download the Report

> Download the eBook

theCUBE docker banner

]]>
Docker + E2B: Building the Future of Trusted AI https://www.docker.com/blog/docker-e2b-building-the-future-of-trusted-ai/ Thu, 23 Oct 2025 13:59:00 +0000 https://www.docker.com/?p=78991
Trusted Software Starts Here

The era of agents is here. Some teams are experimenting, others are just getting started, and a few are already running agents in production. But one challenge stands out: trust. Trust that your agents will act securely. Over 20 million developers already rely on Docker to build and ship software safely and fast. Now, we’re helping you build what’s next.

Over the past few months, Docker has continued to make significant investments in driving developer productivity and building a trusted foundation for AI, with a focus on simplicity and portability, from how you build and run agents to how you secure them. We’ve shipped new capabilities: a toolkit for MCP, support for running open-weight models locally, and a catalog of 200+ MCP servers.

Today, we’re taking that commitment to the next level through a new partnership with E2B, a company that provides secure cloud sandboxes for AI agents. Together, we’re giving developers fast, secure access to hundreds of real-world tools, without sacrificing safety or speed.

The Next Frontier of AI: Trust

Agents can code, connect, and act, but without control, that power creates risk in two areas.

First, when agents run code. AI-generated code often executes in live environments without safeguards, accessing files or APIs it shouldn’t. Teams are learning that “auto-run” doesn’t mean “safe to run.”

Second, when agents connect to real-world tools. Integrating with external tools can create security trade-offs. And the challenge keeps growing. With the rise of the Model Context Protocol (MCP), every connected tool becomes its own potential risk surface with hundreds of servers and tokens to manage. Today, developers use a variety of DIY isolation techniques, custom-built sandboxes, and resource limits, but all of them tend to slow productivity.

Developers have seen this before. Before containers, developers struggled with messy dependencies and inconsistent environments when building applications. 

Today’s AI builders face a similar challenge. Models need access to APIs and databases, but enabling that access slows them down and introduces complexity and risk. Just as containers standardized how applications run, Docker is now standardizing how agents connect and act.

This is where E2B and Docker come together. E2B secures agent-generated code execution with cloud sandboxes. Docker secures tool access through the MCP Gateway and Catalog, offering direct connectivity to 200+ real-world tools. Each MCP server is curated by Docker and automatically audited for exploits and malicious behavior.

Docker + E2B: Secure Access to Hundreds of MCP Tools

Starting today, every E2B sandbox includes direct access to Docker’s MCP Catalog, a collection of 200+ tools such as GitHub, Perplexity, Browserbase, and ElevenLabs, all enabled by the Docker MCP Gateway. Developers can now run and connect agents more confidently, without slowing down.

Try it here.

What’s Next 

This is only the beginning. Today’s partnership is the first step in a larger journey. Together, our goal is to build the future of a secure AI stack, where every agent runs securely, every connection is verifiable, and developers can move fast without compromise.

From Docker’s perspective, our goal is to ensure developers can trust the MCP servers they pull, validate them through Docker, and govern their AI stack with the same visibility and confidence they already have for containers today.

The next wave of AI development will be built on trust. Trust in the tools, the data, and the environments behind them. At Docker, we’re building that foundation.

]]>
Docker Model Runner on the new NVIDIA DGX Spark: a new paradigm for developing AI locally https://www.docker.com/blog/new-nvidia-dgx-spark-docker-model-runner/ Tue, 14 Oct 2025 00:16:21 +0000 https://www.docker.com/?p=78571 We’re thrilled to bring NVIDIA DGX™ Spark support to Docker Model Runner. The new NVIDIA DGX Spark delivers incredible performance, and Docker Model Runner makes it accessible. With Model Runner, you can easily run and iterate on larger models right on your local machine, using the same intuitive Docker experience you already trust.

In this post, we’ll show how DGX Spark and Docker Model Runner work together to make local model development faster and simpler, covering the unboxing experience, how to set up Model Runner, and how to use it in real-world developer workflows.

What is NVIDIA DGX Spark

NVIDIA DGX Spark is the newest member of the DGX family: a compact, workstation-class AI system, powered by the Grace Blackwell GB10 Superchip, that delivers incredible performance for local model development. Designed for researchers and developers, it makes prototyping, fine-tuning, and serving large models fast and effortless, all without relying on the cloud.

Here at Docker, we were fortunate to get a preproduction version of  DGX Spark. And yes, it’s every bit as impressive in person as it looks in NVIDIA’s launch materials.

Why Run Local AI Models and How Docker Model Runner and NVIDIA DGX Spark Make It Easy 

Many of us at Docker and across the broader developer community are experimenting with local AI models. Running locally has clear advantages:

  • Data privacy and control: no external API calls; everything stays on your machine
  • Offline availability: work from anywhere, even when you’re disconnected
  • Ease of customization: experiment with prompts, adapters, or fine-tuned variants without relying on remote infrastructure

But there are also familiar tradeoffs:

  • Local GPUs and memory can be limiting for large models
  • Setting up CUDA, runtimes, and dependencies often eats time
  • Managing security and isolation for AI workloads can be complex

This is where DGX Spark and Docker Model Runner (DMR) shine. DMR provides an easy and secure way to run AI models in a sandboxed environment, fully integrated with Docker Desktop or Docker Engine. When combined with the DGX Spark’s NVIDIA AI software stack and large 128GB unified memory, you get the best of both worlds: plug-and-play GPU acceleration and Docker-level simplicity.

Unboxing NVIDIA DGX Spark

The device arrived well-packaged, sleek, and surprisingly small, resembling more a mini-workstation than a server.

Setup was refreshingly straightforward: plug in power, network, and peripherals, then boot into NVIDIA DGX OS, which includes NVIDIA drivers, CUDA, and AI software stack pre-installed.

Nividia 1

Once on the network, enabling SSH access makes it easy to integrate the Spark into your existing workflow.

This way, the DGX Spark becomes an AI co-processor for your everyday development environment, augmenting, not replacing, your primary machine.

Getting Started with Docker Model Runner on NVIDIA DGX Spark

Installing Docker Model Runner on the DGX Spark is simple and can be done in a matter of minutes.

1. Verify Docker CE is Installed

DGX OS comes with Docker Engine (CE) preinstalled. Confirm you have it:

docker version

If it’s missing or outdated, install following the regular Ubuntu installation instructions.

2. Install the Docker Model CLI Plugin

The Model Runner CLI is distributed as a Debian package via Docker’s apt repository. Once the repository is configured (see the linked instructions above), install it with the following commands:

sudo apt-get update
sudo apt-get install docker-model-plugin

Or use Docker’s handy installation script:

curl -fsSL https://get.docker.com | sudo bash

You can confirm it’s installed with:

docker model version

3. Pull and Run a Model

Now that the plugin is installed, let’s pull a model from the Docker Hub AI Catalog. For example, the Qwen 3 Coder model:

docker model pull ai/qwen3-coder

The Model Runner container will automatically expose an OpenAI-compatible endpoint at:

http://localhost:12434/engines/v1

You can verify it’s live with a quick test:

# Test via API
curl http://localhost:12434/engines/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"model":"ai/qwen3-coder","messages":[{"role":"user","content":"Hello!"}]}'

# Or via CLI
docker model run ai/qwen3-coder

GPUs are allocated to the Model Runner container via nvidia-container-runtime and the Model Runner will take advantage of any available GPUs automatically. To see GPU usage:

nvidia-smi

4. Architecture Overview

Here’s what’s happening under the hood:

[ DGX Spark Hardware (GPU + Grace CPU) ]
                  │
      (NVIDIA Container Runtime)
                  │
        [ Docker Engine (CE) ]
                  │
  [ Docker Model Runner Container ]
                  │
      OpenAI-compatible API :12434

The NVIDIA Container Runtime bridges the NVIDIA GB10 Grace Blackwell Superchip drivers and Docker Engine, so containers can access CUDA directly. Docker Model Runner then runs inside its own container, managing the model lifecycle and providing the standard OpenAI API endpoint. (For more info on Model Runner architecture, see this blog).

From a developer’s perspective, you interact with models much like any other Dockerized service — docker model pull, list, inspect, and run all work out of the box.

Using Local Models in Your Daily Workflows

If you’re using a laptop or desktop as your primary machine, the DGX Spark can act as your remote model host. With a few SSH tunnels, you can both access the Model Runner API and monitor GPU utilization via the DGX dashboard, all from your local workstation.

1. Forward the DMR Port (for Model Access)

To access the DGX Spark via SSH, first set up an SSH server:

sudo apt install openssh-server
sudo systemctl enable --now ssh

Run the following command to access Model Runner via your local machine. Replace user with the username you configured when you first booted the DGX Spark and replace dgx-spark.local with the IP address of the DGX Spark on your local network or a hostname configured in /etc/hosts. 

ssh -N -L localhost:12435:localhost:12434 user@dgx-spark.local


This forwards the Model Runner API from the DGX Spark to your local machine.
Now, in your IDE, CLI tool, or app that expects an OpenAI-compatible API, just point it to:

http://localhost:12435/engines/v1

Set the model name (e.g. ai/qwen3-coder) and you’re ready to use local inference seamlessly.
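For tools that read the standard OpenAI environment variables, a minimal sketch is just two exports. Whether a given tool honors OPENAI_BASE_URL (and how it selects the model) varies, so treat the variable names below as assumptions to verify against your tool’s docs:

```shell
# Point OpenAI-compatible clients at the tunnelled Model Runner endpoint.
export OPENAI_BASE_URL=http://localhost:12435/engines/v1
# Model Runner doesn't validate API keys, but many clients require one to be set.
export OPENAI_API_KEY=unused
```

Then select ai/qwen3-coder as the model name in the tool itself (some clients take it via a flag or config file rather than an environment variable).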

2. Forward the DGX Dashboard Port (for Monitoring)

The DGX Spark exposes a lightweight browser dashboard showing real-time GPU, memory, and thermal stats, usually served locally at:

http://localhost:11000

You can forward it through the same SSH session or a separate one:

ssh -N -L localhost:11000:localhost:11000 user@dgx-spark.local

Then open http://localhost:11000 in your browser on your main workstation to monitor the DGX Spark performance while running your models.

Nividia 2



This combination makes the DGX Spark feel like a remote, GPU-powered extension of your development environment. Your IDE or tools still live on your laptop, while model execution and resource-heavy workloads happen securely on the Spark.

Example application: Configuring Opencode with Qwen3-Coder


Let’s make this concrete.

Suppose you use OpenCode, an open-source, terminal-based AI coding agent.

Once your DGX Spark is running Docker Model Runner with ai/qwen3-coder pulled and the port is forwarded, you can configure OpenCode by adding the following to ~/.config/opencode/opencode.json

{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "dmr": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Docker Model Runner",
      "options": {
        "baseURL": "http://localhost:12435/engines/v1"
      },
      "models": {
        "ai/qwen3-coder": { "name": "Qwen3 Coder" }
      }
    }
  },
  "model": "ai/qwen3-coder"
}


Now run opencode and select Qwen3 Coder with the /models command:

Nividia 3


That’s it! Completions and chat requests will be routed through Docker Model Runner on your DGX Spark, meaning Qwen3-Coder now powers your agentic development experience locally.

Nividia 4 1


You can verify that the model is running by opening http://localhost:11000 (the DGX dashboard) to watch GPU utilization in real time while coding.
This setup lets you:

  • Keep your laptop light while leveraging the DGX Spark GPUs
  • Experiment with custom or fine-tuned models through DMR
  • Stay fully within your local environment for privacy and cost-control

Summary

Running Docker Model Runner on the NVIDIA DGX Spark makes it remarkably easy to turn powerful local hardware into a seamless extension of your everyday Docker workflow.

  • Install one plugin and use familiar Docker commands (docker model pull, docker model run).
  • Get full GPU acceleration through NVIDIA’s container runtime.
  • Forward both the model API and the monitoring dashboard to your main workstation for effortless development and visibility.

This setup bridges the gap between developer productivity and AI infrastructure, giving you the speed, privacy, and flexibility of local execution with the reliability and simplicity Docker provides.

As local model workloads continue to grow, the DGX Spark + Docker Model Runner combo represents a practical, developer-friendly way to bring serious AI compute to your desk — no data center or cloud dependency required.

Learn more:

  • Read the official announcement of DGX Spark launch on NVIDIA newsroom
  • Check out the Docker Model Runner General Availability announcement
  • Visit our Model Runner GitHub repo. Docker Model Runner is open-source, and we welcome collaboration and contributions from the community! Star, fork and contribute.

Docker at AI Engineer Paris: Build and Secure AI Agents with Docker
https://www.docker.com/blog/ai-engineer-paris-build-secure-ai-agents/
Mon, 06 Oct 2025

Last week, Docker was thrilled to be part of the inaugural AI Engineer Paris, a spectacular European debut that brought together an extraordinary lineup of speakers and companies. The conference, organized by the Koyeb team, made one thing clear: the days of simply sprinkling ‘AI dust’ on applications are over. Meaningful results demand rigorous engineering, complex data pipelines, a focus on distributed systems, and an understanding of compliance and AI supply chain security.

But the industry’s appetite for automation, and for working effectively with natural language and unstructured data, isn’t going anywhere. AI agents clearly represent the next, inevitable wave of application development.

At Docker, we’re dedicated to ensuring that building, sharing, and securing these new AI-powered applications is as simple and portable as containerizing microservices. That was the core message we shared at the event, showcasing how our tools simplify the entire agent lifecycle from local development to secure deployment at scale.

Keynote on democratizing AI Agents

Tushar Jain, Docker’s EVP Engineering & Product, joined a powerful line-up of Europe’s top AI engineering thought leaders including speakers from Mistral, Google DeepMind, Hugging Face, and Neo4j.

Tushar’s session, “Democratizing AI Agents: Building, Sharing, and Securing Made Simple,” focused on a critical challenge: AI agent development can’t stay locked away with a few specialists. To drive real innovation and productivity across an organization, building agents must be democratized. We believe agents need standardized packaging and developers need a simple, secure way to discover and run MCP servers.

Tushar spoke about how over the last decade, Docker made containers and microservices accessible to every developer. Now we see agents following the same trajectory. Just as containers standardized microservices, we need new tooling and trusted ecosystems to standardize agents. By developing standardized agent packaging and building the MCP Toolkit & Catalog for secure, discoverable tools, Docker is laying the groundwork for the next era of agent-based development.

Hands-On: Building Collaborative Multi-Agent Teams

To help attendees put this vision into practice, we followed up with a hands-on workshop, Building Intelligent Multi-Agent Systems with Docker cagent: From Solo AI to Collaborative Teams. It was a massive hit! Attendees connected with the cagent team and learned how to package and distribute agents as easily as building and pushing Docker images.

The workshop focused on the recently open-sourced cagent and how to use it for common tasks in agent development:

  • Orchestrating specialized AI agent teams that collaborate and delegate tasks intelligently
  • Using cagent to easily package, share, and run existing multi-agent systems created by the community
  • Integrating external tools through the Model Context Protocol (MCP), so agents can access data and effect changes in the real world

If you want to try it yourself, the self-paced version of the workshop is available online: https://cagent-workshop.rumpl.dev/README.html

At the end of the day, during a breakout session, we followed that up with another reality-inspired message in my talk, Building AI workflows: from local experiments to serving users. Whatever technologies you pick for your AI agent implementation, AI applications are distributed systems: a combination of the model, external tools, and your prompts. This means that if you ever aim to move from prototype to production, you shouldn’t develop agents as simple prompts in an AI assistant’s UI. Instead, treat them as you would any other complex architecture: containerize the individual components, factor in security and compliance, and architect for deployment complexity from the start.

Next Steps: Build and Secure Your Agents Today!

All in all, we had plenty of fantastic conversations with the AI Engineer community, which reinforced that developers are looking for tools that offer simplicity, portability, and security for this new wave of applications.

Docker is committed to simplifying agent development and securing MCP deployments at scale.

Learn More
