Docker Hardened Images – Docker (https://www.docker.com)

Announcing Docker Hardened System Packages
https://www.docker.com/blog/announcing-docker-hardened-system-packages/
Tue, 03 Mar 2026 20:30:00 +0000

Your Package Manager, Now with a Security Upgrade

Last December, we made Docker Hardened Images (DHI) free because we believe secure, minimal, production-ready images should be the default. Every developer deserves strong security at no cost. It should not be complicated or locked behind a paywall.

From the start, flexibility mattered just as much as security. Unlike opaque, proprietary hardened alternatives, DHI is built on trusted open source foundations like Alpine and Debian. That gives teams true multi-distro flexibility without forcing change. If you run Alpine, stay on Alpine. If Debian is your standard, keep it. DHI strengthens what you already use. It does not require you to replace it.

Today, we are extending that philosophy beyond images.

With Docker Hardened System Packages, we’re driving security deeper into the stack. Every package is built on the same secure supply chain foundation: source-built and patched by Docker, cryptographically attested, and backed by an SLA.

The best part? Multi-distro support by design.

The result is consistent, end-to-end hardening across environments with the production-grade reliability teams expect.

Since introducing DHI Community (our OSS tier), interest has surged. The DHI catalog has expanded from more than 1,000 to over 2,000 hardened container images. Its openness and ability to meet teams where they are have accelerated adoption across the ecosystem. Companies of all sizes, along with a growing number of open source projects, are making DHI their standard for secure containers.

Just consider this short selection of examples:
  • n8n.io has moved its production infrastructure to DHI; they share why and how in this recent webinar
  • Medplum, an open-source electronic health records platform managing data for more than 20 million patients, has standardized on DHI
  • Adobe uses DHI because it aligns closely with its security posture and is compatible with its developer tooling
  • Attentive co-authored an e-book with Docker on helping others move from POC to production with DHI

Docker Hardened System Packages: Going deeper into the container

From day one, Docker has built and secured the most critical operating system packages to deliver on our CVE remediation commitments. That’s how we continuously maintain near-zero CVEs in DHI images. At the same time, we recognize that many teams extend our minimal base images with additional upstream packages to meet their specific requirements. To support that reality, we are expanding our catalog with more than 8,000 hardened Alpine packages, with Debian coverage coming soon.

This expansion gives teams greater flexibility without weakening their security posture. You can start with a DHI base image and tailor it to your needs while maintaining the same hardened supply chain guarantees. There is no need to switch distros to get continuous patching, verified builds through a SLSA Build Level 3 pipeline, and enterprise-grade assurances. Your teams can continue working with the Alpine and Debian environments they know, now backed by Docker’s secure build system from base image to system package.
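As a rough sketch of what this could look like in a Dockerfile (the image path, tag, and package names below are illustrative assumptions, not confirmed catalog entries):

```dockerfile
# Illustrative sketch: extend a hardened Alpine base with extra
# system packages. Image path and package names are assumptions.
FROM dhi.io/alpine:3

# With Hardened System Packages, additions like these would come from
# Docker's secure build system rather than upstream mirrors.
RUN apk add --no-cache curl ca-certificates
```

The point is that the workflow stays plain apk (or apt on Debian); only the provenance of the packages changes.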

Why this matters for your security posture:

Complete provenance chain. Every package is built from source by Docker, attested, and cryptographically signed. From base image to final container, your provenance stays intact.

Faster vulnerability remediation. When a vulnerability is identified, we patch it at the package level and publish it to the catalog. Not image by image. That means fixes move faster and remediation scales across your entire container fleet.

Extending the near-zero CVE guarantee. DHI images maintain near-zero. Hardened System Packages extend that guarantee more broadly across the software ecosystem, covering packages you add during customization.

Use hardened packages with your containers. DHI Enterprise customers get access to the secure packages repository, making it possible to use Hardened System Packages beyond DHI images. Integrate them into your own pipelines and across Alpine and Debian workloads throughout your environment.

The work we’re doing on our users’ behalf

Maintaining thousands of packages is continuous work. We monitor upstream projects, backport patches, test compatibility, rebuild when dependencies change, and generate attestations for every release. Alpine alone accounts for more than 8,000 packages today, soon approaching 10,000, with Debian next.

Making enterprise-grade security even more accessible

We’re also simplifying how teams access DHI. The full catalog of thousands of open-source images under Apache 2.0 now has a new name: DHI Community. Nothing about the licensing changes; this is purely a rename, so all of that free goodness now has an easy name to refer to.

For teams that need SLA-backed CVE remediation and customization capabilities at a more accessible price point, we’re announcing a new pricing tier today: DHI Select. This tier brings enterprise-grade security at $5,000 per repo.

For organizations with more demanding requirements, including unlimited customizations, access to the Hardened System Packages repo, and extended lifecycle coverage for up to five years after upstream EOL, DHI Enterprise and the DHI Extended Lifecycle Support add-on remain available.

More options mean more teams can adopt the right level of security for where they are today.

Build with the standard that’s redefining container security

Docker’s momentum in securing the software supply chain is accelerating. We’re bringing security to more layers of the stack, making it easier for teams to build securely by default, for open source-based containers as well as your company’s internally-developed software. We’re also pushing toward a one-day (or shorter) timeline for critical CVE fixes. Each step builds on the last, moving us closer to end-to-end supply chain security for all of your critical applications.

Get started:

  • Join the n8n webinar to see how they’re running production workloads on DHI
  • Start your free trial and get access to the full DHI catalog, now with Docker Hardened System Packages

How Medplum Secured Their Healthcare Platform with Docker Hardened Images (DHI)
https://www.docker.com/blog/medplum-healthcare-docker-hardened-images/
Thu, 19 Feb 2026 14:00:00 +0000

Special thanks to Cody Ebberson and the Medplum team for their open-source contribution and for sharing their migration experience with the community. A real-world example of migrating a HIPAA-compliant EHR platform to DHI with minimal code changes.

Healthcare software runs on trust. When patient data is at stake, security isn’t just a feature but a fundamental requirement. For healthcare platform providers, proving that trust to enterprise customers is an ongoing challenge that requires continuous investment in security posture, compliance certifications, and vulnerability management.

That’s why we’re excited to share how Medplum, an open-source healthcare platform serving over 20 million patients, recently migrated to Docker Hardened Images (DHI). This migration demonstrates exactly what we designed DHI to deliver: enterprise-grade security with minimal friction. Medplum’s team made the switch with just 54 lines of changes across 5 files – a near net-zero code change that dramatically improved their security posture.

Medplum is a headless EHR; the platform handles patient data, clinical workflows, and compliance so developers can focus on building healthcare apps. Built by and for healthcare developers, the platform provides:

  • HIPAA and SOC2 compliance out of the box
  • FHIR R4 API for healthcare data interoperability
  • Self-hosted or managed deployment options
  • Support for 20+ million patients across hundreds of practices

With over 500,000 pulls on Docker Hub for their medplum-server image, Medplum has become a trusted foundation for healthcare developers worldwide. As an open-source project licensed under Apache 2.0, their entire codebase, including Docker configurations, is publicly available on GitHub. This transparency made their DHI migration a perfect case study for the community.

Caption: Medplum is a headless EHR; the platform handles patient data, clinical workflows, and compliance so developers can focus on building healthcare apps.

Medplum is developer-first. It’s not a plug-and-play low-code tool; it’s designed for engineering teams that want a strong FHIR-based foundation with full control over the codebase.

The Challenge: Vulnerability Noise and Security Toil

Healthcare software development comes with unique challenges. Integration with existing EHR systems, compliance with regulations like HIPAA, and the need for robust security all add complexity and cost to development cycles.

“The Medplum team found themselves facing a challenge common to many high-growth platforms: ‘Vulnerability Noise.’ Even with lean base images, standard distributions often include non-essential packages that trigger security flags during enterprise audits. For a company helping others achieve HIPAA compliance, every ‘Low’ or ‘Medium’ CVE (Common Vulnerabilities and Exposures) requires investigation and documentation, creating significant ‘security toil’ for their engineering team.”

Reshma Khilnani

CEO, Medplum

Medplum addresses this by providing a compliant foundation. But even with that foundation, their team found themselves facing another challenge common to high-growth platforms: “Vulnerability Noise.”

Healthcare is one of the most security-conscious industries. Medplum’s enterprise customers, including Series C and D funded digital health companies, don’t just ask about security; they actively verify it. These customers routinely scan Medplum’s Docker images as part of their security due diligence.

Even with lean base images, standard distributions often include non-essential packages that trigger security flags during enterprise audits. For a company helping others achieve HIPAA compliance, every “Low” or “Medium” CVE requires investigation and documentation. This creates significant “security toil” for their engineering team.

The First Attempt: Distroless

This wasn’t Medplum’s first attempt at solving the problem. Back in November 2024, the team investigated Google’s distroless images as a potential solution.

The motivations were similar to what DHI would later deliver:

  • Less surface area in production images, and therefore less CVE noise
  • Smaller images for faster deployments
  • Simpler build process without manual hardening scripts

The idea was sound. Distroless images strip away everything except the application runtime: no shell, no package manager, minimal attack surface. On paper, it was exactly what Medplum needed.

But the results were mixed. Image sizes actually increased. Build times went up. There were concerns about multi-architecture support for native dependencies. The PR was closed without merging.

The core problem remained: many CVEs in standard images simply aren’t actionable. Often there isn’t a fix available, so all you can do is document and explain why it doesn’t apply to your use case. And often the vulnerability is in a corner of the image you’re not even using, like Perl, which comes preinstalled on Debian but serves no purpose in a Node.js application.

Fully removing these unused components is the only real answer. The team knew they needed hardened images. They just hadn’t found the right solution yet.

The Solution: Docker Hardened Images

When Docker made Hardened Images freely available under Apache 2.0, Medplum’s team saw an opportunity to simplify their security posture while maintaining compatibility with their existing workflows.

By switching to Docker Hardened Images, Medplum was able to offload the repetitive work of OS-level hardening – like configuring non-root users and stripping out unnecessary binaries – to Docker. This allowed them to provide their users with a “Secure-by-Default” image that meets enterprise requirements without adding complexity to their open-source codebase.

This shift is particularly significant for an open-source project. Rather than maintaining custom hardening scripts that contributors need to understand and maintain, Medplum can now rely on Docker’s expertise and continuous maintenance. The security posture improves automatically with each DHI update, without requiring changes to Medplum’s Dockerfiles.

“By switching to Docker Hardened Images, Medplum was able to offload the repetitive work of OS-level hardening – like configuring non-root users and stripping out unnecessary binaries – to Docker. This allowed them to provide their users with a ‘Secure-by-Default’ image that meets enterprise requirements without adding complexity to their open-source codebase.”

Cody Ebberson

CTO, Medplum

The Migration: Real Code Changes

The migration was remarkably clean. Previously, Medplum’s Dockerfile required manual steps to ensure security best practices. By moving to DHI, they could simplify their configuration significantly.

Let’s look at what actually changed. Here’s the complete server Dockerfile after the migration:

# Medplum production Dockerfile
# Uses Docker "Hardened Images":
# https://hub.docker.com/hardened-images/catalog/dhi/node/guides

# Supported architectures: linux/amd64, linux/arm64

# Stage 1: Build the application and install production dependencies
FROM dhi.io/node:24-dev AS build-stage
ENV NODE_ENV=production
WORKDIR /usr/src/medplum
ADD ./medplum-server-metadata.tar.gz ./
RUN npm ci --omit=dev && \
  rm package-lock.json

# Stage 2: Create the runtime image
FROM dhi.io/node:24 AS runtime-stage
ENV NODE_ENV=production
WORKDIR /usr/src/medplum
COPY --from=build-stage /usr/src/medplum/ ./
ADD ./medplum-server-runtime.tar.gz ./

EXPOSE 5000 8103

ENTRYPOINT [ "node", "--require", "./packages/server/dist/otel/instrumentation.js", "packages/server/dist/index.js" ]

Notice what’s not there:

  • No groupadd or useradd commands: DHI runs as non-root by default
  • No chown commands: permissions are already correct
  • No USER directive: the default user is already non-privileged

Before vs. After: Server Dockerfile

Before (node:24-slim):

FROM node:24-slim
ENV NODE_ENV=production
WORKDIR /usr/src/medplum

ADD ./medplum-server.tar.gz ./

# Install dependencies, create non-root user, and set permissions
RUN npm ci && \
  rm package-lock.json && \
  groupadd -r medplum && \
  useradd -r -g medplum medplum && \
  chown -R medplum:medplum /usr/src/medplum

EXPOSE 5000 8103

# Switch to the non-root user
USER medplum

ENTRYPOINT [ "node", "--require", "./packages/server/dist/otel/instrumentation.js", "packages/server/dist/index.js" ]

After (dhi.io/node:24):

FROM dhi.io/node:24-dev AS build-stage
ENV NODE_ENV=production
WORKDIR /usr/src/medplum
ADD ./medplum-server-metadata.tar.gz ./
RUN npm ci --omit=dev && rm package-lock.json

FROM dhi.io/node:24 AS runtime-stage
ENV NODE_ENV=production
WORKDIR /usr/src/medplum
COPY --from=build-stage /usr/src/medplum/ ./
ADD ./medplum-server-runtime.tar.gz ./

EXPOSE 5000 8103

ENTRYPOINT [ "node", "--require", "./packages/server/dist/otel/instrumentation.js", "packages/server/dist/index.js" ]

The migration also introduced a cleaner multi-stage build pattern, separating metadata (package.json files) from runtime artifacts.

Before vs. After: App Dockerfile (Nginx)

The web app migration was even more dramatic:

Before (nginx-unprivileged:alpine):

FROM nginxinc/nginx-unprivileged:alpine

# Start as root for permissions
USER root

COPY <<EOF /etc/nginx/conf.d/default.conf
# ... nginx config ...
EOF

ADD ./medplum-app.tar.gz /usr/share/nginx/html
COPY ./docker-entrypoint.sh /docker-entrypoint.sh

# Manual permission setup
RUN chown -R 101:101 /usr/share/nginx/html && \
    chown 101:101 /docker-entrypoint.sh && \
    chmod +x /docker-entrypoint.sh

EXPOSE 3000

# Switch back to non-root
USER 101

ENTRYPOINT ["/docker-entrypoint.sh"]

After (dhi.io/nginx:1):

FROM dhi.io/nginx:1

COPY <<EOF /etc/nginx/nginx.conf
# ... nginx config ...
EOF

ADD ./medplum-app.tar.gz /usr/share/nginx/html
COPY ./docker-entrypoint.sh /docker-entrypoint.sh

EXPOSE 3000

ENTRYPOINT ["/docker-entrypoint.sh"]

Results: Improved Security Posture

After merging the changes, Medplum’s team shared their improved security scan results. The migration to DHI resulted in:

  • Dramatically reduced CVE count – DHI’s minimal base means fewer packages to patch
  • Non-root by default – No manual user configuration required
  • No shell access in production – Reduced attack surface for container escape attempts
  • Continuous patching – All DHI images are rebuilt when upstream security updates are available

For organizations that require stronger guarantees, Docker Hardened Images Enterprise adds SLA-backed remediation timelines, image customizations, and FIPS/STIG variants.

Most importantly, all of this was achieved with zero functional changes to the application. The same tests passed, the same workflows worked, and the same deployment process applied.

CI/CD Integration

Medplum also updated their GitHub Actions workflow to authenticate with the DHI registry:

- name: Login to Docker Hub
  uses: docker/login-action@v2.2.0
  with:
    username: ${{ secrets.DOCKERHUB_USERNAME }}
    password: ${{ secrets.DOCKERHUB_TOKEN }}

- name: Login to Docker Hub Hardened Images
  uses: docker/login-action@v2.2.0
  with:
    registry: dhi.io
    username: ${{ secrets.DOCKERHUB_USERNAME }}
    password: ${{ secrets.DOCKERHUB_TOKEN }}

This allows their CI/CD pipeline to pull hardened base images during builds. The same Docker Hub credentials work for both standard and hardened image registries.

The Multi-Stage Pattern for DHI

One pattern worth highlighting from Medplum’s migration is the use of multi-stage builds with DHI variants:

  1. Build stage: Use dhi.io/node:24-dev which includes npm/yarn for installing dependencies
  2. Runtime stage: Use dhi.io/node:24 which is minimal and doesn’t include package managers

This pattern ensures that build tools never make it into the production image, further reducing the attack surface. It’s a best practice for any containerized Node.js application, and DHI makes it straightforward by providing purpose-built variants for each stage.

Medplum’s Production Architecture

Medplum’s hosted offering runs on AWS using containerized workloads. Their medplum/medplum-server image, built on DHI base images, now deploys to production.

Medplum production architecture

Here’s how the build-to-deploy flow works:

  1. Build time: GitHub Actions pulls dhi.io/node:24-dev and dhi.io/node:24 as base images
  2. Push: The resulting hardened image is pushed to medplum/medplum-server on Docker Hub
  3. Deploy: AWS Fargate pulls medplum/medplum-server:latest and runs the hardened container

The deployed containers inherit all DHI security properties (non-root execution, minimal attack surface, no shell) because they’re built on DHI base images. This demonstrates that DHI works seamlessly with production-grade infrastructure including:

  • AWS Fargate/ECS for container orchestration
  • Elastic Load Balancing for high availability
  • Aurora PostgreSQL for managed database
  • ElastiCache for Redis caching
  • CloudFront for CDN and static assets

No infrastructure changes were required. The same deployment pipeline, the same Fargate configuration, just a more secure base image.

Why This Matters for Healthcare

For healthcare organizations evaluating container security, Medplum’s migration offers several lessons:

1. Eliminating “Vulnerability Noise”

The biggest win from DHI isn’t just security; it’s reducing the operational burden of security. Fewer packages mean fewer CVEs to investigate, document, and explain to customers. For teams without dedicated security staff, this reclaimed time is invaluable.

2. Compliance-Friendly Defaults

HIPAA requires covered entities to implement technical safeguards including access controls and audit controls. DHI’s non-root default and minimal attack surface align with these requirements out of the box. For companies pursuing SOC 2 Type 2 certification, which Medplum implemented from Day 1, or HITRUST certification, DHI provides a stronger foundation for the technical controls auditors evaluate.

3. Reduced Audit Surface

When security teams audit container configurations, DHI provides a cleaner story. Instead of explaining custom hardening scripts or why certain CVEs don’t apply, teams can point to Docker’s documented hardening methodology, SLSA Level 3 provenance, and independent security validation by SRLabs. This is particularly valuable during enterprise sales cycles where customers scan vendor images as part of due diligence.

4. Practicing What You Preach

For platforms like Medplum that help customers achieve compliance, using hardened images isn’t just good security; it’s good business. When you’re helping healthcare organizations meet regulatory requirements, your own infrastructure needs to set the example.

5. Faster Security Response

With DHI Enterprise, critical CVEs are patched within 7 days. For healthcare organizations where security incidents can have regulatory implications, this SLA provides meaningful risk reduction and a concrete commitment to share with customers.

Conclusion

Medplum’s migration to Docker Hardened Images demonstrates that improving container security doesn’t have to be painful. With minimal code changes (54 additions and 52 deletions) they achieved:

  • Secure-by-Default images that meet enterprise requirements
  • Automatic non-root execution
  • Dramatically reduced CVE surface
  • Simplified Dockerfiles with no manual hardening scripts
  • Less “security toil” for their engineering team
  • A stronger compliance story for enterprise customers

By offloading OS-level hardening to Docker, Medplum can focus on what they do best: building healthcare infrastructure while their security posture improves automatically with each DHI update.

For a platform with 500,000+ Docker Hub pulls serving healthcare organizations worldwide, this migration shows that DHI is ready for production workloads at scale. More importantly, it shows that security improvements can actually reduce operational burden rather than add to it.

For platforms helping others achieve compliance, practicing what you preach matters. With Docker Hardened Images, that just got a lot easier.

Ready to harden your containers? Explore the Docker Hardened Images documentation or browse the free DHI catalog to find hardened versions of your favorite base images.

Hardened Images Are Free. Now What?
https://www.docker.com/blog/hardened-images-free-now-what/
Tue, 10 Feb 2026 14:00:00 +0000

Docker Hardened Images are now free, covering Alpine, Debian, and over 1,000 images including databases, runtimes, and message buses. For security teams, this changes the economics of container vulnerability management.

DHI includes security fixes from Docker’s security team, which simplifies security response. Platform teams can pull the patched base image and redeploy quickly. But free hardened images raise a question: how should this change your security practice? Here’s how our thinking is evolving at Docker.

What Changes (and What Doesn’t)

DHI gives you a security “waterline.” Below the waterline, Docker owns vulnerability management. Above it, you do. When a scanner flags something in a DHI layer, it’s not actionable by your team. Everything above the DHI boundary remains yours.

The scope depends on which DHI images you use. A hardened Python image covers the OS and runtime, shrinking your surface to application code and direct dependencies. A hardened base image with your own runtime on top sets the boundary lower. The goal is to push your waterline as high as possible.

Vulnerabilities don’t disappear. Below the waterline, you need to pull patched DHI images promptly. Above it, you still own application code, dependencies, and anything you’ve layered on top.

Supply Chain Isolation

DHI provides supply chain isolation beyond CVE remediation.

Community images like python:3.11 carry implicit trust assumptions: no compromised maintainer credentials, no malicious layer injection via tag overwrite, no tampering since your last pull. The Shai Hulud campaign(s) demonstrated the consequences when attackers exploit stolen PATs and tag mutability to propagate through the ecosystem.

DHI images come from a controlled namespace where Docker rebuilds from source with review processes and cooldown periods. Supply chain attacks that burn through community images stop at the DHI boundary. You’re not immune to all supply chain risk, but you’ve eliminated exposure to attacks that exploit community image trust models.

This is a different value proposition than CVE reduction. It’s isolation from an entire class of increasingly sophisticated attacks.

The Container Image as the Unit of Assessment

Security scanning is fragmented. Dependency scanning, SAST, and SCA all run in different contexts, and none has full visibility into how everything fits together at deployment time.

The container image is where all of this converges. It’s the actual deployment artifact, which makes it the checkpoint where you can guarantee uniform enforcement from developer workstation to production. The same evaluation criteria you run locally after docker build can be identical to what runs in CI and what gates production deployments.

This doesn’t need to replace earlier pipeline scanning altogether. It means the image is where you enforce policy consistency and build a coherent audit trail that maps directly to what you’re deploying.

Policy-Driven Automation

Every enterprise has a vulnerability management policy. The gap is usually between policy (PDFs and wikis) and practice (spreadsheets and Jira tickets).

DHI makes that gap easier to close by dramatically reducing the volume of findings that require policy decisions in the first place. When your scanner returns 50 CVEs instead of 500, even basic severity filtering becomes a workable triage system rather than an overwhelming backlog.

A simple, achievable policy might include the following:

  • High and critical severity vulnerabilities require remediation or documented exception
  • Medium and lower severity issues are accepted with periodic review
  • CISA KEV vulnerabilities are always in scope

Most scanning platforms support this level of filtering natively, including Grype, Trivy, Snyk, Wiz, Prisma Cloud, Aqua, and Docker Scout. You define your severity thresholds, apply them automatically, and surface only what requires human judgment.
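As a rough illustration of that triage logic (not tied to any particular scanner’s output format; the finding fields here are assumptions, not a real schema):

```python
# Minimal severity-triage sketch. Finding fields ("id", "severity",
# "in_kev") are illustrative, not any scanner's actual schema.

def needs_action(finding):
    """A finding requires remediation or a documented exception if it
    is high/critical severity, or appears in the CISA KEV catalog."""
    if finding.get("in_kev"):
        return True  # KEV vulnerabilities are always in scope
    return finding.get("severity", "").lower() in ("high", "critical")

def triage(findings):
    """Split scan results into actionable vs. accepted-with-periodic-review."""
    actionable = [f for f in findings if needs_action(f)]
    accepted = [f for f in findings if not needs_action(f)]
    return actionable, accepted

findings = [
    {"id": "CVE-2026-0001", "severity": "Critical", "in_kev": False},
    {"id": "CVE-2026-0002", "severity": "Medium", "in_kev": True},
    {"id": "CVE-2026-0003", "severity": "Low", "in_kev": False},
]
actionable, accepted = triage(findings)
print([f["id"] for f in actionable])  # → ['CVE-2026-0001', 'CVE-2026-0002']
```

The same three-rule policy from the list above, expressed in a dozen lines; everything in `accepted` goes to periodic review instead of the immediate-action queue.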

For teams wanting tighter integration with DHI coverage data, Docker Scout evaluates policies against DHI status directly. Third-party scanners can achieve similar results through pipeline scripting or by exporting DHI coverage information for comparison.

The goal isn’t perfect automation but rather reducing noise enough that your existing policy becomes enforceable without burning out your engineers.

VEX: What You Can Do Today

Docker Hardened Images ship with VEX attestations that suppress CVEs Docker has assessed as not exploitable in context. The natural extension is for your teams to add their own VEX statements for application-layer findings.

Here’s what your security team can do today:

Consume DHI VEX data. Grype (v0.65+), Trivy, Wiz, and Docker Scout all ingest DHI VEX attestations automatically or via flags. Scanners without VEX support can still use extracted attestations to inform manual triage.

Write your own VEX statements. OpenVEX provides the JSON format. Use vexctl to generate and sign statements.
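For concreteness, a minimal OpenVEX document might look like the following (the CVE, product identifier, author, and URLs in the body are placeholders):

```json
{
  "@context": "https://openvex.dev/ns/v0.2.0",
  "@id": "https://example.com/vex/2026-0001",
  "author": "Example Security Team",
  "timestamp": "2026-02-10T00:00:00Z",
  "version": 1,
  "statements": [
    {
      "vulnerability": { "name": "CVE-2024-1234" },
      "products": [
        { "@id": "pkg:oci/example-app@sha256:abc123" }
      ],
      "status": "not_affected",
      "justification": "vulnerable_code_not_in_execute_path"
    }
  ]
}
```

The `status` and `justification` values come from the OpenVEX specification; the justification is what turns an informal “doesn’t apply to us” note into a machine-readable record.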

Attach VEX to images. Docker recommends docker scout attestation add for attaching VEX to images already in a registry:

docker scout attestation add \
  --file ./cve-2024-1234.vex.json \
  --predicate-type https://openvex.dev/ns/v0.2.0 \
  <image>

Alternatively, COPY VEX documents into the image filesystem at build time, though this prevents updates without rebuilding.

Configure scanner VEX ingestion. The workflow: scan, identify investigated findings, document as VEX, feed back into scanner config. Future scans automatically suppress assessed vulnerabilities.

Compliance: What DHI Actually Provides

Compliance frameworks such as ISO 27001, SOC 2, and the EU Cyber Resilience Act require systematic, auditable vulnerability management. DHI addresses specific control requirements:

Vulnerability management documentation (ISO 27001 A.8.8, SOC 2 CC7.1). The waterline model provides a defensible answer to “how do you handle base image vulnerabilities?” Point to DHI, explain the attestation model, and show policy for everything above the waterline.

Continuous monitoring evidence. DHI images rebuild and re-scan on a defined cadence. New digests mean current assessments. Combined with your scanner’s continuous monitoring, you demonstrate ongoing evaluation rather than point-in-time checks.
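One simple way to tie deployments to those current assessments is ordinary digest pinning, which is standard Docker behavior rather than anything DHI-specific (the digest value below is a placeholder):

```dockerfile
# Pin the hardened base by digest: rebuilds and audit evidence then
# reference the exact attested artifact. Digest value is a placeholder.
FROM dhi.io/node:24@sha256:0000000000000000000000000000000000000000000000000000000000000000
```

A tag like `node:24` moves as images rebuild; a digest never does, so it is the natural key to attach scan results and VEX statements to.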

Remediation traceability. VEX attestations create machine-readable records of how each CVE was handled. When auditors ask about specific CVEs in specific deployments, you have answers tied to image digests and timestamps.

CRA alignment. The Cyber Resilience Act requires “due diligence” vulnerability handling and SBOMs. DHI images include SBOM attestations, and VEX aligns with CRA expectations for exploitability documentation.

This won’t satisfy every audit question, but it provides the foundation most organizations lack.

What to Do After You Read This Post

  1. Identify high-volume base images. Check Docker Hub’s Hardened Images catalog (My Hub → Hardened Images → Catalog) for coverage of your most-used images (Python, Node, Go, Alpine, Debian).
  2. Swap one image. Pick a non-critical service, change the FROM line to the DHI equivalent, rebuild, scan, and compare results.
  3. Configure policy-based filtering. Set up your scanner to distinguish DHI-covered vulnerabilities from application-layer findings. Use Docker Scout or Wiz for native VEX integration, or configure Grype/Trivy ignore policies based on extracted VEX data.
  4. Document your waterline. Write down what DHI covers and what remains your responsibility. This becomes your policy reference and audit documentation.
  5. Start a VEX practice. Convert one informally-documented vulnerability assessment into a VEX statement and attach it to the relevant image.

DHI solves specific, expensive problems around base image vulnerabilities and supply chain trust. The opportunity is building a practice around it that scales.

The Bigger Picture

DHI coverage is expanding. Today it might cover your OS layer; tomorrow it extends through runtimes and into hardened libraries. Build your framework to be agnostic to where the boundary sits. The question is always the same: what has Docker attested to, and what remains yours to assess?

The methodology Docker uses for DHI (policy-driven assessment, VEX attestations, auditable decisions) extends into your application layer. We can’t own your custom code, but we can provide the framework for consistent practices above the waterline. Whether you use Scout, Wiz, Grype, Trivy, or another scanner, the pattern is the same. You can let DHI handle what it covers, automate policy for what remains, and document decisions in formats that travel with artifacts.

At Docker, we’re using DHI internally to build this vulnerability management model. The framework stays constant regardless of how much of our stack is hardened today versus a year from now. Only the boundary moves.

The hardened images are free. The VEX attestations are included. What’s left is integrating these pieces into a coherent security practice where the container is the unit of truth, policy drives automation, and every vulnerability decision is documented by default.

For organizations that require stronger guarantees, such as FIPS-enabled and STIG-ready images and customizations, DHI Enterprise is tailor-made for those use cases. Get in touch with the Docker team if you would like a demo. If you’re still exploring, take a look at the catalog (no signup needed) or take DHI Enterprise for a spin with a free trial.

Reduce Vulnerability Noise with VEX: Wiz + Docker Hardened Images https://www.docker.com/blog/reduce-vulnerability-noise-with-vex-wiz-docker-hardened-images/ Thu, 05 Feb 2026 23:25:55 +0000 https://www.docker.com/?p=85085 Open source components power most modern applications. A new generation of hardened container images can establish a more secure foundation, but even with hardened images, vulnerability scanners often return dozens or hundreds of CVEs with little prioritization. This noise slows teams down and complicates security triage. The VEX (Vulnerability Exploitability eXchange) standard addresses the problem by providing information on whether a specific vulnerability actually impacts an organization’s application stack and infrastructure.

A new integration between Docker Hardened Images (DHI) and Wiz CLI now gives security and platform teams accurate reachability insights by analyzing VEX data. Wiz worked with Docker to tune its scanners to properly ingest and parse the VEX statements included with every one of the more than 1,000 DHI images in the catalog. The integration helps users cut through vulnerability noise with scan results that deliver clear, actionable insights.

When the Wiz scanner detects a Docker Hardened Image, it pulls from the image’s VEX documents and OSV advisories to filter out false positives. For organizations already using Wiz, this means a simpler path to adopting hardened images across their container fleet. Finally, for organizations pursuing FedRAMP or other compliance certifications that specify VEX coverage, the ability of Wiz to read DHI VEX statements can accelerate compliance, reducing time to deployment and consequently time to revenue.
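The filtering logic itself is conceptually simple. Here is a toy sketch of the idea (not Wiz’s actual implementation; all IDs are illustrative): findings whose VEX status is not_affected or fixed drop out, and everything else stays actionable.

```python
# Toy model of VEX-based filtering: scanner findings are kept only when
# the VEX data does not rule them out. All data here is illustrative.
vex_statements = {
    "CVE-2024-0001": "not_affected",
    "CVE-2024-0002": "fixed",
    "CVE-2024-0003": "affected",
}

scanner_findings = ["CVE-2024-0001", "CVE-2024-0002", "CVE-2024-0003", "CVE-2024-0004"]

def actionable(findings, vex):
    # Keep findings with no VEX statement (unknown) or an "affected" status
    return [cve for cve in findings if vex.get(cve) not in ("not_affected", "fixed")]

print(actionable(scanner_findings, vex_statements))
# prints ['CVE-2024-0003', 'CVE-2024-0004']
```

Note that unknown CVEs (no VEX statement at all) remain in the actionable list; suppression only ever happens on an explicit, documented status.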

TL;DR

Integrate Docker with Wiz to:

  • Minimize false positives using VEX and OSV data
  • Identify base images and software components more accurately
  • Provide security teams with clear visibility into software bills of materials (SBOMs)
  • Reduce manual validation efforts by integrating detailed issue summaries into your remediation workflows
  • Better image quality assurance with up-to-date package metadata and SPDX snippets
  • Migrate to Docker Hardened Images with greater confidence

Why VEX?

VEX (Vulnerability Exploitability eXchange) is a machine-readable way for software suppliers to state whether a known vulnerability actually affects a specific product. Instead of inferring risk from dependency lists alone, VEX explicitly declares whether a vulnerability is not affected, affected, fixed, or under investigation. This matters because many scanner findings are not exploitable in real products, leading to false positives, wasted effort, and obscured real risk.
VEX enables transparent, auditable vulnerability status that security tools and customers can independently verify, unlike proprietary advisory feeds that obscure context and historical risk.
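Concretely, a VEX document is a small machine-readable file. A minimal sketch in the OpenVEX format (the document ID, author, image reference, and CVE ID are illustrative placeholders):

```json
{
  "@context": "https://openvex.dev/ns/v0.2.0",
  "@id": "https://example.com/vex/example-001",
  "author": "Example Supplier",
  "timestamp": "2026-01-01T00:00:00Z",
  "statements": [
    {
      "vulnerability": { "name": "CVE-2024-0001" },
      "products": [{ "@id": "pkg:oci/example-image" }],
      "status": "not_affected",
      "justification": "vulnerable_code_not_in_execute_path"
    }
  ]
}
```

A scanner that understands VEX reads statements like this and suppresses the matching finding, along with the recorded justification, instead of reporting it as an open vulnerability.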

Before you begin

  • Ensure you have access to both your Docker and Wiz organizations
  • Confirm you are using a Docker Hardened Image
  • Ensure you have SBOM export and scan visibility enabled in Wiz

Identifying Docker Hardened Images via the integration on Wiz

With the integration, Wiz automatically detects Docker Hardened Images. The integration consists of two main functionalities on the Wiz dashboard. First, verify how many resources and organizations are using Docker Hardened Images by following these steps:
  • Navigate to the Wiz Docker integration page and click Connect
  • You’ll be prompted to log in to your Wiz dashboard
  • Once logged in, navigate to the “Inventory” section in the left sidebar of your dashboard
  • You’ll be redirected to the “Technology” dashboard, where Wiz detects all technologies running in customer environments. Search for “Docker Hardened Images” in the search bar
  • Wiz automatically detects the specific operating systems running in each container mount and flags them as hardened images

Checking for vulnerabilities on the Wiz dashboard:

Once you’ve validated that Wiz can identify Docker Hardened Images, you will be able to check for vulnerabilities using Wiz’s security graph and Docker’s container metadata. In order to do that, follow these steps from the technologies tab:

  • Go to the Inventory → Technologies page and filter by operating system, or search for a specific technology
  • Click the OS or technology to view its metadata and resource count
  • Click through to the security graph view showing all resources running that technology
  • Add a condition to filter for CVEs detected on those resources
  • View all resources with their associated vulnerabilities in table or graph format

Final Check

After setup, the vulnerabilities will appear according to your pre-set policies. You’ll be able to get a detailed overview on each CVE listed, including graph visualizations for dependency relationships, severity distribution, and potential exploit paths. These insights will help you prioritize remediation efforts, track resolution progress, and ensure compliance with your organization’s security standards.

Integrating Docker Hardened Images for better software supply chain visibility

The Docker-Wiz integration is more than just a checkbox in your security checklist. It provides:

  • Clarity: VEX documents and accurate base image identification eliminate guesswork, providing clear, contextual vulnerability data.
  • Confidence: Minimized false positives through OSV advisories and Docker-provided metadata ensures security teams can trust what they see.
  • Control: Enhanced visibility into SBOMs and technology usage empowers teams to prioritize and manage remediation effectively.
  • Coverage: Full-stack integration with Wiz surfaces vulnerabilities across all Docker environments, including hardened images and source-built components.

This partnership helps DevSecOps teams move fast and remain proactive against container vulnerabilities, an essential capability for modern, lean teams managing fast-paced releases, open source risk, and complex cloud-native environments.

Ready to Get Started?

If you’re already using Docker Hardened Images and Wiz, you’re just a few clicks away from reducing false positives, improving SBOM visibility, and making vulnerability data more actionable.

Securing the software supply chain shouldn’t be hard. According to theCUBE Research, Docker makes it simple https://www.docker.com/blog/securing-the-software-supply-chain-shouldnt-be-hard-according-to-thecube-research-docker-makes-it-simple/ Tue, 25 Nov 2025 14:04:33 +0000 https://www.docker.com/?p=83324 In today’s software-driven economy, securing software supply chains is no longer optional, it’s mission-critical. Yet enterprises often struggle to balance developer speed and security. According to theCUBE Research, 95% of organizations say Docker improved their ability to identify and remediate vulnerabilities, while 79% rate it highly effective at maintaining compliance with security standards. Docker embeds security directly into the developer workflow so that protection happens by default, not as an afterthought.

At the foundation are Docker Hardened Images, which are ultra-minimal, continuously patched containers that cut the attack surface by up to 95% and achieve near-zero CVEs. These images, combined with Docker Scout’s real-time vulnerability analysis, allow teams to prevent, detect, and resolve issues early, keeping innovation and security in sync. The result: 92% of enterprises report fewer application vulnerabilities, and 60% see reductions of 25% or more.

Docker also secures agentic AI development through the MCP Catalog, Toolkit, and Gateway. These tools provide a trusted, containerized way to run Model Context Protocol (MCP) servers that power AI agents, ensuring communication happens in a secure, auditable, and isolated environment. According to theCUBE Research, 87% of organizations reduced AI setup time by over 25%, and 95% improved AI testing and validation, demonstrating that Docker makes AI development both faster and safer.

With built-in Zero Trust principles, role-based access controls, and compliance support for SOC 2, ISO 27001, and FedRAMP, Docker simplifies adherence to enterprise-grade standards without slowing developers down. The payoff is clear: 69% of enterprises report ROI above 101%, driven in part by fewer security incidents, faster delivery, and improved productivity. In short, Docker’s modern approach to DevSecOps enables enterprises to build, ship, and scale software that’s not only fast, but fundamentally secure.

Docker’s impact on software supply chain security

Docker has evolved into a complete development platform that helps enterprises build, secure, and deploy modern and agentic AI applications with trusted DevSecOps and containerization practices. From Docker Hardened Images, which are secure, minimal, and production-ready container images with near-zero CVEs, to Docker Scout’s real-time vulnerability insights and the MCP Toolkit for trusted AI agents, teams gain a unified foundation for software supply chain security.

Every part of the Docker ecosystem is designed to blend in with existing developer workflows while making security affordable, transparent, and universal. Whether you want to explore the breadth of the Docker Hardened Images catalog, analyze your own image data with Docker Scout, or test secure AI integration through the MCP Gateway, it is easy to see how Docker embeds security by default, not as an afterthought.

Review additional resources

theCUBE Research economic validation of Docker’s development platform https://www.docker.com/blog/thecube-research-economic-validation-of-docker-development-platform/ Thu, 30 Oct 2025 11:46:28 +0000 https://www.docker.com/?p=79874 Docker’s ROI and impact on agentic AI, security, and developer productivity.

theCUBE Research surveyed ~400 IT and AppDev professionals at leading global enterprises to investigate Docker’s ROI and impact on agentic AI development, software supply chain security, and developer productivity.  The industry context is that enterprise developers face mounting pressure to rapidly ship features, build agentic AI applications, and maintain security, all while navigating a fragmented array of development tools and open source code that require engineering cycles and introduce security risks. Docker transformed software development through containers and DevSecOps workflows, and is now doing the same for agentic AI development and software supply chain security.  theCUBE Research quantified Docker’s impact: teams build agentic AI apps faster, achieve near-zero CVEs, remediate vulnerabilities before exploits, ship modern cloud-native applications, save developer hours, and generate financial returns.

Keep reading for key highlights and analysis. Download theCUBE Research report and ebook to take a deep dive.

Agentic AI development streamlined using familiar technologies

Developers can build, run, and share agents and compose agentic systems using familiar Docker container workflows. To do this, developers can build agents safely using Docker MCP Gateway, Catalog, and Toolkit; run agents securely with Docker Sandboxes; and run models with Docker Model Runner. These capabilities align with theCUBE Research findings that 87% of organizations reduced AI setup time by over 25% and 80% report accelerating AI time-to-market by at least 26%. Using Docker’s modern and secure software delivery practices, development teams can implement AI feature experiments faster and test, in days, agentic AI capabilities that previously took months. Nearly 78% of developers experienced significant improvement in the standardization and streamlining of AI development workflows, enabling better testing and validation of AI models. Docker helps enterprises generate business advantages by deploying new customer experiences that leverage agentic AI applications. This is phenomenal, given the nascent stage of agentic AI development in enterprises.

Software supply chain security and innovation can move in lockstep

Security engineering and vulnerability remediation can slow development to a crawl. Furthermore, checkpoints or controls may be applied too late in the software development cycle, or after dangerous exploits, creating compounded friction between security teams seeking to mitigate vulnerabilities and developers seeking to rapidly ship features. Docker embeds security directly into development workflows through vulnerability analysis and continuously-patched certified container images. theCUBE Research analysis supports these Docker security capabilities: 79% of organizations find Docker extremely or very effective at maintaining security & compliance, while 95% of respondents reported that Docker improved their ability to identify and remediate vulnerabilities. By making it very simple for developers to use secure images as a default, Docker enables engineering teams to plan, build, and deploy securely without sacrificing feature velocity or creating deployment bottlenecks. Security and innovation can move in lockstep because Docker concurrently secures software supply chains and eliminates vulnerabilities.

Developer productivity becomes a competitive advantage

Consistent container environments eliminate friction, accelerate software delivery cycles, and enable teams to focus on building features rather than overcoming infrastructure challenges. When developers spend less time on environment setup and troubleshooting, they ship more features. Application features that previously took months now reach customers in weeks. The research demonstrates Docker’s ability to increase developer productivity. 72% of organizations reported significant productivity gains in development workflows, while 75% have transformed or adopted DevOps practices when using Docker. Furthermore, when it comes to AI and supply chain security, the findings mentioned above further support how Docker unlocks developer productivity.

Financial returns exceed expectations

CFOs demand quantifiable returns for technology investments, and Docker delivers them. 95% of organizations reported substantial annual savings, with 43% reporting $50,000-$250,000 in cost reductions from infrastructure efficiency, reduced rework, and faster time-to-market. The ROI story is equally compelling: 69% of organizations report ROI exceeding 101%, with many achieving ROI above 500%. When factoring in faster feature delivery, improved developer satisfaction, and reduced security incidents, the business case for Docker becomes even more tangible. The direct costs of a security breach can surpass $500 million, so mitigating even a fraction of this cost provides a compelling financial justification for enterprises to deploy Docker to every developer.

Modernization and cloud native apps remain top of mind

For enterprises that maintain extensive legacy systems, Docker serves as a proven catalyst for cloud-native transformation at scale. Results show that nearly nine in ten organizations (88%) report Docker has enabled modernization of at least 10% of their applications, with half achieving modernization across 31-60% of workloads and another 20% modernizing over 60%. Docker accelerates the shift from monolithic architectures to modern containerized cloud-native environments while also delivering substantial business value. For example, 37% of organizations report 26% to >50% faster product time-to-market, and 72% report annual cost savings ranging from $50,000 to over $1 million.

Learn more about Docker’s impact on enterprise software development

Docker has evolved from a containerization suite into a development platform for testing, building, securing, and deploying modern software, including agentic AI applications. Docker enables enterprises to apply proven containerization and DevSecOps practices to agentic AI development and software supply chain security.

Download (below) the full report and the ebook from theCUBE Research analysis to learn Docker’s impact on developer productivity, software supply chain security, agentic AI application development, CI/CD and DevSecOps, modernization, cost savings, and ROI.  Learn how enterprises leverage Docker to transform application development and win in markets where speed and innovation determine success.

theCUBE Research economic validation of Docker’s development platform

> Download the Report

> Download the eBook


Expanding Docker Hardened Images: Secure Helm Charts for Deployments https://www.docker.com/blog/docker-hardened-images-helm-charts-beta/ Mon, 29 Sep 2025 20:02:52 +0000 https://www.docker.com/?p=78309 Development teams are under growing pressure to secure their software supply chains. Teams need trusted images, streamlined deployments, and compliance-ready tooling from partners they can rely on long term. Our customers have made it clear that they’re not just looking for one-off vendors. They’re looking for true security partners across development and deployment.

That’s why we are now offering Helm charts in the Docker Hardened Images (DHI) Catalog. These charts simplify Kubernetes deployments and make Docker a trusted security partner across the development and deployment lifecycle.

Bringing security and simplicity to Helm deployments

Helm charts are the most popular way to package and deploy applications to Kubernetes, with 75% of users preferring to use them, according to CNCF surveys. With security incidents making headlines more often, confidence now depends on having security and traceability built into every deployment.

Helm charts in the DHI Catalog make it simple to deploy hardened images to production Kubernetes environments. Teams no longer need to worry about insecure configurations, unverified sources, or vulnerable dependencies. Each chart is built with our hardened build system, providing signed provenance and clear traceability so you know exactly what you are deploying every time.

Supporting customers in the wake of Broadcom changes

Broadcom recently announced changes to Bitnami’s distribution model. Most images and charts have moved into a commercial subscription, older versions are archived without updates, and only a limited set of :latest tags remain free for use.

For teams affected by this change, Docker offers a clear path forward:

  • Free Docker Official Images, which can be paired with upstream Helm charts for stable, open source deployments
  • Docker Hardened Images with Helm charts in the DHI Catalog for enterprise-grade security and compliance

Many teams have relied on Bitnami for images and charts. Helm charts in the DHI Catalog now give teams the option to partner with Docker for secure, compliant deployments, with consistent coverage from development through deployment.

If your team is evaluating alternatives, we invite you to join the beta program. Sign up through our interest form to test Helm charts in the DHI Catalog and help guide their development.

What Helm charts in the DHI Catalog offer

Helm charts in the DHI Catalog are available today in beta. Beta offerings are early versions of future functionality that give customers the opportunity to test, validate, and share feedback. Your input directly shapes how we refine these charts before general availability.

The Helm charts in the DHI Catalog include:

  • DHI by default: Every chart automatically references Docker Hardened Images, ensuring deployments inherit DHI’s security, compliance, and SLA-backed patching without manual intervention.
  • Regular updates: New upstream versions and DHI CVE fixes automatically flow into chart releases.
  • Enterprise-grade security: Charts are built with our SLSA Level 3 build system and include signed provenance for compliance.
  • Customer-driven roadmap: We are guided by your feedback, so your input has a direct impact on what we prioritize.

Docker’s Trusted Image Catalogs: DHI and more

It’s worth noting that whether you’re looking for community continuity or enterprise-grade assurance, Docker has you covered:

Docker Official Images (DOI)

  • Free and widely available
  • Maintained with upstream communities
  • Billions of pulls every month
  • Stable, trustworthy foundation

Docker Hardened Images (DHI)

  • Enterprise-ready
  • Minimal, non-root by default, near-zero CVEs
  • SLA-backed with fast CVE patching
  • Compliance-ready with signed provenance and SBOMs

Together, DOI and DHI give organizations choice: a free, stable foundation for development, or an enterprise-grade hardened catalog with charts for production. If you rely on Docker Official Images, rest assured: they remain free, stable, and community-driven. You can rely on them for a solid foundation for your open source workloads.

Join the beta: Help shape Helm charts in the DHI Catalog

Helm charts in the DHI Catalog are now in invite-only beta as of October 2025. We are working closely with a set of customers to prioritize which charts matter most and ensure migration is smooth.

Participation is open via our interest form, and we welcome your feedback.

Sign up for the beta today! 

Broadcom’s New Bitnami Restrictions? Migrate Easily with Docker https://www.docker.com/blog/broadcoms-new-bitnami-restrictions-migrate-easily-with-docker/ Sat, 30 Aug 2025 23:19:29 +0000 https://www.docker.com/?p=76119 For years, Bitnami has played a vital role in the open source and cloud-native community, making it easier for developers and operators to deploy popular applications with reliable, prebuilt container images and Helm charts. Countless teams have benefited from their work standardizing installation and updates for everything from WordPress to PostgreSQL. We want to acknowledge and thank Bitnami’s contributors for that important contribution.

Recently, however, Bitnami announced significant changes to how their images are distributed. Starting this month, access to most versioned images will move behind a paid subscription under Bitnami Secure Images (BSI), with only the :latest tags remaining free. Older images are being shifted into a Bitnami Legacy archive that will no longer receive updates. For many teams, this raises real challenges around cost, stability, and compliance.

Docker remains committed to being a trusted partner for developers and enterprises alike. Docker Official Images (DOI) are one of the two most widely used catalogs of open source container images in the world, and by far the more adopted of the two. While Bitnami has been valuable to the community, Docker Official Images see billions of pulls every month and are trusted by developers, maintainers, and enterprises globally. This is the standard foundation teams already rely on.

For production environments that require added security and compliance, Docker Hardened Images (DHI) are a seamless drop-in replacement for DOI. They combine the familiarity and compatibility of DOI with enterprise-ready features: minimal builds, non-root by default, signed provenance, and near-zero-CVE baselines. Unlike Bitnami’s new paid model, DHI is designed to be affordable and transparent, giving organizations the confidence they need without unpredictable costs.

Bitnami’s Access Changes Are Already Underway

On July 16, Broadcom’s Bitnami team announced changes to their container image distribution model, effective September 29. Here’s what’s changing:

  • Freely built and available images and Helm charts are going away. The bitnami organization will be deleted.
  • New Bitnami Secure Images offering. Users who want to use Bitnami images will need a paid subscription to the new Bitnami Secure Images offering, hosted on the Bitnami registry. This provides access to stable tags and version history.
  • Free tier of Bitnami Secure Images. The bitnamisecure org has been created to provide a set of hardened, more secure images. Only the :latest tags will be available and the images are intended for development purposes only.
  • Unsupported legacy fallback. Older images are moved to a “Bitnami Legacy Registry”, available on Docker Hub in the bitnamilegacy org. These images are unsupported, will no longer receive updates or patches, and are intended to be used while making plans for alternatives.
  • Image and Helm chart source still available. While the built artifacts won’t be published, organizations will still be able to access the source code for Debian-based images and Helm charts. They can build and publish these on their own.

The timeline is tight too. Brownouts have already begun, and the public catalog deletion is set for September 29, 2025.

What Bitnami Users Need to Know

For many teams, this means Helm charts, CI/CD pipelines, and Kubernetes clusters relying on Bitnami will soon face broken pulls, compliance risks, or steep new costs.

The community reaction has been strong. Developers and operators voice concerns around:

  • Trust and stability concerns. Many see this as a “bait and switch,” with long-standing free infrastructure suddenly paywalled.
  • Increased operational risk. Losing version pinning or relying on :latest tags introduces deployment chaos, security blind spots, and audit failures.
  • Cost and budget pressure. Early pricing reports suggest that for organizations running hundreds of workloads, Bitnami’s new model could mean six-figure annual costs.

In short: teams depending on Bitnami for reliable, stable images and Helm charts now face an urgent decision.

Your Fastest Path Forward: Docker

At Docker, we believe developers and enterprises deserve choice, stability, and security. That’s why we continue to offer two strong paths forward:

Docker Official Images – Free and Widely Available

Docker is committed to building and maintaining its Docker Official Images catalog. This catalog is:

  • Fully supported with a dedicated team. This team reviews, publishes, and maintains the Docker Official Images.
  • Focused on collaboration. The team works with upstream software maintainers, security experts, and the broader Docker community to ensure images work, are patched, and support the needs of the Docker community.
  • Trusted by millions of developers worldwide. The Docker Official Images are pulled billions of times per month for development, learning, and production.

Docker Hardened Images – Secure, Minimal, Production-Ready

Docker Hardened Images are secure, production-ready container images designed for enterprise use.

  • Smaller images, near-zero known CVEs. Start with images that are up to 95% smaller, with fewer packages and a much-reduced attack surface.
  • Fast, SLA-backed remediation. Critical and High severity CVEs are patched within 7 days, faster than typical industry response times, and backed by an enterprise-grade SLA.
  • Multi-distro support. Use the distros you’re familiar with, including trusted Linux distros like Alpine and Debian.
  • Signed provenance, SBOMs, and VEX data. Attestations that give you compliance confidence.
  • SLSA Level 3 builds, non-root by default, distroless options. Secure-by-default practices throughout.
  • Self-service customization. Add certificates, packages, environment variables, and other configuration right into the build pipelines without forking or secondary patching.
  • Fully integrated into Docker Hub for a familiar developer workflow.

Start Your Move Today

If your organization is affected by the Bitnami changes, we are here to help. Docker offers you a fast path forward:

  1. Audit your Bitnami dependencies. Identify which images you’re pulling.
  2. Choose your path. Explore the Docker Official Images catalog or learn more about Docker Hardened Images. Many of the Bitnami images can be easily swapped with images from either catalog.
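Step 1 above can start with a plain text search across your repositories. A minimal sketch, run here against a sample directory as a stand-in for your real code (point the scan root at your actual repos):

```python
# Sketch of step 1: inventory Bitnami image references before migrating.
# Scans Dockerfiles and YAML manifests under a root directory.
import os
import re

def find_bitnami_refs(root):
    hits = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            if name.startswith("Dockerfile") or name.endswith((".yaml", ".yml")):
                path = os.path.join(dirpath, name)
                with open(path) as f:
                    for lineno, line in enumerate(f, 1):
                        if re.search(r"\bbitnami/", line):
                            hits.append((path, lineno, line.strip()))
    return hits

# Demo against a sample tree (stand-in for your real repositories)
os.makedirs("/tmp/bitnami-audit", exist_ok=True)
with open("/tmp/bitnami-audit/Dockerfile", "w") as f:
    f.write("FROM bitnami/postgresql:16.3.0\n")

for path, lineno, line in find_bitnami_refs("/tmp/bitnami-audit"):
    print(f"{path}:{lineno}: {line}")
# prints: /tmp/bitnami-audit/Dockerfile:1: FROM bitnami/postgresql:16.3.0
```

The resulting list of file-and-line hits is exactly the inventory you need for step 2: each match is a candidate for a swap to a Docker Official Image or Docker Hardened Image.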

Need help?
Contact our sales team to learn how Docker Hardened Images can provide secure, production-ready images at scale.

    Secure by Design: A Proactive Testing Approach with Testcontainers, Docker Scout, and Hardened Images https://www.docker.com/blog/a-shift-left-approach-with-docker/ Thu, 28 Aug 2025 13:00:00 +0000 https://www.docker.com/?p=76011

    In today’s fast-paced world of software development, product teams are expected to move quickly: building features, shipping updates, and reacting to user needs in real-time. But moving fast should never mean compromising on quality or security.

    Thanks to modern tooling, developers can now maintain high standards while accelerating delivery. In a previous article, we explored how Testcontainers supports shift-left testing by enabling fast and reliable integration tests within the inner dev loop. In this post, we’ll look at the security side of this approach and how Docker can help move security earlier in the development lifecycle, using practical examples.

    Testing a Movie Catalog API with Security Built In

    We’ll use a simple demo project to walk through our workflow. This is a Node.js + TypeScript API backed by PostgreSQL and tested with Testcontainers.

    Movie API Endpoints:

    Method | Endpoint           | Description
    -------|--------------------|-----------------------------------------------------
    POST   | /movies            | Add a new movie to the catalog
    GET    | /movies            | Retrieve all movies, sorted by title
    GET    | /movies/search?q=… | Search movies by title or description (fuzzy match)

    Before deploying this app to production, we want to make sure it functions correctly and is free from critical vulnerabilities.

    Testing Code with Testcontainers: Recap

    We verify the application against a real PostgreSQL instance by using Testcontainers to spin up containers for both the database and the application. A key advantage of Testcontainers is that it creates these containers dynamically during test execution. Another feature of the Testcontainers libraries is the ability to start containers directly from a Dockerfile. This allows us to run the containerized application along with any required services, such as databases, effectively reproducing the local environment needed to test the application at the API or end-to-end (E2E) level. This approach provides an additional layer of quality assurance, bringing even more testing into the inner development loop.

    For a more detailed explanation of how Testcontainers enables a proactive testing approach in the developer inner loop, refer to the introductory blog post.

    Here’s a beforeAll setup that prepares our test environment, including PostgreSQL and the application under development, started from the Dockerfile:

    beforeAll(async () => {
      const network = await new Network().start();

      // 1. Start Postgres
      db = await new PostgreSqlContainer("postgres:17.4")
        .withNetwork(network)
        .withNetworkAliases("postgres")
        .withDatabase("catalog")
        .withUsername("postgres")
        .withPassword("postgres")
        .withCopyFilesToContainer([
          {
            source: path.join(__dirname, "../dev/db/1-create-schema.sql"),
            target: "/docker-entrypoint-initdb.d/1-create-schema.sql",
          },
        ])
        .start();

      // 2. Build the movie catalog API container from the Dockerfile
      const container = await GenericContainer
        .fromDockerfile("../movie-catalog")
        .withTarget("final")
        .withBuildkit()
        .build();

      // 3. Start the movie catalog API container with environment variables for the DB connection
      app = await container
        .withNetwork(network)
        .withExposedPorts(3000)
        .withEnvironment({
          PGHOST: "postgres",
          PGPORT: "5432",
          PGDATABASE: "catalog",
          PGUSER: "postgres",
          PGPASSWORD: "postgres",
        })
        .withWaitStrategy(Wait.forListeningPorts())
        .start();
    }, 120000);
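
Inside the container, the service can assemble its database connection settings from those PG* variables. A minimal sketch (the variable names mirror the environment above; the helper itself and its defaults are assumptions, not the app's actual code):

```javascript
// Hypothetical helper: build a pg-style connection config from the
// PGHOST/PGPORT/PGDATABASE/PGUSER/PGPASSWORD variables set above.
// Defaults are assumptions for illustration.
function pgConfigFromEnv(env = process.env) {
  return {
    host: env.PGHOST || "localhost",
    port: parseInt(env.PGPORT || "5432", 10),
    database: env.PGDATABASE || "catalog",
    user: env.PGUSER || "postgres",
    password: env.PGPASSWORD || "postgres",
  };
}

const config = pgConfigFromEnv({ PGHOST: "postgres", PGPORT: "5432" });
console.log(config.host, config.port); // postgres 5432
```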
    

    We can now test the movie catalog API:

    it("should create and retrieve a movie", async () => {
      const baseUrl = `http://${app.getHost()}:${app.getMappedPort(3000)}`;
      const payload = {
        title: "Interstellar",
        director: "Christopher Nolan",
        genres: ["sci-fi"],
        releaseYear: 2014,
        description: "Space and time exploration",
      };

      const response = await axios.post(`${baseUrl}/movies`, payload);
      expect(response.status).toBe(201);
      expect(response.data.title).toBe("Interstellar");
    }, 120000);
    

    This approach allows us to validate that:

    • The application is properly containerized and starts successfully.
    • The API behaves correctly in a containerized environment with a real database.
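
We could pin down the "sorted by title" contract from the endpoint list the same way. Here's a sketch of the expected ordering, assuming a locale-aware, case-insensitive comparison (an assumption for illustration; the real ordering would come from the database's ORDER BY):

```javascript
// Sketch of the ordering GET /movies is expected to return,
// assuming a locale-aware, case-insensitive sort on title.
// The real service orders rows in SQL.
const byTitle = (a, b) =>
  a.title.localeCompare(b.title, "en", { sensitivity: "base" });

const movies = [
  { title: "interstellar" },
  { title: "Dunkirk" },
  { title: "Inception" },
];

console.log(movies.slice().sort(byTitle).map((m) => m.title));
// [ 'Dunkirk', 'Inception', 'interstellar' ]
```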

    However, that’s just one part of the quality story. Now, let’s turn our attention to the security aspects of the application under development.

    Introducing Docker Scout and Docker Hardened Images 

    To follow modern best practices, we want to containerize the app and eventually deploy it to production. Before doing so, we must ensure the image is secure by using Docker Scout.

    Our Dockerfile takes a multi-stage build approach and is based on the node:22-slim image.

    ###########################################################
    # Stage: base
    # This stage serves as the base for all of the other stages.
    # By using this stage, it provides a consistent base for both
    # the dev and prod versions of the image.
    ###########################################################
    FROM node:22-slim AS base
    WORKDIR /usr/local/app
    RUN useradd -m appuser && chown -R appuser /usr/local/app
    USER appuser
    COPY --chown=appuser:appuser package.json package-lock.json ./
    
    ###########################################################
    # Stage: dev
    # This stage is used to run the application in a development
    # environment. It installs all app dependencies and will
    # start the app in a dev mode that will watch for file changes
    # and automatically restart the app.
    ###########################################################
    FROM base AS dev
    ENV NODE_ENV=development
    RUN npm ci --ignore-scripts
    COPY --chown=appuser:appuser ./src ./src
    EXPOSE 3000
    CMD ["npx", "nodemon", "src/app.js"]
    
    ###########################################################
    # Stage: final
    # This stage serves as the final image for production. It
    # installs only the production dependencies.
    ###########################################################
    # Deps: install only prod deps
    FROM base AS prod-deps
    ENV NODE_ENV=production
    RUN npm ci --production --ignore-scripts && npm cache clean --force
    # Final: clean prod image
    FROM base AS final
    WORKDIR /usr/local/app
    COPY --from=prod-deps --chown=appuser:appuser /usr/local/app/node_modules ./node_modules
    COPY --chown=appuser:appuser ./src ./src
    EXPOSE 3000
    CMD [ "node", "src/app.js" ]
    

    Let’s build our image with SBOM and provenance metadata. First, make sure that the containerd image store is enabled in Docker Desktop. We’ll also use the buildx command (a Docker CLI plugin that extends docker build) with the --provenance=true and --sbom=true flags. These options attach build attestations to the image, which Docker Scout uses to provide more detailed and accurate security analysis.

    docker buildx build --provenance=true --sbom=true -t movie-catalog-service:v1 .
    

    Then set up a Docker organization with security policies and scan the image with Docker Scout: 

    docker scout config organization demonstrationorg
    docker scout quickview movie-catalog-service:v1 
    
    Figure 1: Docker Scout CLI quickview output for the node:22-slim based movie-catalog-service image


    Docker Scout also offers a visual analysis via Docker Desktop.

    Figure 2: Image layers and CVEs view in Docker Desktop for the node:22-slim based movie-catalog-service image


    In this example, no vulnerabilities were found in the application layer. However, several CVEs were introduced by the node:22-slim base image, including CVE-2025-6020, a high-severity vulnerability present in Debian 12. This means that any Node.js image based on Debian 12 inherits this vulnerability. A common way to address this is to switch to an Alpine-based Node image, which does not include this CVE. However, Alpine uses musl libc instead of glibc, which can lead to compatibility issues depending on your application’s runtime requirements and deployment environment.

    So, what’s a more secure and compatible alternative?

    That’s where Docker Hardened Images (DHI) come in. These images follow a distroless philosophy, removing unnecessary components to significantly reduce the attack surface. The result? Smaller images that pull faster, run leaner, and provide a secure-by-default foundation for production workloads:

    • Near-zero exploitable CVEs: Continuously updated, vulnerability-scanned, and published with signed attestations to minimize patch fatigue and eliminate false positives.
    • Seamless migration: Drop-in replacements for popular base images, with -dev variants available for multi-stage builds.
    • Up to 95% smaller attack surface: Unlike traditional base images that include full OS stacks with shells and package managers, distroless images retain only the essentials needed to run your app.
    • Built-in supply chain security: Each image includes signed SBOMs, VEX documents, and SLSA provenance for audit-ready pipelines.

    For developers, DHI means fewer CVE-related disruptions, faster CI/CD pipelines, and trusted images you can use with confidence.

    Making the Switch to Docker Hardened Images

    Switching to a Docker Hardened Image is straightforward. All we need to do is replace the base image node:22-slim with a DHI equivalent.

    Docker Hardened Images come in two variants:

    • Dev variant (demonstrationorg/dhi-node:22-dev) – includes a shell and package managers, making it suitable for building and testing.
    • Runtime variant (demonstrationorg/dhi-node:22) – stripped down to only the essentials, providing a minimal and secure footprint for production.

    This makes them perfect for use in multi-stage Dockerfiles. We can build the app in the dev image, then copy the built application into the runtime image, which will serve as the base for production.

    Here’s what the updated Dockerfile would look like:

    ###########################################################
    # Stage: base
    # This stage serves as the base for all of the other stages.
    # By using this stage, it provides a consistent base for both
    # the dev and prod versions of the image.
    ########################################################### 
    # Changed node:22-slim to dhi-node:22-dev
    FROM demonstrationorg/dhi-node:22-dev AS base
    WORKDIR /usr/local/app
    # DHI comes with nonroot user built-in. 
    COPY --chown=nonroot package.json package-lock.json ./
    
    ###########################################################
    # Stage: dev
    # This stage is used to run the application in a development
    # environment. It installs all app dependencies and will
    # start the app in a dev mode that will watch for file changes
    # and automatically restart the app.
    ###########################################################
    FROM base AS dev
    ENV NODE_ENV=development
    RUN npm ci --ignore-scripts
    # DHI comes with nonroot user built-in.
    COPY --chown=nonroot ./src ./src
    EXPOSE 3000
    CMD ["npx", "nodemon", "src/app.js"]
    
    ###########################################################
    # Stage: final
    # This stage serves as the final image for production. It
    # installs only the production dependencies.
    ###########################################################
    # Deps: install only prod deps
    FROM base AS prod-deps
    ENV NODE_ENV=production
    RUN npm ci --production --ignore-scripts && npm cache clean --force
    # Final: clean prod image
    # Changed base to dhi-node:22
    FROM demonstrationorg/dhi-node:22 AS final
    WORKDIR /usr/local/app
    COPY --from=prod-deps --chown=nonroot /usr/local/app/node_modules ./node_modules
    COPY --chown=nonroot ./src ./src
    EXPOSE 3000
    CMD [ "node", "src/app.js" ]
    

    Let’s rebuild and scan the new image:

    docker buildx build --provenance=true --sbom=true -t movie-catalog-service-dhi:v1 .
    docker scout quickview movie-catalog-service-dhi:v1 
    
    Figure 3: Docker Scout CLI quickview output for the dhi-node:22 based movie-catalog-service image

    As you can see, all critical and high CVEs are gone, thanks to the clean and minimal footprint of the Docker Hardened Image.

    One of the key benefits of using DHI is the security SLA it provides. If a new CVE is discovered, the DHI team commits to resolving:

    • Critical and high vulnerabilities within 7 days of a patch becoming available,
    • Medium and low vulnerabilities within 30 days.

    This means you can significantly reduce your CVE remediation burden and give developers more time to focus on innovation and feature development instead of chasing vulnerabilities.
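
To illustrate, the SLA windows translate into concrete remediation deadlines. Here's a hypothetical helper (the severity-to-days mapping follows the SLA above; the function itself is an assumption for illustration, not part of any Docker tooling):

```javascript
// Hypothetical helper encoding the DHI remediation SLA:
// critical/high -> 7 days, medium/low -> 30 days,
// counted from the date a patch becomes available.
const SLA_DAYS = { critical: 7, high: 7, medium: 30, low: 30 };

function remediationDeadline(severity, patchAvailable) {
  const days = SLA_DAYS[severity.toLowerCase()];
  if (days === undefined) throw new Error(`unknown severity: ${severity}`);
  const deadline = new Date(patchAvailable);
  deadline.setUTCDate(deadline.getUTCDate() + days);
  return deadline;
}

const d = remediationDeadline("high", new Date(Date.UTC(2025, 0, 1)));
console.log(d.toISOString().slice(0, 10)); // 2025-01-08
```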

    Comparing images with Docker Scout

    Let’s also look at the image size and package count advantages of using distroless Hardened Images.

    Docker Scout offers a helpful command, docker scout compare, that allows you to analyze and compare two images. We’ll use it to evaluate the difference in size and package footprint between the node:22-slim and dhi-node:22 based images.

    docker scout compare local://movie-catalog-service:v1 --to local://movie-catalog-service-dhi:v1 
    
    Figure 4: Comparison of the node:22-slim and dhi-node:22 based movie-catalog-service images

    As you can see, the original node:22-slim based image was 80 MB in size and included 427 packages, while the dhi-node:22 based image is just 41 MB with only 123 packages. 

    By switching to a Docker Hardened Image, we reduced the image size by nearly 50 percent and cut the package count by more than a factor of three, significantly reducing the attack surface.
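
Those figures are easy to verify with quick arithmetic on the numbers Scout reported:

```javascript
// Quick arithmetic on the docker scout compare figures above:
// node:22-slim based image: 80 MB, 427 packages
// dhi-node:22 based image:  41 MB, 123 packages
const sizeReduction = (80 - 41) / 80; // fraction of image size removed
const packageRatio = 427 / 123;       // package-count shrink factor

console.log(`${(sizeReduction * 100).toFixed(1)}% smaller`); // 48.8% smaller
console.log(`${packageRatio.toFixed(1)}x fewer packages`);   // 3.5x fewer packages
```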

    Final Step: Validate with local API tests

    Last but not least, after migrating to a DHI base image, we should verify that the application still functions as expected.

    Since we’ve already implemented Testcontainers-based tests, we can easily ensure that the API remains accessible and behaves correctly.

    Let’s run the tests using the npm test command. 

    Figure 5: Local API test execution results

    As you can see, the container was built and started successfully. In less than 20 seconds, we were able to verify that the application functions correctly and integrates properly with Postgres.

    At this point, we can push the changes to the remote repository, confident that the application is both secure and fully functional, and move on to the next task. 

    Further integration with external security tools

    In addition to providing a minimal and secure base image, Docker Hardened Images include a comprehensive set of attestations. These include a Software Bill of Materials (SBOM), which details all components, libraries, and dependencies used during the build process, as well as Vulnerability Exploitability eXchange (VEX). VEX offers contextual insights into vulnerabilities, specifying whether they are actually exploitable in a given environment, helping teams prioritize remediation.

    Let’s say you’ve committed your code changes, built the application, and pushed a container image. Now you want to verify the security posture using an external scanning tool you already use, such as Grype or Trivy. That requires vulnerability information in a compatible format, which Docker Scout can generate for you.

    First, you can view the list of available attestations using the docker scout attest command:

    docker scout attest list demonstrationorg/movie-catalog-service-dhi:v1 --platform linux/arm64
    

    This command returns a detailed list of attestations bundled with the image. For example, you might see two OpenVEX files: one for the DHI base image and another for any custom exceptions (like no-dsa) specific to your image.

    Then, to integrate this information with external tools, you can export the VEX data into a vex.json file. Starting with Docker Scout v1.18.3, you can use the docker scout vex get command to retrieve the merged VEX document from all VEX attestations:

    docker scout vex get demonstrationorg/movie-catalog-service-dhi:v1 --output vex.json
    

    This generates a vex.json file containing all VEX statements for the specified image. Tools that support VEX can then use this file to suppress known non-exploitable CVEs.
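
To make the suppression step concrete, here is a minimal OpenVEX-shaped document and the filtering a VEX-aware scanner conceptually performs. The CVE identifiers and statement contents are invented for illustration; the real vex.json produced by Docker Scout is much richer:

```javascript
// A minimal, hypothetical OpenVEX-shaped document. The CVE names
// below are invented; a real vex.json from docker scout vex get
// contains many more fields and real identifiers.
const vexDoc = {
  "@context": "https://openvex.dev/ns/v0.2.0",
  statements: [
    { vulnerability: { name: "CVE-2025-0001" }, status: "not_affected" },
    { vulnerability: { name: "CVE-2025-0002" }, status: "affected" },
  ],
};

// Conceptually, a VEX-aware scanner suppresses findings whose
// status says the vulnerability is not exploitable in this image.
function suppressedCves(doc) {
  return doc.statements
    .filter((s) => s.status === "not_affected" || s.status === "fixed")
    .map((s) => s.vulnerability.name);
}

console.log(suppressedCves(vexDoc)); // [ 'CVE-2025-0001' ]
```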

    To use the VEX information with Grype or Trivy, pass the --vex flag during scanning:

    trivy image demonstrationorg/movie-catalog-service-dhi:v1 --vex vex.json
    

    This ensures your security scanning results are consistent across tools, leveraging the same set of vulnerability contexts provided by Docker Scout.

    Conclusion

    Shifting left is about more than just early testing. It’s a proactive mindset for building secure, production-ready software from the beginning.

    This proactive approach combines:

    • Real infrastructure testing using Testcontainers
    • End-to-end supply chain visibility and actionable insights with Docker Scout
    • Trusted, minimal base images through Docker Hardened Images

    Together, these tools help catch issues early, improve compliance, and reduce security risks in the software supply chain.

    Learn more and request access to Docker Hardened Images!

    The Supply Chain Paradox: When “Hardened” Images Become a Vendor Lock-in Trap
    https://www.docker.com/blog/hardened-container-images-security-vendor-lock-in/ Wed, 20 Aug 2025 13:10:09 +0000

    The market for pre-hardened container images is experiencing explosive growth as security-conscious organizations pursue the ultimate efficiency: instant security with minimal operational overhead. The value proposition is undeniably compelling—hardened images with minimal dependencies promise security “out of the box,” enabling teams to focus on building and shipping applications rather than constantly revisiting low-level configuration management.

    For good reason, enterprises are adopting these pre-configured images to reduce attack surface area and simplify security operations. In theory, hardened images deliver reduced setup time, standardized security baselines, and streamlined compliance validation with significantly less manual intervention.

    Yet beneath this attractive surface lies a fundamental contradiction. While hardened images can genuinely reduce certain categories of supply chain risk and strengthen security posture, they simultaneously create a more subtle form of vendor dependency than traditional licensing models. Organizations are unknowingly building critical operational dependencies on a single vendor’s design philosophy, build processes, institutional knowledge, responsiveness, and long-term market viability.

    The paradox is striking: in the pursuit of supply chain independence, many organizations are inadvertently creating more concentrated dependencies and potentially weakening their security through stealth vendor lock-in that becomes apparent only when it’s costly to reverse.

    The Mechanics of Modern Vendor Lock-in

    Unfamiliar Base Systems Create Switching Friction

    The first layer of lock-in emerges from architectural choices that seem benign during initial evaluation but become problematic at scale. Some hardened image vendors deviate from mainstream distributions, opting to bake their own Linux variants rather than offering widely adopted options like Debian, Alpine, or Ubuntu. This deviation creates immediate friction for platform engineering teams, who must develop vendor-specific expertise to manage these systems effectively. Even if the differences are small, this raises the specter of edge cases, the bane of platform teams. Add enough edge cases and teams will start to fear adoption.

    While vendors try to standardize their approach to hardening, in reality, it remains a bespoke process. This can create differences from image to image across different open source versions, up and down the stack – even from the same vendor. In larger organizations, platform teams may need to offer hardened images from multiple vendors. This creates further compounding complexity. In the end, teams find themselves managing a heterogeneous environment that requires specialized knowledge across multiple proprietary approaches. This increases toil, adds risk, increases documentation requirements and raises the cost of staff turnover.

    Compatibility Barriers and Customization Constraints

    More problematic is how hardened images often break compatibility with standard tooling and monitoring systems that organizations have already invested in and optimized. Open source compatibility gaps emerge when hardened images introduce modifications that prevent seamless integration with established DevOps workflows, forcing organizations to either accept reduced functionality or invest in vendor-specific alternatives.

    Security measures, while well-intentioned, can become so restrictive they prevent necessary business customizations. Configuration lockdown reaches levels where platform teams cannot implement organization-specific requirements without vendor consultation or approval, transforming what should be internal operational decisions into external dependencies.

    Perhaps most disruptive is how hardened images force changes to established CI/CD pipelines and operational practices. Teams discover that their existing automation, deployment scripts, and monitoring configurations require substantial modification to accommodate the vendor’s approach to security hardening.

    The Hidden Migration Tax

    The vendor lock-in trap becomes most apparent when organizations attempt to change direction. While vendors excel at streamlining initial adoption—providing migration tools, professional services, and comprehensive onboarding support—they systematically downplay the complexity of eventual exit scenarios.

    Organizations accumulate sunk costs through investments in training and vendor-specific tooling that create psychological and financial barriers to switching providers. More critically, expertise about these systems becomes concentrated within vendor organizations rather than distributed among internal teams. Platform engineers find themselves dependent on vendor documentation, support channels, and institutional knowledge to troubleshoot issues and implement changes.

    The Open Source Transparency Problem

    The hardened image industry leverages the credibility of open source, but it can also undermine the spirit of open source transparency by creating what amounts to a fork without the benefits of a community. While vendors may provide source code access, that availability doesn’t guarantee system understanding or maintainability. The knowledge required to comprehend complex hardening processes often remains concentrated within small vendor teams, making independent verification and modification practically impossible.

    Heavily modified images become difficult for internal teams to audit and troubleshoot. Platform engineers encounter systems that appear familiar on the surface but behave differently under stress or during incident response, creating operational blind spots that can compromise security during critical moments.

    Trust and Verification Gaps

    Transparency is only half the equation. Security doesn’t end at a vendor’s brand name or marketing claims. Hardened images are part of your production supply chain and should be scrutinized like any other critical dependency. Questions platform teams should ask include:

    • How are vulnerabilities identified and disclosed? Is there a public, time-bound process, and is it tied to upstream commits and advisories rather than just public CVEs?
    • Could the hardening process itself introduce risks through untested modifications?
    • Have security claims been independently validated through audits, reproducible builds, or public attestations?
    • Does your SBOM metadata accurately reflect the full context of your hardened image?

    Transparency plus verification and full disclosure builds durable trust. Without both, hardened images can be difficult to audit, slow to patch, and nearly impossible to replace. Withholding easy-to-understand, easy-to-consume verification artifacts and answers is itself a form of lock-in: it forces customers to trust without letting them verify.

    Building Independence: A Strategic Framework

    For platform teams that want the security gains and ease of use of hardened images while avoiding lock-in, a structured approach to hardened-image vendor decisions is critical.

    Distribution Compatibility as Foundation

    Platform engineering leaders must establish mainstream distribution adherence as a non-negotiable requirement. Hardened images should be built from widely-adopted distributions like Debian, Ubuntu, Alpine, or RHEL rather than vendor-specific variants that introduce unnecessary complexity and switching costs.

    Equally important is preserving compatibility with standard package managers and maintaining adherence to the Filesystem Hierarchy Standard (FHS) to preserve tool compatibility and operational familiarity across teams. Key requirements include:

    • Package manager preservation: Compatibility with standard tools (apt, yum, apk) for independent software installation and updates 
    • File system layout standards: Adherence to FHS for seamless integration with existing tooling
    • Library and dependency compatibility: No proprietary dependencies that create additional vendor lock-in

    Enabling Rapid Customization Without Security Compromise

    Security enhancements should be architected as modular, configurable layers rather than baked-in modifications that resist change. This approach allows organizations to customize security posture while maintaining the underlying benefits of hardened configurations.

    Built-in capability to modify security settings through standard configuration management tools preserves existing operational workflows and prevents the need for vendor-specific automation approaches. Critical capabilities include:

    • Modular hardening layers: Security enhancements as removable, configurable components
    • Configuration override mechanisms: Integration with standard tools (Ansible, Chef, Puppet)
    • Whitelist-based customization: Approved modifications without vendor consultation
    • Continuous validation: Continuous verification that customizations don’t compromise security baselines

    Community Integration and Upstream Collaboration

    Organizations should demand that hardened image vendors contribute security improvements back to original distribution maintainers. This requirement ensures that security enhancements benefit the broader community and aren’t held hostage by vendor business models.

    Evaluating vendor participation in upstream security discussions, patch contributions, and vulnerability disclosure processes provides insight into their long-term commitment to community-driven security rather than proprietary advantage. Essential evaluation criteria include:

    • Upstream contribution requirements: Active contribution of security improvements to distribution maintainers
    • True community engagement: Participation in security discussions and vulnerability disclosure processes
    • Compatibility guarantees: Contractual requirements for backward and forward compatibility with official distributions

    Intelligent Migration Tooling and Transparency

    AI-powered Dockerfile conversion capabilities should provide automated translation between vendor hardened images and standard distributions, handling complex multi-stage builds and dependency mappings without requiring manual intervention.

    Migration tooling must accommodate practical deployment patterns including multi-service containers and legacy application constraints rather than forcing organizations to adopt idealized single-service architectures. Essential tooling requirements include:

    • Automated conversion capabilities: AI-powered translation between hardened images and standard distributions
    • Transparent migration documentation: Open source tools that generate equivalent configurations for different providers
    • Bidirectional conversion: Tools that work equally well for migrating to and away from hardened images
    • Real-world architecture support: Accommodation of practical deployment patterns rather than forcing idealized architectures

    Practical Implementation Framework

    Standardized compatibility testing protocols should verify that hardened images integrate seamlessly with existing toolchains, monitoring systems, and operational procedures before deployment at scale. Self-service customization interfaces for common modifications eliminate dependency on vendor support for routine operational tasks.

    Advanced image merging capabilities allow organizations to combine hardened base images with custom application layers while maintaining security baselines, providing flexibility without compromising protection. Implementation requirements include:

    • Compatibility testing protocols: Standardized verification of integration with existing toolchains and monitoring systems
    • Self-service customization: User-friendly tools for common modifications (CA certificates, custom files, configuration overlays)
    • Image merging capabilities: Advanced tooling for combining hardened bases with custom application layers
    • Vendor SLAs: Service level agreements for maintaining compatibility and providing migration support

    Conclusion: Security Without Surrendering Control

    The real question platform teams must ask is this: does my hardened image vendor strengthen or weaken my control of my own supply chain? The risks of lock-in aren’t theoretical. All of the factors described above can turn security into an unwanted operational constraint. Platform teams can demand hardened images and hardening processes built for independence from the start: rooted in mainstream distributions, transparent in their build processes, modular in their security layers, supported by strong community involvement, and buttressed by tooling that makes migration a choice, not a crisis.

    When security leaders adopt hardened images that preserve compatibility, encourage upstream collaboration, and fit seamlessly into existing workflows, they protect more than just their containers. They protect their ability to adapt and they minimize lock-in while actually improving their security posture. The most secure organizations will be the ones that can harden without handcuffing themselves.
