How Medplum Secured Their Healthcare Platform with Docker Hardened Images (DHI)

Special thanks to Cody Ebberson and the Medplum team for their open-source contribution and for sharing their migration experience with the community. A real-world example of migrating a HIPAA-compliant EHR platform to DHI with minimal code changes.

Healthcare software runs on trust. When patient data is at stake, security isn’t just a feature but a fundamental requirement. For healthcare platform providers, proving that trust to enterprise customers is an ongoing challenge that requires continuous investment in security posture, compliance certifications, and vulnerability management.

That’s why we’re excited to share how Medplum, an open-source healthcare platform serving over 20 million patients, recently migrated to Docker Hardened Images (DHI). This migration demonstrates exactly what we designed DHI to deliver: enterprise-grade security with minimal friction. Medplum’s team made the switch with just 54 lines of changes across 5 files – a near net-zero code change that dramatically improved their security posture.

Medplum is a headless EHR; the platform handles patient data, clinical workflows, and compliance so developers can focus on building healthcare apps. Built by and for healthcare developers, the platform provides:

  • HIPAA and SOC2 compliance out of the box
  • FHIR R4 API for healthcare data interoperability
  • Self-hosted or managed deployment options
  • Support for 20+ million patients across hundreds of practices

With over 500,000 pulls on Docker Hub for their medplum-server image, Medplum has become a trusted foundation for healthcare developers worldwide. As an open-source project licensed under Apache 2.0, their entire codebase, including Docker configurations, is publicly available on GitHub. This transparency made their DHI migration a perfect case study for the community.

Diagram of Medplum as headless EHR

Caption: Medplum is a headless EHR; the platform handles patient data, clinical workflows, and compliance so developers can focus on building healthcare apps.

Medplum is developer-first. It’s not a plug-and-play low-code tool, it’s designed for engineering teams that want a strong FHIR-based foundation with full control over the codebase.

The Challenge: Vulnerability Noise and Security Toil

Healthcare software development comes with unique challenges. Integration with existing EHR systems, compliance with regulations like HIPAA, and the need for robust security all add complexity and cost to development cycles.

“The Medplum team found themselves facing a challenge common to many high-growth platforms: ‘Vulnerability Noise.’ Even with lean base images, standard distributions often include non-essential packages that trigger security flags during enterprise audits. For a company helping others achieve HIPAA compliance, every ‘Low’ or ‘Medium’ CVE (Common Vulnerabilities and Exposures) requires investigation and documentation, creating significant ‘security toil’ for their engineering team.”

Reshma Khilnani

CEO, Medplum

Medplum addresses this by providing a compliant foundation. But even with that foundation, their team found themselves facing another challenge common to high-growth platforms: “Vulnerability Noise.”

Healthcare is one of the most security-conscious industries. Medplum’s enterprise customers, including Series C and D funded digital health companies, don’t just ask about security; they actively verify it. These customers routinely scan Medplum’s Docker images as part of their security due diligence.

Even with lean base images, standard distributions often include non-essential packages that trigger security flags during enterprise audits. For a company helping others achieve HIPAA compliance, every “Low” or “Medium” CVE requires investigation and documentation. This creates significant “security toil” for their engineering team.

The First Attempt: Distroless

This wasn’t Medplum’s first attempt at solving the problem. Back in November 2024, the team investigated Google’s distroless images as a potential solution.

The motivations were similar to what DHI would later deliver:

  • Less surface area in production images, and therefore less CVE noise
  • Smaller images for faster deployments
  • Simpler build process without manual hardening scripts

The idea was sound. Distroless images strip away everything except the application runtime: no shell, no package manager, minimal attack surface. On paper, it was exactly what Medplum needed.

But the results were mixed. Image sizes actually increased. Build times went up. There were concerns about multi-architecture support for native dependencies. The PR was closed without merging.

The core problem remained: many CVEs in standard images simply aren’t actionable. Often there isn’t a fix available, so all you can do is document and explain why it doesn’t apply to your use case. And often the vulnerability is in a corner of the image you’re not even using, like Perl, which comes preinstalled on Debian but serves no purpose in a Node.js application.
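You can see this noise for yourself by scanning a stock Debian-based Node image. The commands below are a rough sketch, assuming Grype and Docker Scout are installed locally; exact package names and CVE counts depend on the image digest and the scanner database on the day you run it.

# Pull a standard slim image and look at findings that have nothing to do with your app
docker pull node:24-slim
grype node:24-slim | grep -i perl
docker scout cves node:24-slim --only-severity low,medium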

Fully removing these unused components is the only real answer. The team knew they needed hardened images. They just hadn’t found the right solution yet.

The Solution: Docker Hardened Images

When Docker made Hardened Images freely available under Apache 2.0, Medplum’s team saw an opportunity to simplify their security posture while maintaining compatibility with their existing workflows.

By switching to Docker Hardened Images, Medplum was able to offload the repetitive work of OS-level hardening – like configuring non-root users and stripping out unnecessary binaries – to Docker. This allowed them to provide their users with a “Secure-by-Default” image that meets enterprise requirements without adding complexity to their open-source codebase.

This shift is particularly significant for an open-source project. Rather than maintaining custom hardening scripts that contributors need to understand and maintain, Medplum can now rely on Docker’s expertise and continuous maintenance. The security posture improves automatically with each DHI update, without requiring changes to Medplum’s Dockerfiles.

“By switching to Docker Hardened Images, Medplum was able to offload the repetitive work of OS-level hardening—like configuring non-root users and stripping out unnecessary binaries—to Docker. This allowed them to provide their users with a ‘Secure-by-Default’ image that meets enterprise requirements without adding complexity to their open-source codebase.”

Cody Ebberson

CTO, Medplum

The Migration: Real Code Changes

The migration was remarkably clean. Previously, Medplum’s Dockerfile required manual steps to ensure security best practices. By moving to DHI, they could simplify their configuration significantly.

Let’s look at what actually changed. Here’s the complete server Dockerfile after the migration:

# Medplum production Dockerfile
# Uses Docker "Hardened Images":
# https://hub.docker.com/hardened-images/catalog/dhi/node/guides

# Supported architectures: linux/amd64, linux/arm64

# Stage 1: Build the application and install production dependencies
FROM dhi.io/node:24-dev AS build-stage
ENV NODE_ENV=production
WORKDIR /usr/src/medplum
ADD ./medplum-server-metadata.tar.gz ./
RUN npm ci --omit=dev && \
  rm package-lock.json

# Stage 2: Create the runtime image
FROM dhi.io/node:24 AS runtime-stage
ENV NODE_ENV=production
WORKDIR /usr/src/medplum
COPY --from=build-stage /usr/src/medplum/ ./
ADD ./medplum-server-runtime.tar.gz ./

EXPOSE 5000 8103

ENTRYPOINT [ "node", "--require", "./packages/server/dist/otel/instrumentation.js", "packages/server/dist/index.js" ]

Notice what’s not there:

  • No groupadd or useradd commands: DHI runs as non-root by default
  • No chown commands: permissions are already correct
  • No USER directive: the default user is already non-privileged
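You can confirm the non-root default yourself by inspecting the image configuration. A minimal sketch, assuming you have already authenticated to dhi.io and pulled the image:

# The configured user is baked into the DHI base image
docker pull dhi.io/node:24
docker inspect --format '{{.Config.User}}' dhi.io/node:24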

Before vs. After: Server Dockerfile

Before (node:24-slim):

FROM node:24-slim
ENV NODE_ENV=production
WORKDIR /usr/src/medplum

ADD ./medplum-server.tar.gz ./

# Install dependencies, create non-root user, and set permissions
RUN npm ci && \
  rm package-lock.json && \
  groupadd -r medplum && \
  useradd -r -g medplum medplum && \
  chown -R medplum:medplum /usr/src/medplum

EXPOSE 5000 8103

# Switch to the non-root user
USER medplum

ENTRYPOINT [ "node", "--require", "./packages/server/dist/otel/instrumentation.js", "packages/server/dist/index.js" ]

After (dhi.io/node:24):

FROM dhi.io/node:24-dev AS build-stage
ENV NODE_ENV=production
WORKDIR /usr/src/medplum
ADD ./medplum-server-metadata.tar.gz ./
RUN npm ci --omit=dev && rm package-lock.json

FROM dhi.io/node:24 AS runtime-stage
ENV NODE_ENV=production
WORKDIR /usr/src/medplum
COPY --from=build-stage /usr/src/medplum/ ./
ADD ./medplum-server-runtime.tar.gz ./

EXPOSE 5000 8103

ENTRYPOINT [ "node", "--require", "./packages/server/dist/otel/instrumentation.js", "packages/server/dist/index.js" ]

The migration also introduced a cleaner multi-stage build pattern, separating metadata (package.json files) from runtime artifacts.

Before vs. After: App Dockerfile (Nginx)

The web app migration was even more dramatic:

Before (nginx-unprivileged:alpine):

FROM nginxinc/nginx-unprivileged:alpine

# Start as root for permissions
USER root

COPY <<EOF /etc/nginx/conf.d/default.conf
# ... nginx config ...
EOF

ADD ./medplum-app.tar.gz /usr/share/nginx/html
COPY ./docker-entrypoint.sh /docker-entrypoint.sh

# Manual permission setup
RUN chown -R 101:101 /usr/share/nginx/html && \
    chown 101:101 /docker-entrypoint.sh && \
    chmod +x /docker-entrypoint.sh

EXPOSE 3000

# Switch back to non-root
USER 101

ENTRYPOINT ["/docker-entrypoint.sh"]

After (dhi.io/nginx:1):

FROM dhi.io/nginx:1

COPY <<EOF /etc/nginx/nginx.conf
# ... nginx config ...
EOF

ADD ./medplum-app.tar.gz /usr/share/nginx/html
COPY ./docker-entrypoint.sh /docker-entrypoint.sh

EXPOSE 3000

ENTRYPOINT ["/docker-entrypoint.sh"]

Results: Improved Security Posture

After merging the changes, Medplum’s team shared their improved security scan results. The migration to DHI resulted in:

  • Dramatically reduced CVE count – DHI’s minimal base means fewer packages to patch
  • Non-root by default – No manual user configuration required
  • No shell access in production – Reduced attack surface for container escape attempts
  • Continuous patching – All DHI images are rebuilt when upstream security updates are available

For organizations that require stronger guarantees, Docker Hardened Images Enterprise adds SLA-backed remediation timelines, image customizations, and FIPS/STIG variants.

Most importantly, all of this was achieved with zero functional changes to the application. The same tests passed, the same workflows worked, and the same deployment process applied.
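One way to reproduce this before/after picture is to diff the two images with a scanner. A hedged sketch using Docker Scout (Grype or Trivy work just as well); the exact counts will vary with the scan date and image digests:

# Compare the DHI-based image against the previous slim base
docker scout compare medplum/medplum-server:latest --to node:24-slim
# Or scan each image separately and compare totals
grype medplum/medplum-server:latest
grype node:24-slim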

CI/CD Integration

Medplum also updated their GitHub Actions workflow to authenticate with the DHI registry:

- name: Login to Docker Hub
  uses: docker/login-action@v2.2.0
  with:
    username: ${{ secrets.DOCKERHUB_USERNAME }}
    password: ${{ secrets.DOCKERHUB_TOKEN }}

- name: Login to Docker Hub Hardened Images
  uses: docker/login-action@v2.2.0
  with:
    registry: dhi.io
    username: ${{ secrets.DOCKERHUB_USERNAME }}
    password: ${{ secrets.DOCKERHUB_TOKEN }}

This allows their CI/CD pipeline to pull hardened base images during builds. The same Docker Hub credentials work for both standard and hardened image registries.
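The same login flow works outside GitHub Actions. A minimal sketch of what a local or self-hosted build might do before docker build; the environment variable names are illustrative:

# Authenticate to Docker Hub and to the DHI registry with the same credentials
echo "$DOCKERHUB_TOKEN" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin
echo "$DOCKERHUB_TOKEN" | docker login dhi.io -u "$DOCKERHUB_USERNAME" --password-stdin
docker build -t medplum/medplum-server:local .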

The Multi-Stage Pattern for DHI

One pattern worth highlighting from Medplum’s migration is the use of multi-stage builds with DHI variants:

  1. Build stage: Use dhi.io/node:24-dev which includes npm/yarn for installing dependencies
  2. Runtime stage: Use dhi.io/node:24 which is minimal and doesn’t include package managers

This pattern ensures that build tools never make it into the production image, further reducing the attack surface. It’s a best practice for any containerized Node.js application, and DHI makes it straightforward by providing purpose-built variants for each stage.

Medplum’s Production Architecture

Medplum’s hosted offering runs on AWS using containerized workloads. Their medplum/medplum-server image, built on DHI base images, now deploys to production.

Medplum production architecture

Here’s how the build-to-deploy flow works:

  1. Build time: GitHub Actions pulls dhi.io/node:24-dev and dhi.io/node:24 as base images
  2. Push: The resulting hardened image is pushed to medplum/medplum-server on Docker Hub
  3. Deploy: AWS Fargate pulls medplum/medplum-server:latest and runs the hardened container

The deployed containers inherit all DHI security properties (non-root execution, minimal attack surface, no shell) because they’re built on DHI base images. This demonstrates that DHI works seamlessly with production-grade infrastructure including:

  • AWS Fargate/ECS for container orchestration
  • Elastic Load Balancing for high availability
  • Aurora PostgreSQL for managed database
  • ElastiCache for Redis caching
  • CloudFront for CDN and static assets

No infrastructure changes were required. The same deployment pipeline, the same Fargate configuration, just a more secure base image.

Why This Matters for Healthcare

For healthcare organizations evaluating container security, Medplum’s migration offers several lessons:

1. Eliminating “Vulnerability Noise”

The biggest win from DHI isn’t just security, it’s reducing the operational burden of security. Fewer packages means fewer CVEs to investigate, document, and explain to customers. For teams without dedicated security staff, this reclaimed time is invaluable.

2. Compliance-Friendly Defaults

HIPAA requires covered entities to implement technical safeguards including access controls and audit controls. DHI’s non-root default and minimal attack surface align with these requirements out of the box. For companies pursuing SOC 2 Type 2 certification, which Medplum implemented from Day 1, or HITRUST certification, DHI provides a stronger foundation for the technical controls auditors evaluate.

3. Reduced Audit Surface

When security teams audit container configurations, DHI provides a cleaner story. Instead of explaining custom hardening scripts or why certain CVEs don’t apply, teams can point to Docker’s documented hardening methodology, SLSA Level 3 provenance, and independent security validation by SRLabs. This is particularly valuable during enterprise sales cycles where customers scan vendor images as part of due diligence.

4. Practicing What You Preach

For platforms like Medplum that help customers achieve compliance, using hardened images isn’t just good security, it’s good business. When you’re helping healthcare organizations meet regulatory requirements, your own infrastructure needs to set the example.

5. Faster Security Response

With DHI Enterprise, critical CVEs are patched within 7 days. For healthcare organizations where security incidents can have regulatory implications, this SLA provides meaningful risk reduction and a concrete commitment to share with customers.

Conclusion

Medplum’s migration to Docker Hardened Images demonstrates that improving container security doesn’t have to be painful. With minimal code changes (54 additions and 52 deletions) they achieved:

  • Secure-by-Default images that meet enterprise requirements
  • Automatic non-root execution
  • Dramatically reduced CVE surface
  • Simplified Dockerfiles with no manual hardening scripts
  • Less “security toil” for their engineering team
  • A stronger compliance story for enterprise customers

By offloading OS-level hardening to Docker, Medplum can focus on what they do best: building healthcare infrastructure while their security posture improves automatically with each DHI update.

For a platform with 500,000+ Docker Hub pulls serving healthcare organizations worldwide, this migration shows that DHI is ready for production workloads at scale. More importantly, it shows that security improvements can actually reduce operational burden rather than add to it.

For platforms helping others achieve compliance, practicing what you preach matters. With Docker Hardened Images, that just got a lot easier.

Ready to harden your containers? Explore the Docker Hardened Images documentation or browse the free DHI catalog to find hardened versions of your favorite base images.

Hardened Images Are Free. Now What?

Docker Hardened Images are now free, covering Alpine, Debian, and over 1,000 images including databases, runtimes, and message buses. For security teams, this changes the economics of container vulnerability management.

DHI includes security fixes from Docker’s security team, which simplifies security response. Platform teams can pull the patched base image and redeploy quickly. But free hardened images raise a question: how should this change your security practice? Here’s how our thinking is evolving at Docker.

What Changes (and What Doesn’t)

DHI gives you a security “waterline.” Below the waterline, Docker owns vulnerability management. Above it, you do. When a scanner flags something in a DHI layer, it’s not actionable by your team. Everything above the DHI boundary remains yours.

The scope depends on which DHI images you use. A hardened Python image covers the OS and runtime, shrinking your surface to application code and direct dependencies. A hardened base image with your own runtime on top sets the boundary lower. The goal is to push your waterline as high as possible.

Vulnerabilities don’t disappear. Below the waterline, you need to pull patched DHI images promptly. Above it, you still own application code, dependencies, and anything you’ve layered on top.

Supply Chain Isolation

DHI provides supply chain isolation beyond CVE remediation.

Community images like python:3.11 carry implicit trust assumptions: no compromised maintainer credentials, no malicious layer injection via tag overwrite, no tampering since your last pull. The Shai Hulud campaign(s) demonstrated the consequences when attackers exploit stolen PATs and tag mutability to propagate through the ecosystem.

DHI images come from a controlled namespace where Docker rebuilds from source with review processes and cooldown periods. Supply chain attacks that burn through community images stop at the DHI boundary. You’re not immune to all supply chain risk, but you’ve eliminated exposure to attacks that exploit community image trust models.

This is a different value proposition than CVE reduction. It’s isolation from an entire class of increasingly sophisticated attacks.

The Container Image as the Unit of Assessment

Security scanning is fragmented. Dependency scanning, SAST, and SCA all run in different contexts, and none has full visibility into how everything fits together at deployment time.

The container image is where all of this converges. It’s the actual deployment artifact, which makes it the checkpoint where you can guarantee uniform enforcement from developer workstation to production. The same evaluation criteria you run locally after docker build can be identical to what runs in CI and what gates production deployments.

This doesn’t need to replace earlier pipeline scanning altogether. It means the image is where you enforce policy consistency and build a coherent audit trail that maps directly to what you’re deploying.
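In practice, that consistency can be as simple as running the same scan command against the same image reference at every stage. A minimal sketch, assuming Grype as the scanner and a hypothetical myapp image; the same line works on a workstation, in CI, and as a deployment gate:

# Build once, then apply the identical evaluation everywhere
docker build -t myapp:dev .
grype myapp:dev --fail-on high   # non-zero exit on High or Critical findings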

Policy-Driven Automation

Every enterprise has a vulnerability management policy. The gap is usually between policy (PDFs and wikis) and practice (spreadsheets and Jira tickets).

DHI makes that gap easier to close by dramatically reducing the volume of findings that require policy decisions in the first place. When your scanner returns 50 CVEs instead of 500, even basic severity filtering becomes a workable triage system rather than an overwhelming backlog.

A simple, achievable policy might include the following:

  • High and critical severity vulnerabilities require remediation or documented exception
  • Medium and lower severity issues are accepted with periodic review
  • CISA KEV vulnerabilities are always in scope

Most scanning platforms support this level of filtering natively, including Grype, Trivy, Snyk, Wiz, Prisma Cloud, Aqua, and Docker Scout. You define your severity thresholds, apply them automatically, and surface only what requires human judgment.
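As a concrete example, the policy above maps almost directly onto a scanner configuration file. The snippet below is an illustrative .grype.yaml sketch, not a recommended baseline; the CVE identifier is hypothetical and stands in for a reviewed, accepted finding:

# .grype.yaml (illustrative sketch)
fail-on-severity: high
ignore:
  - vulnerability: CVE-2023-99999   # hypothetical ID: documented exception after review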

For teams wanting tighter integration with DHI coverage data, Docker Scout evaluates policies against DHI status directly. Third-party scanners can achieve similar results through pipeline scripting or by exporting DHI coverage information for comparison.

The goal isn’t perfect automation but rather reducing noise enough that your existing policy becomes enforceable without burning out your engineers.

VEX: What You Can Do Today

Docker Hardened Images ship with VEX attestations that suppress CVEs Docker has assessed as not exploitable in context. The natural extension is for your teams to add their own VEX statements for application-layer findings.

Here’s what your security team can do today:

Consume DHI VEX data. Grype (v0.65+), Trivy, Wiz, and Docker Scout all ingest DHI VEX attestations automatically or via flags. Scanners without VEX support can still use extracted attestations to inform manual triage.
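For scanners that accept VEX on the command line, consuming the data is a one-flag change. A minimal sketch, assuming you have already extracted the DHI VEX attestation to a local file (the extraction mechanism depends on your tooling) and substituting your own image reference:

grype <image> --vex ./dhi.vex.json
trivy image --vex ./dhi.vex.json <image>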

Write your own VEX statements. OpenVEX provides the JSON format. Use vexctl to generate and sign statements.
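A sketch of what authoring a statement can look like with vexctl; the product purl and digest are placeholders, and the CVE matches the example file name used below:

vexctl create \
  --product="pkg:oci/myapp@sha256:<digest>" \
  --vuln="CVE-2024-1234" \
  --status="not_affected" \
  --justification="vulnerable_code_not_in_execute_path" > cve-2024-1234.vex.json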

Attach VEX to images. Docker recommends docker scout attestation add for attaching VEX to images already in a registry:

docker scout attestation add \
  --file ./cve-2024-1234.vex.json \
  --predicate-type https://openvex.dev/ns/v0.2.0 \
  <image>

Alternatively, COPY VEX documents into the image filesystem at build time, though this prevents updates without rebuilding.

Configure scanner VEX ingestion. The workflow: scan, identify investigated findings, document as VEX, feed back into scanner config. Future scans automatically suppress assessed vulnerabilities.

Compliance: What DHI Actually Provides

Compliance frameworks such as ISO 27001, SOC 2, and the EU Cyber Resilience Act require systematic, auditable vulnerability management. DHI addresses specific control requirements:

Vulnerability management documentation (ISO 27001 A.8.8, SOC 2 CC7.1). The waterline model provides a defensible answer to “how do you handle base image vulnerabilities?” Point to DHI, explain the attestation model, show policy for everything above the waterline.

Continuous monitoring evidence. DHI images rebuild and re-scan on a defined cadence. New digests mean current assessments. Combined with your scanner’s continuous monitoring, you demonstrate ongoing evaluation rather than point-in-time checks.

Remediation traceability. VEX attestations create machine-readable records of how each CVE was handled. When auditors ask about specific CVEs in specific deployments, you have answers tied to image digests and timestamps.

CRA alignment. The Cyber Resilience Act requires “due diligence” vulnerability handling and SBOMs. DHI images include SBOM attestations, and VEX aligns with CRA expectations for exploitability documentation.

This won’t satisfy every audit question, but it provides the foundation most organizations lack.

What to Do After You Read This Post

  1. Identify high-volume base images. Check Docker Hub’s Hardened Images catalog (My Hub → Hardened Images → Catalog) for coverage of your most-used images (Python, Node, Go, Alpine, Debian).
  2. Swap one image. Pick a non-critical service, change the FROM line to the DHI equivalent, rebuild, scan, and compare results (see the sketch after this list).
  3. Configure policy-based filtering. Set up your scanner to distinguish DHI-covered vulnerabilities from application-layer findings. Use Docker Scout or Wiz for native VEX integration, or configure Grype/Trivy ignore policies based on extracted VEX data.
  4. Document your waterline. Write down what DHI covers and what remains your responsibility. This becomes your policy reference and audit documentation.
  5. Start a VEX practice. Convert one informally documented vulnerability assessment into a VEX statement and attach it to the relevant image.
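As a minimal illustration of step 2, the swap is a one-line Dockerfile change followed by a rebuild and rescan. The repository and tag below are placeholders; use the exact reference shown in the catalog entry for your image:

# In the service's Dockerfile, change the base image, e.g.:
#   FROM python:3.11-slim   ->   FROM dhi.io/python:3.11
docker build -t myservice:dhi .
grype myservice:dhi   # compare against a scan of the previous build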

DHI solves specific, expensive problems around base image vulnerabilities and supply chain trust. The opportunity is building a practice around it that scales.

The Bigger Picture

DHI coverage is expanding. Today it might cover your OS layer; tomorrow it extends through runtimes and into hardened libraries. Build your framework to be agnostic to where the boundary sits. The question is always the same: what has Docker attested to, and what remains yours to assess?

The methodology Docker uses for DHI (policy-driven assessment, VEX attestations, auditable decisions) extends into your application layer. We can’t own your custom code, but we can provide the framework for consistent practices above the waterline. Whether you use Scout, Wiz, Grype, Trivy, or another scanner, the pattern is the same. You can let DHI handle what it covers, automate policy for what remains, and document decisions in formats that travel with artifacts.

At Docker, we’re using DHI internally to build this vulnerability management model. The framework stays constant regardless of how much of our stack is hardened today versus a year from now. Only the boundary moves.

The hardened images are free. The VEX attestations are included. What’s left is integrating these pieces into a coherent security practice where the container is the unit of truth, policy drives automation, and every vulnerability decision is documented by default.

For organizations that require stronger guarantees, FIPS-enabled and STIG-ready images, and customizations, DHI Enterprise is tailor-made for those use cases. Get in touch with the Docker team if you would like a demo. If you’re still exploring, take a look at the catalog (no signup needed) or take DHI Enterprise for a spin with a free trial.

Your Dependencies Don’t Care About Your FIPS Configuration

FIPS compliance is a great idea that makes the entire software supply chain safer. But teams adopting FIPS-enabled container images are running into strange errors that can be challenging to debug. What they are learning is that correctness at the base image layer does not guarantee compatibility across the ecosystem. Change is complicated, and changing complicated systems with intricate dependency webs often yields surprises. We are in the early adaptation phase of FIPS, and that actually provides interesting opportunities to optimize how things work. Teams that recognize this will rethink how they build for FIPS and get ahead of the game.

FIPS in practice

FIPS is a U.S. government standard for cryptography. In simple terms, if you say a system is “FIPS compliant,” that means the cryptographic operations like TLS, hashing, signatures, and random number generation are performed using a specific, validated crypto module in an approved mode. That sounds straightforward until you remember that modern software is built not as one compiled program, but as a web of dependencies that carry their own baggage and quirks.

The FIPS crypto error that caught us off guard

We got a ticket recently for a Rails application in a FIPS-enabled container image. On the surface, everything looked right. Ruby was built to use OpenSSL 3.x with the FIPS provider. The OpenSSL configuration was correct. FIPS mode was active.
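Those surface-level checks are the kind you can script. A rough sketch of what “looked right” means here; exact output varies by image, and language-level FIPS reporting depends on the openssl gem version:

# Is the FIPS provider active in the image's OpenSSL 3.x?
openssl list -providers
# Language-level view from Ruby
ruby -e 'require "openssl"; puts OpenSSL.fips_mode'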

However, the application started throwing cryptography errors from the Postgres Ruby gem. Even more confusing, a minimal reproducer consisting of a basic Ruby app and a stock Postgres did not reproduce the error; the connection was established successfully. The issue only manifested when using ActiveRecord.

The difference came down to code paths. A basic Ruby script using the pg gem directly exercises a simpler set of operations. ActiveRecord triggers additional functionality that exercises different parts of libpq. The non-FIPS crypto was there all along, but only certain operations exposed it.

Your container image can be carefully configured for FIPS, and your application can still end up using non-FIPS crypto because a dependency brought its own crypto along for the ride. In this case, the culprit was a precompiled native artifact associated with the database stack. When you install pg, Bundler may choose to download a prebuilt binary dependency such as libpq.

Unfortunately those prebuilt binaries are usually built with assumptions that cause problems. They may be linked against a different OpenSSL than the one in your image. They may contain statically embedded crypto code. They may load crypto at runtime in a way that is not obvious.

This is the core challenge with FIPS adoption. Your base image can do everything right, but prebuilt dependencies can silently bypass your carefully configured crypto boundary.

Why we cannot just fix it in the base image yet

The practical fix for the Ruby case was adding this to your Gemfile.

gem "pg", "~> 1.1", force_ruby_platform: true

You also need to install libpq-dev so the gem can be compiled from source. The force_ruby_platform option forces Bundler to build the gem from source on your system instead of using a prebuilt binary. When you compile from source inside your controlled build environment, the resulting native extension is linked against the OpenSSL that is actually in your FIPS image.

Bundler also supports an environment/config knob for the same idea called BUNDLE_FORCE_RUBY_PLATFORM. The exact mechanism matters less than the underlying strategy of avoiding prebuilt native artifacts when you are trying to enforce a crypto boundary.
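For reference, the two spellings of that knob look like this (a sketch; in practice you would scope either one to the build stage that has a compiler and libpq-dev available):

# Project-wide Bundler setting
bundle config set force_ruby_platform true
# Or per-invocation via the environment
BUNDLE_FORCE_RUBY_PLATFORM=true bundle install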

You might reasonably ask why we do not just add BUNDLE_FORCE_RUBY_PLATFORM to the Ruby FIPS image by default. We discussed this internally, and the answer illustrates why FIPS complexity cascades.

Setting that flag globally is not enough on its own. You also need a C compiler and the relevant libraries and headers in the build stage. And not every gem needs this treatment. If you flip the switch globally, you end up compiling every native gem from source, which drags in additional headers and system libraries that you now need to provide. The “simple fix” creates a new dependency management problem.

Teams adopt FIPS images to satisfy compliance. Then they have to add back build complexity to make the crypto boundary real and verify that every dependency respects it. This is not a flaw in FIPS or in the tooling. It is an inherent consequence of retrofitting a strict cryptographic boundary onto an ecosystem built around convenience and precompiled artifacts.

The patterns we are documenting today will become the defaults tomorrow. The tooling will catch up. Prebuilt packages will get better. Build systems will learn to handle the edge cases. But right now, teams need to understand where the pitfalls are.

What to do if you are starting a FIPS journey

You do not need to become a crypto expert to avoid the obvious traps. You only need a checklist mindset. The teams working through these problems now are building real expertise that will be valuable as FIPS requirements expand across industries.

  • Treat prebuilt native dependencies as suspect. If a dependency includes compiled code, assume it might carry its own crypto linkage until you verify otherwise. You can use ldd on Linux to inspect dynamic linking and confirm that binaries link against your system OpenSSL rather than a bundled alternative (a quick sketch follows this list).
  • Use a multi-stage build and compile where it matters. Keep your runtime image slim, but allow a builder stage with the compiler and headers needed to compile the few native pieces that must align with your FIPS OpenSSL.
  • Test the real execution path, not just “it starts.” For Rails, that means running a query, not only booting the app or opening a connection. The failures we saw appeared when using the ORM, not on first connection.
  • Budget for supply-chain debugging. The hard part is not turning on FIPS mode. The hard part is making sure all the moving parts actually respect it. Expect to spend time tracing crypto usage through your dependency graph.
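The ldd check mentioned in the first item is quick to run once the gem is installed. A sketch with an illustrative path; adjust it for your Ruby version and gem layout:

# Confirm the compiled extension resolves to your system OpenSSL, not a bundled copy
ldd /usr/local/bundle/gems/pg-1.5.*/lib/pg_ext.so | grep -iE 'ssl|crypto'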

Why this matters beyond government contracts

FIPS compliance has traditionally been seen as a checkbox for federal sales. That is changing. As supply chain security becomes a board-level concern across industries, validated cryptography is moving from “nice to have” to “expected.” The skills teams build solving FIPS problems today translate directly to broader supply chain security challenges.

Think about what you learn when you debug a FIPS failure. You learn to trace crypto usage through your dependency graph, to question prebuilt artifacts, to verify that your security boundaries are actually enforced at runtime. Those skills matter whether you are chasing a FedRAMP certification or just trying to answer your CISO’s questions about software provenance.

The opportunity in the complexity

FIPS is not “just a switch” you flip in a base image. View FIPS instead as a new layer of complexity that you might have to debug across your dependency graph. That can sound like bad news, but switch the framing and it becomes an opportunity to get ahead of where the industry is going.

The ecosystem will adapt and the tooling will improve. The teams investing in understanding these problems now will be the ones who can move fastest when FIPS or something like it becomes table stakes.

If you are planning a FIPS rollout, start by controlling the prebuilt native artifacts that quietly bypass the crypto module you thought you were using. Recognize that every problem you solve is building institutional knowledge that compounds over time. This is not just compliance work. It is an investment in your team’s security engineering capability.

Securing the software supply chain shouldn’t be hard. According to theCUBE Research, Docker makes it simple

In today’s software-driven economy, securing software supply chains is no longer optional; it’s mission-critical. Yet enterprises often struggle to balance developer speed and security. According to theCUBE Research, 95% of organizations say Docker improved their ability to identify and remediate vulnerabilities, while 79% rate it highly effective at maintaining compliance with security standards. Docker embeds security directly into the developer workflow so that protection happens by default, not as an afterthought.

At the foundation are Docker Hardened Images, which are ultra-minimal, continuously patched containers that cut the attack surface by up to 95% and achieve near-zero CVEs. These images, combined with Docker Scout’s real-time vulnerability analysis, allow teams to prevent, detect, and resolve issues early, keeping innovation and security in sync. The result: 92% of enterprises report fewer application vulnerabilities, and 60% see reductions of 25% or more.

Docker also secures agentic AI development through the MCP Catalog, Toolkit, and Gateway. These tools provide a trusted, containerized way to run Model Context Protocol (MCP) servers that power AI agents, ensuring communication happens in a secure, auditable, and isolated environment. According to theCUBE Research, 87% of organizations reduced AI setup time by over 25%, and 95% improved AI testing and validation, demonstrating that Docker makes AI development both faster and safer.

With built-in Zero Trust principles, role-based access controls, and compliance support for SOC 2, ISO 27001, and FedRAMP, Docker simplifies adherence to enterprise-grade standards without slowing developers down. The payoff is clear: 69% of enterprises report ROI above 101%, driven in part by fewer security incidents, faster delivery, and improved productivity. In short, Docker’s modern approach to DevSecOps enables enterprises to build, ship, and scale software that’s not only fast, but fundamentally secure.

Docker’s impact on software supply chain security

Docker has evolved into a complete development platform that helps enterprises build, secure, and deploy modern and agentic AI applications with trusted DevSecOps and containerization practices. From Docker Hardened Images, which are secure, minimal, and production-ready container images with near-zero CVEs, to Docker Scout’s real-time vulnerability insights and the MCP Toolkit for trusted AI agents, teams gain a unified foundation for software supply chain security.

Every part of the Docker ecosystem is designed to blend in with existing developer workflows while making security affordable, transparent, and universal. Whether you want to explore the breadth of the Docker Hardened Images catalog, analyze your own image data with Docker Scout, or test secure AI integration through the MCP Gateway, it is easy to see how Docker embeds security by default, not as an afterthought.

Docker Desktop 4.50: Indispensable for Daily Development

Docker Desktop 4.50 represents a major leap forward in how development teams build, secure, and ship software. Across the last several releases, we’ve delivered meaningful improvements that directly address the challenges you face every day: faster debugging workflows, enterprise-grade security controls that don’t get in your way, and seamless AI integration that makes modern development accessible to every team member.

Whether you’re debugging a build failure at 2 AM, managing security policies across distributed teams, or leveraging AI capabilities to build your applications, Docker Desktop delivers clear, real-world value that keeps your workflows moving and your infrastructure secure.


Accelerating Daily Development: Productivity and Control for Every Developer

Modern development teams face mounting pressures: complex multi-service applications, frequent context switching between tools, inconsistent local environments, and the constant need to balance productivity with security and governance requirements. For principal engineers managing these challenges, the friction of daily development workflows can significantly impact team velocity and code quality.

Docker Desktop addresses these challenges head-on by delivering seamless experiences that eliminate friction and giving organizations the control necessary to maintain security and compliance without slowing teams down.

Seamless Developer Experiences

Docker Debug is now free for all users, removing barriers to troubleshooting and making it easier for every developer on your team to diagnose issues quickly. The enhanced IDE integration goes deeper than ever before: the Dockerfile debugger in the VSCode Extension enables developers to step through build processes directly within their familiar editing environment, reducing the cognitive overhead of switching between tools. Whether you’re using VSCode, Cursor, or other popular editors, Docker Desktop integrates naturally into your existing workflow. For Windows-based enterprises, Docker Desktop’s ongoing engineering investments are delivering significant stability improvements with WSL2 integration, ensuring consistent performance for development teams at scale.

Getting applications from local development to production environments requires reducing the gap between how developers work locally and how applications run at scale. Compose to Kubernetes capabilities enable teams to translate local multi-service applications into production-ready Kubernetes deployments, while cagent provides a toolkit for running and developing agents that simplifies the development process. Whether you’re orchestrating containerized microservices or developing agentic AI workflows, Docker Desktop accelerates the path from experimentation to production deployment.

Enterprise-Level Control and Governance

For organizations requiring centralized management, Docker Desktop delivers enterprise-grade capabilities that maintain security without sacrificing developer autonomy. Administrators can set proxy settings via macOS configuration profiles, and can specify PAC files and embedded PAC scripts with installer flags for Docker Desktop on macOS and Windows, ensuring corporate network policies are automatically enforced during deployment without requiring manual developer configuration and further extending enterprise policy enforcement.

A faster release cadence with continuous updates ensures every developer runs the latest stable version with critical security patches, eliminating the traditional tension between IT requirements and developer productivity. The Kubernetes Dashboard is now part of the left navigation, making it easier to find and use.

Kind (k8s) Enterprise Support brings production-grade Kubernetes tooling to local development, enabling teams to test complex orchestration scenarios before deployment. 

Figure 1: Kubernetes (k8s) settings

Together, these capabilities build on Docker Desktop’s position as the foundation for modern development, adding enterprise-grade management that scales with your organization’s needs. You get the visibility and control that enterprise architecture teams require while preserving the speed and flexibility that keeps developers productive.

Securing Container Workloads: Enterprise-Grade Protection Without Sacrificing Speed

As containerized applications move from development to production and AI workloads proliferate across enterprises, security teams face a critical challenge: how do you enforce rigorous security controls without creating bottlenecks that slow development velocity? Traditional approaches often force organizations to choose between security and speed, but that’s a false choice that puts both innovation and infrastructure at risk.

Docker Desktop’s recent releases address this tension directly, delivering enterprise-grade security controls that operate transparently within developer workflows. These aren’t afterthought features; they’re foundational protections designed to give security and platform teams confidence at scale while keeping developers productive.

Granular Control Over Container Behavior

Enforce Local Port Bindings prevents services running in Docker Desktop from being exposed across the local network, ensuring developers maintain network isolation during local development while retaining full functionality. For teams in regulated industries where network segmentation requirements extend to development environments, this capability helps maintain compliance standards without disrupting developer workflows.

Building on Secure Foundations

These runtime protections work in tandem with secure container foundations. Docker’s new Hardened Images are secure, minimal, production-ready container images maintained by Docker with near-zero CVEs and enterprise SLA backing. Recent updates introduced unlimited catalog pricing and the addition of Helm charts to the catalog. We also outlined Docker’s five pillars for Software Supply Chain Security, delivering transparency and eliminating the endless CVE remediation cycle. While Hardened Images are available as a separate add-on, they’re purpose-built to extend the secure-by-default foundation that Docker Desktop provides, giving teams a comprehensive approach to container security from development through production.

Seamless Enterprise Policy Integrations

The Docker CLI now gracefully handles certificates issued by non-conforming certificate authorities (CAs) that use negative serial numbers. While the X.509 standard specifies that certificate serial numbers must be positive, some enterprise PKI systems still produce certificates that violate this rule. Previously, organizations had to choose between adhering to their CA configuration and maintaining Docker compatibility, a frustrating trade-off that often led to insecure workarounds. Now, Docker Desktop works seamlessly with enterprise certificate infrastructure, ensuring developers can authenticate to private registries without security teams compromising their PKI standards.

These improvements reflect Docker’s commitment to being secure by default. Rather than treating security as a feature developers must remember to enable, Docker Desktop builds protection into the platform itself, giving enterprises the confidence to scale container adoption while maintaining the developer experience that drives innovation.

Unlocking AI Development: Making Model Context Protocol (MCP) Accessible for Every Developer

As AI-native development becomes central to modern software engineering, developers face a critical challenge: integrating AI capabilities into their workflows shouldn’t require extensive configuration knowledge or create friction that slows teams down. The Model Context Protocol (MCP) offers powerful capabilities for connecting AI agents to development tools and data sources, but accessing and managing these integrations has historically been complex, creating barriers to adoption, especially for teams with varying technical expertise.

Docker is addressing these challenges directly by making MCP integration seamless and secure within Docker Desktop.

Guided Onboarding Through Learning Center and MCP Toolkit Walkthroughs and Improved MCP Server Discovery

Understanding that accessibility drives adoption, Docker has introduced a redesigned onboarding experience through the Learning Center. The new MCP Toolkit Walkthroughs guide teams through complex setup processes step-by-step, ensuring that engineers of all skill levels can confidently adopt AI-powered workflows. Further, Docker’s MCP Server Discovery feature simplifies discovery by enabling developers to search, filter, and sort available MCP servers efficiently. By eliminating the knowledge barriers and friction around discovery, these improvements accelerate time to productivity and help organizations scale AI development practices across their teams.

Expanded Catalog: 270+ MCP Servers and Growing

The Docker MCP Catalog now includes over 270 MCP servers, with support for more than 60 remote servers. We’ve also added one-click connections for popular clients like Claude Code and Codex, making it easier than ever to supercharge your AI coding agents with powerful MCP tools. Getting started takes just a few clicks.

Remote MCP Server Support with Built-In OAuth

Connecting to MCP servers has traditionally meant dealing with manual tokens, fragile config files, and scattered credential management. It’s frustrating, especially for developers new to these workflows, who often don’t know where to find the right credentials in third-party tools. With the latest update to the Docker MCP Toolkit, developers can now securely connect to 60+ remote MCP servers, including Notion and Linear, using built-in OAuth support. This update goes beyond convenience; it lays the foundation for a more connected, intelligent, and automated developer experience, all within Docker Desktop. Read more about connecting to remote MCP servers.

Figure 2: Docker MCP Toolkit now supports remote MCP Servers with OAuth built-in

Smarter, More Efficient, and More Capable Agents with Dynamic MCPs

In this release, we’re introducing dynamic MCPs, a major step forward in enabling AI agents to discover, configure, and compose tools autonomously. Previously, integrating MCP servers required manual setup and static configurations. Now, with new features like Smart Search and Tool Composition, agents can search the MCP Catalog, pull only the tools they need, and even generate code to compose multi-tool workflows, all within a secure, sandboxed environment. These enhancements not only increase agent autonomy but also improve performance by reducing token usage and minimizing context bloat. Ultimately, this leads to less context switching and more focused time for developers. Read more about dynamic MCPs.

Together, these advancements represent Docker’s commitment to making AI-native development accessible and practical for development teams of any size.

Conclusion: Committed to Your Development Success

The innovations across Docker Desktop 4.45 through 4.50 reinforce our commitment to being the development solution teams rely on every day, for every workflow, at any scale.

We’ve made daily development faster and more integrated, with free debugging tools, native IDE support, and enterprise governance that actually works. We’ve strengthened security with controls that protect your infrastructure without creating bottlenecks. And we’ve made AI development accessible, turning complex integrations into guided experiences that accelerate your team’s capabilities. The impact is measurable: independent research from theCUBE found that Docker Desktop users achieve 50% faster build times and reclaim 10-40+ hours per developer each month, time that goes directly back into innovation.

This is Docker Desktop operating as your indispensable foundation: giving developers the tools they need to stay productive, giving security teams the controls they need to stay protected, and giving organizations the confidence they need to innovate at scale.

As we continue our accelerated release cadence, expect Docker to keep delivering the features that matter most to how you build, ship, and run modern applications. We’re committed to being the solution you can count on today and as your needs evolve.

Upgrade to the latest Docker Desktop now


Expanding Docker Hardened Images: Secure Helm Charts for Deployments

Development teams are under growing pressure to secure their software supply chains. Teams need trusted images, streamlined deployments, and compliance-ready tooling from partners they can rely on long term. Our customers have made it clear that they’re not just looking for one-off vendors. They’re looking for true security partners across development and deployment.

That’s why we are now offering Helm charts in the Docker Hardened Images (DHI) Catalog. These charts simplify Kubernetes deployments and make Docker a trusted security partner across the development and deployment lifecycle.

Bringing security and simplicity to Helm deployments

Helm charts are the most popular way to package and deploy applications to Kubernetes, with 75% of users preferring to use them, according to CNCF surveys. With security incidents making headlines more often, confidence now depends on having security and traceability built into every deployment.

Helm charts in the DHI Catalog make it simple to deploy hardened images to production Kubernetes environments. Teams no longer need to worry about insecure configurations, unverified sources, or vulnerable dependencies. Each chart is built with our hardened build system, providing signed provenance and clear traceability so you know exactly what you are deploying every time.
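Installing one of these charts looks like any other OCI-based Helm workflow. The reference below is purely illustrative; the actual chart path and version come from the DHI Catalog entry for the chart you choose:

# Authenticate to the DHI registry, then install the chart from its OCI reference
helm registry login dhi.io -u <docker-hub-username>
helm install my-release oci://dhi.io/<chart-path-from-catalog> --version <chart-version>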

Supporting customers in the wake of Broadcom changes

Broadcom recently announced changes to Bitnami’s distribution model. Most images and charts have moved into a commercial subscription, older versions are archived without updates, and only a limited set of :latest tags remain free for use.

For teams affected by this change, Docker offers a clear path forward:

  • Free Docker Official Images, which can be paired with upstream Helm charts for stable, open source deployments
  • Docker Hardened Images with Helm charts in the DHI Catalog for enterprise-grade security and compliance

Many teams have relied on Bitnami for images and charts. Helm charts in the DHI Catalog now give teams the option to partner with Docker for secure, compliant deployments, with consistent coverage from development through deployment.

If your team is evaluating alternatives, we invite you to join the beta program. Sign up through our interest form to test Helm charts in the DHI Catalog and help guide their development.

What Helm charts in the DHI Catalog offer

Helm charts in the DHI Catalog are available today in beta. Beta offerings are early versions of future functionality that give customers the opportunity to test, validate, and share feedback. Your input directly shapes how we refine these charts before general availability.

The Helm charts in the DHI Catalog include:

  • DHI by default: Every chart automatically references Docker Hardened Images, ensuring deployments inherit DHI’s security, compliance, and SLA-backed patching without manual intervention.
  • Regular updates: New upstream versions and DHI CVE fixes automatically flow into chart releases.
  • Enterprise-grade security: Charts are built with our SLSA Level 3 build system and include signed provenance for compliance.
  • Customer-driven roadmap: We are guided by your feedback, so your input has a direct impact on what we prioritize.

Docker’s Trusted Image Catalogs: DHI and more

It’s worth noting that whether you’re looking for community continuity or enterprise-grade assurance, Docker has you covered:

Docker Official Images (DOI):

  • Free and widely available
  • Maintained with upstream communities
  • Billions of pulls every month
  • Stable, trustworthy foundation

Docker Hardened Images (DHI):

  • Enterprise-ready
  • Minimal, non-root by default, near-zero CVEs
  • SLA-backed with fast CVE patching
  • Compliance-ready with signed provenance and SBOMs

Together, DOI and DHI give organizations choice: a free, stable foundation for development, or an enterprise-grade hardened catalog with charts for production. If you rely on Docker Official Images, rest assured: they remain free, stable, and community-driven. You can rely on them for a solid foundation for your open source workloads.

Join the beta: Help shape Helm charts in the DHI Catalog

Helm charts in the DHI Catalog are now in invite-only beta as of October 2025. We are working closely with a set of customers to prioritize which charts matter most and ensure migration is smooth.

Participation is open via our interest form, and we welcome your feedback.

Sign up for the beta today! 
