Announcing Docker Hardened System Packages
https://www.docker.com/blog/announcing-docker-hardened-system-packages/
Tue, 03 Mar 2026 20:30:00 +0000

Your Package Manager, Now with a Security Upgrade

Last December, we made Docker Hardened Images (DHI) free because we believe secure, minimal, production-ready images should be the default. Every developer deserves strong security at no cost. It should not be complicated or locked behind a paywall.

From the start, flexibility mattered just as much as security. Unlike opaque, proprietary hardened alternatives, DHI is built on trusted open source foundations like Alpine and Debian. That gives teams true multi-distro flexibility without forcing change. If you run Alpine, stay on Alpine. If Debian is your standard, keep it. DHI strengthens what you already use. It does not require you to replace it.

Today, we are extending that philosophy beyond images.

With Docker Hardened System Packages, we’re driving security deeper into the stack. Every package is built on the same secure supply chain foundation: source-built and patched by Docker, cryptographically attested, and backed by an SLA.

The best part? Multi-distro support by design.

The result is consistent, end-to-end hardening across environments with the production-grade reliability teams expect.

Since introducing DHI Community (our OSS tier), interest has surged. The DHI catalog has expanded from more than 1,000 to over 2,000 hardened container images. Its openness and ability to meet teams where they are have accelerated adoption across the ecosystem. Companies of all sizes, along with a growing number of open source projects, are making DHI their standard for secure containers.

Just consider this short selection of examples:
  • n8n.io has moved its production infrastructure to DHI; they share why and how in a recent webinar
  • Medplum, an open-source electronic health records platform managing data for 20+ million patients, has standardized on DHI
  • Adobe uses DHI because it aligns closely with its security posture and is compatible with its developer tooling
  • Attentive co-authored an e-book with Docker on moving from POC to production with DHI

Docker Hardened System Packages: Going deeper into the container

From day one, Docker has built and secured the most critical operating system packages to deliver on our CVE remediation commitments. That’s how we continuously maintain near-zero CVEs in DHI images. At the same time, we recognize that many teams extend our minimal base images with additional upstream packages to meet their specific requirements. To support that reality, we are expanding our catalog with more than 8,000 hardened Alpine packages, with Debian coverage coming soon.

This expansion gives teams greater flexibility without weakening their security posture. You can start with a DHI base image and tailor it to your needs while maintaining the same hardened supply chain guarantees. There is no need to switch distros to get continuous patching, verified builds through a SLSA Build Level 3 pipeline, and enterprise-grade assurances. Your teams can continue working with the Alpine and Debian environments they know, now backed by Docker’s secure build system from base image to system package.
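As a rough sketch, tailoring a DHI base might look like an ordinary Dockerfile. Everything here is illustrative: the image name and tag, the non-root user, and the assumption that your build variant has apk available with the DHI Enterprise package repository configured.

```dockerfile
# Illustrative only: image tag, user, and package names are assumptions.
# With the DHI Enterprise package repository configured, apk resolves these
# from Docker's hardened Alpine packages instead of upstream mirrors.
FROM dhi/node:24
USER root
RUN apk add --no-cache curl git
USER node
```

The point is that customization stays within the workflow you already know; only the package source changes.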

Why this matters for your security posture:

Complete provenance chain. Every package is built from source by Docker, attested, and cryptographically signed. From base image to final container, your provenance stays intact.

Faster vulnerability remediation. When a vulnerability is identified, we patch it at the package level and publish it to the catalog. Not image by image. That means fixes move faster and remediation scales across your entire container fleet.

Extending the near-zero CVE guarantee. DHI images maintain near-zero CVEs. Hardened System Packages extend that guarantee more broadly across the software ecosystem, covering the packages you add during customization.

Use hardened packages with your containers. DHI Enterprise customers get access to the secure packages repository, making it possible to use Hardened System Packages beyond DHI images. Integrate them into your own pipelines and across Alpine and Debian workloads throughout your environment.

The work we’re doing on our users’ behalf: Maintaining thousands of packages is continuous work. We monitor upstream projects, backport patches, test compatibility, rebuild when dependencies change, and generate attestations for every release. Alpine alone accounts for more than 8,000 packages today, soon approaching 10,000, with Debian next.

Making enterprise-grade security even more accessible

We’re also simplifying how teams access DHI. The full catalog of thousands of open-source images under Apache 2.0 now has a new name: DHI Community. There are no licensing changes; this is purely a rename, so all of that free goodness now has an easy name to refer to.

For teams that need SLA-backed CVE remediation and customization capabilities at a more accessible price point, we’re announcing a new pricing tier today, DHI Select. This new tier brings enterprise-grade security at a price of $5,000 per repo.

For organizations with more demanding requirements, including unlimited customizations, access to the Hardened System Packages repo, and extended lifecycle coverage for up to five years after upstream EOL, DHI Enterprise and the DHI Extended Lifecycle Support add-on remain available.

More options mean more teams can adopt the right level of security for where they are today.

Build with the standard that’s redefining container security

Docker’s momentum in securing the software supply chain is accelerating. We’re bringing security to more layers of the stack, making it easier for teams to build securely by default, for open source-based containers as well as your company’s internally-developed software. We’re also pushing toward a one-day (or shorter) timeline for critical CVE fixes. Each step builds on the last, moving us closer to end-to-end supply chain security for all of your critical applications.

Get started:

  • Join the n8n webinar to see how they’re running production workloads on DHI
  • Start your free trial and get access to the full DHI catalog, now with Docker Hardened System Packages

Hardened Images Are Free. Now What?
https://www.docker.com/blog/hardened-images-free-now-what/
Tue, 10 Feb 2026 14:00:00 +0000

Docker Hardened Images are now free, covering Alpine, Debian, and over 1,000 images, including databases, runtimes, and message buses. For security teams, this changes the economics of container vulnerability management.

DHI includes security fixes from Docker’s security team, which simplifies security response. Platform teams can pull the patched base image and redeploy quickly. But free hardened images raise a question: how should this change your security practice? Here’s how our thinking is evolving at Docker.

What Changes (and What Doesn’t)

DHI gives you a security “waterline.” Below the waterline, Docker owns vulnerability management. Above it, you do. When a scanner flags something in a DHI layer, it’s not actionable by your team. Everything above the DHI boundary remains yours.

The scope depends on which DHI images you use. A hardened Python image covers the OS and runtime, shrinking your surface to application code and direct dependencies. A hardened base image with your own runtime on top sets the boundary lower. The goal is to push your waterline as high as possible.

Vulnerabilities don’t disappear. Below the waterline, you need to pull patched DHI images promptly. Above it, you still own application code, dependencies, and anything you’ve layered on top.
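The waterline split can be sketched as a simple partition of scanner findings. This is a hypothetical illustration only: real scanners report layer attribution differently, and the digests and CVE IDs below are invented.

```python
# Sketch of the "waterline" split: findings introduced by DHI-owned layers
# are Docker's to remediate (pull the patched image); everything above the
# boundary remains yours. All data here is illustrative.

DHI_LAYERS = {"sha256:aaa", "sha256:bbb"}  # layers from the hardened base

findings = [
    {"cve": "CVE-2026-0001", "layer": "sha256:aaa"},  # below the waterline
    {"cve": "CVE-2026-0002", "layer": "sha256:ccc"},  # your app layer
]

docker_owned = [f for f in findings if f["layer"] in DHI_LAYERS]
yours = [f for f in findings if f["layer"] not in DHI_LAYERS]

for f in docker_owned:
    print(f["cve"], "-> pull the patched DHI image")
for f in yours:
    print(f["cve"], "-> triage and remediate in your layers")
```

The useful property is that the rule is mechanical: ownership follows the layer boundary, not a per-CVE judgment call.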

Supply Chain Isolation

DHI provides supply chain isolation beyond CVE remediation.

Community images like python:3.11 carry implicit trust assumptions: no compromised maintainer credentials, no malicious layer injection via tag overwrite, no tampering since your last pull. The Shai Hulud campaign(s) demonstrated the consequences when attackers exploit stolen PATs and tag mutability to propagate through the ecosystem.

DHI images come from a controlled namespace where Docker rebuilds from source with review processes and cooldown periods. Supply chain attacks that burn through community images stop at the DHI boundary. You’re not immune to all supply chain risk, but you’ve eliminated exposure to attacks that exploit community image trust models.

This is a different value proposition than CVE reduction. It’s isolation from an entire class of increasingly sophisticated attacks.

The Container Image as the Unit of Assessment

Security scanning is fragmented. Dependency scanning, SAST, and SCA all run in different contexts, and none has full visibility into how everything fits together at deployment time.

The container image is where all of this converges. It’s the actual deployment artifact, which makes it the checkpoint where you can guarantee uniform enforcement from developer workstation to production. The same evaluation criteria you run locally after docker build can be identical to what runs in CI and what gates production deployments.

This doesn’t need to replace earlier pipeline scanning altogether. It means the image is where you enforce policy consistency and build a coherent audit trail that maps directly to what you’re deploying.

Policy-Driven Automation

Every enterprise has a vulnerability management policy. The gap is usually between policy (PDFs and wikis) and practice (spreadsheets and Jira tickets).

DHI makes that gap easier to close by dramatically reducing the volume of findings that require policy decisions in the first place. When your scanner returns 50 CVEs instead of 500, even basic severity filtering becomes a workable triage system rather than an overwhelming backlog.

A simple, achievable policy might include the following:

  • High and critical severity vulnerabilities require remediation or documented exception
  • Medium and lower severity issues are accepted with periodic review
  • CISA KEV vulnerabilities are always in scope

Most scanning platforms support this level of filtering natively, including Grype, Trivy, Snyk, Wiz, Prisma Cloud, Aqua, and Docker Scout. You define your severity thresholds, apply them automatically, and surface only what requires human judgment.
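A policy like the one above is small enough to express directly. The sketch below is illustrative (hypothetical findings, simplified fields), showing how severity thresholds plus a KEV flag partition findings into "act now" and "accept with periodic review":

```python
# Sketch of the triage policy above: high/critical (or CISA KEV-listed)
# findings need remediation or a documented exception; the rest are
# accepted with periodic review. Findings are invented, not scanner output.

ACTION_SEVERITIES = {"high", "critical"}

findings = [
    {"id": "CVE-2026-1111", "severity": "critical", "kev": False},
    {"id": "CVE-2026-2222", "severity": "medium",   "kev": True},   # KEV: always in scope
    {"id": "CVE-2026-3333", "severity": "low",      "kev": False},
]

needs_action = [f for f in findings
                if f["severity"] in ACTION_SEVERITIES or f["kev"]]
accepted = [f for f in findings if f not in needs_action]

for f in needs_action:
    print(f["id"], "-> remediate or document an exception")
for f in accepted:
    print(f["id"], "-> accepted, revisit at next review")
```

Most scanners let you express the same rule declaratively; the sketch just makes the decision logic explicit.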

For teams wanting tighter integration with DHI coverage data, Docker Scout evaluates policies against DHI status directly. Third-party scanners can achieve similar results through pipeline scripting or by exporting DHI coverage information for comparison.

The goal isn’t perfect automation but rather reducing noise enough that your existing policy becomes enforceable without burning out your engineers.

VEX: What You Can Do Today

Docker Hardened Images ship with VEX attestations that suppress CVEs Docker has assessed as not exploitable in context. The natural extension is for your teams to add their own VEX statements for application-layer findings.

Here’s what your security team can do today:

Consume DHI VEX data. Grype (v0.65+), Trivy, Wiz, and Docker Scout all ingest DHI VEX attestations automatically or via flags. Scanners without VEX support can still use extracted attestations to inform manual triage.

Write your own VEX statements. OpenVEX provides the JSON format. Use vexctl to generate and sign statements.

Attach VEX to images. Docker recommends docker scout attestation add for attaching VEX to images already in a registry:

docker scout attestation add \
  --file ./cve-2024-1234.vex.json \
  --predicate-type https://openvex.dev/ns/v0.2.0 \
  <image>

Alternatively, COPY VEX documents into the image filesystem at build time, though this prevents updates without rebuilding.

Configure scanner VEX ingestion. The workflow: scan, identify investigated findings, document as VEX, feed back into scanner config. Future scans automatically suppress assessed vulnerabilities.
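For illustration, a minimal OpenVEX document can be assembled in a few lines. The CVE, product identifier, and author below are hypothetical, and in practice you would generate and sign statements with vexctl rather than by hand:

```python
# Sketch of a minimal OpenVEX (v0.2.0) document. The CVE, product purl,
# and author are hypothetical placeholders.
import json

doc = {
    "@context": "https://openvex.dev/ns/v0.2.0",
    "@id": "https://example.com/vex/cve-2024-1234",   # your document ID
    "author": "Example App Security Team",
    "timestamp": "2026-01-01T00:00:00Z",
    "version": 1,
    "statements": [{
        "vulnerability": {"name": "CVE-2024-1234"},
        "products": [{"@id": "pkg:oci/my-app@sha256:abc123"}],  # placeholder
        "status": "not_affected",
        "justification": "vulnerable_code_not_in_execute_path",
    }],
}

with open("cve-2024-1234.vex.json", "w") as f:
    json.dump(doc, f, indent=2)
```

A document like this is what you would then attach with docker scout attestation add, as shown earlier in this section.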

Compliance: What DHI Actually Provides

Compliance frameworks such as ISO 27001, SOC 2, and the EU Cyber Resilience Act require systematic, auditable vulnerability management. DHI addresses specific control requirements:

Vulnerability management documentation (ISO 27001 A.8.8, SOC 2 CC7.1). The waterline model provides a defensible answer to “how do you handle base image vulnerabilities?” Point to DHI, explain the attestation model, show policy for everything above the waterline.

Continuous monitoring evidence. DHI images rebuild and re-scan on a defined cadence. New digests mean current assessments. Combined with your scanner’s continuous monitoring, you demonstrate ongoing evaluation rather than point-in-time checks.

Remediation traceability. VEX attestations create machine-readable records of how each CVE was handled. When auditors ask about specific CVEs in specific deployments, you have answers tied to image digests and timestamps.

CRA alignment. The Cyber Resilience Act requires “due diligence” vulnerability handling and SBOMs. DHI images include SBOM attestations, and VEX aligns with CRA expectations for exploitability documentation.

This won’t satisfy every audit question, but it provides the foundation most organizations lack.

What to Do After You Read This Post

  1. Identify high-volume base images. Check Docker Hub’s Hardened Images catalog (My Hub → Hardened Images → Catalog) for coverage of your most-used images (Python, Node, Go, Alpine, Debian).
  2. Swap one image. Pick a non-critical service, change the FROM line to DHI equivalent, rebuild, scan, compare results.
  3. Configure policy-based filtering. Set up your scanner to distinguish DHI-covered vulnerabilities from application-layer findings. Use Docker Scout or Wiz for native VEX integration, or configure Grype/Trivy ignore policies based on extracted VEX data.
  4. Document your waterline. Write down what DHI covers and what remains your responsibility. This becomes your policy reference and audit documentation.
  5. Start a VEX practice. Convert one informally-documented vulnerability assessment into a VEX statement and attach it to the relevant image.

DHI solves specific, expensive problems around base image vulnerabilities and supply chain trust. The opportunity is building a practice around it that scales.

The Bigger Picture

DHI coverage is expanding. Today it might cover your OS layer; tomorrow it extends through runtimes and into hardened libraries. Build your framework to be agnostic to where the boundary sits. The question is always the same: what has Docker attested to, and what remains yours to assess?

The methodology Docker uses for DHI (policy-driven assessment, VEX attestations, auditable decisions) extends into your application layer. We can’t own your custom code, but we can provide the framework for consistent practices above the waterline. Whether you use Scout, Wiz, Grype, Trivy, or another scanner, the pattern is the same. You can let DHI handle what it covers, automate policy for what remains, and document decisions in formats that travel with artifacts.

At Docker, we’re using DHI internally to build this vulnerability management model. The framework stays constant regardless of how much of our stack is hardened today versus a year from now. Only the boundary moves.

The hardened images are free. The VEX attestations are included. What’s left is integrating these pieces into a coherent security practice where the container is the unit of truth, policy drives automation, and every vulnerability decision is documented by default.

For organizations that require stronger guarantees, FIPS-enabled and STIG-ready images, and customizations, DHI Enterprise is tailor-made for those use cases. Get in touch with the Docker team if you would like a demo. If you’re still exploring, take a look at the catalog (no signup needed) or take DHI Enterprise for a spin with a free trial.

Safer Docker Hub Pulls via a Sonatype-Protected Proxy
https://www.docker.com/blog/safer-docker-hub-pulls-via-a-sonatype-protected-proxy/
Wed, 14 Jan 2026 20:27:49 +0000

Why a “protected repo”?

Modern teams depend on public container images, yet most environments lack a single, auditable control point for what gets pulled and when. This often leads to three operational challenges:

  • Inconsistent or improvised base images that drift across teams and pipelines.
  • Exposure to new CVEs when tags remain unchanged but upstream content does not.
  • Unreliable workflows due to rate limiting, throttling, or pull interruptions.

A protected repository addresses these challenges by evaluating images at the boundary between public sources and internal systems, ensuring only trusted content is available to the build process. 

Routing upstream pulls through a Nexus Repository Docker proxy that authenticates to Docker Hub and caches approved layers creates a security and reliability checkpoint. Repository Firewall inspects image layers and their components against configured policies and enforces the appropriate action, such as allow, quarantine, or block, based on the findings. This gives teams a standard, dependable entry point for base images. Approved content is cached to accelerate subsequent pulls, while malware and high-severity vulnerabilities are blocked before any layer reaches the developer’s environment.

Combining this workflow with curated sources such as Docker Official Images or Docker Hardened Images provides a stable, vetted baseline for the entire organization.

Docker Hub authentication (PAT/OAT) quick setup

Before configuring a Nexus Docker proxy, set up authenticated access to Docker Hub. Authentication prevents anonymous-pull rate limits and ensures that shared systems do not rely on personal developer credentials. Docker Hub supports two types of access tokens, and for proxies or CI/CD systems the recommended option is an Organization Access Token (OAT).

Choose the appropriate token type

Personal Access Token (PAT): Use a PAT when authentication is tied to an individual account, such as local development or small teams.

  • Tied to a single user account
  • Required for CLI logins when the user enables two-factor authentication
  • Not recommended for shared infrastructure

Organization Access Token (OAT) (recommended): Use an OAT when authentication is needed for systems that serve multiple users or teams.

  • Associated with an organization rather than an individual
  • Suitable for CI/CD systems, build infrastructure, and Nexus Docker proxies
  • Compatible with SSO and 2FA enforcement
  • Supports granular permissions and revocation
  • Requires a Docker Hub Team or Business plan

Create an access token

To create a Personal Access Token (PAT):

  • Open Docker Hub account settings (click on your Hub avatar in the top right corner).
  • Select “Personal access tokens”.
  • Click on “Generate new token”.
  • Define token Name, Expiration and Access permissions.
  • Choose “Generate” and save the value immediately, as it cannot be viewed again.

To create an Organization Access Token (OAT):

  1. Sign in to Docker Home and select your organization.
  2. Select Admin Console, then Access tokens.
  3. Select Generate access token.
  4. Expand the Repository drop-down and assign only the required permissions, typically read/pull for proxies or CI systems.
  5. Select Generate token. Copy the token that appears on the screen and save it. You won’t be able to retrieve the token once you exit the screen.

Recommended practices

  • Scope tokens to the minimum necessary permissions
  • Rotate tokens periodically
  • Revoke tokens immediately if they are exposed
  • Monitor last-used timestamps to confirm expected usage patterns

Step-by-step: create a Docker Hub proxy

The next step after configuring authentication is to make your protected repo operational by turning Nexus into your organization’s Docker Hub proxy. A Docker proxy repository in Nexus Repository provides a single, policy-enforced registry endpoint that performs upstream pulls on behalf of developers and CI, caches layers locally for faster and more reliable builds, and centralizes access and audit trails so teams can manage credentials and image usage from one place.

To create the proxy:

  • As an administrator, navigate to the Settings view (gear icon).
  • Open Repositories and select Create repository.
  • Choose docker (proxy) as the repository type.
  • Configure the following settings:
      ◦ Remote storage: https://registry-1.docker.io
      ◦ Docker V1 API: Enabled
      ◦ Index type: Select “Use Docker Hub”
      ◦ Blob store and network settings as appropriate for your environment
  • Save the repository to finalize the configuration.

Provide a Clean Pull Endpoint

To keep developer workflows simple, expose the proxy at a stable, organization-wide hostname. This avoids custom ports or per-team configurations and makes the proxy a transparent drop-in replacement for direct Docker Hub pulls.

Common examples include:

docker-proxy.company.com
hub.company.internal

Use a reverse proxy or ingress controller to route this hostname to the Nexus proxy repository.
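One common way to provide that stable hostname is a small reverse-proxy rule. The nginx sketch below is illustrative, not Sonatype’s recommended configuration; the hostname, upstream address, connector port, and certificate paths are all placeholders for your environment.

```nginx
# Illustrative only: Nexus serves the Docker registry API on a repository
# connector port (8083 here is an example, not a default you can rely on).
server {
    listen 443 ssl;
    server_name docker-proxy.company.com;
    ssl_certificate     /etc/ssl/docker-proxy.crt;
    ssl_certificate_key /etc/ssl/docker-proxy.key;
    client_max_body_size 0;   # image layers can be large
    location /v2/ {
        proxy_pass http://nexus.internal:8083;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

Whatever ingress you use, the goal is the same: developers see one hostname and the standard registry API, and everything behind it is yours to operate.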

Validate Connectivity

Once the proxy is exposed, verify that it responds correctly and can authenticate to Docker Hub.
Run:

docker login docker-proxy.company.com
docker pull docker-proxy.company.com/dhi/node:24

A successful pull confirms that the proxy is functioning correctly, upstream connectivity is working, and authenticated access is in place.

Turn on Repository Firewall for containers

Once the Docker proxy is in place, enable Repository Firewall so images are inspected before they reach internal systems. Repository Firewall enforces policy at download time, stopping malware and high-severity vulnerabilities at the registry edge, reducing the blast radius of newly disclosed issues and cutting remediation work for engineering teams.

To enable Firewall for the proxy repository:

  1. As an administrator, navigate to the Settings view (gear icon).
  2. Navigate to Capabilities under the System menu.
  3. Create a ‘Firewall Audit and Quarantine’ capability for your Docker proxy repository.
  4. Configure your policies to quarantine new violating components and protect against introducing risk.
  5. Inform your development teams of the change to set expectations.

Understanding “Quarantine” vs. “Audit”

Repository Firewall evaluates each image as it is requested:

  • Quarantine – Images that violate a policy are blocked and isolated. They do not reach the developer or CI system. The user receives clear feedback indicating the reason for the failure.
  • Audit – Images that pass the policies are served normally and cached. This improves performance and makes the proxy a consistent, reliable source of trusted base images.

Enabling Repository Firewall gives you immediate, download-time protection and the telemetry to operate it confidently. Start with conservative policies (quarantine on malware, and on CVSS ≥ 8), monitor violations and cache hit rate, tune thresholds based on real-world telemetry, and move to stricter block enforcement once false positives are resolved and teams are comfortable with the workflow.
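The conservative starting policy described above amounts to a simple decision rule. The sketch below is illustrative only; Repository Firewall evaluates real policy rules server-side, and the component fields here are invented.

```python
# Sketch of a conservative starting policy: quarantine on malware or on
# CVSS >= 8, audit (serve and cache) everything else. Component data is
# illustrative, not real Firewall input.

def firewall_action(component):
    """Return the action a conservative policy would take for a component."""
    if component["malware"] or component["max_cvss"] >= 8.0:
        return "quarantine"
    return "audit"

print(firewall_action({"name": "library/node:20", "malware": False, "max_cvss": 9.8}))
print(firewall_action({"name": "dhi/node:24", "malware": False, "max_cvss": 3.1}))
```

Starting with a rule this simple keeps false positives manageable while you gather telemetry, then you tighten thresholds as teams adjust.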

What a blocked pull looks like

After enabling Repository Firewall and configuring your baseline policies, any pull that fails those checks is denied at the registry edge and no image layers are downloaded. By default Nexus returns a non-descriptive 404 to avoid exposing policy or vulnerability details, though you can surface a short, internal-facing failure message.
As an example, if Firewall is enabled and your CVSS threshold policy is configured correctly, the following pull should fail with a 404:

docker pull docker-proxy.company.com/library/node:20

This confirms that:

  • The request is passing through the proxy.
  • Repository Firewall is inspecting the image metadata.
  • Policy violations are blocked before any image layers are downloaded.

In the Firewall UI, you can open the proxy repository and view the recorded violations. The details can include detected CVEs, severity information, and the policy that triggered the denial. This provides administrators with visibility and confirms that enforcement is functioning as expected.


Additionally, the Quarantined Containers dashboard lists every image that Repository Firewall has blocked, showing the triggering policy and severity so teams can triage with full context. Administrators use this view to review evidence, add remediation notes, and release or delete quarantined items; note that malware is quarantined by default while other violations are quarantined only when their rules are set to Fail at the Proxy stage.


Fix forward: choose an approved base and succeed

Once Policy Enforcement is validated, the next step is to pull a base image that complies with your organization’s security rules. This shows what the normal developer experience looks like when using approved and trusted content.

Pull a compliant tag through the proxy:

docker pull docker-proxy.company.com/dhi/node:24

This request passes the Repository Firewall checks, and the image is pulled successfully. The proxy caches each layer locally so that future pulls are faster and no longer affected by upstream rate limits or registry availability.

If you repeat the pull, the second request is noticeably quicker because it is served directly from the cache. This illustrates the everyday workflow developers should expect: trusted images, predictable performance, and fewer interruptions.

Get started: protect your Docker pulls

A Sonatype-protected Docker proxy gives developers one policy-compliant registry endpoint for image pulls. Layers are cached for speed, policy violations surface with actionable guidance, and teams work with vetted base images with the same Docker CLI workflows they already rely on. When paired with trusted sources such as Docker Hardened Images, this pattern delivers predictable baselines with minimal developer friction.
Ready to try this pattern? Start with the setup steps above.

theCUBE Research economic validation of Docker’s development platform
https://www.docker.com/blog/thecube-research-economic-validation-of-docker-development-platform/
Thu, 30 Oct 2025 11:46:28 +0000

Docker’s ROI and impact on agentic AI, security, and developer productivity.

theCUBE Research surveyed ~400 IT and AppDev professionals at leading global enterprises to investigate Docker’s ROI and impact on agentic AI development, software supply chain security, and developer productivity. The industry context is that enterprise developers face mounting pressure to rapidly ship features, build agentic AI applications, and maintain security, all while navigating a fragmented array of development tools and open source code that consume engineering cycles and introduce security risks. Docker transformed software development through containers and DevSecOps workflows, and is now doing the same for agentic AI development and software supply chain security. theCUBE Research quantified Docker’s impact: teams build agentic AI apps faster, achieve near-zero CVEs, remediate vulnerabilities before exploits, ship modern cloud-native applications, save developer hours, and generate financial returns.

Keep reading for key highlights and analysis. Download theCUBE Research report and ebook to take a deep dive.

Agentic AI development streamlined using familiar technologies

Developers can build, run, and share agents and compose agentic systems using familiar Docker container workflows. To do this, developers can build agents safely using Docker MCP Gateway, Catalog, and Toolkit; run agents securely with Docker Sandboxes; and run models with Docker Model Runner. These capabilities align with theCUBE Research findings that 87% of organizations reduced AI setup time by over 25% and 80% report accelerating AI time-to-market by at least 26%. Using Docker’s modern and secure software delivery practices, development teams can implement AI feature experiments faster, testing in days agentic AI capabilities that previously took months. Nearly 78% of developers experienced significant improvement in the standardization and streamlining of AI development workflows, enabling better testing and validation of AI models. Docker helps enterprises generate business advantages by deploying new customer experiences that leverage agentic AI applications. This is phenomenal, given the nascent stage of agentic AI development in enterprises.

Software supply chain security and innovation can move in lockstep

Security engineering and vulnerability remediation can slow development to a crawl. Furthermore, checkpoints or controls may be applied too late in the software development cycle, or after dangerous exploits, creating compounded friction between security teams seeking to mitigate vulnerabilities and developers seeking to rapidly ship features. Docker embeds security directly into development workflows through vulnerability analysis and continuously-patched certified container images. theCUBE Research analysis supports these Docker security capabilities: 79% of organizations find Docker extremely or very effective at maintaining security & compliance, while 95% of respondents reported that Docker improved their ability to identify and remediate vulnerabilities. By making it very simple for developers to use secure images as a default, Docker enables engineering teams to plan, build, and deploy securely without sacrificing feature velocity or creating deployment bottlenecks. Security and innovation can move in lockstep because Docker concurrently secures software supply chains and eliminates vulnerabilities.

Developer productivity becomes a competitive advantage

Consistent container environments eliminate friction, accelerate software delivery cycles, and enable teams to focus on building features rather than overcoming infrastructure challenges. When developers spend less time on environment setup and troubleshooting, they ship more features. Application features that previously took months now reach customers in weeks. The research demonstrates Docker’s ability to increase developer productivity. 72% of organizations reported significant productivity gains in development workflows, while 75% have transformed or adopted DevOps practices when using Docker. Furthermore, when it comes to AI and supply chain security, the findings mentioned above further support how Docker unlocks developer productivity.

Financial returns exceed expectations

CFOs demand quantifiable returns for technology investments, and Docker delivers them. 95% of organizations reported substantial annual savings, with 43% reporting $50,000-$250,000 in cost reductions from infrastructure efficiency, reduced rework, and faster time-to-market. The ROI story is equally compelling: 69% of organizations report ROI exceeding 101%, with many achieving ROI above 500%. When factoring in faster feature delivery, improved developer satisfaction, and reduced security incidents, the business case for Docker becomes even more tangible. The direct costs of a security breach can surpass $500 million, so mitigating even a fraction of this cost provides a compelling financial justification for enterprises to deploy Docker to every developer.

Modernization and cloud native apps remain top of mind

For enterprises that maintain extensive legacy systems, Docker serves as a proven catalyst for cloud-native transformation at scale. Results show that nearly nine in ten (88%) organizations report Docker has enabled modernization of at least 10% of their applications, with half achieving modernization across 31-60% of workloads and another 20% modernizing over 60%. Docker accelerates the shift from monolithic architectures to modern containerized cloud-native environments while also delivering substantial business value. For example, 37% of organizations report 26% to >50% faster product time-to-market, and 72% report annual cost savings ranging from $50,000 to over $1 million.

Learn more about Docker’s impact on enterprise software development

Docker has evolved from a containerization suite into a development platform for testing, building, securing, and deploying modern software, including agentic AI applications. Docker enables enterprises to apply proven containerization and DevSecOps practices to agentic AI development and software supply chain security.

Download the full report and the ebook from theCUBE Research analysis (below) to learn about Docker’s impact on developer productivity, software supply chain security, agentic AI application development, CI/CD and DevSecOps, modernization, cost savings, and ROI. Learn how enterprises leverage Docker to transform application development and win in markets where speed and innovation determine success.

theCUBE Research economic validation of Docker’s development platform

> Download the Report

> Download the eBook

