Enterprise – Docker https://www.docker.com Thu, 12 Mar 2026 12:50:52 +0000 en-US hourly 1 https://wordpress.org/?v=6.9 https://www.docker.com/app/uploads/2024/02/cropped-docker-logo-favicon-32x32.png Enterprise – Docker https://www.docker.com 32 32 Flexibility Over Lock-In: The Enterprise Shift in Agent Strategy https://www.docker.com/blog/enterprise-shift-in-agent-strategy/ Thu, 12 Mar 2026 12:50:49 +0000 https://www.docker.com/?p=85896 Building agents is now a strategic priority for 95% of respondents in our latest State of Agentic AI research, which surveyed more than 800 developers and decision-makers worldwide. The shift is happening quickly: agent adoption has moved beyond experiments and demos into early operational maturity. But the road to enterprise-scale adoption is still complex. The foundations are forming, yet far from fully integrated, production-grade platforms that teams can confidently build on.

Security continues to surface as a top blocker to agent adoption. But it’s not the only one. Technical complexity is rising fast as well. Vendor lock-in is a big concern for the vast majority of the respondents surveyed. 

So how do teams cut through the complexity and prepare for a world of multi-model, multi-tool, and multi-framework agents, while avoiding vendor lock-in in their agent workflows? In this blog, we break down the key findings from our research: what teams are actually using to power their agentic workloads, and what it takes to build a more scalable, future-ready agent architecture.

Multi-model and multi-cloud are the new normal. And complexity is rising

Our recent Agent AI study found that enterprises are embracing multi-model and multi-cloud architectures to gain greater control over performance, customization, privacy, and compliance. Multi-model is now the norm. Nearly two-thirds of organizations (61%) combine cloud-hosted and local models. And complexity doesn’t stop there: 46% report using between four and six models within their agents, while just 2% rely on a single model.

lock in fear blog fig 1 1

Deployment environments are just as diverse. 79% of respondents operate agents across two or more environments; 51% in public clouds, 40% on-premises, and 32% on serverless platforms.

This architectural flexibility delivers control, but it also multiplies orchestration and governance efforts. Coordinating models, tools, frameworks, and environments is consistently cited as one of the hardest parts of building agents. Nearly half of respondents (48%) identify operational complexity in managing multiple components as their biggest challenge, while 43% point to increased security exposure driven by orchestration sprawl.

The strategic shift away from vendor lock-in

As organizations double down on agent investments, concerns about supply chain fragility are rising. Seventy-six percent of global respondents report active worries about vendor lock-in.

 Seventy-six percent of global respondents report active concerns about vendor lock-in

Rather than consolidating, teams are responding by diversifying. They’re distributing workloads across multiple models, tools, and cloud environments to reduce dependency and maintain leverage. Among the 61% of organizations using both cloud-hosted and locally hosted models, the primary drivers are control (64%), data privacy (60%), and compliance (54%). Cost ranks significantly lower at 41%, underscoring that flexibility and governance, not cost savings are shaping architectural decisions.

Containers power the next wave of agent adoption

Containerization is already foundational to agent development. Nearly all organizations surveyed (94%) use containers in their agent development or production workflows and the remainder plan to adopt them.

Nearly all organizations surveyed (94%) use containers in their agent development or production workflows and the remainder plan to adopt them.

As agent initiatives scale, teams are extending the same cloud-native practices that power their application pipelines such as microservices architectures, CI/CD, and container orchestration to support agent workloads. Containers are not an add-on; they are the operational backbone. In fact, 94% of teams building agents rely on them.

At the same time, early signs of orchestration standardization are emerging. Among teams building agents with Docker, 40% are using Docker Compose as their orchestration layer, a signal that familiar, container-based tooling is becoming a practical coordination layer for increasingly complex agent systems.

The agentic future won’t be monolithic

The agentic future won’t be monolithic. It’s already multi-cloud, multi-model, and multi-environment. That reality makes open standards and portable infrastructure foundational for sustaining enterprise trust and long-term flexibility.

What’s needed next isn’t reinvention, but standardization around an open, interoperable and portable infrastructure: the flexibility to work across any model, tool, and agent framework, secure-by-default runtimes, consistent orchestration and integrated policy controls. Teams that invest now in this container-based trust layer will move beyond isolated productivity gains to sustainable enterprise-wide outcomes while reducing vendor lock-in risk.

Download the full Agentic AI report for more insights and recommendations on how to scale agents for enterprise.  

Join us on March 25, 2026, for a webinar where we’ll walk through the key findings and the strategies that can help you prioritize what comes next.

Learn more:

]]>
What’s Holding Back AI Agents? It’s Still Security https://www.docker.com/blog/whats-holding-back-ai-agents-its-still-security/ Tue, 10 Mar 2026 12:59:28 +0000 https://www.docker.com/?p=85891 It’s hard to find a team today that isn’t talking about agents. For most organizations, this isn’t a “someday” project anymore. Building agents is a strategic priority for 95% of respondents that we surveyed across the globe with 800+ developers and decision makers in our latest State of Agentic AI research. The shift is happening fast: agent adoption has moved beyond experiments and demos into something closer to early operational maturity. 60% of organizations already report having AI agents in production, though a third of those remain in early stages. 

Agent adoption today is driven by a pragmatic focus on productivity, efficiency, and operational transformation, not revenue growth or cost reduction. Early adoption is concentrated in internal, productivity-focused use cases, especially across software, infrastructure, and operations. The feedback loops are fast, and the risks are easier to control. 

whats holding agents back blog fig 1

So what’s holding back agent scaling? Friction shows up and nearly all roads lead to the same place: AI agent security. 

AI agent security isn’t one issue it’s the constraint

When teams talk about what’s holding them back, AI agent security rises to the top. In the same survey, 40% of respondents cite security as their top blocker when building agents. The reason it hits so hard is that it’s not confined to a single layer of the stack. It shows up everywhere, and it compounds as deployments grow.

For starters, when it comes to infrastructure, as organizations expand agent deployments, teams emphasize the need for secure sandboxing and runtime isolation, even for internal agents.

At the operations layer, complexity becomes a security problem. Once you have more tools, more integrations, and more orchestration logic, it gets harder to see what’s happening end-to-end and harder to control it. Our latest research data reflects that sprawl: over a third of respondents report challenges coordinating multiple tools, and a comparable share say integrations introduce security or compliance risk. That’s a classic pattern: operational complexity creates blind spots, and blind spots become exposure.

45% of organizations say the biggest challenge is ensuring tools are secure, trusted, and enterprise-ready.

And at the governance layer, enterprises want something simple: consistency. They want guardrails, policy enforcement, and auditability that work across teams and workflows. But current tooling isn’t meeting that bar yet. In fact, 45% of organizations say the biggest challenge is ensuring tools are secure, trusted, and enterprise-ready. That’s not a minor complaint: it’s the difference between “we can try this” and “we can scale this.”

MCP is popular but not ready for enterprise

Many teams are adopting Model Context Protocol (MCP) because it gives agents a standardized way to connect to tools, data, and external systems, making agents more useful and customized.  Among respondents further along in their agent journey,  85% say they’re familiar with MCP and two-thirds say they actively use it across personal and professional projects. 

Research data suggests that most teams are operating in what could be described as “leap-of-faith mode” when it comes to MCP, adopting the protocol without security guarantees and operational controls they would demand from mature enterprise infrastructure.

But the security story hasn’t caught up yet. Teams adopt MCP because it works, but they do so without the security guarantees and operational controls they would expect from mature enterprise infrastructure. For teams earlier in their agentic journey: 46% of them identify  security and compliance as the top challenge with MCP.

Organizations are increasingly watching for threats like prompt injection and tool poisoning, along with the more foundational issues of access control, credentials, and authentication. The immaturity and security challenges of current MCP tooling make for a fragile foundation at this stage of agentic adoption.

Conclusion and recommendations

Ai agent security is what sets the speed limit for agentic AI in the enterprise. Organizations aren’t lacking interest, they’re lacking confidence that today’s tooling is enterprise-ready, that access controls can be enforced reliably, and that agents can be kept safely isolated from sensitive systems.  

The path forward is clear. Unlocking agents’ full potential will require new platforms built for enterprise scale, with secure-by-default foundations, strong governance, and policy enforcement that’s integrated, not bolted on.

Download the full Agentic AI report for more insights and recommendations on how to scale agents for enterprise. 

Join us on March 25, 2026, for a webinar where we’ll walk through the key findings and the strategies that can help you prioritize what comes next.

Learn more:

]]>
State of Agentic AI Report: Key Findings https://www.docker.com/blog/state-of-agentic-ai-key-findings/ Fri, 20 Feb 2026 17:18:29 +0000 https://www.docker.com/?p=85400 Based on Docker’s State of Agentic AI report, a global survey of more than 800 developers, platform engineers, and technology decision-makers, this blog summarizes key findings of what’s really happening as agentic AI scales within organizations. Drawing on insights from decision-makers and purchase influencers worldwide, we’ll give you a preview on not only where teams are seeing early wins but also what’s still missing to move from experimentation to enterprise-grade adoption.

Rapid adoption, early maturity

60% of organizations already have AI agents in production, and 94% view building agents as a strategic priority, but most deployments remain internal and focused on productivity and operational efficiency.

Security and complexity are the top barriers

40% of respondents cite security as the #1 challenge in scaling agentic AI, with 45% struggling to ensure tools are secure and enterprise-ready. Technical complexity compounds the challenge. One in three organizations (33%) report orchestration difficulties as multi-model and multi-cloud environments proliferate (79% of organizations run agents across two or more environments).

MCP shows promise but isn’t enterprise-ready

85% of teams are familiar with the Model Context Protocol (MCP), yet most report significant security, configuration, and manageability issues that prevent production-scale deployment.

Want the full picture? Download the latest State of Agentic AI report to explore deeper insights and practical recommendations for scaling agentic AI in your organization.

Fear of vendor lock-in is real

Enterprises worry about dependencies in core agent and agentic infrastructure layers such as model hosting, LLM providers, and even cloud platforms. Seventy-six percent of global  respondents report active concerns about vendor lock-in, rising to 88% in France, 83%
in Japan, and 82% in the UK. 

Containerization remains foundational

94% use containers for agent development or production, and 98% follow the same cloud-native workflows as traditional software, establishing containers as the proven substrate for agentic AI infrastructure.

Long-term outlook

Rather than a “year of the agents,” the data points to a decade-long transformation. Organizations are laying the governance and trust foundations now for scalable, enterprise-grade agent ecosystems.

agentic ai blog

The path forward

The path forward doesn’t require reinvention so much as consolidation around a trust layer: access to trusted content and components that can be safely discovered and reused; secure-by-default runtimes; standardized orchestration and policy; and portable, auditable packaging.

Agentic AI’s near-term value is already real in internal workflows; unlocking the next wave depends on standardizing how we secure, orchestrate, and ship agents. Teams that invest now in this trust layer, on top of the container foundations they already know, will be first to scale agents from local productivity to durable, enterprise-wide outcomes.

Download the full Agentic AI report for more insights and recommendations on how to scale agents for enterprise.  

Learn more:

]]>
How Medplum Secured Their Healthcare Platform with Docker Hardened Images (DHI) https://www.docker.com/blog/medplum-healthcare-docker-hardened-images/ Thu, 19 Feb 2026 14:00:00 +0000 https://www.docker.com/?p=85305 Special thanks to Cody Ebberson and the Medplum team for their open-source contribution and for sharing their migration experience with the community. A real-world example of migrating a HIPAA-compliant EHR platform to DHI with minimal code changes.

Healthcare software runs on trust. When patient data is at stake, security isn’t just a feature but a fundamental requirement. For healthcare platform providers, proving that trust to enterprise customers is an ongoing challenge that requires continuous investment in security posture, compliance certifications, and vulnerability management.

That’s why we’re excited to share how Medplum, an open-source healthcare platform serving over 20 million patients, recently migrated to Docker Hardened Images (DHI). This migration demonstrates exactly what we designed DHI to deliver: enterprise-grade security with minimal friction. Medplum’s team made the switch with just 54 lines of changes across 5 files – a near net-zero code change that dramatically improved their security posture.

Medplum is a headless EHR; the platform handles patient data, clinical workflows, and compliance so developers can focus on building healthcare apps. Built by and for healthcare developers, the platform provides:

  • HIPAA and SOC2 compliance out of the box
  • FHIR R4 API for healthcare data interoperability
  • Self-hosted or managed deployment options
  • Support for 20+ million patients across hundreds of practices

With over 500,000 pulls on Docker Hub for their medplum-server image, Medplum has become a trusted foundation for healthcare developers worldwide. As an open-source project licensed under Apache 2.0, their entire codebase, including Docker configurations, is publicly available onGitHub. This transparency made their DHI migration a perfect case study for the community.

Diagram of Medplum as headless EHR

Caption: Medplum is a headless EHR; the platform handles patient data, clinical workflows, and compliance so developers can focus on building healthcare apps.

Medplum is developer-first. It’s not a plug-and-play low-code tool, it’s designed for engineering teams that want a strong FHIR-based foundation with full control over the codebase.

The Challenge: Vulnerability Noise and Security Toil

Healthcare software development comes with unique challenges. Integration with existing EHR systems, compliance with regulations like HIPAA, and the need for robust security all add complexity and cost to development cycles.

“The Medplum team found themselves facing a challenge common to many high-growth platforms: “Vulnerability Noise.” Even with lean base images, standard distributions often include non-essential packages that trigger security flags during enterprise audits. For a company helping others achieve HIPAA compliance, every “Low” or “Medium” CVE (Common Vulnerability and Exposure) requires investigation and documentation, creating significant “security toil” for their engineering team.”

Reshma Khilnani

CEO, Medplum

Medplum addresses this by providing a compliant foundation. But even with that foundation, their team found themselves facing another challenge common to high-growth platforms: “Vulnerability Noise.”

Healthcare is one of the most security-conscious industries. Medplum’s enterprise customers, including Series C and D funded digital health companies, don’t just ask about security; they actively verify it. These customers routinely scan Medplum’s Docker images as part of their security due diligence.

Even with lean base images, standard distributions often include non-essential packages that trigger security flags during enterprise audits. For a company helping others achieve HIPAA compliance, every “Low” or “Medium” CVE requires investigation and documentation. This creates significant “security toil” for their engineering team.

The First Attempt: Distroless

This wasn’t Medplum’s first attempt at solving the problem. Back in November 2024, the team investigated Google’s distroless images as a potential solution.

The motivations were similar to what DHI would later deliver:

  • Less surface area in production images, and therefore less CVE noise
  • Smaller images for faster deployments
  • Simpler build process without manual hardening scripts

The idea was sound. Distroless images strip away everything except the application runtime: no shell, no package manager, minimal attack surface. On paper, it was exactly what Medplum needed.

But the results were mixed. Image sizes actually increased. Build times went up. There were concerns about multi-architecture support for native dependencies. The PR was closed without merging.

The core problem remained: many CVEs in standard images simply aren’t actionable. Often there isn’t a fix available, so all you can do is document and explain why it doesn’t apply to your use case. And often the vulnerability is in a corner of the image you’re not even using, like Perl, which comes preinstalled on Debian but serves no purpose in a Node.js application.

Fully removing these unused components is the only real answer. The team knew they needed hardened images. They just hadn’t found the right solution yet.

The Solution: Docker Hardened Images

When Docker made Hardened Images freely available under Apache 2.0, Medplum’s team saw an opportunity to simplify their security posture while maintaining compatibility with their existing workflows.

By switching to Docker Hardened Images, Medplum was able to offload the repetitive work of OS-level hardening – like configuring non-root users and stripping out unnecessary binaries – to Docker. This allowed them to provide their users with a “Secure-by-Default” image that meets enterprise requirements without adding complexity to their open-source codebase.

This shift is particularly significant for an open-source project. Rather than maintaining custom hardening scripts that contributors need to understand and maintain, Medplum can now rely on Docker’s expertise and continuous maintenance. The security posture improves automatically with each DHI update, without requiring changes to Medplum’s Dockerfiles.

“By switching to Docker Hardened Images, Medplum was able to offload the repetitive work of OS-level hardening—like configuring non-root users and stripping out unnecessary binaries—to Docker. This allowed their users to provide their users with a “Secure-by-Default” image that meets enterprise requirements without adding complexity to their open-source codebase.”

Cody Ebberson

CTO, Medplum

The Migration: Real Code Changes

The migration was remarkably clean. Previously, Medplum’s Dockerfile required manual steps to ensure security best practices. By moving to DHI, they could simplify their configuration significantly.

Let’s look at what actually changed. Here’s the complete server Dockerfile after the migration:

# Medplum production Dockerfile
# Uses Docker "Hardened Images":
# https://hub.docker.com/hardened-images/catalog/dhi/node/guides

# Supported architectures: linux/amd64, linux/arm64

# Stage 1: Build the application and install production dependencies
FROM dhi.io/node:24-dev AS build-stage
ENV NODE_ENV=production
WORKDIR /usr/src/medplum
ADD ./medplum-server-metadata.tar.gz ./
RUN npm ci --omit=dev && \
  rm package-lock.json

# Stage 2: Create the runtime image
FROM dhi.io/node:24 AS runtime-stage
ENV NODE_ENV=production
WORKDIR /usr/src/medplum
COPY --from=build-stage /usr/src/medplum/ ./
ADD ./medplum-server-runtime.tar.gz ./

EXPOSE 5000 8103

ENTRYPOINT [ "node", "--require", "./packages/server/dist/otel/instrumentation.js", "packages/server/dist/index.js" ]

Notice what’s not there:

  • No groupadd or useradd commands: DHI runs as non-root by default
  • No chown commands: permissions are already correct
  • No USER directive: the default user is already non-privileged

Before vs. After: Server Dockerfile

Before (node:24-slim):

FROM node:24-slim
ENV NODE_ENV=production
WORKDIR /usr/src/medplum

ADD ./medplum-server.tar.gz ./

# Install dependencies, create non-root user, and set permissions
RUN npm ci && \
  rm package-lock.json && \
  groupadd -r medplum && \
  useradd -r -g medplum medplum && \
  chown -R medplum:medplum /usr/src/medplum

EXPOSE 5000 8103

# Switch to the non-root user
USER medplum

ENTRYPOINT [ "node", "--require", "./packages/server/dist/otel/instrumentation.js", "packages/server/dist/index.js" ]

After (dhi.io/node:24):

FROM dhi.io/node:24-dev AS build-stage
ENV NODE_ENV=production
WORKDIR /usr/src/medplum
ADD ./medplum-server-metadata.tar.gz ./
RUN npm ci --omit=dev && rm package-lock.json

FROM dhi.io/node:24 AS runtime-stage
ENV NODE_ENV=production
WORKDIR /usr/src/medplum
COPY --from=build-stage /usr/src/medplum/ ./
ADD ./medplum-server-runtime.tar.gz ./

EXPOSE 5000 8103

ENTRYPOINT [ "node", "--require", "./packages/server/dist/otel/instrumentation.js", "packages/server/dist/index.js" ]

The migration also introduced a cleaner multi-stage build pattern, separating metadata (package.json files) from runtime artifacts.

Before vs. After: App Dockerfile (Nginx)

The web app migration was even more dramatic:

Before (nginx-unprivileged:alpine):

FROM nginxinc/nginx-unprivileged:alpine

# Start as root for permissions
USER root

COPY <<EOF /etc/nginx/conf.d/default.conf
# ... nginx config ...
EOF

ADD ./medplum-app.tar.gz /usr/share/nginx/html
COPY ./docker-entrypoint.sh /docker-entrypoint.sh

# Manual permission setup
RUN chown -R 101:101 /usr/share/nginx/html && \
    chown 101:101 /docker-entrypoint.sh && \
    chmod +x /docker-entrypoint.sh

EXPOSE 3000

# Switch back to non-root
USER 101

ENTRYPOINT ["/docker-entrypoint.sh"]

After (dhi.io/nginx:1):

FROM dhi.io/nginx:1

COPY <<EOF /etc/nginx/nginx.conf
# ... nginx config ...
EOF

ADD ./medplum-app.tar.gz /usr/share/nginx/html
COPY ./docker-entrypoint.sh /docker-entrypoint.sh

EXPOSE 3000

ENTRYPOINT ["/docker-entrypoint.sh"]

Results: Improved Security Posture

After merging the changes, Medplum’s team shared their improved security scan results. The migration to DHI resulted in:

  • Dramatically reduced CVE count – DHI’s minimal base means fewer packages to patch
  • Non-root by default – No manual user configuration required
  • No shell access in production – Reduced attack surface for container escape attempts
  • Continuous patching – All DHI images are rebuilt when upstream security updates are available

For organizations that require stronger guarantees, Docker Hardened Images Enterprise adds SLA-backed remediation timelines, image customizations, and FIPS/STIG variants.

Most importantly, all of this was achieved with zero functional changes to the application. The same tests passed, the same workflows worked, and the same deployment process applied.

CI/CD Integration

Medplum also updated their GitHub Actions workflow to authenticate with the DHI registry:

- name: Login to Docker Hub
  uses: docker/login-action@v2.2.0
  with:
    username: ${{ secrets.DOCKERHUB_USERNAME }}
    password: ${{ secrets.DOCKERHUB_TOKEN }}

- name: Login to Docker Hub Hardened Images
  uses: docker/login-action@v2.2.0
  with:
    registry: dhi.io
    username: ${{ secrets.DOCKERHUB_USERNAME }}
    password: ${{ secrets.DOCKERHUB_TOKEN }}

This allows their CI/CD pipeline to pull hardened base images during builds. The same Docker Hub credentials work for both standard and hardened image registries.

The Multi-Stage Pattern for DHI

One pattern worth highlighting from Medplum’s migration is the use of multi-stage builds with DHI variants:

  1. Build stage: Use dhi.io/node:24-dev which includes npm/yarn for installing dependencies
  2. Runtime stage: Use dhi.io/node:24 which is minimal and doesn’t include package managers

This pattern ensures that build tools never make it into the production image, further reducing the attack surface. It’s a best practice for any containerized Node.js application, and DHI makes it straightforward by providing purpose-built variants for each stage.

Medplum’s Production Architecture

Medplum’s hosted offering runs on AWS using containerized workloads. Their medplum/medplum-server image, built on DHI base images, now deploys to production.

Medplum production architecture

Here’s how the build-to-deploy flow works:

  1. Build time: GitHub Actions pulls dhi.io/node:24-dev and dhi.io/node:24 as base images
  2. Push: The resulting hardened image is pushed to medplum/medplum-server on Docker Hub
  3. Deploy: AWS Fargate pulls medplum/medplum-server:latest and runs the hardened container

The deployed containers inherit all DHI security properties (non-root execution, minimal attack surface, no shell) because they’re built on DHI base images. This demonstrates that DHI works seamlessly with production-grade infrastructure including:

  • AWS Fargate/ECS for container orchestration
  • Elastic Load Balancing for high availability
  • Aurora PostgreSQL for managed database
  • ElastiCache for Redis caching
  • CloudFront for CDN and static assets

No infrastructure changes were required. The same deployment pipeline, the same Fargate configuration, just a more secure base image.

Why This Matters for Healthcare

For healthcare organizations evaluating container security, Medplum’s migration offers several lessons:

1. Eliminating “Vulnerability Noise”

The biggest win from DHI isn’t just security, it’s reducing the operational burden of security. Fewer packages means fewer CVEs to investigate, document, and explain to customers. For teams without dedicated security staff, this reclaimed time is invaluable.

2. Compliance-Friendly Defaults

HIPAA requires covered entities to implement technical safeguards including access controls and audit controls. DHI’s non-root default and minimal attack surface align with these requirements out of the box. For companies pursuing SOC 2 Type 2 certification, which Medplum implemented from Day 1, or HITRUST certification, DHI provides a stronger foundation for the technical controls auditors evaluate.

3. Reduced Audit Surface

When security teams audit container configurations, DHI provides a cleaner story. Instead of explaining custom hardening scripts or why certain CVEs don’t apply, teams can point to Docker’s documented hardening methodology, SLSA Level 3 provenance, and independent security validation by SRLabs. This is particularly valuable during enterprise sales cycles where customers scan vendor images as part of due diligence.

4. Practicing What You Preach

For platforms like Medplum that help customers achieve compliance, using hardened images isn’t just good security, it’s good business. When you’re helping healthcare organizations meet regulatory requirements, your own infrastructure needs to set the example.

5. Faster Security Response

With DHI Enterprise, critical CVEs are patched within 7 days. For healthcare organizations where security incidents can have regulatory implications, this SLA provides meaningful risk reduction and a concrete commitment to share with customers.

Conclusion

Medplum’s migration to Docker Hardened Images demonstrates that improving container security doesn’t have to be painful. With minimal code changes (54 additions and 52 deletions) they achieved:

  • Secure-by-Default images that meet enterprise requirements
  • Automatic non-root execution
  • Dramatically reduced CVE surface
  • Simplified Dockerfiles with no manual hardening scripts
  • Less “security toil” for their engineering team
  • A stronger compliance story for enterprise customers

By offloading OS-level hardening to Docker, Medplum can focus on what they do best: building healthcare infrastructure while their security posture improves automatically with each DHI update.

For a platform with 500,000+ Docker Hub pulls serving healthcare organizations worldwide, this migration shows that DHI is ready for production workloads at scale. More importantly, it shows that security improvements can actually reduce operational burden rather than add to it.

For platforms helping others achieve compliance, practicing what you preach matters. With Docker Hardened Images, that just got a lot easier.

Ready to harden your containers? Explore the Docker Hardened Images documentation or browse the free DHI catalog to find hardened versions of your favorite base images.

Resources

]]>
theCUBE Research economic validation of Docker’s development platform https://www.docker.com/blog/thecube-research-economic-validation-of-docker-development-platform/ Thu, 30 Oct 2025 11:46:28 +0000 https://www.docker.com/?p=79874 Docker’s ROI and impact on agentic AI, security, and developer productivity.

theCUBE Research surveyed ~400 IT and AppDev professionals at leading global enterprises to investigate Docker’s ROI and impact on agentic AI development, software supply chain security, and developer productivity.  The industry context is that enterprise developers face mounting pressure to rapidly ship features, build agentic AI applications, and maintain security, all while navigating a fragmented array of development tools and open source code that require engineering cycles and introduce security risks. Docker transformed software development through containers and DevSecOps workflows, and is now doing the same for agentic AI development and software supply chain security.  theCUBE Research quantified Docker’s impact: teams build agentic AI apps faster, achieve near-zero CVEs, remediate vulnerabilities before exploits, ship modern cloud-native applications, save developer hours, and generate financial returns.

Keep reading for key highlights and analysis. Download theCUBE Research report and ebook to take a deep dive.

Agentic AI development streamlined using familiar technologies

Developers can build, run, and share agents and compose agentic systems using familiar Docker container workflows. To do this, developers can build agents safely using Docker MCP Gateway, Catalog, and Toolkit; run agents securely with Docker Sandboxes; and run models with Docker Model Runner. These capabilities align with theCUBE Research findings that 87% of organizations reduced AI setup time by over 25% and 80% report accelerating AI time-to-market by at least 26%.  Using Docker’s modern and secure software delivery practices, development teams can implement AI feature experiments faster and in days test agentic AI capabilities that previously took months. Nearly 78% of developers experienced significant improvement in the standardization and streamlining of AI development workflows, enabling better testing and validation of AI models. Docker helps enterprises generate business advantages through deploying new customer experiences that leverage agentic AI applications. This is phenomenal, given the nascent stage of agentic AI development in enterprises.

Software supply chain security and innovation can move in lockstep

Security engineering and vulnerability remediation can slow development to a crawl. Furthermore, checkpoints or controls may be applied too late in the software development cycle, or after dangerous exploits, creating compounded friction between security teams seeking to mitigate vulnerabilities and developers seeking to rapidly ship features. Docker embeds security directly into development workflows through vulnerability analysis and continuously-patched certified container images. theCUBE Research analysis supports these Docker security capabilities: 79% of organizations find Docker extremely or very effective at maintaining security & compliance, while 95% of respondents reported that Docker improved their ability to identify and remediate vulnerabilities. By making it very simple for developers to use secure images as a default, Docker enables engineering teams to plan, build, and deploy securely without sacrificing feature velocity or creating deployment bottlenecks. Security and innovation can move in lockstep because Docker concurrently secures software supply chains and eliminates vulnerabilities.

Developer productivity becomes a competitive advantage

Consistent container environments eliminate friction, accelerate software delivery cycles, and enable teams to focus on building features rather than overcoming infrastructure challenges. When developers spend less time on environment setup and troubleshooting, they ship more features. Application features that previously took months now reach customers in weeks. The research demonstrates Docker’s ability to increase developer productivity. 72% of organizations reported significant productivity gains in development workflows, while 75% have transformed or adopted DevOps practices when using Docker. Furthermore, when it comes to AI and supply chain security, the findings mentioned above further support how Docker unlocks developer productivity.

Financial returns exceed expectations

CFOs demand quantifiable returns for technology investments, and Docker delivers them. 95% of organizations reported substantial annual savings, with 43% reporting $50,000-$250,000 in cost reductions from infrastructure efficiency, reduced rework, and faster time-to-market. The ROI story is equally compelling: 69% of organizations report ROI exceeding 101%, with many achieving ROI above 500%. When factoring in faster feature delivery, improved developer satisfaction, and reduced security incidents, the business case for Docker becomes even more tangible. The direct costs of a security breach can surpass $500 million, so mitigating even a fraction of this cost provides a compelling financial justification for enterprises to deploy Docker to every developer.

Modernization and cloud native apps remain top of mind

For enterprises who maintain extensive legacy systems, Docker serves as a proven catalyst for cloud-native transformation at scale. Results show that nearly nine in ten (88%) of organizations report Docker has enabled modernization of at least 10% of their applications, with half achieving modernization across 31-60% of workloads and another 20% modernizing over 60%. Docker accelerates the shift from monolithic architectures to modern containerized cloud-native environments while also delivering substantial business value.  For example, 37% of organizations report 26% to >50% faster product time-to-market, and 72% report annual cost savings ranging from $50,000 to over $1 million.

Learn more about Docker’s impact on enterprise software development

Docker has evolved from a containerization suite into a development platform for testing, building, securing, and deploying modern software, including agentic AI applications. Docker enables enterprises to apply proven containerization and DevSecOps practices to agentic AI development and software supply chain security.

Download (below) the full report and the ebook from theCUBE Research analysis to learn Docker’s impact on developer productivity, software supply chain security, agentic AI application development, CI/CD and DevSecOps, modernization, cost savings, and ROI.  Learn how enterprises leverage Docker to transform application development and win in markets where speed and innovation determine success.

theCUBE Research economic validation of Docker’s development platform

> Download the Report

> Download the eBook

theCUBE docker banner

]]>
Accelerate modernization and cloud migration https://www.docker.com/blog/accelerate-modernization-and-cloud-migration/ Tue, 29 Jul 2025 16:46:27 +0000 https://www.docker.com/?p=72940 In our recent report, we describe that many enterprises today face a stark reality: despite years of digital transformation efforts, the majority of enterprise workloads—up to 80%—still run on legacy systems. This lag in modernization not only increases operational costs and security risks but also limits the agility needed to compete in a rapidly evolving market. The pressure is on for technology leaders to accelerate the ongoing modernization of legacy applications and to accelerate cloud adoption, but the path forward is often blocked by technical complexity, risk, and resource constraints.  Full Report: Accelerate Modernization with Docker.

Enterprises have long been treating modernization as a business imperative. Research shows that 73% of CIOs identify technological disruption as a major risk, and 82% of CEOs believe companies that fail to transform fundamentally risk obsolescence within a decade. Enterprises that further delay modernization risk falling farther behind more agile competitors who are already leveraging cloud-native platforms, DevSecOps practices, and AI or Agentic applications to drive business growth and innovation.

Enterprise challenges for modernization and cloud migration

Transitioning from legacy systems to modern, cloud-native architectures is rarely straightforward. Enterprises face a range of challenges, including:

  • Complex legacy dependencies: Deeply entrenched systems with multiple layers and dependencies make migration risky and costly.
  • Security and compliance risks: Moving to the cloud can increase vulnerabilities by up to 46% if not managed correctly.
  • Developer inefficiencies: Inconsistent environments and manual processes can delay releases, with 69% of developers losing eight or more hours a week to inefficiencies.
  • Cloud cost overruns: Inefficient resource allocation and lack of governance often lead to higher-than-expected cloud expenses.
  • Tool fragmentation: Relying on multiple, disconnected tools for modernization increases risk and slows progress.

These challenges have stalled progress for years, but with the right strategy and tools, enterprises can overcome them and unlock the full benefits of modernization and migration.

How Docker accelerates modernization and cloud migration

Docker products can help enterprises modernize legacy applications and migrate to the cloud efficiently, securely, and incrementally.

Docker brings together Docker Desktop, Docker Hub, Docker Build Cloud, Docker Scout, Testcontainers Cloud, and Administration into a seamless, integrated experience. This solution empowers development teams to:

  • Containerize legacy applications: Simplify the process of packaging and migrating legacy workloads to the cloud.
  • Automate CI/CD pipelines: Accelerate build, test, and deployment cycles with automated workflows and cloud-based build acceleration.
  • Embed security and governance: Integrate real-time vulnerability analysis, policy enforcement, and compliance checks throughout the development lifecycle.
  • Use trusted secure content: Hardened Images ensure every container starts has a signed, distroless base that cuts the attack surface by up to 95 % and comes with built-in SBOMs for effortless audits.
  • Standardize environments: Ensure consistency across development, testing, and production, reducing configuration drift and late-stage defects.
  • Implement incremental, low-risk modernization: Rather than requiring a disruptive, multi-year overhaul, Docker enables enterprises to modernize incrementally. 
  • Increased agility: By modernizing legacy applications and systems, enterprises achieve faster release cycles, rapid product launches, reduced time to market, and seamless scaling in the cloud.

Do not further delay modernization and cloud migrations. Get started with Docker today

Enterprises don’t need to wait for a massive, “big-bang” project — Docker makes it possible to start small, deliver value quickly, and scale ongoing modernization efforts across the organization. By empowering teams with the right tools and a proven approach, Docker enables enterprises to accelerate ongoing application modernization and cloud migrations —unlocking innovation, reducing costs, and securing their competitive edge for the future.

Ready to accelerate your modernization journey?  Learn more about how Docker can help enterprises with modernization and cloud migration – Full Report: Accelerate Modernization with Docker.  

___________
Sources:
– IBM 1; Gartner 1, 2, 3 
– PWC 1, 2
– The Palo Alto Networks State of Cloud-Native Security 2024
– State of Developer Experience Report 2024
___________
Tags: #ApplicationModernization #Modernization #CloudMigration #Docker #DockerBusiness #EnterpriseIT #DevSecOps #CloudNative #DigitalTransformation

]]>
Settings Management for Docker Desktop now generally available in the Admin Console https://www.docker.com/blog/settings-management-for-docker-desktop-now-generally-available-in-the-admin-console/ Wed, 04 Jun 2025 15:39:09 +0000 https://www.docker.com/?p=72840

We’re excited to announce that Settings Management for Docker Desktop is now Generally Available!  Settings Management can be configured in the Admin Console for customers with a Docker Business subscription.  After a successful Early Access period, this powerful administrative solution has been enhanced with new compliance reporting capabilities, completing our vision for centralized Docker Desktop configuration management at scale through the Admin Console.

To add additional context, Docker provides an enterprise-grade integrated solution suite for container development.  This includes administration and management capabilities that support enterprise needs for security, governance, compliance, scale, ease of use, control, insights, and observability.  The new Settings Management capabilities in the Admin Console for managing Docker Desktop instances are the latest enhancement to this area.  This new feature provides organization administrators with a single, unified interface to configure and enforce security policies, and control Docker Desktop settings across all users in their organization.  Overall, Settings Management eliminates the need to manually configure each individual Docker machine and ensures consistent compliance and security standards company-wide.

Enterprise-grade management for Docker Desktop

First introduced in Docker Desktop 4.36 as an Early Access feature, Docker Desktop Settings Management enables administrators to centrally deploy and enforce settings policies directly from the Admin Console. From the Docker Admin Console, administrators can configure Docker Desktop settings according to a security policy and select users to whom the policy applies. When users start Docker Desktop, those settings are automatically applied and enforced.

With the addition of Desktop Settings Reporting in Docker Desktop 4.40, the solution offers end-to-end management capabilities from policy creation to compliance verification.

This comprehensive approach to settings management delivers on our promise to simplify Docker Desktop administration while ensuring organizational compliance across diverse enterprise environments.

Complete settings management lifecycle

Desktop Settings Management now offers multiple administration capabilities:

  • Admin Console policies: Configure and enforce default Docker Desktop settings directly from the cloud-based Admin Console. There’s no need to distribute admin-settings.json files to local machines via MDM.
  • Quick import: Seamlessly migrate existing configurations from admin-settings.json files
  • Export and share: Easily share policies as JSON files with security and compliance teams
  • Targeted testing: Roll out policies to smaller groups before deploying globally
  • Enhanced security: Benefit from improved signing and reporting methods that reduce the risk of tampering with settings
  • Settings compliance reporting: Track and verify policy application across all developers in your engineering organization

Figure 1: Admin Console Settings Management

Admin Console Settings Management

New: Desktop Settings Reporting

The newly added settings reporting dashboard in the Admin Console provides administrators with crucial visibility into the compliance status of all users:

  • Real-time settings compliance tracking: Easily monitor which users are compliant with their assigned settings policies.
  • Streamlined troubleshooting: Detailed status information helps administrators diagnose and resolve non-compliance issues.

The settings reporting dashboard is accessible via Admin Console > Docker Desktop > Reporting, offering options to:

  • Search by username or email address
  • Filter by assigned policies
  • Toggle visibility of compliant users to focus on potential issues
  • View detailed compliance information for specific users
  • Download comprehensive compliance data as a CSV file

For non-compliant users, the settings reporting dashboard provides targeted resolution steps to help administrators quickly address issues and ensure organizational compliance.

Figure 2: Admin Console Settings Reporting

Docker Admin Console Settings Reporting

Figure 3: Locked settings in Docker Desktop

Docker Desktop settings locked

Enhanced security through centralized management

Desktop Settings Management is particularly valuable for engineering organizations with strict security and compliance requirements. This GA release enables administrators to:

  • Enforce consistent configuration across all Docker Desktop instances, without having to go through complicated and error prone MDM based deployments
  • Verify policy application and quickly remediate non-compliant systems
  • Reduce the risk of tampering with local settings
  • Generate compliance reports for security audits

Getting started

To take advantage of Desktop Settings Management:

  1. Ensure your Docker Desktop users are signed in on version 4.40 or later
  2. Log in to the Docker Admin Console
  3. Navigate to Docker Desktop > Settings Management to create policies
  4. Navigate to Docker Desktop > Reporting to monitor compliance

For more detailed information, visit our documentation on Settings Management.

What’s next?

Included with Docker Business, the GA release of Settings Management for Docker Desktop represents a significant milestone in our commitment to delivering enterprise-grade management, governance, and administration tools. We’ll continue to enhance these capabilities based on customer feedback, enterprise needs, and evolving security requirements.

We encourage you to explore Settings Management and let us know how it’s helping you manage Docker Desktop instances more efficiently across your development teams and engineering organization.

We’re thrilled to meet the management and administration needs of our customers with these exciting enhancements and we want you to stay connected with us as we build even more administration and management capabilities for development teams and engineering organizations.

Learn more

Thank you!

]]>
Simplifying Enterprise Management with Docker Desktop on the Microsoft Store https://www.docker.com/blog/docker-desktop-on-microsoft-store/ Thu, 01 May 2025 23:13:08 +0000 https://www.docker.com/?p=71193 We’re excited to announce that Docker Desktop is now available on the Microsoft Store! This new distribution channel enhances both the installation and update experience for individual developers while significantly simplifying management for enterprise IT teams.

This milestone reinforces our commitment to Windows, our most widely used platform among Docker Desktop users. By partnering with the Microsoft Store, we’re ensuring seamless compatibility with enterprise management tools while delivering a more consistent experience to our shared customers.

blog WIndows store resized

[Figure 1]: MS Store listing: https://apps.microsoft.com/detail/xp8cbj40xlbwkx?hl=en-GB&gl=GB

Seamless deployment and control for enterprises

For developers:

  • Automatic Updates: The Microsoft Store handles all update processes automatically, ensuring you’re always running the latest version without manual intervention.
  • Streamlined Installation: Experience a more reliable setup process with fewer startup errors..
  • Unified Management: Manage Docker Desktop alongside your other applications in one familiar interface.

For IT administrators:

  • Native Intune MDM Integration: Deploy Docker Desktop across your organization using Microsoft’s enterprise management tools — Learn how to add Microsoft Store apps via Intune.
  • Centralized Control: Easily roll out Docker Desktop through the Microsoft Store’s enterprise distribution channels.
  • Security-Compatible Updates: Updates are handled automatically by the Microsoft Store infrastructure, even in organizations where users don’t have direct store access.
  • Updates Without Direct Store Access: The native integration with Intune allows automatic updates to function even when users don’t have Microsoft Store access — a significant advantage for security-conscious organizations with restricted environments.
  • Familiar Workflow: The update mechanism works similarly to winget commands (winget install –id=XP8CBJ40XLBWKX –source=msstore), providing consistency with other enterprise software management.

Why it matters for businesses and developers 

With 99% of enterprise users not running the latest version of Docker Desktop, the Microsoft Store’s automatic update capabilities directly address compliance and security concerns while minimizing downtime. IT administrators can now:

  • Increase Productivity: Developers can focus on innovation instead of managing installations.
  • Improve Operational Efficiency: Better control over Docker Desktop deployments reduces IT bottlenecks.
  • Enhance Compliance: Automatic updates and secure installations support enterprise security protocols.

Conclusion

Docker Desktop’s availability on the Microsoft Store represents a significant step forward in simplifying how organizations deploy and maintain development environments. By focusing on seamless updates, reliability, and enterprise-grade management, Docker and Microsoft are empowering teams to innovate with greater confidence.

Ready to try it out? Download Docker Desktop from the Microsoft Store today!

Learn more

]]>