Flexibility Over Lock-In: The Enterprise Shift in Agent Strategy
https://www.docker.com/blog/enterprise-shift-in-agent-strategy/ Thu, 12 Mar 2026

Building agents is now a strategic priority for 95% of respondents in our latest State of Agentic AI research, which surveyed more than 800 developers and decision-makers worldwide. The shift is happening quickly: agent adoption has moved beyond experiments and demos into early operational maturity. But the road to enterprise-scale adoption is still complex. The foundations are forming, yet they remain far from the fully integrated, production-grade platforms that teams can confidently build on.

Security continues to surface as a top blocker to agent adoption. But it’s not the only one: technical complexity is rising fast, and vendor lock-in is a concern for the vast majority of respondents surveyed.

So how do teams cut through the complexity and prepare for a world of multi-model, multi-tool, and multi-framework agents, while avoiding vendor lock-in in their agent workflows? In this blog, we break down the key findings from our research: what teams are actually using to power their agentic workloads, and what it takes to build a more scalable, future-ready agent architecture.

Multi-model and multi-cloud are the new normal. And complexity is rising

Our recent State of Agentic AI study found that enterprises are embracing multi-model and multi-cloud architectures to gain greater control over performance, customization, privacy, and compliance. Multi-model is now the norm: 61% of organizations combine cloud-hosted and local models. And complexity doesn’t stop there: 46% report using between four and six models within their agents, while just 2% rely on a single model.

Deployment environments are just as diverse: 79% of respondents operate agents across two or more environments, with 51% in public clouds, 40% on-premises, and 32% on serverless platforms.

This architectural flexibility delivers control, but it also multiplies orchestration and governance efforts. Coordinating models, tools, frameworks, and environments is consistently cited as one of the hardest parts of building agents. Nearly half of respondents (48%) identify operational complexity in managing multiple components as their biggest challenge, while 43% point to increased security exposure driven by orchestration sprawl.

The strategic shift away from vendor lock-in

As organizations double down on agent investments, concerns about supply chain fragility are rising. Seventy-six percent of global respondents report active worries about vendor lock-in.

Rather than consolidating, teams are responding by diversifying. They’re distributing workloads across multiple models, tools, and cloud environments to reduce dependency and maintain leverage. Among the 61% of organizations using both cloud-hosted and locally hosted models, the primary drivers are control (64%), data privacy (60%), and compliance (54%). Cost ranks significantly lower at 41%, underscoring that flexibility and governance, not cost savings, are shaping architectural decisions.

Containers power the next wave of agent adoption

Containerization is already foundational to agent development. Nearly all organizations surveyed (94%) use containers in their agent development or production workflows, and the remainder plan to adopt them.

As agent initiatives scale, teams are extending the same cloud-native practices that power their application pipelines, such as microservices architectures, CI/CD, and container orchestration, to support agent workloads. Containers are not an add-on; they are the operational backbone.

At the same time, early signs of orchestration standardization are emerging. Among teams building agents with Docker, 40% are using Docker Compose as their orchestration layer, a signal that familiar, container-based tooling is becoming a practical coordination layer for increasingly complex agent systems.

The agentic future won’t be monolithic

The agentic future won’t be monolithic. It’s already multi-cloud, multi-model, and multi-environment. That reality makes open standards and portable infrastructure foundational for sustaining enterprise trust and long-term flexibility.

What’s needed next isn’t reinvention, but standardization around an open, interoperable, and portable infrastructure: the flexibility to work across any model, tool, and agent framework; secure-by-default runtimes; consistent orchestration; and integrated policy controls. Teams that invest now in this container-based trust layer will move beyond isolated productivity gains to sustainable, enterprise-wide outcomes while reducing vendor lock-in risk.

Download the full Agentic AI report for more insights and recommendations on how to scale agents for enterprise.  

Join us on March 25, 2026, for a webinar where we’ll walk through the key findings and the strategies that can help you prioritize what comes next.

What’s Holding Back AI Agents? It’s Still Security
https://www.docker.com/blog/whats-holding-back-ai-agents-its-still-security/ Tue, 10 Mar 2026

It’s hard to find a team today that isn’t talking about agents. For most organizations, this isn’t a “someday” project anymore. Building agents is a strategic priority for 95% of the 800+ developers and decision-makers we surveyed worldwide in our latest State of Agentic AI research. The shift is happening fast: agent adoption has moved beyond experiments and demos into something closer to early operational maturity. 60% of organizations already report having AI agents in production, though a third of those remain in early stages.

Agent adoption today is driven by a pragmatic focus on productivity, efficiency, and operational transformation, not revenue growth or cost reduction. Early adoption is concentrated in internal, productivity-focused use cases, especially across software, infrastructure, and operations. The feedback loops are fast, and the risks are easier to control. 

So what’s holding back agent scaling? Friction shows up in many places, but nearly all roads lead to the same one: AI agent security.

AI agent security isn’t one issue, it’s the constraint

When teams talk about what’s holding them back, AI agent security rises to the top. In the same survey, 40% of respondents cite security as their top blocker when building agents. The reason it hits so hard is that it’s not confined to a single layer of the stack. It shows up everywhere, and it compounds as deployments grow.

At the infrastructure layer, as organizations expand agent deployments, teams emphasize the need for secure sandboxing and runtime isolation, even for internal agents.

At the operations layer, complexity becomes a security problem. Once you have more tools, more integrations, and more orchestration logic, it gets harder to see what’s happening end-to-end and harder to control it. Our latest research data reflects that sprawl: over a third of respondents report challenges coordinating multiple tools, and a comparable share say integrations introduce security or compliance risk. That’s a classic pattern: operational complexity creates blind spots, and blind spots become exposure.

And at the governance layer, enterprises want something simple: consistency. They want guardrails, policy enforcement, and auditability that work across teams and workflows. But current tooling isn’t meeting that bar yet. In fact, 45% of organizations say the biggest challenge is ensuring tools are secure, trusted, and enterprise-ready. That’s not a minor complaint: it’s the difference between “we can try this” and “we can scale this.”

MCP is popular but not ready for enterprise

Many teams are adopting the Model Context Protocol (MCP) because it gives agents a standardized way to connect to tools, data, and external systems, making agents more useful and easier to customize. Among respondents further along in their agent journey, 85% say they’re familiar with MCP, and two-thirds say they actively use it across personal and professional projects.

Our research data suggests that most teams are operating in what could be described as “leap-of-faith mode” when it comes to MCP: they adopt the protocol because it works, but without the security guarantees and operational controls they would demand from mature enterprise infrastructure.

The security story simply hasn’t caught up yet. Among teams earlier in their agentic journey, 46% identify security and compliance as the top challenge with MCP.

Organizations are increasingly watching for threats like prompt injection and tool poisoning, along with the more foundational issues of access control, credentials, and authentication. The immaturity and security challenges of current MCP tooling make for a fragile foundation at this stage of agentic adoption.

Conclusion and recommendations

AI agent security is what sets the speed limit for agentic AI in the enterprise. Organizations aren’t lacking interest; they’re lacking confidence that today’s tooling is enterprise-ready, that access controls can be enforced reliably, and that agents can be kept safely isolated from sensitive systems.

The path forward is clear. Unlocking agents’ full potential will require new platforms built for enterprise scale, with secure-by-default foundations, strong governance, and policy enforcement that’s integrated, not bolted on.

Download the full Agentic AI report for more insights and recommendations on how to scale agents for enterprise. 

Join us on March 25, 2026, for a webinar where we’ll walk through the key findings and the strategies that can help you prioritize what comes next.

Docker Hardened Images: Security Independently Validated by SRLabs
https://www.docker.com/blog/docker-hardened-images-security-independently-validated-by-srlabs/ Fri, 19 Dec 2025

Earlier this week, we took a major step forward for the industry. Docker Hardened Images (DHI) is now available at no cost, bringing secure-by-default development to every team, everywhere. Anyone can now start from a secure, minimal, production-ready foundation from the first pull, without a subscription.

With that decision comes a responsibility: if Docker Hardened Images become the new starting point for modern development, then developers must be able to trust them completely. Not because we say they’re secure, but because they prove it: under scrutiny, under pressure, and through independent validation.

Security threats evolve constantly. Supply chains grow more complex. Attackers get smarter. The only way DHI stays ahead is by continuously pushing our security forward. That’s why we partnered with SRLabs, one of the world’s leading cybersecurity research groups, known for uncovering high-impact vulnerabilities in highly sensitive systems.

This review included threat modeling, architecture analysis, and grey-box testing using publicly available artifacts. At Docker, we understand that trust is not earned through claims; it is earned through testing, validation, and a commitment to do both continuously.

Phase One: Grey Box Assessment

SRLabs started with a grey box assessment focused on how we build, sign, scan, and distribute hardened images. They validated our provenance chain, our signing practices, and our vulnerability management workflow.

One of the first things they called out was the strength of our verifiability model. Every artifact in DHI carries SLSA Build Level 3 provenance and Cosign signatures, all anchored in transparency logs via Rekor. This gives users a clear, cryptographically verifiable trail for where every hardened image came from and how it was built. As SRLabs put it:

“Docker incorporates signed provenance with Cosign, providing a verifiable audit trail aligned with SLSA level 3 standards.”
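For teams that want to check this themselves, the signatures and provenance on public DHI artifacts can be verified with standard Cosign tooling. A minimal sketch, assuming keyless signing; the image reference and identity values below are placeholders, so consult the DHI documentation for the exact strings that apply to your image.

# Verify the image signature (image name and identity values are illustrative)
cosign verify \
  --certificate-oidc-issuer "https://token.actions.githubusercontent.com" \
  --certificate-identity-regexp "https://github.com/docker/.*" \
  your-org/dhi-python:3.13

# Verify the SLSA provenance attestation recorded in the Rekor transparency log
cosign verify-attestation --type slsaprovenance \
  --certificate-oidc-issuer "https://token.actions.githubusercontent.com" \
  --certificate-identity-regexp "https://github.com/docker/.*" \
  your-org/dhi-python:3.13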

They also highlighted the speed and clarity of our vulnerability management process. Every image includes an SBOM and VEX data, and our automated rebuild system responds quickly when new CVEs appear. SRLabs noted:

“Fast patching. Docker promises a 7 day patch SLA, greatly reducing vulnerability exposure windows.”

They validated the impact of our minimization strategy as well. Non-root by default, a reduced footprint, and the removal of unnecessary utilities dramatically reduce what an attacker could exploit inside a container. Their assessment:

“Non root, minimal container images significantly reduce attack vectors compared to traditional images.”

After three weeks of targeted testing, including adversarial modeling and architectural probing, SRLabs came back with a clear message: no critical vulnerabilities, no high-severity exploitation paths, just a medium residual risk driven by industry-wide challenges like key stewardship and upstream trust. And the best part? The architecture is already set up to reach even higher assurance without needing a major redesign. In their words:

“Docker Hardened Images deliver on their public security promises for today’s threat landscape.”

“No critical or high severity break outs were identified.”

And 

“By implementing recommended hardening steps, Docker can raise assurance to the level expected of a reference implementation for supply chain security without major re engineering.”

Throughout the assessment, our engineering teams worked closely with SRLabs. Several findings, such as a labeling issue and a race condition, were resolved during the engagement. Others, including a prefix-hijacking edge case, moved into remediation quickly. For SRLabs, this responsiveness showed more than secure technology; it demonstrated a security-first culture where issues are triaged fast, fixes land quickly, and collaboration is part of the process. 

SRLabs pointed to places where raising the bar would make DHI even stronger, and we are already acting on them. They told us our signing keys should live in Hardware Security Modules with quorum controls, and that we should move toward a keyless Fulcio flow, so we have started that work right away. They pointed out that offline environments need better protection against outdated or revoked signatures, and we are updating our guidance and exploring freshness checks to close that gap. They also flagged that privileged builds weaken reproducibility and SBOM accuracy. Several of those builds have already been removed or rebuilt, and the rest are being redesigned to meet our hardening standards.

You can read more about the findings from the report here.

Phase Two: Full White Box Assessment

Grey box testing is just the beginning. 

This next phase goes much deeper. SRLabs will step into the role of an insider-level attacker. They’ll dig through code paths, dependency chains, and configuration logic. They’ll map every trust boundary, hunt for logic flaws, and stress-test every assumption baked into the hardened image pipeline. We expect to share that report in the coming months.

SRLabs showed us how DHI performs under pressure, but validation in the lab is only half the story.

The real question is: what happens when teams put Docker at the center of their daily work? The good news is, we have the data. When organizations adopt Docker, the impact reaches far beyond reducing vulnerabilities.

New research from theCUBE, based on a survey of 393 IT, platform, and engineering leaders, reveals that 95 percent improved vulnerability detection and remediation, 93 percent strengthened policy and compliance, and 81 percent now meet most or all of their security goals across the entire SDLC. You can read about it in the report linked above.

By combining independent validation, continuous security testing, and transparent attestations and provenance, Docker is raising the baseline for what secure software supply chains should look like.

The full white-box report from SRLabs will be shared when complete, and every new finding, good or bad, will shape how we continue improving DHI. Being secure-by-default is something we aim to prove, continuously.

]]>
Building AI agents shouldn’t be hard. According to theCUBE Research, Docker makes it easy
https://www.docker.com/blog/building-ai-agents-shouldnt-be-hard-according-to-thecube-research-docker-makes-it-easy/ Tue, 02 Dec 2025

For most developers, getting started with AI is still too complicated. Different models, tools, and platforms don’t always play nicely together. But with Docker, that’s changing fast.

Docker is emerging as essential infrastructure for standardized, portable, and scalable AI environments. By bringing composability, simplicity, and GPU accessibility to the agentic era, Docker is helping developers and the enterprises they support move faster, safer, and with far less friction. 

Real results: Faster AI delivery with Docker

The platform is accelerating innovation: according to the latest report from theCUBE Research, 88% of respondents reported that Docker reduced the time-to-market for new features or products, with nearly 40% achieving efficiency gains of more than 25%. Docker is playing an increasingly vital role in AI development as well. 52% of respondents cut AI project setup time by over 50%, while 97% report increased speed for new AI product development.

Reduced AI project failures and delays

Reliability remains a key performance indicator for AI initiatives, and Docker is proving instrumental in minimizing risk. 90% of respondents indicated that Docker helped prevent at least 10% of project failures or delays, while 16% reported prevention rates exceeding 50%. Additionally, 78% significantly improved testing and validation of AI models. These results highlight how Docker’s consistency, isolation, and repeatability not only speed development but also reduce costly rework and downtime, strengthening confidence in AI project delivery.

Build, share, and run agents with Docker, easily and securely

Docker’s mission for AI is simple: make building and running AI and agentic applications as easy, secure, and shareable as any other kind of software.

Instead of wrestling with fragmented tools, developers can now rely on Docker’s trusted, container-based foundation with curated catalogs of verified models and tools, and a clean, modular way to wire them together. Whether you’re connecting an LLM to a database or linking services into a full agentic workflow, Docker makes it plug-and-play.

With Docker Model Runner, you can pull and run large language models locally with GPU acceleration. The Docker MCP Catalog and Toolkit connect agents to over 300 MCP servers from partners like Stripe, Elastic, and GitHub. And with Docker Compose, you can define the whole AI stack of models, tools, and services in a single YAML file that runs the same way locally or in the cloud. Cagent, our open-source agent builder, lets you easily build, run, and share AI agents, with behavior, tools, and persona all defined in a single YAML file. And with Docker Sandboxes, you can run coding agents like Claude Code in a secure, local environment, keeping your workflows isolated and your data protected.
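To make the Compose piece concrete, here is a minimal sketch of a single-file agent stack. The agent image, model name, and gateway flags are illustrative, and the top-level models element assumes a recent Compose release with Model Runner integration; check the Compose documentation for the syntax your version supports.

models:
  llm:
    model: ai/llama3.2              # a model from Docker Hub's ai/ namespace

services:
  agent:
    image: example/my-agent:latest  # hypothetical agent image
    models:
      - llm                         # Compose injects the model's endpoint into this service
  mcp-gateway:
    image: docker/mcp-gateway       # image name per the open-source mcp-gateway project
    command: --servers=duckduckgo   # illustrative: expose one MCP server to the agent

A single docker compose up then brings up the model, the tools, and the agent together, locally or in the cloud.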

Conclusion 

Docker’s vision is clear: to make AI development as simple and powerful as the workflows developers already know and love. And it’s working: theCUBE reports 52% of users cut AI project setup time by more than half, while 87% say they’ve accelerated time-to-market by at least 26%.

Securing the software supply chain shouldn’t be hard. According to theCUBE Research, Docker makes it simple
https://www.docker.com/blog/securing-the-software-supply-chain-shouldnt-be-hard-according-to-thecube-research-docker-makes-it-simple/ Tue, 25 Nov 2025

In today’s software-driven economy, securing software supply chains is no longer optional, it’s mission-critical. Yet enterprises often struggle to balance developer speed and security. According to theCUBE Research, 95% of organizations say Docker improved their ability to identify and remediate vulnerabilities, while 79% rate it highly effective at maintaining compliance with security standards. Docker embeds security directly into the developer workflow so that protection happens by default, not as an afterthought.

At the foundation are Docker Hardened Images, which are ultra-minimal, continuously patched containers that cut the attack surface by up to 95% and achieve near-zero CVEs. These images, combined with Docker Scout’s real-time vulnerability analysis, allow teams to prevent, detect, and resolve issues early, keeping innovation and security in sync. The result: 92% of enterprises report fewer application vulnerabilities, and 60% see reductions of 25% or more.

Docker also secures agentic AI development through the MCP Catalog, Toolkit, and Gateway. These tools provide a trusted, containerized way to run Model Context Protocol (MCP) servers that power AI agents, ensuring communication happens in a secure, auditable, and isolated environment. According to theCUBE Research, 87% of organizations reduced AI setup time by over 25%, and 95% improved AI testing and validation, demonstrating that Docker makes AI development both faster and safer.

With built-in Zero Trust principles, role-based access controls, and compliance support for SOC 2, ISO 27001, and FedRAMP, Docker simplifies adherence to enterprise-grade standards without slowing developers down. The payoff is clear: 69% of enterprises report ROI above 101%, driven in part by fewer security incidents, faster delivery, and improved productivity. In short, Docker’s modern approach to DevSecOps enables enterprises to build, ship, and scale software that’s not only fast, but fundamentally secure.

Docker’s impact on software supply chain security

Docker has evolved into a complete development platform that helps enterprises build, secure, and deploy modern and agentic AI applications with trusted DevSecOps and containerization practices. From Docker Hardened Images, which are secure, minimal, and production-ready container images with near-zero CVEs, to Docker Scout’s real-time vulnerability insights and the MCP Toolkit for trusted AI agents, teams gain a unified foundation for software supply chain security.

Every part of the Docker ecosystem is designed to blend in with existing developer workflows while making security affordable, transparent, and universal. Whether you want to explore the breadth of the Docker Hardened Images catalog, analyze your own image data with Docker Scout, or test secure AI integration through the MCP Gateway, it is easy to see how Docker embeds security by default, not as an afterthought.

How Docker Hardened Images Patch Vulnerabilities in 24 Hours
https://www.docker.com/blog/how-docker-hardened-images-patch-cves-in-24-hours/ Fri, 21 Nov 2025

On November 19, 2025, the Golang project published two Common Vulnerabilities and Exposures (CVEs) affecting the widely used golang.org/x/crypto/ssh package. While neither vulnerability received a critical CVSS score, both presented real risks to applications using SSH functionality in Go-based containers.

CVE-2025-58181 affects SSH servers parsing GSSAPI authentication requests. The vulnerability allows attackers to trigger unbounded memory consumption by exploiting the server’s failure to validate the number of mechanisms specified in authentication requests. CVE-2025-47914 impacts SSH Agent servers that fail to validate message sizes when processing identity requests, potentially causing system panics when malformed messages arrive. (These two vulnerabilities came just days after CVE-2025-47913, a high-severity vulnerability affecting the same Golang component, which Docker also quickly patched.)

For teams running Go applications with SSH functionality in their containers, leaving these vulnerabilities unpatched creates exposure to denial-of-service attacks and potential system instability.

How Docker achieves lightning-fast vulnerability response

When these CVEs hit the Golang project’s security feed, Docker Hardened Images customers had patched versions available in less than 24 hours. This rapid response stems from Docker Scout’s continuous monitoring architecture and DHI’s automated remediation pipeline.

Here’s how it works:

Continuous CVE ingestion: Unlike vulnerability scanning that runs on batch schedules, Docker Scout continuously ingests CVE information from upstream sources including GitHub security advisories, the National Vulnerability Database, and project-specific feeds. The moment CVE data becomes available, Scout begins analysis.

Instant impact assessment: Within seconds of CVE ingestion, Scout identifies which Docker Hardened Images are affected based on Scout’s comprehensive SBOM database. This immediate notification allows the remediation process to start without delay.

Automated patching workflow: Depending on the vulnerability and package, Docker either patches automatically or triggers a manual review process for complex changes. For these Golang SSH vulnerabilities, the team initiated builds immediately after upstream patches became available.

Cascading builds: Once the patched Golang package builds successfully, the system automatically triggers rebuilds of all dependent packages and images. Every Docker Hardened Image containing the affected golang.org/x/crypto/ssh package gets rebuilt with the security fix.

The entire process, from CVE disclosure to patched images available to customers, was completed in under 24 hours. Customers using Docker Scout received immediate notifications about the vulnerabilities and the availability of patched versions.

Why Docker’s Security Response Is Different

One of Docker’s key differentiators is its continuous, real-time monitoring, rather than periodic batch scanning. Traditional vulnerability management relies on daily or weekly scans, leaving containers exposed to known vulnerabilities for hours or even days.

With Docker Scout’s real-time CVE ingestion, detection starts the moment a vulnerability is published, enabling remediation within seconds and minimizing exposure.

This foundation powers Docker Hardened Images (DHI), where packages and dependencies are continuously tracked and automatically updated when issues arise. For example, when vulnerabilities were found in the golang.org/x/crypto library, all affected images were rebuilt and released within a day. Customers simply pull the latest tags to stay secure; no manual patching, emergency maintenance, or impact triage required.
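In practice, that customer-side workflow is just a pull and a scan. A hedged sketch, with a placeholder image name standing in for your organization’s DHI repository:

# Pull the rebuilt tag (image name is illustrative)
docker pull your-org/dhi-golang:1.23

# Confirm with Docker Scout that the CVEs no longer apply
docker scout cves your-org/dhi-golang:1.23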

But continuous monitoring is just the foundation. What truly sets Docker apart is how that real-time intelligence flows into an automated, transparent, and trusted remediation pipeline, built on over a decade of experience securing and maintaining the Docker Official Images program. These are the same images trusted and used by millions of developers and organizations worldwide, forming the foundation of countless production environments. That long-standing operational experience in continuously maintaining, rebuilding, and distributing secure images at global scale gives Docker a proven track record of delivering reliability, consistency, and trust that few others can match.

Beyond automation, Docker’s AI guardrails add yet another layer of protection. Purpose-built for the Hardened Images pipeline, these AI systems continuously analyze upstream code changes, flag risky patterns, and prevent flawed dependencies from entering the supply chain. Unlike standard coding assistants, Docker’s AI guardrails are informed by manual, project-specific reviews, blending human expertise with adaptive intelligence. When the system detects a high-confidence issue such as an inverted error check, ignored failure, or resource mismanagement, it halts the release until a Docker engineer verifies and applies the fix. This human-in-the-loop model ensures vulnerabilities are caught long before they can reach customers, turning AI into a force multiplier for safety, not a replacement for human judgment.

Another critical differentiator is complete transparency. Consider what happens when a security scanner still flags a vulnerability even after you’ve pulled a patched image. With DHI, every image includes a comprehensive and accurate Software Bill of Materials (SBOM) that provides definitive visibility into what’s actually inside your container. When a scanner reports a supposedly remediated image as vulnerable, teams can verify the exact package versions and patch status directly from the SBOM instead of relying on scanner heuristics.
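As an example of that verification step, the SBOM can be queried directly to settle what a scanner is flagging. A minimal sketch; the image name is a placeholder, and the list output format assumes a current Docker Scout release.

# List the packages recorded in the image's SBOM and check the patched module
docker scout sbom --format list your-org/dhi-golang:1.23 | grep golang.org/x/crypto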

This transparency also extends to how Docker Scout handles CVE data. Docker relies entirely on independent, third-party sources for vulnerability decisions and prioritization, including the National Vulnerability Database (NVD), GitHub Security Advisories, and upstream project maintainers. This approach is essential because traditional scanners often depend on pattern matching and heuristics that can produce false positives. They may miss vendor-specific patches, overlook backported fixes, or flag vulnerabilities that have already been remediated due to database lag. In some cases, even vendor-recommended scanners fail to detect unpatched vulnerabilities, creating a false sense of security.

Without an accurate SBOM and objective CVE data, teams waste valuable time chasing phantom vulnerabilities or debating false positives with compliance auditors. Docker’s approach eliminates that uncertainty. Because the SBOM is generated directly from the build process, not inferred after the fact, it provides definitive evidence of what’s inside each image and why certain CVEs do or don’t apply. This transforms vulnerability management from guesswork and debate into objective, verifiable security assurance, backed by transparent, third-party data.

CVEs don’t have to disrupt your week

Managing vulnerabilities consumes significant engineering time. When critical CVEs drop, teams rush to assess impact, test patches, and coordinate deployments. Docker Hardened Images eliminate this overhead by continuously updating base images, with complete transparency into their contents and rapid turnarounds that shrink your exposure window.

If you’re tired of vulnerability whack-a-mole disrupting your team’s roadmap, Docker Hardened Images offer a better path forward. Learn more about how Docker Scout and Hardened Images can reduce your vulnerability management burden, or contact our team to discuss your specific security requirements.

theCUBE Research economic validation of Docker’s development platform
https://www.docker.com/blog/thecube-research-economic-validation-of-docker-development-platform/ Thu, 30 Oct 2025

Docker’s ROI and impact on agentic AI, security, and developer productivity.

theCUBE Research surveyed ~400 IT and AppDev professionals at leading global enterprises to investigate Docker’s ROI and impact on agentic AI development, software supply chain security, and developer productivity.  The industry context is that enterprise developers face mounting pressure to rapidly ship features, build agentic AI applications, and maintain security, all while navigating a fragmented array of development tools and open source code that require engineering cycles and introduce security risks. Docker transformed software development through containers and DevSecOps workflows, and is now doing the same for agentic AI development and software supply chain security.  theCUBE Research quantified Docker’s impact: teams build agentic AI apps faster, achieve near-zero CVEs, remediate vulnerabilities before exploits, ship modern cloud-native applications, save developer hours, and generate financial returns.

Keep reading for key highlights and analysis. Download theCUBE Research report and ebook to take a deep dive.

Agentic AI development streamlined using familiar technologies

Developers can build, run, and share agents and compose agentic systems using familiar Docker container workflows. To do this, developers can build agents safely using Docker MCP Gateway, Catalog, and Toolkit; run agents securely with Docker Sandboxes; and run models with Docker Model Runner. These capabilities align with theCUBE Research findings that 87% of organizations reduced AI setup time by over 25% and 80% report accelerating AI time-to-market by at least 26%. Using Docker’s modern and secure software delivery practices, development teams can implement AI feature experiments faster, testing agentic AI capabilities in days that previously took months. Nearly 78% of developers experienced significant improvement in the standardization and streamlining of AI development workflows, enabling better testing and validation of AI models. Docker helps enterprises generate business advantages through deploying new customer experiences that leverage agentic AI applications. This is phenomenal, given the nascent stage of agentic AI development in enterprises.

Software supply chain security and innovation can move in lockstep

Security engineering and vulnerability remediation can slow development to a crawl. Furthermore, checkpoints or controls may be applied too late in the software development cycle, or after dangerous exploits, creating compounded friction between security teams seeking to mitigate vulnerabilities and developers seeking to rapidly ship features. Docker embeds security directly into development workflows through vulnerability analysis and continuously-patched certified container images. theCUBE Research analysis supports these Docker security capabilities: 79% of organizations find Docker extremely or very effective at maintaining security & compliance, while 95% of respondents reported that Docker improved their ability to identify and remediate vulnerabilities. By making it very simple for developers to use secure images as a default, Docker enables engineering teams to plan, build, and deploy securely without sacrificing feature velocity or creating deployment bottlenecks. Security and innovation can move in lockstep because Docker concurrently secures software supply chains and eliminates vulnerabilities.

Developer productivity becomes a competitive advantage

Consistent container environments eliminate friction, accelerate software delivery cycles, and enable teams to focus on building features rather than overcoming infrastructure challenges. When developers spend less time on environment setup and troubleshooting, they ship more features. Application features that previously took months now reach customers in weeks. The research demonstrates Docker’s ability to increase developer productivity. 72% of organizations reported significant productivity gains in development workflows, while 75% have transformed or adopted DevOps practices when using Docker. Furthermore, when it comes to AI and supply chain security, the findings mentioned above further support how Docker unlocks developer productivity.

Financial returns exceed expectations

CFOs demand quantifiable returns for technology investments, and Docker delivers them. 95% of organizations reported substantial annual savings, with 43% reporting $50,000-$250,000 in cost reductions from infrastructure efficiency, reduced rework, and faster time-to-market. The ROI story is equally compelling: 69% of organizations report ROI exceeding 101%, with many achieving ROI above 500%. When factoring in faster feature delivery, improved developer satisfaction, and reduced security incidents, the business case for Docker becomes even more tangible. The direct costs of a security breach can surpass $500 million, so mitigating even a fraction of this cost provides a compelling financial justification for enterprises to deploy Docker to every developer.

Modernization and cloud native apps remain top of mind

For enterprises that maintain extensive legacy systems, Docker serves as a proven catalyst for cloud-native transformation at scale. Results show that nearly nine in ten organizations (88%) report Docker has enabled modernization of at least 10% of their applications, with half achieving modernization across 31-60% of workloads and another 20% modernizing over 60%. Docker accelerates the shift from monolithic architectures to modern containerized cloud-native environments while also delivering substantial business value. For example, 37% of organizations report 26% to >50% faster product time-to-market, and 72% report annual cost savings ranging from $50,000 to over $1 million.

Learn more about Docker’s impact on enterprise software development

Docker has evolved from a containerization suite into a development platform for testing, building, securing, and deploying modern software, including agentic AI applications. Docker enables enterprises to apply proven containerization and DevSecOps practices to agentic AI development and software supply chain security.

Download (below) the full report and the ebook from theCUBE Research analysis to learn about Docker’s impact on developer productivity, software supply chain security, agentic AI application development, CI/CD and DevSecOps, modernization, cost savings, and ROI. Learn how enterprises leverage Docker to transform application development and win in markets where speed and innovation determine success.

> Download the Report

> Download the eBook

Your Org, Your Tools: Building a Custom MCP Catalog
https://www.docker.com/blog/build-custom-mcp-catalog/ Fri, 24 Oct 2025

I’m Mike Coleman, a staff solutions architect at Docker. In this role, I spend a lot of time talking to enterprise customers about AI adoption. One thing I hear over and over again is that these companies want to ensure appropriate guardrails are in place when it comes to deploying AI tooling.

For instance, many organizations want tighter control over which tools developers and AI assistants can access via Docker’s Model Context Protocol (MCP) tooling. Some have strict security policies that prohibit pulling images directly from Docker Hub. Others simply want to offer a curated set of trusted MCP servers to their teams or customers.

In this post, we walk through how to build your own MCP catalog. You’ll see how to:

  • Fork Docker’s official MCP catalog
  • Host MCP server images in your own container registry
  • Publish a private catalog
  • Use MCP Gateway to expose those servers to clients

Whether you’re pulling existing MCP servers from Docker’s MCP Catalog or building your own, you’ll end up with a clean, controlled MCP environment that fits your organization.

Introducing Docker’s MCP Tooling

Docker’s MCP ecosystem has three core pieces:

MCP Catalog

A YAML-based index of MCP server definitions. These describe how to run each server and what metadata (description, image, repo) is associated with it. The MCP Catalog hosts more than 220 containerized MCP servers, ready to run with just a click.

The official docker-mcp catalog is read-only. But you can fork it, export it, or build your own.

MCP Gateway

The MCP Gateway connects your clients to your MCP servers. It doesn’t “host” anything — the servers are just regular Docker containers. But it provides a single connection point to expose multiple servers from a catalog over HTTP SSE or STDIO.

Traditionally, with X servers and Y clients, you needed X * Y configuration entries; with 10 servers and 4 clients, that’s 40 separate entries. MCP Gateway reduces that to just Y entries (one per client). Servers are managed behind the scenes based on your selected catalog.

You can start the gateway using a specific catalog:

docker mcp gateway run --catalog my-private-catalog

MCP Gateway is open source: https://github.com/docker/mcp-gateway
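For clients that connect over HTTP rather than STDIO, the gateway can also run as a long-lived endpoint. A hedged sketch, assuming the transport and port flags documented in current mcp-gateway releases; check the repository README for the flags your version supports.

docker mcp gateway run --catalog my-private-catalog --transport sse --port 8811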

Figure 1: The MCP Gateway provides a single connection point to expose multiple MCP servers

MCP Toolkit (GUI)

Built into Docker Desktop, the MCP Toolkit provides a graphical way to work with the MCP Catalog and MCP Gateway. This allows you to:

  • Access Docker’s MCP Catalog via a rich GUI
  • Secure handling of secrets (like GitHub tokens)
  • Easily enable MCP servers
  • Connect your selected MCP servers with one click to a variety of clients like Claude Code, Claude Desktop, Codex, Cursor, Continue.dev, and Gemini CLI

Workflow Overview

The workflow below will show you the steps necessary to create and use a custom MCP catalog. 

The basic steps are:

  1. Export the official MCP Catalog to inspect its contents
  2. Fork the Catalog so you can edit it
  3. Create your own private catalog
  4. Add specific server entries
  5. Pull (or rebuild) images and push them to your registry
  6. Update your catalog to use your images
  7. Run the MCP Gateway using your catalog
  8. Connect clients to it

Step-by-Step Guide: Creating and Using a Custom MCP Catalog

We start by setting a few environment variables to make this process repeatable and easy to modify later.

For the purpose of this example, assume we are migrating an existing MCP server (DuckDuckGo) to a private registry (ghcr.io/mikegcoleman). You can also add your own custom MCP server images into the catalog, and we mention that below as well. 

export MCP_SERVER_NAME="duckduckgo"
export GHCR_REGISTRY="ghcr.io"
export GHCR_ORG="mikegcoleman"
export GHCR_IMAGE="${GHCR_REGISTRY}/${GHCR_ORG}/${MCP_SERVER_NAME}:latest"
export FORK_CATALOG="my-fork"
export PRIVATE_CATALOG="my-private-catalog"
export FORK_EXPORT="./my-fork.yaml"
export OFFICIAL_DUMP="./docker-mcp.yaml"
export MCP_HOME="${HOME}/.docker/mcp"
export MCP_CATALOG_FILE="${MCP_HOME}/catalogs/${PRIVATE_CATALOG}.yaml"

Step 1: Export the official MCP Catalog 

Exporting the official Docker MCP Catalog gives you a readable local YAML file listing all servers. This makes it easy to inspect metadata like images, descriptions, and repository sources outside the CLI.

docker mcp catalog show docker-mcp --format yaml > "${OFFICIAL_DUMP}"

Step 2: Fork the official MCP Catalog

Forking the official catalog creates a copy you can modify. Since the built-in Docker catalog is read-only, this fork acts as your editable version.

docker mcp catalog fork docker-mcp "${FORK_CATALOG}"
docker mcp catalog ls

Step 3: Create a new catalog

Now create a brand-new catalog that will hold only the servers you explicitly want to support. This ensures your organization runs a clean, controlled catalog that you fully own.

docker mcp catalog create "${PRIVATE_CATALOG}"

Step 4: Add specific server entries

Export your forked catalog to a file so you can copy over just the entries you want. Here we’ll take only the duckduckgo server and add it to your private catalog.

docker mcp catalog export "${FORK_CATALOG}" "${FORK_EXPORT}"
docker mcp catalog add "${PRIVATE_CATALOG}" "${MCP_SERVER_NAME}" "${FORK_EXPORT}"

Step 5: Pull (or rebuild) images and push them to your registry

At this point you have two options:

If you are able to pull from Docker Hub, find the image key for the server you’re interested in by looking at the YAML file you exported earlier. Then pull that image down to your local machine. After you’ve pulled it down, retag it for whatever repository you want to use.

Example for duckduckgo:

vi "${OFFICIAL_DUMP}" # look for the duckduck go entry and find the image: key which will look like this:
# image: mcp/duckduckgo@sha256:68eb20db6109f5c312a695fc5ec3386ad15d93ffb765a0b4eb1baf4328dec14f

# pull the image to your machine
docker pull \
mcp/duckduckgo@sha256:68eb20db6109f5c312a695fc5ec3386ad15d93ffb765a0b4eb1baf4328dec14f 

# tag the image with the appropriate registry
docker image tag mcp/duckduckgo@sha256:68eb20db6109f5c312a695fc5ec3386ad15d93ffb765a0b4eb1baf4328dec14f  ${GHCR_IMAGE}

# push the  image
docker push ${GHCR_IMAGE}

At this point you can move on to editing the MCP Catalog file in the next section.

 
If you cannot download from Docker Hub, you can always rebuild the MCP server from its GitHub repo. To do this, open the exported YAML and look for your target server’s GitHub source repository. You can use tools like vi, cat, or grep to find it; it’s usually listed under a source key.

Example for duckduckgo:
source: https://github.com/nickclyde/duckduckgo-mcp-server/tree/main

export SOURCE_REPO="https://github.com/nickclyde/duckduckgo-mcp-server.git"

Next, you’ll rebuild the MCP server image from the original GitHub repository and push it to your own registry. This gives you full control over the image and eliminates dependency on Docker Hub access.

echo "${GH_PAT}" | docker login "${GHCR_REGISTRY}" -u "${GHCR_ORG}" --password-stdin

docker buildx build \
  --platform linux/amd64,linux/arm64 \
  "${SOURCE_REPO}" \
  -t "${GHCR_IMAGE}" \
  --push


Step 6: Update your catalog 

After publishing the image to GHCR, update your private catalog so it points to that new image instead of the Docker Hub version. This step links your catalog entry directly to the image you just built.

vi "${MCP_CATALOG_FILE}"

# Update the image line for the duckduckgo server to point to the image you created in the previous step (e.g., ghcr.io/mikegcoleman/duckduckgo:latest)

Remove the forked version of the catalog, as you no longer need it:

docker mcp catalog rm "${FORK_CATALOG}"

Step 7: Run the MCP Gateway 

Enabling the server activates it within your MCP environment. Once enabled, the gateway can load it and make it available to connected clients. You will get warnings about “overlapping servers”; that is because the same servers are listed in two places (your catalog and the original catalog).

docker mcp server enable "${MCP_SERVER_NAME}"
docker mcp server list

Step 8: Connect to popular clients 

Now integrate the MCP Gateway with your chosen client. The raw command to run the gateway is: 

docker mcp gateway run --catalog "${PRIVATE_CATALOG}"

But that just runs an instance on your local machine; what you probably want is to integrate the gateway with a client application.

To do this, you need to format the raw command so that it works for the client you wish to use. For example, with VS Code you’d want to update the mcp.json as follows:

"servers": {
    "docker-mcp-gateway-private": {
        "type": "stdio",
        "command": "docker",
        "args": [
            "mcp",
           "gateway",
            "run",
            "--catalog",
            "my-private-catalog"
        ]
    }
}

Finally, verify that the gateway is using your new GHCR image and that the server is properly enabled. This quick check confirms everything is configured as expected before connecting clients.

docker mcp server inspect "${MCP_SERVER_NAME}" | grep -E 'name|image'

Summary of Key Commands

You might find the following CLI commands handy:

docker mcp catalog show docker-mcp --format yaml > ./docker-mcp.yaml
docker mcp catalog fork docker-mcp my-fork
docker mcp catalog export my-fork ./my-fork.yaml
docker mcp catalog create my-private-catalog
docker mcp catalog add my-private-catalog duckduckgo ./my-fork.yaml
docker buildx build --platform linux/amd64,linux/arm64 https://github.com/nickclyde/duckduckgo-mcp-server.git \
  -t ghcr.io/mikegcoleman/duckduckgo:latest --push
docker mcp server enable duckduckgo
docker mcp gateway run --catalog my-private-catalog

Conclusion

By using Docker’s MCP Toolkit, Catalog, and Gateway, you can fully control the tools available to your developers, customers, or AI agents. No more one-off setups, scattered images, or cross-client connection headaches.

Your next steps:

  • Add more servers to your catalog
  • Set up CI to rebuild and publish new server images
  • Share your catalog internally or with customers

Happy curating. 

We’re working on some exciting enhancements to make creating custom catalogs even easier. Stay tuned for updates!

Introducing the Docker Premium Support and TAM service
https://www.docker.com/blog/introducing-the-docker-premium-support-and-tam-service/ Thu, 25 Sep 2025

The Docker Customer Success and Technical Account Management organizations are excited to introduce the Premium Support and TAM service: a new service designed to extend Docker’s support with always-on 24/7 coverage, priority SLAs, expert guidance, and TAM add-on services. We have carefully designed these new services to support our valued customers’ developers and global business operations.

Docker Premium Support and TAM service offers development teams:

  • Always-on, high-priority response
  • Advanced incident analysis
  • Guidance from Docker experts
  • Support across the Docker ecosystem
  • And much more, as you’ll see below

Always-on, high-priority response

In mission-critical technology environments, every minute counts. Docker Premium Support delivers 24/7 coverage, with guaranteed response SLAs as fast as one hour for Severity-1 critical issues. Customers also receive priority ticket routing, escalation management, and the option of live troubleshooting audio or video calls — ensuring developers can quickly get back to what matters most: delivering innovative software.

Advanced incident analysis

Downtime shouldn’t just be fixed — it should be prevented. With Premium Support, major incidents include Root Cause Analysis (RCA) reporting, so your teams gain visibility into what happened and how Docker is addressing the issue moving forward. This proactive approach helps strengthen resilience and minimize repeat disruptions.

Guidance from Docker experts

As mentioned above, Docker Premium Support resources provide an always-on, high-priority response. But customers can extend their coverage with the Technical Account Manager (TAM) add-on, adding proactive, high-touch expertise through a trusted TAM advisor.

The TAM add-on service unlocks even greater value for Premium Support customers. TAMs are experienced Docker experts who act as committed advisors to your business lines and engineering teams, providing a strategic partnership tailored to your organization’s goals. The Premium Support service offers customers both Designated TAM and Dedicated TAM options. With a TAM, Docker Premium Support becomes more than a safety net — it becomes a force multiplier for your development organization.

Support across the Docker ecosystem

From Docker Desktop and Hub to Scout, Build Cloud, Hardened Images, AI Model Runner, MCP, Docker Offload, and more, Premium Support covers your entire Docker footprint. As your engineering organization adopts new Docker products or scales existing use, Premium Support and TAM services scale with you.

Why Premium Support matters

Enterprises rely on Docker for application development and delivery across cloud and hybrid environments. Additionally, new demands for secure software supply chains, AI-powered applications, and AI agent development make modern software development even more challenging. Premium Support ensures that when the unexpected happens, your development teams are never left waiting. Further, with the TAM add-on, you gain a committed partner to guide strategy, adoption, and long-term success.

Next steps

The Premium Support and TAM service is available to Docker Business and DHI customers. The TAM service is available only to Premium Support customers. Additional fees may apply to the Premium Support service and to the TAM service. Contact sales to learn more about pricing.

Please leverage these resources to learn more about how Docker’s Premium Support and TAM service can help your organization.

Everyone’s a Snowflake: Designing Hardened Image Processes for the Real World
https://www.docker.com/blog/hardened-image-best-practices/ Tue, 05 Aug 2025

Hardened container images and distroless software are the new hotness as startups and incumbents alike pile into the fast-growing market. In theory, hardened images provide not only a smaller attack surface but operational simplicity. In practice, there remains a fundamental – and often painful – tension between the promised security perfection of hardened images and the reality of building software atop those images and running them in production. This causes real challenges for platform engineering teams trying to hit the Golden Mean between usability and security.

Why? Everyone’s a snowflake. 

No two software stacks, CI/CD pipeline setups, or security profiles are exactly the same. In software, small differences can cause big headaches. When a developer can no longer access their preferred debugging tools, or cannot add the services they are used to pairing in a container, that causes friction and frustration. Naturally, devs who must ship figure out workarounds or other methods to achieve desired functionality. This snowflake reality can have a snowball effect, driving modifications underground, moving them outside of the hardened image process, or causing backlogs at hardened image vendors who designed their products for rigid security, not reality. In the worst case, teams simply ditch distroless and stymie adoption.

The counterintuitive truth? Rigid container solutions can have the opposite effect, making organizations less secure. This is why the process of designing and applying hardened images is most effective when developer and DevOps needs are taken into account and flexibility is baked into the process. At the same time, too much choice is chaos and chaos generates excessive risk. This is a delicate balance and the ultimate challenge for platform ops today.

The Snowflake Problem: Why Every Environment is Unique

The Snowflake Challenge in container security is pervasive. Walk into any engineering team and you’ll find them standardized on an OS distro, where changes to that distro will likely cause unforeseen disruptions. They’ve got applications that need to connect to internal services with self-signed certificates, but hardened images often lack the CA bundles or the ability to easily add custom ones. They need to debug production issues with standard system tools, but hardened images leave them out. They’re running containers with multiple processes because splitting legacy applications into separate containers would break existing functionality and require months of rewriting. And they rely on package managers to install operational tools that security teams never planned for.

Distribution, tool and package loyalty isn’t just preference. It’s years of institutional knowledge baked into deployment scripts, monitoring configurations, and troubleshooting runbooks. Teams that have mastered a specific toolchain don’t want to retrain their entire organization just to get security benefits they can’t immediately see. Platform teams know this and will bias towards hardened image solutions that do not layer on cognitive load.

The reality is this: real-world deployment patterns rarely match the security team’s slideshow. Multi-service containers are everywhere because deadlines matter more than architectural purity. These environments work, they’re tested, and they’re supporting actual users. Asking teams to rebuild their entire stack for theoretical security improvements feels like asking them to fix something that isn’t obviously broken. And they will find a way not to. So the platform team’s job is to find a hardened image solution that recognizes these types of realities and adjusts for them rather than forcing behavioral change.

Familiarity as a Security Strategy

The most secure system in the world is worthless if your development teams route around it or ignore it. Flexibility, and simply giving teams what they are used to having, can make security nearly invisible and quite palatable.

In this light, multi-distro support from a hardened image vendor isn’t a luxury feature. It’s an adoption requirement and a critical way to mitigate the Snowflake Challenge. A hardened image solution that supports multiple major distros removes the biggest barrier to getting started: the fear of having to adopt an unfamiliar operating system. Once they recognize that the operating system in the hardened images will be familiar, platform teams can confidently begin hardening their existing stacks without worrying about retraining their entire engineering organization on a new base distribution or rewriting their deployment tooling.

Self-service customization turns potential friction into adoption drivers. When developers can add their required CA certificates easily and through self-service instead of filing support tickets, they actually use the tool. When they can merge their existing images with hardened bases through automated workflows, the migration path becomes clear. The goal isn’t to eliminate necessary customization but to make it just another simple step. No-big-deal modifications lead to smooth adoption paths and developer satisfaction.

The adoption math is straightforward: difficulty correlates inversely with security coverage. A perfectly hardened image that only 20% of teams can use provides less overall organizational security than a reasonably hardened image that 80% of teams adopt. Meeting developers where they are beats forcing architectural changes every time.

Migration Friction and Community Trust

The gap between current state and hardened images can feel daunting to many teams. Their existing Dockerfiles might be single-stage builds with years of accumulated dependencies. Their CI/CD pipelines assume certain tools will be available. Their developers assume packages they are comfortable with will be supported.

Modern tooling for hardened images can bridge this gap through progressive assistance. AI-powered converters can help translate existing Dockerfiles into multi-stage builds compatible with hardened bases. Converting legacy applications to hardened images through guided automation removes much of the technical friction. The tools handle the mechanical aspects of separating build dependencies from runtime dependencies while preserving application functionality. Teams can retain their existing development flows with less disruption and toil. Security adoption will be greater, while down-sizing the attack surface.
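As a concrete illustration of that conversion, here is a minimal before-and-after sketch in Dockerfile form: build tooling stays in a builder stage, and only the compiled artifact lands on a hardened runtime base. The hardened base image name is a placeholder for whatever your provider publishes.

# Build stage: full toolchain, never shipped to production
FROM golang:1.23 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./...

# Runtime stage: minimal, non-root hardened base (name is illustrative)
FROM registry.example.com/hardened/static:latest
COPY --from=build /app /app
ENTRYPOINT ["/app"]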

Hardened image adoption can depend on trust as much as technical merit. Organizations trust hardened image providers who demonstrate knowledge of the open source projects they’re securing. Docker has maintained close relationships with each open source project behind the more than 70 official images listed on Docker Hub. That signals long-term commitment beyond just security theater. The reality is, the best hardened image design processes are dialogues that include project stakeholders and benefit from project insights and experience.

The upshot? Platform teams need to talk to their developer and DevOps customers to understand what software is critical, and to talk to their hardened image provider to understand its ties and active interactions with the upstream communities. A successful hardened image rollout must navigate these realities and acknowledge all the invested parties.

The Happy Medium: Secure Defaults, Controlled Flexibility, Community Cred

Effective container security resembles building with Lego blocks rather than erecting security monoliths. The beloved Lego kits not only have a base-level design but are also easy to modify while maintaining structural integrity. Monoliths may appear more solid and substantial, but modifying them is challenging, and their strongly opinionated view of the world is destined to cramp someone’s style.

Auditable customization paths maintain security posture while accommodating reality. When developers can add packages through controlled processes that log changes and validate security implications, both security and productivity goals get met. The secret lies in making the secure path the easy path rather than trying to eliminate all alternatives. At the foundational level, this requires solutions that integrate with existing practices rather than replacing them wholesale. 

Success metrics need to include coverage and adoption alongside traditional hardening measurements. A hardened image strategy that achieves 95% team adoption with 80% attack surface reduction delivers better organizational security than one that achieves 99% hardening but only gets used by 30% of applications. Platform teams that understand this math are far more likely to see hardened images adopted and embraced.
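To put numbers on it: 95% adoption of an image that removes 80% of the attack surface addresses roughly 0.95 × 0.80 ≈ 76% of the organization’s aggregate exposure, while 99% hardening adopted by only 30% of applications addresses about 0.30 × 0.99 ≈ 30%.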

Beyond the Binary: A New Security Paradigm

The bottom line? Really good security deployed everywhere beats perfect security deployed sporadically because security is a system property, not a component property. The weakest link determines overall posture. An organization with consistent, reasonable security practices across all applications faces lower aggregate risk than one with perfect security on some applications and no security on others.

The path forward involves designing hardened image processes that acknowledge developer reality and involve the community in order to improve security outcomes. That comes through broad adoption and minimal disruption. This means creating migration paths that feel achievable rather than overwhelming, providing automation to smooth the path, and delivering self-service options rather than more Jira-ticket Bingo. Every organization may be a snowflake, but that doesn’t make security impossible. It just means hardened image solutions need to be as adaptable as the environments they’re protecting.
