From Compose to Kubernetes to Cloud: Designing and Operating Infrastructure with Kanvas
https://www.docker.com/blog/compose-to-kubernetes-to-cloud-kanvas/ (Mon, 08 Dec 2025)

Docker has long been the simplest way to run containers. Developers start with a docker-compose.yml file, run docker compose up, and get things running fast.

As teams grow and workloads expand into Kubernetes and integrate into cloud services, simplicity fades. Kubernetes has become the operating system of the cloud, but your clusters rarely live in isolation. Real-world platforms are a complex intermixing of proprietary cloud services – AWS S3 buckets, Azure Virtual Machines, Google Cloud SQL databases – all running alongside your containerized workloads. You and your teams are working with clusters and clouds in a sea of YAML.

Managing this hybrid sprawl often means context switching between Docker Desktop, the Kubernetes CLI, cloud provider consoles, and infrastructure as code. Simplicity fades as you juggle multiple distinct tools.

Bringing clarity back from this chaos is the new Docker Kanvas Extension from Layer5 – a visual, collaborative workspace built right into Docker Desktop that allows you to design, deploy, and operate not just Kubernetes resources, but your entire cloud infrastructure across AWS, GCP, and Azure.


What Is Kanvas?

Kanvas is a collaborative platform designed for engineers to visualize, manage, and design multi-cloud and Kubernetes-native infrastructure. Kanvas transforms the concept of infrastructure as code into infrastructure as design. This means your architecture diagram is no longer just documentation – it is the source of truth that drives your deployment. Built on top of Meshery (one of the Cloud Native Computing Foundation’s highest-velocity open source projects), Kanvas moves beyond simple Kubernetes manifests by using Meshery Models – definitions that describe the properties and behavior of specific cloud resources. This allows Kanvas to support a massive catalog of Infrastructure-as-a-Service (IaaS) components: 

  • AWS: over 55 services (e.g., EC2, Lambda, RDS, DynamoDB).
  • Azure: over 50 components (e.g., Virtual Machines, Blob Storage, VNet).
  • GCP: over 60 services (e.g., Compute Engine, BigQuery, Pub/Sub).

Kanvas bridges the gap between abstract architecture and concrete operations through two integrated modes: Designer and Operator.

Designer Mode (declarative mode)

Designer mode serves as a “blueprint studio” for cloud architects and DevOps teams, emphasizing declarative modeling – describing what your infrastructure should look like rather than how to build it step-by-step – making it ideal for GitOps workflows and team-based planning. 

  • Build and iterate collaboratively: Add annotations, comments for design reviews, and connections between components to visualize data flows, architectures, and relationships.
  • Dry-run and validate deployments: Before touching production, simulate your deployments by performing a dry-run to verify that your configuration is valid and that you have the necessary permissions. 
  • Import and export: Bring brownfield designs into Kanvas by connecting your existing clusters, or import Helm charts from your GitHub repositories. 
  • Reuse patterns, clone, and share: Pick from a catalog of reference architectures, sample configurations, and infrastructure templates, so you can start from proven blueprints rather than a blank design. Share designs just as you would a Google Doc. Clone designs just as you would a GitHub repo. Merge designs just as you would in a pull request.

Operator Mode (imperative mode)

Kanvas Operator mode transforms static diagrams into live, managed infrastructure. When you switch to Operator mode, Kanvas stops being a configuration tool and becomes an active infrastructure console, using Kubernetes controllers (like AWS Controllers for Kubernetes (ACK) or Google Config Connector) to actively manage your designs.

Operator mode allows you to:

  • Load testing and performance management: With Operator’s built-in load generator, you can execute stress tests and characterize service behavior by analyzing latency and throughput against predefined performance profiles, establishing baselines to measure the impact of infrastructure configuration changes made in Designer mode.
  • Multi-player, interactive terminal: Open a shell session with your containers and execute commands, stream and search container logs without leaving the visual topology. Streamline your troubleshooting by sharing your session with teammates. Stay in-context and avoid context-switching to external command-line tools like kubectl.
  • Integrated observability: Use the Prometheus integration to overlay key performance metrics (CPU usage, memory, request latency) and quickly spot "hotspots" in your architecture visually. Import your existing Grafana dashboards for deeper analysis.
  • Multi-cluster, multi-cloud operations: Connect multiple Kubernetes clusters (across different clouds or regions) and manage workloads that span a GKE cluster and an EKS cluster, all from a single topology view in one Kanvas interface.

While Kanvas Designer mode is about intent (what you want to build), Operator mode is about reality (what is actually running). Designer mode and Operator mode are simply two tightly integrated sides of the same coin. 

With this understanding, let's see both modes in action in Docker Desktop.

Walk-Through: From Compose to Kubernetes in Minutes

With the Docker Kanvas extension (install from Docker Hub), you can take any existing Docker Compose file and instantly see how it translates into Kubernetes, making it incredibly easy to understand, extend, and deploy your application at scale.

The Docker Samples repository offers a plethora of samples. Let’s use the Spring-based PetClinic example below. 

# sample docker-compose.yml

services:
  petclinic:
    build:
      context: .
      dockerfile: Dockerfile.multi
      target: development
    ports:
      - 8000:8000
      - 8080:8080
    environment:
      - SERVER_PORT=8080
      - MYSQL_URL=jdbc:mysql://mysqlserver/petclinic
    volumes:
      - ./:/app
    depends_on:
      - mysqlserver
    
  mysqlserver:
    image: mysql:8
    ports:
      - 3306:3306
    environment:
      - MYSQL_ROOT_PASSWORD=
      - MYSQL_ALLOW_EMPTY_PASSWORD=true
      - MYSQL_USER=petclinic
      - MYSQL_PASSWORD=petclinic
      - MYSQL_DATABASE=petclinic
    volumes:
      - mysql_data:/var/lib/mysql
      - mysql_config:/etc/mysql/conf.d
volumes:
  mysql_data:
  mysql_config:


With your Docker Kanvas extension installed:

  1. Import sample app: Save the PetClinic docker-compose.yml file to your computer, then click to import or drag and drop the file onto Kanvas.

Kanvas renders an interactive topology of your stack showing services, dependencies (like MySQL), volumes, ports, and configurations, all mapped to their Kubernetes equivalents. Kanvas performs this rendering in phases, applying an increasing degree of scrutiny at each phase; we'll explore this tiered evaluation process in a moment.
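To make that mapping concrete, here is a minimal sketch of how one Compose service definition might translate into a Kubernetes Deployment and Service. This is purely illustrative, not Kanvas's actual translation logic; the helper function and its field choices are assumptions.

```python
# Hypothetical sketch of a Compose-service -> Kubernetes translation.
# This is NOT Kanvas's implementation; it only illustrates the mapping.

def compose_service_to_k8s(name, service):
    """Translate one Compose service into Deployment + Service manifests."""
    ports = [int(str(p).split(":")[-1]) for p in service.get("ports", [])]
    env = [
        {"name": k, "value": v}
        for k, v in (e.split("=", 1) for e in service.get("environment", []))
    ]
    deployment = {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{
                    "name": name,
                    "image": service.get("image", name),
                    "ports": [{"containerPort": p} for p in ports],
                    "env": env,
                }]},
            },
        },
    }
    svc = {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": name},
        "spec": {
            "selector": {"app": name},
            "ports": [{"port": p, "targetPort": p} for p in ports],
        },
    }
    return deployment, svc

# The mysqlserver service from the PetClinic Compose file above:
mysql = {
    "image": "mysql:8",
    "ports": ["3306:3306"],
    "environment": ["MYSQL_USER=petclinic", "MYSQL_DATABASE=petclinic"],
}
dep, svc = compose_service_to_k8s("mysqlserver", mysql)
```

A real translation also has to account for volumes, build contexts, and depends_on ordering, which is where Kanvas's phased, relationship-aware evaluation earns its keep.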

  2. Enhance the PetClinic design

From here, you can enhance the generated design in a visual, no-YAML way:

  • Add a LoadBalancer, Ingress, or ConfigMap
  • Configure Secrets for your database URL or sensitive environment variables
  • Modify service relationships or attach new components
  • Add comments or any other annotations.

Importantly, Kanvas saves your design as you make changes. This gives you production-ready deployment artifacts generated directly from your Compose file.

  3. Deploy to a cluster

With one click, deploy the design to any cluster connected to Docker Desktop or any other remote cluster. Kanvas handles the translation and applies your configuration.

  4. Switch modes and interact with your app

After deploying (or when managing an existing workload), switch to Operator mode to observe and manage your deployed design. You can:

  • Inspect Deployments, Services, Pods, and their relationships.
  • Open a terminal session with your containers for quick debugging.
  • Tail and search your container logs and monitor resource metrics.
  • Generate traffic and analyze the performance of your deployment under heavy load.
  • Share your Operator view with teammates for collaborative management.

Within minutes, a Compose-based project becomes a fully managed Kubernetes workload, all without leaving Docker Desktop. This seamless flow from a simple Compose file to a fully managed, operable workload highlights the ease with which infrastructure can be visually managed, leading us to the underlying principle of Infrastructure as Design.

Infrastructure as Design

Infrastructure as design elevates the visual layout of your stack to be the primary driver of its configuration: adjusting the proximity and connectedness of components is one and the same as configuring your infrastructure. In other words, the presence, absence, proximity, or connectedness of individual components (all of which affect how one component relates to another) augments the underlying configuration of each. Kanvas understands at a granular level how each individual component relates to every other component, and augments the configuration of those components accordingly.

Kanvas renders the topology of your stack's architecture in phases. The initial rendering involves a lightweight analysis of each component, establishing a baseline for the contents of your new design. A subsequent phase applies a more sophisticated analysis: Kanvas introspects the configuration of each of your stack's components and their interdependencies, and proactively evaluates how each component relates to the others. Kanvas will add, remove, and update the configuration of your components as a result of this relationship evaluation.


This process of relationship evaluation is ongoing. Every time you make a change to your design, Kanvas re-evaluates each component configuration.

To offer an example, if you drag a Kubernetes Deployment into the vicinity of a Kubernetes Namespace, the two magnetize: the Deployment is visually placed inside the Namespace, and at the same time the Deployment's configuration is mutated to include its new Namespace designation. Kanvas proactively evaluates and mutates the configuration of the infrastructure resources in your design as you make changes.
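A minimal sketch of that mutation, treating manifests as plain dictionaries, looks like the following. This is an illustration of the behavior described above, not Kanvas's internal model; the helper name is hypothetical.

```python
# Illustrative sketch: placing a component "inside" a Namespace updates
# its manifest. This mimics the described behavior, not Kanvas itself.

def place_in_namespace(manifest, namespace):
    """Return a copy of the manifest scoped to the given namespace."""
    updated = {**manifest, "metadata": dict(manifest.get("metadata", {}))}
    updated["metadata"]["namespace"] = namespace
    return updated

deployment = {"apiVersion": "apps/v1", "kind": "Deployment",
              "metadata": {"name": "petclinic"}}

# Dragging the Deployment into a "staging" Namespace on the canvas would
# conceptually apply a mutation like this:
scoped = place_in_namespace(deployment, "staging")
```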

This ability for Kanvas to intelligently interpret and adapt to changes in your design—automatically managing configuration and relationships—is the key to achieving infrastructure as design. That power comes from a system that pairs AI-like intelligence with the reliability of a policy-driven engine.

AI-like Intelligence, Anchored by Deterministic Truth

In an era where generative AI dramatically accelerates infrastructure design, the risk of “hallucinations”—plausible but functionally invalid configurations—remains a critical bottleneck. Kanvas solves this by pairing the generative power of AI with a rigid, deterministic policy engine.


This engine acts as an architectural guardrail, offering you precise control over the degree to which AI is involved in assessing configuration correctness. It transforms designs from simple visual diagrams into validated, deployable blueprints.

While AI models function probabilistically, Kanvas's policy engine functions deterministically, automatically analyzing designs to identify, validate, and enforce connections between components based on ground-truth rules. Each of these rules is statically defined and versioned in its respective Kanvas model.

  • Deep Contextualization: The evaluation goes beyond simple visualization. It treats relationships as context-aware and declarative, interpreting how components interact (e.g., data flows, dependencies, or resource sharing) to ensure designs are not just imaginative, but deployable and compliant.
  • Semantic Rigor: The engine distinguishes between semantic relationships (infrastructure-meaningful, such as a TCP connection that auto-configures ports) and non-semantic relationships (user-defined visuals, like annotations). This ensures that aesthetic choices never compromise infrastructure integrity.

Kanvas acknowledges that trust is not binary. You maintain sovereignty over your designs through granular controls that dictate how the engine interacts with AI-generated suggestions:

  • “Human-in-the-Loop” Slider: You can modulate the strictness of the policy evaluation. You might allow the AI to suggest high-level architecture while enforcing strict policies on security configurations (e.g., port exposure or IAM roles).
  • Selective Evaluation: You can disable evaluations via preferences for specific categories. For example, you may trust the AI to generate a valid Kubernetes Service definition, but rely entirely on the policy engine to validate the Ingress controller linking to it.

Kanvas does not just flag errors; it actively works to resolve them using sophisticated detection and correction strategies.

  • Intelligent Scanning: The engine scans for potential relationships based on component types, kinds, and subtypes (e.g., a Deployment linking to a Service via port exposure), catching logical gaps an AI might miss.
  • Patches and Resolvers: When a partial or a hallucinated configuration is detected, Kanvas applies patches to either propagate missing configuration or dynamically adjusts configurations to resolve conflicts, ensuring the final infrastructure-as-code export (e.g., Kubernetes manifests, Helm chart) is clean, versionable, and secure.
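To illustrate the kind of patching described above, here is a simplified, hypothetical resolver that detects a Service targeting a port its Deployment does not expose and propagates the missing configuration. The function and its logic are assumptions for illustration, not the Kanvas engine.

```python
# Simplified, hypothetical resolver: if a Service targets a port the
# Deployment's container does not expose, patch the container to expose it.

def resolve_port_mismatch(deployment, service):
    """Propagate missing containerPorts implied by the Service's targetPorts."""
    container = deployment["spec"]["template"]["spec"]["containers"][0]
    exposed = {p["containerPort"] for p in container.get("ports", [])}
    patched = []
    for sp in service["spec"]["ports"]:
        target = sp.get("targetPort", sp["port"])
        if target not in exposed:
            container.setdefault("ports", []).append({"containerPort": target})
            patched.append(target)
    return patched

dep = {"spec": {"template": {"spec": {"containers": [
    {"name": "web", "ports": [{"containerPort": 8080}]}]}}}}
svc = {"spec": {"ports": [{"port": 80, "targetPort": 8080},
                          {"port": 8000, "targetPort": 8000}]}}

# Port 8080 already matches; port 8000 is missing and gets patched in.
applied = resolve_port_mismatch(dep, svc)
```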

Turn Complexity into Clarity

Kanvas takes the guesswork out of managing modern infrastructure. For developers used to Docker Compose, it offers a natural bridge to Kubernetes and cloud services — with visibility and collaboration built in.

  • Import and Deploy Compose Apps: Move from Compose, Helm, or Kustomize to Kubernetes in minutes.
  • Visual Designer: Understand your architecture through connected, interactive diagrams.
  • Design Catalog: Use ready-made templates and proven infrastructure patterns.
  • Terminal Integration: Debug directly from the Kanvas UI, without switching tools.
  • Sharable Views: Collaborate on live infrastructure with your team.
  • Multi-Environment Management: Operate across local, staging, and cloud clusters from one dashboard.

Kanvas brings visual design and real-time operations directly into Docker Desktop. Import your Compose files, Kubernetes Manifests, Helm Charts, and Kustomize files to explore the catalog of ready-to-use architectures, and deploy to Kubernetes in minutes — no YAML wrangling required.

Designs can also be exported in a variety of formats, including as OCI-compliant images and shared through registries like Docker Hub, GitHub Container Registry, or AWS ECR — keeping your infrastructure as design versioned and portable.

Install the Kanvas Extension from Docker Hub and start designing your infrastructure today.

Docker Desktop 4.50: Indispensable for Daily Development
https://www.docker.com/blog/docker-desktop-4-50/ (Wed, 12 Nov 2025)

Docker Desktop 4.50 represents a major leap forward in how development teams build, secure, and ship software. Across the last several releases, we’ve delivered meaningful improvements that directly address the challenges you face every day: faster debugging workflows, enterprise-grade security controls that don’t get in your way, and seamless AI integration that makes modern development accessible to every team member.

Whether you’re debugging a build failure at 2 AM, managing security policies across distributed teams, or leveraging AI capabilities to build your applications, Docker Desktop delivers clear, real-world value that keeps your workflows moving and your infrastructure secure.


Accelerating Daily Development: Productivity and Control for Every Developer

Modern development teams face mounting pressures: complex multi-service applications, frequent context switching between tools, inconsistent local environments, and the constant need to balance productivity with security and governance requirements. For principal engineers managing these challenges, the friction of daily development workflows can significantly impact team velocity and code quality.

Docker Desktop addresses these challenges head-on by delivering seamless experiences that eliminate friction and giving organizations the control necessary to maintain security and compliance without slowing teams down.

Seamless Developer Experiences

Docker Debug is now free for all users, removing barriers to troubleshooting and making it easier for every developer on your team to diagnose issues quickly. The enhanced IDE integration goes deeper than ever before: the Dockerfile debugger in the VSCode Extension enables developers to step through build processes directly within their familiar editing environment, reducing the cognitive overhead of switching between tools. Whether you’re using VSCode, Cursor, or other popular editors, Docker Desktop integrates naturally into your existing workflow. For Windows-based enterprises, Docker Desktop’s ongoing engineering investments are delivering significant stability improvements with WSL2 integration, ensuring consistent performance for development teams at scale.

Getting applications from local development to production environments requires reducing the gap between how developers work locally and how applications run at scale. Compose to Kubernetes capabilities enable teams to translate local multi-service applications into production-ready Kubernetes deployments, while cagent provides a toolkit for running and developing agents that simplifies the development process. Whether you’re orchestrating containerized microservices or developing agentic AI workflows, Docker Desktop accelerates the path from experimentation to production deployment.

Enterprise-Level Control and Governance

For organizations requiring centralized management, Docker Desktop delivers enterprise-grade capabilities that maintain security without sacrificing developer autonomy. Administrators can set proxy settings via macOS configuration profiles, and can specify PAC files and embedded PAC scripts with installer flags on macOS and Windows, ensuring corporate network policies are automatically enforced during deployment without requiring manual developer configuration.

A faster release cadence with continuous updates ensures every developer runs the latest stable version with critical security patches, eliminating the traditional tension between IT requirements and developer productivity. The Kubernetes Dashboard is now part of the left navigation, making it easier to find and use.

Kind (k8s) Enterprise Support brings production-grade Kubernetes tooling to local development, enabling teams to test complex orchestration scenarios before deployment. 


Figure 1: Kubernetes settings in Docker Desktop

Together, these capabilities build on Docker Desktop’s position as the foundation for modern development, adding enterprise-grade management that scales with your organization’s needs. You get the visibility and control that enterprise architecture teams require while preserving the speed and flexibility that keeps developers productive.

Securing Container Workloads: Enterprise-Grade Protection Without Sacrificing Speed

As containerized applications move from development to production and AI workloads proliferate across enterprises, security teams face a critical challenge: how do you enforce rigorous security controls without creating bottlenecks that slow development velocity? Traditional approaches often force organizations to choose between security and speed, but that’s a false choice that puts both innovation and infrastructure at risk.

Docker Desktop’s recent releases address this tension directly, delivering enterprise-grade security controls that operate transparently within developer workflows. These aren’t afterthought features; they’re foundational protections designed to give security and platform teams confidence at scale while keeping developers productive.

Granular Control Over Container Behavior

Enforce Local Port Bindings prevents services running in Docker Desktop from being exposed across the local network, ensuring developers maintain network isolation during local development while retaining full functionality. For teams in regulated industries where network segmentation requirements extend to development environments, this capability helps maintain compliance standards without disrupting developer workflows.
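As an illustration of the idea behind this control (not Docker's implementation), a check like the following could flag Compose-style port bindings that would expose a service beyond localhost. The helper below is hypothetical.

```python
# Hypothetical check: flag published ports that bind to all interfaces
# instead of localhost. Illustrates the local-port-binding idea only.

def non_local_bindings(ports):
    """Return the port entries that are not bound to 127.0.0.1/localhost."""
    flagged = []
    for entry in ports:
        parts = str(entry).split(":")
        # "HOST_IP:HOST_PORT:CONTAINER_PORT" vs "HOST_PORT:CONTAINER_PORT";
        # omitting the host IP publishes on all interfaces (0.0.0.0).
        host_ip = parts[0] if len(parts) == 3 else "0.0.0.0"
        if host_ip not in ("127.0.0.1", "localhost"):
            flagged.append(entry)
    return flagged

ports = ["127.0.0.1:8080:8080", "3306:3306", "0.0.0.0:9000:9000"]
exposed = non_local_bindings(ports)
```

With the enforcement enabled, bindings like the last two would stay confined to the loopback interface rather than being reachable from the local network.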

Building on Secure Foundations

These runtime protections work in tandem with secure container foundations. Docker’s new Hardened Images are secure, minimal, production-ready container images maintained by Docker with near-zero CVEs and enterprise SLA backing. Recent updates introduced unlimited catalog pricing and the addition of Helm charts to the catalog. We also outlined Docker’s five pillars for Software Supply Chain Security, delivering transparency and eliminating the endless CVE remediation cycle. While Hardened Images are available as a separate add-on, they’re purpose-built to extend the secure-by-default foundation that Docker Desktop provides, giving teams a comprehensive approach to container security from development through production.

Seamless Enterprise Policy Integrations

The Docker CLI now gracefully handles certificates issued by non-conforming certificate authorities (CAs) that use negative serial numbers. While the X.509 standard specifies that certificate serial numbers must be positive, some enterprise PKI systems still produce certificates that violate this rule. Previously, organizations had to choose between adhering to their CA configuration and maintaining Docker compatibility, a frustrating trade-off that often led to insecure workarounds. Now, Docker Desktop works seamlessly with enterprise certificate infrastructure, ensuring developers can authenticate to private registries without security teams compromising their PKI standards.

These improvements reflect Docker’s commitment to being secure by default. Rather than treating security as a feature developers must remember to enable, Docker Desktop builds protection into the platform itself, giving enterprises the confidence to scale container adoption while maintaining the developer experience that drives innovation.

Unlocking AI Development: Making the Model Context Protocol (MCP) Accessible for Every Developer

As AI-native development becomes central to modern software engineering, developers face a critical challenge: integrating AI capabilities into their workflows shouldn’t require extensive configuration knowledge or create friction that slows teams down. The Model Context Protocol (MCP) offers powerful capabilities for connecting AI agents to development tools and data sources, but accessing and managing these integrations has historically been complex, creating barriers to adoption, especially for teams with varying technical expertise.

Docker is addressing these challenges directly by making MCP integration seamless and secure within Docker Desktop.

Guided Onboarding Through Learning Center and MCP Toolkit Walkthroughs and Improved MCP Server Discovery

Understanding that accessibility drives adoption, Docker has introduced a redesigned onboarding experience through the Learning Center. The new MCP Toolkit Walkthroughs guide teams through complex setup processes step-by-step, ensuring that engineers of all skill levels can confidently adopt AI-powered workflows. Further, Docker’s MCP Server Discovery feature enables developers to search, filter, and sort available MCP servers efficiently. By eliminating the knowledge barriers and friction around discovery, these improvements accelerate time to productivity and help organizations scale AI development practices across their teams.

Expanded Catalog: 270+ MCP Servers and Growing

The Docker MCP Catalog now includes over 270 MCP servers, with support for more than 60 remote servers. We’ve also added one-click connections for popular clients like Claude Code and Codex, making it easier than ever to supercharge your AI coding agents with powerful MCP tools. Getting started takes just a few clicks.

Remote MCP Server Support with Built-In OAuth

Connecting to MCP servers has traditionally meant dealing with manual tokens, fragile config files, and scattered credential management. It’s frustrating, especially for developers new to these workflows, who often don’t know where to find the right credentials in third-party tools. With the latest update to the Docker MCP Toolkit, developers can now securely connect to 60+ remote MCP servers, including Notion and Linear, using built-in OAuth support. This update goes beyond convenience; it lays the foundation for a more connected, intelligent, and automated developer experience, all within Docker Desktop. Read more about connecting to remote MCP servers.


Figure 2: Docker MCP Toolkit now supports remote MCP Servers with OAuth built-in

Smarter, More Efficient, and More Capable Agents with Dynamic MCPs

In this release, we’re introducing dynamic MCPs, a major step forward in enabling AI agents to discover, configure, and compose tools autonomously. Previously, integrating MCP servers required manual setup and static configurations. Now, with new features like Smart Search and Tool Composition, agents can search the MCP Catalog, pull only the tools they need, and even generate code to compose multi-tool workflows, all within a secure, sandboxed environment. These enhancements not only increase agent autonomy but also improve performance by reducing token usage and minimizing context bloat. Ultimately, this leads to less context switching and more focused time for developers. Read more about dynamic MCPs.

Together, these advancements represent Docker’s commitment to making AI-native development accessible and practical for development teams of any size.

Conclusion: Committed to Your Development Success

The innovations across Docker Desktop 4.45 through 4.50 reinforce our commitment to being the development solution teams rely on every day, for every workflow, at any scale.

We’ve made daily development faster and more integrated, with free debugging tools, native IDE support, and enterprise governance that actually works. We’ve strengthened security with controls that protect your infrastructure without creating bottlenecks. And we’ve made AI development accessible, turning complex integrations into guided experiences that accelerate your team’s capabilities. The impact is measurable. Independent research from theCUBE found that Docker Desktop users achieve 50% faster build times and reclaim 10-40+ hours per developer each month, time that goes directly back into innovation.

This is Docker Desktop operating as your indispensable foundation: giving developers the tools they need to stay productive, giving security teams the controls they need to stay protected, and giving organizations the confidence they need to innovate at scale.

As we continue our accelerated release cadence, expect Docker to keep delivering the features that matter most to how you build, ship, and run modern applications. We’re committed to being the solution you can count on today and as your needs evolve.

Upgrade to the latest Docker Desktop now

Learn more

Docker Desktop 4.39: Smarter AI Agent, Docker Desktop CLI in GA, and Effortless Multi-Platform Builds
https://www.docker.com/blog/docker-desktop-4-39/ (Thu, 06 Mar 2025)

Developers need a fast, secure, and reliable way to build, share, and run applications — and Docker makes that easy. With the Docker Desktop 4.39 release, we’re excited to announce a few developer productivity enhancements including Docker AI Agent with Model Context Protocol (MCP) and Kubernetes support, general availability of the Docker Desktop CLI, and `--platform` flag support for more seamless multi-platform image management.


Docker AI Agent: Smarter, more capable, and now with MCP & Kubernetes

In our last release, we introduced the Docker AI Agent in beta as an AI-powered, context-aware assistant built into Docker Desktop and the CLI. It simplifies container management, troubleshooting, and workflows with guidance and automation. And the response has been incredible: a 9x increase in weekly active users. With each Docker Desktop release, we’re making Docker AI Agent smarter, more helpful, and more versatile across developer container workflows. And if you’re using Docker for GitHub Copilot, you’ll get these upgrades automatically — so you’re always working with the latest and greatest.

Docker AI Agent now supports Model Context Protocol (MCP) and Kubernetes, along with usability upgrades like multiline prompts and easy copying. The agent can now also interact with the Docker Engine to list and clean up containers, images, and volumes. Plus, with access to the Kubernetes cluster, Docker AI Agent can list namespaces, deploy and expose, for example, an Nginx service, and analyze pod logs. 

How Docker AI Agent Uses MCP

MCP is a new standard for connecting AI agents and models to external data and tools. It lets AI-powered apps and agents retrieve data and information from external sources, perform operations with third-party services, and interact with local filesystems, unlocking new and expanded capabilities. MCP introduces the concepts of MCP clients and MCP servers: clients request resources, and servers handle those requests and perform the requested actions.

The Docker AI Agent acts as an MCP client and can interact with MCP servers running as containers. When you run the docker ai command in the terminal or ask a question in the Docker Desktop AI Agent window, the agent looks for a gordon-mcp.yml file in the working directory for a list of MCP servers that should be used in that context, extending what the agent can do there.
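Conceptually, the client/server exchange is a JSON-RPC-style request and response. The sketch below is a toy illustration of that pattern; the tool name and the server's behavior are made up, and this is not the actual MCP wire protocol implementation or Docker AI Agent code.

```python
import json

# Toy illustration of an MCP-style client/server exchange using JSON-RPC
# framing. The "list_containers" tool and the server logic are invented.

def make_request(request_id, tool, arguments):
    """Build a JSON-RPC-style request asking a server to run a tool."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

def toy_server(raw_request):
    """Pretend server: handles the request and returns a result."""
    req = json.loads(raw_request)
    if req["params"]["name"] == "list_containers":
        result = {"containers": ["petclinic", "mysqlserver"]}
    else:
        result = {"error": "unknown tool"}
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

# The client sends a request; the server replies with a correlated result.
response = json.loads(toy_server(make_request(1, "list_containers", {})))
```

In the real toolkit, the servers run as containers and the transport and capability negotiation are handled for you; the point here is only the request/response shape.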

To make MCP adoption easier and more secure, Docker has collaborated with Anthropic to build container images for the reference implementations of MCP servers, available on Docker Hub under the mcp namespace. Check out our docs for examples of using MCP with Docker AI Agent. 

Containerizing apps in multiple popular languages: More coming soon

Docker AI Agent is also more capable, and can now support the containerization of applications in new programming languages including:

  • JavaScript/TypeScript applications using npm, pnpm, yarn, and bun
  • Go applications using Go modules
  • Python applications using pip, poetry, and uv
  • C# applications using NuGet

Try it out — just ask, “Can you containerize my application?” 

Once the agent runs through steps such as determining the number of services in the project, the language, package manager, and relevant information for containerization, it’ll generate Docker-related assets. You’ll have an optimized Dockerfile, Docker Compose file, dockerignore file, and a README to jumpstart your application with Docker. 

More language and package manager support will be available soon!


Figure 1: Docker AI Agent helps with containerizing your app and shows steps of its work

No need to write scripts, just ask Docker AI Agent

The Docker AI Agent also comes with built-in capabilities for interacting with containers, images, and volumes. Instead of writing scripts, you can simply ask in natural language to perform complex operations, such as combining various servers to find and clean up unused images.


Figure 2: Finding and optimizing unused images storage with a simple ask to Docker AI Agent

Docker Desktop CLI: Now in GA

With the Docker Desktop 4.37 release, we introduced the Docker Desktop CLI controller in Beta, a command-line tool to manage Docker Desktop. In addition to performing tasks like starting, stopping, restarting, and checking the status of Docker Desktop directly from the command line, developers can also print logs and update to the latest version of Docker Desktop. 

Docker meets developers where they work — whether in the CLI or GUI. With the Docker Desktop CLI, developers can seamlessly switch between GUI and command-line workflows, tailoring the experience to their needs. 

This feature lets you automate Docker Desktop operations in CI/CD pipelines, expedites troubleshooting directly from the terminal, and creates a smoother, distraction-free workflow. IT admins also benefit from this feature; for example, they can use these commands in automation scripts to manage updates. 

Improve multi-platform image management with the new --platform flag 

Containerized applications often need to run across multiple architectures, making efficient platform-specific image management essential. To simplify this, we’ve introduced a --platform flag for docker save, docker load, and docker history. This addition will let developers explicitly select and manage images for specific architectures like linux/amd64, linux/arm64, and more.

The new --platform flag gives you full control over image variants when saving or loading. For example, exporting only the linux/arm64 version of an image is now as simple as running:

docker save --platform linux/arm64 -o my-image.tar my-app:latest

Similarly, docker load --platform linux/amd64 ensures that only the amd64 variant is imported from a multi-architecture archive, reducing ambiguity and improving cross-platform workflows. For debugging and optimization, docker history --platform provides detailed insights into the build history of a specific architecture.
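The save/load round trip above can be sketched as a small script. The image name, platform, and tar-file naming convention are placeholders, and the docker commands are shown commented out because they assume a Docker version whose save and load commands accept --platform:

```shell
image="my-app:latest"
platform="linux/arm64"

# Derive a per-platform archive name, e.g. my-app_linux-arm64.tar
tarfile="${image%%:*}_${platform//\//-}.tar"
echo "$tarfile"

# Export only the arm64 variant:
# docker save --platform "$platform" -o "$tarfile" "$image"
# Import only that variant from the archive on another machine:
# docker load --platform "$platform" -i "$tarfile"
```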

These enhancements streamline multi-platform development by giving developers full control over how they build, store, and distribute images. 

Head over to our history, load, and save documentation to learn more! 

Wrapping up 

Docker Desktop 4.39 reinforces our commitment to streamlining the developer experience. With Docker AI Agent’s expanded support for MCP, Kubernetes, built-in capabilities of interacting with containers, and more, developers can simplify and customize their workflow. They can also seamlessly switch between the GUI and command-line, while creating automations with the Docker Desktop CLI. Plus, with the new --platform flag, developers now have full control over how they build, store, and distribute images. 

Less friction, more flexibility — we can’t wait to see what you build next!

Authenticate and update today to receive your subscription level’s newest Docker Desktop features.

Learn more

]]>
Docker Desktop 4.38: New AI Agent, Multi-Node Kubernetes, and Bake in GA https://www.docker.com/blog/docker-desktop-4-38/ Wed, 05 Feb 2025 21:42:31 +0000 https://www.docker.com/?p=67191 At Docker, we’re committed to simplifying the developer experience and empowering enterprises to scale securely and efficiently. With the Docker Desktop 4.38 release, teams can look forward to improved developer productivity and enterprise governance. 

We’re excited to announce the General Availability of Bake, a powerful feature for optimizing build performance and multi-node Kubernetes testing to help teams “shift left.” We’re also expanding availability for several enterprise features designed to boost operational efficiency. And last but not least, Docker AI Agent (formerly Project: Agent Gordon) is now in Beta, delivering intelligent, real-time Docker-related suggestions across Docker CLI, Desktop, and Hub. It’s here to help developers navigate Docker concepts, fix errors, and boost productivity.


Docker’s AI Agent boosts developer productivity  

We’re thrilled to introduce Docker AI Agent (also known as Project: Agent Gordon) — an embedded, context-aware assistant seamlessly integrated into the Docker suite. Available within Docker Desktop and CLI, this innovative agent delivers real-time, tailored guidance for tasks like container management and Docker-specific troubleshooting — eliminating disruptive context-switching. Docker AI agent can be used for every Docker-related concept and technology, whether you’re getting started, optimizing an existing Dockerfile or Compose file, or understanding Docker technologies in general. By addressing challenges precisely when and where developers encounter them, Docker AI Agent ensures a smoother, more productive workflow. 

The first iteration of Docker’s AI Agent is now available in Beta for all signed-in users. The agent is disabled by default, so user activation is required. Read more about Docker’s new AI Agent and how to use it to accelerate developer velocity here.


Figure 1: Asking questions to Docker AI Agent in Docker Desktop

Simplify build configurations and boost performance with Docker Bake

Docker Bake is an orchestration tool that simplifies and speeds up Docker builds. After launching as an experimental feature, we’re thrilled to make it generally available with exciting new enhancements.

While Dockerfiles are great for defining build steps, teams often juggle docker build commands with various options and arguments — a tedious and error-prone process. Bake changes the game by introducing a declarative file format that consolidates all options and image dependencies (also known as targets) in one place. No more passing flags to every build command! Plus, Bake’s ability to parallelize and deduplicate work ensures faster and more efficient builds.
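As an illustration of that declarative format, a minimal docker-bake.hcl might look like this (the target name, tags, and platforms are placeholders):

```hcl
// docker-bake.hcl -- consolidates build options instead of passing flags per build
group "default" {
  targets = ["app"]
}

target "app" {
  context    = "."
  dockerfile = "Dockerfile"
  platforms  = ["linux/amd64", "linux/arm64"]
  tags       = ["my-registry/my-app:latest"]
}
```

With this file in place, docker buildx bake builds the default group, and docker buildx bake app builds a single target.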

Key benefits of Docker Bake

  • Simplicity: Abstract complex build configurations into one simple command.
  • Flexibility: Write build configurations in a declarative syntax, with support for custom functions, matrices, and more.
  • Consistency: Share and maintain build configurations effortlessly across your team.
  • Performance: Bake parallelizes multi-image workflows, enabling faster and more efficient builds.

Developers can simplify multi-service builds by integrating Bake directly into their Compose files — Bake supports Compose files natively. It enables easy, efficient building of multiple images from a single repository with shared configurations. Plus, it works seamlessly with Docker Build Cloud locally and in CI. With Bake-optimized builds as the foundation, developers can achieve more efficient Docker Build Cloud performance and faster builds.

Learn more about streamlining build configurations, boosting performance, and improving team workflows with Bake in our announcement blog

Shift Left with Multi-Node Kubernetes testing in Docker Desktop

In today’s complex production environments, “shifting left” is more essential than ever. By addressing concerns earlier in the development cycle, teams reduce costs and simplify fixes, leading to more efficient workflows and better outcomes. That’s why we continue to bring new features and enhancements that integrate feedback directly into the developer’s inner loop.

Docker Desktop now includes Multi-Node Kubernetes integration, enabling easier and extensive testing directly on developers’ machines. While single-node clusters allow for quick verification of app deployments, they fall short when it comes to testing resilience and handling the complex, unpredictable issues of distributed systems. To tackle this, we’re updating our Kubernetes distribution with kind — a lightweight, fast, and user-friendly solution for local test and multi-node cluster simulations.


Figure 2: Selecting Kubernetes version and cluster number for testing

Key Benefits:

  • Multi-node cluster support: Replicate a more realistic production environment to test critical features like node affinity, failover, and networking configurations.
  • Multiple Kubernetes versions: Easily test across different Kubernetes versions, which is a must for validating migration paths.
  • Up-to-date maintenance: Since kind is an actively maintained open-source project, developers can update to the latest version on demand without waiting for the next Docker Desktop release.

Head over to our documentation to discover how to use multi-node Kubernetes clusters for local testing and simulation.

General availability of administration features for Docker Business subscription

With the Docker Desktop 4.36 release, we introduced Beta enterprise admin tools to streamline administration, improve security, and enhance operational efficiency. And the feedback from our Early Access Program customers has been overwhelmingly positive. 

For instance, enforcing sign-in with macOS configuration files and across multiple organizations makes deployment easier and more flexible for large enterprises. Also, the PKG installer simplifies managing large-scale Docker Desktop deployments on macOS by eliminating the need to convert DMG files into PKG first.

Today, these administration features are available to all Docker Business customers.

Looking ahead, Docker is dedicated to continue expanding enterprise administration capabilities. Stay tuned for more announcements!

Wrapping up 

Docker Desktop 4.38 reinforces our commitment to simplifying the developer experience while equipping enterprises with robust tools. 

With Bake now in GA, developers can streamline complex build configurations into a single command. The new Docker AI Agent offers real-time, on-demand guidance within their preferred Docker tools. Plus, with Multi-node Kubernetes testing in Docker Desktop, they can replicate realistic production environments and address issues earlier in the development cycle. Finally, we made a few new admin tools available to all our Business customers, simplifying deployment, management, and monitoring. 

We look forward to how these innovations accelerate your workflows and supercharge your operations! 

Learn more

]]>
How to Set Up a Kubernetes Cluster on Docker Desktop https://www.docker.com/blog/how-to-set-up-a-kubernetes-cluster-on-docker-desktop/ Tue, 07 Jan 2025 13:49:06 +0000 https://www.docker.com/?p=65135 Kubernetes is an open source platform for automating the deployment, scaling, and management of containerized applications across clusters of machines. It’s become the go-to solution for orchestrating containers in production environments. But if you’re developing or testing locally, setting up a full Kubernetes cluster can be complex. That’s where Docker Desktop comes in — it lets you run Kubernetes directly on your local machine, making it easy to test microservices, CI/CD pipelines, and containerized apps without needing a remote cluster.

Getting Kubernetes up and running can feel like a daunting task, especially for developers working in local environments. But with Docker Desktop, spinning up a fully functional Kubernetes cluster is simpler than ever. Whether you’re new to Kubernetes or just want an easy way to test containerized applications locally, Docker Desktop provides a streamlined solution. In this guide, we’ll walk through the steps to start a Kubernetes cluster on Docker Desktop and offer troubleshooting tips to ensure a smooth experience. 

Note: Docker Desktop’s Kubernetes cluster is designed specifically for local development and testing; it is not for production use. 


Benefits of running Kubernetes in Docker Desktop 

The benefits of this setup include: 

  • Easy local Kubernetes cluster: A fully functional Kubernetes cluster runs on your local machine with minimal setup, handling network access between the host and Kubernetes as well as storage management. 
  • Easier learning path and developer convenience: For developers familiar with Docker but new to Kubernetes, having Kubernetes built into Docker Desktop offers a low-friction learning path. 
  • Testing Kubernetes-based applications locally: Docker Desktop gives developers a local environment to test Kubernetes-based microservices applications that require Kubernetes features like services, pods, ConfigMaps, and secrets without needing access to a remote cluster. It also helps developers to test CI/CD pipelines locally. 

How to start Kubernetes cluster on Docker Desktop in three steps

  1. Download the latest Docker Desktop release.
  2. Install Docker Desktop on the operating system of your choice. Currently, the supported operating systems are macOS, Linux, and Windows.
  3. In the Settings menu, select Kubernetes > Enable Kubernetes and then Apply & restart to start a one-node Kubernetes cluster (Figure 1). Typically, the time it takes to set up the Kubernetes cluster depends on your internet speed to pull the needed images.
Figure 1: Starting Kubernetes.

Once the Kubernetes cluster is started successfully, you can see the status from the Docker Desktop dashboard or the command line.

From the dashboard (Figure 2):

Figure 2: Status from the dashboard.

The command-line status:

$ kubectl get node
NAME             STATUS   ROLES           AGE   VERSION
docker-desktop   Ready    control-plane   5d    v1.30.2
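With the node reporting Ready, you can smoke-test the cluster with a minimal manifest; the names and image below are illustrative. Save it as hello.yaml and apply it with kubectl --context docker-desktop apply -f hello.yaml:

```yaml
# hello.yaml -- a tiny Deployment plus Service for a local smoke test
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:alpine
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
  ports:
    - port: 80
```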

Getting Kubernetes support

Docker bundles Kubernetes but does not provide official Kubernetes support. If you are experiencing issues with Kubernetes, however, you can get support in several ways, including from the Docker community, Docker guides, and GitHub documentation. 

What to do if you experience an issue 

Generate a diagnostics file

Before troubleshooting, generate a diagnostics file using your terminal.

Refer to the documentation for diagnosing from the terminal. For example, if you are using a Mac, run the following command:

/Applications/Docker.app/Contents/MacOS/com.docker.diagnose gather -upload

The command will show you where the diagnostics file is saved:

Gathering diagnostics for ID  into /var/folders/50/<Random Character>/<Random Character>/<Machine unique ID>/<YYYYMMDDTTTT>.zip.

In this case, the file is saved at /var/folders/50/<Random Characters>/<Random Characters>/<Machine unique ID>/<YYYYMMDDTTTT>.zip. Unzip the file (<YYYYMMDDTTTT>.zip) to find the log files for Docker Desktop.

Check for logs

Checking for logs instead of guessing the issue is good practice. Understanding what Kubernetes components are available and what their functions are is essential before you start troubleshooting. You can narrow down the process by looking at the specific component logs. Look for the keyword error or fatal in the logs. 

Depending on which platform you are using, one method is to use the grep command and search for the keyword in the macOS terminal, a Linux distro for WSL2, or the Linux terminal for the file you unzipped:

$ grep -Hrni "<keyword>" <The path of the unzipped file>

## For example, one of the errors related to Kubernetes found in the "com.docker.backend.exe" logs:

$ grep -Hrni "error" *
[com.docker.backend.exe.log:[2022-12-05T05:24:39.377530700Z][com.docker.backend.exe][W] starting kubernetes: 1 error occurred: 
com.docker.backend.exe.log:	* starting kubernetes: pulling kubernetes images: pulling registry.k8s.io/coredns:v1.9.3: Error response from daemon: received unexpected HTTP status: 500 Internal Server Error
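To make the grep flags concrete, here is a self-contained reproduction against a sample log file (the folder path and log lines are made up for illustration):

```shell
# Create a sample diagnostics folder with one backend log file
mkdir -p /tmp/dd-diagnostics
cat > /tmp/dd-diagnostics/com.docker.backend.exe.log <<'EOF'
[2022-12-05T05:24:39Z][com.docker.backend.exe][W] starting kubernetes: 1 error occurred
[2022-12-05T05:24:40Z][com.docker.backend.exe][I] kubernetes: state=starting
EOF

# -H prints the file name, -r recurses, -n prints line numbers, -i ignores case
grep -Hrni "error" /tmp/dd-diagnostics
```

Only the line containing "error" is printed, prefixed with its file name and line number, which is what lets you jump straight to the failing component.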

Troubleshooting example

Let’s say you notice there is an issue starting up the cluster. This issue could be related to the Kubelet process, which works as a node-level agent to help with container management and orchestration within a Kubernetes cluster. So, you should check the Kubelet logs. 

But, where is the Kubelet log located? It’s at log/vm/kubelet.log in the diagnostics file.

An example of a related issue can be found in kubelet.log: the images needed to set up Kubernetes cannot be pulled due to network or internet restrictions. In that case, you will find errors about failing to pull the necessary Kubernetes images.

For example:

starting kubernetes: pulling kubernetes images: pulling registry.k8s.io/coredns:v1.9.3: Error response from daemon: received unexpected HTTP status: 500 Internal Server Error

Normally, 10 images are needed to set up the cluster. The following output is from a macOS running Docker Desktop version 4.33:

$ docker image ls
REPOSITORY                                TAG                                                                           IMAGE ID       CREATED         SIZE
docker/desktop-kubernetes                 kubernetes-v1.30.2-cni-v1.4.0-critools-v1.29.0-cri-dockerd-v0.3.11-1-debian   5ef3082e902d   4 weeks ago     419MB
registry.k8s.io/kube-apiserver            v1.30.2                                                                       84c601f3f72c   7 weeks ago     112MB
registry.k8s.io/kube-scheduler            v1.30.2                                                                       c7dd04b1bafe   7 weeks ago     60.5MB
registry.k8s.io/kube-controller-manager   v1.30.2                                                                       e1dcc3400d3e   7 weeks ago     107MB
registry.k8s.io/kube-proxy                v1.30.2                                                                       66dbb96a9149   7 weeks ago     87.9MB
registry.k8s.io/etcd                      3.5.12-0                                                                      014faa467e29   6 months ago    139MB
registry.k8s.io/coredns/coredns           v1.11.1                                                                       2437cf762177   11 months ago   57.4MB
docker/desktop-vpnkit-controller          dc331cb22850be0cdd97c84a9cfecaf44a1afb6e                                      3750dfec169f   14 months ago   35MB
registry.k8s.io/pause                     3.9                                                                           829e9de338bd   22 months ago   514kB
docker/desktop-storage-provisioner        v2.0                                                                          c027a58fa0bb   3 years ago     39.8MB

You can check whether you successfully pulled the 10 images by running docker image ls. If images are missing, a workaround is to save the missing image using docker image save from a machine that successfully starts the Kubernetes cluster (provided both run the same Docker Desktop version). Then, you can transfer the image to your machine, use docker image load to load the image into your machine, and tag it. 

For example, if the registry.k8s.io/coredns:v<VERSION> image is not available,  you can follow these steps:

  1. Use docker image save from a machine that successfully starts the Kubernetes cluster to save it as a tar file: docker save registry.k8s.io/coredns:v<VERSION> > <Name of the file>.tar.
  2. Manually transfer the <Name of the file>.tar to your machine.
  3. Use docker image load to load the image on your machine: docker image load < <Name of the file>.tar.
  4. If the loaded image is missing its tag, re-tag it: docker image tag <IMAGE ID> registry.k8s.io/coredns:v<VERSION>.
  5. Re-enable the Kubernetes from your Docker Desktop’s settings.
  6. Check other logs in the diagnostics log.

What to look for in the diagnostics log

In the diagnostics file, look for the folder named kube/. (In the entries below, <kube> stands for kubectl on macOS and Linux, and for kubectl.exe on Windows.)

  • kube/get-namespaces.txt: Lists all the namespaces; equivalent to <kube> --context docker-desktop get namespaces.
  • kube/describe-nodes.txt: Describes the docker-desktop node; equivalent to <kube> --context docker-desktop describe nodes.
  • kube/describe-pods.txt: Describes all pods running in the Kubernetes cluster.
  • kube/describe-services.txt: Describes the running services; equivalent to <kube> --context docker-desktop describe services --all-namespaces.
  • You can also find other useful Kubernetes logs in the same folder.

Search for known issues

For any error message found in the steps above, you can search for known Kubernetes issues on GitHub to see if a workaround or any future permanent fix is planned.

Reset or reboot 

If the previous steps weren’t helpful, try a reboot. And, if a reboot is not helpful, the last alternative is to reset your Kubernetes cluster, which often helps resolve issues: 

  • Reboot: To reboot, restart your machine. Rebooting a machine in a Kubernetes cluster can help resolve issues by clearing transient states and restoring the system to a clean state.
  • Reset: For a reset, navigate to Settings > Kubernetes > Reset the Kubernetes Cluster. Resetting a Kubernetes cluster can help resolve issues by essentially reverting the cluster to a clean state, and clearing out misconfigurations, corrupted data, or stuck resources that may be causing problems.

Bringing Kubernetes to your local development environment

This guide offers a straightforward way to start a Kubernetes cluster on Docker Desktop, making it easier for developers to test Kubernetes-based applications locally. It covers key benefits like simple setup, a more accessible learning path for beginners, and the ability to run tests without relying on a remote cluster. We also provide some troubleshooting tips and resources for resolving common issues. 

Whether you’re just getting started or looking to improve your local Kubernetes workflow, give it a try and see what you can achieve with Docker Desktop’s Kubernetes integration.

Learn more

]]>
10 Years Since Kubernetes Launched at DockerCon https://www.docker.com/blog/10-years-since-kubernetes-launched-at-dockercon/ Mon, 10 Jun 2024 17:07:39 +0000 https://www.docker.com/?p=55971 It is not often you can reflect back and pinpoint a moment where an entire industry changed, less often to pinpoint that moment and know you were there to see it first hand.

On June 10th, 2014, on day 2 of the first-ever DockerCon, 16 minutes and 4 seconds into his keynote speech, Google VP of Infrastructure Eric Brewer announced that Google was releasing the open source solution it had built for orchestrating containers: Kubernetes. This was one of those moments. The announcement of Kubernetes began a tectonic shift in how the internet runs at scale; many of the most important applications in the world today would not be possible without Docker and Kubernetes.

2400x1260 kubernetes 10th anniversary

You can watch the announcement on YouTube.

We didn’t know how much Kubernetes would change things at that time. In fact, in those two days, Apache Mesos, Red Hat’s GearD, Docker Libswarm, and Facebook’s Tupperware were all also launched. This triggered what later became known by some as “the Container Orchestration War.” Fast forward three years, and the community had consolidated on Kubernetes for the orchestration layer and Docker (powered by containerd) for the container format, distribution protocol, and runtime. In 2017, Docker integrated Kubernetes into its desktop and server products, and this helped cement Kubernetes’ leadership.

Why was it so impactful? Kubernetes landed at just the right time and solved just the right problems. The number of containers and server nodes in production was increasing exponentially every day. The role of DevOps put a lot of burden on the engineer. They needed solutions that could help manage applications at unprecedented scale. Containers and their orchestration engines were, and continue to be, the lifeblood of modern application deployments because they are the only real way to solve this need.

We, the Docker team and community, consider ourselves incredibly fortunate to have played a role in this history. To look back and say we had a part in what has been built from that one moment is humbling.

… and the potential of what is yet to come is beyond exciting! Especially knowing that our impact continues today as a keystone to modern application development. Docker enables app development teams to rapidly deliver applications, secure their software supply chains, and do so without compromising the visibility and controls required by the business.

Happy 10th birthday Kubernetes! Congratulations to all who were and continue to be involved in creating this tremendous gift to the software industry.

Learn more

]]>
Develop Kubernetes Operators in Java without Breaking a Sweat https://www.docker.com/blog/develop-kubernetes-operators-in-java-without-breaking-a-sweat/ Thu, 06 Jun 2024 13:48:45 +0000 https://www.docker.com/?p=55506 Developing Kubernetes operators in Java is not yet the norm. So far, Go has been the language of choice here, not least because of its excellent support for writing corresponding tests. 

One challenge in developing Java-based projects has been the lack of easy automated integration testing that interacts with a Kubernetes API server. However, thanks to the open source library Kindcontainer, based on the widely used Testcontainers integration test library, this gap can be bridged, enabling easier development of Java-based Kubernetes projects. 

In this article, we’ll show how to use Testcontainers to test custom Kubernetes controllers and operators implemented in Java.

2400x1260 develop kubernetes operators in java without breaking a sweat

Kubernetes in Docker

Testcontainers allows starting arbitrary infrastructure components and processes running in Docker containers from tests running within a Java virtual machine (JVM). The framework takes care of binding the lifecycle and cleanup of Docker containers to the test execution. Even if the JVM is terminated abruptly during debugging, for example, it ensures that the started Docker containers are also stopped and removed. In addition to a generic class for any Docker image, Testcontainers offers specialized implementations in the form of subclasses — for components with sophisticated configuration options, for example. 

These specialized implementations can also be provided by third-party libraries. The open source project Kindcontainer is one such third-party library that provides specialized implementations for various Kubernetes containers based on Testcontainers:

  • ApiServerContainer
  • K3sContainer
  • KindContainer

Although ApiServerContainer focuses on providing only a small part of the Kubernetes control plane, namely the Kubernetes API server, K3sContainer and KindContainer launch complete single-node Kubernetes clusters in Docker containers. 

This allows for a trade-off depending on the requirements of the respective tests: If only interaction with the API server is necessary for testing, then the significantly faster-starting ApiServerContainer is usually sufficient. However, if testing complex interactions with other components of the Kubernetes control plane or even other operators is in the scope, then the two “larger” implementations provide the necessary tools for that — albeit at the expense of startup time. For perspective, depending on the hardware configuration, startup times can reach a minute or more.

A first example

To illustrate how straightforward testing against a Kubernetes container can be, let’s look at an example using JUnit 5:

@Testcontainers
public class SomeApiServerTest {
  @Container
  public ApiServerContainer<?> K8S = new ApiServerContainer<>();

  @Test
  public void verify_no_node_is_present() {
    Config kubeconfig = Config.fromKubeconfig(K8S.getKubeconfig());
    try (KubernetesClient client = new KubernetesClientBuilder()
           .withConfig(kubeconfig).build()) {
      // Verify that ApiServerContainer has no nodes
      assertTrue(client.nodes().list().getItems().isEmpty());
    }
  }
}

Thanks to the @Testcontainers JUnit 5 extension, lifecycle management of the ApiServerContainer is easily handled by marking the container that should be managed with the @Container annotation. Once the container is started, a YAML document containing the necessary details to establish a connection with the API server can be retrieved via the getKubeconfig() method. 

This YAML document represents the standard way of presenting connection information in the Kubernetes world. The fabric8 Kubernetes client used in the example can be configured using Config.fromKubeconfig(). Any other Kubernetes client library will offer similar interfaces. Kindcontainer does not impose any specific requirements in this regard.

All three container implementations rely on a common API. Therefore, if it becomes clear at a later stage of development that one of the heavier implementations is necessary for a test, you can simply switch to it without any further code changes — the already implemented test code can remain unchanged.

Customizing your Testcontainers

In many situations, after the Kubernetes container has started, a lot of preparatory work needs to be done before the actual test case can begin. For an operator, for example, the API server must first be made aware of a Custom Resource Definition (CRD), or another controller must be installed via a Helm chart. What may sound complicated at first is made simple by Kindcontainer along with intuitively usable Fluent APIs for the command-line tools kubectl and helm.

The following listing shows how a CRD is first applied from the test’s classpath using kubectl, followed by the installation of a Helm chart:

@Testcontainers
public class FluentApiTest {
  @Container
  public static final K3sContainer<?> K3S = new K3sContainer<>()
    .withKubectl(kubectl -> {
      kubectl.apply.fileFromClasspath("manifests/mycrd.yaml").run();
    })
    .withHelm3(helm -> {
      helm.repo.add.run("repo", "https://repo.example.com");
      helm.repo.update.run();
      helm.install.run("release", "repo/chart");
    });
  // Tests go here
}

Kindcontainer ensures that all commands are executed before the first test starts. If there are dependencies between the commands, they can be easily resolved; Kindcontainer guarantees that they are executed in the order they are specified.

The Fluent API is translated into calls to the respective command-line tools. These are executed in separate containers, which are automatically started with the necessary connection details and connected to the Kubernetes container via the Docker internal network. This approach avoids dependencies on the Kubernetes image and version conflicts regarding the available tooling within it.

Selecting your Kubernetes version

If nothing else is specified by the developer, Kindcontainer starts the latest supported Kubernetes version by default. However, relying on this default is generally discouraged; best practice is to explicitly specify one of the supported versions when creating the container, as shown:

@Testcontainers
public class SpecificVersionTest {
  @Container
  KindContainer<?> container = new KindContainer<>(KindContainerVersion.VERSION_1_24_1);
  // Tests go here
}

Each of the three container implementations has its own Enum, through which one of the supported Kubernetes versions can be selected. The test suite of the Kindcontainer project itself ensures — with the help of an elaborate matrix-based integration test setup — that the full feature set can be easily utilized for each of these versions. This elaborate testing process is necessary because the Kubernetes ecosystem evolves rapidly, and different initialization steps need to be performed depending on the Kubernetes version.

Generally, the project places great emphasis on supporting all currently maintained Kubernetes major versions, which are released every 4 months. Older Kubernetes versions are marked as @Deprecated and eventually removed when supporting them in Kindcontainer becomes too burdensome. However, this should only happen at a time when using the respective Kubernetes version is no longer recommended.

Bring your own Docker registry

Accessing Docker images from public sources is often not straightforward, especially in corporate environments that rely on an internal Docker registry with manual or automated auditing. Kindcontainer allows developers to specify their own coordinates for the Docker images used for this purpose. However, because Kindcontainer still needs to know which Kubernetes version is being used due to potentially different initialization steps, these custom coordinates are appended to the respective Enum value:

@Testcontainers
public class CustomKubernetesImageTest {
  @Container
  KindContainer<?> container = new KindContainer<>(KindContainerVersion.VERSION_1_24_1
    .withImage("my-registry/kind:1.24.1"));
  // Tests go here
}

In addition to the Kubernetes images themselves, Kindcontainer also uses several other Docker images. As already explained, command-line tools such as kubectl and helm are executed in their own containers. Appropriately, the Docker images required for these tools are configurable as well. Fortunately, no version-dependent code paths are needed for their execution. 

Therefore, the configuration shown in the following is simpler than in the case of the Kubernetes image:

@Testcontainers
public class CustomFluentApiImageTest {
  @Container
  KindContainer<?> container = new KindContainer<>()
    .withKubectlImage(
      DockerImageName
        .parse("my-registry/kubectl:1.21.9-debian-10-r10"))
    .withHelm3Image(DockerImageName.parse("my-registry/helm:3.7.2"));
  // Tests go here
}

The coordinates of the images for all other containers started can also be easily chosen manually. However, it is always the developer’s responsibility to ensure the use of the same or at least compatible images. For this purpose, a complete list of the Docker images used and their versions can be found in the documentation of Kindcontainer on GitHub.

Admission controller webhooks

For the test scenarios shown so far, the communication direction is clear: A Kubernetes client running in the JVM accesses the locally or remotely running Kubernetes container over the network to communicate with the API server running inside it. Docker makes this standard case incredibly straightforward: A port is opened on the Docker container for the API server, making it accessible. 

Kindcontainer automatically performs the necessary configuration steps for this process and provides suitable connection information as Kubeconfig for the respective network configuration.

However, admission controller webhooks present a technically more challenging testing scenario. For these, the API server must be able to communicate with external webhooks via HTTPS when processing manifests. In our case, these webhooks typically run in the JVM where the test logic is executed. However, they may not be easily accessible from the Docker container.

To facilitate testing of these webhooks independently of the network setup, yet still make it simple, Kindcontainer employs a trick. In addition to the Kubernetes container itself, two more containers are started. An SSH server provides the ability to establish a tunnel from the test JVM into the Kubernetes container and set up reverse port forwarding, allowing the API server to communicate back to the JVM. 

Because Kubernetes requires TLS-secured communication with webhooks, an Nginx container is also started to handle TLS termination for the webhooks. Kindcontainer manages the administration of the required certificate material for this. 
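Kindcontainer generates and distributes the required certificate material itself, so none of this needs to be done by hand. Purely as an illustration of what TLS termination for a webhook involves (the file paths and the CN here are hypothetical, not Kindcontainer internals), a manual equivalent might look like:

```shell
# Illustration only -- Kindcontainer automates all of this. Generate a
# self-signed certificate for the webhook's DNS name; the API server would
# need to trust it, e.g. via the webhook configuration's clientConfig.caBundle.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/webhook.key -out /tmp/webhook.crt \
  -days 1 -subj "/CN=mutating.example.com"

# Inspect the subject to confirm the certificate matches the webhook name
openssl x509 -in /tmp/webhook.crt -noout -subject
```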

The entire setup of processes, containers, and their network communication is illustrated in Figure 1.

Figure 1: Network setup for testing webhooks. On the left, the JVM runs the webhook server, an SSH client, and the JUnit test; on the right, the Docker network contains the SSH server, the Nginx container, and the Kubernetes container.

Fortunately, Kindcontainer hides this complexity behind an easy-to-use API:

@Testcontainers
public class WebhookTest {
  @Container
  ApiServerContainer<?> container = new ApiServerContainer<>()
    .withAdmissionController(admission -> {
      admission.mutating()
        .withNewWebhook("mutating.example.com")
        .atPort(webhookPort) // Local port of webhook
        .withNewRule()
          .withApiGroups("")
          .withApiVersions("v1")
          .withOperations("CREATE", "UPDATE")
          .withResources("configmaps")
          .withScope("Namespaced")
        .endRule()
        .endWebhook()
        .build();
    });

    // Tests go here
}

The developer only needs to provide the port of the locally running webhook along with some necessary information for setting up in Kubernetes. Kindcontainer then automatically handles the configuration of SSH tunneling, TLS termination, and Kubernetes.

Consider Java

Starting from the simple example of a minimal JUnit test, we have shown how to test custom Kubernetes controllers and operators implemented in Java. We have explained how to use familiar command-line tools from the ecosystem with the help of Fluent APIs and how to easily execute integration tests even in restricted network environments. Finally, we have shown how even the technically challenging use case of testing admission controller webhooks can be implemented simply and conveniently with Kindcontainer. 

Thanks to these new testing possibilities, we hope more developers will consider Java as the language of choice for their Kubernetes-related projects in the future.

Learn more

]]>
KubeCon EU 2024: Highlights from Paris https://www.docker.com/blog/kubecon-eu-2024-highlights-from-paris/ Wed, 03 Apr 2024 14:10:54 +0000 https://www.docker.com/?p=53600 Are in-person tech conferences back in fashion? Or are engineers just willing to travel for fresh baguettes? In this post, I round up a few highlights from KubeCon Europe 2024, held March 19-22 in Paris.

My last KubeCon was in Detroit in 2022, when tech events were still slowly recovering from COVID. But KubeCon EU in Paris was buzzing, with more than 12,000 attendees! I couldn’t even get into a few of the most popular talks because the lines to get in wrapped around the exhibition hall even after the rooms were full. Fortunately, the CNCF has already posted all the talk recordings so we can catch up on what we missed in person.

Now that I’ve been back home for a bit, here are a few highlights I rounded up from KubeCon EU 2024.


Docker at KubeCon

If you stopped by the Docker booth, you may have seen our Megennis Motorsport Racing experience.

The KubeCon EU 2024 Docker booth featured a Megennis Motorsport Racing experience.

Or you may have talked to one of our engineers about our new fast Docker Build Cloud experience. Everyone I talked to about Build Cloud got it immediately. I’m proud of all the work we did to make fast, hosted image builds work seamlessly with the existing docker build. 

KubeCon booth visitors ran 173 identical builds using both Docker Build Cloud and GHA. Build Cloud builds took an average of 32 seconds each compared to GHA’s 159 sec/build.

Docker Build Cloud wasn’t the only new product we highlighted at KubeCon this year. I also got a lot of questions about Docker Scout and how to track image dependencies. Our Head of Security, Rachel Taylor, was available to demo Docker Scout for curious customers.

Chloe Cucinotta, a Director in Docker’s marketing org, hands out eco-friendly swag for booth visitors. Photo by Docker Captain Mohammad-Ali A’râbi.

Docker Scout and Sysdig Security Day

In addition to live Docker Scout demos at the booth, Docker Scout was represented at Kubecon through a co-sponsored AMA panel and party with Sysdig Security Day. The event aimed to raise awareness around Docker’s impact on securing the software supply chain and how to solve concrete security issues with Docker Scout. It was an opportunity to explore topics in the cloud-native and open source security space alongside industry leaders Snyk and Sysdig.

The AMA panel featured Rachel Taylor, Director of Information Security, Risk, & Trust at Docker, who discussed approaches to securing the software supply chain. The post-content party served as an opportunity for Docker to hear more about our shared customers’ unique challenges one-on-one. Through participation in the event, Docker customers were able to learn more about how the Sysdig runtime monitoring integration within Docker Scout results in even more actionable insights and remediation recommendations.

Live from the show floor

Docker CEO Scott Johnston spoke with theCUBE hosts Savannah Peterson and Rob Strechay to discuss Docker Build Cloud. “What used to take an hour is now a minute and a half,” he explained.

Testcontainers and OpenShift

During KubeCon, we announced that Red Hat and Testcontainers have partnered to provide Testcontainers in OpenShift. This collaboration simplifies the testing process, allowing developers to efficiently manage their workflows without compromising on security or flexibility. By streamlining development tasks, this solution promises a significant boost in productivity for developers working within containerized environments. Read Improving the Developer Experience with Testcontainers and OpenShift to learn more.

Eli Aleyner (Head of Technical Alliances at Docker) and Daniel Oh (Senior Principal Technical Marketing Manager at Red Hat) take a selfie at the Red Hat booth.

Eli Aleyner (Head of Technical Alliances at Docker) and Daniel Oh (Senior Principal Technical Marketing Manager at Red Hat) provided a demo and an AMA at the Red Hat booth. 

Must-watch talks

During the Friday keynote, Bob Wise, CEO of Heroku, describes a lightbulb moment when he first heard about Docker as part of his discussion about the beginnings of cloud-native.

For a long time, I’ve felt that the Kubernetes API model has been its superpower. The investment in easy ways to extend Kubernetes with CRDs and the controller-runtime project are unlocking a bunch of exciting platform engineering projects.

Here are a few of the many talks that I and other people on my team really liked, and that are on YouTube now.

Platform 

In his talk Building a Large Scale Multi-Cloud Multi-Region SaaS Platform with Kubernetes Controllers, Sébastien Guilloux (Elastic) explains how to put all the pieces together to build a multi-region platform. I loved how it takes advantage of the nice bits of Kubernetes controllers, while also questioning the assumptions about how global state should work.

Stefan Prodan (ControlPlane) gave a talk on GitOps Continuous Delivery at Scale with Flux. Flux has a strong, opinionated point of view on how CI/CD tools should interact with CRDs and the events API. There were a few different talks on Crossplane that I'd like to go back and watch. We've been experimenting a lot with Crossplane at Docker, and we like how it works with Helm and image registries in a way that fits our existing image and registry tools. 

AI 

Of course, people at KubeCon are infra nerds, so when we think about AI, we first think about all those GPUs the AIs are going to need.

There was an armful of GPU provisioning talks. I attended How KubeVirt Improves Performance with CI-Driven Benchmarking, and You Can Too, in which speakers Ryan Hallisey and Alay Patel from Nvidia talked about driving down the time to allocate VMs with GPUs. But how is AI going to fit into how we run and operate servers on Kubernetes? There was less consensus on this point, but it was fun to make random guesses about what it might look like. When I was hanging out at the AuthZed booth, I made a joke about asking an AI to write my infra access control rules, and they mostly laughed and rolled their eyes.

Slimming and debugging 

Here’s a container journey I see a lot these days:

  • I have a fat container image.
  • I get a security alert about a vulnerability in one of that image’s dependencies that I don’t even use.
  • I switch to a slimmer base image, like a distroless image.
  • Oops! Now the image doesn’t work and is annoying to debug because there’s no shell.

But we’re making progress on making this easier!

In his KubeCon talk Is Your Image Really Distroless?, Docker’s Laurent Goderre walked through how to use multi-stage builds and init containers to separate out the build + init dependencies from the steady-state runtime dependencies. 

Ephemeral containers in Kubernetes graduated to stable in 2022. In their talk, Building a Tool to Debug Minimal Container Images in Kubernetes, Docker, and ContainerD, Kyle Quest (AutonomousPlane) and Saiyam Pathak (Civo) showed how you can use the ephemeral containers API to build tooling for creating a shell in a distroless container without a shell.

Talk slide showing the comparison of different approaches for running commands in a minimal container image with no shell.

One thing that Kyle and Saiyam mentioned was how useful Nix and Nixery.dev are for building these kinds of debugging tools. We’re also using Nix in docker debug. Docker engineer Johannes Grossman says that Nix solves some problems around dynamic linking that he calls “the clash-free composability property of Nix.”

See you in Salt Lake City!

Now that we’ve recovered from the action-packed KubeCon in Paris, we can start planning for KubeCon + CloudNativeCon North America 2024. We’ll see you in beautiful Salt Lake City!

Learn more

]]>
Is Your Container Image Really Distroless? https://www.docker.com/blog/is-your-container-image-really-distroless/ Wed, 27 Mar 2024 13:25:43 +0000 https://www.docker.com/?p=52629 Containerization helped drastically improve application security by providing engineers with greater control over the runtime environment of their applications. However, a significant time investment is required to maintain the security posture of those applications, given the daily discovery of new vulnerabilities as well as regular releases of languages and frameworks. 

The concept of distroless images offers the promise of greatly reducing the time needed to keep applications secure by eliminating most of the software contained in typical container images. This approach also reduces the amount of time teams spend remediating vulnerabilities, allowing them to focus only on the software they are using. 

In this article, we explain what makes an image distroless, describe tools that make the creation of distroless images practical, and discuss whether distroless images live up to their potential.

What’s a distro?

A Linux distribution is a complete operating system built around the Linux kernel, comprising a package management system, GNU tools and libraries, additional software, and often a graphical user interface.

Common Linux distributions include Debian, Ubuntu, Arch Linux, Fedora, Red Hat Enterprise Linux, CentOS, and Alpine Linux (which is more common in the world of containers). Like most distros, these take security seriously, with teams working diligently to release frequent patches and updates for known vulnerabilities. A key challenge that all Linux distributions must face involves the usability/security dilemma. 

On its own, the Linux kernel is not very usable, so many utility commands are included in distributions to cover a large array of use cases. Having the right utilities included in the distribution without having to install additional packages greatly improves a distro’s usability. The downside of this increase in usability, however, is an increased attack surface area to keep up to date. 

A Linux distro must strike a balance between these two elements, and different distros have different approaches to doing so. A key aspect to keep in mind is that a distro that emphasizes usability is not “less secure” than one that does not emphasize usability. What it means is that the distro with more utility packages requires more effort from its users to keep it secure.

What’s a distroless image?

A distroless image is a container image with a minimal list of applications that also shares the host Linux kernel. Distroless container images also:

  • Don’t include a package manager
  • Don’t include a shell
  • Don’t include a web client (such as curl or wget)

With fewer components to exploit, distroless images limit what attackers can do if a container is compromised. This makes them a practical alternative for developers struggling with the utility and security dilemma that comes with Linux distros.

Tools for building minimal and distroless containers

Now that we’ve broken down what makes an image distroless, let’s look at how developers can actually create images that meet their security goals and minimize their attack surface. Two key tools in the Docker toolbox, multi-stage builds and BuildKit, give you precise control over what goes into your final image.

Multi-stage builds

Multi-stage builds allow developers to separate build-time dependencies from runtime ones. Developers can now start from a full-featured build image with all the necessary components installed, perform the necessary build step, and then copy only the result of those steps to a more minimal or even an empty image, called “scratch”. With this approach, there’s no need to clean up dependencies and, as an added bonus, the build stages are also cacheable, which can considerably reduce build time. 

The following example shows a Go program taking advantage of multi-stage builds. Because the Golang runtime is compiled into the binary, only the binary and root certificates need to be copied to the blank slate image.

FROM golang:1.21.5-alpine AS build
WORKDIR /
COPY go.* .
RUN go mod download
COPY . .
RUN go build -o my-app


FROM scratch
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
COPY --from=build /my-app /usr/local/bin/my-app
ENTRYPOINT ["/usr/local/bin/my-app"]

BuildKit

BuildKit, the current engine used by docker build, helps developers create minimal images thanks to its extensible, pluggable architecture. It provides the ability to specify alternative frontends (with the default being the familiar Dockerfile) to abstract and hide the complexity of creating distroless images. These frontends can accept more streamlined and declarative inputs for builds and can produce images that contain only the software needed for the application to run. 

The following example shows the input for a frontend for creating Python applications called mopy by Julian Goede.

#syntax=cmdjulian/mopy
apiVersion: v1
python: 3.9.2
build-deps:
  - libopenblas-dev
  - gfortran
  - build-essential
envs:
  MYENV: envVar1
pip:
  - numpy==1.22
  - slycot
  - ./my_local_pip/
  - ./requirements.txt
labels:
  foo: bar
  fizz: ${mopy.sbom}
project: my-python-app/

So, is your image really distroless?

Thanks to new tools for creating container images like multi-stage builds and BuildKit, it is now a lot more practical to create images that only contain the required software and its runtime dependencies. 

However, many images claiming to be distroless still include a shell (usually Bash) and/or BusyBox, which provides many of the commands a Linux distribution does — including wget — that can leave containers vulnerable to Living off the land (LOTL) attacks. This raises the question, “Why would an image trying to be distroless still include key parts of a Linux distribution?” The answer typically involves container initialization. 

Developers often have to make their applications configurable to meet the needs of their users. Most of the time, those configurations are not known at build time so they need to be configured at run time. Often, these configurations are applied using shell initialization scripts, which in turn depend on common Linux utilities such as sed, grep, cp, etc. When this is the case, the shell and utilities are only needed for the first few seconds of the container’s lifetime. Luckily, there is a way to create true distroless images while still allowing initialization using tools available from most container orchestrators: init containers.
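As a sketch of what such an initialization step looks like (the file paths, variable names, and placeholder token here are invented for illustration), an entrypoint script might render a configuration template with sed before the application starts:

```shell
#!/bin/sh
# Hypothetical init script: render a config template using utilities (a shell,
# printf, sed) that a truly distroless runtime image would not carry.
DB_HOST="${DB_HOST:-localhost}"

# A template the init image ships with; @DB_HOST@ is a placeholder token.
printf 'host = @DB_HOST@\n' > /tmp/app.conf.tpl

# Substitute the runtime value into the final config consumed by the app.
sed "s/@DB_HOST@/${DB_HOST}/" /tmp/app.conf.tpl > /tmp/app.conf
cat /tmp/app.conf
```

In an init-container setup, only this short-lived container needs the shell and sed; the application container that reads the resulting file can stay distroless.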

Init containers

In Kubernetes, an init container is a container that starts and must complete successfully before the primary container can start. By using a non-distroless container as an init container that shares a volume with the primary container, the runtime environment and application can be configured before the application starts. 

The lifetime of that init container is short (often just a couple seconds), and it typically doesn’t need to be exposed to the internet. Much like multi-stage builds allow developers to separate the build-time dependencies from the runtime dependencies, init containers allow developers to separate initialization dependencies from the execution dependencies. 

The concept of init container may be familiar if you are using relational databases, where an init container is often used to perform schema migration before a new version of an application is started.

Kubernetes example

Here are two examples of using init containers. First, using Kubernetes:

apiVersion: v1
kind: Pod
metadata:
  name: kubecon-postgress-pod
  labels:
    app.kubernetes.io/name: KubeConPostgress
spec:
  containers:
  - name: postgress
    image: laurentgoderre689/postgres-distroless
    securityContext:
      runAsUser: 70
      runAsGroup: 70
    volumeMounts:
    - name: db
      mountPath: /var/lib/postgresql/data/
  initContainers:
  - name: init-postgress
    image: postgres:alpine3.18
    env:
      - name: POSTGRES_PASSWORD
        valueFrom:
          secretKeyRef:
            name: kubecon-postgress-admin-pwd
            key: password
    command: ['docker-ensure-initdb.sh']
    volumeMounts:
    - name: db
      mountPath: /var/lib/postgresql/data/
  volumes:
  - name: db
    emptyDir: {}

- - - 

> kubectl apply -f pod.yml && kubectl get pods
pod/kubecon-postgress-pod created
NAME                    READY   STATUS     RESTARTS   AGE
kubecon-postgress-pod   0/1     Init:0/1   0          0s
> kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
kubecon-postgress-pod   1/1     Running   0          10s

Docker Compose example

The init container concept can also be emulated in Docker Compose for local development using service dependencies and conditions.

services:
  db:
    image: laurentgoderre689/postgres-distroless
    user: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data/
    depends_on:
      db-init:
        condition: service_completed_successfully

  db-init:
    image: postgres:alpine3.18
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - pgdata:/var/lib/postgresql/data/
    user: postgres
    command: docker-ensure-initdb.sh

volumes:
  pgdata:

- - - 
> docker-compose up 
[+] Running 4/0
 ✔ Network compose_default      Created                                                                                                                      
 ✔ Volume "compose_pgdata"      Created                                                                                                                     
 ✔ Container compose-db-init-1  Created                                                                                                                      
 ✔ Container compose-db-1       Created                                                                                                                      
Attaching to db-1, db-init-1
db-init-1  | The files belonging to this database system will be owned by user "postgres".
db-init-1  | This user must also own the server process.
db-init-1  | 
db-init-1  | The database cluster will be initialized with locale "en_US.utf8".
db-init-1  | The default database encoding has accordingly been set to "UTF8".
db-init-1  | The default text search configuration will be set to "english".
db-init-1  | [...]
db-init-1 exited with code 0
db-1       | 2024-02-23 14:59:33.191 UTC [1] LOG:  starting PostgreSQL 16.1 on aarch64-unknown-linux-musl, compiled by gcc (Alpine 12.2.1_git20220924-r10) 12.2.1 20220924, 64-bit
db-1       | 2024-02-23 14:59:33.191 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
db-1       | 2024-02-23 14:59:33.191 UTC [1] LOG:  listening on IPv6 address "::", port 5432
db-1       | 2024-02-23 14:59:33.194 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db-1       | 2024-02-23 14:59:33.196 UTC [9] LOG:  database system was shut down at 2024-02-23 14:59:32 UTC
db-1       | 2024-02-23 14:59:33.198 UTC [1] LOG:  database system is ready to accept connections

As demonstrated by the previous example, an init container can be used alongside a container to remove the need for general-purpose software and allow the creation of true distroless images. 

Conclusion

This article explained how Docker build tools allow for the separation of build-time dependencies from run-time dependencies to create distroless images. For example, using init containers allows developers to separate the logic needed to configure a runtime environment from the environment itself and provide a more secure container. This approach also helps teams focus their efforts on the software they use and find a better balance between security and usability.

Learn more

]]>
Docker and Kubernetes: How They Work Together https://www.docker.com/blog/docker-and-kubernetes/ Thu, 30 Nov 2023 14:41:40 +0000 https://www.docker.com/?p=49243 Docker and Kubernetes are two of the most popular technologies for containerized development. Docker is used to package applications into containers, while Kubernetes is used to orchestrate and manage those containers in production. 

Kubernetes changed how we develop and deploy containerized applications, providing a powerful orchestration platform that automates tasks such as scaling, load balancing, and self-healing. To realize the full potential of Kubernetes orchestration, your applications must be well-prepared and efficiently and securely developed from the start. That’s where Docker’s development tools come into play.


Docker is the original container engine that powers Kubernetes. Over the years, Docker’s suite of developer tools has significantly evolved to provide a comprehensive ecosystem for building, shipping, and running secure containers. Leveraging Docker’s tools with Kubernetes orchestration, developers can streamline the development process, ensure application security, and accelerate deployment.

A trusted and reliable start: Docker development tools

Having a well-developed application is essential before turning to Kubernetes orchestration. Kubernetes is a powerful tool but can be complex to configure and manage. Using Docker, teams can start with a trusted and reliable foundation for their applications, which translates to a simpler process of deploying and managing their applications with Kubernetes.

Docker helps by providing the following tools:

  • Docker Desktop: An application that provides the entire environment needed to develop and run Linux, Windows, and macOS containers locally. Docker Desktop provides a consistent and secure environment for developers to build and test their applications and automates many of the tasks needed to set up a container.
  • Docker Hub: The world’s largest and most widely used repository of container images, with more than 11 billion monthly image downloads. Developers can store and share their container images on Docker Hub, making deploying their applications to Kubernetes clusters on-premises or in the cloud easier.
  • Docker Scout: A software supply chain solution that helps build better, more trusted applications. Developers can use Docker Scout to debug and test their containerized applications locally so they can find and fix any issues earlier in the development process.
  • Docker Extensions: A set of tools and plugins that can extend existing capabilities and build new functionality into Docker Desktop. Developers can use any IDE plugin or CI/CD tool to help them better streamline workflows for building, testing, and deploying containerized applications.

A harmonious development and deployment process

Docker and Kubernetes work in harmony to create a complete ecosystem for containerized development, deployment, and management. Once developers have packaged their applications into secure containers using Docker, Kubernetes can orchestrate these containers, automating much of the work involved in managing and deploying them in production.
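As a minimal illustration of that handoff (the image name, labels, and port below are hypothetical), an image built and pushed with Docker is simply referenced from a Kubernetes Deployment manifest, and Kubernetes takes over scaling and lifecycle management from there:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                  # Kubernetes handles scaling and self-healing...
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: myorg/my-app:1.0.0   # ...of the image built and pushed with Docker
          ports:
            - containerPort: 8080
```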

The benefits organizations will see from these seamless workflows include:

  • Streamlined development: Features like automated builds, vulnerability recommendations, and consistent environments help speed up developer efficiency. They no longer have to manage multiple environments and tools, and they know certain tasks will be delivered in a consistent manner.
  • Improved application security: Developers experience fewer disruptions in production because problems are identified and fixed earlier in their build process. Additionally, using a repository like Docker Hub helps developers avoid using insecure images that could contain malware or other vulnerabilities.
  • Accelerated deployment: Because developers are more efficient and productive upfront, they deploy more rapidly against their schedule. Developers are freed to work on more complex and interesting tasks, while Docker and Kubernetes’ automation capabilities handle the more tedious work.

Conclusion

It’s not a question of how Docker and Kubernetes are different but how they best work together. By harnessing Docker’s dev tools with Kubernetes, developers can respond nimbly to market demands, streamline development processes, and ensure secure and efficient containerized applications. Ultimately, Docker and Kubernetes will improve the efficiency and productivity of the development process and reduce the risk of errors.

Want to learn more about the relationship between Docker and Kubernetes? Download our Docker with Kubernetes white paper.

Learn more

]]>