Itaú Unibanco https://www.docker.com/customer-stories/itau-unibanco/ Wed, 01 Oct 2025 16:26:56 +0000
Case Study

Itaú Unibanco Scales Securely Toward 100% Cloud with Docker as a Strategic Partner

Accelerating developer security and standardization across 480,000 repositories
logo itau unibanco

Company: Itaú Unibanco
Industry: Finance & Insurance
Headquarters: São Paulo, Brazil
Employees: 95,700+ (17,600+ in technology)
Key Technologies: Docker Business, Enhanced Container Isolation, Registry Access Management, Image Access Management, Docker Compose, LocalStack, AWS
Challenge: Secure, scalable developer environments for a cloud-first transformation

Challenges

Standardizing development in a massive organization while maintaining strict security standards

As Latin America’s largest private bank, Itaú Unibanco set a bold goal: migrate 100% of its infrastructure to the cloud by 2028. With operations in 18 countries and over 4,000 developers actively working in containers, the stakes were high and the transformation complex. Moving to the cloud would modernize the bank’s operations, improve agility, and support faster digital product delivery.


To support this transition, the bank evaluated various container tools and platforms, and the comparison surfaced several problems. Issues with developer productivity, security controls, and integration with AWS infrastructure created significant friction. Itaú’s internal Red Team identified security gaps in some alternative solutions that lacked comprehensive developer workstation configurations and access controls. Integration challenges with AWS, Itaú Unibanco’s primary cloud provider, further slowed development, causing some teams to experience delays. For a transformation of this scale, fragmented workflows and inconsistent security practices posed serious risks, including potential non-compliance and increased operational overhead.

“Docker has proven to be an effective solution for achieving the level of security and virtualization that meets our institution’s requirements,” said Lucas Polaquini, Staff Software Engineer.

Beyond security considerations, the developer experience was also being impacted. Engineers who were familiar with Docker workflows began creating custom scripts or aliasing commands to replicate functionality when testing alternatives. Environments became fragmented across teams, and container standards became inconsistent.

Faced with these challenges, the bank needed to make a strategic choice: continue with an approach that presented risks—or invest in a platform that could provide secure, scalable cloud-native development capabilities.

Solution

A strategic partnership with Docker for security, scale, and standardization

After thorough evaluation, Itaú made the strategic decision to standardize on Docker Business, positioning it not just as a development tool, but as a foundational platform supporting secure cloud transformation. The comprehensive security features, consistent developer experiences, and reliable AWS integration made Docker the best fit for aligning development velocity with Itaú’s strict compliance standards.

With the strategic partnership, the team implemented authentication controls, applied Registry and Image Access Management, and deployed pre-configured base images with Enhanced Container Isolation on developer workstations. At scale, these configurations brought standardization and security to an organization managing over 480,000 repositories.

Here’s how it works:

  • Registry Access Management restricted container image sources to approved registries.
  • Enhanced Container Isolation and Hyper-V support enforced operating system-level segmentation, significantly improving the bank’s security posture.
  • Single Sign-On (SSO) and authentication policies ensured traceable user access across environments.

By distributing pre-configured, secure base images, developers no longer needed to set up their own environments, drastically reducing misconfigurations, human error, and vulnerability exposure.
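Controls of this kind are typically enforced through Docker Desktop’s Settings Management file. The sketch below is a generic illustration: the key names follow Docker’s documented admin-settings.json format, but the values are not Itaú’s actual policy, and Registry and Image Access Management themselves are configured in the Docker Admin Console rather than in this file.

```json
{
  "configurationFileVersion": 2,
  "enhancedContainerIsolation": {
    "locked": true,
    "value": true
  }
}
```

With `"locked": true`, developers cannot switch the setting off locally, which is what makes a pre-configured base image trustworthy at fleet scale.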

“Docker’s support has always been top-notch. When we had issues with PowerShell access, Docker released a new version within a day to address our specific use case.” — Sergio Lopes, Principal Software Engineer

For Itaú, modernizing infrastructure encompasses both security and speed. Docker Compose and Docker’s integration with LocalStack enabled Itaú to virtualize entire environments, including AWS services like databases and caches, without provisioning cloud resources.

Developers can now spin up services locally, run tests faster, and reduce reliance on cloud infrastructure, saving cost and time. This also ensures test environments remain isolated from production systems, further minimizing security risk.
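A setup like this can be sketched with a Compose file that runs LocalStack alongside the application. The service names, ports, and emulated AWS services below are illustrative, not Itaú’s actual configuration.

```yaml
# docker-compose.yml — illustrative sketch only
services:
  app:
    build: .
    environment:
      # Point AWS SDK clients at the local emulator instead of real AWS
      AWS_ENDPOINT_URL: http://localstack:4566
      AWS_REGION: us-east-1
    depends_on:
      - localstack

  localstack:
    image: localstack/localstack
    ports:
      - "4566:4566"        # single edge port for all emulated AWS services
    environment:
      SERVICES: s3,dynamodb,sqs
```

Because the AWS SDKs honor `AWS_ENDPOINT_URL`, the same application code can talk to the emulator locally and to real AWS in production, keeping test environments fully isolated from production systems.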

Adoption on such a massive scale requires ongoing collaboration and partnership that transcends technology. Through targeted engagements ranging from strategic planning sessions to hands-on Tech Talks and high-impact events like Itaú Docker Day, the teams drove effective adoption and built momentum. These initiatives were instrumental in building trust and aligning on shared goals. The first Docker Day brought together many engineers, both at the bank’s headquarters and remotely, demonstrating strong organizational engagement.

Boosting Productivity with Docker Build Cloud

Docker Build Cloud testing revealed significant reductions in build time.

“We reduced local build time by 90% with Docker Build Cloud,” explains Denis Rodrigues, Staff Plus Engineer at Itaú. “It’s impressive.”

Strategic Contract Renewal and Partnership Expansion

To support their growing containerization needs and long-term cloud strategy, Itaú recently renewed their contract with Docker, securing unlimited licenses with premium support. This renewal was facilitated through a strategic partnership with Google, which provides access to complementary cloud services while delivering significant financial savings.

The combination of Docker’s containerization capabilities with Google’s cloud services has substantially enhanced team productivity, allowing existing teams to deliver more value without additional headcount and generating considerable indirect cost savings for the organization. This strategic approach demonstrates how thoughtful vendor partnerships can amplify technology investments while optimizing operational costs.

Measurable Impact

Secure Scaling and Accelerated Development

Cloud Migration back on track

About 65% of the workloads have been migrated to AWS, with Docker helping to accelerate a safe transition to the cloud.

Since bringing Docker Business back into its standard toolchain, Itaú has achieved significant technical and operational gains.

Hardened security across 4,000 developer workstations

SSO, Hyper-V, Registry Access Management, and IAM policies brought centralized governance to containerized workflows.

12,000+ repositories standardized

Using Docker Desktop and Compose, developers now work in consistent, compliant environments, reducing deployment errors and boosting code quality.

Build times reduced by 90%

Initial testing with Docker Build Cloud slashed image compilation times by 90%, significantly improving developer velocity.

Strategic cost optimization

Unlimited Docker licenses with premium support through Google partnership delivered financial savings while enabling comprehensive containerization capabilities.

Productivity amplification

The combination of Docker and Google services enhanced team output without requiring additional headcount, generating substantial indirect cost savings.

Results

A Foundation for Future-Ready Cloud Transformation

With around 65% of its infrastructure migrated and 12,000+ repositories running in secure, standardized containers, Itaú Unibanco is well on track to reach its 2028 cloud goal. The Docker-Itaú partnership is more than a tooling choice; it is a shared commitment to secure innovation at scale.

By prioritizing developer productivity, enterprise governance, and container security, Docker has helped the bank minimize operational risk while empowering thousands of developers to build securely and efficiently.

As the bank continues expanding its digital services, Docker remains a key component in its cloud-native architecture. With unlimited licenses and premium support secured through their Google partnership, the bank is well-positioned to optimize Docker’s capabilities further while maintaining cost efficiency. This strategic foundation ensures that their infrastructure remains secure, scalable, and ready for future growth while maximizing the value of their technology investments.

“Beyond the technology, Docker and Itaú collaborate to accelerate adoption and embed containerization best practices across the organization. The availability of a skilled, dedicated team is a key factor in choosing the right containerization platform.” — Pedro Ignacio, Sr. Platform Engineer, Itaú Unibanco

InCred https://www.docker.com/customer-stories/incred/ Fri, 05 Sep 2025 16:09:56 +0000
Case Study

Scaling Secure Fintech Innovation: How InCred Finance achieved scale and operational efficiency with Docker

Automating manual CI/CD cycles, achieving 10x deployment frequency, and reaching >80% spot instance utilization with Docker
logo incred

Company: InCred Finance
Industry: Fintech (NBFC)
Headquarters: India
Employees: 2,700+
Tech Stack: JavaScript, Java, Python, Kotlin, React, Angular, GitHub, Kubernetes, AWS, Docker Desktop, Docker Hub, Docker Scout

Introduction

InCred Group is a diversified financial services firm headquartered in Mumbai, with more than 3,500 employees and 170+ branches across India. Founded in 2017, InCred set out to deliver inclusive finance to underserved sectors in India. The group comprises three distinct businesses: ‘InCred Finance’, a new-age lending-focused NBFC; ‘InCred Capital’, an integrated institutional, wealth, and asset management platform; and ‘InCred Money’, an integrated B2C and B2B2C digital investment distribution platform targeting mass-affluent and retail segments. Given the complexity of its operations, operational resilience and scalability were non-negotiable.

The tech team, headquartered in Bangalore, builds and maintains an infrastructure platform that enables multiple business products with widely varying workflows and actors. Despite its lean size, this team is responsible for building and deploying over 80 microservices that serve hundreds of thousands of users across India every day.

Challenges

The Challenge: Fragmented Systems, Audit Pressure, Developer Friction

By 2019, what began as a monolith had evolved into a sprawling microservices architecture. Managing 80+ services across seven programming languages became a source of friction. Local environment mismatches, dependency conflicts, and ad hoc provisioning delayed delivery and eroded confidence.

“We started with one application that worked – until it didn’t. Managing 80+ services across 7+ languages and frameworks without containerization wasn’t just hard. It was unscalable.” – Dheeraj Arani, Head of DevOps

At the same time, security, compliance, and regulatory expectations soared. InCred Finance operates under constant scrutiny from external and internal reviews; at any point in time, at least one audit is underway. The need to quickly answer questions around image provenance, change approvals, and infrastructure access prompted the team to formalize processes.

Scaling development while meeting audit requirements and ensuring resilient delivery pipelines had become the defining challenge.

Solution

Solution Overview: Docker as the Engine for Secure Velocity

After briefly looking into alternatives, InCred chose Docker to standardize and scale its development operations. Docker’s extensive documentation, functionality, and strong community support were key to the decision. With Docker, InCred achieved targeted benefits across three key pillars:

1. Developer Productivity

Docker Desktop became the local environment of choice, enabling fast, reproducible setups for all developers. Docker Compose allowed microservices to be orchestrated in development exactly as they run in production, eliminating environment drift.

Developers could now test services locally, validate ideas instantly, and ship code that mirrors production conditions, all without depending on centralized provisioning. GitHub was tightly integrated for CI/CD, allowing every commit to flow through a Docker-based pipeline and reach production within three minutes.

“The turnaround time from commit to deployment is just three minutes on average across our 50+ backend applications. It’s a hard metric we track, and one we were able to achieve after adopting Docker.” – Dheeraj Arani, Head of DevOps
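A commit-to-production flow of this shape can be sketched as a GitHub Actions workflow. The registry, image name, and secret names below are placeholders, not InCred’s actual pipeline.

```yaml
# .github/workflows/deploy.yml — hedged sketch, names are illustrative
name: build-and-deploy
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          push: true
          tags: example-org/payments-service:${{ github.sha }}
      # A deploy step (e.g. a Kubernetes rollout) would follow here.
```

Tagging each image with the commit SHA is one common way to keep every deployment traceable back to its change, which also supports the audit requirements described above.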

2. Stronger Software Supply Chain Security & Regulatory Compliance

With Docker Business, InCred implemented centralized access controls, including SSO enforcement and audit logs, ensuring that all image and registry interactions are permissioned and traceable. Docker Scout brought embedded vulnerability scanning into Docker Desktop, allowing developers to surface and remediate dependency risks early in the development process. Local enforcement was further strengthened by Kubernetes-level OPA policies, creating an end-to-end compliance and security framework.

Hardened Docker endpoints reduced attack surfaces on developer machines, while proactive governance tools transformed InCred’s supply chain from reactive to secure-by-design. As a result, InCred has successfully passed multiple audits with zero infrastructure violations, crediting Docker’s role in enabling a trustworthy, observable, and compliant DevSecOps pipeline.

“Docker gave us more than containers. It gave us control. Now, every developer has a consistent, secure environment, and we ship confidently every day.”
Dheeraj Arani, Head of DevOps, InCred

3. Infrastructure Cost Efficiency Through Containerization

By adopting Docker and shifting to a container-native architecture on AWS, InCred has dramatically improved infrastructure efficiency. The company now runs over 80% of its cloud workloads on spot instances, a level of cost optimization rarely achieved in the Indian fintech landscape. This was made possible by Docker’s role in enabling stateless, resilient services that can seamlessly restart and scale across availability zones. With containers as the foundation, InCred designed applications for fault tolerance and elasticity—unlocking massive savings without compromising reliability or performance.

Measurable Impact

Accelerated Developer Velocity

– Automated manual environment setups, cutting the CI/CD pipeline to an average of three minutes per build, test, and deploy cycle across 80+ microservices
– Enabled developers to instantly validate ideas locally with Docker Compose
– Reduced provisioning delays with on-demand environments

10x Increase in Deployment Frequency

– Moved from 2 manual deployments/week to 10-15 automated deployments/day
– Achieved 1,200+ deployments/year, up from ~100
– Leveraged standardized workflows to eliminate CI/CD issues across 50+ microservices

Stronger Security and Compliance

– Integrated Docker Scout into dev toolchain for real-time vulnerability detection
– Instituted image and registry access controls via Docker Business Image Access Management
– Hardened developer endpoints using Docker Desktop with SSO and container isolation

Infrastructure Optimization

– Achieved 82% spot instance utilization on AWS, reducing infra costs
– Enabled multi-cloud flexibility across AWS, GCP, and Azure
– Migrated all local development to cloud-native environments

Results

Docker as the Blueprint for Resilient Fintech

For InCred, Docker became more than a development tool – it became the foundation for secure, scalable growth. From unlocking developer autonomy to meeting audit demands with confidence, Docker enables InCred to deliver secure finance at the speed of digital expectation.

InCred’s journey also showcases a thoughtful evolution. Initially on Docker Teams, they quickly recognized the need for tighter control and user management, leading to an upgrade to Docker Business. Now with built-in SSO and access management, Docker usage is regulated and secure at scale. Developers rely on Docker Scout for real-time insights into security risks during local development, and new services can be deployed faster than ever.

The roadmap ahead is equally ambitious. InCred is preparing to adopt Docker Hardened Images and Docker Build Cloud, with expectations to further reduce build times and enhance image trustworthiness. Both initiatives align with their long-term vision: to scale securely, innovate continuously, and operate with confidence in a heavily regulated market.

With Docker, InCred has created a replicable model of secure fintech modernization – one that others in the sector are now looking to emulate.

NTT DATA INTRAMART https://www.docker.com/customer-stories/ntt-data-intramart/ Tue, 12 Aug 2025 14:33:57 +0000
Case Study

5 releases a day, 10-minute setups: NTT DATA INTRAMART’s path to development velocity with Docker

logo intra mart black

Company: NTT DATA INTRAMART
Industry: Enterprise software (low-code BPM/DX platforms)
Headquarters: Tokyo, Japan
Employees: 500+
Solutions Used: Docker Desktop, Docker Hub, Docker Compose, AWS EKS
Partners: 200+ global system integrators
Customers: 10,000+ across all industries

Highlights

  • 5x daily internal deployments
  • 380 container builds/day
  • 17% failure rate and falling
  • Environment setup time cut from hours to 10 minutes

Introduction

NTT DATA INTRAMART, born from an internal venture at NTT DATA, is a recognized leader in enterprise digital transformation through its low-code platform “intra-mart.” With over 10,000 customers and the top domestic share in workflow/BPM for 18 consecutive years, NTT DATA INTRAMART delivers agility without compromising control.

Challenges

Bridging Legacy and Modern with Speed and Control

As cloud-native development and DevOps best practices became central to customer expectations, NTT DATA INTRAMART faced several urgent challenges:

  • Environment setup times stretching across hours due to complex stacks (PostgreSQL, Cassandra, Solr, HTTPd, Resin)
  • Inconsistencies across development and partner teams
  • Difficulty supporting legacy versions across multiple middleware options
  • Rising demand for secure, reproducible environments from both internal teams and over 200 partner companies

Docker was selected as the platform to standardize development workflows, simplify onboarding, and increase delivery velocity.

Solution

A Case Study in Unifying the Development Platform with Docker and Achieving DevOps Automation

In 2024, NTT DATA INTRAMART decided to consolidate its container tooling, which until then included internally introduced Podman and Rancher Desktop, into a single solution to improve operational efficiency. After running into challenges such as having to modify existing scripts and pipelines, along with issues with volume permission settings, the team chose Docker Desktop for its better compatibility and functionality. It has since become central to both internal development and partner delivery workflows.

Today, Docker is used throughout their development stack:

  • Docker Desktop powers local development
  • Docker Compose automates service orchestration across legacy middleware
  • Multi-stage builds help optimize image size
  • Docker Hub centralizes over 2,000 reusable container modules
  • AWS EC2/EKS run production workloads based on Docker
  • CI/CD pipelines integrate Docker to enable daily automated deployments
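The multi-stage builds noted above might look like the following. The Java/Maven stack is assumed from the middleware named earlier (Resin is a Java application server) and is illustrative only, not NTT DATA INTRAMART’s real Dockerfile.

```dockerfile
# Build stage: full toolchain, cached dependency resolution
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /src
COPY pom.xml .
RUN mvn -q dependency:go-offline
COPY src ./src
RUN mvn -q package -DskipTests

# Runtime stage: only the built artifact, keeping the image small
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY --from=build /src/target/app.jar /app/app.jar
CMD ["java", "-jar", "/app/app.jar"]
```

Because the runtime stage copies only the packaged artifact, compilers and build caches never reach the final image, which is what drives the size optimization the list mentions.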

Security is also front of mind. NTT DATA INTRAMART has plans to test Docker Scout with the aim to verify container images and integrate security scans into their delivery pipelines – a key move as customers increasingly scrutinize software supply chains.

Beyond technical enhancements, Docker’s standardization helped unify operations across a distributed engineering team and a global partner ecosystem. With many partners now requesting container-based environments, Docker enabled NTT DATA INTRAMART to deliver consistent, pre-configured development templates to ensure a shared foundation for building and deploying applications.

Docker’s compatibility with their existing tools, including CI/CD platforms and AWS infrastructure, eliminated the need for retraining or re-architecting. 

“Before we incorporated containers into our DevOps environment, we faced challenges such as having to perform builds manually, which resulted in a lack of progress in terms of automation. However, after implementing containers and continuing to refine our internal processes over a long period of time, we have definitely seen positive, quantifiable results.”
— Jun Enomoto, Chief Technical Lead, NTT DATA INTRAMART

Measurable Impact

Docker has helped NTT DATA INTRAMART improve every key DORA metric while building a more secure, scalable engineering culture:

Deployment Frequency

From periodic builds to 5 internal releases/day

Lead Time for Changes

Reduced drastically via automated, Dockerized environments

Change Failure Rate

Down to 17% with better reproducibility – on par with elite teams

Build Velocity

Now averaging 380 Docker-based builds/day across 2,000+ modules

Onboarding Time

Reduced from 1–2 days to ~1 hour

Environment Setup

From multiple hours to 10 minutes with no manual work

Docker enabled developers to work in fully standardized environments regardless of location or hardware. Onboarding a new engineer no longer involved hours of manual configuration, troubleshooting local differences, or relying on outdated documentation. Instead, developers can spin up a fully configured environment with a single command, allowing them to focus immediately on contributing code.

Even for teams supporting multiple middleware stacks and versions, an ongoing necessity given the broad enterprise customer base, Docker has been instrumental in improving reproducibility. When bugs are reported, engineers can quickly spin up the relevant stack and replicate the issue, vastly reducing the time required for diagnosis and resolution.

Security visibility also improved. Docker enabled NTT DATA INTRAMART to better trace, verify, and isolate container images. This has built greater confidence in their DevSecOps practices, and gave NTT DATA INTRAMART the confidence to test new features that will further improve their security practices.

Docker also streamlined feature development. Teams can now rapidly test across middleware configurations by swapping containers, rather than rebuilding environments from scratch. This has helped accelerate cycle times for complex feature validations and integrations.

Results

Operational Transformation Beyond Development

The benefits of Docker go beyond just developers. NTT DATA INTRAMART’s support, QA, and partner enablement teams also experienced significant gains. Dockerized environments are now being shared with over 200 partner companies, many of whom had previously required extensive onboarding and troubleshooting support.

By delivering pre-packaged, Docker-based environments to partners, NTT DATA INTRAMART ensures that external teams can run consistent, supported configurations. This has both reduced inbound support load and shortened customer delivery timelines. Templates are maintained centrally, making updates and patches easy to propagate.

Additionally, Docker has become a core element of knowledge sharing within NTT DATA INTRAMART. Internal documentation, wikis, and onboarding programs have been updated to reflect containerized best practices, and 2025 will see a formal training program for 30+ engineers (10 new grads + 20 experienced staff). These efforts are helping to deepen internal capability and create a shared operational language across teams.

NTT DATA INTRAMART is also preparing to expand Docker’s reach through templated offerings and paid setup services—turning an internal investment into a potential new value-add for enterprise customers.

A Foundation for Secure, Scalable Innovation
Docker has transformed how NTT DATA INTRAMART builds, tests, and delivers software—from laptops to cloud clusters. With faster onboarding, more consistent environments, and visibility into container security, the company is now positioned to scale its partner network and enhance internal productivity.

As they continue to modernize their delivery models and expand their partner footprint, Docker will remain essential for increased software supply chain security, and the standardization of partner-facing deployment blueprints.

With Docker, NTT DATA INTRAMART is not just transforming its own engineering organization. It’s enabling a new generation of enterprise agility – secure, automated, and ready for scale.

How Siimpl Reduced Build Times by 90% with Docker Build Cloud and GitHub Actions https://www.docker.com/customer-stories/siimpl/ Thu, 13 Feb 2025 20:03:12 +0000
Case Study

How Siimpl Reduced Build Times by 90% with Docker Build Cloud and GitHub Actions

logo siimpl color

About Siimpl: Siimpl delivers efficient, high-quality software solutions for forward-thinking businesses. We specialize in microservice architecture, API design, and automation, with expertise in modern cloud and on-prem environments. Our world-class developers excel in continuous delivery and integration of infrastructure, with proficiency in a multitude of technologies and platforms. With a core team of former Microsoft and Amazon engineers, we offer the agility of a startup and the experience of a large firm, ensuring seamless integration and innovative solutions.

Learn more at siimpl.io or contact solutions@siimpl.io

Highlights

  • 90% faster build time: Achieved by replacing Docker Buildx and QEMU emulation with Docker Build Cloud and self-hosted GitHub runners, reducing multi-architecture build times by 90% compared to the previous configuration.
  • 50% reduction in lead time: Improved DORA metrics for Lead Time for Changes and Time to Restore by approximately 50%.
  • 30% faster MTTD & MTTR: Shortened Mean Time to Detect (MTTD) and Mean Time to Resolve (MTTR) by around 30%, with potential improvements up to 50-60% through further optimization.


“Docker Build Cloud helped us cut build times by 90%. Before, we were stuck dealing with QEMU emulation, which slowed everything down, but now, it’s a night-and-day difference.” – Neal Patel, Siimpl

Introduction

A leading East Coast-based cybersecurity company faced severe performance bottlenecks as it scaled, which threatened its ability to meet growing client demands. Siimpl, a cloud-first solutions provider, assisted the cybersecurity company in implementing solutions to improve its DORA (DevOps Research and Assessment) metrics, such as Lead Time for Changes and Time to Restore.

The cybersecurity firm, known for protecting digital assets, faced several operational inefficiencies. The root of these challenges lay in outdated infrastructure and misaligned development environments. As the company grew, its systems struggled to handle increasing demand, leading to disjointed environments, slow build processes, and unreliable incident response mechanisms that were not designed for the complexity of their expanding operations.

The journey from identifying problems to implementing solutions highlights strategic innovations and technical expertise. Neal Patel from Siimpl explains, “We managed to turn these obstacles into stepping stones, enhancing our overall performance and stability.”

Challenges

Overcoming operational hurdles in cybersecurity

Efficient development and deployment
One major issue was more synchronization between developers’ environments and the CI/CD pipeline. This disconnect made it difficult for engineers to confidently test changes, resulting in a slow and cumbersome deployment process. The deployment process was slow, with each deployment taking up to several hours to complete. Engineers often faced delays due to testing inconsistencies across development environments, causing bottlenecks in getting code from development to production. As a result, development, QA, and operations teams were all affected, leading to frustration and reduced productivity.

The misalignment negatively affected key DORA metrics, notably the Lead Time for Changes and Time to Restore, which are critical for measuring development efficiency. The company needed a seamless and consistent environment across development, testing, and production stages to streamline its workflow and boost productivity.

Reliable incident response
Another significant challenge was the inability to quickly roll back to stable versions when deployments failed. The company’s existing container infrastructure was not optimized for efficient rollbacks: without an automated container versioning system, reverting to stable builds required manual intervention, slowing incident response and extending downtime.

In the chaotic moments following a deployment problem, the team struggled to revert to previous stable versions swiftly, which threatened their goal of achieving 99.99% uptime. Establishing a robust infrastructure for quick and efficient rollbacks was essential to maintaining system reliability and minimizing downtime.

Comprehensive telemetry
The third challenge revolved around telemetry and observability. Despite efforts from the Site Reliability Engineering (SRE) team to implement telemetry collection and publishing, the tools in place were ultimately not effective due to low adoption. The system relied on fragmented and outdated tools that required too much manual setup, discouraging developers from fully integrating them into their workflows.

Consequently, the company faced delays in detecting and resolving issues, increasing risk for their business and clients. To address this, they needed to standardize telemetry configuration and simplify the setup of auto-instrumentation libraries. This would improve developer experience and enable actionable alerts to reduce Mean Time to Detect (MTTD) and Mean Time to Resolve (MTTR).

Solution

The right tools for optimized operations

To address the cybersecurity company’s challenges, Siimpl implemented strategic solutions centered around Docker Build Cloud and GitHub Actions. These targeted interventions streamlined development workflows, reduced build times, stabilized incident response reliability, and improved telemetry and observability across the organization.

CI/CD configuration with self-hosted GitHub runners and Docker Build Cloud

Initially, the company’s CI/CD pipeline relied on Docker Buildx and QEMU to emulate different architectures, which significantly slowed build times. The pipeline was further improved with the adoption of Docker Build Cloud. Local builds are bottlenecked by each developer’s chip architecture, but Docker Build Cloud lets the Docker engine seamlessly delegate to remote builders, so developers get native-architecture build speeds with minimal overhead.

“With Docker Build Cloud, we don’t have to worry about local hardware holding us back. Developers can build and test natively across different architectures, and it just works,” Patel says. Find specific details on implementation in Siimpl’s GitHub repository.
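Connecting a local Docker engine to Build Cloud is a one-time setup. A minimal sketch (with `myorg/default` standing in for the team’s actual cloud builder, which the case study does not name) might look like:

```shell
# Link the local Docker engine to a Docker Build Cloud builder
# ("myorg/default" is a placeholder organization/builder name).
docker buildx create --driver cloud myorg/default

# Build a multi-platform image on native remote builders instead of
# emulating foreign architectures locally with QEMU.
docker buildx build \
  --builder cloud-myorg-default \
  --platform linux/amd64,linux/arm64 \
  --tag myorg/app:1.4.2 \
  --push .
```

With `--push`, the finished image goes straight from the remote builder to the registry, so large layers never round-trip through the developer’s machine.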

Leveraging SemVer-tagged containers for easy rollback

The unpredictable nature of deployments often required quick rollbacks. Siimpl introduced a Semantic Versioning (SemVer) tagging strategy to manage container images. This approach enabled the company to quickly revert to previous stable versions when issues arose, minimizing downtime. DevOps teams configured automated jobs using AWS CLI commands to update Amazon Elastic Container Service (Amazon ECS) services with the desired image tags, ensuring quick recovery and minimal operational disruption.
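The case study does not include Siimpl’s rollback job itself, but a hedged sketch of the pattern it describes — re-pointing an ECS service at a known-good SemVer tag via the AWS CLI — could look like the following, with all cluster, service, and image names illustrative:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Placeholders: cluster, service, and image names are illustrative.
CLUSTER="prod-cluster"
SERVICE="web-api"
STABLE_IMAGE="123456789012.dkr.ecr.us-east-1.amazonaws.com/web-api:1.4.1"

# Fetch the current task definition, swap in the stable image tag,
# and strip the read-only fields AWS adds on registration.
aws ecs describe-task-definition --task-definition "$SERVICE" \
  --query 'taskDefinition' > taskdef.json
jq --arg img "$STABLE_IMAGE" \
  '.containerDefinitions[0].image = $img
   | del(.taskDefinitionArn, .revision, .status, .requiresAttributes,
         .compatibilities, .registeredAt, .registeredBy)' \
  taskdef.json > taskdef-rollback.json
NEW_ARN=$(aws ecs register-task-definition \
  --cli-input-json file://taskdef-rollback.json \
  --query 'taskDefinition.taskDefinitionArn' --output text)

# Point the service at the rolled-back revision and wait for stability.
aws ecs update-service --cluster "$CLUSTER" --service "$SERVICE" \
  --task-definition "$NEW_ARN"
aws ecs wait services-stable --cluster "$CLUSTER" --services "$SERVICE"
```

Registering a fresh task-definition revision, rather than mutating the running one, also preserves an audit trail of every rollback.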

Configuring sidecar containers in Amazon ECS for aggregating and publishing telemetry data 

Addressing the telemetry challenges, Siimpl used Terraform modules to embed extensive configuration into the client’s infrastructure. Sidecar containers were defined in Amazon ECS task definitions to run OpenTelemetry (OTel) collectors, which aggregated and published telemetry data from application containers. This setup decoupled the telemetry collector from the runtime container, ensuring application stability even during telemetry failures.

Additionally, multi-stage builds configured in Dockerfiles were used to standardize the initialization of auto-instrumentation libraries across the client’s Node.js microservices, resulting in clean and efficient images. “We configured sidecar containers to ensure our main applications remained stable, even if telemetry systems encountered issues,” Patel says.

Task definition example
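A minimal illustration of the sidecar pattern described above (all names, images, and ports are placeholders, not Siimpl’s actual configuration) might look like:

```json
{
  "family": "web-api",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web-api:1.4.2",
      "essential": true,
      "environment": [
        { "name": "OTEL_EXPORTER_OTLP_ENDPOINT", "value": "http://localhost:4317" }
      ],
      "dependsOn": [
        { "containerName": "otel-collector", "condition": "START" }
      ]
    },
    {
      "name": "otel-collector",
      "image": "public.ecr.aws/aws-observability/aws-otel-collector:latest",
      "essential": false,
      "portMappings": [
        { "containerPort": 4317 }
      ]
    }
  ]
}
```

Marking the collector `"essential": false` is what decouples telemetry from the application: if the OTel sidecar crashes, ECS keeps the task running instead of restarting it.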

By implementing multi-stage builds, the team was able to tackle varied build processes efficiently. These Dockerfiles separated the build environment from runtime, ensuring images were clean and optimized. The process involved installing OpenTelemetry libraries during the build and copying them during runtime, providing a consistent, reliable workflow across applications.
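As a hedged sketch of that multi-stage approach for a Node.js service (the package layout and names are illustrative, and the OpenTelemetry package shown is the standard Node auto-instrumentation bundle, not necessarily the one Siimpl used):

```dockerfile
# Build stage: install dependencies, including the OpenTelemetry
# auto-instrumentation packages (names here are illustrative).
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci && npm install --save @opentelemetry/auto-instrumentations-node
COPY . .

# Runtime stage: copy only what the service needs, keeping the image lean.
FROM node:20-alpine
WORKDIR /app
COPY --from=build /app/node_modules ./node_modules
COPY --from=build /app/src ./src
# Preload the auto-instrumentation before the application entry point.
ENV NODE_OPTIONS="--require @opentelemetry/auto-instrumentations-node/register"
CMD ["node", "src/index.js"]
```

Because `NODE_OPTIONS` preloads the instrumentation, application code needs no telemetry-specific changes, which is how initialization stays standardized across services.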

Key benefits

The solutions implemented by Siimpl addressed the cybersecurity company’s challenges by introducing several key features. These features resolved immediate issues and laid the groundwork for a more efficient and robust engineering operation.

90% faster build times from switching to Docker’s native node strategy with self-hosted GitHub runners, eliminating the performance lag caused by Docker Buildx and QEMU emulation.

Instant rollbacks and minimized downtime from implementing Semantic Versioning (SemVer) for container images, enabling quick reverts to stable builds with automated recovery through AWS Command Line Interface (AWS CLI).

Increased system stability during telemetry failures from decoupling telemetry from runtime containers using Amazon ECS sidecar containers and OpenTelemetry collectors, improving system observability and health monitoring.

Cleaner and more efficient images from using multi-stage Dockerfiles, which separate build and runtime stages, standardizing auto-instrumentation across the company’s Node.js microservices.

Results

Achieving operational excellence

The solutions implemented by Siimpl resolved the key technical challenges and drove noticeable improvements across the business and development teams. These changes resulted in faster development cycles, more reliable systems, and smoother operations.

One of the most substantial impacts was the 90% reduction in local build times, achieved by switching to Docker Build Cloud remote builders and self-hosted GitHub runners. Engineers could now confidently test and deploy code across multi-architecture environments, freeing them from the previous delays caused by Docker Buildx and QEMU emulation. The accelerated builds meant development teams faced fewer bottlenecks and faster iterations in the CI/CD pipeline, translating to faster delivery of new features to customers.

Implementing Semantic Versioning (SemVer) for container images made it possible to quickly revert to stable versions during deployment issues, which had been a significant source of downtime. For the business, the ability to minimize downtime — helping the company achieve its goal of 99.99% uptime — improved service reliability and reduced the risk of negative customer impact. “With Docker’s containers, automating rollbacks became so much easier,” Patel explains. “Now, when there’s an issue, the team can recover quickly without a lot of manual work, which really cuts down on downtime.” Additionally, the AWS CLI automation further streamlined the recovery process, making the rollback strategy efficient and reliable.

Using sidecar containers and OpenTelemetry collectors, Siimpl helped improve telemetry and system observability, which was previously fragmented. Developers now had real-time insights into system health, enabling faster detection and resolution of issues. The result was a 30% reduction in Mean Time to Detect (MTTD) and Mean Time to Resolve (MTTR), with the potential for further optimization up to 50-60% through alert tuning and automation. Improved observability meant fewer incidents and faster recovery times, resulting in less customer disruption and more proactive system monitoring. “Using Docker for versioning has saved us so much time. We can roll back to stable versions almost instantly, which is important to stay at 99.99% uptime,” Patel says.

The impact of these changes is felt across the organization. Customers are happier with the pace at which feature requests are released. The product team has gained more confidence in the engineering team thanks to the improved rollback strategy and targeted alerting. Engineers are excited about the ease of instrumentation for observability and the improved build times.

About Siimpl: Siimpl delivers efficient, high-quality software solutions for forward-thinking businesses. We specialize in microservice architecture, API design, and automation and have expertise in modern cloud/on-prem environments. Our world-class developers excel in the continuous delivery and integration of infrastructure with proficiency in a multitude of technologies and platforms. With a core team of former Microsoft and Amazon engineers, we offer the agility of a startup and the experience of a large firm, ensuring seamless integration and innovative solutions.

Learn more at siimpl.io or contact solutions@siimpl.io

“Docker Build Cloud helped us cut build times by 90%. Before, we were stuck dealing with QEMU emulation, which slowed everything down, but now, it’s a night-and-day difference.”

“Docker Build Cloud’s remote builders have been incredible. Developers don’t have to worry about their local machine specs anymore—they just offload the builds to Docker, and builds are completed much faster.”

“With Docker Build Cloud, our developers can stay in their local terminals, and builds happen seamlessly without needing to push code or create a pull request. It’s all done right in Docker Desktop, which is a massive time saver.”

“One of the best things about Docker Build Cloud is that it reduces the overhead. Our team doesn’t have to wait for a GitHub Actions job to finish. Everything happens natively in Docker Desktop, which keeps the workflow fast and efficient.”

“The developer productivity gains we’ve seen by using Docker Build Cloud are massive. We’re talking about a 50-75% improvement in cycle time for builds, which is a game changer for our team.”

“The performance gains we’ve seen using Docker Build Cloud have been huge, particularly with multi-architecture builds. Developers now have access to fast, native builds without the delays caused by emulation.”

How Exodus Orbitals Simplifies Satellite and Space Prototyping with Docker https://www.docker.com/customer-stories/exodus-orbitals/ Tue, 26 Nov 2024 19:57:00 +0000 https://www.docker.com/?post_type=resources&p=65272
Case Study

How Exodus Orbitals Simplifies Satellite and Space Prototyping with Docker


Industry: Defense & Space
Location: Richmond Hill, Ontario, Canada
Company: Exodus Orbitals offers a “satellite-as-a-service” platform, allowing businesses and developers to host and run satellite applications in space. The company provides rentable satellite services, allowing users to develop, test, and deploy satellite software without needing costly hardware, thereby lowering entry barriers.

Highlights

  • Development time reduced from years to days: Docker and Exodus Orbitals cut development timelines for onboard satellite software from years to days, allowing Exodus Orbitals to deploy functional applications rapidly.
  • Significant cost savings: By eliminating the need for expensive satellite hardware during testing, Docker reduced development costs significantly, making satellite development more accessible to smaller teams and independent developers.
  • Access for non-experts: Docker’s containerization enabled developers without specialized aerospace knowledge to create functional satellite applications. This approach democratized space innovation, allowing experts from GIS, ML, and other fields to contribute effectively.

 

“Docker made it feel like we were building a web app, not space software.” — Hackathon participant

Introduction

Exodus Orbitals’ primary audience includes software developers with experience in GIS and machine learning (ML), even without prior aerospace expertise. Typically, satellite software development projects take 12 to 36 months and are closely tied to mission progress and hardware dependencies. However, with Exodus Orbitals’ software developer kit (SDK), validated by the European Space Agency (ESA), and Docker’s OS-level virtualization, the company has cut development timelines by an order of magnitude, transforming the landscape of space-tech innovation.

Satellite industry software development practices traditionally require large investments, deep technical expertise, and long development timelines. Exodus Orbitals recognized this challenge as an opportunity to democratize access to space, using Docker technology to streamline the development process. 

Simplifying satellite-specific complexities allowed non-expert developers to build, test, and deploy onboard data processing satellite software quickly and efficiently. Faster development cycles give developers more opportunities to experiment and innovate within the space industry.

Challenge

Inaccessibility of space development

High costs, technical barriers, and lengthy development timelines have traditionally characterized the development of software for the space segment of satellite missions. Only large organizations with specialized engineering teams and multimillion-dollar budgets could build and deploy such types of software applications. For smaller companies and independent developers, the complex knowledge required — ranging from satellite-specific hardware to orbital mechanics — made entry into the field nearly impossible.

The stakes were exceptionally high for Exodus Orbitals, whose mission is democratizing access to Earth observation satellite capabilities. Without a solution, Exodus Orbitals risked failing to open the satellite industry to a broader community of developers. This would stifle innovation and limit opportunities for new, agile applications that could accelerate advancements across multiple industries — from insurance and finance to agriculture and supply chain logistics as well as media and cybersecurity.

Moreover, the traditional approach came with enormous financial costs — often running into the millions — and timelines spanned several years. The lack of a streamlined development process meant small teams were excluded, and industry-wide progress was slow.

Exodus Orbitals faced a clear challenge: to scale satellite software development in a way that lowered costs, accelerated timelines, and enabled developers without specialized knowledge to contribute to space innovation.

The solution

Satellite software development with Docker

Faced with the challenge of making satellite software development more accessible, Exodus Orbitals chose Docker, a leading containerization platform, to streamline the process. Docker’s containers provided an abstraction layer that shielded developers from the satellite-specific complexities, enabling them to focus on building applications without needing expertise in satellite hardware or orbital mechanics.

Hackathons

To demonstrate the potential to simplify satellite software development, Exodus Orbitals hosted a series of hackathons. These events served as a real-world proving ground for Docker’s ability to enable developers — with no prior satellite engineering experience — to build fully functional applications in a fraction of the time previously required.

Hackathon participants used Docker’s platform to create application prototypes that process Earth observation imagery and other data quickly and effectively, directly on satellites’ onboard computers. “One of the biggest challenges was processing real satellite data, but Docker made it possible for us to work in an environment that was easy to set up and consistent across all participants,” a participant said.

The solution centered around three core components:

  • Containerization: Docker containers allowed developers to package their applications into isolated environments, ensuring that each app could run consistently across different satellite systems. This approach significantly reduced the time spent on configurations and testing.
  • Pre-built templates: Exodus Orbitals provided a library of Docker templates that mimicked satellite conditions, enabling developers to bypass initial setup phases. These templates included essential components like satellite telemetry receivers, orbit calculation libraries, and environmental control systems, reducing the need for satellite-specific knowledge. Developers could launch applications with minimal setup time, cutting the development phase from months to days.
  • Virtualized testing: Developers could simulate a satellite’s environment locally in a sandbox and run exhaustive tests before deploying to a real satellite. “We applied OpenCV libraries and the BFMatcher algorithm to extract features from satellite images, matching differences and outputting results. Docker allowed us to run this efficiently,” one hackathon participant said.
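Putting the three components together, a participant’s local test loop might have looked something like the following sketch (the image name, mount path, and environment variable are hypothetical, not part of the Exodus Orbitals SDK):

```shell
# Build the application image as it would run on the satellite's
# onboard computer (names here are illustrative).
docker build -t sat-app:dev .

# Run it against a local folder of sample Earth-observation imagery,
# standing in for the data the SDK delivers in orbit.
docker run --rm \
  -v "$PWD/sample-imagery:/data:ro" \
  -e DATA_DIR=/data \
  sat-app:dev
```

Because the same container image is what eventually ships, passing this local loop gives strong evidence the application will behave the same way on the real spacecraft.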

The implementation process was straightforward yet impactful. By eliminating the need for developers to understand satellite hardware, Docker enabled non-experts to contribute to satellite software innovation. Exodus Orbitals also ensured a reliable and efficient deployment pipeline where applications could be developed and tested locally before being deployed on satellites in orbit.

The reduction in development time made space technology more accessible to a wider community of developers across multiple industry sectors in their preferred programming languages.

The development process typically involves three key phases:

  • Hackathon development: An initial version of the app is created through a hackathon, taking just days, using Docker and the Exodus Orbitals SDK.
  • Integration and validation: The app is then finalized and integrated into the satellite vendor’s platform by Exodus Orbitals, requiring no additional code from the original developer. This phase takes 1-3 months.
  • Mission launch and operations: After the satellite mission shakedown activities are complete, the app is ready for launch and deployment.

This approach allows Exodus Orbitals to reduce the traditional development schedule by an order of magnitude (as much as 10x), transforming a multi-year project into one that takes just a few months.

These hackathons underscored Docker’s ability to lower the barriers of entry, enabling developers to create solutions in just a matter of days. By leveraging Docker’s containerization, Exodus Orbitals successfully demonstrated that even complex tasks like satellite image and signal processing could be made accessible to a broader community, unlocking new opportunities for innovation in space technology.

Key benefits

The implementation of Docker by Exodus Orbitals led to several crucial benefits that reshaped how the development of onboard processing satellite software is approached:

Reduced development time

Docker containers dramatically shortened the timelines for satellite mission software development. What previously took years now takes just days. Developers in Exodus Orbitals’ hackathons could build and deploy functional software applications for satellite onboard computers in a fraction of the time traditionally required.

Lower development costs

By leveraging Docker and Exodus Orbitals’ SDK, the overall costs of satellite software development were cut by an order of magnitude — reducing project timelines from 12-36 months to as little as 2-3 months. This allowed smaller teams and independent developers to enter the space-tech industry at a fraction of the cost traditionally required.

Accessibility for non-experts

Docker’s containerization enabled developers with little or no satellite engineering experience to build and test software applications for satellite onboard computers. By abstracting the technical complexities, Docker enabled dozens of new developers to create applications traditionally the domain of highly specialized teams.

Improved collaboration and portability

Docker containers ensured that all developers worked in a consistent environment, regardless of location. This enhanced team collaboration across multiple projects and satellites, with applications being easily shared and scaled across teams. As a result, development teams saw a 50% increase in efficiency.

Smooth testing and deployment

Docker’s portability allowed developers to test applications locally in virtual satellite environments before deploying them to real satellites. This smoothed deployments, reducing errors and deployment times by 50%.

Results & outcomes

Implementing Docker immediately benefited Exodus Orbitals, completely reshaping its satellite software development process. Development times, which once spanned years, were reduced to 2-3 months for full space missions, with initial prototypes developed in days through hackathons. By removing the need for expensive hardware during testing and simplifying the development pipeline, Exodus Orbitals slashed costs by an order of magnitude.

This approach was first tested in partnership with the European Space Agency on the OPS-SAT mission and has demonstrated its effectiveness. Currently, Exodus Orbitals is working with a key industry partner and hackathon sponsor to integrate apps from hackathon winners into operational satellite platforms.

The process of building and deploying satellite software — which traditionally took multiple years — was reduced to a few days, representing a 90% decrease in development time. In real-world hackathons, developers with no prior satellite experience created and tested fully functional applications in just 48 hours. This rapid development cycle allowed Exodus Orbitals to iterate and create at an unprecedented pace.

Using Docker’s virtual environments, Exodus Orbitals was able to remove the need for costly satellite hardware during testing. This reduced overall development costs by an estimated 25%, allowing the company to reinvest resources into further innovation.

Docker’s portability and containerization enabled Exodus Orbitals to easily scale its applications across various satellite platforms. Multiple teams could work in parallel without needing to reconfigure their environments, resulting in a 200-300% increase in development efficiency.

More than 50 developers participated in hackathons organized by Exodus Orbitals, using Docker’s pre-built templates to build satellite applications. “Using OpenCV and Docker, we were able to process satellite images and detect changes, like counting containers in port areas,” one participant said. Many developers who had no prior satellite engineering experience expressed surprise at how quickly they could develop and deploy applications.

Conclusion

Docker has fundamentally transformed how Exodus Orbitals approaches satellite software development, cutting project timelines from years to days and reducing costs by 50%. With Docker’s containerization, non-expert developers can build and deploy functional applications, thereby expanding access to the space-tech industry.

Docker will continue to play a pivotal role as Exodus Orbitals plans additional hackathons and deployments. “We’re partnering with satellite operators to eventually run these applications in space, taking the solutions from the hackathon to real-world satellite deployments,” says Dennis Silin, CEO of Exodus Orbitals. By expanding satellite software development, Exodus Orbitals is opening the doors for more industries — from disaster prevention to climate change monitoring — to use space-based applications for real-world impact.

Learn more

Exodus Orbitals actively seeks partnerships with enterprises, startups, and academic teams that have in-house software development expertise and either already take advantage of data originating from space (e.g., satellite imagery) or aspire to access extended satellite capabilities (including various instruments, data processing capabilities, and more). Interested in learning more?

“We’ve been testing our platform through a series of hackathons where we allow developers without previous experience in the space industry to create solutions that can run on satellites.”

Dennis Silin

CEO of Exodus Orbitals

“Most of the solutions used Docker and OpenCV, which is the plan we’re going forward with.”

Dennis Silin

CEO of Exodus Orbitals

“We’re partnering with satellite operators to eventually run these applications in space, taking the solutions from the hackathon to real-world satellite deployments.”

Dennis Silin

CEO of Exodus Orbitals

“It was my first time joining this kind of hackathon, and I really liked working with space technology and machine learning.”

Edie

Hackathon participant

“Using OpenCV and Docker, we were able to process satellite images and detect changes, like counting containers in port areas.”

Edie

Hackathon participant

“We applied OpenCV libraries and the BFMatcher algorithm to extract features from satellite images, matching differences and outputting results. Docker allowed us to run this efficiently.”

Pankaj

Hackathon participant

“One of the biggest challenges was processing real satellite data, but Docker made it possible for us to work in an environment that was easy to set up and consistent across all participants.”

Pankaj

Hackathon participant

How Docker IT Deploys Docker Desktop https://www.docker.com/customer-stories/docker/ Fri, 04 Oct 2024 17:28:49 +0000 https://www.docker.com/?post_type=resources&p=57887
Case Study

How Docker IT Deploys Docker Desktop


Location: San Francisco, CA with employees globally
Using Docker Business features, Docker IT securely deploys Docker Desktop to developers, support teams, and technical sellers with unique requirements.

Key highlights

  • 24-hour deployment: Docker IT quickly deployed Docker Desktop to hundreds of macOS and Windows computers in 24 hours.
  • Security compliance: Admins deployed enforcement and settings via MDM for centralized control, ensuring more secure deployments to Docker employees.
  • Visibility with Insights Dashboard: The Docker Desktop Insights Dashboard provided better visibility into version data and container usage, building deeper understanding and more effective policy management.

 

“From setup to deployment, it took 24 hours. We started on a Monday morning, and by the next day, it was done.” — Jeffrey Strauss, Head of Docker IT.

Introduction

At Docker, we’re constantly looking for ways to improve how we develop and deploy applications. As a leading company in containerization technology, we strive to simplify complex workflows, enhance the developer experience, and accelerate innovation.

With a core team of eight admins and engineers, the Docker IT department manages everything from infrastructure and network security to ensuring all employees have the correct software installed. However, our unique circumstances require deploying Docker Desktop across various non-engineering teams for customer demonstrations, support, and detailed documentation. While our previous methods were functional, they involved considerable manual effort, making us eager to adopt a more streamlined and user-friendly approach to improve security and user management.

Recently, we improved the capabilities for managing Docker Desktop deployments and implemented them across our organization. As soon as the combined tools were available internally, our IT team refined our deployment processes. We transitioned from `registry.json` files to registry keys and the new MSI installer for Windows, and configuration profiles with the upcoming PKG installer for macOS. With this transition, we simplified deployment and provided better control to our administrators. Alongside these tools, the Docker Desktop Insights Dashboard provided critical data and visibility, improving management across the board.

The introduction of these features — some already available and others on the roadmap — was another step toward improving our internal workflows and living up to our core values of innovation and ease of use. “It’s about making the process, stability, and usability of the deployment easier and more efficient for everyone involved,” Jeffrey Strauss, Head of Docker IT, explains.

The opportunity

Enhancing Docker Desktop management

Currently used internally and in preview with select customers, the Insights Dashboard offers admins comprehensive telemetry about how their teams are using Docker. When organizations enforce login, they unlock the full potential of Docker Desktop, gaining complete visibility into its usage across their teams. The Insights Dashboard surfaces this data in an intuitive way to empower organizations to make informed decisions. For example, admins now have a full picture of which versions are installed, so they can make informed choices regarding updates, resource allocation, and compliance.

Previously, the Docker IT team used `registry.json` files to ensure users were logging into the Docker organization while using Docker Desktop for security and compliance purposes. Although this method was functional, it required additional effort and was not especially user-friendly. “Deploying .json files can be more cumbersome because they might require customization prior to deployment,” Strauss explains. Recognizing this, we aimed to find a more efficient and streamlined approach to Docker Desktop management.

Our existing mobile device management (MDM) tools managed Docker Desktop through configured installation flags and managed enterprise login. This process, especially on Windows systems, required additional steps and dependencies. “There are a couple of little nuances, particularly on the Windows side, where you have to install some dependencies before deployment,” Strauss says.

With the release of Docker Desktop 4.34, the MSI Installer and new login enforcement alternatives became generally available. These enhancements and upcoming features in the Docker roadmap aim to streamline administration, improve security, and enhance the user experience for Docker Business subscribers. 

The Docker Desktop MSI Installer assists with mass deployments and customizations using standardized silent install parameters. Additionally, the updated login enforcement features help enterprises of all sizes increase user logins, simplify administration, and reduce learning curves for IT administrators.
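For illustration, a silent deployment can be driven with standard `msiexec` parameters; the `ALLOWEDORG` property is documented for the Docker Desktop installer, though the exact flags an organization needs may vary:

```shell
msiexec /i "DockerDesktop.msi" /L*V ".\msi.log" /quiet /norestart ALLOWEDORG="docker.com"
```

MDM tools such as Intune typically wrap a command like this when pushing the package fleet-wide, which is what makes a 24-hour rollout feasible.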

These updates provided an opportunity to refine our Docker deployment processes further. “The goal was to reduce administrative overhead, improve usage tracking, and integrate seamlessly with our MDM tools. By proactively addressing these areas, we improved our internal workflows and positioned ourselves to better support our customers in doing the same,” explains Steven Novick, Docker Principal Product Manager.

The solution

Refining Docker Desktop deployment

We implemented a new, streamlined solution to capitalize on the opportunity to improve Docker Desktop management. We transitioned from using `registry.json` files to registry keys for Windows, configuration profiles for macOS, and MSI and PKG installers for easier deployment.

Steps taken to deploy Docker Desktop in under 24 hours:

  • Packaged Docker Desktop (DD) for deployment across macOS and Windows devices.
  • Set up smart groupings within MDM to detect where DD was installed.
  • Packaged DD for streamlined deployment and management across macOS and Windows using Intune and Jamf, ensuring all devices had DD installed and users were logged in with the Docker organization account.

This change allowed us to give better control to administrators and simplify the deployment process. “The switch to registry keys and configuration profiles gives a little bit more control to the administrators and removes a little bit of control from the users, even when they have admin access to machines,” Strauss says.

The implementation process was straightforward and efficient. We communicated the changes early and often through Slack and email to ensure everyone was well-informed. The actual deployment was completed within 24 hours. “From setup to deployment, it took 24 hours. We started on a Monday morning, and by the next day, it was done,” Strauss says.

Key features of our new solution include:

  • Docker Desktop Insights Dashboard: Combined with enforced login, this new feature offered visibility into version installations, image pushes and pulls, build stats, and more, allowing us to drive better development practices beyond version upgrades.
  • Enforced login using registry keys (Windows) and configuration profiles (macOS): This provided centralized control and compliance with security policies.
    “The value of the login enforcement isn’t the focus since it’s been possible for a long time. What’s more important is how easy it is to do now,” Strauss says.
  • Seamless integration with MDM tools and new installer packages: We integrated with Microsoft Intune for Windows and Jamf for Mac, simplifying the deployment process and reducing administrative effort.
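On Windows, the enforced-login mechanism is a policy registry key; a sketch of setting it from an elevated prompt (the organization name `docker` matches this case study, but substitute your own) looks like:

```shell
reg add "HKLM\SOFTWARE\Policies\Docker\Docker Desktop" /v allowedOrgs /t REG_MULTI_SZ /d "docker"
```

On macOS, the equivalent is a configuration profile carrying the same allowed-organizations value, which Jamf can distribute. Because both live in admin-controlled locations, users cannot simply edit a local file to bypass sign-in, even with admin access to their machines.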

Throughout the implementation, we faced and addressed several unique situations at Docker. Key considerations included ensuring seamless updates without disrupting users and managing exceptions for specific configurations, such as authenticating during testing. “Because our customer success team or engineering needed to work on multiple versions of Docker Desktop, we’ve had to do things like create opt-out scenarios where users can go and opt-out using some of our tooling,” Strauss says.

Testing the new deployment method, transitioning to MSI and PKG files, and establishing an opt-out process for users were critical milestones. “When we first tested a release candidate sent to us as a PKG file, I breathed a sigh of relief because it’s so easy to deploy,” Strauss says.

Key benefits

Implementing our refined solution for Docker Desktop management has delivered several key benefits, enhancing our internal processes and positioning us to support our customers better.

Improved visibility with Insights Dashboard

Our new Insights Dashboard provides detailed data on Docker usage, ensuring all our users are connected to our organization. This feature offers clear visibility into usage patterns, aiding in better decision-making.

Efficient deployment

We drastically improved deployment efficiency by transitioning to registry keys, configuration profiles, and MSI and PKG installers. We managed to deploy Docker Desktop to hundreds of computers within 24 hours.
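That installer-based deployment can be sketched as two one-liners pushed from the MDM tools; the ALLOWEDORG property follows Docker's MSI installer documentation, while the file names and organization name are placeholders:

```
:: Windows (Intune): silent MSI install, restricted to one Docker organization
msiexec /i "DockerDesktop.msi" /qn /norestart ALLOWEDORG="example-org"

# macOS (Jamf): standard PKG install
sudo installer -pkg "Docker.pkg" -target /
```

Because both installers accept standard silent-install flags, they slot into existing MDM deployment scripts without Docker-specific tooling.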

Enhanced security

The new solution has strengthened our security posture. Enforcing login combined with Single Sign-On (SSO) and System for Cross-domain Identity Management (SCIM) ensures centralized control and compliance with security policies. “With the new solution, deployment was simpler and tamper-proof, giving a clear picture of Docker usage within the organization,” Novick says. This centralization is crucial for maintaining secure operations.

Reduced administrative overhead

Compatibility with MDM tools like Intune for Windows and Jamf for macOS, which will be available to Docker customers soon, has streamlined management tasks. This simplification has significantly cut down on administrative work.

Seamless user experience

Docker IT prioritized a smooth user transition by communicating changes early and transparently. This proactive approach minimized disruptions and ensured users were well-prepared for the updates.

Results

Transitioning to registry keys, configuration profiles, and MSI and PKG installers facilitated faster deployment times and minimized administrative efforts with Docker Desktop. This change yielded more streamlined management, making operations more straightforward and secure.

Adopting SSO and SCIM fortified our security infrastructure. These integrations ensure stringent adherence to security protocols, enhancing overall operational security. And, our newly implemented Insights Dashboard offers comprehensive analytics on Docker utilization, significantly improving resource distribution and management decisions.

We are committed to continuous improvement and innovation in Docker Desktop management. Check out new Docker Desktop releases to gain access to these new features. By staying ahead of the curve, we aim to maintain our leading edge in technology deployment and support our customers in achieving their goals with Docker.

Learn more

“Our communication strategy is always socialized early, often, and transparently.”

Jeffrey Strauss

Head of Docker IT

“At Docker, we place a lot of focus on getting internal testing right and making it a priority because we are proud to be at the industry-leading company for containers.”

Jeffrey Strauss

Head of Docker IT

“It’s also a very important aspect for me personally that IT has some kind of influence in driving top-line revenue.”

Jeffrey Strauss

Head of Docker IT

“The switch to registry keys and configuration profiles gives a little bit more control to the administrators and removes a little bit of control from the users, even when they have admin access to machines.”

Jeffrey Strauss

Head of Docker IT

“The value of the login enforcement isn’t the focus since it’s been possible for a long time. What’s more important is how easy it is to do now.”

Jeffrey Strauss

Head of Docker IT

“When we first tested a release candidate sent to us as a PKG file, I breathed a sigh of relief because it’s so easy to deploy.”

Jeffrey Strauss

Head of Docker IT

“From setup to deployment, it took 24 hours. We started on a Monday morning, and by the next day, it was done.”

Steven Novick

Principal Product Manager

“By enforcing login, we can see who is using Docker within the company. With our upcoming Insights Dashboard, we get additional data on how people and teams are using Docker.”

Steven Novick

Principal Product Manager

“Once the policy is pushed, the next time they open Docker, they must log into the Docker Business Subscription. With SSO and SCIM enabled, it’s seamless.”

Steven Novick

Principal Product Manager

“Our IT team is committed to ensuring everyone is up-to-date by pushing new versions of Docker Desktop to all users within 24 hours of each release, so everyone is on the same page with the latest and most secure updates.”

Steven Novick

Principal Product Manager

“With the new solution, deployment was simpler and tamper-proof, giving a clear picture of Docker usage within the organization.”

Steven Novick

Principal Product Manager

“The goal was to reduce administrative overhead, improve usage tracking, and integrate seamlessly with our MDM tools. By proactively addressing these areas, we improved our internal workflows and positioned ourselves to better support our customers in doing the same.”

Steven Novick

Principal Product Manager

Overcoming Insurmountable Debt with Stride Conductor GenAI and Docker at a leading US e-commerce company https://www.docker.com/customer-stories/stride/ Wed, 02 Oct 2024 16:50:45 +0000 https://www.docker.com/?post_type=resources&p=57520
Case Study

Overcoming Insurmountable Debt with Stride Conductor GenAI and Docker at a leading US e-commerce company

logo stride

About Stride: Leverages cutting-edge technology and prioritizes strong values to build sustainable success for clients.
Client: A leading American e-commerce company with an annual revenue of $8 billion.

Highlights

  • 83% of linting errors were resolved, drastically reducing manual effort and improving code quality.
  • 90x faster resolution of errors, reducing time from 6 minutes to 4 seconds, significantly accelerating the development process.
  • An 80% projected cost reduction, from $300,000 to $65,000, will allow the company to reallocate resources more effectively.

 

“Stride Conductor has changed the ROI equation for us to take care of these errors, now and in the future.” — VP of Engineering, E-commerce Company

Introduction

Stride was approached by a leading American e-commerce company, dominating its sector with an annual revenue of $8 billion. Faced with a critical technical challenge, their PHP codebase had accumulated over 17,000 linting errors. These errors were not just minor nuisances; they hindered upgrades to their testing libraries, introduced security vulnerabilities, and caused system downtimes, costing the company over $1 million annually.

Describing the situation as “an insurmountable amount of tech debt,” the VP of Engineering outlined a daunting prospect: manually resolving these errors would take one developer an entire year, costing approximately $300,000. Seeking a faster, more cost-effective solution, the company turned to Stride and their proprietary multi-agent GenAI tool, Conductor, supported by Docker, to tackle this issue. Docker’s secure containerization facilitated rapid deployment and compliance with stringent security protocols, making it an ideal choice for this complex challenge. The technical debt also threatened customer satisfaction and overall business growth, making it a critical problem to address swiftly.

The Challenge

Overcoming “Insurmountable tech debt”

An $8 billion American e-commerce company faced a significant technical issue: a staggering backlog of over 17,000 linting errors in their PHP codebase. These errors presented multiple challenges:

  • Technical debt: The accumulation of linting errors severely hindered the company’s ability to upgrade its testing libraries, leading to outdated and inefficient testing processes.
  • Security vulnerabilities: The unresolved linting errors introduced vulnerabilities, exposing the system to potential security threats and risking data integrity and confidentiality.
  • System downtime: Frequent system downtimes caused by these errors had a detrimental impact on operations, with an estimated annual cost of over $1 million.
  • Resource intensity: Addressing this backlog would require one developer to work for an entire year, costing approximately $300,000.

The VP of Engineering described the situation as “an insurmountable amount of tech debt.” This technical debt threatened operational efficiency and risked customer satisfaction and overall business growth. The scale and complexity of the problem necessitated a faster, cost-effective solution. The company was considering advanced GenAI tools to tackle these issues more efficiently and brought in Stride to leverage its proprietary multi-agent GenAI tool, Conductor, supported by Docker, to find a robust solution. Without addressing these errors, the company faced escalating costs and heightened security threats in the future.

Solution

Stride introduced Conductor, their proprietary multi-agent GenAI tool, supported by Docker, to address the client’s backlog of 17,000 linting errors. The Conductor tool was specifically chosen for its advanced capabilities in automating error resolution while ensuring the client’s stringent security requirements were met.

Conductor leverages sophisticated GenAI techniques, including configurable agents, few-shot prompt engineering, ctags, and Python scripting, to efficiently resolve linting errors across millions of lines of code. This tool was tailored to fit seamlessly into the client’s existing workflow through its multi-agent workflows. These workflows informed the client’s team, allowing them to review the work in progress and customize the agents according to their preferences, standards, and success criteria. This ensured the automation process was accurate and aligned with the client’s development goals.

One of Conductor’s standout features is its ability to generate traceable and verifiable outputs. This feature was crucial for maintaining high code quality and security standards, as it allowed the client’s team to inspect and verify the automated fixes before they were fully integrated into the codebase. This transparency was vital for building trust in the automated process and ensuring the changes met the client’s rigorous standards.

To complement the capabilities of Conductor, Stride employed Docker to facilitate a smooth and secure onboarding process. By using a Dockerized snapshot of the client’s codebase, Stride could bypass the usual hurdles of system access. This containerized approach limited access to only the necessary portions of the codebase, ensuring compliance with the client’s stringent security policies. Additionally, Docker enabled rapid onboarding, allowing Stride to begin value-adding work within a day, compared to the typical week-long setup process. Docker’s containerization technology enabled Stride to create isolated environments, ensuring that only the necessary parts of the codebase were accessed, significantly enhancing security.
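A minimal sketch of such a snapshot image follows; the directory names, the phpcs linter, and the Composer setup are illustrative assumptions, not the client's actual stack:

```dockerfile
# Stage 1: resolve dev dependencies (e.g., a linter) in isolation
FROM composer:2 AS deps
WORKDIR /app
COPY composer.json composer.lock ./
RUN composer install --no-interaction --no-scripts

# Stage 2: the snapshot the agents actually see
FROM php:8.2-cli
WORKDIR /src
COPY --from=deps /app/vendor ./vendor
# Copy only the directories under review; the rest of the repo never enters the image
COPY src/ ./src/
COPY phpcs.xml ./
# PHP_CodeSniffer stands in for the client's linter here
CMD ["vendor/bin/phpcs", "--standard=phpcs.xml", "src/"]
```

Because the image contains only the copied paths, anyone running the container is structurally prevented from reaching the rest of the codebase.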

This isolation also facilitated parallel development, as multiple application versions could be worked on simultaneously without interference. For instance, Conductor identified and corrected a recurring syntax error in multiple files, significantly improving code quality and consistency. Key team members, including developers and security experts, were actively involved in customizing and implementing the solution, ensuring it met all the client’s requirements.

The combined use of Conductor and Docker addressed the immediate issue of linting errors and provided a framework for future error resolution and code maintenance. Conductor managed to resolve 83% of the linting errors, reducing the manual effort required to just 17%. The time taken to fix each error was drastically reduced from 6 minutes to 4 seconds, achieving a 90x increase in speed. This approach significantly reduced the total effort required from one developer working for a year to just eight weeks, bringing the cost down to $65,000—only 21.6% of the previously estimated manual cost.
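The headline numbers hang together arithmetically; a quick sanity check in Python, with every input taken from the figures above:

```python
# All inputs come from the case study's own figures
seconds_before = 6 * 60          # 6 minutes per error, manual
seconds_after = 4                # 4 seconds per error with Conductor
speedup = seconds_before / seconds_after            # -> 90.0, the "90x" claim

errors_total = 17_000
auto_share = 0.83                # 83% resolved automatically
remaining_manual = errors_total * (1 - auto_share)  # ~2,890 errors left for humans

cost_manual = 300_000
cost_actual = 65_000
cost_share = cost_actual / cost_manual * 100        # ~21.7%, the ~21.6% quoted

print(speedup, round(remaining_manual), round(cost_share, 1))
```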

In summary, Stride’s solution using Conductor and Docker swiftly and cost-effectively resolved the linting errors and ensured that the client’s codebase remained secure and high-quality. This innovative approach laid the groundwork for more efficient and automated error handling in the future, underscoring the transformative potential of GenAI tools in software development.

Key benefits

83% error resolution, drastically cutting manual effort.

90x speed increase, reducing error resolution time from 6 minutes to 4 seconds.

Cost reduction by nearly 80%, from $300,000 to $65,000.

Enhanced code quality and security with traceable and verifiable outputs.

Rapid onboarding and minimal disruption, thanks to Docker.

Results

The implementation of Conductor, supported by Docker, yielded remarkable results for the $8 billion American e-commerce company. The overwhelming backlog of 17,000 linting errors in their PHP codebase was significantly reduced, driving operational and financial benefits. By leveraging Docker, Stride ensured a secure and rapid onboarding process, crucial in achieving the 90x speed increase and 80% cost reduction. Docker’s ability to enable parallel development allowed the client to maintain operational continuity while resolving the linting errors.

Efficiency gains
Conductor resolved 83% of the linting errors, dramatically decreasing the manual effort required from 100% to just 17%. Its automation capability reduced the time to fix each error from 6 minutes to a mere 4 seconds, marking a 90x increase in speed. This rapid resolution enhanced the development process and minimized disruptions, enabling the team to focus on more strategic tasks.

Cost savings
The project demonstrated substantial cost efficiency. Initially, the manual resolution of the linting errors was estimated to cost $300,000, requiring a developer to work for an entire year. With Conductor, the total cost was slashed to $65,000, representing only 21.6% of the projected manual costs. This significant expense reduction allowed the company to reallocate resources toward other critical initiatives.

Operational continuity
Using Docker ensured the project adhered to the client’s stringent security protocols. By leveraging a Dockerized snapshot of the codebase, Stride facilitated a secure and efficient onboarding process. This approach allowed Stride to begin work within a day, compared to the typical week-long setup, and enabled parallel development, ensuring that the client’s ongoing projects were not disrupted.

The VP of Engineering praised Conductor’s impact, stating, “Conductor has changed the ROI equation for us to take care of these errors, now and in the future.” The successful resolution of the immediate technical debt mitigated current risks and laid a robust foundation for future error handling and code maintenance. The client is now positioned to handle similar challenges more efficiently, demonstrating the long-term value and transformative potential of integrating GenAI and Docker into their development processes.

About Stride
Founded in 2014, Stride is a software development consultancy that works closely with client teams to deliver scalable solutions. With a focus on collaboration, Stride’s engineers, designers, and product managers help solve complex technical challenges. Operating remotely with hubs in NYC and Chicago, Stride integrates easily into any team. Learn more about Stride and Docker.

“Stride Conductor has changed the ROI equation for us to take care of these errors, now and in the future.”

VP of Engineering

“The integration of Conductor into our workflow was seamless and efficient, drastically cutting down our manual efforts.”

Lead Developer

“The speed and accuracy of Stride Conductor in resolving errors have significantly improved our development process.”

CTO

“Docker’s secure and efficient onboarding process allowed us to start seeing value within a day.”

Security Lead

“This innovative approach has laid the groundwork for more efficient and automated error handling in the future.”

VP of Engineering

How a Beauty Giant Achieved 25% Cost Savings with Container-First Development https://www.docker.com/customer-stories/beauty-giant/ Tue, 01 Oct 2024 18:42:32 +0000 https://www.docker.com/?post_type=resources&p=58806
Case Study

How a Beauty Giant Achieved 25% Cost Savings with Container-First Development

logo docker blue

Top beauty company: A leading cosmetics retailer known for innovation in beauty and personal care products.
Modernization effort: Transitioned from monolithic infrastructure to modern, containerized architecture.
Competitive edge: Achieved faster deployments and cost efficiency, maintaining industry leadership.

Key highlights

  • 60% faster deployments: Reduced average deployment times by 60%, enabling the company to rapidly push updates and new features to market, maintaining a competitive edge in the fast-paced beauty industry.
  • 25% lower infrastructure costs: Through enhanced CPU and memory efficiency, Docker slashed infrastructure costs by 25%, freeing up resources for further innovation and growth.
  • 1% deployment failure rate: Deployment failure rates were reduced from 8% to 1%, ensuring consistent, high-quality service delivery.

Introduction

As a global leader in the beauty and personal care industry, this company’s growth was feeling the impact of its outdated, monolithic infrastructure. With increasing digital demands, the company faced significant challenges — deployment cycles dragged on and inconsistencies across development environments led to delays and errors. These issues threatened the company’s ability to innovate and maintain customer satisfaction and led to frustration among developers, risking the loss of top talent essential for driving future growth.

The company recognized the need for a modernized approach to address these challenges. Transitioning to a container-first development strategy with Docker, it aimed to streamline operations, reduce costs, and enhance scalability. Such a container-first approach prioritizes containerized applications from the outset, ensuring consistency and scalability across all development stages. This strategic shift resolved immediate operational inefficiencies and positioned the company for sustained growth and continued market leadership.

Challenge

Modernizing a legacy tech stack

As the company’s digital operations expanded, its aging legacy infrastructure quickly became a bottleneck, leading to significant operational inefficiencies. The core issues stemmed from inconsistent development environments, monolithic applications, and cumbersome deployment processes. These once-sufficient legacy systems had become outdated and were no longer capable of supporting the demands of a modern, fast-paced market.

The monolithic architecture also made scaling individual components challenging, resulting in frequent deployment delays and a high rate of errors. What should have taken hours stretched into days, leading to frustration within the team. The inflexibility of the legacy systems made it increasingly difficult to maintain the company’s reputation for timely product rollouts and quick responses to market trends.

These challenges were not only technical but also personal for the team members. Before Docker, developers spent a lot of time getting their local setups to match production. Teams should have been developing new features but instead lost valuable time troubleshooting. These ongoing frustrations took a toll on productivity, dampened team morale, and made it increasingly difficult to maintain a positive and productive work culture.

Scalability was another primary concern. As the number of applications and services grew, the existing infrastructure struggled to scale, compounding the deployment challenges. The system architecture for the monolithic applications was tightly connected, so even minor changes required redeployment of the entire application. This outdated process slowed updates and increased the chance of errors, putting the company at risk of falling behind competitors who could move more quickly and nimbly with a microservices approach.

The consequences of these technical challenges were far-reaching. Delayed deployments slowed innovation and threatened customer satisfaction, a critical factor in an industry driven by consumer expectations and rapid change. The company recognized that, without addressing these legacy infrastructure issues, its ability to compete and grow in an increasingly digital market was at serious risk.

Solution

Recognizing the limitations of its monolithic infrastructure, the company transitioned to a microservices architecture as part of its modernization strategy. Containers were critical to this shift, and the company chose Docker for its reliability in local development. Docker enabled developers to build consistent, portable containers, ensuring that each microservice could be developed and tested independently. This approach allowed for quicker updates, better scalability, and more reliable performance, setting the stage for continued innovation and growth.

The implementation was carried out in phases, starting with non-critical applications to minimize risk and allow teams to learn and adapt. Docker Engine was used to containerize these applications, ensuring consistent environments that drastically reduced errors and deployment delays. This phased approach allowed the company to refine the process and contributed to renewed enthusiasm among developers, who could now focus on creative problem-solving rather than tedious manual setups. The team was motivated by the opportunity to embrace new technology with more collaborative practices. As teams gained confidence, Docker Compose was introduced to simplify the management of multi-service applications and help developers collaborate more effectively across teams.
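In that spirit, a minimal compose.yaml for a multi-service application might look like the following; the service names and images are hypothetical, not the company's actual stack:

```yaml
services:
  web:
    build: ./web          # front-end service, built from a local Dockerfile
    ports:
      - "8080:8080"
    depends_on:
      - api
  api:
    build: ./api          # back-end microservice
    environment:
      DB_HOST: db         # resolved via Compose's built-in service DNS
  db:
    image: postgres:16
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

A single `docker compose up` then starts all three services with consistent networking, which is what makes the multi-service setup manageable for every developer on the team.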

The experience of adding Docker to the CI/CD process was smooth, aided by Docker’s reliable and robust documentation, established standards, and extensive integration partners. As the phased implementation progressed successfully, the company extended the microservices architecture to more critical applications, ensuring its entire digital infrastructure benefited from the newfound efficiency. Now, teams deploy several times a day without worrying about stability.

Throughout this process, the company’s development and operations teams collaborated closely, continuously refining their container-first approach to maximize the benefits of their new toolset. Engineering leaders encouraged shared ownership and promoted open communication. The result was a streamlined, scalable solution that addressed the core challenges and positioned the company to meet the demands of a rapidly evolving market.

Key benefits

The container-first approach with Docker delivered several key benefits, improving deployment speed, consistency, and developer morale.

Faster deployments with layer caching

Docker’s layer caching feature sped up builds, reducing deployment times by up to 67%. What used to take nearly an hour now only takes a few minutes. This faster process helps the team release updates quickly and work on more tasks, increasing productivity and saving money.
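Layer caching delivers those savings because Docker reuses any image layer whose inputs haven't changed, so ordering rarely-changing steps before frequently-changing ones is the key. A hypothetical Node.js Dockerfile sketch (the stack is an assumption, not the company's actual one):

```dockerfile
FROM node:20-alpine
WORKDIR /app
# Dependency manifests change rarely: this layer (and the npm ci below)
# is served from cache on most builds
COPY package.json package-lock.json ./
RUN npm ci
# Application source changes often: only the layers from here down rebuild
COPY . .
RUN npm run build
```

With this ordering, an ordinary source-code change skips dependency installation entirely, which is typically the slowest step of the build.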

Standardized environments with Docker Engine

By embracing a container-first approach, the company created consistent environments across development, testing, and production stages, significantly reducing deployment errors and leading to a smoother and more reliable process.

30% increase in developer satisfaction

The shift to modernized, containerized infrastructure significantly boosted developer morale, fostering a more collaborative work environment crucial for the company’s ability to deliver high-quality products.

Simplified management with Docker Compose

Docker Compose allowed for the orchestration of multi-container applications, streamlining the management of complex services and improving overall operational efficiency.

Increased deployment frequency

The improvements in consistency and build speed allowed the company to increase its deployment frequency from bi-weekly to multiple times daily, enhancing its responsiveness to market demands.

Optimized resource utilization

The company reduced CPU and memory usage by 50% through its container-first strategy. By deploying applications in Docker containers across several hosts, they could consolidate more instances onto each server. Containerization architecture reduced infrastructure costs by 25% and led to more efficient management of resources.

Deployment failure reduced from 8% to 1%

The standardization and simplification provided by containerization reduced the deployment failure rate from 8% to 1%, ensuring greater reliability in the company’s operations. With Docker, the testing environments are exactly like production, ensuring teams can be sure their tests are valid.

Results and outcomes

The container-first development approach profoundly transformed the company’s operational efficiency and overall business performance. One of the most significant improvements was in the deployment process, where average deployment time was reduced by 60%. This increase in speed enabled the company to push updates and new features to market much more rapidly, which was crucial for maintaining its market advantage in the fast-paced beauty and personal care industry.

Enhanced scalability and cost efficiency

The company’s scalability saw a substantial boost, with the number of Docker containers in operation expanding from 100 to more than 1,000. This growth ensured the infrastructure could comfortably handle increasing demand without sacrificing performance. In addition to enhanced scalability, the company achieved a 25% reduction in infrastructure costs, driven by a 50% increase in CPU and memory efficiency. These savings allowed for more effective resource allocation, supporting ongoing innovation and operational improvements.

Improved stability and reliability

The company saw a significant boost in the reliability of its operations, with deployment failure rates dropping from 8% to just 1%. Alongside this improvement, system uptime increased from 99% to 99.8%, significantly reducing downtime and ensuring that services were consistently available to customers. These enhancements minimized disruptions and helped the company maintain its reputation for delivering reliable, high-quality services. The combination of fewer errors and better service availability was crucial in a competitive market where reliability is critical to success.

Business and cultural impact

Beyond these quantitative improvements, adopting Docker had a broader impact on the company’s strategic objectives. The accelerated deployment times and enhanced scalability enabled the company to maintain its leadership position, ensuring it could more effectively meet customer expectations and outpace competitors. The cost efficiencies gained from optimized resource utilization provided greater financial flexibility, which the organization could reinvest in future growth and innovation initiatives.

Furthermore, the cultural benefits of implementing more modern software practices were significant. Developer satisfaction increased by 30%, reflecting the smoother, more efficient processes. This boost in morale fostered a more collaborative and productive work environment, driving further innovation and enhancing the overall work culture within the development teams.

Conclusion

The transition to a container-first development approach, executed in multiple phases, has driven substantial improvements across the company’s development and operations. By cutting deployment times by 60%, enhancing scalability, and reducing infrastructure costs by 25%, teams effectively resolved the operational challenges threatening the company’s ability to compete. These results addressed immediate issues and set the stage for long-term success in a fast-moving market.

The improvements in scalability and the significant reduction in deployment errors have fortified the company’s infrastructure, ensuring it can confidently meet growing customer demands. Additionally, a 30% increase in developer satisfaction reflects the positive cultural shift within the organization, fostering a more innovative and efficient work environment.

Adopting Docker wasn’t just about solving the current problems; it was about setting up operations for the future. The organization sees the forward-looking benefits of Docker’s implementation as a step in line with its broader growth and innovation strategy to meet market demands.

These outcomes create a solid foundation for continued growth and innovation. With Docker now integral to its operations, the company is better equipped to navigate the complexities of an increasingly digital landscape. This success ensures the company can maintain its leadership in the beauty and personal care industry, driving further innovation and responding to market demands with enhanced agility.

Embracing Docker within a container-first development strategy has addressed the company’s immediate operational needs and paved the way for sustained success, ensuring the company remains a competitive force in the industry.

Learn more

This case study was contributed by Vladimir Mikhalev, also known as Valdemar, a DevOps Consultant and Team Lead at Ataccama. With more than 20 years of experience at companies like IBM and Amazon, Vladimir’s expertise in Docker and containerization shines through. His practical insights offer a clear path for organizations to optimize their operations with modern IT practices.

Docker’s layer caching feature sped up builds, reducing deployment times by up to 67%. What used to take nearly an hour now only takes a few minutes.

The improvements in consistency and build speed allowed the company to increase its deployment frequency from bi-weekly to multiple times daily, enhancing its responsiveness to market demands.

Containerization architecture reduced infrastructure costs by 25% and led to more efficient management of resources.

The standardization and simplification provided by containerization reduced the deployment failure rate from 8% to 1%, ensuring greater reliability in the company’s operations.

One of the most significant improvements was in the deployment process, where average deployment time was reduced by 60%.

Developer satisfaction increased by 30%, reflecting the smoother, more efficient processes.

By cutting deployment times by 60%, enhancing scalability, and reducing infrastructure costs by 25%, teams effectively resolved the operational challenges threatening the company’s ability to compete.

In addition to enhanced scalability, the company achieved a 25% reduction in infrastructure costs, driven by a 50% increase in CPU and memory efficiency.

Why Bitso Returned to Docker Business: Security, Efficiency, and Developer Experience https://www.docker.com/customer-stories/bitso/ Mon, 30 Sep 2024 19:00:30 +0000 https://www.docker.com/?post_type=resources&p=59053
Case Study

Why Bitso Returned to Docker Business: Security, Efficiency, and Developer Experience

logo BITSO

Industry: Cryptocurrency exchange
Location: Latin America
Team: 250+ engineers, with 100+ onboarded in the past 8 months. Transacts monthly with more than 1,700 companies

Highlights

  • Onboarding time reduced: Switching back to Docker reduced onboarding time from two weeks to a few hours per engineer, saving an estimated 7,700 hours over those 8 months while scaling the team.
  • Cost-effective: Returning to Docker after spending almost two years with the alternative open source solution proved more cost-effective, decreasing the time spent onboarding, troubleshooting, and debugging.
  • Zero new support tickets: After transitioning back to Docker, Bitso has experienced zero new support tickets related to Docker, significantly reducing platform support burden.

 

“Docker just works. You don’t have to explain to anyone how to use it. It just does what everyone expects it to do in the way that everyone expects it to.” — Sebastian Montini, Platform Engineering Director at Bitso.

Introduction

Bitso, the leading financial services company powered by cryptocurrency in Latin America, is known for making crypto accessible, secure, and easy to use. The company offers various services, from personal digital platforms for earning returns and making payments to sophisticated blockchain-based solutions for institutional clients. In 2023 alone, Bitso processed $4.3 billion in transactions between the US and Mexico, reflecting a 60% increase in annualized volume.

To maintain its rapid growth and leadership in the cryptocurrency sector, Bitso consistently evaluates and refines its operational tools. The company’s engineering team, which has grown to more than 250 members — including 100 new hires in the past eight months — depends on the Platform Engineering team to provide the necessary tools and environments for seamless development. “For every aspect that has to do with the developer experience and the tooling needed to build software at Bitso, the engineering team relies on our organization to make that happen,” Sebastian Montini, Platform Engineering Director at Bitso, explains.

As part of this ongoing improvement, Bitso revisited its tooling strategy, focusing on security, efficiency, and the overall developer experience. The company had earlier decided to explore an alternative to Docker, as it values using the best tools available and frequently experiments with new technologies. However, the switch introduced complexity and dramatically slowed developers' daily work, particularly during onboarding, which could leave new engineers waiting for weeks.

Problem

Bitso initially relied on Docker to streamline its development processes. However, in keeping with its culture of exploring innovative tools, the team decided to experiment with an alternative solution. The move, while driven by a desire to stay ahead with cutting-edge tools, introduced unforeseen challenges, particularly on Apple silicon laptops.

Technical and operational challenges

The alternative solution introduced unexpected technical and operational hurdles for Bitso’s specific environment:

  • Compatibility issues: Engineers experienced frequent bugs and disruptions with Apple silicon laptops.
  • Complex setup process: The detailed 16-step setup often took up to two weeks, causing delays.
  • Security concerns: Non-centralized management raised security issues, particularly with single-sign-on (SSO) and SCIM integration with Okta.

“Part of the onboarding process was to go to a Confluence page with like 16 different steps to make [our previous solution] work on Apple silicon laptops,” Montini explained. The process was time-consuming and prone to frequent issues, requiring extensive support interactions.

Impact on productivity

The operational challenges were broader than onboarding delays. Engineers would frequently encounter issues, leading to ongoing interruptions and a need for continuous troubleshooting. “It was sort of self-service, but issues were being found all the time,” Montini said. Issues included:

  • Hardware compatibility: The switch to the alternative solution caused issues when the team transitioned from Intel-based MacBooks to Apple silicon laptops. Some bugs were being addressed, but not at the pace needed, leading to complaints from engineers.
  • Workarounds and support delays: Engineers had to rely on workarounds and frequently spent up to a week going back and forth with teams supporting the alternative solution to resolve issues, which added unnecessary complexity.
  • Troubleshooting overhead: Engineers spent considerable time troubleshooting and maintaining the setup, diverting focus from strategic initiatives.
  • Onboarding delays: Most of the issues surfaced during the intricate onboarding process for new engineers, which significantly slowed down the setup of their development environments and caused engineers to spend time waiting, not engineering.

“The cost of onboarding delays and slowed productivity is way more than the cost of the licensing,” Montini said.

The ongoing interruptions and continuous troubleshooting underscored the urgent need for a more streamlined and reliable solution. As Bitso prepared to scale, the alternative solution created growing inefficiencies, particularly with onboarding. It took up to two weeks per engineer to set up development environments fully. With plans to onboard more than 100 engineers in the next eight months, Bitso anticipated significant lost productivity due to long onboarding times. Returning to Docker helped avoid these inefficiencies, allowing engineers to contribute faster.

To support their rapid growth and maintain leadership in cryptocurrency, Bitso needed a faster, more efficient solution to minimize troubleshooting and streamline onboarding.

Solution

After evaluating the challenges with the alternative solution deployed in 2021 and recognizing the need for a more efficient development environment, Bitso returned to Docker in 2023.

The decision to return to Docker aimed to streamline onboarding, improve efficiency, and enhance security. Bitso implemented 194 Docker Business licenses and integrated Docker with their existing SSO solution, Okta.

The Platform Engineering team carefully planned and rolled out the transition over two weeks. Once the decision was made, deploying the 194 Docker Business licenses went smoothly, thanks to Docker's user-friendly setup.

“It just works” onboarding

Docker Desktop's intuitive interface reduced onboarding time from two weeks to just a few hours per engineer: each of the 100+ engineers onboarded to Docker Business over the last eight months was set up in hours, compared to the previous tool's two-week process.

Since switching back to Docker, Bitso has significantly reduced the time spent onboarding engineers, allowing them to start contributing to key projects within hours instead of weeks.

A crucial benefit of working with Docker is the reduction in maintenance overhead. Docker’s simplicity and reliability mean that extensive internal documentation and continuous support interactions are no longer necessary. Engineers can now quickly download and install Docker Desktop and begin work almost immediately.

“We don’t need the pages and pages of internal setup documentation anymore. We just let people know that licenses have already been purchased, and they can just go and download them,” Montini said. After downloading Docker Desktop, Bitso employees log in using Okta, automatically gaining access to Docker Business features. This eliminates complex setup procedures, allowing developers to focus on strategic work and begin contributing faster, while maintaining robust security through integration with SSO.

Secure development tools

In addition to streamlined onboarding, integrating Docker with Okta SSO significantly enhanced security and access management. Centralizing user management this way ensures compliance with regulatory standards, and the reduced maintenance overhead freed Bitso's Platform Engineering team to focus on strategic initiatives.

Key benefits of Docker

Onboarding time reduced

Switching back to Docker reduced onboarding time from two weeks to a few hours per new engineer, saving substantial time and allowing the 100+ engineers onboarded in the last eight months to become productive faster.

Improved compliance and security

Integrating Docker with Okta SSO ensured secure and streamlined access management, strengthening compliance with regulatory standards.

Reduced maintenance overhead

Docker’s simplicity and reliability eliminated the need for extensive internal documentation and continuous support interactions, allowing the Platform Engineering team to focus on strategic initiatives.

Zero new support tickets

The reduction in onboarding complexity and support requirements led to no new support tickets related to Docker Desktop, boosting productivity.

Significant cost savings

Streamlined processes and reduced maintenance needs resulted in notable cost savings, proving Docker more cost-effective than the alternative open source solution.

Improved collaboration

Docker’s industry-standard tools ensured seamless operation, regardless of engineers’ previous experience, improving collaboration and development speed.

Results and outcomes

The implementation of Docker at Bitso has yielded significant improvements, both quantitatively and qualitatively. These results underscore the effectiveness of Docker Business in enhancing Bitso’s operational efficiency, security, and developer experience.

Onboarding efficiency

One of the most immediate and measurable impacts was the drastic reduction in onboarding time. Before returning to Docker, the process could take up to two weeks; with Docker Desktop, it takes just a few hours, allowing new engineers to start contributing almost immediately. This streamlined process significantly boosted productivity and reduced the time new hires spent on setup.

No support tickets needed

Since transitioning to Docker, the Bitso Platform Engineering team has seen zero new support tickets related to Docker Desktop. This shift has allowed the team to focus on strategic projects rather than troubleshooting setup issues. Docker Desktop’s intuitive interface and standardized setup have minimized the need for extensive documentation and support interactions, freeing up valuable engineering resources.

“We have no support tickets or people complaining about things not working, which is really our end goal. So we’re happy,” Montini said.

Cost savings and maintenance

Returning to Docker has also proved more cost-effective than maintaining their alternative open source solution. The streamlined processes and reduced maintenance needs have resulted in significant cost savings. Docker’s simplicity and reliability have eliminated the need for extensive internal documentation and continuous support interactions, allowing Bitso’s Platform Engineering team to focus on more strategic initiatives.

Enhanced security and compliance

Integrating Docker with Okta SSO has significantly enhanced security and access management. This centralized approach to user management ensures that only authorized personnel have access to critical systems, bolstering compliance with regulatory standards. The integration has made security management more straightforward and efficient, contributing to overall operational stability.

“Integrating with Okta, which is heavily audited and monitored by our teams, definitely makes everyone’s lives easier,” Montini noted.

Improved developer experience

The feedback from Bitso’s developers has been overwhelmingly positive. Docker Desktop’s intuitive interface and industry-standard tools have significantly improved the developer experience. Engineers appreciate the reduced friction in their development processes, which has led to higher overall satisfaction and productivity.

Conclusion

Overall, the implementation of Docker at Bitso has been a resounding success. Developers at Bitso have reported higher satisfaction and fewer interruptions to their work processes thanks to Docker’s intuitive tools, which have helped streamline their daily development tasks and foster collaboration across teams. The significant reduction in onboarding time, elimination of support tickets, cost savings, increased productivity, and improved developer experience have greatly enhanced Bitso’s operational efficiency.

Looking ahead, Docker will continue to play a crucial role in supporting Bitso's rapid growth and innovation in the blockchain space. By providing a reliable and efficient development environment, Docker enables Bitso to maintain its leadership position and explore new expansion opportunities, scaling alongside the company as it grows.

“Docker is one of those industry standards. It just works,” Montini said.

Learn more

“We have no support tickets or people complaining about things not working, which is really our end goal. So we’re happy.”

Sebastian Montini

Platform Engineering Director, Bitso

“The cost of onboarding delays and slowed productivity is way more than the cost of the licensing.”

Sebastian Montini

Platform Engineering Director, Bitso

“Docker is one of those industry standards. It just works.”

Sebastian Montini

Platform Engineering Director, Bitso

“Integrating with Okta, which is heavily audited and monitored by our teams, definitely makes everyone’s lives easier.”

Sebastian Montini

Platform Engineering Director, Bitso

“It just works. You don’t have to explain to anyone how to use it. It just does what everyone expects it to do in the way that everyone expects it to.”

Sebastian Montini

Platform Engineering Director, Bitso

]]>
How JWP Balances Dev and Security Priorities with Docker Scout https://www.docker.com/customer-stories/jwp/ Thu, 26 Sep 2024 20:11:21 +0000 https://www.docker.com/?post_type=resources&p=58531
gray
Case Study

How JWP Balances Dev and Security Priorities with Docker Scout

logo jwp

About: JWP, a pioneer in video streaming and player technology, empowers publishers and broadcasters with an end-to-end platform for delivering and monetizing exceptional video content across web, OTT applications, and CTV platforms. Serving more than 7,000 clients globally, JWP powers video for more than 1 billion users and generates over 9 billion impressions and 8 billion video plays each month.
Industry: Software as a service — video technology.
Location: Headquartered in New York City, USA, with offices in London, Eindhoven, and Skopje

Highlights

  • Fixed vulnerabilities: Fixed thousands of vulnerabilities; improved security and efficiency.
  • Ignored noise: Ignored tens of thousands of non-critical issues; reduced noise and improved prioritization.
  • Deployed rapidly: Enabled over 400 repositories in under an hour with seamless integration and quick setup.

 

“Docker Scout has been more than just a tool for us; it’s been a strategic asset.” — Stewart Powell, Engineering Manager.

Introduction

A year ago, JWP, a global leader in video streaming, shared its initial success story with Docker Scout in a blog post. At the time, the company had enabled more than 300 repositories for Docker Scout within an hour, showcasing the ease and efficiency of integrating Docker Scout into its development workflow. This move was part of a broader strategy to enhance security without compromising delivery speed or operational efficiency.

Fast-forward to today, and JWP’s journey with Docker Scout continues to evolve. With a mission to empower their customers through monetization, engagement, and seamless video delivery, JWP’s services have facilitated the streaming of more than 860 billion videos. During the past year, Docker Scout has helped JWP fix thousands of vulnerabilities and ignore tens of thousands of non-critical issues, thereby significantly reducing noise and improving efficiency. A robust technical infrastructure, including thousands of nodes and multiple Kubernetes clusters, supports this remarkable achievement.

JWP’s journey with Docker Scout highlights the importance of adaptable security tools in modern software development. By balancing developer autonomy with centralized security oversight, Docker Scout has helped JWP maintain a secure and innovative development environment, paving the way for future advancements and continued success.

Challenge

Balancing cross-team security collaboration and prioritization

As JWP enabled Docker Scout across more than 400 repositories, the company faced the challenge of developing securely without slowing down its developers. This was further complicated by shifting security responsibilities to development teams, a strategy common among organizations aiming to empower developers.

However, this approach presented challenges, chiefly the overwhelming volume of security alerts developers had to manage. The sheer noise made it difficult for them to prioritize and address vulnerabilities effectively.

JWP needed to balance security responsibilities more evenly between its centralized security teams and development teams. Striking that balance was crucial for optimizing the time and effort of both groups while addressing JWP's specific security needs, and it demanded a strategic approach to prioritizing vulnerabilities and ensuring compliance without disrupting the development workflow. The main challenge was establishing a collaborative environment where the security team had the necessary visibility without inundating developers with alerts.

Solution

Leveraging Docker Scout for continuous vulnerability analysis

Docker Scout provided a balanced solution. It integrated seamlessly with JWP’s CI pipelines, offering real-time feedback and a centralized dashboard. This dashboard allowed the security team to oversee the entire landscape, ensuring compliance and strategic vulnerability management. 

JWP now operates a decentralized development model where each team owns its CI pipelines. Docker Scout’s centralized dashboard offers a unified view of all vulnerabilities across their container images. “The centralized dashboard has been a game-changer for us. It gives our security team the visibility and control they need without micromanaging each development team’s processes,” says Stewart Powell, Engineering Manager at JWP.

Following early adjustments, Docker Scout’s VEX (Vulnerability Exploitability eXchange) policy statements have proven invaluable in prioritizing and managing vulnerabilities effectively. These features allowed JWP’s security team to strategically prioritize vulnerabilities based on real-world risk rather than theoretical scenarios. 

This shift was significant in environments where particular vulnerabilities might exist but pose minimal risk because of how JWP's Kubernetes clusters are configured, such as never running containers as privileged or as root. "VEX statements have helped us understand and manage vulnerabilities more practically," Powell explains.
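The kind of statement described here can be expressed as an OpenVEX document, one format Docker Scout understands for exceptions. A minimal sketch follows; the CVE number, image name, author, and chosen justification are illustrative placeholders, not details from JWP's actual setup:

```python
import json

# A minimal OpenVEX document marking one CVE as not exploitable for a
# specific image. Every identifier below is a hypothetical placeholder.
vex_doc = {
    "@context": "https://openvex.dev/ns/v0.2.0",
    "@id": "https://example.com/vex/example-2024-001",
    "author": "Example Security Team",
    "version": 1,
    "statements": [
        {
            "vulnerability": {"name": "CVE-2024-0000"},
            "products": [{"@id": "pkg:docker/example/app@1.0.0"}],
            # The vulnerable code ships in the image, but the runtime
            # configuration (e.g. containers never run privileged or as
            # root) prevents exploitation.
            "status": "not_affected",
            "justification": "inline_mitigations_already_exist",
        }
    ],
}

print(json.dumps(vex_doc, indent=2))
```

Attaching a document like this to an image lets a VEX-aware scanner suppress the finding with a recorded rationale, rather than re-flagging it on every build.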

Furthermore, Docker Scout’s real-time feedback loop has significantly streamlined JWP’s workflows. Developers receive immediate feedback during the build process, so potential issues are addressed promptly. This has fostered a culture of proactive security within the development teams, who are now more receptive to feedback from the security team.
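In a decentralized model where each team owns its CI pipeline, this feedback loop is typically wired in as a build gate. A sketch using the docker/scout-action GitHub Action is below; the workflow name, image tag, and severity threshold are illustrative, and exact input names may vary between action versions:

```yaml
# Hypothetical CI job: surface CVE feedback on every push and fail the
# build when critical or high vulnerabilities are found in the image.
name: image-scan
on: push
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t example/app:${{ github.sha }} .
      - name: Scan with Docker Scout
        uses: docker/scout-action@v1
        with:
          command: cves
          image: example/app:${{ github.sha }}
          only-severities: critical,high
          exit-code: true
```

Running the scan in the pipeline means developers see findings at build time, while the centralized dashboard still aggregates results across all repositories for the security team.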

The user-centered design of Docker Scout also played a crucial role. It has helped build trust and cooperation between the security and development teams, shifting to a collaborative dynamic. The security team can now make informed decisions about vulnerabilities in context and focus on actionable insights. “Docker Scout has really improved how our teams work together,” says Powell. “It’s not just about finding vulnerabilities; it’s about understanding them in context and prioritizing what matters most.”

“Docker Scout has enabled JWP to maintain our rapid development pace while ensuring a robust security framework, ultimately supporting our mission of delivering seamless and secure video streaming experiences to our global audience. Docker Scout has been more than just a tool for us; it’s been a strategic asset,” Powell says. “It helps us deliver on our mission while keeping our systems secure and our development teams empowered.”

Key benefits of Docker Scout

Simple integration

Quick setup within Docker Hub enabled hundreds of repositories in under an hour. Docker Scout’s integration required minimal effort. “With Docker Scout, we were able to go into Docker Hub, check a box, and with virtually no effort from my team whatsoever, provide developers with a comprehensive software supply chain and image vulnerability management program,” says Powell.

Unified dashboard

Docker Scout’s dashboard provided real-time visibility into vulnerabilities, allowing the security team to prioritize effectively and improve team communication. This centralized approach reduced friction in handling security alerts.

VEX policy statements

Effective prioritization of vulnerabilities based on exploitability and context. VEX policy statements helped the security team distinguish between critical and less urgent vulnerabilities.

Real-time developer feedback

Immediate insights during image builds let developers address security issues proactively. Docker Scout delivers real-time feedback as images are built, allowing issues to be fixed on the fly, and it contextualizes vulnerabilities so the security team can focus on the most pressing ones.

Faster vulnerability resolution

The ability to identify and prioritize vulnerabilities quickly has led to faster remediation times.

Increased developer efficiency

Real-time feedback and contextual risk assessment have reduced the noise for developers, allowing them to focus on critical issues without being overwhelmed by alerts.

Enhanced security compliance

With Docker Scout, JWP has maintained compliance with security standards, such as PCI DSS Level 1, ensuring a robust security framework.

Results and outcomes

One year after integrating Docker Scout, JWP has transitioned from initial vulnerability detection and fixes to maintaining a strong, ongoing security posture. The initial rollout enabled hundreds of repositories within an hour, illustrating the tool’s efficiency and ease of adoption, and its sustained impact on JWP’s operations today highlights its effectiveness in ensuring long-term security and development efficiency.

Strengthened security posture

Docker Scout has played a pivotal role in improving JWP’s security posture. The tool offers real-time visibility into vulnerabilities across all container images through a centralized dashboard. This has enabled the security team to prioritize and address vulnerabilities more effectively, leading to a more secure environment. 

“Our security team is very competent and motivated to fix issues. They now have more context on what is fixable, what should be prioritized, and how risks should be viewed in context,” says Powell.

Enhanced team collaboration

Adopting Docker Scout has fostered better collaboration between JWP’s development and security teams. The centralized dashboard provides a unified view, facilitating clear communication and coordinated efforts to manage vulnerabilities. Development teams receive real-time feedback on container health and security, allowing them to address issues promptly. This collaboration has been vital in maintaining a high-security standard without compromising development speed.

Streamlined vulnerability management

A standout feature of Docker Scout involves the VEX policy statements, which help the security team prioritize vulnerabilities based on their exploitability and context. This information has enabled JWP to focus on critical vulnerabilities that pose real risks while managing less critical issues appropriately. “The concept of a vulnerability that exists but can’t be fixed is tricky, but VEX policy statements have gone a long way in helping us manage these effectively,” Powell notes.

Conclusion

JWP is poised to continue leveraging Docker Scout to maintain and enhance its security posture. The tool’s ability to provide real-time insights and facilitate team collaboration ensures that JWP can remain agile and responsive to emerging security threats.

“Trusting the experts to know best and moving some of that thinking back to the security team in terms of prioritizing vulnerabilities has been crucial,” Powell says. As JWP continues to evolve, Docker Scout remains a critical component in the company’s strategy to deliver secure, high-quality streaming services.

Learn more

“With Docker Scout, we were able to go into Docker Hub, check a box, and, with virtually no effort from my team, provide developers with a comprehensive software supply chain and image vulnerability management program.”

Stewart Powell

Engineering Manager, JWP

“The centralized dashboard has been a game-changer for us. It gives our security team the necessary visibility and control without micromanaging each development team’s processes.”

Stewart Powell

Engineering Manager, JWP

“The real-time feedback from Docker Scout has been invaluable. It helps our developers catch and fix issues early, making the whole process much smoother.”

Stewart Powell

Engineering Manager, JWP

“Docker Scout has really improved the way our teams work together. It’s not just about finding vulnerabilities; it’s about understanding them in context and prioritizing what matters most.”

Stewart Powell

Engineering Manager, JWP

“What’s nice about a tool like Scout is that our security team is really competent, very engaged, and very motivated to get stuff fixed. But they’ve also got a little bit more context now on what is fixable, what makes sense to prioritize, and what this risk looks like in context.”

Stewart Powell

Engineering Manager, JWP

“Part of our philosophy up to this point has been shifting the responsibility for container security to development teams. And now we’re getting back to a place where we are more focused on sharing responsibility between centralized security teams and engineering teams.”

Stewart Powell

Engineering Manager, JWP

“Initially, we focused on shifting security left to the developers. But we soon realized there needed to be a balance, and we’ve learned valuable lessons from that experience.”

Stewart Powell

Engineering Manager, JWP

“Our security team is competent and motivated to get stuff fixed. With Docker Scout, they have more context on what is fixable and what makes sense to prioritize.”

Stewart Powell

Engineering Manager, JWP

“The ability to prioritize vulnerabilities based on our specific requirements has had a significant impact on our business.”

Stewart Powell

Engineering Manager, JWP

“Getting that real-time feedback from Scout as you’re building images is great.”

Stewart Powell

Engineering Manager, JWP

“Trusting the experts to know best and moving some of that thinking back to the security team in terms of prioritizing vulnerabilities has been crucial.”

Stewart Powell

Engineering Manager, JWP

“Any bit of information that gets us closer to ensuring our systems are patched and secure keeps us closer to our objective.”

Stewart Powell

Engineering Manager, JWP

“What’s powerful about Scout is with a very high-level overview about the nature of a vulnerability, we can make decisions about security in terms of our specific operating requirements.”

Stewart Powell

Engineering Manager, JWP

“Docker Scout provides feedback beyond what’s kept in code, giving another level of visibility that’s accessible to more than just developers.”

Stewart Powell

Engineering Manager, JWP

]]>