I Knew Containers Would Win (I Just Got the Company Wrong)
Docker 1.0 shipped in June 2014. Four months earlier, I was already tweeting about curating Docker images and calling it "FAR more productive" than VMs. Sometimes you can see the winner before the race is decided.
The Thread
This was before 1.0. Before Docker was "production ready" by any official measure. Before most enterprises would touch it. I was already building infrastructure around it because the value was obvious.
Google announced Kubernetes at DockerCon in June 2014. The hyperscalers were betting on containers. The platform war was effectively over. Everyone else just hadn't figured that out yet.
Container Camp, September 2014. A sea of MacBooks — and every single person in that room who wanted to run Docker had to fire up boot2docker first, a VirtualBox VM bolted onto macOS, because containers were a Linux technology that Mac hardware couldn't run natively. Docker for Mac wouldn't exist for another two years. The irony of a room full of Apple laptops running Linux VMs to simulate Linux containers wasn't lost on me.
Why It Was Obvious
I'd been doing infrastructure for a decade before Docker existed. By 2010, I was deep into Puppet and configuration management — managing FreeBSD boxes, Windows nodes, anything I could point it at.
By 2013, the love affair with Puppet was over. Chef wasn't any better. I hated the DSLs, and the master/agent architecture was a headache at scale.
Ansible felt like config management done right: agentless, SSH-based, YAML playbooks that actually read like what they did. Red Hat ended up acquiring them in 2015 (you'll see a pattern forming here; more on that later). But even Ansible couldn't solve the fundamental problem.
The problem: How do you package an application with all its dependencies and run it consistently across environments?
The old answer: Virtual machines. Heavy, slow to start, each one a full operating system to maintain.
The slightly better answer: Configuration management. Define the state, let Puppet or Chef or Ansible make it so. Better, much better with Ansible, but you're still managing the underlying OS, dealing with dependency conflicts, and praying that your staging environment actually matches production because someone used "latest" and didn't pin a version.
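To make "define the state" concrete, here's a minimal sketch of what a playbook looked like; the host group, package, and service names are hypothetical, but the declarative shape is the point:

```yaml
# site.yml: a hypothetical playbook. You declare the desired state,
# and Ansible converges the target hosts to it over SSH, no agent required.
- hosts: webservers
  become: true
  tasks:
    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: true
```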
There was even an attempt to bridge the gap — ansible-bender, which let you build container images using Ansible playbooks instead of Dockerfiles. Reuse all your existing roles and config to produce OCI images. Elegant idea. Never took off. Turns out people just wanted to write Dockerfiles.
Docker's answer: Container images. Package once, run anywhere. The image is immutable. What you test is what you ship. The container runtime handles the isolation.
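As a rough sketch of that packaging model (the app, file names, and base image here are hypothetical, not from any real project): the Dockerfile declares everything the application needs, and the built image becomes the immutable artifact you ship everywhere.

```dockerfile
# Dockerfile: a hypothetical Python web app packaged with its runtime
# and dependencies. The image is the artifact; the host only needs
# a container runtime, nothing app-specific.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```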
The first time I built a Docker image and ran it locally, then pushed it to a server and ran exactly the same thing, I knew VMs were dead for application workloads. The experience wasn't incrementally better; it was qualitatively different.
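The workflow itself was, and still is, only a few commands. The registry and image names below are placeholders, but the shape is what mattered: the thing you tested locally is byte-for-byte the thing that runs on the server.

```sh
# Build and test locally (registry and image names are placeholders).
docker build -t registry.example.com/myapp:1.0 .
docker run --rm -p 8080:8080 registry.example.com/myapp:1.0

# Push the same immutable image, then run it unchanged on the server.
docker push registry.example.com/myapp:1.0
ssh server 'docker run -d -p 80:8080 registry.example.com/myapp:1.0'
```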
What Early Adoption Looks Like
Being early on Docker in 2014 meant:
No production documentation. You were figuring out networking, storage, orchestration from first principles and GitHub issues.
No ecosystem. Docker Hub was new. Most images were poorly maintained or abandoned. You built your own.
No orchestration. This was before Docker Swarm was usable, before Kubernetes was production-ready. Multi-host deployments were held together with scripts and hope.
Skepticism from everyone. "It's not production ready." "It's just LXC with nice packaging." "We'll wait until the enterprise version." The sensible people were waiting. The early adopters were building.
The early adopters won. By the time Docker was "enterprise ready," the architecture patterns were established by the people who'd been running it for two years. The best practices, the anti-patterns, the real-world edge cases: all documented by people who'd been there before the official guidance existed.
The Platform Bet
This was bigger than "Docker is useful for my projects." This was "containers are going to be the foundation of how infrastructure works for the next decade."
It wasn't only Docker I was betting on. That same summer I was all over CoreOS — a Linux distribution built from the ground up around containers.
CoreOS was opinionated in all the right ways: immutable infrastructure, automatic updates, etcd for distributed coordination. It was the operating system that took containers seriously as a first-class primitive, not an afterthought bolted onto a general-purpose Linux distro. Red Hat clearly agreed — they acquired CoreOS in 2018 and folded it into OpenShift, which became their entire container strategy.
Platform bets are different from technology bets. When you bet on a technology, you're betting it works. When you bet on a platform, you're betting on ecosystem effects. The value compounds as more people build on it.
Docker in 2014 had the ecosystem momentum. Google backing Kubernetes meant container orchestration would have hyperscaler investment. The cloud providers would build managed services. The tooling would mature. The hiring pool would grow.
The bet wasn't "is this container thing going to be a thing?" The bet was "containers are going to be the foundation of how infrastructure works for the next decade — and the time to build expertise is now."
That bet paid off. Not perfectly — Kubernetes became the real platform, and Docker's own orchestration efforts failed. But the core insight was right: containerisation was the future of application deployment, and 2014 was the time to go all in.
The Pattern
The pattern here is the same one that plays out with every platform shift:
- New technology solves a real problem in a way that's qualitatively better
- Early adopters see the value while mainstream opinion is skeptical
- There's a window where expertise is cheap to acquire
- The window closes as the technology becomes mainstream
- The early adopters have compounding advantages in the new landscape
In 2014, anyone could learn Docker. It was open source, well-documented, easy to run locally. The investment was time and attention, not money.
By 2016, Docker expertise was a hiring filter. By 2018, it was assumed. The window for being early had closed.
What I Learned
Trust your hands-on assessment. I knew Docker was going to win because I'd used it, not because I'd read about it. The analysts and the enterprise architects were waiting for maturity. The people building things had already decided.
Ecosystem momentum matters more than features. Docker wasn't technically superior to alternatives like LXC or rkt. It was better packaged, better documented, and had better network effects. That's what wins platform wars.
"Not production ready" is often wrong. The people saying Docker wasn't production ready in 2014 were right in a narrow technical sense and wrong in every way that mattered. By the time their criteria were satisfied, the early adopters had years of experience.
Being early is uncomfortable. There's no social proof. The documentation is incomplete. You're making decisions without consensus. That discomfort is the price of the advantage.
Where Docker Is Now
Docker just shipped Model Runner: local AI model execution baked into Docker Desktop. To all intents and purposes, everything it does can already be done with Ollama in a container. Yes, they've repackaged AI models as OCI images for distribution, but that's going to cost serious money to sustain. It smells like a company searching for relevance. Hey, it's AI, right? Spend everything now to stay relevant before the singularity.
Docker Inc has been in the doldrums. The Docker Desktop licence change, which made businesses with more than 250 employees pay for it, pushed a wave of teams toward Podman (another Red Hat project; to be fair, they basically cloned the Docker CLI, added some niceties, and kept API compatibility), which does the same job without the licensing headaches. The irony: Docker's great contribution was lowering the barrier to containerisation, and now their business model is raising it again.
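For most teams the switch was low-friction precisely because Podman mirrors the Docker CLI. A rough sketch of what "migrating" typically amounted to (the image names are just examples):

```sh
# Podman is daemonless but speaks the same CLI surface for the common
# cases, so existing scripts and muscle memory largely carry over.
alias docker=podman
docker run --rm -it alpine:3.19 sh
docker build -t myapp:dev .
```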
Docker is still the most recognised name in containers. It's still what people mean when they say "containerise it." But recognition isn't the same as value. Kubernetes won the orchestration war at enterprise scale. Podman — now owned by IBM via Red Hat — is eating the desktop. Where exactly does Docker Inc fit? They don't own the orchestration layer. They don't own the runtime alternative. They keep bolting on features — AI model runners, cloud build services — that feel like a company trying to justify its valuation rather than solving problems people actually have. Death throes of a once wonderful company.
Meanwhile, remember that Red Hat pattern? Ansible in 2015. CoreOS in 2018. Podman growing quietly in the background. Then in 2019, IBM — yes, IBM — acquired Red Hat for $34 billion. The same IBM most people associate with mainframes, research papers, and selling the ThinkPad division to Lenovo. Before all that, IBM gave us the PC architecture, pioneered enterprise research, and drove early mobile computing. A company that's been reinventing itself since the 1960s.
Then they picked up HashiCorp in 2024 — Terraform, Vault, Consul. IBM now quietly owns the most comprehensive infrastructure automation stack on the planet: Ansible for config management, Terraform for provisioning, Podman for containers, OpenShift for orchestration, Vault for secrets. While Docker was bolting on AI model runners, IBM was buying the actual infrastructure layer.
The technology bet was right. The platform bet was right. The company bet? Docker Inc is searching for relevance. IBM, of all people, is the one standing tall.
Need help making the right platform bets for your infrastructure? Let's talk.
This article is part of The Long View — spotting signals and patterns before they're obvious.