On 24th March 2023, I tweeted a single line: "2027🤖". In January 2025, I doubled down: "Still calling AGI for 2027. If you're in any other business, pivot now."

This isn't a casual prediction. I've restructured my entire business around this call.

What I Said

I've been in technology for over 25 years. I've watched hype cycles come and go. I've seen "the next big thing" fizzle more times than I can count. But I've also been early enough on the real shifts to recognise the difference. Here's the full January 2025 post:

"Still calling AGI for 2027. If you're in any other business, pivot now. The singularity has well and truly left the station, but you might still survive riding its coattails."

In April 2025, two years after my tweet, Daniel Kokotajlo, Scott Alexander, Eli Lifland and others published AI 2027, a heavily researched forecasting scenario laying out a month-by-month roadmap to AGI. Their conclusion? Superhuman coders by early 2027, AI agents automating AI research shortly after, and an intelligence explosion that could exceed the impact of the Industrial Revolution. We're already seeing the first part of that shift: spec-driven development is displacing traditional coding right now. They'd spent months modelling what I'd condensed into four characters and an emoji. I'm not saying they follow me on X, but you definitely should.

I'm not hedging. I'm not saying "sometime this decade" or "within our lifetimes". I'm saying 2027, give or take a few months.

If I'm wrong, what have you lost? You'll still have a business that's strategically aligned, leaner, and better positioned for whatever comes next. That's not a bad bet.

If I'm right, you'll wish you'd listened sooner.

What I Mean by AGI

Let me be precise. I'm not talking about a system that passes some arbitrary benchmark. Not a sentient superintelligence, not Skynet wiping out humanity in a single dramatic afternoon. I'm talking about AI that can perform any intellectual task that a human can, with comparable or better capability, across domains it wasn't specifically trained for.

The kind of intelligence that makes most knowledge work redundant. The kind that fundamentally breaks the economic model we've built our careers on.

So what's worse: Skynet, or death by a thousand digital cuts?

Why 2027

The trajectory is visible if you're paying attention:

The capability curve is not linear. Each generation of models isn't incrementally better; it's qualitatively different. GPT-3 to GPT-4 wasn't 30% better. It crossed thresholds that seemed years away.

The infrastructure is scaling. The compute being deployed for AI training is growing faster than Moore's Law ever did; the sketch after these four points puts rough numbers on the gap. Every major tech company is going all in on this.

The research is converging. Reasoning, planning, tool use, multimodality, memory. The pieces that were missing are clicking into place. Not perfectly, but fast.

The money knows. When the smartest capital in the world is moving this aggressively, pay attention. The people deploying it have access to information you don't.
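
To put a number on the compute claim above, here's a minimal back-of-the-envelope sketch in Python. It assumes a roughly six-month doubling time for frontier AI training compute (a commonly cited ballpark from public trend analyses, and an assumption here, not a measured figure) against Moore's Law's roughly two-year cadence.

```python
# Back-of-the-envelope growth comparison.
# Assumption: frontier AI training compute doubles every ~6 months
# (a commonly cited ballpark); Moore's Law doubles every ~24 months.

def growth_multiple(years: float, doubling_time_years: float) -> float:
    """Total growth factor after `years`, given a doubling time."""
    return 2 ** (years / doubling_time_years)

for years in (1, 2, 4):
    moore = growth_multiple(years, 2.0)    # Moore's Law cadence
    compute = growth_multiple(years, 0.5)  # assumed AI-compute cadence
    print(f"{years} yr: Moore's Law x{moore:.1f} vs AI compute x{compute:.0f}")
```

Over four years that's roughly a 4x gain versus a 256x gain. That gap, not any single model release, is what "growing faster than Moore's Law ever did" means.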

I could be wrong about the exact timing. 2028, maybe. But not 2030. Not "sometime next decade". The window is narrow and it's closing fast.

What This Means for Business

If AGI arrives in 2027, the businesses that survive will be those that positioned for it in 2025 and 2026. Not 2027, when it's obvious. By then, it's too late.

This is why I'm restructuring what we do at AxisOps. Coding as a billable service has a limited shelf life. The value is shifting upstream, to the strategic layer: understanding what to build, why to build it, how to integrate AI into organisational structure.

The companies that will thrive are the ones that figure out how to put AI at the top of their org chart, not just in their tooling. That's a human leadership problem, not a technical one. And that's where advisors who understand both technology and business will be essential.

The Uncomfortable Implications

I'm not going to pretend this is all upside. If I'm right about 2027:

  • Most knowledge work becomes automatable within a few years
  • The economic displacement will be faster than any previous industrial transition
  • The safety and alignment problems become genuinely urgent, not theoretical
  • The geopolitical implications are destabilising

I don't have solutions to all of that. Nobody does. But I'd rather be positioning for reality than pretending it isn't coming.

Hold Me to It

That's the point of The Long View. If we get to summer 2028 and I've been proven wrong, I'll write the post-mortem. I'll explain what I missed. That's how you learn.

But I don't think I'm wrong. And if I'm right, the businesses and individuals who took it seriously in 2025 and 2026 will be the ones still standing in 2030.

What I might be getting wrong: the timeline. I'm confident about the direction but the gap between "impressive demos" and "actually replaces knowledge work at scale" might be wider than I think. The infrastructure bottlenecks alone could add a year or two. I'd rather be early and prepared than late and scrambling.

The singularity has left the station. The question is whether you're on the train.

Thinking about how to position your business for what's coming? Whether that's strategy, infrastructure, or just making sense of the noise, I'd like to hear what you're planning. Let's talk about AI readiness.


This article is part of The Long View: spotting signals and patterns before they're obvious.