In 2019, MMC Ventures examined European startups classified as AI companies and found no evidence of AI in the products of 40% of them. They let the label stand anyway, because it helped with fundraising.

The term is "AI washing," and it's everywhere.

What AI washing looks like

I see it constantly in vendor pitches, investment decks, and product demos. Some patterns:

The feature rename. That search function you've had for years? Now it's "AI-powered search." The recommendation engine that uses basic collaborative filtering? "Machine learning recommendations." Rules-based automation? "Intelligent process automation."

None of these are necessarily lies. They're just using the broadest possible definition of AI to catch the hype wave.

The API wrapper. Company builds a thin interface on top of OpenAI or Anthropic models, adds their branding, charges a premium. The "AI" isn't theirs. They're reselling someone else's capability with a markup.

This isn't always bad. Sometimes the value is in the integration, the UX, or the domain-specific prompting. But you should know what you're buying.
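To make that concrete, here's roughly what the entire "product" can amount to. A minimal sketch in Python using the OpenAI SDK; the domain prompt is made up, and none of this is any specific vendor's code.

```python
# Sketch of an "AI product" that is a thin wrapper around a third-party
# model. Assumes the OpenAI Python SDK with an API key in the
# OPENAI_API_KEY environment variable; the prompt is hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

DOMAIN_PROMPT = (
    "You are an assistant for commercial insurance brokers. "
    "Answer concisely and flag anything that needs human review."
)

def answer(question: str) -> str:
    """The entire 'proprietary AI': one API call with a canned prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # someone else's model, resold at a markup
        messages=[
            {"role": "system", "content": DOMAIN_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```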

The demo that doesn't scale. The live demo works beautifully. The pilot project delivers results. Then you try to deploy it across the organisation and discover it requires a PhD to maintain, costs 10x what you budgeted, and breaks on edge cases the demo conveniently avoided.

The vaporware roadmap. "We're adding AI capabilities in Q3." Translation: "We haven't built anything yet, but we need to say AI in our pitch deck."

Questions that cut through the noise

When I'm evaluating AI claims for clients, whether for due diligence or vendor selection, I ask specific questions:

"Show me the training data." If they can't explain where the AI learned what it knows, they either don't have real AI or don't understand their own product. Both are problems.

"What happens when it's wrong?" Every AI system makes mistakes. Good implementations have guardrails, human review processes, and graceful degradation. Bad implementations just serve confident wrong answers.

"What's the latency and cost per query?" Real AI, especially large language models, has real costs. If they're claiming AI capabilities at commodity SaaS pricing, either they're losing money or they're not actually running AI on your queries.

"Can you run this on our data without sending it to external services?" If the answer is no, you need to understand exactly where your data is going and who can see it. This matters enormously for regulated industries.

"What's the accuracy on your actual use case?" Not the benchmark numbers from the model provider. Not the cherry-picked examples. The real accuracy on the messy, edge-case-filled data that exists in the real world.

When AI is genuinely valuable

You'd be forgiven for thinking I'm anti-AI. Quite the opposite. I've seen it deliver genuine value in specific contexts:

High-volume, pattern-based decisions. Fraud detection. Content moderation. Quality control. Anywhere you have thousands of similar decisions that follow learnable patterns.

Document processing at scale. Extracting structured data from unstructured documents. Summarising long texts. Translating between formats.
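To make this one concrete, a sketch of structured extraction using the OpenAI SDK; the invoice schema, field names, and model choice are my assumptions for illustration:

```python
# Sketch: pull structured fields out of unstructured text. Assumes the
# OpenAI Python SDK; the invoice schema is a made-up example.
import json
from openai import OpenAI

client = OpenAI()

def extract_invoice(text: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},  # force a JSON reply
        messages=[
            {"role": "system", "content": (
                "Extract supplier, invoice_number and total_gbp from the "
                "invoice text. Reply with a JSON object using those keys; "
                "use null for anything you cannot find."
            )},
            {"role": "user", "content": text},
        ],
    )
    return json.loads(response.choices[0].message.content)
```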

Augmenting human expertise. Not replacing the expert, but handling the routine parts so the expert can focus on what requires judgment.

Generating first drafts. Code, copy, analysis. Not the final product, but a starting point that's faster than a blank page.

The common thread: AI works when you have clear inputs, measurable outputs, and tolerance for imperfection. For real-world examples I've been involved in, see the AxisOps insights page.

When AI is probably not the answer

And contexts where the hype outpaces the reality:

Strategic decisions. AI can provide inputs to strategy. It can't set strategy. If someone's selling "AI-powered strategic planning," be sceptical.

Low-volume, high-stakes situations. AI needs data to learn from. If you're making a few critical decisions per year, you don't have enough signal for AI to add value.

Deeply creative work. AI can generate variations on existing patterns. It struggles with genuine novelty. The best creative AI outputs are still ones where a human had meaningful input.

Anything requiring accountability. When something goes wrong, "the AI did it" isn't an acceptable answer. If you can't explain why a decision was made, you probably shouldn't automate that decision.

The vendor evaluation framework

For clients evaluating AI vendors, I use a simple framework:

Is the AI core or cosmetic? Does the product genuinely require AI to function, or is AI a marketing wrapper on something simpler?

Is the AI theirs or borrowed? Do they train their own models, or are they calling someone else's API? Neither is inherently better, but the cost structure and dependency risk are different.

Is the AI contained or sprawling? Does the AI do one thing well, or is it supposedly doing everything? Narrow, focused AI tends to work. General-purpose magic AI tends to disappoint.

Is there a human in the loop? Good AI implementations have clear points where humans review, override, or take over. Fully autonomous AI in business contexts is usually a red flag.

Can they explain the economics? AI has real costs: compute, data, expertise. If the pricing doesn't reflect those costs, something doesn't add up.

The honest conversation

Here's what I tell clients when they ask about AI:

AI is a real technology with real applications. It's also overhyped, oversold, and frequently misunderstood. Both things are true.

The goal isn't to adopt AI for its own sake. The goal is to solve business problems. Sometimes AI is the right tool. Often it isn't.

An independent assessment can help you tell the difference. Because vendors will never tell you their AI is smoke and mirrors. They need you to believe.

Evaluating AI vendors or considering AI investments? Let's have an honest conversation about what's real and what's hype.