I've reviewed dozens of technology companies for private equity and venture capital firms. The questions they ask me to answer are rarely the questions that matter.

"Is the code good?" "Is the architecture scalable?" "Are they using modern technologies?"

These are reasonable questions. They're also the wrong starting point.

The real questions

After years of doing this, I've learned that technical due diligence isn't really about technology. It's about risk, capability, and trajectory.

Can this team ship? Not "have they shipped?" but "can they keep shipping?" A codebase can be mediocre and still support a successful business if the team can iterate fast. A beautiful architecture means nothing if the team that built it has left.

Where are the landmines? Every technology stack has them. Technical debt, security gaps, scaling cliffs, vendor dependencies. The question isn't whether they exist. The question is whether the company knows about them and has a plan.

What happens when something breaks? Because something will break. The difference between a minor incident and an existential crisis is how the team responds. Do they have monitoring? Runbooks? On-call rotations? Or does everything depend on one person who happens to know where the bodies are buried?

What the codebase actually tells you

Yes, I look at the code. But not for what most people think.

Consistency matters more than cleverness. A codebase where everything looks the same (naming conventions, error handling, testing patterns) tells me the team has discipline. A codebase full of clever solutions tells me they have heroes. Heroes leave.

Tests tell the truth. Not the test coverage number, but what they test and how. Are they testing business logic or just chasing metrics? Do the tests actually run in CI, or are they broken and ignored?
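
To make that concrete, here's the kind of contrast I mean, as a minimal sketch. The late-fee rule and the names are invented for illustration; the point is that the first test pads a coverage number while the second would actually catch a billing regression.

```python
# Hypothetical example: a made-up late-fee rule standing in for real business logic.
from decimal import Decimal


def apply_late_fee(amount: Decimal, days_overdue: int) -> Decimal:
    """Stand-in rule: a 2% fee once an invoice is more than 30 days overdue."""
    if days_overdue > 30:
        return (amount * Decimal("1.02")).quantize(Decimal("0.01"))
    return amount


# Coverage-padding: executes the code, asserts nothing that matters.
def test_apply_late_fee_runs():
    apply_late_fee(Decimal("100.00"), days_overdue=10)


# Business logic: pins down the rule the company actually bills on,
# including the boundary at 30 days.
def test_fee_applies_only_after_thirty_days():
    assert apply_late_fee(Decimal("100.00"), days_overdue=30) == Decimal("100.00")
    assert apply_late_fee(Decimal("100.00"), days_overdue=31) == Decimal("102.00")
```

A suite full of the first kind produces a reassuring percentage and almost no protection.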

Git history is a timeline of decisions. I can see when corners were cut, when priorities shifted, when the team was under pressure. The commit messages tell me whether they were thoughtful or panicking.
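
A quick scan helps me decide where to read first. Here's a rough sketch of the sort of thing I might run against a local clone; the keyword list is illustrative, not any kind of standard, and none of this replaces actually reading the history.

```python
# Rough sketch: count panic-flavoured words in recent commit messages.
# Assumes git is installed and repo_path points at a local clone.
import subprocess
from collections import Counter

PANIC_WORDS = ("hotfix", "revert", "wip", "temp", "hack", "quick fix")


def commit_subjects(repo_path: str, since: str = "2.years") -> list[str]:
    """Return commit subject lines, newest first."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={since}", "--pretty=format:%s"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()


def pressure_signals(subjects: list[str]) -> Counter:
    """Count how often panic-flavoured words appear in commit subjects."""
    hits = Counter()
    for subject in subjects:
        lowered = subject.lower()
        for word in PANIC_WORDS:
            if word in lowered:
                hits[word] += 1
    return hits


if __name__ == "__main__":
    subjects = commit_subjects(".")
    print(f"{len(subjects)} commits scanned")
    print(pressure_signals(subjects).most_common())
```

Clusters of those terms point at the periods worth reading commit by commit.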

Documentation signals maturity. Not comprehensive documentation (that's often a warning sign of bureaucracy), but the right documentation. Architecture decisions. Runbooks for common failures. Onboarding guides. The things that help new people become productive.

Red flags that predict failure

Some patterns I've seen correlate strongly with troubled investments:

The "genius" architecture. If I need the original architect to explain how the system works, that's a liability, not an asset. Good architecture is boring. It's obvious. Anyone competent can understand it.

Security as an afterthought. If security was bolted on rather than built in, the remediation costs will be significant. I've seen companies spend more on security remediation post-acquisition than on the entire original development.

The deployment spreadsheet. If deployments require manual steps documented in a spreadsheet, the company isn't ready to scale. Every manual step is a failure waiting to happen.

Tribal knowledge. "Oh, only Sarah knows how that works" is a statement that should terrify investors. What happens when Sarah gets sick? Gets a better offer? Goes on holiday?

No monitoring in production. If the team finds out about problems from customers rather than from alerts, they're flying blind. This is shockingly common, even in companies that have raised significant capital.

Green flags that predict success

And some patterns that make me optimistic:

Boring technology choices. PostgreSQL, not the hot new database. Standard frameworks, not custom everything. The team that picks boring technology is focused on the product, not on resume-building.

Fast, frequent deployments. If they can deploy multiple times per day with confidence, they can iterate. If deployments are scary events that happen monthly, they can't.

Incident culture. Post-mortems that focus on systems, not blame. A team that learns from failures is a team that improves.

Clean boundaries. Services that do one thing well. Clear interfaces between components. The ability to change one part of the system without breaking everything else.
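
To show what I mean in code, here's a deliberately simplified Python sketch; the payment names are invented. The ordering code depends on a narrow interface, so swapping the payment provider means changing one adapter rather than every call site.

```python
# Hypothetical example of a clean boundary: ordering logic sees only a narrow
# payment interface, never a specific provider.
from dataclasses import dataclass
from typing import Protocol


@dataclass(frozen=True)
class ChargeResult:
    succeeded: bool
    reference: str


class PaymentGateway(Protocol):
    """The only thing the ordering code is allowed to know about payments."""

    def charge(self, customer_id: str, amount_pence: int) -> ChargeResult: ...


def checkout(gateway: PaymentGateway, customer_id: str, amount_pence: int) -> str:
    # Depends on the interface, not on any particular provider. Changing the
    # provider touches one adapter class, not this function.
    result = gateway.charge(customer_id, amount_pence)
    if not result.succeeded:
        raise RuntimeError("payment declined")
    return result.reference


class FakeGateway:
    """A stand-in implementation, e.g. for tests or a sandbox environment."""

    def charge(self, customer_id: str, amount_pence: int) -> ChargeResult:
        return ChargeResult(succeeded=True, reference="fake-ref-001")


if __name__ == "__main__":
    print(checkout(FakeGateway(), "customer-42", 1999))
```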

Pragmatic technical debt. Every company has technical debt. The good ones know exactly where it is, why it exists, and when they plan to address it. The bad ones are surprised when I find it.

Why independence matters

Here's the uncomfortable truth: internal technical assessments almost always miss things.

The CTO has incentives to present the best possible picture. The engineering team doesn't want to admit the shortcuts they took. The founders genuinely believe their architecture is brilliant (they built it, after all).

An independent review sees what's actually there, not what everyone wishes were there.

I've had conversations that go like this:

"The team tells us the platform can scale to 10x current load."

"They're probably right, but there's a database pattern that will break at 3x. They either don't know about it or don't want to tell you."

That's not because the team is dishonest. It's because they're too close to see it, or they've normalised the risk, or they're hoping to fix it before it matters.

What good due diligence delivers

A useful technical assessment doesn't just list problems. It contextualises them.

This is critical: Fix before acquisition, or materially adjust valuation.

This is significant: Budget for remediation in first year post-acquisition.

This is normal: Technical debt that every company accumulates. Factor into integration planning.

This is actually fine: Things that look concerning but are reasonable trade-offs given the company's stage and priorities.

The goal isn't a perfect score. The goal is no surprises after the cheque clears.

The timing question

When should investors bring in independent technical review?

Too early: During initial screening. You'll spend money on deals that don't progress for non-technical reasons.

Too late: After the term sheet is signed. The power dynamics shift, and you're under pressure to close.

Just right: During serious due diligence, before final terms. Enough commitment that the company will cooperate fully, enough flexibility that findings can influence the deal.

The best engagements I've done were ones where findings led to better deals for both sides. Risks were identified, priced appropriately, and addressed in the integration plan. Everyone went in with eyes open.

If you're evaluating a technology investment and want an honest assessment of what's actually there, let's talk about technical due diligence.

Related: Due diligence is about spotting patterns. Here's what 25 years of reading signals looks like in The Long View.