Due Diligence Blind Spots
“The questions investors don't ask — and why those omissions reveal more about a company than the ones they do.”
I've sat on both sides of the due diligence table.
I've been the investor asking questions about market size, team composition, unit economics, competitive moat. I've watched founders prepare for those questions for weeks, deliver polished answers, and walk away with term sheets.
I've also been on the operator side, fielding those questions and deciding in real time what to reveal and what to frame carefully.
And I've built an AI system that processed 172 deal memos from investments with known outcomes, looking for patterns in what investors asked, what they didn't, and which gaps predicted problems.
Here is what I've learned: the questions investors don't ask matter more than the ones they do.
The Standard Checklist Is Mostly Fine
Market size, competitive landscape, unit economics, team background, technology defensibility. These questions are useful. They're on every checklist for a reason.
The problem isn't the checklist. It's the assumption that checking the boxes constitutes diligence.
It doesn't. The checklist gives you a picture of what the company wants you to see. The omissions and inconsistencies give you a picture of what it is.
The Uncomfortable Questions
"Show me your cap table history and walk me through every decision."
Not just the current state — the full history. Who got what, when, and why? Were there messy situations? Promises that weren't kept? Investors who got pushed out?
Most investors look at the current cap table. What they should be looking at is how it got that way. The cap table is a record of how this founder handles complexity, competing interests, and hard conversations under pressure. That's what you're actually evaluating.
"What are you pretending not to know?"
This is the question that separates founders who've done the hard internal work from founders who've done the hard external work (which is to say, the pitch).
Founders who can answer honestly demonstrate a kind of self-awareness that compounds. They've already found the weak points in their own thesis. They know the risks. They're building with clear eyes.
Founders who deflect — with jargon, optimism, or a redirect to something stronger — are often protecting something. Sometimes it's a genuine problem. Sometimes it's just inexperience with direct conversations. Either way, you want to know which.
"Why hasn't someone else built this already?"
Not the softball version about timing. The hard version: what is actually preventing this? If it's purely "nobody thought of it" — extremely unlikely — say so. If there's a structural reason this is hard (regulation, data access, distribution, physics), what makes you confident you can navigate it when others haven't?
The founders who've grappled with this seriously have a specific, reasoned answer. The ones who haven't will discover the reason the expensive way.
What We Systematically Over- and Under-Index On
We over-index on credentials. We under-index on evidence of building.
Top school, top company, top prior investors — these correlate with success in aggregate but they don't cause it. The more useful question: has this person ever built something from nothing? Not managed something. Built.
The self-taught founder who's been hacking on their problem for two years before approaching anyone for capital is a categorically different bet from the credentialed founder who decided six months ago that their sector was interesting.
Both can succeed. But they're different risks, and they deserve different evaluation frameworks.
We over-index on TAM. We under-index on early adopter reality.
Market size calculations are among the most manipulable numbers in any pitch. The useful question isn't "how big could this be?" It's "do enough people care about this problem right now to be your first hundred customers, and can you actually reach them?" That second question requires much more specific knowledge than a TAM model.
We over-index on technology complexity. We under-index on whether the complexity is necessary.
Technical sophistication can look like a moat and be a boat anchor. The question isn't whether the technology is impressive — it's whether this level of complexity is required to solve the problem, whether it can be maintained as the team grows, and whether it creates competitive advantage or just raises costs.
The Reference Check Failure
Most reference checks are theater.
You call the references the founder provides. They say prepared, positive things. You note that the references were positive and move on.
The reference check that tells you something: ask the founder for contact information for their last three co-workers, including someone who left on bad terms. Someone they had to let go. Someone who reported to them and didn't get the outcome they wanted.
If they won't provide these, that's data. If they will, the conversations tell you what it's actually like to work with this person — which is the most important thing you need to know that you can't get from the pitch.
The questions that matter: What would need to be true for you to work with this person again? When did you last disagree seriously, and what happened? What do they systematically get wrong?
Most references will answer these honestly if you ask directly. Most investors don't ask.
The Financial Model Problem
We spend significant time on financial projections — despite knowing they're largely fiction.
The model isn't useful because the numbers are accurate. It's useful because of how the founder thinks about it.
The questions that reveal thinking quality: What's your burn rate actually driven by — not projected, actual? When did you last revise the model and why? What assumptions create the most variance in outcomes? If revenue is half what you project in year two, what changes?
The third question is particularly useful. Founders who can identify their most important assumptions have a fundamentally different relationship with uncertainty than founders who present the model as if the assumptions are facts.
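What "most variance" means here can be made concrete with a back-of-envelope sensitivity sweep. The sketch below is a hypothetical illustration, not how any particular firm (or NUVC) models deals: every assumption name and number is made up, and the only point is that swinging each input by the same relative amount and ranking the spread in the output shows which assumptions the model actually lives or dies on.

```python
# Toy one-at-a-time sensitivity sweep over a simplified revenue model.
# All assumption names and numbers are illustrative, not drawn from any real deal.

BASE = {
    "customers_month_12": 120,   # paying customers at month 12
    "acv": 18_000,               # average contract value, annualised
    "monthly_churn": 0.03,       # logo churn per month
}

def year_two_revenue(a):
    # Crude projection: the month-12 customer base decays with churn for 12 more months.
    retained = a["customers_month_12"] * (1 - a["monthly_churn"]) ** 12
    return retained * a["acv"]

def sensitivity(base, swing=0.25):
    """Move each assumption +/-25% while holding the rest fixed,
    and report the resulting spread in year-two revenue relative to baseline."""
    baseline = year_two_revenue(base)
    spreads = {}
    for key in base:
        low = dict(base, **{key: base[key] * (1 - swing)})
        high = dict(base, **{key: base[key] * (1 + swing)})
        spreads[key] = abs(year_two_revenue(high) - year_two_revenue(low)) / baseline
    return dict(sorted(spreads.items(), key=lambda kv: kv[1], reverse=True))

for name, spread in sensitivity(BASE).items():
    print(f"{name:20s} drives a {spread:.0%} swing in year-two revenue")
```

A founder who can name the top line of that ranking without running the numbers has already done this exercise in their head. A founder who presents the spreadsheet as a single trajectory usually hasn't.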
The Deal That Taught Me This
One deal I was involved in — a web3 real estate tokenisation platform — passed most of the standard due diligence questions. The product was real, the market was emerging, the team had relevant experience.
What we didn't ask enough about: exit expectations and operational philosophy.
The founder had a specific vision of how the deal should be structured and how quickly it should move. That vision turned out to be incompatible with how the investors thought about governance and timelines. Not because anyone was wrong — because no one had asked the direct questions early enough to surface the misalignment.
I didn't close that deal, and I think that was the right outcome. But we could have reached that conclusion in week two instead of month four if the due diligence had included a direct conversation about what success looked like for everyone in the room.
That's on me.
The Meta-Question
The biggest blind spot might be the assumption that a thorough due diligence process is a comprehensive one.
It isn't. No one's is. Every process has gaps shaped by what you're comfortable asking, what you've been trained to evaluate, and what past experience has made you sensitive to.
The question is whether you're actively working to close those gaps — or assuming that because you worked hard, you looked at the right things.
The uncomfortable questions are uncomfortable for a reason: they surface things that might kill the deal. But better to surface them in diligence than discover them after the wire has cleared.
Ask the questions you're avoiding. The answers are the ones that matter.
Related: The Scoring Model Trap — on why structured evaluation frameworks can make you worse at seeing what matters. Reading Founder Conviction — on the single most underweighted signal in early-stage investing.
Tick Jiang is the technical co-founder of NUVC (nuvc.ai), an AI-native venture capital intelligence platform built in Melbourne. She writes on capital, decision quality, and building across the Asia-Pacific.