
Why I'm Betting on AI-Native Venture

I built NUVC — 8 AI agents, 13 intelligence layers, 7,900+ investors. The empirical research changed everything I thought I knew about how venture capital actually works.
Tick Jiang
11 min read · Best read slowly

I didn't just observe this shift from the outside. I built inside it.

NUVC — the platform I co-founded with Duan Duan, who leads brand, creative, and design — runs an 8-agent AI pipeline across 13 intelligence layers that I designed and built from scratch. It scores pitch decks in under 60 seconds using a multi-signal fusion engine that cross-checks LLM reasoning against deterministic rules, calibrated against 172 real VC deal memos and validated on 85 known-outcome companies including SpaceX, Stripe, Canva, Theranos, and FTX. Its investor database covers 7,900+ investors and 2,200+ funds across 35 countries.
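To make the fusion pattern described above concrete, here is a minimal sketch of how an LLM-derived score might be cross-checked against deterministic rules. This is an illustration only, not NUVC's actual implementation: the rule names, weights, and disagreement threshold are invented for the example.

```python
# Hypothetical multi-signal fusion: blend an LLM-derived score with
# deterministic rule checks, and flag large disagreements for review.
# All weights and thresholds below are illustrative, not NUVC's.

def rule_score(deck: dict) -> float:
    """Deterministic checks: fraction of structural rules the deck passes."""
    rules = [
        deck.get("has_traction_slide", False),
        deck.get("team_slide_names_founders", False),
        deck.get("market_size_cited", False),
        deck.get("revenue_model_stated", False),
    ]
    return sum(rules) / len(rules)  # in [0, 1]

def fuse(llm_score: float, deck: dict,
         llm_weight: float = 0.6, disagreement_threshold: float = 0.35):
    """Weighted fusion of LLM and rule scores; returns (score, needs_review)."""
    rs = rule_score(deck)
    fused = llm_weight * llm_score + (1 - llm_weight) * rs
    # If the LLM and the deterministic rules disagree sharply, escalate
    # rather than trusting either signal alone.
    needs_review = abs(llm_score - rs) > disagreement_threshold
    return round(fused, 3), needs_review

deck = {"has_traction_slide": True, "market_size_cited": True,
        "team_slide_names_founders": False, "revenue_model_stated": True}
score, review = fuse(llm_score=0.9, deck=deck)
```

The design point the essay makes is the cross-check itself: neither the probabilistic nor the deterministic signal is trusted unilaterally, and disagreement is treated as information.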

The entire system has no traditional employee headcount. That's not a cost structure. It's an architectural choice that forces a different way of thinking about what work actually requires humans.

And that experience changed how I evaluate every company I look at now.

The math of venture capital is broken—not metaphorically, but structurally.

The model was built on an assumption that no longer holds: that creating significant software businesses requires enormous teams, long development cycles, and correspondingly large capital deployment. You raised $150 million over twelve years and, if everything went right, built something worth a billion.

That assumption is being dismantled in real time.

Small AI-native teams are reaching $100 million in annual recurring revenue in under two years. Companies with fewer than ten people are shipping products that would have required engineering organizations of fifty a decade ago. The relationship between capital deployed and value created has been fundamentally restructured.

If you're investing with the old math, you're investing in the wrong thing.

Building NUVC also produced research that I didn't expect. After scoring 673+ pitch decks and analyzing 172 real VC investment memos, the data challenged the conventional wisdom I'd been taught: product execution is a stronger predictor of expert VC assessment than team quality (r² = 0.77 vs 0.49). Conviction — the qualitative signal that experienced investors say matters most — is simultaneously the most underweighted factor in formal decision frameworks (r = 0.598, yet typically weighted at only 10% in structured scoring processes).

The industry has been optimising for the wrong signals. That finding shapes how I invest.

What AI-Native Actually Means

There's a lot of confusion about what "AI-native" means as an investment thesis, so I want to be specific.

An AI-native company is not a company that uses AI. Every company uses AI now. Microsoft uses AI. Your local accounting firm uses AI. "Uses AI" is not a differentiator.

An AI-native company is built from the ground up on the assumption that AI agents can perform tasks that previously required human labor—and that this fundamentally changes what the company needs to hire, how it ships product, and what its cost structure looks like.

The difference is architectural. A traditional SaaS company built on top of AI is still, fundamentally, a human-managed system with AI tools. An AI-native company is a system where AI does the work and humans do the strategy, judgment, and relationship management that AI can't yet replicate.

The economic implications of this distinction are not subtle.

A traditional software company building a legal document review product might need a team of ten engineers, a head of product, a sales team, and several legal experts to validate outputs. The marginal cost of serving each new customer involves human review, quality assurance, and sales effort.

An AI-native legal document review company has three engineers and a model. It serves ten times the customers with one-tenth the headcount. The margin profile is categorically different.

This is not the future. It's the present. Companies like this exist right now and are growing at rates that traditional SaaS benchmarks can't adequately capture.

The Three Layers That Matter

The infrastructure of the agent economy is being built across three distinct layers, and the investment opportunities are different at each.

The tools layer is where AI development tools live: the infrastructure that makes it possible to build, test, and deploy AI-native applications. This is a high-conviction area for me because it has clear analogues in software history. When cloud computing emerged, the companies that built developer tools and infrastructure—AWS, Stripe, Twilio—captured enormous value by reducing the friction of building on the new platform. The same dynamic is playing out in AI. Neon, the serverless Postgres company, reported that AI agents are creating databases at four times the rate of human developers. That's not a product metric. That's a signal about the scale of the infrastructure demand.

The orchestration layer is where I spend most of my analytical energy. This is the coordination layer—the systems that manage multiple agents working in parallel, maintain context across tasks, enforce rules and permissions, and enable AI systems to operate reliably in real-world environments. This is the hardest layer to build and the least understood by most investors. It's also the layer where I believe the most durable moats will form. Trust infrastructure—how AI agents establish identity, verifiable provenance, and accountability—is a deep problem that will take years to solve and will produce significant winners.

The application layer is where most of the attention is currently focused, and where I'm most selective. Vertical AI applications—AI-native companies in specific industries—are compelling when the industry has high domain complexity (making it hard for generalist AI to compete), high-value workflows (making the economics work), and structural barriers to competition from incumbents. Healthcare, legal, finance, and government technology all fit this profile. Generic horizontal applications with thin domain moats are more fragile than they appear.

Why This Changes the Economics of VC

Here's the uncomfortable truth for traditional venture: if great companies can be built with dramatically less capital, the traditional VC fund model faces structural pressure.

The $500 million fund that needs to write $20 million checks to deploy its capital cannot write $2 million checks into AI-native companies that will never need a Series B. The incentive structure of large funds pushes toward large checks into companies that need large capital—which increasingly means older-model companies with large human organizations and correspondingly large capital requirements.

The funds that are best positioned for AI-native venture are smaller, more focused, and comfortable with companies that may grow to significant size without ever needing to raise $50 million.

Brainworks Ventures articulated this well when they launched their AI-native fund: the traditional playbook of $150 million over twelve years is being replaced by $6 million over four years achieving comparable outcomes. The velocity of value creation has changed. The capital requirements have changed. The fund structure that serves this shift is different from what most of the industry built.

For me, this means a few things in practice. I'm looking for companies that can be formidably competitive with small teams. I'm suspicious of hiring plans that project large human headcount without strong justification. I'm interested in the founders who understand that the AI they deploy is a form of capital—not a cost, not a tool, but a scalable productive asset that compounds over time.

What I Look For in AI-Native Founders

The founder profile for AI-native companies is genuinely different from the founder profile for traditional software companies, and I think most investors haven't fully updated their pattern matching.

Traditional SaaS investors often look for product-led growth mechanics, enterprise sales experience, and the ability to build a distributed go-to-market organization. These skills are valuable but increasingly insufficient.

What I look for now:

Comfort with asymmetric leverage. AI-native founders think about every workflow in terms of what can be automated and what must remain human. They don't hire a person for a task if an agent can do it. This isn't a cost-cutting mindset—it's an architectural one. They design organizations the way good engineers design systems: for reliability, scalability, and minimal unnecessary components.

Understanding of where the moat actually is. In AI-native businesses, the sustainable competitive advantage is almost never the underlying model. The models are becoming commoditized faster than most people expected. The moat is the data, the domain expertise embedded in the product, the customer relationships that generate proprietary feedback loops, and the distribution channels that incumbents can't easily replicate. Founders who know this design for it from day one.

Ability to work with uncertainty. The AI capability frontier is moving faster than any product roadmap can account for. The founders who navigate this well are not the ones who have the most precise three-year plans. They're the ones who have clear principles about what they're building and for whom—and who can update their tactics rapidly when the technology shifts under their feet.

Orientation toward genuine problems, not AI features. The worst AI-native pitches I see are feature pitches dressed up as company pitches. "We're using GPT-4 to do X" is not a company thesis. The best pitches are entirely focused on the problem: "These people have this specific pain, they cannot solve it with current tools, and the reason we can solve it now is that AI has crossed a capability threshold that makes our approach feasible." The AI is the enabler, not the product.

Why I'm Doing This in the Asia-Pacific

There's a specific reason I'm pursuing this thesis from Australia and across the Asia-Pacific rather than from Silicon Valley.

The US AI application market is crowded and expensive. Hiring AI engineers in San Francisco costs three times what it costs in Melbourne or Bangalore. Distribution in a market where every sector already has a well-funded AI incumbent is structurally harder than entering markets that are still open.

The Asia-Pacific is different. The talent markets are deep and not yet fully priced. The industries where AI-native applications will create the most value—manufacturing, logistics, financial services, healthcare, government—are massive here, and less penetrated by the current generation of software than equivalent sectors in the US. That's a compressible gap, and it compresses fast when the right teams arrive.

The cross-border dimension is the most interesting of all. An AI-native platform that can serve a Japanese manufacturer, an Australian logistics company, and a Singaporean bank from the same underlying infrastructure has a geographic diversification advantage that US-centric companies can't easily replicate. That geographic spread is a moat. It just takes longer to build than a GitHub repo.

This is where I think the best risk-adjusted returns in AI-native venture will be over the next decade: not in the crowded centre of the US market, but at the frontiers where the transformation is still arriving.

The Honest Uncertainty

I want to be honest about what I don't know, because I think intellectual honesty is itself part of the investment thesis.

I don't know which specific applications will win. The application layer is too dynamic right now for high-conviction specific bets. I have views, but I hold them loosely.

I don't know how quickly the capability frontier will move. Models are improving faster than most people expected two years ago, and there are credible arguments that we're closer to AGI-level capability than the consensus believes. If that's true, some of the application bets that look good today will be commoditized by the underlying capability in three years.

I don't know how regulation will evolve. AI regulation is at a genuinely uncertain juncture across most major markets. Regulatory risk is real and not yet fully priced.

What I do know: the structural shift is real. The economic disruption of traditional labor-heavy business models is already happening. The companies that are building AI-native systems today are accumulating advantages—in data, in model fine-tuning, in workflow automation—that will compound over time. The window for establishing those advantages is not permanently open.

I'd rather be early and occasionally wrong about specifics than late and definitively right about nothing that matters.

That's the bet.


Related: Wiring the Bot Economy — the infrastructure layer underneath this thesis: what gets built when the application winners become clear. What Startmate Won't Teach You — because AI-native companies require a different fundraising map, and most of the standard playbook doesn't account for it.

Tick Jiang is the technical co-founder of NUVC (nuvc.ai), an AI-native venture capital intelligence platform built in Melbourne with co-founder Duan Duan. Her empirical research on VC decision-making is targeting publication at ICAIF, FAccT, and AAAI. She writes on capital, AI, and building across the Asia-Pacific.

AI-native venture capital · NUVC · AI investment thesis · venture capital research · AI pitch analysis · founder selection · machine learning investing · product vs team debate · startup evaluation AI
— ◆ —