AI64 - Mapping the Next Generation of Enterprise Software | Redpoint Ventures


About This Presentation

Source: https://www.redpoint.com/ai64/



MAPPING THE NEXT GENERATION
OF ENTERPRISE SOFTWARE

Table of Contents
Introduction
From SaaS to Service-as-Software
Introducing the AI64: Companies Ushering in a New Era of Software
Where AI Works Today
Where AI Is Going Next
Moving Beyond Seats: Pricing for Usage and Outcomes
AI-Native GTM: Faster to Land, Harder to Expand
Building Moats in the AI Era
Conclusion
©2025 Redpoint Ventures. Proprietary and confidential. Do not copy or distribute without permission.

Abstract

History shows that every platform shift in technology–mainframe to PCs,
on-prem to cloud, desktop to mobile–has produced a new wave of market
leaders. With large language models (LLMs), we are entering another
foundational change. In this report, we examine the burgeoning AI application
market, the implications for builders, and the 64 companies Redpoint believes
are poised to define this generation.

Introduction

As of October 2025, most of the commercial value created
by LLMs has come from the infrastructure layer. Chipmakers
such as Nvidia, the single biggest beneficiary of the AI boom,
and other semiconductor players powering AI compute have
captured outsized value, with Nvidia alone now exceeding a
$4T market cap. Foundation model companies like OpenAI,
Anthropic, xAI, and Mistral are collectively worth over $700B.
In contrast, the companies building the business applications
on top of these models—such as Perplexity, Glean, Abridge and
the other 61 companies listed in this report—account for less
than $100B in value.
However, if history is a guide, this imbalance will not last.
Platform companies or LLM providers require extraordinary
capital and specialized talent to compete, which will
concentrate leadership among a small set of winners. The
application layer, by contrast, is where we expect to see an
abundance of opportunities. The scope of potential use cases
is vast, and founders with unique expertise can quickly build
and commercialize products. This is where technology meets
the realities of industry workflows, where business outcomes
are delivered, and where we believe the next decade of value
creation will occur.
The change won’t happen overnight. Facebook arrived eleven
years after the Mosaic browser. Uber launched two years
after the release of the iPhone. Cloud-native companies like
Snowflake and Datadog didn’t reach scale until nearly a decade
after AWS rearchitected how we store and process data.
Iconic applications tend to emerge years after the underlying
technologies reach maturity and the pathway to scale is clear.
But even at this early stage, the AI applications layer seems to
be growing faster than any technological transition we’ve seen.
ChatGPT reached 100 million weekly active users in just two
months—a milestone that took the internet and smartphones
more than sixteen years to achieve. AI usage, measured in
terms of tokens processed by hyperscalers, surged far beyond
what we saw with compute in the early cloud days (Figure 1).
FIGURE 1
Unprecedented Consumption.
AI usage has surged far beyond what we saw in the early cloud days, even accounting for cheaper inference.

In less than three years, the AI ecosystem has produced hundreds of venture-backed companies, fueling a Cambrian
explosion of experimentation and commercialization. Already, AI is drafting contracts, analyzing financials, generating code,
and answering customer queries with fluency and speed that rivals humans. Teachers are generating lesson plans tailored to
each student’s needs. Marketing, advertising, and branding professionals are using AI to brainstorm and iterate, serving as an
ever-ready creative assistant at their side.
AI startups are also scaling faster than any previous generation of software. Data from Stripe shows that on average, AI
startups reach $5M in annualized revenue nearly a year faster than the SaaS generation. In Redpoint’s dataset of top-tier
deals, the spread is even wider: leading AI companies are growing nearly 3x faster than their non-AI counterparts. Some are
moving at unprecedented speed: Lovable hit $100M ARR in eight months, Cursor in twelve, and Perplexity in under two years,
among the fastest ramps in software history. Investors are rewarding that velocity: top-tier AI companies are raising rounds
roughly 2x larger and at 2.7x higher valuations than non-AI peers (Figure 2).
FIGURE 2
AI startups are scaling faster and raising larger rounds at higher
valuations than non-AI startups.
Based on Redpoint’s dataset of Series B and Series C deals done by peer firms in 2024; similar trends at Seed and Series A as well.

FIGURE 3
The AI Opportunity is Massive.
Morgan Stanley estimates global Gen AI enterprise software spend will increase to $401B by 2028, contributing ~22% of total
software spend, and growing at a CAGR of 124%.
The pace of change shows no signs of slowing, either. Morgan Stanley projects that enterprise spending on generative AI
could exceed $400B by 2028, just five years after the launch of ChatGPT (Figure 3). That implies 124% annual growth over
the next four years, compared with just 7% CAGR for the rest of the enterprise software market. By 2028, they expect AI to
account for 22% of global software spend. CIOs are planning accordingly: in a Goldman Sachs survey, technology leaders said
they expect to triple their gen AI budgets in the next few years.
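As a sanity check, the starting base implied by those two cited numbers can be backed out with a line of arithmetic. The 2024 base below is derived from the $401B and 124% figures, not stated in the report:

```python
# Morgan Stanley projection cited above: $401B of Gen AI enterprise
# software spend by 2028, at a 124% CAGR over four years.
# Back out the implied 2024 base (derived, not stated in the report).
target_2028 = 401e9   # projected 2028 spend, in dollars
cagr = 1.24           # 124% annual growth => multiply by 2.24 each year
years = 4             # 2024 -> 2028

implied_2024_base = target_2028 / (1 + cagr) ** years
print(f"Implied 2024 Gen AI software spend: ${implied_2024_base / 1e9:.0f}B")
```

The projection is internally consistent with a 2024 base of roughly $16B, which squares with the report's framing of the market as early.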

The Enablers Behind the AI Boom
Generative AI’s breakneck pace is being powered by plummeting inference costs, improving model capability,
massive infrastructure investment, and record levels of capital formation.
Plummeting costs. LLM inference costs have dropped by nearly 100x in the past two years—far faster than
compute costs in early cloud, which took six years to drop around 5x.
Surging model capability. Every new generation of LLMs delivers meaningful leaps in reasoning, reliability,
multimodal capacity, and adaptability through fine-tuning and retrieval techniques.
Exploding infrastructure investment. Hyperscalers are now investing hundreds of billions of dollars annually in
AI compute, dwarfing historical levels.
Capital infusion. In 2024 alone, about $8B was invested in AI-native application companies—more than double
the prior year. AI companies pulled in 60% of all US VC dollars compared to just 14% in 2022.
FIGURE 4
What Does $177B Spent on Nvidia Chips Imply?
The business case for spending on AI data center build-out
Yet even these projections represent only a fraction of what’s required to justify the scale of today’s infrastructure build-
out. Nvidia’s data-center GPU sales alone are expected to exceed $177 billion by FY 2026—one of the largest single-cycle
capital deployments in technology history. After accounting for buildout costs and margins, that investment base would need
to support roughly $1.2 trillion in AI revenues by 2030, and $1.5 trillion by 2032, to generate a reasonable return on capital
(Figure 4). Therefore, hyperscalers building out their data centers anticipate new markets roughly 2-3x the size of the current
enterprise cloud software market (~$600B) over the next seven years. It is hard to overstate the impact AI will
have on how consumers and businesses operate, and on the value created during this time.
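The Figure 4 logic can be sketched as back-of-envelope arithmetic. Only the $177B GPU figure and the ~$1.2T conclusion come from the report; the intermediate parameters below (GPU share of buildout, blended margin, required return on capex) are illustrative assumptions chosen to reproduce that conclusion, not figures from the report:

```python
# Back-of-envelope sketch of the Figure 4 business case.
gpu_spend = 177e9            # Nvidia data-center GPU sales, FY2026 (from the report)
gpu_share_of_buildout = 0.6  # ASSUMED: GPUs as a share of total data-center capex
total_buildout = gpu_spend / gpu_share_of_buildout   # ~$295B of buildout

gross_margin = 0.50          # ASSUMED: blended margin on AI revenue
required_return = 2.0        # ASSUMED: revenue must cover capex ~2x at that margin

required_revenue = total_buildout * required_return / gross_margin
print(f"Implied AI revenue needed: ${required_revenue / 1e12:.1f}T")
```

Under these stylized assumptions the arithmetic lands at the report's ~$1.2T of AI revenue needed by 2030; the real Figure 4 model may weight the inputs differently.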

From SaaS to Service‑as‑Software
For the past two decades, enterprise software has largely
been about enablement: giving workers tools to do
their jobs better. Salesforce helped sales teams manage
customer relationships; QuickBooks handled accounting;
Asana organized projects; Figma enabled collaboration.
These SaaS systems digitized information, standardized
workflows, and improved coordination—but the human still
did the work.
That is changing with AI-native applications. Companies are
no longer buying software to help with operations; they are
buying software to perform those operations. Rather
than equipping a legal team with tools to organize contracts,
companies like Harvey and Legora deploy agents that help
review and draft contracts. Rather than providing customer
support teams with macros and dashboards to manage
tickets, GigaML and Decagon utilize AI to triage and resolve
issues autonomously.
This shift, from tools to agents, from enablement to
execution, will fundamentally alter the nature of knowledge
work. Intelligent systems will handle workflows that once
required teams of analysts, associates, and coordinators,
unlocking new levels of speed, scalability, and cost
efficiency.
This profoundly expands the potential market for software.
Historically, software budgets have been a fraction of
enterprise spend, dwarfed by the vast sums devoted to
labor. In domains like engineering, law, and marketing, our
estimates indicate labor outpaces software spend by more
than 10x. Consider law: the US legal services market is
estimated to be ~$400B, whereas legaltech is less than a
tenth of that. As AI automates human tasks, software can
now tap into those much larger pools of spend. Globally,
knowledge workers earn an estimated $10T in wages.¹
Assuming a 6:1 conversion from payroll to software could
unlock ~$1.8T in new software opportunity from AI–three
times the current cloud market (Figure 5).
Some of this value will flow to incumbents as they integrate
AI into existing products, but a significant portion of net-
new category creation will come from AI native startups.
As AI moves into robotics, autonomous systems, and other
embodied intelligence, its impact on real-world productivity
could be even deeper.
FIGURE 5
AI unlocks new markets 3x
the size of cloud
Software spend historically represented a small fraction
of enterprise budgets relative to payroll and services. As
LLMs replace workflows previously done by people, the
addressable market for AI expands significantly. If cloud
software is a $600B market, AI unlocks an opportunity at
least 3x larger.
¹ Gartner estimates 100M knowledge workers in the US; assuming an $85K annual salary, that equates to ~$8.5T in wages in the US (non-farm payroll is ~$11T according to the BLS, and ~75% of the workforce is estimated to be knowledge workers, which corroborates this number). Scaled up by 25% to arrive at a global figure.
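The footnote's arithmetic, chained through to the Figure 5 claim, can be checked in a few lines. All inputs are the report's own estimates:

```python
# Reproduce the report's wage-to-TAM estimate (inputs from the text above).
us_knowledge_workers = 100e6    # Gartner estimate of US knowledge workers
avg_salary = 85_000             # assumed average annual salary
us_wages = us_knowledge_workers * avg_salary    # ~$8.5T in US wages
global_wages = us_wages * 1.25                  # scaled up 25% => ~$10.6T

payroll_to_software = 6         # the report's assumed 6:1 conversion
ai_tam = global_wages / payroll_to_software     # ~$1.8T opportunity
cloud_market = 600e9
print(f"AI TAM ~${ai_tam / 1e12:.1f}T, ~{ai_tam / cloud_market:.0f}x cloud")
```

The chain reproduces both headline claims: ~$1.8T of new software opportunity, roughly 3x the $600B cloud market.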

From SaaS to Service-as-Software.
AI will fundamentally change how we do work.

SaaS:
• Assist knowledge workers
• Software-only TAM
• System of record
• Charging for access
• Gross margins above 90%
• Hard to land, built-in expand

Service-as-Software (SaS):
• Perform knowledge work
• Software & services TAM
• Copilots, agents, tech-enabled services
• Charging for usage and outcomes
• Lower gross margins, larger opportunities
• Easy to land, hard to expand

Introducing the AI64: Companies Ushering in a
New Era of Software
The AI64 are the companies we believe are best positioned to define this new generation of software. We selected the AI64
based on four criteria:
Market Size: We prioritized companies tackling large and durable markets where AI can drive step-change efficiency
gains and unlock spend historically reserved for labor or services.
Approach: We looked for product approaches that go beyond incremental improvements—embedding AI directly into
workflows, automating outcomes, and solving the challenges of enterprise adoption.
Team: Founding teams were evaluated on their ability to move quickly, adapt to new architectures, and attract top
talent. Many of the AI64 are led by technical, AI-native founders with a track record of shipping and iterating at speed.
Momentum: Finally, we considered early traction—customer adoption, revenue growth, and product velocity—as
indicators of which companies are already proving AI’s value in the market.
AI64 at a Glance
• $72B aggregate valuation
• $12B total dollars raised
• 16 unicorns


Where AI Works Today
While AI’s potential applications are vast, adoption so far has clustered in functions where ROI is clear, data is rich, and
workflows are repetitive and measurable. Across the AI64, several categories stand out as the earliest proving grounds for AI
in the enterprise.
Software Development
AI has arguably had the most visible early impact in software development not just because of its capabilities, but because of
how quickly it has been adopted. Two-thirds of developers now use AI tools, with GitHub Copilot surpassing 20 million users.
Meanwhile, Cursor has grown to more than 1 million users in just 16 months, making it the fastest-growing AI-powered code
editor in the market. And with the rise of “vibe-coding” platforms like Replit and Lovable, non-engineers—from PMs and
designers to hobbyists—are now building and iterating with AI. Together, these shifts mark a fundamental change in who can
create software and how fast it can be done.
Customer Support
Customer support is one of the clearest proving grounds for AI because it combines massive ticket volumes, repetitive
workflows, and easily measurable ROI. Enterprises have long treated support as a cost center, with spend dominated by labor-
intensive BPOs and in-house teams. AI flips that model—automating common inquiries, scaling elastically, and delivering instant
responses that improve CSAT while lowering costs. The appeal is evident: AI can resolve issues end-to-end, adapt to vertical
compliance requirements, and operate 24/7 without linear headcount growth. Companies like Sierra and Decagon are leading
the way with horizontal support automation, Giga is innovating in voice-based customer support, and others like Gradient are
carving out vertical-specific use cases such as financial services.
Sales & Marketing
AI is remaking sales & marketing by automating the most time-consuming parts of the funnel and enabling far greater
personalization. Clay and Unify GTM are redefining outbound by enriching leads from hundreds of data sources and
orchestrating tailored outreach at scale. On the inbound side, Qualified’s AI SDR engages and qualifies prospects in real time,
ensuring no high-intent lead slips through the cracks. Meanwhile, next-generation CRMs like Attio eliminate manual data entry
by automatically capturing interactions and layering in intelligence to help teams prioritize accounts and pipeline. Together, these
tools shorten sales cycles, boost productivity, and make personalization at scale a reality.

Legal
Law is language, and LLMs are finally well-suited to it. Today's models excel at drafting, summarization, structured extraction, and
research: the very backbone of legal work. Companies like Harvey and Legora are building copilots for elite law firms, already
reaching meaningful scale, while EvenUp and Finch are automating consumer law workflows, such as intake and demand letters.
With law firms generating more than $400 billion in fees annually in the U.S. alone and nearly 90% reporting plans to increase AI
investment, the opportunity extends well beyond copilots, toward AI systems that can replace entire portions of legal labor end-
to-end.
Healthcare
In healthcare, AI is already reshaping the physician’s workflow. Ambient notetakers like Abridge have spread rapidly, with adoption
reaching close to a third of physicians, and OpenEvidence is reportedly used daily by almost 40% of physicians. These adoption
rates are striking in an industry known for slow technology uptake. Beyond clinical documentation, startups such as Assort, Valerie
Health and Tandem are attacking the ballooning costs of administration by automating referrals, front-office tasks, and prior
authorizations. Given the disproportionate rise in administrative complexity and staffing costs over recent decades, it’s no surprise
that early AI wins are emerging in these pain points.

Where AI Is Going Next
Beyond these early leaders lies the next wave of opportunity—industries and workflows where adoption is nascent but
potential is enormous.
Horizontal functions include:
Finance: Finance teams remain buried under manual reconciliations, journal entries, reporting, and month-end closes. AI can
act as a junior accountant automating transaction coding, reconciliations, and forecasting, turning finance professionals into
reviewers rather than doers. Companies like Rillet, Everest, and Maxima are tackling some of the largest workflows in the
enterprise, targeting not just software budgets but billions in labor spend.
IT: IT service management (ITSM) is plagued by ticket backlogs and manual escalations. AI can resolve common tickets
instantly, surface root causes, and proactively detect incidents before they impact users. Serval and Console are reimagining
ITSM with AI-native workflows that challenge ServiceNow’s dominance, while Traversal is pioneering AI-powered SRE,
automating incident detection and response.
User Research: Traditional user research is costly, slow, and sample-limited. AI can continuously analyze customer interviews,
simulate personas, and surface insights in real time. Listen Labs and Aaru are rethinking how product teams gather feedback,
making research faster, cheaper, and scalable across the org.
Design: Creative teams spend huge amounts of time on iteration and formatting. AI-driven design tools like Gamma and Flora
Fauna make presentation building, prototyping, and content design instant and collaborative, unlocking productivity while
broadening who can contribute to creative workflows.
Recruiting: Hiring remains one of the most resource-intensive functions, with recruiters spending countless hours sourcing
candidates and conducting initial screenings. AI can now automate both—surfacing best-fit talent from vast pools and handling
first-round interviews or assessments—so recruiters can focus on closing and relationship-building. Companies like Juicebox are
leading this shift, turning recruiting into a scalable, automated process.
Compliance: Compliance monitoring has historically been reactive and resource-intensive. AI enables continuous monitoring,
real-time alerts, and embedded policy enforcement. Norm is building AI-native compliance infrastructure that reduces risk and
cost for regulated industries.

Industry verticals where software penetration has historically been low are seeing similar momentum:
Financial Services: From accounting to loan servicing, AI is eating into the most labor-intensive workflows. Basis is reinventing
accounting processes for CPA firms, Rogo is applying AI to investing, and Salient is streamlining loan servicing with voice agents
that automate outbound collection calls.
Insurance: At nearly $4T globally, it is one of the largest industries and also one of the most antiquated. Core workflows like
claims, servicing, and sales are still dominated by manual phone calls, paperwork, and armies of agents. Liberate is bringing AI
into this world by automating these processes end-to-end.
Field Services: Frontline industries like HVAC, plumbing, and landscaping still rely heavily on pen-and-paper workflows.
Probook and Avoca are modernizing operations, digitizing scheduling, dispatch, and field data collection, while Bobyard uses
AI to automate takeoffs, helping contractors chase more revenue opportunities by estimating faster.
Logistics: Logistics is the backbone of our economy. Nearly every product we consume passes through several nodes in the supply
chain before it reaches us. Coordinating those handoffs today requires endless phone calls, emails, and texts between shippers,
brokers, and carriers. Companies like Happyrobot, Boon, and Augment are applying AI to eliminate this manual back-and-forth and
freeing humans to focus on higher-value decisions.
Construction: Jobsites are information black holes, with critical data trapped in documents and conversations. Trunk Tools
automates data retrieval and reporting from construction sites, helping crews operate more efficiently and prevent costly re-work.
Across the AI64, three dominant product archetypes are emerging along a spectrum of autonomy (different models of automation):

Copilots: These assist humans in their daily tasks, often through chat or embedded interfaces. The human remains in control and often initiates the workflow, while AI accelerates speed and accuracy. Copilots make the most sense in fields with nuance—law, coding, medicine—where human judgment is indispensable. Example: helps developers code faster while keeping engineers in the driver's seat.

Agents: These go further by initiating and carrying out tasks directly, with humans reviewing exceptions. Interfaces often resemble dashboards or queues for flagged outliers. They are best suited to high-volume, low-complexity work such as Tier 1/2 support, bookkeeping, or outbound calling. Example: automates claims, servicing, and sales workflows for insurance carriers and brokers.

Tech-enabled Services: These deliver outcomes by blending AI with human oversight, especially in industries where buyers prefer services over software. Over time, AI absorbs more of the work. This model resembles a BPO, but with an AI-native foundation. Example: automates administrative tasks such as referrals and scheduling using a combination of AI and human-in-the-loop.

These archetypes illustrate the spectrum of how AI is entering the enterprise, ranging from augmentation (copilots) to automation (agents) to full outcome delivery (hybrid AI and services). Founders need to be deliberate about where they play, as each model has very different implications for pricing, defensibility, and go-to-market.

Moving Beyond Seats: Pricing for Usage and Outcomes
The customer’s unit of value has also shifted: customers increasingly expect to pay for software results, not software access. Tasks
completed, time saved, revenue generated—these are the KPIs of the AI era.
Looking across the AI64, we see a few models of pricing emerge (Figure 6):
The center of gravity has shifted away from seats. Only 19% of AI64 companies price primarily per seat, while 61% now tie at least
part of pricing to usage, outcomes, or hybrid models. This marks a clear break from the traditional SaaS playbook, where growth
depended on ever-expanding seat counts. Salesforce, Workday, Zendesk, and Intercom all scaled on predictable license expansion
as customers hired more people. AI breaks that dynamic as the very promise is fewer seats.
FIGURE 6
Pricing Models Across AI64 Companies
47% of companies price on usage in one way or another.

Usage is the dominant pricing model. Nearly half (47%) of AI64 companies anchor pricing on some form of usage—whether
hybrid (platform + usage, seat + usage) or usage-only. Because inference costs scale with usage, this structure helps companies
protect margins. Among these models, platform fee + usage is most common (17%). By removing seat limits, this model
encourages broad adoption and experimentation—critical when tools are still new. For example, n8n supports unlimited users
and workflows but meters by “workflow executions,” while Clay tiers by number of searches. For Clay, decoupling value from
individual seats encourages adoption beyond core users in Sales to adjacent teams like RevOps and Growth, making the product
stickier and more valuable across the organization.
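A "platform fee + usage" bill like the ones described above reduces to a flat fee plus a metered overage charge. The sketch below is a minimal illustration of that structure; the fee levels, included quota, and per-execution price are hypothetical, not n8n's or Clay's actual pricing:

```python
# Minimal sketch of the "platform fee + usage" pricing model:
# a flat monthly platform fee, plus a metered charge for any
# executions beyond an included quota. All numbers are hypothetical.
def monthly_bill(platform_fee: float, executions: int,
                 included: int, per_execution: float) -> float:
    """Flat fee plus overage charge for executions above the quota."""
    overage = max(0, executions - included)
    return platform_fee + overage * per_execution

# 12,000 executions against a 10,000 quota: fee plus 2,000 metered units.
print(monthly_bill(platform_fee=500.0, executions=12_000,
                   included=10_000, per_execution=0.01))  # 520.0
```

Because no per-seat term appears anywhere in the bill, adding users is free, which is exactly the adoption-encouraging property the report attributes to this model.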
However, while usage-based pricing optimizes for adoption, it isn’t always precise. It treats simple and complex workflows as
equivalent—like paying every employee the same salary regardless of skill. For instance, voice AI vendors that charge per minute
can’t distinguish between a five-minute FAQ call and a high-stakes claims negotiation. Some companies are addressing this: for
instance, Cursor recently replaced fixed request limits with a credit-based system that better reflects underlying compute costs.
Outcomes-based pricing is still nascent. Though widely discussed, only 14% of companies use it today. It’s common in customer
support and tech-enabled services: Sierra, Decagon and Gradient Labs price per resolution, while EvenUp and Finch charge per
demand letter or pre-litigation workflow. The appeal is obvious—it’s easy to sell when pricing maps cleanly to business results—
but harder to operationalize. Vendors must build rigorous attribution, handle disputes over outcomes, and ensure product
reliability is high enough to replace human labor. That complexity has slowed adoption relative to usage-based hybrids.
Pricing maps to product archetypes.
• Agent-style companies like Decagon, Traversal, Salient, and Liberate that perform end-to-end work tend to favor
outcomes-based or usage-hybrid models.
• Copilots that assist humans typically use seat-based pricing with usage limits layered in, as seen with Cursor,
Runway, HeyGen, and Gamma.
• AI-enabled services often mirror BPOs, charging per completed unit of work, for instance, Finch, EvenUp, and
Valerie Health.
• System-of-record companies where AI is part of the solution but not the whole, such as Attio in CRM or Everest
and Rillet in ERP, generally stick with traditional SaaS models based on seats or license fees.
Revenue predictability is the trade-off. Usage and outcomes-based pricing make revenue lumpier. Aligning price with value
strengthens the business case but also raises expectations. Vendors must drive real usage and deliver measurable outcomes to
get paid. Landing a contract becomes easier; consistently meeting KPIs and earning expansion is harder.
The incumbent dilemma: For incumbents built on seat- or license-based models, this shift poses a structural challenge: as AI
reduces the need for human operators, it threatens their core revenue stream. To adapt, some are layering in hybrids: Intercom
now charges per resolution for its AI agent, Fin, on top of its seat-based product, while Salesforce has introduced “Flex
Credits,” letting customers pay per AI action or reallocate unused licenses toward usage. But these changes are incremental.
Retrofitting new models into legacy billing, compensation, and contract systems built for predictable license growth is messy.
More fundamentally, incumbents remain incentivized to defend seat revenue rather than replace it, leaving them straddling two
paradigms and creating an opening for AI-native challengers designed from the ground up for this paradigm.

AI-Native GTM: Faster to Land, Harder to Expand
If pricing defines how value is captured, GTM defines how it’s proven. AI-native companies are rewriting the SaaS playbook here
too. Sales motions are faster, more experimental, and more post-sales intensive—reflecting both buyers’ enthusiasm and the
category’s immaturity.
The Rise of the Forward-Deployed Engineer (FDE). Selling AI increasingly means deploying it. Many teams embed forward-
deployed engineers to scope customer problems, prototype solutions, and iterate in production. This model, borrowed from
Palantir, and adopted by players like Sierra and Decagon compresses feedback loops and accelerates time-to-value, but adds
operational costs and complexity as it scales. Job postings for “Forward-Deployed AI Engineer” roles have grown roughly
2,700% in the past year, underscoring how widespread the concept is becoming. The open question: is this a temporary
necessity born of immature products and buyers, or a lasting feature of how AI software is sold?
Sales Cycles Are Compressing: In a Redpoint survey of enterprise tech buyers, 46% reported procurement cycles of under two
months for new AI solutions—unusually fast for enterprise software (Figure 7). Top-down mandates to “adopt AI” and growing
executive pressure to show progress have accelerated deal velocity, giving vendors quicker entry but also raising expectations
for rapid ROI.
More Pilots, More Competition: Customers are piloting more AI tools in parallel, often pitting vendors head-to-head within
the same workflow. Proof-of-concept performance has become the new competitive arena: products must demonstrate value
quickly to survive procurement and earn expansion. With switching costs still low early on, building stickiness through workflow
embedding and deep integrations is essential.
Post-Sales Becomes the Growth Engine: Signing a deal is no longer the finish line—it’s the starting line. Experimental budgets
and outcome-linked pricing lower the barrier to entry: in Redpoint’s survey of enterprise AI buyers, 45% said they set aside at
least a quarter of their AI spend for testing (Figure 7). But vendors must now prove value to retain and grow accounts, shifting
the onus from pre-sales to post-sales. Iconiq data shows faster-growing AI-native companies dedicate 31–34% of headcount to
post-sales roles, roughly 10 percentage points higher than non-AI peers, a signal that sustained growth now depends more on
delivery than on acquisition.

FIGURE 7
Buyer Perspective
Redpoint’s survey of enterprise tech buyers shows they are buying AI solutions quickly, but many of these purchases are deemed “experimental.”

Building Moats in the AI Era
In the SaaS era, defensibility came from scale—distribution, network effects, and ecosystems that locked customers in. In the
AI era, the sources of advantage look different. The same underlying models are available to everyone, switching costs are
lower early on, and capital is abundant. Across the AI64, we see moats forming and compounding around context, domain
depth, trust, and speed.
1. Context Matters
If data was the moat of the cloud era, context is the moat of the AI era. AI is most valuable when it has the right context at the
right time. The strongest products make context a by-product of use, not an extra step. Cursor exemplifies this: as the editor
itself, it always knows what the developer is seeing, what they’ve just done, and what they’re likely to do next. It tracks the last
n actions to predict the (n + 1)th, giving the model a continuous, high-fidelity understanding of user intent. Products that own
the workflow own the context, and with it, precision, stickiness, and defensibility.
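The rolling-context idea described above can be illustrated with a minimal sketch. This is purely hypothetical code, not Cursor’s actual implementation: a fixed-size buffer keeps the last n user actions, and that window becomes the context handed to a model when predicting the (n + 1)th.

```python
from collections import deque

class ActionContext:
    """Illustrative rolling buffer: retain the last n user actions
    and expose them as context for predicting the next one."""

    def __init__(self, n=5):
        self.n = n
        self.actions = deque(maxlen=n)  # oldest entries drop automatically

    def record(self, action):
        self.actions.append(action)

    def context_for_prediction(self):
        # The last n actions, oldest first, form the prompt context
        # for predicting the (n + 1)th action.
        return list(self.actions)

ctx = ActionContext(n=3)
for a in ["open file", "rename variable", "run tests", "fix failing test"]:
    ctx.record(a)

print(ctx.context_for_prediction())
# ['rename variable', 'run tests', 'fix failing test']
```

Because the buffer is filled as a by-product of normal use, the product never has to ask the user for context explicitly — which is the point the paragraph above is making.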
2. Data Moats Are Evolving
As foundation models improve and vendors like Mercor make large-scale datasets more accessible, owning proprietary
data is no longer enough. The real defensibility now lies in context-rich feedback loops: continuous signals that come from
being deeply embedded in a user’s workflow. When the model is the product, every interaction becomes an opportunity
to improve. It’s a break from the SaaS era, where incorporating feedback was slow and manual; in AI, learning can happen
seamlessly and in real time. That’s why distribution, i.e. embedding into the flow of work, has become the true data
advantage. Abridge’s performance, for instance, improves as it learns from real-world physician interactions across large-
scale deployments, capturing the linguistic and contextual nuances that drive clinical accuracy at scale. In AI, the edge is
shifting from owning proprietary data to learning from it faster.
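A context-rich feedback loop of the kind described above can be sketched in a few lines. This is a hypothetical illustration (the function and field names are invented, not any vendor’s API): each interaction records what the model proposed, what the user actually kept, and the surrounding workflow context, turning everyday usage into training signal.

```python
# Hypothetical sketch of a context-rich feedback loop: every user
# interaction is logged with its surrounding context so the product
# can learn from real usage rather than static training data alone.

feedback_log = []

def handle_interaction(model_output, user_final, workflow_context):
    """Record whether the user accepted or corrected the model's output."""
    feedback_log.append({
        "context": workflow_context,          # what the user was doing
        "suggested": model_output,            # what the model proposed
        "final": user_final,                  # what the user actually kept
        "accepted": model_output == user_final,
    })

handle_interaction("def add(a, b): return a + b",
                   "def add(a, b): return a + b",
                   {"file": "math_utils.py"})
handle_interaction("patient has flu",
                   "patient has influenza A",
                   {"file": "visit_note.txt"})

# Aggregate signals like acceptance rate guide model improvement.
acceptance_rate = sum(f["accepted"] for f in feedback_log) / len(feedback_log)
```

The log itself is only useful because it is captured in the flow of work — which is why the text frames distribution, not raw data ownership, as the durable advantage.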
3. Vertical Specialization
Depth is beating breadth. Nearly half (44%) of the AI64 companies are vertical specialists compared to just 14% in the
Bessemer Cloud Index, which tracks leading SaaS companies. This marks a shift toward domain depth over generality.
Companies like Legora in law, Liberate in insurance, and Gradient in financial services exemplify this. In AI, workflows
often involve agents making decisions or drafting actions. That requires deep domain context. Buyers want solutions that
understand their world out of the box, not a generic platform they need to retrofit. Vertical players tailor their solutions
to industry nuances, integrate with legacy systems of record, and align with domain-specific regulations and edge cases,
creating switching costs that horizontal tools can’t match.
4. Brand and Credibility
In a market flooded with hype and “demoware,” brand is a competitive advantage. Enterprises are moving from pilots to
production, but they’re cautious. Vendors that prove real ROI through published case studies, measurable outcomes, and
reference customers stand out fast. Companies like Sierra and Decagon have leaned into transparency, quantifying results
like resolution rates, accuracy, and time-to-value.
5. Capital as a Moat
In today’s market, investors are racing to anoint winners in key categories, often ahead of fundamental metrics. That
kingmaking effect gives select companies the resources to outspend and outbuild peers: investing aggressively in product
and GTM, and undercutting competitors on price to capture share. In a market moving this fast, perception can quickly
become reality, and capital itself becomes the moat.

Lessons for Builders: Building Moats in the AI Era
The AI application wave is moving faster than any previous technology cycle. For founders, this creates unprecedented
opportunity but also a brutally competitive landscape where speed, focus, and credibility matter more than ever. Based on
what we’ve observed across the AI64 and the broader market, here are five principles to keep in mind.
1. Figure out what’s truly proprietary.
Foundation model companies are aggressively moving up the stack. OpenAI’s recent announcement that it is entering the
hiring market is only the latest example of how quickly today’s platforms can encroach on new categories. Founders must be
crystal clear about what makes their product defensible: proprietary feedback loops, vertical-specific workflows, or solving
the last-mile problem of embedding AI reliably into enterprise workflows. Anything less risks being commoditized.
2. Outlearn the market.
Velocity has always mattered, but it matters especially now, when model capabilities shift by the week and yesterday’s
breakthroughs quickly become table stakes. The half-life of an AI advantage is short. The best teams treat speed not just as
execution but as strategy: shipping fast, learning fast, and adapting faster than the market can settle.
3. Sell consultatively.
Many AI applications target areas of the business that buyers have never thought of as “technology categories.” That means
sales is not just about pitching a product, but about teaching buyers a new way of working. Founders who embed themselves
in customer processes, guide them through this transition, and make adoption feel low-risk will stand apart.
4. Invest in brand early.
With barriers to building products at historic lows, every category is flooded with competitors. The companies that break out
are the ones that establish credibility and trust early. That means cultivating prominent champions, publishing case studies,
and earning the right to be the default vendor in your category. In spaces without a clear leader, the opportunity is to “suck
the air out of the room” by becoming synonymous with the problem you solve.
5. Maximize your TAM.
It’s smart to start narrow—with a tightly defined workflow or customer segment. But to build a category-defining company,
you need to expand into a market large enough to sustain long-term growth. That requires building additional products,
broadening your scope, and ultimately becoming a platform. The best founders balance near-term focus with a clear
roadmap to owning a much larger market.

Conclusion
The last two years have made one thing clear: the real story of AI is just beginning. Foundation models may power the engines,
but it is the companies that reimagine workflows, build trust with enterprises, and deliver measurable outcomes that will
create enduring value. At Redpoint, we could not be more energized about the decade ahead. If you’re building in AI—whether
as an early founder with a bold idea or a scaling company pushing the frontier forward—we’d love to hear from you.
