The AI-Native Services Playbook - with Jake Saper, General Partner - Emergence Capital
Summary

Our host, Ray Rike, sits down with Jake Saper, General Partner at Emergence Capital, to unpack the firm's AI-Native Services Playbook. Jake brings a unique lens: 12 years at Emergence, early-stage bets on companies like Zoom, and seven AI-native services businesses already in the portfolio. The conversation covers what separates AI-native services from SaaS, why the business model is harder to execute than it looks, and the five metrics and structural choices that determine who wins.

WHAT WE COVER IN THIS EPISODE

Domain Expertise: Critical, But Not Required from the Founders

AI-native services companies sell outcomes, not products. That means trust and credibility are the first sale. Domain expertise is non-negotiable, but it does not have to live in the founding team if two conditions are met: the founders go as deep as humanly possible on the service before launch, and they hire senior domain experts early. Emergence portfolio company Hanover Park, an AI-native fund administrator, is the case study. The founder interviewed 150 CFOs before writing a line of code and hired respected fund accounting veterans to sit alongside the AI. That combination unlocked enterprise trust from day one.

Hire a Product Leader Before You Think You Need One

The biggest structural trap in AI-native services is over-relying on human delivery while the product falls behind. Market pull is strong by design: if you promise faster, better, cheaper outcomes in an existing market, customers will buy. But if delivery is primarily human, you have a services company with venture capital financing and no AI leverage. The fix is a dedicated product leader whose sole KPI is productizing the service. The best AI-native services companies run a tight feedback loop between the doers (service delivery) and the builders (engineering), and the product leader owns that loop.

The Mirage of Product Market Fit

In SaaS, fast growth plus strong net dollar retention meant you had product market fit. In AI-native services, those signals are necessary but not sufficient. Revenue growth powered by human labor is a false signal; true product market fit requires that AI deliver the majority of the service value. Jake's framework: track both leading indicators (a North Star product metric showing AI leverage improvement, such as human review time per contract or time to migrate a line of code) and lagging indicators (revenue per FTE trending up quarter over quarter, and gross margin). The leading indicators tell you whether you are building leverage; the lagging indicators confirm it.

Outcome-Based Pricing: The Direction of Travel

AI-native services companies that started with labor-based pricing will need to migrate toward outcome-based pricing over time, and the transition requires patience. Emergence portfolio company Prosper AI, an AI-native healthcare services provider handling prior authorization and benefits verification, navigated this by moving a portion of contracts to resolution-based pricing while keeping the remainder on a per-minute basis. That hybrid approach gave both sides the data and comfort to expand the outcome-based portion at renewal. Jake's view: as AI does more of the work, downward pricing pressure is inevitable, but upward margin pressure offsets it.

Revenue Per FTE and Gross Margin: The Two Metrics That Matter Most

Revenue per FTE is the primary signal of AI leverage, but it needs to be benchmarked two ways: against the legacy service provider in the same vertical, and against itself quarter over quarter. The latter is more important: if revenue per delivery FTE is not improving each quarter, the AI is not compounding. On gross margin, the industry is still in the Wild West. Two common errors: allocating service delivery headcount to R&D instead of COGS because the team "helped train the model," and excluding inference spend from COGS. Both understate the true cost of delivery. Customer-specific model training belongs in COGS; base model training belongs in R&D.

The Moat Question

Brand trust and proprietary data are the two sources of durable advantage. Brand matters because enterprises buying AI-delivered outcomes need a trusted guarantor. Data matters because high-volume AI-native operations accumulate transaction data that legacy providers, running at lower volume with more human overhead, simply cannot match. Emergence portfolio company Harper, an AI-native insurance broker, is outperforming brokerages ten times its size on placement speed and carrier-risk matching because its data volume is superior.

LINKS

Emergence Capital AI-Native Services Playbook: em.cap.com

ABOUT AI TO ROI

Ray Rike is the Founder and CEO of Benchmarkit, the leading B2B SaaS and AI-native software benchmarking company. The AI to ROI podcast brings a metrics-first lens to enterprise AI adoption, ROI measurement, and the business models being built on top of AI. Subscribe on your favorite podcasting app and connect with Ray on LinkedIn.