AI to ROI (fka Metrics that Measure Up)

By: Ray Rike
Summary

AI to ROI is a podcast that explores how enterprises translate AI investments into measurable business value. Hosted by Ray Rike, Founder and CEO of Benchmarkit, the show features senior enterprise leaders and AI software executives who share how AI initiatives move from pilots to production, and how ROI is actually measured and achieved. In addition, each week we publish a bonus episode with AI to ROI Newsletter co-author Peter Buchanan to discuss the Big Story of the Week.

The AI to ROI podcast is the evolution of the original "Metrics that Measure Up" podcast.

Economics, Management, Management & Leadership
Episodes
  • NVIDIA – The Full-Stack Maestro
    May 13 2026

    Five months ago, Ray and Peter called NVIDIA the maestro of the AI economy. Since then, NVIDIA has not just conducted the orchestra. It has rewritten the music and may be building the entire concert hall. In this episode, Ray and Peter revisit their October thesis, walk through everything NVIDIA unveiled at GTC, and break down what it all means for enterprise AI buyers navigating infrastructure, inference costs, and procurement strategy.

    What we covered in this episode:

    From GPU maker to full-stack AI platform: the transformation is complete

    NVIDIA's strategic intent is no longer just selling chips. It is embedding its technology across the entire AI stack and becoming the foundational layer on which the rest of the AI economy rests. Ray draws the only historical parallel he can find: what IBM was to enterprise technology from the 1960s through the 1980s. The difference is NVIDIA is moving faster, with more cash, and with a software flywheel IBM never had.

    GTC was not a product launch; it was a platform declaration

    NVIDIA unveiled the Vera Rubin platform, a fully integrated AI supercomputer with liquid cooling and a two-hour installation window. They licensed Groq's LPU architecture in a $20 billion deal that combines GPU and LPU chips to deliver 35x token throughput over current Blackwell systems. They launched NemoClaw (an enterprise-grade agent framework already partnered with Adobe, Salesforce, and SAP), Dynamo (an open-source inference operating system), and the Nemotron family of open-source frontier models. Jensen committed $26 billion over five years in free cash flow to build best-in-class frontier models with no outside funding required.

    The financial performance is in a category by itself

    Fiscal year 2026 revenue came in at $215.9 billion, up 65% year over year and 8x since 2022. Data center revenue exceeded $190 billion. Free cash flow hit $97 billion, translating to a 47% free cash flow margin. Combined with 65% growth, that is a Rule of 40 score of 109. Ray notes he has never seen anything like it at scale, and NVIDIA is a hardware company running 80% gross margins. CFO Colette Kress described their inference position as "right now, we are the king of inference."
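For context on the score above: the Rule of 40 simply sums revenue growth rate and a profit margin (here, free cash flow margin). A minimal sketch using the rounded figures in this summary; note the rounded inputs (65% + 47%) sum to 112, so the 109 cited in the episode presumably reflects unrounded figures:

```python
# Rule of 40: revenue growth rate plus profit margin, both in percentage points.
# A score above 40 is the conventional bar for healthy software businesses.

def rule_of_40(growth_pct: float, margin_pct: float) -> float:
    """Return the Rule of 40 score as a simple sum of the two percentages."""
    return growth_pct + margin_pct

# Rounded figures from the summary above: 65% growth, 47% FCF margin.
score = rule_of_40(growth_pct=65.0, margin_pct=47.0)
print(score)  # 112.0 (the episode cites 109, likely from unrounded inputs)
```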

    The moat is not hardware. It is ecosystem lock-in

    Since 2022, NVIDIA has committed over $50 billion across 170 venture deals, with corporate deal volume growing from 12 deals in 2022 to 67 deals in 2025. Portfolio companies include OpenAI, Anthropic, xAI, CoreWeave, and Lambda. Sovereign AI contracts signed since October total $30 billion across France, the Netherlands, Canada, Singapore, and the Middle East. Hyperscalers still represent roughly 50% of revenue, but the faster-growing segments are sovereign entities, enterprise verticals, and NeoCloud providers, which is exactly the diversification NVIDIA needs as hyperscaler CapEx normalizes.

    The risks are real but manageable from where NVIDIA sits today

    Custom ASICs from Google, Amazon, Meta, and Microsoft represent the most credible competitive threat, though those chips are optimized for internal platforms and do not solve multi-cloud or on-premise deployment needs. Export control escalation remains a live risk, with NVIDIA restarting H20 production for China. TSMC concentration is a structural vulnerability, especially given geopolitical risk around Taiwan. And three hyperscalers account for over half of NVIDIA's receivables, some of which are actively building competing chips.

    What enterprise AI buyers should do right now

    Ray and Peter close with four concrete takeaways for enterprise buyers: evaluate the full infrastructure stack, not just GPU cost; model inference economics carefully before deciding which models to run and where; pursue a strategic partnership with NVIDIA rather than transactional procurement, because partnership creates supply access standard customers do not get; and do not assume custom silicon from hyperscalers solves your problem, because data residency and on-premise requirements often mean NVIDIA needs to be part of the solution regardless.

    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    34 mins
  • The AI-Native Services Playbook - with Jake Saper, General Partner - Emergence Capital
    May 7 2026
    Our host, Ray Rike, sits down with Jake Saper, General Partner at Emergence Capital, to unpack the firm's AI-Native Services Playbook. Jake brings a unique lens: 12 years at Emergence, early-stage bets on companies like Zoom, and seven AI-native services businesses already in the portfolio. The conversation covers what separates AI-native services from SaaS, why the business model is harder to execute than it looks, and the five metrics and structural choices that determine who wins.

    What we cover in this episode:

    Domain expertise: critical, but not required from the founders

    AI-native services companies are selling outcomes, not products. That means trust and credibility are the first sales. Domain expertise is non-negotiable, but it does not have to live in the founding team if two conditions are met: the founders go as deep as humanly possible on the service before launch, and they hire senior domain experts early. Emergence portfolio company Hanover Park, an AI-native fund administrator, is the case study. The founder interviewed 150 CFOs before writing a line of code and hired respected fund accounting veterans to sit alongside the AI. That combination unlocked enterprise trust from day one.

    Hire a product leader before you think you need one

    The biggest structural trap in AI-native services is over-relying on human delivery while the product falls behind. Market pull is strong by design: if you promise faster, better, cheaper outcomes in an existing market, customers will buy. But if delivery is primarily human, you have a services company with venture capital financing and no AI leverage. The fix is a dedicated product leader whose sole KPI is productizing the service. The best AI-native services companies run a tight feedback loop between the doers (service delivery) and the builders (engineering), and the PM owns that loop.

    The mirage of product-market fit

    In SaaS, fast growth plus strong net dollar retention meant you had product-market fit. In AI-native services, those are necessary but not sufficient. Revenue growth powered by human labor is a false signal. True product-market fit requires that AI is delivering the majority of the service value. Jake's framework: track both leading indicators (a North Star product metric showing AI leverage improvement, such as human review time per contract or time to migrate a line of code) and lagging indicators (revenue per FTE trending up quarter over quarter, and gross margin). The leading indicators tell you if you're building leverage. The lagging indicators confirm it.

    Outcome-based pricing: the direction of travel

    AI-native services companies that started with labor-based pricing will need to migrate toward outcome-based pricing over time, and the transition requires patience. Emergence portfolio company Prosper AI, an AI-native healthcare services provider handling prior authorization and benefits verification, navigated this by moving a portion of contracts to resolution-based pricing while keeping the remainder on a per-minute basis. That hybrid approach gave both sides the data and comfort to expand the outcomes-based portion at renewal. Jake's view: as AI does more of the work, downward pricing pressure is inevitable, but upward margin pressure offsets it.

    Revenue per FTE and gross margin: the two metrics that matter most

    Revenue per FTE is the primary signal of AI leverage, but it needs to be benchmarked two ways: against the legacy service provider in the same vertical, and against itself quarter over quarter. The latter is more important. If revenue per delivery FTE is not improving each quarter, the AI is not compounding. On gross margin, the industry is still in the Wild West. Two common errors: allocating service delivery headcount to R&D instead of COGS because the team "helped train the model," and excluding inference spend from COGS. Both understate the true cost of delivery. Customer-specific model training belongs in COGS. Base model training belongs in R&D.

    The moat question

    Brand trust and proprietary data are the two sources of durable advantage. Brand matters because enterprises buying AI-delivered outcomes need a trusted guarantor. Data matters because high-volume AI-native operations accumulate transaction data that legacy providers, running at lower volume with more human overhead, simply cannot match. Emergence portfolio company Harper, an AI-native insurance broker, is outperforming brokerages ten times its size on placement speed and carrier-risk matching because its data volume is superior.

    Links: Emergence Capital AI-Native Services Playbook: em.cap.com

    About AI to ROI: Ray Rike is the Founder and CEO of Benchmarkit, the leading B2B SaaS and AI-native software benchmarking company. The AI to ROI podcast brings a metrics-first lens to enterprise AI adoption, ROI measurement, and the business models being built on top of AI. Subscribe on your favorite podcasting app and connect with Ray on LinkedIn.
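Jake's lagging indicator, revenue per delivery FTE improving quarter over quarter, can be sketched as a simple trend check. All company figures below are hypothetical illustrations, not data from the episode:

```python
# Illustrative check of the lagging indicator discussed above: if revenue per
# delivery FTE is not rising each quarter, the AI leverage is not compounding.
# The quarterly figures here are made-up examples, not numbers from the episode.

def revenue_per_fte(revenue: float, delivery_ftes: int) -> float:
    """Quarterly revenue divided by service-delivery headcount."""
    return revenue / delivery_ftes

def is_compounding(quarterly: list[tuple[float, int]]) -> bool:
    """True if revenue per delivery FTE rose in every successive quarter."""
    series = [revenue_per_fte(rev, ftes) for rev, ftes in quarterly]
    return all(later > earlier for earlier, later in zip(series, series[1:]))

# Hypothetical three quarters of (revenue, delivery FTEs):
quarters = [(2_000_000, 40), (2_400_000, 42), (2_900_000, 43)]
print([round(revenue_per_fte(r, f)) for r, f in quarters])  # [50000, 57143, 67442]
print(is_compounding(quarters))  # True
```

The same function flags the false-positive case Jake warns about: revenue can grow while revenue per FTE stagnates, which means growth is being bought with human labor rather than AI leverage.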
    30 mins
  • The Role of the CAIO in a Managed Service Provider - with Jim Piazza, CAIO Ensono
    Apr 28 2026
    Ray Rike sits down with Jim Piazza, Chief AI Officer at Ensono, a managed services provider scaling AI across both its internal operations and customer environments. Jim brings a rare combination: deep infrastructure experience, nearly a decade at Meta scaling data center operations with machine learning, and a rigorous framework for connecting AI investments to business outcomes that executive operators can actually measure.

    Key topics:

    Defining the Chief AI Officer role in an MSP

    Jim describes the CAIO role as a blend of CDO, CIO, and CTO with an AI lens, but with a critical distinction: the job is not to ask what AI can do. It is to identify where AI improves service delivery, customer outcomes, and financial performance. At Ensono, that meant starting small as VP of Predictive Systems, demonstrating results, and earning the mandate to expand. Prioritization, not ideation, is the core skill.

    Building AI tools that drive internal operational ROI

    Ensono developed three production AI systems for internal use. Envision Predictive Engine analyzes telemetry data across systems to predict failures before they cause business impact, including one case where a problem was detected 144 minutes before it would have affected a major logistics customer, outside Ensono's own scope of responsibility. Diagnose Now puts the right diagnostic data in front of engineers at the right moment and has delivered up to a 66% reduction in mean time to repair in A/B testing. ChangeGuardian assigns risk scores to the 8,000-plus changes Ensono executes monthly, auto-generating methods and procedures from a decade of historical change data to reduce both risk and manual effort.

    Structuring AI governance: the Three Musketeers model

    Jim, the CTO, and the CIO operate as a deliberate leadership triad. The CTO owns the platforms. The CIO owns data quality and structure. The CAIO owns the build-versus-buy decision and solution development. Shared accountability, not siloed ownership, drives alignment. Each business unit also contributes one to two subject matter experts through a formal value stream mapping process to identify where AI should focus first.

    Measuring AI ROI before writing a line of code

    Jim's most consistent lesson: define your value metrics before touching the technology. AI use cases must tie back to core business metrics such as mean time to repair, customer satisfaction, SLA risk reduction, and gross margin improvement. Business unit leaders own the outcome measurement. The CAIO owns the budget and the technology. That separation of responsibility keeps AI programs anchored to results rather than activity.

    The CAIO and CIO relationship: where the lines get drawn

    For companies bringing in a Chief AI Officer alongside an existing CIO, Jim offers a practical delineation. The CIO owns data infrastructure and quality. The CAIO is a consumer and a builder who depends on that foundation. Without clean, accessible data, AI programs stall regardless of the use case. The CAIO's job is to surface missing or insufficient data and partner with the CIO to close the gap.

    Lessons learned and career advice for the AI era

    Jim's framework for AI program success: start with one or two high-probability use cases where data is already in good shape, build credibility through results, then expand. Avoid the ten-pilot trap. Kill weak use cases early. For early-career professionals, his advice is equally direct: learn to work with AI, not compete with it. Build problem framing, critical thinking, and business judgment. Technical fluency matters, but business judgment is what separates the people AI replaces from the ones AI makes more valuable.

    This episode is essential listening for technology and operations executives navigating the practical reality of AI deployment inside complex enterprise environments. If you are a CIO, CTO, COO, or Chief AI Officer trying to figure out how to structure governance, measure impact, and build internal credibility for AI programs, Jim Piazza gives you a real-world operating model, not theory. For managed services leaders and enterprise buyers evaluating MSP capabilities, the Ensono case studies show what it looks like when an MSP moves from reactive service delivery to predictive, AI-driven outcomes. And for executives still debating whether to hire a Chief AI Officer, this conversation makes a direct case for what the role should own, how it should partner, and what success looks like when it is done right.
    29 mins