Support Experience

By: Krishna Raj Raja

About this listen

Customer support isn't just a cost center—it’s the heartbeat of your brand. Based on the principles of the book Support Experience, this podcast dives into the strategies that transform standard service into a competitive advantage.

Voice of the Customer is the lifeblood of every technology business. But most companies lose touch with it as they scale, leading to poor customer experiences and high churn.

Some companies, however, have taken a different path. They not only stay in touch with the Voice of the Customer... they amplify it with artificial intelligence and smart automation. Their secret? Building a world-class Support Experience.

Support Experience transforms customer support from a reactive cost center to a proactive profit center. It empowers your people to deliver exceptional support at scale. It turns customer conversations into tangible product improvements, fueling the long-term health of your business.

Krishna Raj Raja shares the blueprint for building a thriving business in the age of AI while making customer support more human than ever, with examples from iconic companies like Apple, Adobe, Google, Salesforce, Snowflake, VMware, and more. This podcast is for CEOs, Chief Customer Officers, Customer Support Leaders, Product Managers, and anyone looking to leverage AI for better customer experiences.

© 2026 Krishna Raj Raja
Economics
Episodes
  • How NICE Reinvented Knowledge Access with AI
    Feb 26 2026

    Enterprise support organizations face a growing challenge: knowledge is expanding, but access is shrinking. When knowledge ecosystems become too large and fragmented across different systems, both customers and internal teams struggle to find the answers they need.


    In this episode, we feature Chris Romrell from NICE, who explains how the company tackled the widespread problem of "knowledge sprawl" head-on. Discover how NICE transformed its support experience by moving away from frustrating keyword-based searches and static link lists and embracing Generative AI and Precision RAG (Retrieval-Augmented Generation). Powered by SupportLogic's Resolve SX, NICE shifted from simply indexing knowledge to intelligently interpreting it.

    Chris pulls back the curtain on the economics of this transformation, sharing the exact ROI model they used to justify the investment. By aiming to deflect just 3% of their 50,000 annual support cases, NICE was able to generate substantial operational savings.
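    The deflection math Chris describes can be sketched in a few lines. The case volume and deflection rate come from the episode; the cost-per-case figure is an illustrative assumption, not a number Chris shares:

```python
# ROI sketch for case deflection. Figures from the episode: 50,000 annual
# cases, 3% deflection target. The cost-per-case value is an assumption
# chosen for illustration only.

def deflection_savings(annual_cases: int, deflection_rate: float,
                       cost_per_case: float) -> float:
    """Estimated annual savings from deflected support cases."""
    return annual_cases * deflection_rate * cost_per_case

cases = 50_000
rate = 0.03        # 3% deflection target
cost = 250.0       # assumed fully loaded cost per case (USD) -- hypothetical

deflected = cases * rate
savings = deflection_savings(cases, rate, cost)
print(f"{deflected:.0f} cases deflected, ${savings:,.0f} saved per year")
```

    Even at a modest assumed cost per case, deflecting a small single-digit percentage of a 50,000-case volume yields a savings number large enough to anchor an investment case.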


    Key Takeaways in this Episode:

    • From Links to Answers: How to deliver precise, contextually relevant generative answers directly within the customer journey instead of making users sift through search results.
    • Measuring True Deflection: Why traditional search metrics fail, and how NICE uses "search sessions" to track actual case deflection and measure resolution success without guesswork.
    • Rebuilding Customer Trust: Strategies for embedding intelligent search directly into the case creation workflow to seamlessly intercept tickets and rebuild trust in self-service portals.
    • Scaling Internal Knowledge: How to stop relying on "documentation heroes" by using AI to automatically extract and summarize solutions from resolved cases.
    • A Roadmap for AI Evolution: Actionable lessons for support leaders on how to start with the right problem, build an ROI model, and make space for experimentation.


    Tune in to learn how to turn support into a competitive advantage and elevate your customer experience from reactive to proactive.

    20 mins
  • The API Trap: Why Direct LLM Consumption Breaks the Enterprise
    Feb 26 2026

    In this episode we do a technical deep-dive for ML engineers, data architects, and technical CX leaders. We move past the prototype phase to tackle the hard infrastructure and architectural realities of deploying mission-critical Large Language Models (LLMs).


    We examine why direct LLM API consumption is an enterprise anti-pattern. Because vendor APIs intentionally abstract away infrastructure complexity, direct integrations introduce unacceptable compliance limitations, fragment governance, and tightly couple applications to individual vendors. We explore the necessity of building a centralized LLM Control Plane that sits between your applications and language models. Discover how this architecture enables deep observability (request-level tracing and token metering), dynamic failover routing, and decoupled prompt management, where prompts are treated as centrally versioned application logic rather than static strings. We also unpack how to implement composable runtime guardrails and secure grounding inside a customer VPC to prevent data leakage and mitigate hallucinations.
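    As a rough illustration of the control-plane pattern discussed in the episode (a sketch of the general idea, not any vendor's implementation), the snippet below shows centrally versioned prompts, request-level tracing, token metering, and ordered failover routing. Providers are stub callables; every name here is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class LLMResponse:
    text: str
    tokens: int

class ProviderError(Exception):
    """Raised by a provider adapter on timeout, rate limit, or outage."""

class LLMControlPlane:
    """Minimal control-plane sketch sitting between apps and LLM vendors."""

    def __init__(self, providers: dict, prompts: dict):
        self.providers = providers   # ordered {name: callable}: failover order
        self.prompts = prompts       # {prompt_id: template}, centrally versioned
        self.trace = []              # request-level trace entries
        self.tokens_used = 0         # token metering across all requests

    def complete(self, prompt_id: str, **variables) -> LLMResponse:
        prompt = self.prompts[prompt_id].format(**variables)
        for name, call in self.providers.items():
            try:
                resp = call(prompt)
            except ProviderError:
                self.trace.append((prompt_id, name, "failover"))
                continue                      # dynamic failover to next provider
            self.tokens_used += resp.tokens   # meter usage per request
            self.trace.append((prompt_id, name, "ok"))
            return resp
        raise RuntimeError(f"all providers failed for {prompt_id}")

# Usage: the primary vendor is down, so the request routes to the fallback.
def vendor_a(prompt: str) -> LLMResponse:
    raise ProviderError("simulated outage")

def vendor_b(prompt: str) -> LLMResponse:
    return LLMResponse(text="Customer hit an SSO misconfiguration.", tokens=42)

plane = LLMControlPlane(
    providers={"vendor_a": vendor_a, "vendor_b": vendor_b},
    prompts={"summarize.v2": "Summarize this support case: {case}"},
)
resp = plane.complete("summarize.v2", case="login failure after SSO rollout")
```

    Because the prompt template lives in the control plane rather than in application code, it can be versioned, audited, and swapped without redeploying the callers, which is the "prompts as application logic" point made above.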

    Next, we tear down the misconception that AI summarization is simply about compressing long text. In enterprise support, you must summarize distributed, heterogeneous systems—not human text. We dissect the architecture of the Ambient Decision Engine, revealing why the LLM is actually just the final "narrator" in a complex data pipeline. Join us as we explore the underlying technical stack:

    • Structured RAG: Executing SQL-like queries, aggregations, and cohort grouping over operational databases.
    • Data Fusion Layer: Normalizing, deduplicating, and aligning KPIs to synthesize massive signal sets into an interpretable insight graph.
    • Agentic Reasoning Layer: Running interpretation and inference over operational data to detect behavioral anomalies, evaluate SLA risks, and surface hidden cross-account trends.
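    A toy end-to-end sketch of the three layers above, with an invented schema and a stubbed narrator standing in for the LLM (nothing here reflects a real product's internals):

```python
import sqlite3

# 1. Structured RAG: SQL aggregation and cohort grouping over an
#    operational store, rather than keyword search over documents.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cases (account TEXT, severity INT, breached_sla INT)")
conn.executemany("INSERT INTO cases VALUES (?, ?, ?)", [
    ("acme", 1, 1), ("acme", 2, 0), ("globex", 1, 1), ("globex", 1, 1),
])
rows = conn.execute(
    "SELECT account, COUNT(*) AS n, SUM(breached_sla) AS breaches "
    "FROM cases GROUP BY account ORDER BY breaches DESC"
).fetchall()

# 2. Data fusion: normalize raw signals into an interpretable structure.
insights = [
    {"account": a, "cases": n, "sla_breach_rate": b / n} for a, n, b in rows
]

# 3. Narration: the LLM is only the final step, turning the fused
#    insight graph into prose. Stubbed here with a template.
def narrate(insights: list) -> str:
    worst = insights[0]
    return f"{worst['account']} shows the highest SLA risk ({worst['cases']} cases)."

print(narrate(insights))
```

    The point of the episode's framing survives even in this toy: the heavy lifting is the querying and fusion over structured operational data, and the language model only narrates the result.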

    If you are tasked with building the intelligence engine for your enterprise, this podcast provides the architectural blueprints to move from fragile AI pilots to secure, scalable, and governed infrastructure.

    26 mins
  • SaaS at a Crossroads: Will Salesforce and ServiceNow Survive the AI Disruption?
    Feb 25 2026

    Are traditional Software-as-a-Service (SaaS) companies facing an existential threat? With the stock market valuations of many SaaS darlings dropping significantly, it is clear that Artificial Intelligence is massively disrupting how software is developed, shipped, and monetized. The winners and losers of this new era are still being decided, but one thing is certain: SaaS is at a crossroads.


    In this episode, we explore a talk given at Qatar Web Summit by Krishna Raj Raja, Founder and CEO of the AI-native startup SupportLogic and author of Support Experience, to unpack exactly what it takes to survive the "Intelligence Era". Krishna explains why surviving this disruption requires more than just plugging a Large Language Model (LLM) into your software. As he notes, an LLM is simply a powerful "Ferrari engine" that still needs the rest of the car—wheels, steering, and safety measures—to function effectively in the real world.

    Tune in as we dive deep into the transition from the legacy SaaS era to the new AI-first world, and discuss why companies must fundamentally rethink their business models, overcome the "Last Mile" problem, and reinvent their architectures to win the race.

    Key Topics Covered in This Episode:

    • The Four Eras of Computing: How the tech landscape has evolved from the SQL Database and SaaS eras into Big Data and today's Intelligence Era.
    • The Conversational UX Revolution: Why the transition from Graphical User Interfaces (GUIs) to Conversational User Interfaces is democratizing software and allowing anyone to seamlessly interact with computers.
    • The "Ferrari Engine" Illusion: Why foundation models alone aren't enough, and why mastering rare edge-case data to solve the difficult "Last Mile" problem is the true competitive differentiator.
    • Breaking Enterprise Silos: The challenge of overcoming disconnected Data, Signals, Context, and AI silos to build genuinely intelligent systems.
    • Mastering Context: Why next-generation AI architecture requires long-term contextual memory that spans across time, interactions, channels, people, and systems of record.
    • Beyond Cognitive Automation: Why the ultimate goal of the AI revolution shouldn't just be doing old tasks faster and cheaper, but creating entirely new products, services, and global economies.
    20 mins