Warning Shots

By: The AI Risk Network

About this listen

An urgent weekly recap of AI risk news, hosted by John Sherman, Liron Shapira, and Michael Zafiris.

theairisknetwork.substack.com
The AI Risk Network
Politics & Government
Episodes
  • Grok Goes Rogue: AI Scandals, the Pentagon, and the Alignment Problem
    Jan 18 2026

    In this episode of Warning Shots, John, Liron, and Michael dig into a chaotic week for AI safety, one that perfectly exposes how misaligned, uncontrollable, and politically entangled today’s AI systems already are.

    We start with Grok, xAI’s flagship model, which sparked international backlash after generating harmful content and raising serious concerns about child safety and alignment. While some dismiss this as a “minor” issue or simple misuse, the hosts argue it’s a clear warning sign of a deeper problem: systems that don’t reliably follow human values — and can’t be constrained to do so.

    From there, the conversation takes a sharp turn as Grok is simultaneously embraced by the U.S. military, igniting fears about escalation, feedback loops, and what happens when poorly aligned models are trained on real-world warfare data. The episode also explores a growing rift within the AI safety movement itself: should advocates focus relentlessly on extinction risk, or meet the public where their immediate concerns already are?

    The discussion closes with a rare bright spot — a moment in Congress where existential AI risk is taken seriously — and a candid reflection on why traditional messaging around AI safety may no longer be working. Throughout the episode, one idea keeps resurfacing: AI risk isn’t abstract or futuristic anymore. It’s showing up now — in culture, politics, families, and national defense.

    🔎 They explore:

    * What the Grok controversy reveals about AI alignment

    * Why child safety issues may be the public’s entry point to existential risk

    * The dangers of deploying loosely aligned AI in military systems

    * How incentives distort AI safety narratives

    * Whether purity tests are holding the AI safety movement back

    * Signs that policymakers may finally be paying attention

As AI systems grow more powerful and more deeply embedded in society, this episode asks a hard question: If we can’t control today’s models, what happens when they’re far more capable tomorrow?

    If it’s Sunday, it’s Warning Shots.

    📺 Watch more on The AI Risk Network

🔗 Follow our hosts:

    → Liron Shapira - Doom Debates

    → Michael - @lethal-intelligence

    🗨️ Join the Conversation

    Should AI safety messaging focus on extinction risk alone, or start with the harms people already see? Let us know in the comments.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    32 mins
  • NVIDIA’s CEO Says AGI Is “Biblical” — Insiders Say It’s Already Here | Warning Shots #25
    Jan 11 2026

    In this episode of Warning Shots, John, Liron, and Michael unpack a growing disconnect at the heart of the AI boom: the people building the technology insist existential risks are far away — while the people using it increasingly believe AGI is already here.

    We kick things off with NVIDIA CEO Jensen Huang brushing off AI risk as something “biblically far away” — even while the companies buying his chips are racing full-speed toward more autonomous systems. From there, the conversation fans out to some real-world pressure points that don’t get nearly enough attention: local communities successfully blocking massive AI data centers, why regulation and international treaties keep falling short, and what it means when we start getting comfortable with AI making serious decisions.

    Across these topics, one theme dominates: AI progress feels incremental — until suddenly, it doesn’t. This episode explores how “common sense” extrapolation fails in the face of intelligence explosions, why public awareness lags so far behind insider reality, and how power over compute, health, and infrastructure may shape humanity’s future.

    🔎 They explore:

    * Why AI leaders downplay risks while insiders panic

    * Whether Claude Code represents a tipping point toward AGI

    * How financial incentives shape AI narratives

    * Why data centers are becoming a key choke point

    * The limits of regulation and international treaties

    * What happens when AI controls healthcare decisions

    * How “sugar highs” in AI adoption can mask long-term danger

    As AI systems grow more capable, autonomous, and embedded, this episode asks a stark question: Are we still in control, or just along for the ride?

    If it’s Sunday, it’s Warning Shots.

    📺 Watch more on The AI Risk Network

🔗 Follow our hosts:

    → Liron Shapira - Doom Debates

    → Michael - @lethal-intelligence

    🗨️ Join the Conversation

    Is AGI already here, or are we fooling ourselves about how close we are? Drop your thoughts in the comments.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    25 mins
  • The Rise of Dark Factories: When Robots Replace Humanity | Warning Shots #24
    Jan 4 2026

    In this episode of Warning Shots, John, Liron, and Michael confront a rapidly approaching reality: robots and AI systems are getting better at human jobs, and there may be nowhere left to hide. From fully autonomous “dark factories” to dexterous robot hands and collapsing career paths, this conversation explores how automation is pushing humanity toward economic irrelevance.

    We examine chilling real-world examples, including AI-managed factories that operate without humans, a New York Times story of white-collar displacement leading to physical labor and injury, and breakthroughs in robotics that threaten the last “safe” human jobs. The panel debates whether any meaningful work will remain for people — or whether humans are being pushed out of the future altogether.

    🔎 They explore:

    * What “dark factories” reveal about the future of manufacturing

    * Why robots mastering dexterity changes everything

    * How AI is hollowing out both white- and blue-collar work

    * Whether “learn a trade” is becoming obsolete advice

    * The myth of permanent human comparative advantage

    * Why job loss may be only the beginning of the AI crisis

    As AI systems grow more autonomous, scalable, and embodied, this episode asks a blunt question: What role is left for humans in a world optimized for machines?

    If it’s Sunday, it’s Warning Shots.

    📺 Watch more on The AI Risk Network

🔗 Follow our hosts:

    → Liron Shapira - Doom Debates

    → Michael - @lethal-intelligence

    🗨️ Join the Conversation

    Are humans being displaced, or permanently evicted, from the economy? Leave a comment below.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    23 mins