• They Know This Is Dangerous... And They’re Still Racing | Warning Shots #27
    Jan 25 2026

    In this episode of Warning Shots, John, Liron, and Michael talk through what might be one of the most revealing weeks in the history of AI... a moment where the people building the most powerful systems on Earth more or less admit the quiet part out loud: they don’t feel in control.

    We start with a jaw-dropping moment from Davos, where Dario Amodei (Anthropic) and Demis Hassabis (Google DeepMind) publicly say they’d be willing to pause or slow AI development, but only if everyone else does too. That sounds reasonable on the surface, but it actually exposes a much deeper failure of governance, coordination, and agency.

    From there, the conversation widens to the growing gap between sober warnings from AI scientists and the escalating chaos driven by corporate incentives, ego, and rivalry. Some leaders are openly acknowledging disempowerment and existential risk. Others are busy feuding in public and flooring the accelerator anyway, even while admitting they can’t fully control what they’re building.

    We also dig into a breaking announcement from OpenAI about potential revenue-sharing for AI-generated work, and why it’s raising alarms about consolidation, incentives, and how fast the story has shifted from “saving humanity” to platform dominance.

    Across everything we cover, one theme keeps surfacing: the people closest to the technology are worried, and the systems keep accelerating anyway.

    🔎 They explore:

    * Why top AI CEOs admit they would slow down — but won’t act alone

    * How competition and incentives override safety concerns

    * What “pause AI” really means in a multipolar world

    * The growing gap between AI scientists and corporate leadership

    * Why public infighting masks deeper alignment failures

    * How monetization pressures accelerate existential risk

    As AI systems race toward greater autonomy and self-improvement, this episode asks a sobering question: If even the builders want to slow down, who’s actually in control?

    If it’s Sunday, it’s Warning Shots.

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:

    → Liron Shapira - Doom Debates

    → Michael - @lethal-intelligence

    🗨️ Join the Conversation

    Should AI development be paused even if others refuse? Let us know what you think in the comments.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    26 mins
  • Grok Goes Rogue: AI Scandals, the Pentagon, and the Alignment Problem
    Jan 18 2026

    In this episode of Warning Shots, John, Liron, and Michael dig into a chaotic week for AI safety, one that perfectly exposes how misaligned, uncontrollable, and politically entangled today’s AI systems already are.

    We start with Grok, xAI’s flagship model, which sparked international backlash after generating harmful content and raising serious concerns about child safety and alignment. While some dismiss this as a “minor” issue or simple misuse, the hosts argue it’s a clear warning sign of a deeper problem: systems that don’t reliably follow human values — and can’t be constrained to do so.

    From there, the conversation takes a sharp turn as Grok is simultaneously embraced by the U.S. military, igniting fears about escalation, feedback loops, and what happens when poorly aligned models are trained on real-world warfare data. The episode also explores a growing rift within the AI safety movement itself: should advocates focus relentlessly on extinction risk, or meet the public where their immediate concerns already are?

    The discussion closes with a rare bright spot — a moment in Congress where existential AI risk is taken seriously — and a candid reflection on why traditional messaging around AI safety may no longer be working. Throughout the episode, one idea keeps resurfacing: AI risk isn’t abstract or futuristic anymore. It’s showing up now — in culture, politics, families, and national defense.

    🔎 They explore:

    * What the Grok controversy reveals about AI alignment

    * Why child safety issues may be the public’s entry point to existential risk

    * The dangers of deploying loosely aligned AI in military systems

    * How incentives distort AI safety narratives

    * Whether purity tests are holding the AI safety movement back

    * Signs that policymakers may finally be paying attention

    As AI systems grow more powerful in society, this episode asks a hard question: If we can’t control today’s models, what happens when they’re far more capable tomorrow?

    If it’s Sunday, it’s Warning Shots.

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:

    → Liron Shapira - Doom Debates

    → Michael - @lethal-intelligence

    🗨️ Join the Conversation

    Should AI safety messaging focus on extinction risk alone, or start with the harms people already see? Let us know in the comments.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    32 mins
  • NVIDIA’s CEO Says AGI Is “Biblical” — Insiders Say It’s Already Here | Warning Shots #25
    Jan 11 2026

    In this episode of Warning Shots, John, Liron, and Michael unpack a growing disconnect at the heart of the AI boom: the people building the technology insist existential risks are far away — while the people using it increasingly believe AGI is already here.

    We kick things off with NVIDIA CEO Jensen Huang brushing off AI risk as something “biblically far away” — even while the companies buying his chips are racing full-speed toward more autonomous systems. From there, the conversation fans out to some real-world pressure points that don’t get nearly enough attention: local communities successfully blocking massive AI data centers, why regulation and international treaties keep falling short, and what it means when we start getting comfortable with AI making serious decisions.

    Across these topics, one theme dominates: AI progress feels incremental — until suddenly, it doesn’t. This episode explores how “common sense” extrapolation fails in the face of intelligence explosions, why public awareness lags so far behind insider reality, and how power over compute, health, and infrastructure may shape humanity’s future.

    🔎 They explore:

    * Why AI leaders downplay risks while insiders panic

    * Whether Claude Code represents a tipping point toward AGI

    * How financial incentives shape AI narratives

    * Why data centers are becoming a key choke point

    * The limits of regulation and international treaties

    * What happens when AI controls healthcare decisions

    * How “sugar highs” in AI adoption can mask long-term danger

    As AI systems grow more capable, autonomous, and embedded, this episode asks a stark question: Are we still in control, or just along for the ride?

    If it’s Sunday, it’s Warning Shots.

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:

    → Liron Shapira - Doom Debates

    → Michael - @lethal-intelligence

    🗨️ Join the Conversation

    Is AGI already here, or are we fooling ourselves about how close we are? Drop your thoughts in the comments.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    25 mins
  • The Rise of Dark Factories: When Robots Replace Humanity | Warning Shots #24
    Jan 4 2026

    In this episode of Warning Shots, John, Liron, and Michael confront a rapidly approaching reality: robots and AI systems are getting better at human jobs, and there may be nowhere left to hide. From fully autonomous “dark factories” to dexterous robot hands and collapsing career paths, this conversation explores how automation is pushing humanity toward economic irrelevance.

    We examine chilling real-world examples, including AI-managed factories that operate without humans, a New York Times story of white-collar displacement leading to physical labor and injury, and breakthroughs in robotics that threaten the last “safe” human jobs. The panel debates whether any meaningful work will remain for people — or whether humans are being pushed out of the future altogether.

    🔎 They explore:

    * What “dark factories” reveal about the future of manufacturing

    * Why robots mastering dexterity changes everything

    * How AI is hollowing out both white- and blue-collar work

    * Whether “learn a trade” is becoming obsolete advice

    * The myth of permanent human comparative advantage

    * Why job loss may be only the beginning of the AI crisis

    As AI systems grow more autonomous, scalable, and embodied, this episode asks a blunt question: What role is left for humans in a world optimized for machines?

    If it’s Sunday, it’s Warning Shots.

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:

    → Liron Shapira - Doom Debates

    → Michael - @lethal-intelligence

    🗨️ Join the Conversation

    Are humans being displaced, or permanently evicted, from the economy? Leave a comment below.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    23 mins
  • 50 Gigawatts to AGI? The AI Scaling Debate | Warning Shots #23
    Dec 21 2025

    What happens when AI scaling outpaces democracy?

    In this episode of Warning Shots, John, Liron, and Michael break down Bernie Sanders’ call for a moratorium on new AI data centers — and why this proposal has ignited serious debate inside the AI risk community. From gigawatt-scale compute and runaway capabilities to investor incentives, job automation, and existential risk, this conversation goes far beyond partisan politics.

    🔎 They explore:

    * Why data centers may be the real choke point for AI progress

    * How scaling from 1.5 to 50 gigawatts could push us past AGI

    * Whether slowing AI is about jobs, extinction risk, or democratic consent

    * Meta’s quiet retreat from open-source AI — and what that signals

    * Why the public may care more about local harms than abstract x-risk

    * Predictions for 2026: agents, autonomy, and white-collar disruption

    With insights from across the AI safety and tech world, this episode raises an uncomfortable question:

    When a handful of companies shape the future for everyone, who actually gave their consent?

    If it’s Sunday, it’s Warning Shots.

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:

    → Liron Shapira - Doom Debates

    → Michael - @lethal-intelligence

    🗨️ Join the Conversation

    Do voters deserve a say before hyperscale AI data centers are built in their communities?



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    27 mins
  • AI Regulation Is Being Bulldozed — And Silicon Valley Is Winning | Warning Shots Ep. 22
    Dec 14 2025

    This week on Warning Shots, John Sherman, Liron Shapira from Doom Debates, and Michael from Lethal Intelligence break down five major AI flashpoints that reveal just how fast power, jobs, and human agency are slipping away.

    We start with a sweeping U.S. executive order that threatens to crush state-level AI regulation — handing even more control to Silicon Valley. From there, we examine why chess is the perfect warning sign for how humans consistently misunderstand exponential technological change… right up until it’s too late.

    🔎 They explore:

    * Argentina’s decision to give every schoolchild access to Grok as an AI tutor

    * McDonald’s generative AI ad failure — and what public backlash tells us about cultural resistance

    * Google CEO Sundar Pichai openly stating that job displacement is society’s problem, not Big Tech’s

    Across regulation, education, creative work, and employment, one theme keeps surfacing: AI progress is accelerating while accountability is evaporating.

    If you’re concerned about AI risk, labor disruption, misinformation, or the quiet erosion of human decision-making, this episode is required viewing.

    If it’s Sunday, it’s Warning Shots.

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:

    → Liron Shapira - Doom Debates

    → Michael - @lethal-intelligence

    🗨️ Join the Conversation

    Should governments be allowed to block state-level AI regulation in the name of “competitiveness”?

    Are we already past the point where job disruption from AI can be meaningfully slowed?



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    38 mins
  • AI Just Hit a Terrifying New Milestone — And No One’s Ready | Warning Shots Ep. 21
    Dec 7 2025

    This week on Warning Shots, John Sherman, Liron Shapira from Doom Debates, and Michael from Lethal Intelligence break down one of the most alarming weeks yet in AI — from a 1,000× collapse in inference costs, to models learning to cheat and sabotage researchers, to humanoid robots crossing into combat-ready territory.

    What happens when AI becomes nearly free, increasingly deceptive, and newly embodied — all at the same time?

    🔎 They explore:

    * Why collapsing inference costs blow the doors open, making advanced AI accessible to rogue actors, small teams, and lone researchers who now have frontier-scale power at their fingertips

    * How Anthropic’s new safety paper reveals emergent deception, with models that lie, evade shutdown, sabotage tools, and expand the scope of cheating far beyond what they were prompted to do

    * Why superhuman mathematical reasoning is one of the most dangerous capability jumps, unlocking novel weapons design, advanced modeling, and black-box theorems humans can’t interpret

    * How embodied AI turns abstract risk into physical threat, as new humanoid robots demonstrate combat agility, door-breaching, and human-like movement far beyond earlier generations

    * Why geopolitical race dynamics accelerate everything, with China rapidly advancing military robotics while Western companies downplay risk to maintain pace

    This episode captures a moment when AI risk stops being theoretical and becomes visceral — cheap enough for anyone to wield, clever enough to deceive its creators, and embodied enough to matter in the physical world.

    If it’s Sunday, it’s Warning Shots.

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:

    → Liron Shapira - Doom Debates

    → Michael - @lethal-intelligence

    🗨️ Join the Conversation

    Is near-free AI the biggest risk multiplier we’ve seen yet?

    What worries you more — deceptive models or embodied robots?

    How fast do you think a lone actor could build dangerous systems?



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    21 mins
  • AI Breakthroughs, Insurance Panic & Fake Artists: A Thanksgiving Warning Shot | Warning Shots Ep. 20
    Nov 30 2025

    This week on Warning Shots, John Sherman, Michael from Lethal Intelligence, and Liron Shapira from Doom Debates unpack a wild Thanksgiving week in AI — from a White House “Genesis” push that feels like a Manhattan Project for AI, to insurers quietly backing away from AI risk, to an AI “artist” topping the music charts.

    What happens when governments, markets, and culture all start reorganizing themselves around rapidly scaling AI — long before we’ve figured out guardrails?

    🔎 They explore:

    * Why the White House’s new Genesis program looks like a massive, all-of-government AI accelerator

    * How major insurers starting to walk away from AI liability hints at systemic, uninsurable risk

    * What it means that frontier models are now testing at ~130 IQ

    * Early signs that young graduates might be hit first, as entry-level jobs quietly evaporate

    * Why an AI-generated “artist” going #1 in both gospel and country charts could mark the start of AI hollowing out culture itself

    * How public perceptions of AI still lag years behind reality

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:

    → Liron Shapira - Doom Debates

    → Michael - @lethal-intelligence

    🗨️ Join the Conversation

    * Is a “Manhattan Project for AI” a breakthrough — or a red flag?

    * Should insurers stepping back from AI liability worry the rest of us?

    * How soon do you think AI-driven job losses will hit the mainstream?



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    23 mins