They Know This Is Dangerous... And They’re Still Racing | Warning Shots #27


About this listen

In this episode of Warning Shots, John, Liron, and Michael talk through what might be one of the most revealing weeks in the history of AI... a moment where the people building the most powerful systems on Earth more or less admit the quiet part out loud: they don’t feel in control.

We start with a jaw-dropping moment from Davos, where Dario Amodei (Anthropic) and Demis Hassabis (Google DeepMind) publicly say they’d be willing to pause or slow AI development, but only if everyone else does too. That sounds reasonable on the surface, but it actually exposes a much deeper failure of governance, coordination, and agency.

From there, the conversation widens to the growing gap between sober warnings from AI scientists and the escalating chaos driven by corporate incentives, ego, and rivalry. Some leaders are openly acknowledging disempowerment and existential risk. Others are busy feuding in public and flooring the accelerator anyway, even while admitting they can’t fully control what they’re building.

We also dig into a breaking announcement from OpenAI about potential revenue-sharing from AI-generated work, and why it’s raising alarms about consolidation, incentives, and how fast the story has shifted from “saving humanity” to platform dominance.

Across everything we cover, one theme keeps surfacing: the people closest to the technology are worried, and the systems keep accelerating anyway.

🔎 They explore:

* Why top AI CEOs admit they would slow down — but won’t act alone

* How competition and incentives override safety concerns

* What “pause AI” really means in a multipolar world

* The growing gap between AI scientists and corporate leadership

* Why public infighting masks deeper alignment failures

* How monetization pressures accelerate existential risk

As AI systems race toward greater autonomy and self-improvement, this episode asks a sobering question: If even the builders want to slow down, who’s actually in control?

If it’s Sunday, it’s Warning Shots.

📺 Watch more on The AI Risk Network

🔗Follow our hosts:

→ Liron Shapira - Doom Debates

→ Michael - @lethal-intelligence

🗨️ Join the Conversation

Should AI development be paused even if others refuse? Let us know what you think in the comments.



Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe