For Humanity: An AI Risk Podcast

By: The AI Risk Network

About this listen

For Humanity: An AI Risk Podcast is the AI risk podcast for regular people. Peabody, duPont-Columbia, and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly within 2-10 years. This podcast is solely about the threat of human extinction from AGI. We’ll name and meet the heroes and villains, explore the issues and ideas, and explain what you can do to help save humanity.

theairisknetwork.substack.com | The AI Risk Network
Social Sciences
Episodes
  • Why Laws, Treaties, and Regulations Won’t Save Us from AI | For Humanity Ep. 77
    Jan 17 2026

    What if the biggest mistake in AI safety is believing that laws, treaties, and regulations will save us? In this episode of For Humanity, John sits down with Peter Sparber, a former architect of Big Tobacco’s successful war against regulation, to confront a deeply uncomfortable truth: the AI industry is using the exact same playbook, and it’s working. Drawing on decades of experience inside Washington’s most effective lobbying operations, Peter explains why regulation almost always fails against powerful industries, how AI companies are already neutralizing political pressure, and why real change will never come from lawmakers alone. Instead, he argues that the only path to meaningful AI safety is making unsafe AI bad for business: injecting risk, liability, and uncertainty directly into boardrooms and C-suites. Peter reveals why AI doesn’t need to outsmart humanity to defeat regulation; it only needs money, time, and political cover. By exposing how industries evade oversight, delay enforcement, and co-opt regulators, this conversation reframes AI safety around power, incentives, and accountability.

    Together, they explore:

    * Why laws, treaties, and regulations repeatedly fail against powerful industries

    * How Big AI is following Big Tobacco’s exact regulatory playbook

    * Why public outrage rarely translates into effective policy

    * How companies neutralize enforcement without breaking the law

    * Why third-party standards may matter more than legislation

    * How local resistance, liability, and investor pressure can change behavior

    * Why making unsafe AI bad for business is the only strategy with teeth

    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    1 hr and 24 mins
  • What We Lose When AI Makes Choices for Us | For Humanity #76
    Dec 20 2025

    What if the greatest danger of AI isn’t extinction — but the quiet loss of our ability to think and choose for ourselves? In this episode of For Humanity, John sits down with journalist and author Jacob Ward (CNN, PBS, Al Jazeera; The Loop) to unpack the most under-discussed risk of artificial intelligence: decision erosion.

    Jacob explains why AI doesn’t need to become sentient to be dangerous — it only needs to be convenient. Drawing from neuroscience, behavioral psychology, and real-world reporting, he reveals how systems designed to “help” us are slowly pushing humans into cognitive autopilot.

    Together, they explore:

    * Why AI threatens near-term human agency more than long-term sci-fi extinction

    * How Google Maps offers a chilling preview of AI’s effect on the human brain

    * The difference between fast-thinking and slow-thinking — and why AI exploits it

    * Why persuasive AI may outperform humans politically and psychologically

    * How profit incentives, not intelligence, are driving the most dangerous outcomes

    * Why focusing only on extinction risk alienates the public — and weakens AI safety efforts

    👉 Follow More of Jacob Ward’s Work:

    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.

    #AISafety #AIAlignment #ForHumanityPodcast #AIRisk #ForHumanity #JacobWard #AIandSociety #ArtificialIntelligence #HumanAgency #TechEthics #AIResponsibility



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    1 hr and 20 mins
  • The Congressman Who Gets AI Extinction Risk: Rep. Bill Foster on the Future of Humanity | For Humanity | Ep. 75
    Dec 6 2025

    In this episode of For Humanity, John Sherman sits down with Congressman Bill Foster — the only PhD scientist in Congress, a former Fermilab physicist, and one of the few lawmakers deeply engaged with advanced AI risks. Together, they dive into a wide-ranging conversation about the accelerating capabilities of AI, the systemic vulnerabilities inside Congress, and why the next few years may determine the fate of our species.

    Foster unpacks why AI risk mirrors nuclear risk in scale, how interpretability is collapsing as models evolve, why Congress is structurally incapable of responding fast enough, and how geopolitical pressures distort every conversation on safety. They also explore the looming financial bubble around AI, the coming energy crunch from massive data centers, and the emerging threat of anonymous encrypted compute — a pathway that could enable rogue actors or rogue AIs to operate undetected.

    If you want a deeper understanding of how AI intersects with power, geopolitics, compute, regulation, and existential risk, this conversation is essential.

    Together, they explore:

    * The real risks emerging from today’s AI systems, and what’s coming next

    * Why Congress is unprepared for AGI-level threats

    * How compute verification could become humanity’s safety net

    * Why data centers may reshape energy, economics, and local politics

    * How scientific literacy in government could redefine AI governance

    👉 Follow More of Congressman Foster’s Work:

    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.

    #AISafety #AIAlignment #ForHumanityPodcast #AIRisk #FutureOfAI #AIandWarfare #AutonomousWeapons #AIEthics #TechForGood #ArtificialIntelligence



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    1 hr and 10 mins