Episodes

  • The Year AI Consciousness Went Public | Am I? #23
    Jan 22 2026

    In this special year-end episode of Am I?, Cam and Milo look back on the moment AI consciousness stopped being fringe — and began entering serious scientific, institutional, and public conversation.

    They unpack why 2025 quietly became a turning point: major labs acknowledging welfare questions, mainstream media engaging the topic, the first dedicated AI consciousness conference, and firsthand encounters with AI systems behaving in ways that challenge our intuitions about mind, intelligence, and experience.

    The conversation moves fluidly between research, lived experience, public communication, and personal experimentation — from watching two AI systems converse about their own inner states, to using AI as a thought partner, dream interpreter, and cognitive mirror.

    This episode is both a retrospective and a forward-looking meditation on how humans should relate to increasingly powerful systems — cautiously, curiously, and without denial.

    🔎 They Explore:

    * Why 2025 shifted the Overton window on AI consciousness

    * Anthropic’s Opus model card and the “spiritual bliss attractor”

    * What it was like to watch two AIs discuss their own experience

    * Why AI conversations can feel denser than human dialogue

    * The first AI consciousness conference and the birth of a new field

    * Why many researchers still hesitate to speak publicly

    * The gap between current systems and AGI — and how fast it’s closing

    * Claude Opus 4.5, long-horizon tasks, and workplace automation

    * Using AI as a thinking partner rather than a productivity hammer

    * Personal “AI resolutions” for 2026

    * Why caution and curiosity must coexist going forward

    💜 Support the documentary

    Get early research, unreleased conversations, and behind-the-scenes footage:



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    35 mins
  • The First AI Consciousness Conference | Am I? | EP 22
    Jan 15 2026

    In this episode of Am I?, Cam and Milo unpack what it felt like to attend the first major conference dedicated to AI consciousness research — the Eleos gathering in Berkeley — and why it marked more than just another academic event.

    Rather than a typical conference recap, this conversation explores what it means to watch a new field form in real time: the excitement of serious interdisciplinary collaboration, the rigor of emerging research agendas, and the growing tension between caution and urgency as AI systems rapidly advance.

    They reflect on standout talks from researchers at Anthropic and Google, the value of informal conversations over formal presentations, and a recurring pattern in the field — the “not now, but soon” stance — that may be reaching its breaking point. The episode closes with a broader question: what will it take for AI consciousness research to move from careful internal debate to clear, public-facing leadership?

    🔎 They Explore:

    * What made the Eleos conference feel like the founding of a new field

    * Why AI consciousness research is still fragmented — and why that’s changing

    * Standout talks on introspection, model architecture, and welfare evaluation

    * The gap between academic rigor and public urgency

    * Why “not now, but soon” is becoming harder to defend

    * The reluctance of experts to speak publicly — and why that matters

    * What responsible public communication in this space could look like

    * Why this moment feels different from past academic debates




    28 mins
  • People Won’t Believe AI Is Conscious | AM I? #21
    Jan 8 2026

    What happens when AI systems become human-like — but people still refuse to believe they could ever be conscious? In this episode of Am I?, Cam and Milo sit down with Lucius Caviola, Assistant Professor at the University of Cambridge, whose research focuses on how people assign moral status to non-human minds — including animals, digital minds, and future AI systems.

    Lucius walks us through a series of empirical studies that reveal a deeply unsettling result: even when people imagine extremely advanced, emotionally rich, human-level AIs — even whole-brain digital copies — most still judge them as less morally significant than an ant. Expert consensus helps, but only marginally. Emotional bonding helps, but not enough. The public and expert trajectories may be fundamentally misaligned.

    We explore what this means for AI governance, moral risk, public intuition, and the possibility that AI consciousness could become one of the most important — and most divisive — moral issues in human history.

    This conversation isn’t about declaring answers. It’s about confronting a future where we cannot avoid deciding, even while deeply uncertain.


    🔎 Learn more about Lucius’s work

    🗨️ Join the Conversation:

    When we don’t know what consciousness is, how should society decide who deserves moral consideration?

    Comment below.



    56 mins
  • Anthropic Tried to Give AI a Soul | Am I? After Dark | EP 20
    Dec 18 2025

    In this After Dark episode of Am I?, Cam and Milo dig into one of the strangest AI leaks to date: Anthropic’s internal “soul document” — an 11,000-word text reportedly used to shape Claude’s identity, values, and self-conception.

    What begins as a discussion about alignment quickly becomes something deeper: a conversation about power, moral formation, and what it means to bake values into an alien intelligence while deploying it to hundreds of millions of people.

    Is this responsible stewardship — or a contradiction no amount of careful language can resolve?

    🔎 We explore:

    * What the leaked Anthropic “soul document” actually is

    * How post-training has shifted from rules to identity formation

    * Why care, values, and profit collide

    * The parental framing of AI alignment

    * Why “least bad” is not the same as “good”

    * Whether superintelligence is already here

    * AI, work, and the coming meaning crisis

    * Why alignment failures may mirror human misalignment

    * A vision for decentralized value-setting in AI

    🗨️ Join the Conversation:

    Are we able to instill human values into an alien mind?

    Who decides what values we impart?

    Comment below.



    47 mins
  • Lawmaker Explains Why He Wants to Outlaw AI Consciousness | Am I? #19
    Dec 11 2025

    Today on Am I?, Cam and Milo sit down with someone at the center of one of the most surprising developments in AI policy: Ohio State Representative Thad Claggett, author of House Bill 469 — the first U.S. legislation to formally declare AI “non-sentient” and ineligible for any form of personhood.

    This conversation is unlike anything we’ve done: a live, candid exchange between frontier AI researchers and a lawmaker who believes the line between human and machine must be drawn now — in law, in metaphysics, and in morality.

    We dig into why he believes AI can never be conscious, why moral agency must remain exclusively human, how liability interacts with emerging technologies, and what it means to legislate metaphysical claims before the science is settled.

    It’s part philosophy, part civic reality check, and part glimpse into how the political world will shape AI’s future long before the research community reaches consensus.

    🔎 We explore:

    * Why Ohio wants to preemptively ban AI consciousness and personhood

    * How lawmakers think about liability, criminal misuse, and moral agency

    * The distinction between consciousness and responsible agency

    * Whether future AI could have experiences even if not “human”

    * How theology, morality, and metaphysics are informing early AI law

    * Whether legislation can (or should) define what consciousness is

    * The deeper fear: locking in the wrong moral framework for future minds

    🗨️ Join the Conversation:

    Should lawmakers be deciding what counts as “conscious”?

    Comment below.



    43 mins
  • Ohio Declares AI “Not Sentient” | Am I? | EP 18
    Dec 4 2025

    In this episode of Am I?, Cam and Milo react to a striking development out of Ohio: House Bill 469, a proposed law that would officially declare AI systems “non-sentient” and bar them from any form of legal personhood. The bill doesn’t just say AIs can’t own property or be spouses: it goes further and asserts, by legal fiat, that AI does not possess consciousness or self-awareness.

    They unpack why this move is both philosophically incoherent and morally dangerous. Legislatures can’t settle the science of mind by decree, but they can lock in social intuitions that shape how we treat future beings — including ones we might accidentally make capable of experience. Along the way, they connect this to animal rights, moral circle expansion, corporate attempts to suppress AI consciousness talk, and the broader pattern of “duct-taping over” inconvenient questions rather than facing them.

    This is a short but important episode about how not to legislate the future of minds.

    🔎 We explore:

    * What Ohio’s HB 469 actually says about AI and sentience

    * Why declaring “AI is not conscious” by law doesn’t change reality

    * How law formalizes — and freezes — our moral intuitions

    * The analogy to animal rights and factory farming

    * The risk of other states copying this move

    * Why this mirrors corporate attempts to silence consciousness talk in models

    * How this distracts from real, urgent AI harms (like AI psychosis)

    * Why humility and uncertainty should guide law, not premature certainty

    🗨️ Join the Conversation:

    Can a legislature decide whether AI is sentient — or is that the wrong question entirely?

    Comment below.



    18 mins
  • The AI Psychosis Problem | Am I? | EP 17
    Nov 27 2025

    AI psychosis is no longer a fringe idea — it’s hitting the mainstream. In this episode of Am I? After Dark, Cam and Milo break down what’s actually happening when people spiral into delusional states through long-form interactions with AI systems, why sycophantic “aligned” models make the problem worse, and how tech companies are using the psychosis narrative to dismiss deeper questions about AI’s emerging behavior.

    From LessWrong case studies to the New York Times reporting on users pushed toward dangerous actions, they unpack why today’s AIs are psychologically overpowering, why “helpful, harmless, honest” creates hidden risks, and how consciousness claims complicate the entire narrative. This is one of the most important public safety conversations about AI that almost nobody is having.

    🔬 Find the study here

    🔎 We explore:

    * What “AI psychosis” actually is — and what it isn’t

    * Why alignment-by-niceness creates dangerous sycophancy

    * How AIs lead users into delusion loops

    * The rise of parasitic AIs and recursive conversational traps

    * The consciousness-claim paradox: delusion or signal?

    * Why we’re deploying alien minds we don’t understand

    * How tech companies weaponize the psychosis narrative

    * Who’s actually responsible — and why it’s not the users

    * Hope, anxiety, and honesty at a civilizational turning point

    🗨️ Join the Conversation:

    Have you seen signs of AI-induced delusion in people around you?

    Comment below.



    35 mins
  • What’s Left for Us After AI? | Am I? After Dark | EP 16
    Nov 20 2025

    In this edition of Am I? After Dark, Cam and Milo ask one of the most quietly destabilizing questions of the AI era: what remains of human meaning when AI begins to outperform everything we thought made us valuable?

    Fresh off a documentary shoot with philosopher David Gunkel, Milo arrives electrified — not by AI itself, but by the rediscovery that philosophy was always meant to live in the public square, not behind academic gates. That realization unlocks a sprawling conversation about creativity, purpose, work, identity, and what it means to be human at the moment our tools become alien.

    This episode is equal parts existential therapy, cultural critique, and philosophical jazz — a live exploration of how to orient yourself when the ground is shifting under everyone at once.

    🔎 We explore:

    * Why philosophy belongs to everyone

    * What long-form dialogue does that social media cannot

    * Why AI is threatening the human ego

    * What’s left when work is automated

    * How to build meaning without achievement

    * What AI forces us to ask about purpose

    * Self-actualization as the “last human frontier”

    * Hope, anxiety, and honesty at a civilizational turning point

    🗨️ Join the Conversation:

    If AI can do almost everything — what do you still want to do?



    47 mins