Treating AI Like Software Is Dangerous
Nonprofits are being urged to adopt AI quickly, often with the same playbook used for past technology shifts: select tools, train staff, and adapt over time. This episode explores why that approach breaks down under AI—and why the risks aren't about staff readiness or technical skill.
The conversation examines how AI alters decision-making, accountability, and oversight inside nonprofit organizations. Rather than behaving like traditional software, AI reshapes who makes judgments, how consistency is maintained, and where responsibility ultimately sits. When these changes go unaddressed, governance legitimacy, operational coherence, and mission alignment quietly erode.
This episode is for executive directors, board members, and nonprofit leaders responsible for outcomes who sense that AI adoption feels different but lack a clear framework for understanding why. It focuses on governance as the starting point, not tools or training.
> If you want to hear the full explanation delivered directly, you can watch the original video here:
YouTube video: https://youtu.be/0ka9hVA3jP8
Note: This podcast episode is an AI-generated conversation created by Bright Nonprofit. The source material is a real YouTube video featuring a real person, Steve Vick, speaking in his own words on the Bright Nonprofit YouTube channel. The AI format is used to reflect on and discuss that original video content. No new ideas, arguments, or claims are introduced beyond what appears in the original video.