The Alignment Problem
- Machine Learning and Human Values
- Narrated by: Brian Christian
- Length: 13 hrs and 33 mins
- £0.00 for first 30 days
- Buy Now for £22.99
Summary
A jaw-dropping exploration of everything that goes wrong when we build AI systems and the movement to fix them.
Today’s “machine-learning” systems, trained by data, are so effective that we’ve invited them to see and hear for us - and to make decisions on our behalf. But alarm bells are ringing. Recent years have seen an eruption of concern as the field of machine learning advances. When the systems we attempt to teach will not, in the end, do what we want or what we expect, ethical and potentially existential risks emerge. Researchers call this the alignment problem.
Systems cull résumés until, years later, we discover that they have inherent gender biases. Algorithms decide bail and parole - and appear to assess Black and White defendants differently. We can no longer assume that our mortgage application, or even our medical tests, will be seen by human eyes. And as autonomous vehicles share our streets, we are increasingly putting our lives in their hands.
The mathematical and computational models driving these changes range in complexity from something that can fit on a spreadsheet to a complex system that might credibly be called “artificial intelligence.” They are steadily replacing both human judgment and explicitly programmed software.
In best-selling author Brian Christian’s riveting account, we meet the alignment problem’s “first-responders,” and learn their ambitious plan to solve it before our hands are completely off the wheel. In a masterful blend of history and on-the-ground reporting, Christian traces the explosive growth in the field of machine learning and surveys its current, sprawling frontier. Listeners encounter a discipline finding its legs amid exhilarating and sometimes terrifying progress. Whether they - and we - succeed or fail in solving the alignment problem will be a defining human story.
The Alignment Problem offers an unflinching reckoning with humanity’s biases and blind spots, our own unstated assumptions and often contradictory goals. A dazzlingly interdisciplinary work, it takes a hard look not only at our technology but at our culture - and finds a story by turns harrowing and hopeful.
What listeners say about The Alignment Problem
- C Vernon
- 17-10-20
Don't think machine learning's important? Read on.
Great overview of the current machine learning landscape and, importantly, how we got here. Useful sections on uncertainty and safety, with strong concluding remarks on models in general. The map is not the territory.
- Anonymous User
- 13-04-21
Absolutely Fantastic!
I've learned a lot about machine learning from this book. The level of detail is excellent whilst remaining accessible and engaging throughout. Well written and well read. Definitely the most interesting book I've listened to in a long time.
1 person found this helpful
- David Mears
- 08-06-21
5 stars because I want to relisten
Some earlier chapters were skippable. When I relisten, I'll see which chapters I marked for relistening.
- Scott Sampson
- 30-09-21
History of ML
A nice multidisciplinary history of ML, with philosophy and psychology as well as computer science.
- Daisy Welham
- 24-07-23
Excellent
A short history of AI, machine learning, and reinforcement learning. Touches on the philosophical, ethical, psychological, and political aspects of AI. A must-read for anyone on the cutting edge of tech, or with a casual interest in it.
- Jamie White
- 05-03-23
Exceeds all expectations
I loved “Algorithms to Live By” and expected this to be similar in scope, but it’s far broader and deeper. A rich, nuanced history of AI research so far, grounded in the alignment problem.
1 person found this helpful
- JPL
- 14-12-22
A must-read
AI is part of our lives now. Read this to understand it. Well written, and it explains complex topics well.
- Chika
- 14-01-24
More of the history than the theory
This is more about the history of the alignment problem than its theory, but it is highly instructive.
- Aris Georgopoulos
- 17-01-24
A must-read about AI ethics
Easy for non-experts to understand. Covers many topics and philosophical debates in AI and real life.
- Rene
- 13-04-22
Better than expected
A great book that touches on a lot of concepts and some underlying theory, mixed with examples and philosophy.
1 person found this helpful