Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life. The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful - possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.
But we have one advantage: We get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?
This profoundly ambitious and original book breaks down a vast track of difficult intellectual terrain. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.
PLEASE NOTE: When you purchase this title, the accompanying reference material will be available in your My Library section along with the audio.
©2014 Nick Bostrom (P)2014 Audible Inc.
Someone should have directed the narrator to give a less hammy and over-dramatic performance of what is a non-fiction book.
Had I known the book makes many references to figures in the print version, I wouldn't have downloaded.
This is an intelligent, passionate and thoughtful book for a general, educated audience. It's hard at times, but saving humanity usually is hard.
I've followed Bostrom's academic writing for some time on matters relating to existential risk. He's a cogent antidote to conspiracy theories, taking seriously our own and nature's capacity for human extinction.
This book outlines the likelihoods and timescales of different technologies creating an intelligence orders of magnitude beyond our own; the possible outcomes, good and bad, for humanity; and ways we can manage and mitigate the effects. In essence, its message is that sooner or later we will likely create an intelligence vastly beyond our own, and without careful planning (say, not encoding this intelligence to optimize what we humans care about: freedom of choice, minimizing pain, beauty, etc.) we could very likely be superseded, if not destroyed.
It's all speculative, of course, as is any book about the future. But it's foolish not to plan for rainy days. This is one of those books that humbles you; it makes your daily battle against confectionery, or anxiety over relationships, or vanity about your position in society seem petty.
I initially found the book tough to get into and even set it aside for a short time. I also wasn't sure about the narration (which sounds like Sideshow Bob).
However, on my second attempt I loved the book! The narration is lovely as well; it feels soothing once you get used to it.
This is one of those books that ends up being a lot more than its stated topic. It's also about us, politics, and future economics, which I found very insightful.
The language used and often-dry delivery leads to a very slow and largely uninteresting listen. The first section is more enjoyable than the rest of the book.
Realistic and scientific predictions of our exciting, sci-fi-like future. Very interesting, with none of the silly endings you get in movies.
It could be much shorter, as some parts are repeated several times, though the repetition probably helps with context.
I loved the first few chapters, this gave a great outline of where we are up to with the relevant technologies and what the obstacles are for progression. I would recommend buying the book for this even if it consisted of only the first 4-5 chapters.
My only criticism would be that the author fixates a little on the idea of a superintelligence with a very simple goal system, e.g. making as many paperclips as possible. My own view is that in the process of recursive self-improvement the AI's goal system would develop in line with the rest of its intellect, and it would end up with more sophisticated, not more simplistic, goals than humans. This could of course bring its own risks and is inherently unpredictable, but it doesn't necessarily equate to the default of existential catastrophe asserted in the book.
It is of course expected that there will be different views on this and that my own may be wrong.
"Colossus: The Forbin Project is coming"
This book is more frightening than any book you'll ever read. The author makes a great case for what the future holds for us humans. I believe the concepts in "The Singularity is Near" by Ray Kurzweil are mostly spot on, but the one area Kurzweil dismisses prematurely is how the SI (superintelligent advanced artificial intelligence) entity will react to its circumstances.
The book doesn't really dwell much on how the SI will be created. The author mostly assumes a computer algorithm of some kind, with perhaps human brain enhancements. If you reject such an SI entity prima facie, this book is not for you, since it mostly proceeds on the assumption that such a recursive, self-aware, and self-improving entity will be in humanity's future.
The author makes some incredibly good points. He mostly hypothesizes that the SI entity will be a singleton that does not allow others of its kind to be created independently, and that it will emerge on a much faster timeline once certain milestones are fulfilled.
The book points out how hard it is to put safeguards into a procedure to guard against unintended consequences. For example, making "the greatest good for the greatest number" the final goal can lead to unintended consequences such as allowing a Nazi-ruled world (he doesn't give that example directly in the book; I borrow it from Karl Popper, who gave it as a refutation of John Stuart Mill's utilitarian philosophy). If the goal is to make us all smile, the SI entity might make brain probes that force us to smile. There is no easy end goal specifiable without unintended consequences.
This kind of thinking within the book is another reason I can recommend it. As I was listening, I realized that all the ways we try to motivate or control an SI entity to be moral can also be applied to us humans in order to make us moral too. Morality is hard both for us humans and for future SI entities.
There's a movie from the early 70s called "Colossus: The Forbin Project", it really is a template for this book, and I would recommend watching the movie before reading this book.
I just recently listened to the book, "Our Final Invention" by James Barrat. That book covers the same material that is presented in this book. This book is much better even though they overlap very much. The reason why is this author, Nick Bostrom, is a philosopher and knows how to lay out his premises in such a way that the story he is telling is consistent, coherent, and gives a narrative to tie the pieces together (even if the narrative will scare the daylights out of the listener).
This author has really thought about the problems inherent in an SI entity, and this book will be a template for almost all future books on this subject.
"A must read that must be read slowly"
There is not much math in this book, and not many pictures or tables. Usually this is a good indicator that I'll be able to follow along in an audio version. That was not true of this book. I listen to audiobooks while doing menial tasks involving infrequent and brief moments of concentration; with most books I am able to do this easily, but this book requires some pondering and digestion. Any distraction seemed to be enough to miss something important. Perhaps some of this was due to the narrator's smooth baritone, which - for reasons I don't know - I didn't like. I plan on getting the hard copy and reading this one in silence. This book is definitely a must-read, but it also seems it must be read slowly. Put it down, think about it, talk about it with your friends, and then and only then move on to the next chapter.
"Interesting for sure, but kind of boring"
Every chapter is more or less the author proposing an idea or prediction, and then exhaustively defining and constraining the solution space for that idea, e.g. AI could be done via method X, which would enable A, B, C, D, but would exclude J, K, L, M, N, etc.
Except that each one is done over an hour.
So, every detail is treated very well, and it's an interesting process, but near the end I just couldn't take it any more and had to skip parts. :)
"Thorough, With a Mix of Excellence and Other"
The book is worth the listen because it is a very good and thorough exposition of one of the major technological problems and risks approaching us in the very near future. Anything that can bring popular awareness to this and similar issues is of great value.
On the downside, the author is so committed to maintaining a scholarly, non-committal tone that he fails to make definite statements about any topic, even when he could do so.
At times there are logical fallacies in the arguments, and assumptions about the nature of Artificial Intelligences that appear to be groundless, and are not supported by explanation.
There is also a tendency to quote and rely on a variety of "celebrity" experts whose track records in technology have more recently led them down alleys of almost clownish obsolescence in one case, and of over-confidence leading to fallacies and mistakes in their work in the other.
I would not take this book as 'gospel' on Super-Intelligence. Rather it is a worthwhile entry into the current fieldwork on the subject, such as it is.
"Pretty hard to listen to for more than a short time"
I wish it were otherwise, but it only takes a couple of minutes before my mind starts wandering and the narrator becomes idle background noise.
Read the book instead of listening to it.
The narrator speaks clearly and eloquently, but the tone and meter were just impossible for me to enjoy. He didn't appear to be at all interested in or passionate about the subject matter; instead he just sounded like he was reading a script full of Star Trek technobabble and was completely bored.
"Book About AI Narrated By AI"
I've maintained something of an interest in AI and decided that this book would allow me to go a bit more in-depth. Nope. Whatever degree is required to maintain and understand the analysis that Bostrom puts forward is one that I clearly lack. What I mean to say is that Superintelligence is drier than the Sahara and faaaar too long! Worst of all, the narration actually sounds robotic. Bad book, bad narration, bad choice.
"Brilliant and Terrifying."
Nick Bostrom's Superintelligence takes you on a journey through a sea of terminology and educated predictions to provide a stark and clear picture of the problems we face as a species as we approach the singularity. The book is easy enough to work through and much more theoretical and practical than technical. Absolutely worth a read/listen for anyone worried or curious about how, when, or why machine intelligence will change humanity.
"Speculation masquerading as Science"
The majority of Nick Bostrom's "SuperIntelligence" is conjecture, and much of it is not credible. The narrative lifts off from a thin crust of scientific fact, then tail-spins into unimaginative speculation: some AI projects might advance faster than others; governments would seek to gain control if they thought AI would present a strategic advantage; an AI might become so powerful so fast it could take over the world, resulting in a "singleton" new world order; and so on. The book reads like an index of unmoving science-fiction premises rather than a thought-provoking expedition over the landscape of possibility.
The author attempts to capture the complex challenges underpinning the development of Artificial Intelligence under the umbrella term "recalcitrance". This leads to absurd simplifications like "Rate of change in intelligence equals optimization power over recalcitrance" on which he bases his theories of an AI "explosion". One of the countless implausible possibilities the author describes is that AI projects could be moved to the cloud, as if they could be scaled as easily as websites.
Emperor of all Maladies
SuperIntelligence is rich with phrases like "neurotoxic pollutants" that Napoleon Ryan pronounces with an educated bearing that fits the narrative's imperious tone. The book sounds as if it were written, published and narrated by Orwell's Ministry of Information. The section that describes parental attitudes to improving children's intelligence via genetics is particularly callous.
One memorable insight is that pundits who predict the creation of an Artificial Intelligence "within twenty years" are safe in the knowledge that their careers will be over by the time their predictions are proved wrong. We can hope that Bostrom's AI explosion results in robots that can write books more credible, and less soulless, than this one.
"minutiae about a distant future"
Pretentious big words (and turning nouns into verbs and verbs into nouns) and unnecessary run-on sentences. Quite frankly, brilliant as an essay and very boring as a book.
"Mindblowing and easy to read"
Worth a read/listen if the prospect of AI both excites and scares you. The book gives a thorough look at all the different ways we might go about developing AI, and what might happen if we do succeed.