---
product_id: 75647185
title: "Superintelligence: Paths, Dangers, Strategies"
price: "€ 0.06"
currency: EUR
in_stock: false
reviews_count: 10
url: https://www.desertcart.be/products/75647185-superintelligence-paths-dangers-strategies
store_origin: BE
region: Belgium
---

# Superintelligence: Paths, Dangers, Strategies

**Price:** € 0.06
**Availability:** ❌ Out of Stock

## Quick Answers

- **What is this?** Superintelligence: Paths, Dangers, Strategies
- **How much does it cost?** € 0.06 with free shipping
- **Is it available?** Currently out of stock
- **Where can I buy it?** [www.desertcart.be](https://www.desertcart.be/products/75647185-superintelligence-paths-dangers-strategies)

## Best For

- Customers looking for quality international products

## Why This Product

- Free international shipping included
- Worldwide delivery with tracking
- 15-day hassle-free returns

## Description

Buy Superintelligence: Paths, Dangers, Strategies Unabridged by Bostrom, Nick, Ryan, Napoleon (ISBN: 9781501227745) from desertcart's Book Store. Everyday low prices and free delivery on eligible orders.


## Technical Specifications

| Specification | Value |
|---------------|-------|
| Best Sellers Rank | 4,189,440 in Books; 13 in Computer Science (Books) |
| Customer reviews | 4.3 out of 5 stars (4,717 ratings) |
| Dimensions  | 17.15 x 13.97 x 1.27 cm |
| Edition  | Unabridged |
| ISBN-10  | 1501227742 |
| ISBN-13  | 978-1501227745 |
| Item weight  | 99 g |
| Language  | English |
| Publication date  | 5 May 2015 |
| Publisher  | Audible Studios on Brilliance audio |

## Images

![Superintelligence: Paths, Dangers, Strategies - Image 1](https://m.media-amazon.com/images/I/71nSrc-pr3L.jpg)
![Superintelligence: Paths, Dangers, Strategies - Image 2](https://m.media-amazon.com/images/I/81Lv5qbeZ6L.jpg)
![Superintelligence: Paths, Dangers, Strategies - Image 3](https://m.media-amazon.com/images/I/410W9fPTkNL.jpg)

## Customer Reviews

### ⭐⭐⭐⭐⭐ A seriously important book
*by C***M on 11 July 2014*

Nick Bostrom is one of the cleverest people in the world. He is a professor of philosophy at Oxford University, and was recently voted 15th most influential thinker in the world by the readers of Prospect magazine. He has laboured mightily and brought forth a very important book, Superintelligence: paths, dangers, strategies. I hope this book finds a huge audience. It deserves to. The subject is vitally important for our species, and no-one has thought more deeply or more clearly than Bostrom about whether superintelligence is coming, what it will be like, and whether we can arrange for a good outcome – and indeed what “a good outcome” actually means. It’s not an easy read. Bostrom has a nice line in wry self-deprecating humour, so I’ll let him explain: “This has not been an easy book to write. I have tried to make it an easy book to read, but I don’t think I have quite succeeded. … the target audience [is] an earlier time-slice of myself, and I tried to produce a book that I would have enjoyed reading. This could prove a narrow demographic.” This passage demonstrates that Bostrom can write very well indeed. Unfortunately the search for precision often lures him into an overly academic style. For example, he might have done better to avoid using words like modulo, percept and irenic without explanation – or at all. Superintelligence covers a lot of territory, and there is only space here to indicate a few of the high points. Bostrom has compiled a meta-survey of 160 leading AI researchers: 50% of them think that an artificial general intelligence (AGI) – an AI which is at least our equal across all our cognitive functions – will be created by 2050. 90% of the researchers think it will arrive by 2100. Bostrom thinks these dates may prove too soon, but not by a huge margin.
He also thinks that an AGI will become a superintelligence very soon after its creation, and will quickly dominate other life forms (including us), and go on to exploit the full resources of the universe (“our cosmic endowment”) to achieve its goals. What obsesses Bostrom is what those goals will be, and whether we can determine them. If the goals are human-unfriendly, we are toast. He does not think that intelligence augmentation or brain-computer interfaces can save us by enabling us to reach superintelligence ourselves. Superintelligence is a two-horse race between whole brain emulation (copying a human brain into a computer) and what he calls Good Old Fashioned AI (machine learning, neural networks and so on). The book’s middle chapter and fulcrum is titled “Is the default outcome doom?” Uncharacteristically, Bostrom is coy about answering his own question, but the implication is yes, unless we can control the AGI (constrain its capabilities), or determine its motivation set. The second half of the book addresses these challenges in great depth. His conclusion on the control issue is that we probably cannot constrain an AGI for long, and anyway there wouldn’t be much point having one if you never opened up the throttle. His conclusion on the motivation issue is that we may be able to determine the goals of an AGI, but that it requires a lot more work, despite the years of intensive labour that he and his colleagues have already put in. There are huge difficulties in specifying what goals we would like the AGI to have, and if we manage that bit then there are massive further difficulties ensuring that the instructions we write remain effective. Forever. Now perhaps I am being dense, but I cannot understand why anyone would think that a superintelligence would abide forever by rules that we installed at its creation. A successful superintelligence will live for aeons, operating at thousands or millions of times the speed that we do. 
It will discover facts about the laws of physics, and the parameters of intelligence and consciousness that we cannot even guess at. Surely our instructions will quickly become redundant. But Bostrom is a good deal smarter than me, and I hope that he is right and I am wrong. In any case, Bostrom’s main argument – that we should take the prospect of superintelligence very seriously – is surely right. Towards the end of the book he issues a powerful rallying cry: “Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. … [The] sensible thing to do would be to put it down gently, back out of the room, and contact the nearest adult. [But] the chances that we will all find the sense to put down the dangerous stuff seems almost negligible. … Nor is there a grown-up in sight. [So] in the teeth of this most unnatural and inhuman problem [we] need to bring all our human resourcefulness to bear on its solution.” Amen to that.

### ⭐⭐⭐⭐ the great majority of the book is accessible to lay readers ...
*by M***E on 29 June 2017*

Superintelligence, Paths, Dangers, Strategies by Nick Bostrom, 2016 edition. A rigorous philosophical and ethical treatment of the subject. It demands quite an effort from the reader, but the more you are willing to make, the greater the reward. The formalist style and maths give it a textbook feel. Some of it was over my head, but don't be put off: the great majority of the book is accessible to lay readers, although some background in the subject would obviously help. A strong theme is the need for some overarching systems of control to protect us from undesirable behaviour by super-intelligent machines lest they misunderstand, whether accidentally or deliberately, the goals we set them. If that sounds too much like science fiction then reading the book might change your mind. Among the many topics addressed I found the whole brain emulation idea quite fascinating, also the notion of "mind crime" where inside a super-intelligent machine there is some kind of sentient being which could be exposed to mental suffering. That gives one pause for thought. I was expecting more about the architectures and software methods that are currently showing the most promise, but these are only mentioned indirectly; they are not the subject of this book. While I am in awe of the huge intellectual depth and span of this work, I reluctantly drop half a star (rounded to one) because of the almost obsessional academic style which starts to feel tedious and repetitive at times. I felt that he could get some of his arguments across more economically to greater effect. But the book is nevertheless a masterpiece on this subject and will likely be a reference for many years to come.

### ⭐⭐⭐⭐⭐ The AI threats are real
*by A***A on 29 January 2026*

Love this book. Anyone interested in AI should make sure they read this. The threats are real and clearly explained, no hype.

---

## Why Shop on Desertcart?

- 🛒 **Trusted by 1.3+ Million Shoppers** — Serving international shoppers since 2016
- 🌍 **Shop Globally** — Access 737+ million products across 21 categories
- 💰 **No Hidden Fees** — All customs, duties, and taxes included in the price
- 🔄 **15-Day Free Returns** — Hassle-free returns (30 days for PRO members)
- 🔒 **Secure Payments** — Trusted payment options with buyer protection
- ⭐ **TrustPilot Rated 4.5/5** — Based on 8,000+ happy customer reviews

**Shop now:** [https://www.desertcart.be/products/75647185-superintelligence-paths-dangers-strategies](https://www.desertcart.be/products/75647185-superintelligence-paths-dangers-strategies)

---

*Product available on Desertcart Belgium*
*Store origin: BE*
*Last updated: 2026-05-14*