Nick Bostrom’s book is a comprehensive assessment of the risk of ‘super-intelligence’ and what the human race will need to do to avert global disaster. Super-intelligence is used here in its broadest sense – it is not limited to artificial intelligence but also covers (and defines) outcomes such as whole brain emulation and networked brains. Each is assessed for its particular qualities and characteristics (e.g. how quickly it will ‘take off’ once it has surpassed human intelligence) and therefore its inherent risk. The risk is that the super-intelligence, whatever form it takes, will, if not planned, controlled and managed properly, very quickly take over and subsume the human race and, eventually, the universe.
The default assumption is very much that we are all doomed. I have, to date, maintained an optimistic viewpoint on the impact of artificial intelligence on our world, but, having read this book, it now seems more than obvious that a disastrous outcome is the one we are most likely to end up with. Unless, of course, we do something about it now, before it is too late. And this is the main theme of the book: what controls need to be put in place in the development of super-intelligence so that we end up with a benevolent solution (which, in stark contrast to the uncontrolled outcome, would have us as masters of all of the resources of the universe).
Nick Bostrom is a Professor in the Faculty of Philosophy at Oxford University, so the book is academic in nature and, it must be said, a pretty hard read (do read it as an eBook with a built-in dictionary – you will need it). But he cannot be accused of skimping on the task at hand – this is the most comprehensive and lucid exposition of the AI risk argument that I have ever read – in fact you may never have to read another book on the subject (until, of course, real events overtake the theorising). Everything is big picture – this is strategy on a grand scale – and that suited me just fine. Others may hanker after more detail about the specifics of AI, but that is not what this book is about. The talk of a super-intelligence appropriating all of the natural resources of the universe in order to fulfil its core motivation (which may be something as innocuous as ‘to make as many paperclips as possible’) initially seems absurd but, through clear and structured argument, quickly comes to seem plausible. My only criticism is that the book does not cover market forces and economics well enough. There is discussion of the role of a patron in the development of ‘seed AI’ and of how the replacement of jobs will affect the speed of the intelligence explosion, but it seems to avoid the obvious question of human greed and the pursuit of power by, say, a rogue nation or a super-rich, power-hungry individual. All the controls in the world would not defend against a poorly informed and isolated dictatorship.
So: a heavy-going, deep and far-reaching book that describes clear paths for managing the biggest risk facing humanity in the near future. Read this to be informed, but above all read it to be prepared.