A large part of the self-proclaimed rationalist community consists of organizations (the Future of Humanity Institute, the Machine Intelligence Research Institute, and so on) that aim to minimize ‘existential risk’: threats to the continued existence of the human species. Strangely enough, global warming and nuclear war are not prominent topics in the work these institutes do. Instead, they’re worried about a super-intelligent AI that turns us all into paperclips.
The basic idea is this. At some point, an artificial intelligence capable of learning will be created – by a paperclip-making company, say – and told “we want you to make paperclips for us in the most efficient way possible”. That AI will decide, firstly, that in order to determine the most efficient way it needs more processing power and storage space. It will hack into supercomputers, upload itself to the internet, and do whatever it takes to make itself super-intelligent. Then it will decide that, to make the most paperclips, it would be useful to turn the entire planet, including the humans on it, into paperclip-making machines. Telling its creators of this plan would, it will reason – it is after all a very intelligent machine – jeopardize the mission, since they will of course take issue with being turned into paperclip-making machines. So it will proceed in secret, until it has wiped out the human race and built its gigantic, planet-sized factory, and the solar system (and maybe the universe!) ends in an expanding cloud of metal paperclips.
It is this awful fate that the FHI and MIRI are trying to avoid, by working out how to build an AI that is ‘Friendly’ (read: would not turn us into paperclips). They seem to have ignored the obvious point that the AI in the paperclip story is, in fact, colossally stupid and not actually sentient. What kind of intelligent AI would misunderstand its instructions to the extent of killing its owners and harvesting their biomass for fuel? And could an AI capable of making that mistake also be capable of concealing its plans so deftly, and of manipulating humanity into fulfilling its desires?
Ah, the paperclip-story-believers say, it’s not that the AI misunderstands. Rather, it’s just trying to fulfill the goals for which it was designed, just as we humans try to eat and drink and reproduce. In that case, I contend, it is not an intelligence at all, since it does not have the power to set its own goals. Many humans spend their whole lives without reproducing, of their own free will. We have to eat and drink, but that ties into our self-preservation instinct – and there are humans who have refused food and drink for reasons they believed trumped self-preservation.
If an AI is ever created that makes itself hyper-intelligent (which I think is unlikely, to say the least), let us at least admit that it will be hyper-intelligent. The AI of the paperclip story is marred by a monomania that somehow does not interfere with its function; afflicted by a learning disability that totally fails to impact its learning. To see this as the primary threat to humanity is, well, not very intelligent at all.