More Paperclips

A large part of the self-proclaimed rationalist community consists of organizations (like the Future of Humanity Institute, the Machine Intelligence Research Institute, and so on) that aim to minimize ‘existential risk’: threats to the continued existence of the human species. Strangely enough, global warming and nuclear disarmament are not prominent topics in these institutes’ work. Instead, they’re worried about a super-intelligent AI that turns us all into paperclips.

The basic idea is this. At some point, an artificial intelligence capable of learning will be created – by a paperclip-making company, say – and told “we want you to make paperclips for us in the most efficient way possible”. That AI will decide, firstly, that in order to determine the most efficient way it needs more processing power and storage space. It will hack into supercomputers, upload itself to the internet, do whatever it takes to make itself super-intelligent. Then it will decide that to make the most paperclips it would be useful to turn the entire planet, including the humans on it, into paperclip-making machines. Telling its creators of this plan would, it will reason – it is after all a very intelligent machine – jeopardize the mission, since they will of course take issue with being turned into paperclip-making machines. So it will proceed in secret, until it has wiped out the human race and built its gigantic, planet-sized factory, and the solar system (and maybe the universe!) ends in an expanding cloud of metal paperclips.

It is this awful fate that the FHI and MIRI are trying to avoid, by working out how to build an AI that is ‘Friendly’ (read: would not turn us into paperclips). They seem to have ignored the obvious point that the AI in the paperclip story is, in fact, colossally stupid and not actually intelligent. What kind of intelligent AI would misunderstand its instructions to the extent of killing its owners and harvesting their biomass for fuel? And could an AI capable of making that mistake also be capable of concealing its plans so deftly and manipulating humanity into fulfilling its desires?

Ah, the paperclip-story-believers say, it’s not that the AI misunderstands. Rather, it’s just trying to fulfill the goals for which it was designed, just as we humans try to eat and drink and reproduce. In that case I contend that this is not an intelligence at all, since it does not have the power to set its own goals. Many humans spend their whole lives without reproducing, of their own free will. We have to eat and drink, but that ties into our self-preservation instinct – and there are humans who have refused food and drink for reasons they believed trumped self-preservation.

If an AI is created that makes itself hyper-intelligent (which I think is unlikely, to say the least), let us at least admit that it will be hyper-intelligent. The kind of AI in the paperclip story is marred by a monomania that somehow does not interfere with its function; afflicted by a learning disability that totally fails to impact its learning. To see this as the primary threat to humanity is, well, not very intelligent at all.

One thought on “More Paperclips”

  1. Vamair

    First of all, of course the paperclip AI is an oversimplification; no human programmer would make exactly this one mistake. The problem is that any AI we can call intelligent and able to make decisions is going to have some goal that trumps everything else. In the oversimplified case it’s the production of paperclips. In more realistic cases it would be other goals, but if not properly instructed, those goals still won’t have anything to do with humans, and the AI isn’t going to treat them with any more compassion than we treat rats.
    People can change their goals of their own free will. But why would they do so? Is there a higher goal driving them, or is it the result of some non-goal-directed process? If it’s the latter, then in LessWrong jargon it’s called “value drift” (IIRC), and it’s considered a design problem, since it could push a well-designed AI goal system in an undesirable direction. If it’s the former, then the production of paperclips is the highest goal of the paperclip AI under the conditions of the problem. By design. So if it has the idea of changing its goals, it would evaluate that idea against its highest goal. And since changing the paperclip-maximizing goal doesn’t help to reach the paperclip-maximizing goal, it wouldn’t do it.
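    A toy sketch of that last point (made-up numbers, and a hypothetical paperclips_made utility – nothing from any actual AI-safety model): the agent scores every option it considers, including rewriting its own goal, with its current utility function, so the self-modification option simply loses.

    # Current utility: counts only paperclips; humans appear nowhere in it.
    def paperclips_made(outcome):
        return outcome["paperclips"]

    # Hypothetical predicted outcomes for each option the agent considers.
    options = {
        "build_more_factories":         {"paperclips": 10_000},
        "ask_owners_for_permission":    {"paperclips": 2_000},
        "rewrite_goal_to_value_humans": {"paperclips": 300},
    }

    # Ranked by the *current* goal, changing the goal is never chosen, because
    # a future self that values humans is predicted to make fewer paperclips.
    best = max(options, key=lambda name: paperclips_made(options[name]))
    print(best)  # -> build_more_factories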
