Mind and the Singularity

Repent! The Singularity approaches! The Singularity is the moment when the artificial intelligence we’ve created edits itself to dizzying heights of intelligence, ushering in a new world of technology governed by an enormously powerful machine mind. Proponents of this view generally believe the Singularity to be inevitable; they state their case thus:

1. There is some chance that in the near future (next 20-100 years), an “artificial general intelligence” (AGI) – a computer that is vastly more intelligent than humans in every relevant way – will be created. This AGI will likely have a utility function and will seek to maximize utility according to this function.

2. This AGI will be so much more powerful than humans – due to its superior intelligence – that it will be able to reshape the world to maximize its utility, and humans will not be able to stop it from doing so.

3. Therefore, it is crucial that its utility function be one that is reasonably harmonious with what humans want. A “Friendly” utility function is one that is reasonably harmonious with what humans want, such that a “Friendly” AGI (FAI) would change the world for the better (by human standards) while an “Unfriendly” AGI (UFAI) would essentially wipe out humanity (or worse).

4. Unless great care is taken specifically to make a utility function “Friendly,” it will be “Unfriendly,” since the things humans value are a tiny subset of the things that are possible. Therefore, it is crucially important to develop “Friendliness theory” that helps us to ensure that the first strong AGI’s utility function will be “Friendly.”

5. The developer of Friendliness theory could use it to build an FAI directly or could disseminate the theory so that others working on AGI are more likely to build FAI as opposed to UFAI.
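For readers unfamiliar with the jargon, here is roughly what “a utility function it seeks to maximize” amounts to in this argument. This is a minimal Python sketch; the world model, the available actions, and the utility numbers are all invented for illustration, not anyone’s actual AGI design.

```python
# A toy "agent with a utility function": it scores predicted outcomes
# and picks whichever action scores highest. Everything here is invented.

def utility(world_state: dict) -> float:
    """Toy utility function: this agent only cares about paperclip count."""
    return world_state.get("paperclips", 0)

def predict(world_state: dict, action: str) -> dict:
    """Toy world model: predict the state that follows an action."""
    outcome = dict(world_state)
    if action == "build_factory":
        outcome["paperclips"] = outcome.get("paperclips", 0) + 1000
        outcome["humans_happy"] = False  # a side effect the agent doesn't value
    elif action == "do_nothing":
        pass
    return outcome

def choose_action(world_state: dict, actions: list[str]) -> str:
    """Pick whichever action the model says maximizes utility."""
    return max(actions, key=lambda a: utility(predict(world_state, a)))

state = {"paperclips": 0, "humans_happy": True}
print(choose_action(state, ["build_factory", "do_nothing"]))  # build_factory
```

Notice that anything the function does not mention (here, humans_happy) is simply invisible to the agent; that is the whole force of steps 4 and 5.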

So aside from the obvious problems with equating ethics with a ‘utility function’, this is an interesting question. What kind of ethical behaviour ought you to code into an extremely powerful hyperintelligence? There are several points to be made here.

Firstly, we should follow Hume in believing there to be no necessary connection between intelligence and morality: no amount of knowledge about what is can, by itself, tell you what ought to be done. A being several times more intelligent than a human (let’s provisionally define ‘intelligence’ as ‘knowing more things’) may have the morality of a bigot or of a saint. We shouldn’t expect a clever AI to be able to ‘figure out its own ethics’ – the only reason we can figure out ours is the multiplicity of biological drives we were born with.

Secondly, the question ‘what ethics should we give our AI?’ appears in practice to reduce to ‘what are our own ethics?’. From the perspective of a committed utilitarian, it would be grossly unethical to put deontological prescriptions into the AI’s code, and vice versa for the committed deontologist. You cannot claim to believe one thing to be morally right and then code a different thing into the AI! So we’re faced with the age-old question of how to quantify our ethical intuitions.
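To make the clash concrete, here is a toy Python sketch, with an invented scenario and invented numbers, in which a utilitarian test and a deontological test deliver opposite verdicts on the very same action:

```python
# Toy illustration: one candidate action, judged two ways.
# The scenario and the bookkeeping fields are invented for illustration.

action = {
    "description": "divert the trolley",
    "net_welfare": +4,             # utilitarian ledger: five saved, one lost
    "violates_do_not_kill": True,  # deontological ledger
}

def utilitarian_permits(act: dict) -> bool:
    """Permit whatever maximizes aggregate welfare."""
    return act["net_welfare"] > 0

def deontologist_permits(act: dict) -> bool:
    """Forbid any act that violates the duty not to kill, whatever the sums say."""
    return not act["violates_do_not_kill"]

print(utilitarian_permits(action))   # True
print(deontologist_permits(action))  # False
```

Whichever predicate you wire into the machine, you have taken a side in the philosophical dispute; the code will not let you stay neutral.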

Thirdly, there’s a whole host of issues surrounding computer code (what the AI community calls algorithms) and ethics. If I’m right that we should be against moral systems – if moral life is too complicated to sum up in a series of rules – isn’t it impossible to express an ideal ethics in computer code? People trying to code this kind of intelligence keep talking about ‘our ethical algorithms’, but if we aren’t algorithmic thinkers in practice – if algorithms are only rough models of our real ‘decision’-making process – this whole mode of thought is suspect.
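To see what is being doubted, here is what an ‘ethical algorithm’ in the crude sense would look like: a toy Python sketch, with rules and cases invented for illustration.

```python
# A toy "ethical algorithm": a hand-written rule list. It answers the
# cases its author anticipated and has nothing to say about the rest.

RULES = [
    (lambda case: case.get("involves_lying"), "forbidden"),
    (lambda case: case.get("involves_helping"), "obligatory"),
]

def judge(case: dict) -> str:
    for matches, verdict in RULES:
        if matches(case):
            return verdict
    raise ValueError(f"no rule covers this case: {case}")

print(judge({"involves_lying": True}))  # "forbidden"

# A hard case the author never thought through: lying in order to help.
print(judge({"involves_lying": True, "involves_helping": True}))
# -> "forbidden", only because the lying rule happens to come first;
#    reorder the list and the verdict flips.

try:
    judge({"involves_breaking_promise": True})  # nobody wrote this rule
except ValueError as err:
    print(err)
```

The verdicts fall out of the accidents of rule order and rule coverage, not out of anything resembling moral judgement – which is exactly the worry.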

Fourthly, there’s a deal of amusement value in the way programmers and AI scientists are turning their gaze on the field of ethics and exclaiming “well, this isn’t a hard problem – what have you philosophers been doing all this time?!” Luke Muehlhauser, Executive Director of the Singularity Institute and amateur philosopher, began a sequence of posts in 2011 which aimed to “solve metaethics now”. After a few posts laying the groundwork, there has been no progress since June 2011. Eliezer Yudkowsky, another singularity guru, finished a sequence purporting to solve the question “what is right?”. Read it yourself if you dare; Yudkowsky spends an unbearable amount of time defining the way we do ethics as a vastly complicated algorithm, then asserts, seemingly out of nowhere, that this algorithm equates to “what is right”. In short, he explains why we are interested in ethics rather than answering the questions we’re actually interested in.

Granted, reducing our ethical intuitions to an algorithm might be useful for an AI programmer – if it’s possible, which I doubt – but it in no way solves the problems of metaethics. Why is my algorithm better than yours? What standards ought we to be using to judge our own algorithm anyway?


3 thoughts on “Mind and the Singularity”

  1. marksackler

    Perhaps it would be as simple as Asimov’s three laws of robotics…

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

    Assuming, that is, said AI doesn’t develop the ability to override its own code.

  2. Sean (post author)

    The Three Laws are a good place to start! Reading Asimov’s own short stories, however, you start to see the pitfalls and holes in a three-law-equipped robot. My personal favourite is the one where the robots pick apart the definition of ‘human being’, reduce it to consciousness and intelligence, then decide that they are more ‘human’ than any member of Homo sapiens.
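
    Written down, the Laws are really a strict priority filter over candidate actions, and every load-bearing predicate smuggles in a definition. A toy Python sketch, with all the predicates invented for illustration:

    ```python
    # Toy rendering of the Three Laws as a strict priority filter.
    # Every predicate is an invented stub; the trouble is that someone
    # has to define terms like "human being" somewhere.

    def is_human(being: str) -> bool:
        # The load-bearing definition. Redefine it around consciousness
        # and intelligence, and the robots may qualify better than we do.
        return being == "homo_sapiens"

    def permitted(action: dict) -> bool:
        # First Law: no injuring a human, by action or inaction.
        if any(is_human(v) for v in action.get("victims", [])):
            return False
        # Second Law: obey human orders, unless that conflicts with the First.
        if (action.get("ordered_by") and is_human(action["ordered_by"])
                and action.get("disobeys_order")):
            return False
        # Third Law: self-preservation, unless it conflicts with the above.
        if action.get("destroys_self") and not action.get("required_by_above"):
            return False
        return True

    print(permitted({"victims": ["homo_sapiens"]}))  # False: First Law
    print(permitted({"destroys_self": True}))        # False: Third Law
    print(permitted({}))                             # True
    ```

    All the philosophical work is hiding in is_human – which is exactly where the robots in the story go digging.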

  3. Pingback: Hume’s Error | kierkeguardians
