Eliezer Yudkowsky Doesn’t Understand Ethics

So I was reading through the LessWrong archives after I wrote my last post, trying to figure out what precisely rubbed me up the wrong way about pretty much every article, when I stumbled upon this gem.

Now, would you rather that a googolplex people got dust specks in their eyes, or that one person was tortured for 50 years?  I originally asked this question with a vastly larger number – an incomprehensible mathematical magnitude – but a googolplex works fine for this illustration.

Most people chose the dust specks over the torture.  Many were proud of this choice, and indignant that anyone should choose otherwise:  “How dare you condone torture!”

Reader, you will not be surprised to learn that Yudkowsky comes down firmly on the side of torture, and is appalled that anybody would be so irrational as to not, as he puts it, “shut up and multiply”. This isn’t just biting the bullet on utilitarianism. This is going out and shooting yourself in the mouth, then insisting through the blood that you were right all along, and that mouth-shooting is the only rational thing to do under the circumstances.

The only argument – other than assertion – that Yudkowsky employs is a kind of Ship of Theseus paradox, a light finger tipping the scales one util at a time. Would you choose the dust specks if they were grains of sand instead? How about pebbles? How about boulders? Since we cannot pick a line, he claims, we must treat the grains of sand like boulders (with, of course, many more people needed to balance out the torture).

Any emotional revulsion we feel at the prospect of torturing one person to protect a googolplex of people from dust specks, Yudkowsky says, comes from our own selfish desire to avoid feeling guilt. Guilt is a natural product of behaving in a rational fashion! We have primitive animal brains, after all – we can’t expect them to react appropriately to ethical behaviour.

All this from the guy who argues that ethics can only be a model for what we value, and that there is no objective ethical standard we ought to respect. Here, off the top of my head, are a few problems with his speck-torture argument, any of which ought to be decisive:

  • Most people would consent to a speck in their eye to avoid somebody else being tortured. I know I would, and I can’t imagine there being a person who wouldn’t. An infinite number of people who blink away specks, if they consent to doing so, is of course preferable to one person being tortured.
  • There is such a thing as a threshold of pain, and while fifty years of torture crosses that threshold resoundingly, a speck in the eye does not even come close. Even an annoying grain of sand is qualitatively different from torture.
  • Yudkowsky offers no guideline for why we should accept the utilitarian conclusion here when it is patently against our intuitions. If we should reject our intuitions in this case, why stop there? Following Yudkowsky’s own speck-to-grain-to-boulder argument, there is no reason to keep any intuition – even those that justify utilitarianism.
  • This one may not be decisive, but Yudkowsky’s argument justifies the Repugnant Conclusion, which ought to give anybody pause.

Yudkowsky’s stab at philosophy would be grotesque if it weren’t so amusing to watch – and, given his criticism of most of philosophy as baseless and superseded by cognitive science, there’s an element of schadenfreude in watching his arguments fail.

11 thoughts on “Eliezer Yudkowsky Doesn’t Understand Ethics”

  1. Roxolan

    “Most people would consent to a speck in their eye to avoid somebody else being tortured. I know I would, and I can’t imagine there being a person who wouldn’t.”
    Out of 3^^^3 people though (assuming they’re not identical but follow the usual bell curves), I’d expect more people to refuse than there are atoms in the universe.

    Either way though, the argument is flawed. You might be able to find a hundred million people willing to spend one cent to give a starving man a meal, but this doesn’t mean that spending a million dollars on one starving man’s meal is a morally defensible use of available resources. Lots of people don’t “shut up and multiply” but that doesn’t mean shutting up and multiplying isn’t the right policy.

    “Even an annoying grain of sand is qualitatively different from torture.”
    I’m not sure you can make that case – though I’m willing to listen. If you have limited resources to spend to fix problems, then you have to use a single metric to compare them. And while it will probably tell you that torture is a *lot* worse than an annoying grain of sand, it can’t be *infinitely* bad (else you’d do literally anything to stop it), nor can the annoying grain of sand be zero bad. And at that point, I don’t see an alternative to shutting up and multiplying.

    “If we should reject our intuitions in this case, why stop there? Following Yudkowsky’s own speck-to-grain-to-boulder argument, there is no reason to keep any intuition – even those that justify utilitarianism.”
    This is discussed at length in the meta-ethics sequence.

    1. Sean Post author

      Hey Roxolan, thanks for your comment! It’s good to have a counterargument that I can – how do you say – ‘steelman’ and sink my teeth into.

      “Out of 3^^^3 people though (assuming they’re not identical but follow the usual bell curves), I’d expect more people to refuse than there are atoms in the universe.”
      Well, perhaps. I suppose I wasn’t so much saying that nobody would refuse as that nobody should refuse. To my mind, anybody who refuses to accept a grain of sand in their eye for the purpose of stopping torture would be morally in the wrong – to the extent that it would be ethical to force them to accept it. If you disagree with me on this point, I think you’re forced either into a hyper-libertarian position or a position where consent has no special place in morality. I believe both those positions to be untenable.

      “If you have limited resources to spend to fix problems, then you have to use a single metric to compare them.”
      Is ethics reducible to a simple allocation of limited resources to the world’s problems? If you’re director of a charitable institute, maybe – but for those of us living in the complex world of human interaction, I think not. Using a single metric is necessary in some circumstances, but it’s a necessary evil that blurs a lot of essential detail. Anybody who ranks all events in the world on a spectrum between “infinitely bad” and “zero bad” ought to be very, very careful about oversimplification.

      “And while it will probably tell you that torture is a *lot* worse than an annoying grain of sand, it can’t be *infinitely* bad (else you’d do literally anything to stop it).”
      Just wanted to point out that people do in fact do literally anything to stop torture. That is the point of torture, after all. Now sure, that doesn’t mean it’s infinitely bad – perhaps one person’s capacity to suffer can’t encompass infinite badness – but it is almost by definition the worst thing an individual can experience.

      As for Yudkowsky’s meta-ethics sequence, I have in fact read it! It was interminable and packed with the kind of jargon you might see in an MBA lecture. I don’t believe that Yudkowsky at any point adequately defends the reliability of any moral intuition. I think that while he may know the meaning of the word ‘normativity’, he sure doesn’t understand it.

      1. matt

        “To my mind, anybody who refuses to accept a grain of sand in their eye for the purpose of stopping torture would be morally in the wrong – to the extent that it would be ethical to force them to accept it.”
        What’s the point in arguing with you if you’ll resort to using violence to force me to agree with you? Vile.

        “but for those of us living in the complex world of human interaction, I think not. Using a single metric is necessary in some circumstances, but it’s a necessary evil that blurs a lot of essential detail. Anybody who ranks all events in the world on a spectrum between “infinitely bad” and “zero bad” ought to be very, very careful about oversimplification.”
        I see no arguments for the case you’re making (thinking one thing or the other is not an argument).

        “Just wanted to point out that people do in fact do literally anything to stop torture. That is the point of torture, after all.”
        You mean the person being tortured? I think the original poster meant that everyone should do literally anything to stop anyone being tortured – under the stated assumptions – which doesn’t make sense.

      2. Sean Post author

        Matt, when I say “force them to accept it” I mean force someone to accept a grain of sand in their eye to prevent the torture of someone else. Of course I don’t mean physically force you to accept my hypothetical.

        You say my argument against reducing all moral thought to points on a spectrum of badness fails. Well, it’s not really an ‘argument’. If you’re inclined to think that reductively, no argument’s going to change your mind. I was pointing out that moral life is far more complicated than that. If you want a mathematical analogy (since you seem to be a LessWrongite, of course you do), I’m saying that a complete pairwise ordering of moral situations is impossible. The relation aMb (where M means ‘is morally better than’ and a, b are states of the world) is not necessarily transitive or distributive; that is, aMb and bMc do not imply aMc. Nor do aMb and cMb imply (a&c)Mb.

        Of course you can say “well you’re just wrong! if you shut up and multiply it is so transitive and distributive!” But at that point you’re assuming utilitarianism to make it easier. Why on earth should I shut up and multiply? If you refer me to the godawful metaethics sequence again, I may scream.

      3. matt

        “Matt, when I say “force them to accept it” I mean force someone to accept a grain of sand in their eye to prevent the torture of someone else.”
        So you’re saying “we should use force on those who do not comply with my way of thinking,” yes/no?

        “You say my argument against reducing all moral thought to points on a spectrum of badness fails.”
        I did not. Read again. I said exactly what you said, that you made no arguments. You just said “well I think it’s like this!” without supplying a reason why anyone else should come to the same conclusion.

        “If you want a mathematical analogy (since you seem to be a LessWrongite, of course you do)”
        I fail to see the importance of your assumption about me as a person. I just want clarity.

        “I was pointing out that moral life is way more complicated than that.”
        No. You were asserting it. It makes little sense to assume the thing you’re trying to conclude.

        “The relation aMb (where M means ‘is morally better than’ and ‘a,b’ are states of the world) is not necessarily transitive or distributive”
        Still, you are just asserting possibilities, not arguing for the case. Yes it’s possible. So what? Unicorns are possible too.

        “Of course you can say “well you’re just wrong! if you shut up and multiply it is so transitive and distributive!” But at that point you’re assuming utilitarianism to make it easier. Why on earth should I shut up and multiply? If you refer me to the godawful metaethics sequence again, I may scream.”
        I have yet to make any novel claims or suggestions; so far I’ve just questioned your claims.

      4. Vamair

        “Is ethics reducible to a simple allocation of limited resources to the world’s problems? If you’re director of a charitable institute, maybe – but for those of us living in the complex world of human interaction, I think not.”
        I’d believe that yes it is. Could you please give me an example of a choice a person can make that is not about the allocation of limited resources, aka “what’s the best thing I can do in a given situation using what I have”? And what is ethics all about if it’s not about choices?
        If the relation aBb (a is better than b) is not transitive, then there exists some c such that aBb, bBc and cBa, where a, b and c are states of the world. If you agree that morality is about choices and not some metaphysical rules, then a person willing to act morally should be ready to sacrifice something x to go from a to b, something y to go from b to c and something z to go back to a. That means their ethics says it’s a good thing to sacrifice basically an infinity for pure nothing. Which intuitively seems much more wrong than utilitarianism.
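        Vamair’s cycle argument can be illustrated with a toy simulation (a hypothetical sketch – the states and costs are invented for illustration): an agent with cyclic preferences a < b < c < a, who will pay to move to any preferred state, can be walked around the cycle forever, ending exactly where it started but poorer with every lap.

```python
# Toy "money pump" for cyclic preferences: the agent prefers b to a,
# c to b, and a to c, and pays to move to any preferred state.
# States and per-step costs are hypothetical; only the structure matters.

def run_money_pump(laps, costs=(1, 1, 1)):
    """Walk the preference cycle a -> b -> c -> a `laps` times.

    Returns the final state and the total amount sacrificed.
    """
    trades = {"a": ("b", costs[0]), "b": ("c", costs[1]), "c": ("a", costs[2])}
    state, paid = "a", 0
    for _ in range(3 * laps):
        state, cost = trades[state]
        paid += cost
    return state, paid

final_state, total_paid = run_money_pump(10)
assert final_state == "a"   # back where it started...
assert total_paid == 30     # ...having sacrificed on every single step
```

        An intransitive “better than” relation thus lets you extract unlimited sacrifice while delivering no net improvement, which is the intuition Vamair is appealing to.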

    2. Chromotron

      I think you made a mistake here, Roxolan: you assume that any (utilitarian) metric takes only real values (and possibly infinity, but without clearly specifying what that means). You could just as well have different orders of relevance, e.g. where any finite number of dust specks is of lower relevance than any kind of torture, but without giving torture an infinite value. Also note that there could be different types of infinity.

      More mathematically: let epsilon be a number smaller than any positive real (see: http://en.wikipedia.org/wiki/Surreal_number), and associate this value with a dust speck. Also give, e.g., any fixed unit of monetary value a normative value of 1, and torture a real value as well. Then maybe some amount of money compensates for torture, but no finite number of dust specks ever compares to either of them. Similarly, you could arrange money, dust specks and torture on completely different levels.

      So in short, I claim that there is a missing argument for values behaving the way the original post assumes they do, however appealing or intuitive that may sound.
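      Chromotron’s “different orders of relevance” can be sketched without full surreal numbers by representing harms as tuples compared lexicographically (a simplified illustration of the idea, not the surreal-number construction itself):

```python
# Lexicographic badness: tier 0 counts torture-grade harm, tier 1 counts
# dust-speck-grade harm. Python compares tuples lexicographically, so no
# finite number of specks ever outweighs any amount of torture, yet
# torture's value remains finite (no "infinitely bad" needed).

def badness(torture_units, speck_units):
    """Two-tier harm value, compared lexicographically."""
    return (torture_units, speck_units)

assert badness(1, 0) > badness(0, 10**100)   # a googol of specks still loses
assert badness(2, 0) > badness(1, 10**100)   # specks never bridge the tiers
assert badness(1, 1) > badness(1, 0)         # within a tier, specks do count
```

      Whether moral value actually has this tiered structure is of course the point in dispute; the sketch only shows that such orderings are mathematically coherent.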

      1. Sean Post author

        “You could as well have different orders of relevance, e.g. where any finite amount of dust specks is of lower relevance than any kind of torture, but without giving torture an infinite value.”

        Yes, exactly. I think that basic utilitarianism assumes – for the sake of simplicity – one criterion of intrinsic value, then slips into claiming that such an assumption actually holds in the real world. The logical step here – G.E. Moore’s “Ethics” has a great section on this – rests on some dubious mathematics. In short, just because any morally preferable situation must contain some pleasure does not mean that the ‘morality’ of a situation is proportional to the pleasure contained within it.

  2. Zombie Joe

    What you’re doing here is protecting intuition against reason. And that makes perfect sense. We know that our reasoning is not perfect, so when we read something that runs strongly against our intuitions, it’s a good heuristic to refuse to believe it (even if convinced by the reasoning on a technical/formal level), because there’s a chance that we’re being manipulated and confused for some purpose. This approach has also been discussed at LessWrong.

    So in general, believing things that go against intuition can be dangerous, especially when we’re talking about moral issues, since those are topics where the writer may have an agenda.

    Specifically, Yudkowsky may be trying to trick you into opening your mind totally and removing any intuitive protective layers, and in that state of mind you will more easily let through the ideas of cryonics, transhumanism and the importance of donating for friendly AI research.

    1. M.L.

      Eliezer Yudkowsky doesn’t impress me at all.

      This dust specks versus torture ‘dilemma’ offers a nice illustration of how he operates.

      In the initial version of the dilemma, instead of simply saying “an infinite number of people” or “an unfathomably large number of people”, he spends a paragraph or so pretentiously establishing his very large number in terms of 3^^^3.

      So it’s clear from the word go that the real exercise here is for Eliezer to show us all how intelligent he is. Yawn.

      He then goes on to pose his inane pseudo-dilemma, initially leaving his opinion a ‘secret’ while saying he thinks the answer is “obvious”.

      Well, the correct answer is obvious (dust specks) and it’s also obvious from the outset that Eliezer is going to tell us that he thinks torture is the ‘obvious’ answer, which of course he will ‘explain’ using more convoluted and needlessly complicated language, again with the intention of showing off his intellect.

      And then you have the pathetic spectacle of all the Yudkowsky sycophants in the comments section marveling at his supernatural brilliance.

      Yudkowsky is like the Peter Sellers character Chauncey Gardiner in the movie “Being There”.

      But I digress…

      Reply
  3. Dave

    “There is such a thing as a threshold of pain, and while fifty years of torture cracks that threshold resoundingly, a speck in the eye does not even come close. Even an annoying grain of sand is qualitatively different from torture.”

    I think I understand the basis of what you’re saying here. The human body’s sense of pain can only be so acute, so there must be some minimum level of damage that must be done to even register as suffering in the mind. But whether a dust speck to the eye falls above or below that threshold isn’t the central concern; we’re here to debate ethics, not neuroscience or psychology. For the sake of intellectual charity, let’s just assume that a dust speck would inflict exactly one quantum of pain.

    Once the discomfort is great enough to be perceptible, I don’t see what qualitative difference could exist between a small object in the eye and torture. What morally relevant property of agony, other than its intensity, would you say minor discomfort lacks?
