The Mind is Not a Computer

There’s an essay by Ted Chiang (unfortunately not freely available on the internet) where he talks about folk biology: ideas about biology that are believed because they’re ‘just common sense’, whether they’re true or not. One example is the idea that you are what you eat, and that different kinds of meat will encourage different characteristics, with lion meat making you stronger and more aggressive, cow meat making you slower and more bovine, and so on. Obviously these beliefs are wrong – there’s no evidence for them, and besides their compatibility with lazy narrative thinking, no good reason to hold them. But folk biology that we know is wrong isn’t very interesting. What’s an example of folk biology that Chiang thinks we still hold onto? The idea, he writes, that the human brain is fundamentally like a computer.

Once you’re looking for it, this idea is everywhere. People talk about ‘smarter’ and ‘stupider’ computers, use words like ‘forget’ and ‘remember’ to describe computer memory, and so on. Even worse, we imagine our own brains as having finite memory resources (a belief that predated the advent of computers but was certainly encouraged by it), set processing power, and a decision-making faculty very much like a computer algorithm.

If you want the ur-example of this error, the great bulbous thing itself, take a look at this article from LessWrong (of course by Eliezer Yudkowsky). After a discussion of object-classification and neural nets, it ends up with this:

Before you can question your intuitions, you have to realize that what your mind’s eye is looking at is an intuition—some cognitive algorithm, as seen from the inside—rather than a direct perception of the Way Things Really Are.

People cling to their intuitions, I think, not so much because they believe their cognitive algorithms are perfectly reliable, but because they can’t see their intuitions as the way their cognitive algorithms happen to look from the inside.

And so everything you try to say about how the native cognitive algorithm goes astray, ends up being contrasted to their direct perception of the Way Things Really Are—and discarded as obviously wrong.

This is the least jargon-y section of the article. Strip away the talk of “native cognitive algorithms” and what you end up with is “people cling to their intuitions because their intuitions seem to them to be direct perceptions of the truth”. Which, if by ‘cognitive algorithm’ you mean intuition, is more or less the very claim – “people believe their cognitive algorithms are perfectly reliable” – that the passage set out to deny. It is bad writing, and it is bad writing because it clings to LessWrong’s computer-esque idiom.

What if the mind does not, in fact, run algorithms? You wouldn’t talk about a stone rolling down a hill “running algorithms” to determine the way in which it bounces, or a sunset running algorithms to determine the arrangement of colour in the sky. For all its self-proclaimed scientific rigour, the LessWrong picture of the mind is medieval: the world comes in through the eyes and ears, the little homunculus inside makes a decision, and the body moves.

Hume’s Error

More and more I see rationalist-fetish communities like LessWrong and the impractical, theory-driven trend of modern ethical philosophy as symptoms of the same problem. Unfortunately, one of the seeds of that problem is an error by David Hume, my second-favourite philosopher. The main reason I like Hume is that, unlike my favourite philosopher Kierkegaard, Hume was pretty much right about everything: political philosophy, epistemology, reason and passion, metaphysics. He had a tremendous ability to think hard about a problem and, rather than multiplying it, either solve it or dissolve it into an error of semantics or sloppy formulation. But where Hume slipped up was in his view of human psychology.

For all that Hume’s method took as its foundational principle the discovery and investigation of unspoken assumptions, his own assumptions about the way the brain works went unquestioned. He presupposed a Newtonian, atomic theory of mind: large thoughts composed of smaller ones, which in turn are built up from simple sentiments and impressions. Hume believed that all these mental objects were governed by laws in the same way that physical bodies are, and he devoted no small amount of time to uncovering those laws.

Hume is long dead – but today we have fools carrying his error so far that they describe the brain as like a computer (albeit a badly-designed one) and talk about uncovering the ‘algorithms’ that human thought uses. Next time I’ll go into a little more detail on why this is such a bad idea.

Moral Theory vs Moral Practice

For a modern reader, perhaps the biggest difference in ancient philosophy – and the one that most confuses students – is one of focus. When Plato and Socrates discussed ethics, they did so from a fundamentally practical perspective: how one might live a more ethical life. By contrast, the aim of modern ethicists is for the most part to construct a theory of ethics: a coherent set of principles and rules by which one might classify and judge ethical decisions.

Compare the major ethical theory of the ancient Greeks, virtue ethics, with the utilitarian ethics ascendant today. Virtue ethics admonishes us to cultivate virtues within ourselves: to pick a model and emulate them, to introspect and see to what extent we are brave, proud, compassionate and so on. Utilitarianism, by contrast, provides a rule – the most ethical action is the one that produces the greatest total satisfaction – and tells us to follow it.

Students of philosophy in general far prefer utilitarianism. It’s simpler to understand, less vague, almost scientific in its exactness, and – they say – even more practical. And yet it is manifestly not more practical for somebody looking to avoid evil actions. The smallest amount of introspection ought to convince you that you naturally make ethical decisions by thinking about what a good person would do, about what is cowardly, cruel, petty, and so forth; not by weighing up both sides of some kind of Benthamite calculus.
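
To make the contrast concrete, here is a toy sketch of what that Benthamite rule literally amounts to when written down – in Python, with the candidate actions and per-person satisfaction scores invented purely for illustration (no serious utilitarian would endorse anything this crude):

```python
# A toy Benthamite calculus: score each candidate action by the total
# satisfaction it produces across everyone affected, then pick the maximum.
# The actions and the per-person scores below are made up for illustration.
actions = {
    "keep the promise":  [+2, +1, -1],   # satisfaction change per person
    "break the promise": [+3, -2, -2],
    "lie about it":      [+4, -3, -3],
}

def most_ethical(candidates):
    """Return the action with the greatest total satisfaction."""
    return max(candidates, key=lambda action: sum(candidates[action]))

print(most_ethical(actions))  # -> keep the promise
```

The hard part, of course – where those numbers come from – is exactly what the calculus quietly assumes away.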

Where utilitarianism is more practical is in the judging of moral actions. We can’t look inside people’s heads and see their sentiments or motivations, but we can look (to an extent) at the consequences of their actions. It’s also more practical for constructing a theory of what makes an action moral. Virtue ethics’ answer to that question – that a right action is right because it is performed by a virtuous person – is intuitively unsatisfying. Utilitarianism’s answer, while bloodless and over-simple, seems a little less like a dodge.

Where am I going with this? Virtue ethics is perfect for people focusing on their own moral practice, trying to improve the way they treat themselves and others. Utilitarianism is perfect for people looking to construct a science of ethics, classifying everybody’s actions into one coherent system. Somewhere between Plato and Peter Singer we’ve moved from the first goal to the second. Why? Partly the meteoric rise of science (I choose that metaphor deliberately) and partly Hume’s biggest and perhaps only mistake. But that’s the subject for another post.

Against Hope

It is, of course, fundamentally naive to believe that any amount of effort and goodwill can cause change in our political system. When the changes inevitably come they will be the result of the titanic submerged forces of economics and demographics and so on, and they will in all likelihood not be for the better. Even people who know this occasionally slip into the fallacy of hope by expecting positive results from some single momentous event: global warming, say, or the advent of an AI-powered singularity. Unfortunately, these hopes are ill-founded. The political system, no matter what happens, will continue to favour the powerful over the powerless, to encourage mean-spiritedness and petty cruelty among its cheerleaders and enforcers, and to maintain a huge class of subjected poor who produce wealth they never see. That is what a political system is.

We can see this depressing truth play itself out by looking back on the industrial revolution and the practice of mechanization. Oscar Wilde imagined a future in which nobody had to work: uncomplaining mechanical slaves would do everything, and (beyond some occasional maintenance) everybody would be free to live lives of leisure. He was wrong, though: all mechanization did was increase the profits of those who owned the machines and make the labour pool smaller and hungrier.

Arthur Koestler imagined a theory of political-technological development modelled on a ship being raised through a series of canal locks. The ship represents the happiness and stability of society; the basins represent the level of technology. When the ship enters a new basin it sits, relative to the basin, very low in the water: the society is cruel and chaotic. Over time the basin fills and the ship rises – the society stabilizes – just in time to enter a new basin and its attendant evils. Even on Koestler’s optimistic view, technology advances but nothing ever changes.

Anarchism does not offer a solution to this depressing picture of the political world. In the realm of political philosophy it functions much as the Greek skeptics did in the realm of epistemology. The skeptical arguments mark out the boundary of certainty – this far we can philosophize, they say, but no further – and the limits they drew have stood for thousands of years. Anarchist arguments ought to do the same for political philosophy, providing a realist corrective to theories of the State.

Just as the skeptics free us in the realm of knowledge – we don’t need to achieve certainty, since it’s impossible – the anarchists free us in the realm of politics. Don’t give the bastards more of your time than you absolutely have to. Don’t feel guilty about not engaging more, about not doing more to change things – the system is unchangeable, and engaging with it will only poison you. Spend your time doing something more useful, like making art or blogging. Hope is a gift, of course, but in a genuinely hopeless situation hope all too often keeps people imprisoned, thinking that if they just believed harder or tried more they could fix the unfixable. I contend the world would be a better place if more people despaired about politics.

The State is Technology

I’ve given the game away in the title of this post, I know, but what is technology really? It’s not necessarily a device or machine: it can be as insubstantial as a concept or an idea. Foucault wrote about the “technology of the body”, the ways in which bodies can be manipulated and controlled. He thought that the concept of the “soul” was a technology that kept human bodies docile and compliant (putting it way too roughly) and that most social institutions (prisons, medicine, science) serve as ways of organizing and controlling bodies.

Foucault thought the same drive – to control bodies – was behind most cultural change. The abandonment of public execution, he wrote, was due not to a trend towards “humanization” but to the fact that public executions had stopped serving their initial purpose. They originally served as a way for the ruler to display his power and wrath to the people – mess with me and I’ll fucking torture you to death – which kept the people in awe and unrebellious. But the public execution also served as a meeting-ground for the lower classes, breeding a kind of solidarity. Occasionally people condemned to death were rescued by the crowd; often there were riots. Imprisonment and private execution eventually proved the more successful technology of power.

So yeah, the State – with its omnipresent surveillance, sprawling police force and foreign armies – is a kind of technology of power, evolved to keep as many human beings as possible channeling their wealth and power towards the institutions at the top. Against this backdrop, an anarchist faith in technology seems a little absurd.

Against Technology

A refrain you often hear when discussing politics/health/the future/anything at all with liberal friends is that science and technology offer solutions to the problems of the world. With enough investment in renewable energy, medicine or artificial intelligence, we can end poverty and usher in a utopian age of peace and plenty. This reminds me of Oscar Wilde’s prediction that mechanized labour would mean nobody had to work, ushering in a utopian age of peace and plenty – which was half right, if you interpret “nobody has to work” as “nobody gets to work” and “peace and plenty” as “unemployment and misery”.

Of course, under anarchist analysis – and maybe socialist analysis, who knows – technology is a tool, and tools get used by the people in power to perpetuate their power. With the exception of a few isolated innovations – the Internet can be a venue for rejecting or living without power for now, guys – advances in technology come hand in hand with advances in inequality.

Is anarchism inherently anti-technology? There does exist a singularitarian anarchism, of the kind found in Iain M. Banks’ Culture series, where everybody does whatever they want because all-powerful AI overlords look after the plumbing. Singularitarian anarchism is kind of silly, though, and the Culture is overly aspirational, barring a massive cultural shift. Doesn’t technology change culture for the better? My gut response is: no, culture adapts to better exploit technology. I might be wrong, though.

Blog Update!

I’ve done a little update of this blog to include a Stories & Poetry tab in the header. Click over to read a few things I’ve had published in the past few years (and hopefully that list will grow over the next few). The most recent publication – Making Minds, a short sci-fi story soon to be published in Planet Magazine – isn’t readable yet. Hang tight: as soon as I get a link I’ll post it up.

Edit: The link is up!