The Clockwork Orange Fallacy

I’ve been reading a science fiction novel called Diaspora, by Greg Egan. The novel is initially set about a thousand years in the future, which is enough time to allow Egan to postulate all sorts of things without really having to explicitly delve into the morality of gene-splicing or consciousness transferral, etc… However, those sorts of questions emerge anyway because we, the readers, are still living our contemporary lives, where these issues are as relevant as ever.

The novel begins in a “polis”, which is basically a network of artificial consciousnesses. Some of these are humans who have uploaded themselves; others are entirely artificial. Alternatively, a lot of people have apparently transferred themselves into human-shaped robots called gleisners. Regular human beings are still around, and they’re referred to as “fleshers” (for obvious reasons). At this point, there are tons of genetically altered humans, to the point where many of the variants can no longer communicate with one another (another class of humans, calling themselves “bridgers”, has been bred specifically to solve the problem of communication). Humans without any sort of genetic tampering are referred to as “statics”, and they don’t seem to be doing well.

In the story, the industrious but apparently suspicious gleisners have discovered an odd astrophysical event which could prove disastrous to Earth (at least, to its fleshers). Two of the characters go down to the planet to warn the fleshers, but they’re met with paranoia and disbelief. One of the characters, Yatima, is a completely random mutation from a polis (he has no “parents”, even artificial ones). Or I should say “ve”, since the citizens seem to be quasi-asexual – though even the artificial pronouns sometimes seem to have a gender connotation, but that’s a different discussion. In any case, Yatima is having some trouble understanding the objections to his suggestion that anyone who wants to can upload themselves to his polis. In the scene below, he’s speaking with a static human and Francesca, who is a human bridger.

He gazed down at them with a fascinated loathing. ‘Why can’t you stay inside your citadels of infinite blandness, and leave us in peace? We humans are fallen creatures; we’ll never come crawling on our bellies into your ersatz Garden of Eden. I tell you this: there will always be flesh, there will always be sin, there will always be dreams and madness, war and famine, torture and slavery.’

Even with the language graft, Yatima could make little sense of this, and the translation into Modern Roman was equally opaque. Ve dredged the library for clarification; half the speech seemed to consist of references to a virulent family of Palestinian theistic replicators.

Ve whispered to Francesca, dismayed, ‘I thought religion was long gone, even among the statics.’

‘God is dead, but the platitudes linger.’ Yatima couldn’t bring verself to ask whether torture and slavery also lingered, but Francesca seemed to read vis face, and added, ‘Including a lot of confused rhetoric about free will. Most statics aren’t violent, but they view the possibility of atrocities as essential for virtue – what philosophers call “the Clockwork Orange fallacy”. So in their eyes, autonomy makes the polises a kind of amoral hell, masquerading as Eden.’ (page 119 in my edition)

The reference to A Clockwork Orange was interesting, as this isn’t a novel that’s been filled with pop culture references, but the concept itself is a common theme in SF (and, for that matter, philosophy). It’s not hard to see why, especially when it comes to something like a polis. What does morality mean in a polis? A consciousness living in a polis is essentially living in an entirely virtual environment – there are minimal physical limits, property doesn’t really exist as a concept, and so on. The inhabitants of any given polis are modeled after humans, in a fashion, and yet many of our limitations are not applicable. Some polises have a profound respect for the physical world around them. Others have retreated into their virtual reality, some going as far as abandoning the laws of physics altogether in an effort to better understand the elegance of mathematics. Would it be moral to upload yourself into a polis? Or would that be the coward’s way out, an evasion of the responsibility that free will entails? Would one still have free will if their consciousness were run by a computer? Once in a polis, is it necessary to respect the external, natural world? Could anything be gained from retreating into pure mathematics? Egan doesn’t quite address these issues directly, but this sort of indirect exploration of technological advancement is one of the things that the genre excels at.

Strangely, one of the things that seems to take on a more dangerous tone in the world of Diaspora is the concept of a meme (for more on this, check out this post by sd). The way ideas are transmitted and replicated among humans isn’t especially well understood, but it can certainly be dangerous. Egan is pretty clearly coming down against the humans who don’t want to escape to the polis (to avoid disaster), and he seems to blame their attitude on “a virulent family of Palestinian theistic replicators”. This sort of thing seems even more dangerous to an artificial consciousness, though, and Egan even gives an example. These AI consciousnesses can run a non-sentient program called an “outlook”, which monitors the consciousness and adjusts it to maintain a certain set of values (in essence, it’s Clockwork Orange software). In the story, one character shows Yatima the outlook they’ve adopted:

It was an old outlook, buried in the Ashton-Laval library, copied nine centuries before from one of the ancient memetic replicators that had infested the fleshers. It imposed a hermetically sealed package of beliefs about the nature of the self, and the futility of striving … including explicit renunciations of every mode of reasoning able to illuminate the core belief’s failings.

Analysis with a standard tool confirmed that the outlook was universally self-affirming. Once you ran it, you could not change your mind. Once you ran it, you could not be talked out of it.

I find this sort of thing terrifying. It’s almost the AI equivalent of being a zombie. If you take on this outlook, why even bother existing in the first place? I guess ignorance is bliss…

In case you can’t tell, I’m very much enjoying Diaspora. I’m still not finished, but I only have a little more than a hundred pages left. It’s not much of a page-turner, but that’s more because I have to stop every now and again to consider the various questions that have arisen than because of any lack of quality (though I will note that Egan is probably not a gateway SF author – he certainly doesn’t shy away from the technical, even in extremes). I’ll probably be posting more when I finish the book…

2 thoughts on “The Clockwork Orange Fallacy”

  1. Hrmm. Think the author might be trying to say something there? Or not – the point of the passage quoted could be to show how a computer would interpret human psychology. Either way, I haven’t read the book, although it sounds interesting. I think the pronouns would frustrate me, though.

    Many sci-fi authors have explored belief systems and how messing around with physics and biology might (or might not) screw these systems up. Personally I think “self-affirming outlooks” are inherent to human beings, with the caveat that we constantly adjust or even radically change the details. The psychological construct itself remains the same, though.

  2. Egan certainly doesn’t seem to have much love for religion, but the way he phrased things does seem a bit interesting.

    I think the “self-affirming outlooks” thing in humanity isn’t nearly as scary. People change. Sometimes people change just out of boredom or laziness. Other times it might take a big change in someone’s life or even a tragedy, but people change. The scary thing is that the “self-affirming outlook” software I mentioned from the book is irreversible. It’s the same concept, but at least the human brain is flexible enough to change if necessary (err, usually – there are obviously situations where the brain is damaged somehow, whether from direct physical damage or drugs or whatever). The software thing seems less flexible (at least, the way it’s presented in the book).
