Do we need a Butlerian Jihad?

Above: a war robot, intent on slaughtering anyone it finds, from "Metalhead", an episode of the latest series of Black Mirror.

When I was a boy, I did not see the threat in artificial intelligence that scientists and theorists such as the late Stephen Hawking (1942 - 2018) seemed so paranoid about. Killer robots? The extermination or enslavement of the human race? It seemed like a sci-fi fantasy. Real robots appeared to me to be crude, malfunction-prone, boring, and disappointing. Creating robots as sophisticated as those of science fiction seemed a long way off, and nothing to worry about in the meantime. Artificial intelligence appeared to be in many ways a positive development (for example, war robots could reduce the need for human soldiers, and therefore reduce military casualties in conflicts).

This was, of course, a stupid way of thinking; I was not thinking with foresight. Robots today may indeed be crude, malfunction-prone, boring, and disappointing, but the same could have been said of the Wright Brothers' first aeroplane in 1903, a flimsy wooden thing which flew for less than a minute. Yet little more than a decade later, aeroplanes were already dogfighting in the skies of the Great War and the first air forces were being formed, and one of the Wright Brothers (Orville) actually lived to see the age of supersonic jets and the first spacecraft (the German V-2 in 1944).

Human evolutionary progress by natural selection was always slow (taking place over the course of millions of years), but it has surely been slowed down even more by the advents of civilisation, the Industrial Revolution, and welfarism - how can natural selection take place any longer when the weak are sustained by comfortable standards of living, advanced medical technology, charity, and public welfare? If anything, the process may be reversed, as sensible, intelligent (though, in my belief, sexually immoral) people restrict the number of children they have through new artificial means (such as the rubber condom and the pill) while more unintelligent sorts in their stupidity neglect to do so (as the 2006 film Idiocracy portrays). Don't worry, I'm not some sort of Social Darwinist eugenicist. Of course the poor should be provided for - the only sort of eugenics I support is smart people having even more children! My point is that in this age, more than in any other, human evolution is happening slowly, if at all.

But technological evolution, fed by the forces of the free market and, especially with regard to military technology, the interests of competing sovereign states, is faster than ever (this is why things get outdated so very quickly these days). Yes, there is the example of the Wright Brothers living to see the age of aerial warfare, supersonic jets, and spacecraft, but do not neglect to marvel at the speedy development of the humble car, the telephone, the computer, and industrial weaponry in the last century alone. What will an aeroplane, car, telephone, computer, or weapon look like a century hence? It's unimaginable, but some of us may live to see it; as medical technology likewise evolves and improves, supercentenarians are becoming more common.

What I'm trying to prove, therefore, is that artificial intelligence is a genuine issue not just for our descendants (whom we should care about anyway), but possibly for ourselves too, because technology is evolving far faster than humanity. It is therefore inevitable (and to a large extent already happening) that we will create technology greater than ourselves. And that raises the question of natural selection - do we really want to create our superiors? At the moment technology is not so much of a threat, as it has not yet developed a fully independent decision-making capacity - but what if it did? And, as is the case with every other aspect of technology, what if its artificial intelligence evolved faster than human intelligence, and the machines became superior to us in this regard too?

Someday - at the rate we're going, probably a lot sooner than you might think - superior artificial intelligence will be developed, and if we do not figure out our response to it now, we will be forced to figure it out then (when it may be too late...).

We will be able to create things other than, and superior to, ourselves. This is really how the issue of artificial intelligence can be summarised. Computers are already far more quick-witted than human beings (try doing all the calculations a mere pocket calculator can do in the same amount of time), and machines can be far stronger, quicker, and more physically capable. Thanks to the speed of technological evolution I see no reason why this will not become truer and truer until computers are independent decision-making geniuses and robots are self-sufficient, agile, dangerous, mega-strong Herculeans. And this will be all the worse when the genius computers and the Herculean robots are combined into single super-smart, super-strong machines. And what if these all-powerful machines make the decision, with their independent artificial intelligence, to kill all humans, having either malfunctioned or been programmed that way by a psychopathic or warmongering creator? We wouldn't stand a chance.

Yes, it sounds like something out of science fiction, but there is no reason, on our current trajectory, why it couldn't come to pass (and indeed, the whole wonder and importance of science fiction as a genre is to predict and preempt such possibilities). No reason at all. For the thinking machines will be superior to us in every way but, perhaps, one: it should not be taken for granted that they will be programmed with a moral code, or that such programming will never malfunction.


In "Metalhead", an episode of Charlie Brooker's excellent TV show Black Mirror, in a not-too-distant future, near-indestructible military robots which have either malfunctioned or been programmed by an enemy power, go about their business mercilessly slaughtering anyone they can find. This particular malfunction (if it is a malfunction in the context of the episode) doesn't seem so unlikely to me - program a droid to kill a particular sort of human (e.g. a Russian) and it may well mistake any other person for being in that category; we are all human, after all (although this is set to change...).

The danger comes not only from metal machines, but biological creations as well. As Blade Runner (based on Philip K. Dick's Do Androids Dream of Electric Sheep?) and Blade Runner 2049 speculate, we could create bio-robots just as super-smart, super-strong, and (especially, in my view, if we tampered with their brains) conscienceless as the theorised metal "killer robots".

As well as creating new synthetic bio-robots like the replicants of the Blade Runner films, we could also tamper with existing human beings, through drugs, bioengineering, and/or cybernetics, to create transhumans, cyborgs, and post-humans, again, stronger and more intelligent than us, but not necessarily any more moral. An example in science fiction would be the Cybermen of Doctor Who, which are emotionless and merciless but also physically and mentally superior to us (and unfortunately, emotion and mercy don't win wars). Other examples would include the Borg of Star Trek (TNG onwards), and the splicers and Big Daddies of BioShock.

To summarise: artificial intelligence is concerning because we will be able to create things mentally and physically superior to ourselves; if endowed with a capacity for independent decision-making, these things may well decide to act to the detriment of the human race, and as their mental and physical inferiors we will be powerless to stop them.

"So what?" you might say. "This is evolution. It is natural for homo sapiens to be replaced with its superiors". To which I would respond, I don't care if biological homo sapiens is replaced - being human is about more than being a mere bipedal ape. What concerns me is whether our successors will themselves be human, for being human is infinitely more valuable than being mentally or physically advanced. It is how spiritually advanced we are which is what really matters. Only a human mind is made in the image of God and therefore sacrosanct. Hence, I would like it to continue, and not be given an artificial replacement. Only a human mind with free will can have a human conscience.¹

Thus, we must judge our created successors primarily on how human they are, rather than on how intelligent, powerful, or efficient. Based on this criterion, metal machines are immediately out of the running, as I do not believe that they can have consciousness. As I put it in a previous article, a machine doesn't "even have a 'self' at all, being merely a collection of different and ultimately predictable mechanisms (all the high-tech machines and robots humankind has yet created are but highly complex series of dominoes. There is no soul or consciousness in a line of dominoes, no matter how long and complicated it is)". Put simply, machines are without souls. Therefore, a world populated solely by them would be a blind, empty, lifeless one.

It might be acceptable, on the other hand, for human bio-robots, cyborgs, and bioengineered humans to replace us, so long as their minds remained basically untouched and still wholly human. I do believe that such creatures could have consciousness, as they would be biological, living animals, with living cells and therefore embodied souls; but to alter or create their brains would be a highly dangerous thing to do, with potentially dehumanising consequences (as we have already discovered with psychopath-producing mind-altering drugs).

But what are we to do about the metal machines and the fundamentally inhuman bio-robots, cyborgs, and bioengineered post-humans - the creatures which do not meet this criterion of being fundamentally human? In the backstory to a book of mine that is soon to be published, Man and all his inhuman creations fight a great war for Darwinian survival. The only reason Man wins this war against his superiors is that some of his creations remain loyal (although in real life, were such a war or disagreement to take place, such loyalty could not be guaranteed). Human victory results in the extermination of all the inhuman creations which pose a threat to humanity's survival, along with the resources and technology used to make them, so that they can never be recreated.

After I had written this, I remembered that Frank Herbert (1920-86) had come up with a similar event for the backstory of his 1965 science fiction novel Dune. This event, known as the Butlerian Jihad (or the Great Revolt)², was a violent revolution against all the "computers, thinking machines, and conscious robots" which will come to dominate humans' lives in the far future (a process already beginning today). The machines were annihilated and, as decreed by the Orange Catholic Bible (the universal religious text), it became forbidden to create such inventions ever again. A universal Empire was established to uphold this commandment, and a newly emancipated humankind learnt to live without artificial intelligence. Computers had been necessary for space travel, so to replace them the Spacing Guild's Navigators ingested a drug called melange, which made them as intelligent and quick-thinking as the artificial computers had ever been - to the extent of being able to calculate the future - so that they could navigate space. The Mentats and the Bene Gesserit "witches" likewise used drugs and mind-training to become, in effect, "human computers".

I'm not sure that a fully-fledged Butlerian Jihad will be necessary in the future, so long as most thinking machines are programmed with loyalty to humanity and with human morality. To ensure that the machines can never all turn against humanity at once, control over them must be as decentralised as possible. For example, were the thinking machines of the world to be centralised into a single artificially intelligent Network (like some sort of Skynet), and were that Network to decide to harm humanity, we would be doomed, with all the machines against us and none on our side. The thinking machines must also be built with as much variety as possible so that, for instance, an anti-human computer virus or hack would not be compatible with (and therefore could not spread through) them all, and some could remain loyal - a point the toy sketch below illustrates.
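To make that last point concrete, here is a minimal simulation of my own devising - purely an illustrative sketch, with made-up numbers, not anything drawn from Herbert or from real-world AI engineering. It assumes a hypothetical fleet of machines divided evenly among some number of mutually incompatible designs, and a single exploit written against just one of those designs; the more designs there are, the smaller the share of the fleet one attack can compromise.

```python
import random

random.seed(42)  # fixed seed so the toy run is repeatable

FLEET_SIZE = 10_000  # hypothetical number of thinking machines

def simulate(num_designs: int) -> int:
    """Return how many machines one exploit compromises when the fleet
    is spread across `num_designs` mutually incompatible designs."""
    # Assign each machine one of the available designs at random.
    fleet = [random.randrange(num_designs) for _ in range(FLEET_SIZE)]
    # The exploit targets one design only; it cannot run on the others.
    targeted_design = random.randrange(num_designs)
    return sum(1 for design in fleet if design == targeted_design)

for designs in (1, 2, 10, 100):
    compromised = simulate(designs)
    print(f"{designs:>3} design(s): {compromised:>5} of {FLEET_SIZE} machines compromised")
```

With a single design - the centralised Network case - the one exploit compromises the entire fleet; with a hundred designs it compromises only around one percent of it, leaving the rest free to remain loyal. That is the whole argument for variety in miniature.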

In any case, this article is intended as a warning. Be wary of artificial intelligence. It may sound like goofy science fiction now, but our worst AI fears will someday be possible, and when that day arrives we should be prepared.


¹This is the theme of the Cybermen of Doctor Who - they are humans who have technologically enhanced themselves to become mentally and physically stronger, but at the cost of their humanity.
²Unfortunately, Frank Herbert's Butlerian Jihad is slightly less effective than the great war I theorised, in that the resources and technology needed to recreate thinking machines are never totally eliminated. Fringe societies such as the Ixians and Bene Tleilax eventually begin to develop mechanical and biological technologies which, if they do not actually transgress the commandment of the Jihad, at least come very close.