A Paperclip Civilisation

There’s a tale that engineers tell their children before bedtime, to scare them into behaving themselves. It originates in a 2003 paper by the philosopher Nick Bostrom, Ethical Issues in Advanced Artificial Intelligence:

The risks in developing superintelligence include the risk of failure to give it the supergoal of philanthropy. One way in which this could happen is that the creators of the superintelligence decide to build it so that it serves only this select group of humans, rather than humanity in general. Another way for it to happen is that a well-meaning team of programmers make a big mistake in designing its goal system. This could result, to return to the earlier example, in a superintelligence whose top goal is the manufacturing of paperclips, with the consequence that it starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities. [emphasis mine]

The end point of this: a superintelligence that transforms the entire spacetime continuum into paperclips, including you, me and Mother McGee. Sends a shiver down your spine, doesn’t it?

We can argue back and forth about the chances of any such thing happening, although I’m not convinced that “the chances of any such thing happening” is a meaningful concept. Bostrom makes clear that it’s a thought experiment rather than a forecast; and rather obviously so, to the extent that it fails to stick the landing. An “intelligence” dedicated to turning spacetime into paperclips is not an “intelligence” in any meaningful sense; rather, it’s an algorithm on singularity steroids, which strikes me as significantly less of a threat than Skynet.

However, we can agree that if it did happen it would be a Bad Thing, and you can understand why people such as Bostrom want to make sure that it doesn’t, even if you don’t think the chances are very high. You can understand why there are people dedicating their careers (and, it often seems, their lives) to preventing artificial intelligence from being the end of us. The field of Friendly AI is not exactly crowded – much to the chagrin of those involved in it, who fail to see why the rest of us aren’t dedicating most of the resources of civilization to the subject – but it is committed.

What’s puzzling is that these same people – who I fully accept are more intelligent than me on most measures – have not noticed that such an intelligence already exists, and is busily paperclipping the planet around them.

***

In his 2003 paper Bostrom defined a superintelligence as

an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.

In his 2014 book Superintelligence he identified three different types of superintelligence – speed, collective and quality – and defined collective superintelligence as

A system composed of a large number of smaller intellects such that the system’s overall performance across many very general domains vastly outstrips that of any current cognitive system.

In discussing collective superintelligence he acknowledged that humankind as a whole – what we commonly refer to as civilization – constituted an intelligence, and that

the threshold for collective superintelligence is indexed to the performance levels of the present—that is, the early twenty-first century… current levels of human collective intelligence could be regarded as approaching superintelligence relative to a Pleistocene baseline. Some improvements in communications technologies—especially spoken language, but perhaps also cities, writing, and printing—could also be argued to have, individually or in combination, provided super-sized boosts, in the sense that if another innovation of comparable impact to our collective intellectual problem-solving capacity were to happen, it would result in collective superintelligence.

This formulation makes it clear that superintelligence is a continuum rather than a binary, and that it is perfectly valid to argue that our current model of civilization can to some extent already be called a collective superintelligence.

***

Let’s accept, then, that we live in the midst of a collective superintelligence – a very limited superintelligence by Bostrom’s standards, but still one that has achieved a huge amount in every area of human endeavour. Even if you have serious beef with human civilization, our very own superintelligence has delivered significant improvements on nearly every metric of human well-being. We can say that this – the improvement of human well-being – is the basis of the goal system of the algorithm of civilization. There are different opinions about how to achieve those goals, but the algorithm of civilization has explored, and will continue to explore, multiple paths to achieve them.

By now you’ll have worked out where this argument leads. Our civilizational superintelligence has achieved its goals, but in the process it has degraded the planetary fabric – treating the substance of the planet itself, whether land, water, soil, sand or oil, merely as raw material for achieving those goals. As far as the planet is concerned, this is not in any meaningful sense different from the paperclip AI that Bostrom fears.

So far civilization has not been able to extend its paperclipping to “increasing portions of space” – but that is entirely due to a lack of technological capability and economic incentive. As we paperclip the earth, both of those lacks are being addressed, and civilization is turning its algorithm towards space. Proposals to mine asteroids for the raw materials civilization needs to replicate itself show this most clearly, as do proposals to send probes further afield in search of the same. Efforts to colonise Mars are couched in terms of the survival of civilization in the event of a global catastrophic event, and their plans reveal that the goal is essentially to rinse and repeat.

Returning to earth: the paperclipping of the planet by our civilizational superintelligence has proceeded to the point where the planet may not recover and the civilizational superintelligence may destroy itself. The planet will forge ahead, and doubtless the remaining degraded and fragmented human societies will begin again, but overall the experiment will have been a failure. Even Bostrom accepts this as an existential risk, one of several that he and his colleagues at the Future of Humanity Institute in Oxford are concerned about, but he also sees superintelligence as humanity’s best hope of avoiding such an existential crisis.

***

Why, then, does Bostrom fail to recognise that civilization is a series of paperclips, and that a superintelligence is already transforming all of earth into paperclips – as per his own description?

People who focus on Friendly AI aren’t tilting at windmills. It is a real thing. But they remind me of the Peace Corps volunteers listening to Ivan Illich as he pointed out that, instead of going off to fix other countries, they could perhaps first fix the US. Likewise, instead of fixing future problems, we would probably do better to fix the present problem – the algorithm on which our civilizational superintelligence runs, and from which our future problems will almost inevitably spring. As Illich pointed out, however, the reason the Peace Corps volunteers were going overseas is that distant problems in which you have limited personal stake seem much more tractable than close problems whose complexities you know all too well.

Those who point out that the algorithm on which our civilizational superintelligence is based might itself be the problem – whether they argue from economic, environmental, social or other positions – are generally dismissed as idiots and/or utopians. Their views have rather more traction now that the general edifice is increasingly seen to be far less stable than those with a vested interest in the algorithm told us it was. Instead we are directed to solve the problems the algorithm produces: essentially, to find ways to mitigate the production of paperclips, or at least to learn to stop worrying and love the paperclip. This is not a useful response.

We have a once-in-a-civilization opportunity to rewrite the algorithm on which our civilizational superintelligence runs. This is not any easier than creating Friendly AI – in fact it is likely to be far, far harder, the most wicked of wicked problems – but the payoff is exactly the one Bostrom and his colleagues are working towards. If we rewrite the algorithm successfully, the result will still be a Friendly AI; it’s just that it will be a collective superintelligence rather than a singular intellect.
