The gravity of the scenario we’ve outlined demands urgent consideration of potential paths forward. If we accept that advanced AI and automation technologies create rational incentives for population reduction, what options remain available to prevent such a catastrophic outcome?

We have little faith in attempts to preserve a high-population world indefinitely. We have presented mass civilization as a contingent state whose benefits could be passed on to a much smaller human population without its deficits. The goal should be to chart the path to that low-population world peacefully, through demographic contraction, and cooperatively, with all currently living humans given an equal stake in the project.

Returning to the New Earth thought experiment: what if the facts of the planet-destroying asteroid and the possibility of escape to an alternative world were well known? How might we organize ourselves to ensure the species’ survival and to avoid a desperate free-for-all? Building an evacuation fleet now while planning to figure out how to allocate seats later would be a dangerous way to approach the problem, yet it is analogous to how AI is currently being developed.

While the purpose of this essay has been to sound the alarm rather than to propose solutions, we have some suggestions for paths forward.

Even if backsliding away from an interdependent and cooperative world order has begun, the few, for the time being, still depend on the cooperation of the many. Resistance is still possible, and can hopefully be aided by a clear understanding of the stakes.

Regrettably, the public response so far to the possibility of misaligned AI driving humans to extinction has been something to the tune of “better dead than red!” The competition between the world’s democracies and their authoritarian rivals has perpetuated the AI arms race. Fear of having to live under a despotic regime (or, from the other side, fear of being forever dominated by the West) has overpowered any fear of the small likelihood that, despite the best efforts of the many great minds working on AI, the technology will fail catastrophically and kill everyone. Recognizing the lethality of aligned AI may change that. In democracies, even the more optimistic projection, that AI could push every nation-state into despotism or feudalism, ought to nullify the mandate for the continued development of AI.

But acknowledging the growing irrelevance of workers and voters to elites, who are focused on their private contest for relevance in the AI world order, should temper expectations around the regulation of AI and the distribution of its benefits. Universal Basic Income will not serve as a long-term solution, and as a short-term solution it breeds a constituency dependent on the very AI that will eventually betray it.

The most powerful defense the masses have in their arsenal is the ability to establish a normative, rather than merely legal, culture of resistance to AI development. Were AI recognized for the weapon that it is, one that will be turned against our enemies, friends, and families indiscriminately, the social norms that would develop are straightforward: anyone who participates is collaborating in genocide of unprecedented scale. While this may only drive development underground, that may buy some time.

Finally, there may be technical, zero-trust solutions that deter the use of AI, at least temporarily, to cause mass human casualties.

With his famous “Laws of Robotics,” Isaac Asimov, the godfather of AI science fiction, baked the proscription against harming humans into the innermost workings of the “positronic brain.” This was a hack, but a necessary one to make recognizable AI-enriched human worlds possible. We cannot depend on such an unlikely coincidence as intelligent robots that are inherently incapable of causing intentional harm. Any security measures built into real-world AI are likely to be superficial enough to be defeated at massively lethal scale.

It may be possible, though, while human majorities still have some say in the matter, to create “dead hand” mutual assured destruction systems that artificially prop up the value of human lives. Such terrifying and risky solutions will be necessary in the dark, low-trust transition from the high-population world to a brighter, more secure low-population one.