Cooperative social arrangements aren’t maintained through altruism or self-enforcing laws alone. The social contract is secured by two powerful forces: mutual dependence for gain and the implicit threat of reprisal. Civilian workers can stop production, the able-bodied can resist violently, and disillusioned police or soldiers can switch sides. When workers can withdraw their labor or citizens can organize resistance, the powerful must negotiate rather than dictate terms. This balance of power, however imperfect, has maintained a degree of stability and reciprocity in modern mass societies.
But what happens when this foundation begins to crumble? As automation eliminates the need for human labor, the fundamental equation of our social contract changes dramatically. Artificial intelligence and robotics promise to create a world where human participation in production becomes increasingly optional. The mutual dependence that has characterized industrial society begins to dissolve, and with it, the ability to maintain trust and security.
If defense, policing, and production no longer require human participation, the bonds that have maintained cooperation will unravel. Those formally controlling advanced AI systems and automated production will begin to view the majority of humanity not as necessary partners, and not even as harmlessly superfluous, but as threatening competitors for resources and control. Those not holding formal control will, sooner or later, recognize the tightening noose of disempowerment around their throats and resist, further justifying their elimination. As mass civilization backslides into dark zero-sum insecurity, violent population reduction could become an increasingly attractive release.
The risk is most obvious in regimes that are quickly captured by the narrow interests of the owners of the robotic means of production. Imagine, as in Marshall Brain’s dystopian vision in Manna: Two Views of Humanity’s Future,¹ that economic obsolescence sends millions of unemployed to robot-produced and robot-managed cell blocks. Because of the pervasiveness of surveillance and robotic security forces, rioting or civil disobedience ensures that any protester ends up in jail. But so, too, would compliance. It would become unclear to anyone who still has a job why they should continue working at all, when unemployment and imprisonment is only a matter of time. On the other side of the equation, at some point there would be no incentive for the owners of the robotic means of production to house, clothe, and feed the swelling ranks of the permanently unemployed. When the endgame of euthanasia becomes apparent, the entire social contract would collapse into a fight for survival, in which controlling the machines is the only way to win, and eliminating the competition is the only way to retain control.
At first glance it may seem that such collapse could be avoided by an early renegotiation of the social arrangements in favor of shared ownership of AI’s output, such as through a guaranteed income. But UBI eventually fails in the same way as the work incentive in the dystopian view: as interdependence erodes, so does trust that any social arrangements, authoritarian or egalitarian, will be honored. It’s worth pointing out that the common criticism of UBI, that it fails to provide a sense of psychological security equivalent to work, hints at the real problem but approaches it from the marginal case of an AI world with only modest job insecurity. When workers lose all but their formal claim to a share of productive output, the psychological feeling of insecurity corresponds to a genuinely existential insecurity.
Under both initial conditions, the welfare state and the carceral state, the crisis of mistrust created by the loss of leverage fades only when the population is so low that natural abundance is guaranteed and there is nothing left to be mistrustful about.
---

1. Brain, Marshall. “Manna – Two Views of Humanity’s Future.” 2003. Accessed March 18, 2025. https://marshallbrain.com/manna1.