We stand at the precipice of an unprecedented transformation in human capability. Advanced artificial intelligence and robotics promise to enable small groups to achieve levels of offensive capability previously impossible without mass cooperation. As these technologies advance, more groups may find themselves in the position of the Sentinelese—viewing others primarily as threats rather than potential collaborators. Unlike the Sentinelese, however, these autonomous groups would not be limited to firing arrows from their shores. They would possess technologies capable of enforcing their need for isolation on a global scale.
The core danger lies not in evil intentions or malicious AI, but in the shifting incentive structures that AI creates. As we approach a technological threshold where small groups can defeat large ones in the application of lethal force, the rational calculus regarding population size fundamentally changes. This shift occurs whether or not AI systems are perfectly aligned with their creators' values; human values themselves are where the danger lies.
We must reject the comforting but unfounded assumption that technological progress naturally reduces insecurity and violence. History demonstrates repeatedly that moral commitments usually follow material conditions rather than transcending them. When circumstances change dramatically, values that seemed inviolable can quickly erode. The technological capabilities we are developing could create the most profound change in human circumstances since agriculture, a shift that may render our current moral frameworks obsolete.
The time to address these incentive structures is now, before they become irresistible. The goal must be technology and social structures that enable a future in which humanity, though in all likelihood much smaller, flourishes without the violent sacrifice of the many by the few.