When we consider technologies like autonomous drones, facial recognition, predictive analytics, and industrial and military robots, we see capabilities that don’t require sentience or superintelligence to be devastatingly effective at population reduction. The threshold of AI capability required for effective population reduction may be far lower than the threshold commonly assumed in misalignment-based existential risk scenarios.

Task-specific systems designed for targeted functions can be more immediately dangerous precisely because their narrow scope makes them technically simpler to develop, while they still pose an existential threat to most humans. Automation that assists a small group of humans in defeating others’ defenses is easier to achieve than automation that must defeat all humans, against its developers’ own intentions. Such systems need only be good enough to overwhelm conventional human resistance when guided by human operators with strategic intent.
