Inside The Global Race To Build Killer Robot Armies
By Joe Allen for The Federalist
There are AI-assisted weapons already in use, but they still require a human operator to confirm targets and order the kill. That is changing.
The temptation to open Pandora’s Box is irresistible. In early March, the U.S. National Security Commission on Artificial Intelligence completed its two-year inquiry, publishing its findings in a dense 750-page report. Its members unanimously concluded that the United States has a “moral imperative” to pursue the use of lethal autonomous weapons, a.k.a. “killer robots.” Otherwise, we risk bringing a rusty knife to a superhuman gunfight.
Citing the threat of China or Russia leading the global artificial intelligence (AI) arms race, the commission’s chairman, former Google CEO Eric Schmidt, urged President Biden to reject a proposed international ban on AI-controlled weapons. Schmidt rightly suspects our major rivals won’t abide by such a treaty, warning U.S. leaders, “This is the tough reality we must face.”
If other superpowers are going to unleash demonic drone swarms on the world, the logic goes, the United States should be the first to open the gates of Hell.
America first deployed unmanned aerial vehicles in the aftermath of 9/11. The libertarian blog AntiWar.com has covered this transition in critical detail. Laurie Calhoun writes, “[O]n November 3, 2002, the Drone Age effectively began with the CIA’s extrajudicial execution of six men driving down a road in Yemen using a Hellfire missile launched from a Predator drone. The act went virtually unquestioned.” Since then, remote-controlled strikes have become a standard tactic to “fight terror” and save American lives.
Nearly two decades later, a new era of autonomous weapons is rapidly approaching. A wide array of AI-assisted weapons is already in use, but they still require a human operator to confirm the target and order the kill. That will likely change in the near future.
What Damage Can Drones Do?
The attack drones currently on the market are plenty dangerous as is. A good example is the KARGU loitering munitions system, currently deployed by Turkish forces. This lightweight quadcopter “can be effectively used against static or moving targets through its … real-time image processing capabilities and machine learning algorithms.”
KARGU’s mode of attack is full-on kamikaze. It hovers high in the air as the operator searches for victims. When one is located, the drone dive-bombs its target and explodes. If the concussion doesn’t kill them, the shrapnel will. Just imagine what a thousand could do.
A single quadcopter is only one cog in the AI war machine. The ultimate “death from above” technology will be the killer drone swarm. Even as a war-averse civilian, it’s hard not to feel deep admiration for its ingenious design.
Forbes reporter David Hambling describes the organizing principle: “True swarm behavior arises from a simple set of rules which each of the participating members follows, with no central controller. … [Computer simulations have] mimicked the collective movements seen in schools of fish and flocks of birds or swarms of insects with just three rules.”
Each drone in the swarm will separate to keep a minimum distance from its neighbors, align with the average heading of those nearby, and cohere toward the local center of the group to hold formation. This behavior allows attack drones to spread out over large areas and execute “omnidirectional attacks,” descending on the enemy from all angles.
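For readers curious how three simple rules can produce swarm behavior with no central controller, the sketch below is a minimal toy simulation in Python of the separation, alignment, and cohesion rules Hambling describes (the classic “boids” model). It is purely illustrative: the agent count, radii, weights, and time step are assumed values, and it has nothing to do with any actual drone-control software.

```python
# Toy "boids" sketch of the three swarm rules: separation, alignment, cohesion.
# All parameters are arbitrary, for illustration only.
import numpy as np

N = 50                     # number of agents in the swarm
SEPARATION_RADIUS = 1.0    # keep at least this far from neighbors
NEIGHBOR_RADIUS = 5.0      # consider neighbors within this distance
W_SEP, W_ALIGN, W_COH = 1.5, 1.0, 1.0   # rule weights (assumed)
DT = 0.1                   # time step

rng = np.random.default_rng(0)
pos = rng.uniform(-10, 10, size=(N, 2))   # 2-D positions
vel = rng.uniform(-1, 1, size=(N, 2))     # 2-D velocities

def step(pos, vel):
    new_vel = vel.copy()
    for i in range(N):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        neighbors = (dist < NEIGHBOR_RADIUS) & (dist > 0)
        if not neighbors.any():
            continue
        # Rule 1: separation -- steer away from neighbors that are too close.
        close = (dist < SEPARATION_RADIUS) & (dist > 0)
        sep = -offsets[close].sum(axis=0) if close.any() else np.zeros(2)
        # Rule 2: alignment -- match the average velocity of nearby neighbors.
        align = vel[neighbors].mean(axis=0) - vel[i]
        # Rule 3: cohesion -- drift toward the local center of the group.
        coh = pos[neighbors].mean(axis=0) - pos[i]
        new_vel[i] += DT * (W_SEP * sep + W_ALIGN * align + W_COH * coh)
    return pos + DT * new_vel, new_vel

for _ in range(100):       # run a short simulation
    pos, vel = step(pos, vel)
print("final spread:", pos.std(axis=0))
```

Run for a few hundred steps, the randomly initialized agents tend to settle into a loosely spaced group moving in a common direction, much like the schools of fish and flocks of birds the simulations mimic.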