As governments across the world continue to debate the merits and dangers of AI-powered fully autonomous weapons, it is worth stepping back for a moment and looking critically at the state of the autonomous landscape. Advances in everything from consumer drones to facial recognition to autonomous flight are producing a steady march of fully autonomous drones capable of navigating the human environment and delivering items to specific individuals. While most of the press has focused on the positives of these new systems, militaries around the world have been eagerly transforming the same tools into weapons. Modified civilian drones today can navigate denied spaces, seek targets via facial recognition and deliver lethal force, all using the same technology that universities and companies are building for helpful tasks like delivering aid packages to disaster regions. What does the future look like once we realize that AI-powered package delivery drones are really just autonomous weapons in waiting?
Policymakers the world over have spent recent years debating an inevitable future in which weapons systems are increasingly automated.
While autonomous and semi-autonomous weapons systems have been in widespread deployment for decades, to date these systems have automated only their navigation and coordination tasks, leaving targeting firmly in the hands of humans.
Yet, as society as a whole moves towards ever-increasing automation, Western militaries are being forced to grapple with the simple fact that their adversaries may not be as averse to self-targeting weapons as they are.
Weapons that can take over a soldier's most sensitive cognitive function, deciding whom to kill, pose some of the most ethically fraught questions of warfare, surpassing even offensive cybersecurity's quandaries over targeting civilian infrastructure, such as bringing down airplanes or triggering radiological releases from power plants.
Take an AI-powered drone that can loiter over an area and select its own targets. At what point does that drone cease to be a smart weapon and legally become a combatant itself?
More existentially, when there is no risk of human casualties to one side, the cold calculus of reciprocity begins to break down, lowering the barrier to conflict and potentially encouraging greater interventionism.
What happens when one side of a conflict adopts AI-powered weapons while the other, citing moral and ethical objections, does not? More than seventy years after the introduction of nuclear weapons, a country's nuclear arsenal still plays an outsized role in determining its ability to exert its will globally. Military might still wins out over ethical considerations, placing considerable pressure on even peaceful nations to adopt AI weapons systems.
What happens when an AI-powered military system malfunctions? The science fiction canon is littered with reminders that a malfunctioning AI system can perform so many wrong operations so quickly that a conflict may be over before the human side even realizes there was a problem.
Most problematically, AI systems today are still merely simplistic correlation engines, representing their world as primitive assemblies of colors and textures. An AI-powered weapon does not recognize a target as a specific individual, vehicle or structure, but rather as a particular set of colors and textures in a specific relationship to each other.
This makes such weapons uniquely vulnerable to subtle modifications that can mask their targets or even cause them to attack unrelated targets.
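To make that fragility concrete, consider the classic fast gradient sign method of Goodfellow et al., which nudges every pixel a tiny step in the direction that most increases the classifier's loss. A minimal PyTorch sketch, assuming a generic image classifier; the model, inputs and epsilon here are illustrative, not any fielded system's internals:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Fast gradient sign method: shift each pixel by +/- epsilon in the
    direction that most increases the loss. The result looks unchanged
    to a human but can flip a texture-matching classifier's prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    # Clamp back to the valid pixel range so the image remains displayable.
    return adversarial.clamp(0.0, 1.0).detach()
```

A perturbation of a few percent per pixel is invisible to the human eye, which is precisely why a carefully printed sticker or paint pattern can mask a target from such a system or conjure a phantom one.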
Setting all of these issues aside, how far are we from actually having AI-powered autonomous weapons?
Universities and companies across the world have been rushing to build package delivery drones capable of navigating complex urban environments entirely on their own and even coordinating with other drones.
While the developers of these drones are building them for good, the Islamic State reminds us that one person's package delivery drone is another person's autonomous weapons system.
In fact, universities, companies and militaries all over the world are already building killer robots with society's full encouragement and blessing: drone-killing drones.
As consumer drones have caused chaos at airports, threatened public safety and stalked us through our bedroom windows, a societal consensus has grown that illicit drone use must be combated.
This, in turn, has fueled the growing world of anti-drone technology. From simple RF jammers to EMP systems, high-powered lasers, projectile weapons and emerging exotic approaches, the ability to bring down an errant drone in flight has become a major focus of public safety officials, especially given the danger drones pose to aircraft and public gatherings.
One technology with particular relevance to autonomous weapons is the drone-killing drone. These modified civilian drones are equipped with sensors and navigation systems designed to identify an unauthorized drone and bring it down by one means or another.
In short, a killer robot, though one that kills other robots rather than humans.
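The sensing half of such a system is conceptually mundane. A minimal sketch of the kind of frame-differencing motion detection that underlies many "spot the intruder drone" demos, using OpenCV; the threshold and area values are illustrative assumptions:

```python
import cv2

def moving_regions(prev_frame, frame, diff_threshold=25, min_area=500):
    """Return bounding boxes of regions that changed between two frames --
    the crude first step in picking a moving drone out of the sky."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep only regions large enough to plausibly be an aircraft, not noise.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```

The hard engineering lives downstream, in classification, tracking and interception, but the point stands: the building blocks are off-the-shelf.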
A drone-killing drone could easily be modified to patrol the ground rather than the sky, autonomously targeting and applying lethal force to any vehicle or pedestrian that strays into a denied space.
Ironically, some of the same institutions and researchers that have come out so forcefully against “killer robots” are among those building these dual-purpose robot-killing robots.
Militaries across the world, including our own, have been quick to adapt civilian advances in both drone and AI technologies towards autonomous weapons systems.
There are already modified civilian drone platforms designed specifically for combat, with autonomous visual flight and onboard maps that let them operate in radio- and GPS-denied spaces. Such a drone can fly to a target destination, use its onboard camera and AI system to visually identify a target, deliver a payload to that target and return, all without any human intervention, even while stalking a fluid, moving target.
There are systems that scan military uniforms for rank insignia on bases and on the battlefield, allowing autonomous weapons to single out senior officers under the interpretation that they represent more permissible targets for autonomous engagement than enlisted personnel.
More troubling, there are already military drones with onboard facial recognition databases that can be launched to scan a large public gathering and identify any persons of interest. Those individuals might simply be tracked and filmed for reconnaissance, or marked for ground security forces with infrared lasers or marking dye. Experimental systems have even been designed to identify individuals in a crowd who are carrying weapons or behaving in a violent or disruptive manner.
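Under the hood, an "onboard facial recognition database" is typically nothing more exotic than a list of embedding vectors compared by cosine similarity. A minimal sketch, assuming embeddings produced by any off-the-shelf face model; the 0.6 threshold is an illustrative assumption:

```python
import numpy as np

def best_watchlist_match(probe, gallery, threshold=0.6):
    """Compare one face embedding against a watchlist of embeddings.
    Returns (matched?, index of closest entry, similarity score)."""
    probe = probe / np.linalg.norm(probe)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = gallery @ probe            # cosine similarity to every entry
    best = int(np.argmax(scores))
    return scores[best] >= threshold, best, float(scores[best])
```

Everything hinges on that threshold: set it low and operators drown in false matches; set it high and real persons of interest walk past unnoticed.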
It would take little modification for such systems to employ incapacitating or even lethal means against their targets, and indeed such systems are already being explored.
In the midst of our societal debate over the high field error rates of facial recognition systems, what does it mean when a recognition error could cause someone to be mistakenly killed by a terrorist-hunting AI-powered drone?
Moreover, who bears legal or even criminal responsibility for that fatal facial recognition algorithm failure? Is it the government deploying the drone, the defense contractor that built the drone or the technology company that built the facial recognition software it used?
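The scale of the problem is easy to quantify with back-of-the-envelope arithmetic. A sketch using purely hypothetical numbers:

```python
# All figures below are hypothetical, chosen only to illustrate the arithmetic.
false_match_rate = 1e-5  # chance a random face wrongly matches one watchlist entry
watchlist_size = 5_000   # entries in the persons-of-interest database
crowd_size = 50_000      # faces scanned at a large public gathering

# Every scanned face is compared against every watchlist entry.
expected_false_matches = false_match_rate * watchlist_size * crowd_size
print(f"Expected false matches: {expected_false_matches:,.0f}")  # -> 2,500
```

When a match merely triggers surveillance, 2,500 false alarms per event is a civil liberties problem; when a match can trigger lethal force, it is something far darker.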
We have the technology today to deploy drones that can loiter over denied spaces, targeting anything humanoid that enters a geofenced area, even filtering by whether the individual matches a facial recognition database, is wielding a weapon or is judged by the algorithm to be behaving in a “threatening” manner.
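It is worth noting how ordinary the "geofenced area" ingredient is: it is the same point-in-polygon geometry consumer drone firmware uses to enforce no-fly zones. A minimal ray-casting sketch, treating coordinates as plain x/y pairs:

```python
def in_geofence(point, polygon):
    """Even-odd ray-casting test: cast a horizontal ray from the point and
    count the polygon edges it crosses; an odd count means it is inside."""
    x, y = point
    inside = False
    for i in range(len(polygon)):
        x1, y1 = polygon[i]
        x2, y2 = polygon[i - 1]  # wraps to the last vertex when i == 0
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

# e.g. in_geofence((2, 2), [(0, 0), (4, 0), (4, 4), (0, 4)]) -> True
```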
These aren’t science fiction visions of a faraway future. They are commercial products sold today by defense contractors to governments across the world and put into active service.
Attend any drone event in DC and you’re likely to see dozens of such products being discussed and demonstrated, sometimes presented by the very technology companies whose AI platforms they run on.
While their real-world performance may not yet match their marketing hype, the simple fact remains that these systems are already out there and getting better by the day.
Most importantly, these aren’t billion-dollar weapons systems dependent on exotic export-restricted equipment. They are typically modified civilian drone chassis carrying consumer mobile AI processors and commercially available visual recognition algorithms, meaning governments all across the world can readily produce them. Worse, to the civilian populations beneath them, they can easily pass for ordinary tourist drones until they deploy their payloads.
Such civilian-based drones are still limited by battery life to relatively short deployments, but their autonomous components, built entirely on consumer mobile AI hardware and readily available video processing technology, can easily be repurposed for long-range, long-duration drone systems. Some AI companies are already actively pitching defense contractors on the military applications of their consumer technologies.
Putting this all together, while policymakers slowly debate the dangers of autonomous weapons systems in abstract terms, those very systems are already being deployed across the world, but in an unexpected form. Rather than the bipedal walking Terminator units of science fiction or traditional military drones the size of small planes, autonomous weapons-capable military systems have come into widespread use through civilian drones.
Critically, their autonomy has come entirely from consumer AI platforms, making it readily portable to a wide range of weapons systems.
Halting the progression of consumer AI developments to military use is nearly impossible in a world in which every advance in image and video processing represents another new capability easily added to a military drone.
In the end, every self-following selfie drone, package delivery drone and mobile AI camera platform is merely an autonomous weapons system in waiting.
It seems that wait is all but over.