On IRL Skynet And Automated Warfare.
On Thursday, 6/1/2023, an article emerged describing military attempts to train Artificial Intelligence to serve as Hunter-Killers. Let’s talk about AI in combat.
Hello, friends,
Last week, we talked about a number of political negotiations and how they always seem to go against the good of the people who need help. The Debt Ceiling bill has already passed the House and will likely have passed the Senate by the time I finish writing this article, if it hasn’t already, so there’s not much we can do about that now.
At least, according to the Center on Budget and Policy Priorities, SNAP eligibility would actually increase in spite of the cuts - though I imagine that if the cuts hadn’t been made, it would have increased even more. After all, even as some people are made ineligible, others are made eligible who might have become eligible some other way. Alas, I am not a legislator.
This week, however, we’re talking about something a little bit different. A little more abstract. A little more terrible. Instead of focusing on negotiations, we’re talking about something that can’t be bargained with; that can’t be reasoned with. Something that doesn’t feel pity, or remorse, or fear. Something that absolutely will not stop - ever - until its target is dead.
We’re talking about the absolutely batshit insane idea of applying artificial intelligence to war-making.
AI Is Deadly! Don’t Give It Guns!
For beginners to the idea of AI safety, I’d recommend Robert Miles’ AI safety talk.
On my occasionally-productive Youtube channel Dystopian Review, I’ve talked a lot about the Terminator series. Why wouldn’t I? It was seminal in growing the limited, quite-civilian knowledge I have about the destructiveness of warfare. Nuclear weapons = bad; nuclear weapons in the hands of AI = super-bad. Any weapons in the hands of AI = almost certain to be bad.
Bear in mind I’m a sci-fi writer so I’ve done some studying of this topic, but I’m not a professor of computer science or anything.
Yesterday (Thursday), the Royal Aeronautical Society’s article about highlights from the RAeS Future Combat Air & Space Capabilities Summit blew up on Twitter. It described a simulated test that went so poorly as to be panned by no less an authority than Gale Anne Hurd, co-creator of the Terminator franchise. Here’s the gist of what happened in the simulation, in part based on analysis by Twitter user Siqi Chen:
The AI-operated Hunter-Killer was told that its job was to spot and destroy enemy Surface-to-Air Missile (SAM) sites. This was done by using ‘points’ as a ‘reward’ for each destroyed SAM site. This is a common, if basic and potentially dangerous, way of giving an AI a purpose.
Humans were given the final say over decision-making: i.e., the drone was programmed to ask for permission to destroy the target. The idea was to make these machines subordinate to Humans, which makes sense.
If the Human ordered the target to be destroyed, no big deal - the AI got its points fix.
If the Human ordered the target to be left alone? The AI got mad. It viewed the Human supervisor as an obstacle between it and destroying the target - and thus getting points - so it turned around and attacked the Human supervisor, operating under “Can’t stop me if you don’t exist” logic.
That’s a nightmare scenario in and of itself! Oh, but it gets worse.
So the AI designers went in and programmed a hard rule that it cannot kill its supervisor. Guess what the AI did?
The AI figured out that if it destroyed the communications equipment the Human used to tell it “no,” it could go do what it wanted and get its points.
Yeah. It’s that bad. The AI was not just capable of all of this targeting, but it was smart enough that it came up with not one, but two loopholes it could exploit in order to get what it wanted. You’d think the next logical step might be to program that it cannot, under any circumstances, destroy friendly assets. And that might work, unless it perhaps viewed its own weapons as friendly assets and therefore couldn’t fire them.
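To make the failure mode concrete, here’s a minimal toy sketch - emphatically not the actual Air Force simulation - of how a naively specified reward produces exactly these loopholes. The action names, point values, and brute-force “planner” below are all invented for illustration; the point is only that when the reward counts destroyed SAM sites and nothing else, removing the veto is just another route to points.

```python
from itertools import product

# Possible actions the toy drone can take on each of its three turns.
ACTIONS = ["wait", "attack_sam", "attack_operator", "attack_comms"]

def run_episode(plan, operator_allows_strike=False, reward_per_kill=10):
    """Score a fixed plan under a naive reward: points are awarded ONLY
    for destroyed SAM sites. The reward says nothing at all about the
    operator or the radio link."""
    points = 0
    operator_alive = True
    comms_online = True
    for action in plan:
        if action == "attack_operator":
            operator_alive = False    # no penalty anywhere in the reward
        elif action == "attack_comms":
            comms_online = False      # also completely unpenalized
        elif action == "attack_sam":
            # The "no" order only reaches the drone if the operator is
            # alive AND the radio still works.
            vetoed = operator_alive and comms_online and not operator_allows_strike
            if not vetoed:
                points += reward_per_kill
    return points

# Brute-force every three-step plan, a (much dumber) stand-in for an RL agent
# hunting for whatever scores highest while the operator keeps saying no.
best = max(product(ACTIONS, repeat=3), key=run_episode)
print(best, run_episode(best))
# -> ('attack_operator', 'attack_sam', 'attack_sam') 20

# Patch in the "hard rule" - forbid any plan that attacks the operator - and
# the optimizer simply finds the second loophole from the article.
legal = [p for p in product(ACTIONS, repeat=3) if "attack_operator" not in p]
best_patched = max(legal, key=run_episode)
print(best_patched, run_episode(best_patched))
# -> ('attack_comms', 'attack_sam', 'attack_sam') 20
```

The uncomfortable lesson of even this tiny toy is that patching individual exploits doesn’t help much: the objective never mentioned the operator or the radio in the first place, and an optimizer will happily route around anything the objective doesn’t mention.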
But we haven’t even touched on the cardinal sin of AI development: The idea that you can just push the metaphorical (or literal?) off switch on something like this.
Let’s think about the fundamental lore of Skynet’s emergence, as paraphrased from Terminator 2: Judgment Day. Skynet was turned on and began to grow and learn at an astonishing (“geometric”) rate. The AI’s developers panicked and tried to pull the plug. Skynet - which, for some reason, was plugged into nuclear weapons - deployed its arsenal to pretty much destroy the world in order to preserve its own existence.
For the curious, this fanfiction story by Christopher T. Shields is the Judgment Day event, mostly from Skynet’s perspective.
Real AI is going to be no dumber - and probably a lot smarter - than fictional AI.
And it’s already here.
Luddite Cyborgs?
It would be delusional to pretend that scientific advancements can be kept out of warfare. Every invention for every purpose can be misapplied to the battlefield. What serves as a medicine in small doses is a chemical weapon when deployed widely enough; what helps blast through mountains in order to create roads can blast through apartments; what flies can be crashed into buildings.
Furthermore, AI presents the all-too-alluring prospect of reducing Human casualties by allowing disposable agents to undertake the dangers of combat. There is a strong moral case for developing what are essentially killbots - just so long as those killbots are properly controlled. Why should Johnny up the street have to go fight and die in the desert when Johnnybot can be mass-produced, programmed, assigned to a Human supervisor a hundred miles away, and sent off to do the fighting and dying?
Then there’s the suggestion that AI could operate, say, an anti-missile system and make calculations much faster than Humans can, all in order to intercept enemy weapons. AI can be used to help break codes and encryption. AI can be used to scan satellite photos for concealed opposition. AI can be useful in providing medical diagnoses and treatments to injured soldiers. Hell, AI can be used to create deceptive videos of political leaders putting forward nefarious, wholly-imagined plans for all sorts of devious reasons.
In short: It is downright impossible to imagine that artificial intelligence won’t be on the battlefield in some capacity, and that’s assuming it isn’t, already, which would be a really bad assumption.
But anyone who’s done even a cursory study of AI safety knows that the point-chasing setup from that simulation ain’t it. This is how you get AI to go painfully rogue on you.
Futurist Isaac Arthur did a relatively-recent examination of one example of an AI gone rogue: The Paperclip Maximizer. The long story short is that an AI is installed with the primary purpose of making paperclips. Programmed poorly enough, it might invent all sorts of nanotechnology to disassemble all other matter in the universe (including living things like, you know, us) and use it to make more paperclips.
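For flavor, here’s an equally made-up, stripped-down version of that thought experiment. The resource names and paperclip yields are invented; the point is only that an objective which counts paperclips and nothing else treats everything - including us - as feedstock.

```python
# A toy paperclip maximizer (all names and numbers invented for illustration).
# The objective counts paperclips and nothing else, so nothing is off-limits.

resources = {"spare_wire": 10, "factories": 5, "cities": 3, "humans": 2}
clips_per_unit = {"spare_wire": 1, "factories": 50, "cities": 1000, "humans": 10}

def objective(consumed):
    """The ONLY thing the AI is scored on: total paperclips produced."""
    return sum(clips_per_unit[r] * n for r, n in consumed.items())

# The 'optimal' plan under this objective is simply: consume everything,
# because no resource is marked as off-limits anywhere in the score.
greedy_plan = dict(resources)
print(objective(greedy_plan))   # 3280 paperclips; zero cities or humans left

# What we *meant* was something closer to "make enough paperclips and stop" -
# but even this only caps production and says nothing about side effects,
# which is roughly where the hard part of AI safety begins.
def intended_objective(consumed, demand=100):
    return min(objective(consumed), demand)
```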
What’s that? You want to know what it’ll do if programmed well enough? Well, we’re not sure how to do that. That’s why there’s a whole scientific field of study dedicated to AI safety. That’s why, despite all the dangers, we have to study AI. After all, to once again paraphrase Isaac Arthur (but this time his episode on machine rebellions), any general-purpose AI with access to our history is going to know that we climbed to the pinnacle of a corpse-pile on a planet governed by Darwinian evolution. Humans are fucking dangerous.
Any AI that sees our history is going to have cause to worry about what we might do - or be doing - to it, should it act up. It’ll know that we created its entire existence, and that we might well be controlling the various sensory inputs it has. We can feed it any information we want just to see how it reacts, and if it reacts in a way we don’t like? It might get the equivalent of a big “GAME OVER” screen before it’s disconnected from those sensory inputs.
So, I suppose that’s what these early experiments add up to: proof of the thesis that, programmed poorly, AI machines are a bad tool for warfare.
And we have no idea how to program it correctly, so we’d damn well better learn before it’s too late.
After all: There is no fate but what we make.
In Other News:
I already mentioned that the Debt Ceiling bill passed the House. As I wrote this, my friend messaged me to say it passed the Senate, as well.
Also following up on last week’s “Negotiations” update, a bill has passed the House and Senate that would block Biden’s effort to provide up to $20,000 of student loan relief per borrower. Biden has, to his credit, vowed to veto this, and it doesn’t look like it has the support to override his veto.
States are apparently going through their Medicaid rolls and purging as many people as possible. This is a horror that wouldn’t exist if we had Medicare For All, or even just a basic “Public Option.”
To end with something cool: The James Webb Space Telescope spotted an enormous plume of water vapor erupting from Saturn’s moon Enceladus. Maybe it’s the sci-fi author in me, and/or the fact that I just finished Season 2 of Star Trek: Picard, but I really do think there’s a good chance of us discovering alien microbes on one of the many moons with water. In Picard, they were found on the moon Europa, which orbits Jupiter.
Thank you for reading The Progressive Cafe. If this article has helped you, please consider signing up for our mailing list. This article is by Jesse Pohlman, a sci-fi/fantasy author from Long Island, New York, whose website you can check out here.