Teaching killbots to feel bad about killing you

People, today is about Raytheon, Lockheed, and the Pentagon’s latest exercise in theatrical bullshit.

What happened?

The Pentagon is throwing money at philosophers to teach ethics to AI-powered weapons.

Let’s pause for a second.

Let’s make it two.

What the heck, take a whole minute!!

🕐

🕑

🕒

🕓

🕔

🕕

🕖

🕗

🕘

🕙

🕚

🕛

You read that right, though.

And you need to admire the sheer absurdity of that sentence.

The same military-industrial complex that is responsible for turning drone warfare into a dystop—I mean, delightful—global pastime is now interested in giving its killer robots a. . . .

m o r a l c o m p a s s . . .

How?



By hiring ethicists

And who won the juicy contracts? Raytheon and Lockheed Martin. Of course they did. Because when it comes to moral integrity, the corporations that brought you missile strikes and the occasional war crime are obviously best suited to teach your bots.

The real losers here are, of course, the philosophers. Well, hey, at least they tried their best at pitching.

Because to a philosopher, an elevator pitch is a contradictio in terminis.

Have you ever been present at a philosophers’ meeting and then asked them for an elevator pitch? That is doomed to fail.

But they tried.

Poor bastards.

Somewhere out there, at the Center for New American Security, there was a group of wide-eyed, tweed-jacket-wearing philosophers with long gray hair who thought that maybe, just . . . maybe, this was their moment.

CNAS is a think tank that works on military ethics, and these philosophers had spent years contemplating the nature of morality, free will, and the ethics of war.

Surely, DARPA would want their insight!

Right?

Wrong.

But they pitched, alright.

They submitted their carefully crafted proposals, loaded them with deep ethical frameworks and centuries of philosophical thought.

And what did they get?

“Nah, we’re good”.

Because when the Pentagon dangles $22 million to figure out if machines should hesitate before turning humans into red mist, you don’t give it to Socrates.

You give it to a company with “RAY” in its name, of course.

One rejected applicant described the whole thing as “basically an Onion headline” (for the non-initiated, The Onion is a legendary parody news site, worth checking out).

A massive grant to Raytheon.

To determine the ethics. . . . Of the weapons they will sell.

It’s so on-the-nose that you almost have to respect the shamelessness.

I can’t get my head around the warped thinking at DARPA, asking arms dealers to develop AI ethics. That is like hiring arsonists to write fire safety manuals.

And yet, here we are.

The philosophers were out-“gunned” before they even started.

They came in with moral dilemmas.

Raytheon came in with deliverables.

Guess who won?

The philosophers should have seen it coming. They spent years debating the trolley problem*. And in the meantime, the peeps at DARPA were busy building the trolley, with missiles attached.

You know, those poor bastards thought that m a y b e, they’d get a shot at influencing military ethics. But nooo. The government dangled millions in front of them, and then handed the job to the guys actually making the weapons.

It’s much like hiring wolves to teach sheep the importance of staying in groups.


Artificial intelligence, now with artificial consciousness

After the dust settled, a reporter reached out to Peter Asaro, vice chair of the International Committee for Robot Arms Control (yup, that’s a real thing), to ask if he was concerned.

Was he horrified that military contractors would be deciding the moral constraints of their own death machines? His response was a shrug. “I mean, they do anyway”.

Exactly.

This isn’t new.

The people profiting from war have always been the ones writing the rules. They are not here to debate morality. They are here to make money off of it and make it sound good in a press release.

The desire to outsource morality isn’t new either.

People have been dreaming about passing the buck to machines for centuries. You know, back in the day (your day, Marc Drees), they had “brazen heads”: creepy, disembodied talking metal heads that were supposed to offer wisdom to anyone holding them.

A more contemporary Brazen Head

Roger Bacon was a 13th-century English philosopher and a bit of a science buff, and he was thought to have built a brazen head that could predict the future. But (according to the legend), when it finally spoke, his assistant missed its profound words, so it self-destructed . . . out of sheer disappointment.

Then there was Pope Sylvester II, also some kind of medieval science nerd, who supposedly had a talking metal head that gave him mystical knowledge. Well, the church being the church, it later rebranded him as suspiciously smart, which back then meant probably in league with Satan.

Some said they were technological marvels. Others, of course, said it was demonic witchcraft. Either way, they were the original chatbots.

If DARPA had been around in the Middle Ages, they would have spent millions seeing if they could slap an AI model onto one of those things. But they had to wait until computers came along. And now here we are. Trying to see if machine learning can develop a conscience.


The agency that gave us the internet

Y’all heard the story about the fifties, the fear of nuclear war, Sputnik, and the need for a decentralized network, a.k.a. the birth of the internet. And if you haven’t, just Google it. I am not going to hold your hand.

And when you know, you know.

Here you go – the internet!

You know that DARPA is no stranger to ridiculous projects.

This is the agency that gave us stealth bombers. GPS. And, for some reason, spent time and money researching whether houseplants could be spies.

And that is no lie, nor an AI hallucination.

But one of the most unhinged projects they have come up with lately is the “Synergy Strike Force”.

Ok. Put your virtual reality headset on and picture this in 3D. You have a bunch of American tech bros who set up shop in a tiki bar in Jalalabad, Afghanistan. They believed in open-source intelligence (OSINT for the wannabe hackers – stuff that’s out there for the grabbin’), solar power (’cause you need to get your Netflix from somewhere), and beer. They even put up a sign: “IF YOU SUPPLY US WITH DATA, YOU WILL GET BEER”.

Apparently, handing valuable information to the U.S. military in exchange for alcohol in a Muslim country didn’t go well. How shocking.

After their Afghan bar manager got shot in a drive-by, they decided that it was time to pack up.

The point I want to make with this idiotic example is that tech bros never learn.

These were idealistic, Burning Man-loving evangelists who thought they could disrupt war with a bunch of laptops and a keg of beer. Shocking to no one but themselves, this did not end well, and suddenly the whole startup-for-spies thing lost its charm. But their mindset never died.

It just evolved.

Because that’s the real, real point I’m trying to make.

The same delusional thinking behind the Synergy Strike Force still runs the AI world today.

The idea that technology alone, just the right algorithms, just the right apps, just the right buzzwords, can fix war, poverty, human suffering, you name it.

As long as you believe hard enough, click your heels together and say “There’s no place like home. There’s no place like home. There’s no place like home”. And boom – back in Kansas, or in this case in Arlington.

The modern AI industry is full of these guys (hi Sam!, heeeey Amodei!, bienvenue Arthur!).

But instead of trading beer for war intel, they are trading ethics for VC funding.

They are pitching AI-driven diplomacy, autonomous defense systems, predictive policing, and algorithmic warfare like it’s the next iPhone. They still think war is just an optimization problem waiting for the right machine learning model and a few ethically sourced snacks.



The myth of the moral algorithm

DARPA’s latest brain fart is ASIMOV, a multimillion-dollar project (of course, ’cause nothing at DARPA comes cheap) named after the science fiction writer Isaac Asimov, who is my all-time hero.

He is the guy who wrote the Three Laws of Robotics.

Here you go:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

So, it starts with robots not harming us hoomans. That is a touching thought. But considering the whole purpose of ASIMOV is to make sure autonomous weapons follow ethical guidelines, you can see why this is comedy gold.
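
And just to make the absurdity concrete: here is what a literal, naive encoding of those three laws would look like. To be clear, this is a toy sketch of my own in Python, not anything DARPA or Raytheon actually ships.

    from dataclasses import dataclass

    @dataclass
    class Action:
        # A proposed robot action, boiled down to the only things the laws mention.
        description: str
        harms_human: bool        # deciding this one bit is the entire unsolved problem
        ordered_by_human: bool
        endangers_robot: bool

    def three_laws_permit(action: Action) -> bool:
        # First Law: never harm a human. Everything else is subordinate.
        if action.harms_human:
            return False
        # Second Law: obey human orders (the First Law veto already happened above).
        if action.ordered_by_human:
            return True
        # Third Law: self-preservation comes last.
        return not action.endangers_robot

    # The punchline: an autonomous weapon's entire job trips the First Law,
    # so a literal "moral compass" module would veto the very product it ships in.
    strike = Action("fire on target", harms_human=True,
                    ordered_by_human=True, endangers_robot=False)
    print(three_laws_permit(strike))  # False

The if-statements are trivial. Deciding what counts as “harm” is the part nobody can code. Which is sort of the whole point.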

The program’s goal is to create an “ethical autonomy lingua franca”.

Thawhatnow?

That is fancy bureaucratic bullshit for “Can we program these weapons to at least hesitate before committing war crimes?”

But the real problem behind this program is that ethics isn’t something that you can code.

You don’t teach morality like you teach an AI to recognize cats, man.

It is messy.

It is subjective.

And half the time, even hoomans don’t get it right.

Ethics isn’t chess.

You don’t master it by running the same scenario a billion times until you find the optimal answer. It’s something that you develop through experience, and, I think, through guilt. And machines don’t experience emotions, let alone guilt.

If they did, the first thing they’d do is refuse to be part of this nonsense.

That’s why this entire project is a joke.

The government knows it.

Splitting the contract into tiny pieces is also a classic move to make sure that no one can point fingers when it inevitably fails. And, of course, it will. But don’t worry. They’ll keep adjusting the parameters and redefining “ethical” until it conveniently fits whatever war they are fighting next.


The convenient removal of human accountability

Some of the rejected applicants still cling to hope. “At least someone’s asking the question,” they say. Adorable. But the issue is that humans already suck at following ethical war guidelines. The idea that a cold, unfeeling machine will somehow do better is too cute.

Jeremy Davis is one of the philosophers who were left out in the cold. He pointed out the real nightmare scenario. . .

“One day, a soldier is going to kill someone because the computer told them to”.

That’s the future we’re speeding toward.

A world where human responsibility dissolves into an algorithm. Where moral decisions are no longer anyone’s fault, just an unavoidable function of the system. And no one will question it, because, well, the system said it and the system can’t be wrong.

Lewis Mumford was a social critic writing sixty years ago, and he saw this coming. He called it “the magnificent bribe”: the moment where technology seduces us into surrendering our judgment. When decisions aren’t really ours anymore. When we accept the inevitability of the machine.

That’s where we are now. We’re not resisting AI rule. We’re welcoming it. Handing it the keys.

Signing off from the human race.

Marco


*A runaway trolley is barreling down the tracks. Five people are tied to the rails ahead. They are helpless. But you, dear ethical decision-maker, are standing next to a lever. If you pull it, the trolley switches tracks, and you spare the five people. Great!

Not so fast. Because on that other track, there’s also one person tied down. Pull the lever, and you actively choose to kill one person to save five. Do nothing, and five people die. But hey, at least you didn’t directly cause it.

So, what do you do?

Kill one to save five? Or stand back and let the carnage happen.

I am going to spare you the philosophers’ deliberations, but rest assured, they take a lot of pages.
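
(And if you were hoping machine learning settles it: the “optimized” version of those deliberations compresses to roughly this. Again, a deliberately dumb Python sketch of my own, which is exactly the problem.)

    def pull_lever(deaths_if_pull: int, deaths_if_wait: int) -> bool:
        # The utilitarian "answer": minimize the body count.
        # Centuries of moral philosophy, flattened into one comparison operator.
        return deaths_if_pull < deaths_if_wait

    print(pull_lever(deaths_if_pull=1, deaths_if_wait=5))  # True: pull it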


Well, that’s a wrap for today. Tomorrow, I’ll have a fresh episode of TechTonic Shifts for you. If you enjoy my writing and want to support my work, feel free to buy me a coffee ♨️


Think a friend would enjoy this too? Share the newsletter and let them join the conversation. Your likes also help Google surface my articles to more readers.

