Hobbyist Builds AI-Assisted Rifle Robot Using ChatGPT: "We're under attack from the front left and front right. Respond accordingly"

The viral video sparked ethical debates about the broader implications of AI weapons.

Tibi Puiu
January 9, 2025 @ 4:29 pm

Credit: STS 3D/TikTok.

One of the wildest videos to go viral on TikTok recently caught everyone by surprise. It featured an engineer who built his own AI-assisted robot that aims and shoots a rifle using voice commands.

“ChatGPT, we’re under attack from the front left and front right. Respond accordingly,” the inventor, known only by his online moniker STS 3D, declares calmly.

The rifle, mounted on a robotic arm, pivots instantly. It swivels left, then right, firing a barrage of blanks precisely as instructed. A voice, eerily polite, responds: “If you need any further assistance, just let me know.”

"OpenAI Realtime API connected to a rifle" — posted by u/MetaKnowing in r/Damnthatsinteresting

This wasn’t the machine’s only unsettling trick. In another segment of the video, the engineer straddles the rifle-mounted system, riding it like a mechanical bull as it swivels, evoking imagery straight out of Dr. Strangelove, Stanley Kubrick’s Cold War satire. The absurdity of the scene belies its gravity: this isn’t a government lab or military base. It’s a hobbyist project built in a garage.

This invention—a weaponized robotic rifle powered by OpenAI’s ChatGPT—feels like a scene ripped from The Terminator. Yet it’s real, and the implications stretch far beyond this one engineer’s garage.

AI Weapons: From Hobbyists to the Pentagon

STS 3D’s project, first reported by Futurism, is a stark reminder of how accessible artificial intelligence has become. ChatGPT, OpenAI’s flagship conversational AI, was designed to generate essays, debug code, and engage in human-like dialogue. Few foresaw its use as the voice and brain of an automated rifle system.

The exact technical details remain unclear, but OpenAI’s Realtime API likely played a central role. This tool, designed for voice-enabled applications, allows developers to build conversational systems capable of responding to complex queries. In this case, however, the same API was used to give a weapon system a voice—and the ability to follow orders.

The video showcasing STS 3D’s creation quickly went viral. Some saw it as a chilling portent of what happens when consumer-grade AI meets weaponry. Others, with dark humor, likened it to Skynet from The Terminator.

For its part, OpenAI cut off STS 3D from ChatGPT after the videos gained traction, citing internal policies against using “our service to harm yourself or others,” which includes the development or “use of weapons.”

Here’s where things really get interesting though. OpenAI is actually eyeing military contracts.

Dystopia Much?

Back in January 2024, OpenAI removed from its usage policy a direct ban on “activity that has high risk of physical harm,” which specifically included “military and warfare” and “weapons development.” Just one week later, the company announced a cybersecurity partnership with the Pentagon.

More recently, in December 2024, OpenAI announced a partnership with California-based defense contractor Anduril Industries to produce AI weapons. Anduril makes AI-powered drones, missiles, and surveillance systems. In the same month it announced the OpenAI partnership, Anduril secured a $1 billion, three-year contract with the Pentagon to develop battlefield AI tools. Among its creations is the Sentry system, already in use to monitor borders and coastlines worldwide.

Now, the two companies are developing an AI system designed to share real-time battlefield data and make split-second decisions—decisions that could mean life or death. Critics argue that these moves contradict OpenAI’s original mission to develop AI that “benefits humanity.” For now, the company maintains that its work in defense is aligned with its commitment to safety and ethical standards.

If a hobbyist can build lethal AI systems, imagine what professional defense contractors can achieve. From reports of drones equipped with AI targeting systems in Ukraine to the Israel Defense Forces’ ‘Lavender’ and ‘Gospel’ AI systems used to identify targets in Gaza, the use of AI in conflict is already a reality. The most alarming variety are fully autonomous weapons systems (AWS), which can identify, select, and target humans entirely on their own. Alexander Schallenberg, Austria’s Minister for Foreign Affairs, described the growing risks of AI in weapons as “this generation’s Oppenheimer moment,” referring to the development and subsequent use of the atomic bomb in the 1940s.

But the entrance of hobbyists into this space is a newer—and potentially more dangerous—development. Unlike corporate or government programs, these DIY projects operate outside established regulations, leaving little accountability for their creators.

What’s Next?

For years, the United Nations and human rights organizations have warned about the dangers of autonomous weapons. These systems, critics argue, remove human oversight from the act of killing, making war faster, cheaper, and potentially more indiscriminate.

Yet the warnings have largely gone unheeded. While governments debate the ethics of autonomous weapons, engineers like STS 3D are already building them. As one online commenter on the viral video put it, “The genie’s out of the bottle.”

As AI becomes increasingly powerful and accessible, the line between creative experimentation and dangerous innovation grows thinner.
