

Hobbyist Builds AI-Assisted Rifle Robot Using ChatGPT: "We're under attack from the front left and front right. Respond accordingly"

The viral video sparked ethical debates about the broader implications of AI weapons.

Tibi Puiu
January 9, 2025 @ 4:29 pm


Credit: STS 3D/TikTok.

One of the wildest videos to go viral on TikTok recently caught everyone by surprise. It featured an engineer who built his own AI-assisted robot that aims and shoots a rifle using voice commands.

“ChatGPT, we’re under attack from the front left and front right. Respond accordingly,” the inventor, known only by his online moniker STS 3D, declares calmly.

The rifle, mounted on a robotic arm, pivots instantly. It swivels left, then right, firing a barrage of blanks precisely as instructed. A voice, eerily polite, responds: “If you need any further assistance, just let me know.”

Video: “OpenAI realtime API connected to a rifle,” posted by u/MetaKnowing in r/Damnthatsinteresting.

This wasn’t the machine’s only unsettling trick. In another segment of the video, the engineer straddles the rifle-mounted system, riding it like a mechanical bull as it swivels, evoking imagery straight out of Dr. Strangelove, Stanley Kubrick’s Cold War satire. The absurdity of the scene belies its gravity: this isn’t a government lab or military base. It’s a hobbyist project built in a garage.

This invention—a weaponized robotic rifle powered by OpenAI’s ChatGPT—feels like a scene ripped from The Terminator. Yet it’s real, and the implications stretch far beyond this one engineer’s garage.

AI Weapons: From Hobbyists to the Pentagon

STS 3D’s project, first reported by Futurism, is a stark reminder of how accessible artificial intelligence has become. ChatGPT, OpenAI’s flagship conversational AI, was designed to generate essays, debug code, and engage in human-like dialogue. Few foresaw its use as the voice and brain of an automated rifle system.

The exact technical details remain unclear, but OpenAI’s Realtime API likely played a central role. This tool, designed for voice-enabled applications, allows developers to build conversational systems capable of responding to complex queries. In this case, however, the same API was used to give a weapon system a voice—and the ability to follow orders.

The video showcasing STS 3D’s creation quickly went viral. Some saw it as a chilling portent of what happens when consumer-grade AI meets weaponry. Others, with dark humor, likened it to Skynet from The Terminator.

For its part, OpenAI cut off STS 3D’s access to ChatGPT after the videos gained traction, citing policies that prohibit using “our service to harm yourself or others,” including the development or use of weapons.

Here’s where things really get interesting, though: OpenAI is actually eyeing military contracts.

Dystopia Much?

Back in January 2024, OpenAI removed from its usage policy a direct ban on “activity that has high risk of physical harm,” which specifically included “military and warfare” and “weapons development.” Just one week later, the company announced a cybersecurity partnership with the Pentagon.

Then, in December 2024, OpenAI announced a partnership with Anduril Industries, a California-based defense contractor that makes AI-powered drones, missiles, and surveillance systems, to produce AI weapons. In the same month, Anduril secured a $1 billion, three-year contract with the Pentagon to develop battlefield AI tools. Among its creations is the Sentry system, already in use to monitor borders and coastlines worldwide.

Now, the two companies are developing an AI system designed to share real-time battlefield data and make split-second decisions—decisions that could include life or death. Critics argue that these moves contradict OpenAI’s original mission to develop AI that “benefits humanity.” For now, the company maintains that its work in defense is aligned with its commitment to safety and ethical standards.

If a hobbyist can make lethal AI systems, imagine what professional defense contractors can achieve. From claims of drones equipped with AI targeting systems in Ukraine to the Israel Defense Forces developing the ‘Lavender’ and ‘Gospel’ AI systems to identify targets in Gaza, the use of AI in conflict is already a reality. The most alarming are fully autonomous weapons systems (AWS), which can identify, select, and target humans entirely on their own. Alexander Schallenberg, Austrian Minister for Foreign Affairs, described the growing risks of AI in weapons as “this generation’s Oppenheimer moment,” referring to the development and subsequent use of the atomic bomb in the 1940s.

But the entrance of hobbyists into this space is a newer—and potentially more dangerous—development. Unlike corporate or government programs, these DIY projects operate outside established regulations, leaving little accountability for their creators.

What’s Next?

For years, the United Nations and human rights organizations have warned about the dangers of autonomous weapons. These systems, critics argue, remove human oversight from the act of killing, making war faster, cheaper, and potentially more indiscriminate.

Yet the warnings have largely gone unheeded. While governments debate the ethics of autonomous weapons, engineers like STS 3D are already building them. As one online commenter on the viral video put it, “The genie’s out of the bottle.”

As AI becomes increasingly powerful and accessible, the line between creative experimentation and dangerous innovation grows thinner.

