

Hobbyist Builds AI-Assisted Rifle Robot Using ChatGPT: "We're under attack from the front left and front right. Respond accordingly"

The viral video sparked ethical debates about the broader implications of AI weapons.

Tibi Puiu
January 9, 2025 @ 4:29 pm


Credit: STS 3D/TikTok.

One of the wildest videos to go viral on TikTok recently caught everyone by surprise. It featured an engineer who built his own AI-assisted robot that aims and shoots a rifle using voice commands.

“ChatGPT, we’re under attack from the front left and front right. Respond accordingly,” the inventor, known only by his online moniker STS 3D, declares calmly.

The rifle, mounted on a robotic arm, pivots instantly. It swivels left, then right, firing a barrage of blanks precisely as instructed. A voice, eerily polite, responds: “If you need any further assistance, just let me know.”

OpenAI realtime API connected to a rifle
by u/MetaKnowing in r/Damnthatsinteresting

This wasn’t the machine’s only unsettling trick. In another segment of the video, the engineer straddles the rifle-mounted system, riding it like a mechanical bull as it swivels, evoking imagery straight out of Dr. Strangelove, Stanley Kubrick’s Cold War satire. The absurdity of the scene belies its gravity: this isn’t a government lab or military base. It’s a hobbyist project built in a garage.

This invention—a weaponized robotic rifle powered by OpenAI’s ChatGPT—feels like a scene ripped from The Terminator. Yet it’s real, and the implications stretch far beyond this one engineer’s garage.

AI Weapons: From Hobbyists to the Pentagon

STS 3D’s project, first reported by Futurism, is a stark reminder of how accessible artificial intelligence has become. ChatGPT, OpenAI’s flagship conversational AI, was designed to generate essays, debug code, and engage in human-like dialogue. Few foresaw its use as the voice and brain of an automated rifle system.

The exact technical details remain unclear, but OpenAI’s Realtime API likely played a central role. This tool, designed for voice-enabled applications, allows developers to build conversational systems capable of responding to complex queries. In this case, however, the same API was used to give a weapon system a voice—and the ability to follow orders.
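The mechanism that makes this kind of bridge possible is "tool calling," a standard feature of modern conversational APIs: the developer registers a function schema, the model responds to a spoken or typed command with structured arguments rather than prose, and the application routes those arguments to whatever code it likes. The sketch below is purely illustrative and does not reflect STS 3D's actual setup, which was never published. It uses a hypothetical `rotate` tool and a mock actuator, and simulates the model's tool-call output locally instead of contacting OpenAI's API.

```python
import json

# Hypothetical tool schema, in the general JSON Schema shape that
# tool-calling APIs expect. The model never moves anything itself;
# it only emits a name plus arguments matching this description.
ROTATE_TOOL = {
    "name": "rotate",
    "description": "Point the platform at a bearing, in degrees from center.",
    "parameters": {
        "type": "object",
        "properties": {"azimuth": {"type": "number"}},
        "required": ["azimuth"],
    },
}

def rotate_mock(azimuth: float) -> str:
    """Stand-in for real hardware: just reports the commanded bearing."""
    return f"rotated to {azimuth:+.0f} degrees"

# The application decides what each tool name actually does.
HANDLERS = {"rotate": rotate_mock}

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call to the matching local handler."""
    args = json.loads(tool_call["arguments"])  # arguments arrive as a JSON string
    return HANDLERS[tool_call["name"]](**args)

# Simulated model output for "we're under attack from the front left
# and front right": one structured call per direction, no prose needed.
simulated_calls = [
    {"name": "rotate", "arguments": '{"azimuth": -45}'},
    {"name": "rotate", "arguments": '{"azimuth": 45}'},
]

for call in simulated_calls:
    print(dispatch(call))
```

The point of the sketch is that the AI model's role ends at producing structured text; everything downstream of `dispatch` is ordinary application code, which is exactly why a general-purpose conversational API can be repurposed for hardware it was never designed to control.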

The video showcasing STS 3D’s creation quickly went viral. Some saw it as a chilling portent of what happens when consumer-grade AI meets weaponry. Others, with dark humor, likened it to Skynet from The Terminator.

For its part, OpenAI cut off STS 3D’s access to its services after the videos gained traction, citing a usage policy that prohibits using “our service to harm yourself or others,” including the “development or use of weapons.”

Here’s where things get really interesting, though: OpenAI is actually eyeing military contracts.

Dystopia Much?

Back in January 2024, OpenAI removed from its usage policy a direct ban on “activity that has high risk of physical harm,” which specifically included “military and warfare” and “weapons development.” Just one week later, the company announced a cybersecurity partnership with the Pentagon.

Then, in December 2024, OpenAI announced a partnership to produce AI weapons with Anduril Industries, a California-based defense contractor that makes AI-powered drones, missiles, and surveillance systems. In the same month, Anduril secured a $1 billion, three-year contract with the Pentagon to develop battlefield AI tools. Among its creations is the Sentry system, already in use to monitor borders and coastlines worldwide.

Now, the two companies are developing an AI system designed to share real-time battlefield data and make split-second decisions—decisions that could include life or death. Critics argue that these moves contradict OpenAI’s original mission to develop AI that “benefits humanity.” For now, the company maintains that its work in defense is aligned with its commitment to safety and ethical standards.

If a hobbyist can build lethal AI systems, imagine what professional defense contractors can achieve. From reports of drones equipped with AI targeting systems in Ukraine to the Israel Defense Forces’ ‘Lavender’ and ‘Gospel’ systems used to identify targets in Gaza, AI in conflict is already a reality. The most alarming variety is fully autonomous weapons systems (AWS), which can identify, select, and engage human targets entirely on their own. Alexander Schallenberg, Austria’s Minister for Foreign Affairs, described the growing risks of AI in weapons as “this generation’s Oppenheimer moment,” referring to the development and subsequent use of the atomic bomb in the 1940s.

But the entrance of hobbyists into this space is a newer—and potentially more dangerous—development. Unlike corporate or government programs, these DIY projects operate outside established regulations, leaving little accountability for their creators.

What’s Next?

For years, the United Nations and human rights organizations have warned about the dangers of autonomous weapons. These systems, critics argue, remove human oversight from the act of killing, making war faster, cheaper, and potentially more indiscriminate.

Yet the warnings have largely gone unheeded. While governments debate the ethics of autonomous weapons, engineers like STS 3D are already building them. As one online commenter on the viral video put it, “The genie’s out of the bottle.”

As AI becomes increasingly powerful and accessible, the line between creative experimentation and dangerous innovation grows thinner.

