A Cartoonish Crash Test Raises Real Questions About Tesla’s Autopilot

Mark Rober pits Tesla against lidar in a real-life Wile E. Coyote experiment.

Tudor Tarita
March 19, 2025 @ 10:34 am


Mark Rober wanted to know if Tesla’s self-driving car could be deceived in the most cartoonish way possible.

The former NASA engineer and YouTube sensation built a wall designed to trick a car. The wall stretched across the road, painted to look like the asphalt continuing straight ahead. Would Tesla’s Autopilot system recognize the deception in time to stop? Or would it speed forward like Wile E. Coyote chasing the Road Runner?

In the moment of truth, a Tesla Model Y, equipped with its camera-based Autopilot system, barreled forward at 40 miles per hour. The result was spectacular in the worst possible way: a gaping, cartoon-style hole as the car smashed through the fake road. Meanwhile, a second vehicle fitted with lidar, a laser-based sensing system, stopped cleanly before impact.

The video was an instant hit, racking up 10 million views in just two days. But as with anything related to Tesla, the crash test didn’t just spark curiosity—it ignited a firestorm.

Beep Beep! Credit: Mark Rober

Camera vs Lidar

Tesla’s approach to driver assistance has long been a subject of debate. Unlike most autonomous vehicle developers, who rely on a combination of cameras, radar, and lidar, Tesla has doubled down on vision alone. The company removed radar from its vehicles in 2021, betting that neural networks trained on camera data could replicate, and eventually surpass, human perception.

Elon Musk has called lidar a “fool’s errand.” But Rober’s test suggests that, at least for now, the technology has a clear advantage. The lidar-equipped vehicle correctly identified the fake road as an obstacle, while the Tesla, trusting its cameras, saw only an open highway.

That wasn’t Tesla’s only fumble. In a separate test, Autopilot successfully avoided a stationary dummy and another that suddenly ran into its path. But in fog and heavy rain, it failed, flattening the child-sized dummy. The lidar system, by contrast, detected the mannequin every time.

This shouldn’t have been a surprise. Cameras struggle with poor visibility. Lidar, which actively scans the environment using lasers, doesn’t. The technology is more expensive and requires significant data processing, but as Rober’s experiment demonstrated, it can see what cameras miss.
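For the curious, the core trick behind lidar ranging is simple physics: the sensor fires a laser pulse and times the echo. Here is a back-of-the-envelope sketch of that time-of-flight calculation (an illustrative example only, not Rober’s setup or any real sensor’s software):

```python
# Time-of-flight ranging, the principle behind lidar:
# a laser pulse travels out to an obstacle and back, so the
# distance is half the round-trip time times the speed of light.

SPEED_OF_LIGHT_M_S = 299_792_458  # meters per second

def distance_from_echo(round_trip_seconds: float) -> float:
    """Distance in meters to the surface that reflected the pulse."""
    return SPEED_OF_LIGHT_M_S * round_trip_seconds / 2

# An echo arriving ~133 nanoseconds after the pulse left
# means the obstacle is roughly 20 meters ahead.
print(f"{distance_from_echo(133e-9):.1f} m")  # prints "19.9 m"
```

Because the sensor measures distance directly, a wall painted to look like open road still registers as a solid surface 20 meters out. A camera, by contrast, has to infer depth from the image, and a convincing painting can defeat that inference.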

Nnnnnope! Credit: Mark Rober

Controversy and Conspiracies

The test was not without controversy, however. Some Tesla supporters questioned whether Autopilot had even been engaged during the wall crash. Others claimed Rober manipulated the footage, secretly pushing an anti-Tesla agenda on behalf of Big Lidar.

The scrutiny became so intense that Rober released unedited footage showing that Autopilot had, in fact, been active. But eagle-eyed viewers noticed something else: just before impact, the system appeared to disengage. That led to a new round of speculation—was this a deliberate Tesla feature to avoid responsibility for crashes?

It wouldn’t be the first time the issue had come up. In 2022, the National Highway Traffic Safety Administration (NHTSA) investigated dozens of Tesla crashes involving stationary emergency vehicles. In 16 cases, Autopilot “aborted vehicle control less than one second prior to the first impact.” Critics suspect this is a convenient way to avoid liability. Unsurprisingly, Tesla has denied any wrongdoing.

Well that went well. Credit: Mark Rober

The Real Takeaway

Rober’s test wasn’t perfect, and we can’t independently verify that nothing was tampered with. Ultimately, the video was designed to entertain, and some elements, like the exaggerated hole in the wall, were played up for spectacle. But the core lesson is hard to ignore: Autopilot is not a true self-driving system. It’s a Level 2 driver assistance feature, meaning the driver is expected to remain engaged at all times.

Simply put, you can’t rely on it. You’re still driving the car.

Tesla’s defenders argue that Full Self-Driving (FSD), the company’s more advanced software, wasn’t tested. But FSD relies on the same camera-based approach, raising questions about whether it would have fared any better.

And while a painted wall might seem like an absurd scenario, the same underlying problem—camera-based systems misinterpreting their surroundings—has led to real-world tragedies. In 2016, a Tesla driver was killed when Autopilot failed to recognize a truck trailer crossing its path. The system mistook the bright white trailer for open sky.

Even if most drivers won’t encounter a Wile E. Coyote-style trap, fog, rain, and other visibility issues are everyday realities. And if a system that claims to be the future of autonomous driving can’t handle those, what else is it missing?

