

Anti-robot protest at SXSW is just a marketing stunt - the message isn't


Tibi Puiu
March 17, 2015 @ 9:39 am


About two dozen University of Texas students gathered on Saturday at the entrance to the SXSW tech and entertainment festival to voice their concerns about the risks artificial intelligence might pose to humanity. Though largely ignored by hipster pedestrians nearby, the protest does raise some legitimate concerns, even though technology is still far off from any Skynet scenario. Thankfully, we might never cross this sci-fi threshold.

Protesters with "Stop the Robots" signs at SXSW. Photo: Rick Jervis / Twitter

“I say robot, you say no-bot!” chanted the mob, which bore signs reading “Stop the Robots” and “Humans are the future.”

“This is about morality in computing,” said Adam Mason, 23, who organized the protest.


The group, called Stop the Robots, said its goal was to shine a light on the dangers of artificial intelligence. The protesters seemed very organized, matching shirts included. Too organized, according to io9, which eventually found out the whole protest was a marketing stunt meant to promote a dating app. Ironically, the app is built on AI that matches people based on their interests. Personally, I think this was really lame (which is why I decided not to name the app in question). Lame as it is, though, the fake protest does open up an interesting discussion, for what it’s worth.

Stinkin’ robots

The viral protest stunt comes in the wake of a heated media debate over artificial intelligence. Prominent figures in science and technology like Bill Gates and Stephen Hawking have warned that current efforts to create thinking machines might pose a threat to humanity. “The development of full artificial intelligence could spell the end of the human race,” Hawking told the BBC. Elon Musk, chief executive of rocket-maker SpaceX, also fears artificial intelligence; Musk recently donated $10 million to the Future of Life Institute.

“In all those stories where there’s the guy with the pentagram and the holy water, it’s like, he’s sure he can control the demon. Didn’t work out,” Musk said at an MIT talk.

Artificial intelligence, though still primitive, is already making its way into our lives. Apple’s Siri and Google’s driverless cars are prime examples already in the mainstream. Artificial neural networks can learn all sorts of things by themselves, like recognizing cats in photos. Yet all of these are extremely limited. Siri is easily fooled and not that helpful on many occasions. Likewise, even though it takes hundreds of thousands of photos to train it, an artificial neural network is still less accurate at spotting cats than a child. But technology is growing – fast. Senior ZME Science readers surely remember how, only a couple of decades ago, things like microprocessors were reserved for the elite. Some might remember the days when computers were the size of buildings and the transistor wasn’t even around. Nowadays, everybody has a smartphone that packs more computing power than all the computers combined that sent man to the Moon.

A lurking superintelligence

The paper clip monster. Illustration by Jacob Escobedo

The ultimate concern is that AI might grow to such an extent that it reaches a sort of singularity – an AI superintelligence. Nick Bostrom, a philosopher who directs the Future of Humanity Institute at the University of Oxford, actually wrote a book called Superintelligence. In it, he outlines an oversimplified scenario. Imagine people in the not-so-distant future program a really smart machine tasked with making as many paper clips as possible. The paper clip bot, extremely intelligent and resourceful, will decide to maximize its output and efficiency, given its goal. To become more efficient, the machine will likely decide it needs more computing power to become smarter, and better at making paper clips. Okay, just cap the damn thing at one million clips. The machine, however, is inherently plagued by something like obsessive-compulsive disorder. It will stop at one million paper clips, but it then needs to count them and check their quality. With each doubt about paper clip quality, the machine will want more computing power to reduce its uncertainty. In doing so, it might not stop until all the Earth is turned into a giant computer, meant to feed the machine’s growing appetite for energy and computing power – all so it can count paper clips better. Of course, there would be no room for humans.

This is only a thought experiment, but you get the idea. Yet some robotics and AI experts aren’t convinced. Rodney Brooks, a founder of iRobot and Rethink Robotics (featured by ZME previously), is skeptical of such scenarios. He believes folks like Bostrom and even Hawking misunderstand what computers are doing when we say they’re thinking or getting smart. Extrapolating from the state of AI today to suggest that superintelligence is looming is “comparable to seeing more efficient internal combustion engines appearing and jumping to the conclusion that warp drives are just around the corner,” Brooks wrote recently on Edge.org. “Malevolent AI” is nothing to worry about, he says, for a few hundred years at least.

Indeed, a more immediate threat that AI and automation in general pose to humanity has to do with social issues (see our previous article: half of all US jobs are at risk of being taken over by computers). Technology has been replacing human labor since the Industrial Revolution; there’s nothing surprising here. But the pace at which robots are replacing human jobs, from plant workers to clerks, is greater than it ever was. Traditionally, a transitional period allowed communities to retool and hone new skills suited to the new social landscape. That transition is getting ever steeper, however.

Ultimately, Musk and Hawking might be fretting over nothing, but it doesn’t hurt to be safe. Eliezer Yudkowsky, a well-known figure in AI, founded the Machine Intelligence Research Institute (MIRI) to develop technical controls that might limit the reach of a self-aware intelligence. These control measures would be far more sophisticated and far-reaching than Asimov’s laws of robotics.

It’s good that we’re aware of the perils of a superintelligence, no matter how far off in time or technical possibility it might lie, but at the same time we might want to look inward as well. After all, historically speaking, the greatest threat to humanity has been and continues to be humanity itself.
