Robot learns by doing. Starts off plain stupid, then grows smarter – just like us
Using a novel deep learning algorithm, a team at UC Berkeley demonstrated a robot that learns on the fly and performs tasks that weren’t pre-programmed. It starts off shy and clumsy, but eventually gets the hang of it. For instance, after fumbling around its environment for a while, the robot taught itself, with no further instructions, to assemble LEGO bricks or twist caps onto pill bottles.
We humans are easily impressed by robots. They’re fast, efficient, and seem to do many of our jobs better than we ever will. In fact, half of all jobs today (drivers, tellers, call center operators, etc.) could be replaced by bots within 20 years. But most robots aren’t smarter than a vacuum cleaner. Ask them to do anything beyond their dull, pre-programmed repetitive tasks and … well, they won’t respond at all, since they don’t have what you or I would call “thinking” or “consciousness”. This is where artificial intelligence comes in, you might say. Well, the artificial intelligence dream gained a lot of hype during the ’70s, but it soon died off after specialists realized their forecasts of a sentient artificial being coming to life by the year 2000 were way off. Recently, the movement has gathered steam again. Berkeley’s BRETT (Berkeley Robot for the Elimination of Tedious Tasks) is one prime example.
The bot was developed using deep learning algorithms, which are inspired by the way networks of neurons in the human brain fire and interact to help us make sense of the world.
“For all our versatility, humans are not born with a repertoire of behaviours that can be deployed like a Swiss army knife, and we do not need to be programmed,” said robotics researcher Sergey Levine in a press release. “Instead, we learn new skills over the course of our life from experience and from other humans. This learning process is so deeply rooted in our nervous system, that we cannot even communicate to another person precisely how the resulting skill should be executed. We can at best hope to offer pointers and guidance as they learn it on their own.”
Google’s Street View or the equally impressive Siri are just a taste of what’s to come from the field of AI. Building similar networks for screw-and-bolt robots has proven a lot more difficult, however. BRETT is quite a milestone in this respect, since it employs deep learning to master new motor tasks.
In a series of experiments, BRETT was first tasked with assembling a toy airplane wheel. Clumsy and cumbersome at first, the robot eventually finished the operation in 12 long minutes. However, applying what it had learned previously, BRETT then quickly completed other motor tasks like stacking LEGO bricks or placing pegs into holes.
Key to its deep learning algorithm is a reward function, which scores movements higher when they bring the robot closer to completing its task, so that successful motions are reinforced over time.
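The article doesn’t spell out the exact update rule BRETT uses (the Berkeley team’s controller was reportedly trained with a far more sophisticated guided-policy-search method), but the core idea — score movements that complete a task higher, and pick high-scoring movements more often — can be illustrated with a toy sketch in Python. The action names, the reward function, the learning rate, and the softmax selection below are all my own illustrative assumptions, not BRETT’s actual mechanics:

```python
import math
import random

random.seed(0)  # make the toy run reproducible

ACTIONS = ["twist", "push", "pull", "lift"]

# Hidden from the agent: only "twist" completes our imaginary
# cap-on-pill-bottle task. The agent must discover this by trial.
def reward(action):
    return 1.0 if action == "twist" else 0.0

# Learned preference score per movement, all starting at zero.
scores = {a: 0.0 for a in ACTIONS}

def softmax_choice(scores, temperature=0.5):
    """Pick an action at random, favoring higher-scoring movements."""
    weights = [math.exp(scores[a] / temperature) for a in ACTIONS]
    r = random.random() * sum(weights)
    for a, w in zip(ACTIONS, weights):
        r -= w
        if r <= 0:
            return a
    return ACTIONS[-1]

LEARNING_RATE = 0.2
for trial in range(200):
    action = softmax_choice(scores)
    r = reward(action)
    # Nudge the score toward the observed reward: movements that
    # lead to a completed task end up scoring higher than the rest.
    scores[action] += LEARNING_RATE * (r - scores[action])
```

After 200 trials, `scores["twist"]` approaches 1 while the other movements stay at 0, so the softmax selection comes to strongly prefer the successful motion — the same trial-and-error improvement the article describes, stripped down to a four-armed bandit.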
“We still have a long way to go before our robots can learn to clean a house or sort laundry, but our initial results indicate that these kinds of deep learning techniques can have a transformative effect in terms of enabling robots to learn complex tasks entirely from scratch,” said Pieter Abbeel of UC Berkeley’s Department of Electrical Engineering and Computer Sciences. “In the next five to 10 years, we may see significant advances in robot learning capabilities through this line of work.”
Tibi is a science journalist and co-founder of ZME Science. He writes mainly about emerging tech, physics, climate, and space. In his spare time, Tibi likes to make weird music on his computer and groom felines.