From Blacklistednews:

Will robots soon be able to teach themselves … everything?

There’s a robot in California teaching itself to walk. Its name is Darwin, and like a toddler, it teeters back and forth in a UC Berkeley lab, trying and falling, and then trying again before getting it right. But it’s not actually Darwin doing all this. It’s a neural network designed to mimic the human brain.

Darwin’s baby steps speak to what many researchers believe will be the greatest leap in robotics — a kind of general machine learning that allows robots to adapt to new situations rather than respond to narrow programming.

Developed by Pieter Abbeel and his team at UC Berkeley’s Robot Learning Lab, the neural network that allows Darwin to learn is not programmed to perform any specific function, like walking or climbing stairs. Instead, the team is using what’s called “reinforcement learning” to try to make the robot adapt to new situations the way a human child would.

Like a child’s brain, reinforcement learning relies on trial and error: the robot tries an action, observes whether it helped, and gradually favors the actions that work.

“Imagine learning a new skill, like how to ride a bike,” said John Schulman, a Ph.D. candidate in computer science at UC Berkeley in Abbeel’s group. You’re going to fall a lot, but then, “after some practice, you figure it out.”
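To make the trial-and-error idea concrete, here is a minimal sketch of tabular Q-learning on an invented toy “balance” task. It is not the Berkeley team’s actual method (they train deep neural-network controllers with far more sophisticated algorithms), and the environment, states, and rewards below are assumptions made up purely for illustration.

```python
import random

# Toy stand-in for a balance task: the "robot" occupies one of five lean
# states (0 = fallen left, 4 = fallen right, 2 = upright) and can push
# left or right. This environment is invented for illustration only.
N_STATES, ACTIONS = 5, (-1, +1)
UPRIGHT, FALLEN = 2, (0, 4)

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = max(0, min(N_STATES - 1, state + action))
    if nxt in FALLEN:
        return nxt, -1.0, True                     # fell over: penalty, episode ends
    return nxt, (1.0 if nxt == UPRIGHT else 0.0), False

# Tabular Q-learning: Q[state][action] estimates long-term reward.
Q = {s: {a: 0.0 for a in ACTIONS} for s in range(N_STATES)}
alpha, gamma, epsilon = 0.1, 0.9, 0.2              # learning rate, discount, exploration

for episode in range(500):
    state = UPRIGHT
    for t in range(50):
        # Trial and error: sometimes explore a random push,
        # otherwise exploit the action that has worked best so far.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(Q[state], key=Q[state].get)
        nxt, reward, done = step(state, action)
        # Nudge the estimate toward the reward plus discounted future value.
        best_next = 0.0 if done else max(Q[nxt].values())
        Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
        state = nxt
        if done:
            break

# Print the learned policy for the non-fallen states.
print({s: max(Q[s], key=Q[s].get) for s in range(1, 4)})
```

After a few hundred episodes of falling and retrying, the table converges on pushes that keep the toy robot near the upright state, which is the same learn-by-practice loop Schulman describes with the bicycle analogy, just in its simplest possible form.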

Read More…