Deep reinforcement-learning architecture combines pre-learned skills to create new sets of skills on the fly

A team of researchers from the University of Edinburgh and Zhejiang University has developed a way to combine deep neural networks (DNNs) into a system with a new kind of learning ability. The group describes the architecture and its performance in the journal Science Robotics.

Deep neural networks learn functions by training repeatedly on many examples. To date, they have been used in a wide variety of applications, such as recognizing faces in a crowd or deciding whether a loan applicant is creditworthy. In this new effort, the researchers combined several DNNs developed for different tasks to create a system with the benefits of all of its constituent networks. They report that the resulting system was more than the sum of its parts: it was able to learn new functions that none of the DNNs could perform alone. The researchers call it a multi-expert learning architecture (MELA).

More specifically, the work involved training several DNNs for different functions: one learned to make a robot trot, for example, while another learned to navigate around obstacles. All of these expert DNNs were then connected to a gating neural network that, as it controlled a robot moving around its environment, learned over time to call on whichever expert's skill set the current situation required. The resulting system could carry out all of the skills of the combined DNNs.
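The gating idea described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the "experts" here are stand-in linear maps rather than pre-trained deep networks, and the gating network is a single random matrix; all names (`experts`, `gate`, `mela_step`) are hypothetical.

```python
import numpy as np

def softmax(z):
    """Turn raw gating scores into positive weights that sum to 1."""
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
state_dim, action_dim, n_experts = 8, 4, 3

# Hypothetical expert "policies": each maps a robot state vector to motor
# commands. In MELA these would be pre-trained deep networks (trotting,
# obstacle avoidance, fall recovery); here they are simple linear stand-ins.
experts = [rng.standard_normal((action_dim, state_dim)) for _ in range(n_experts)]

# Hypothetical gating network: produces one blending weight per expert
# from the current state. In the real system this network is trained so
# the right expert dominates in the right situation.
gate = rng.standard_normal((n_experts, state_dim))

def mela_step(state):
    weights = softmax(gate @ state)                     # trust in each expert now
    actions = np.stack([W @ state for W in experts])    # every expert's proposal
    return weights @ actions                            # weighted blend

state = rng.standard_normal(state_dim)
action = mela_step(state)
print(action.shape)  # (4,) -- one blended motor command vector
```

Because the gating weights vary continuously with the state, the blended controller can mix experts in proportions it was never explicitly taught, which is the mechanism behind the emergent combined skills the article describes.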

Narrated video discussing how MELA works and its novelty. Yang et al., Sci Robot. 5, eabb2174 (2020)

But that was not the end of the exercise. As the MELA learned more about its constituent experts and their abilities, it discovered, through trial and error, ways to use them together that it had not been taught: how to combine getting up after a fall with handling a slippery floor, for example, or what to do if one of its motors failed. The researchers suggest their work marks a new milestone in robotics research, offering a paradigm in which humans do not have to intervene when a robot encounters problems it has not experienced before.

Video of outdoor experiments where a MELA-programmed robot recovers from being kicked to the ground. Yang et al., Sci Robot. 5, eabb2174 (2020)
Video of the four-legged robot trotting on different kinds of surfaces, including slippery ones. Yang et al., Sci Robot. 5, eabb2174 (2020)
Video of the MELA-programmed robot trotting on pebbles and grass outdoors; when a human unexpectedly pushes it to the ground, the robot recovers quickly thanks to MELA. Yang et al., Sci Robot. 5, eabb2174 (2020)
Video of MELA encountering various challenges in simulations. Yang et al., Sci Robot. 5, eabb2174 (2020)
Video of less efficient fall recovery using default controllers of the four-legged robot. Yang et al., Sci Robot. 5, eabb2174 (2020)
Using MELA, a four-legged robot learns new skills from pre-trained skills. Yang et al., Sci Robot. 5, eabb2174 (2020)


More information:
Chuanyu Yang et al. Multi-expert learning of adaptive legged locomotion, Science Robotics (2020). DOI: 10.1126/scirobotics.abb2174

© 2020 Science X Network

Citation:
Deep reinforcement-learning architecture combines pre-learned skills to create new sets of skills on the fly (2020, December 10)
retrieved 10 December 2020
from https://techxplore.com/news/2020-12-deep-reinforcement-learning-architecture-combines-pre-learned.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
