Deep Learning for Robust Robot Control (DL-foRCe)

Abstract:

While robots can flawlessly execute a set of commands to accomplish a task, these commands are mostly hand-coded. There is a need for effective learning methods that can deal with the uncertainty in the robot's environment, in particular when only broad goals are specified and the learning algorithm has to discover the motor commands that achieve them. This typically involves reinforcement learning (RL). However, current RL for robotics tasks relies on ad hoc function approximators and is typically not robust to changes in the task, the environment, or the robot itself (e.g., compliant actuators or wear and tear). The aim of this project is to integrate two emerging notions to make reinforcement learning for robot control more robust and efficient: dynamic feedback control policies for robust control, combined with deep neural networks that learn low-dimensional parameterizations of such policies. This combination promises a generic and robust approach to reinforcement learning for robot control.
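
To illustrate the idea of a neural network producing a low-dimensional parameterization of a dynamic feedback control policy, the following is a minimal sketch (not the project's actual code). All names, the network structure, and the layout of the controller matrices are assumptions made purely for illustration: a small network maps a goal/context vector to a flat parameter vector, which is unpacked into the matrices of a linear dynamic (stateful) feedback controller.

```python
# Illustrative sketch only; names, dimensions, and parameter layout are assumptions.
import numpy as np


class ParameterNet:
    """Tiny two-layer network: goal/context -> flat controller parameter vector."""

    def __init__(self, ctx_dim, hidden_dim, param_dim, rng):
        self.W1 = rng.standard_normal((hidden_dim, ctx_dim)) * 0.1
        self.W2 = rng.standard_normal((param_dim, hidden_dim)) * 0.1

    def __call__(self, context):
        h = np.tanh(self.W1 @ context)
        return self.W2 @ h  # low-dimensional controller parameters


class DynamicFeedbackController:
    """Linear dynamic feedback policy with internal state x_c:
    u = C x_c + D e,  x_c <- A x_c + B e."""

    def __init__(self, params, state_dim, err_dim, act_dim):
        # Unpack the flat parameter vector into (A, B, C, D);
        # this particular layout is an assumption for the sketch.
        sizes = [state_dim * state_dim, state_dim * err_dim,
                 act_dim * state_dim, act_dim * err_dim]
        A, B, C, D = np.split(params, np.cumsum(sizes)[:-1])
        self.A = A.reshape(state_dim, state_dim)
        self.B = B.reshape(state_dim, err_dim)
        self.C = C.reshape(act_dim, state_dim)
        self.D = D.reshape(act_dim, err_dim)
        self.x = np.zeros(state_dim)  # controller's internal (dynamic) state

    def step(self, error):
        u = self.C @ self.x + self.D @ error       # motor command
        self.x = self.A @ self.x + self.B @ error  # update internal state
        return u


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    state_dim, err_dim, act_dim, ctx_dim = 2, 3, 3, 4
    param_dim = (state_dim * state_dim + state_dim * err_dim
                 + act_dim * state_dim + act_dim * err_dim)
    net = ParameterNet(ctx_dim, hidden_dim=16, param_dim=param_dim, rng=rng)
    goal = rng.standard_normal(ctx_dim)  # broad goal specification
    controller = DynamicFeedbackController(net(goal), state_dim, err_dim, act_dim)
    for _ in range(5):
        tracking_error = rng.standard_normal(err_dim)
        print(controller.step(tracking_error))
```

In an RL setting, the network weights (and thus the induced controller) would be adjusted from reward signals rather than hand-tuned; the sketch only shows how a low-dimensional parameter vector can define a stateful feedback policy.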

Project Type: NWO Natural Artificial Intelligence; 2015-2019

Members: ir. Tim de Bruin, Dr.-Ing. Jens Kober, Prof. Dr. Sander Bohté, Prof. Karl Tuyls, Prof. Dr. Robert Babuška

Publications