
NeuroHRI

Job Announcement of the Neurorobotics Lab

Join us in establishing neurotechnological human-robot interaction


We are looking for a highly motivated PhD student to work at the intersection of reinforcement learning and neuroscience!


Summary

In this research project, the Robot Learning Lab of Prof. Abhinav Valada, the Neuromedical AI Lab of Prof. Tonio Ball, and the Neurorobotics Lab of Prof. Joschka Boedecker are collaborating to investigate the principles of interaction between the brain and novel autonomous robotic systems. More specifically, robotic systems controlled by brain-machine interfaces will be developed to perform service tasks. 

An important future target group of these systems is severely paralyzed patients, who have no other means of providing feedback on the activities performed by the robots. Therefore, novel methods will be developed to measure magnetoencephalography (MEG) with optically pumped magnetometers (OPMs) and to integrate the decoded brain signals both into the learning of new tasks and into the adaptation of existing robotic skills.

Previously, we developed a system that allows a robot to learn new skills from its own experience while receiving interactive feedback from a user. This way, complex skills can be learned in a real-world setting with only one hour of training, using easy-to-provide evaluative and corrective feedback. For more details, visit the project website at http://ceiling.cs.uni-freiburg.de. Given the low dimensionality of the required feedback, we aim in the next step to decode the user's intentions and preferences from their brain signals and to incorporate them into the learning process as an alternative form of feedback. As an additional perspective, this project will investigate to what extent the user of the system can be offered a smooth transition between high-level and low-level control (“sliding autonomy”). This would enable the user, if desired, to control the robot down to the level of individual motors, e.g., to demonstrate new solutions via the developed interactive feedback approach.
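
To make the role of this low-dimensional feedback concrete, here is a minimal, purely illustrative Python sketch (our own example, not the CEILing implementation): a toy linear policy acts on its own experience while a simulated user, standing in for a real user or a future brain-signal decoder, returns either evaluative (approve/disapprove) or corrective (replacement action) feedback, and the policy is refit on the preferred actions. All names, such as simulated_user_feedback, are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM = 4, 2

# Hypothetical linear policy: action = W @ state
W = rng.normal(scale=0.1, size=(ACTION_DIM, STATE_DIM))

def simulated_user_feedback(state, action):
    """Stand-in for the human user (or, later, a brain-signal decoder).
    Returns ("evaluative", +1.0 / -1.0) or ("corrective", better_action)."""
    target = -state[:ACTION_DIM]            # pretend the desired behavior is known
    if np.linalg.norm(action - target) < 0.5:
        return "evaluative", +1.0           # approve the executed action
    if rng.random() < 0.5:
        return "evaluative", -1.0           # disapprove without correcting
    return "corrective", target             # demonstrate a better action

buffer = []                                 # (state, preferred_action) pairs
for step in range(200):
    state = rng.normal(size=STATE_DIM)
    action = W @ state + rng.normal(scale=0.05, size=ACTION_DIM)  # exploratory noise
    kind, feedback = simulated_user_feedback(state, action)
    if kind == "corrective":
        buffer.append((state, feedback))    # imitate the corrected action
    elif feedback > 0:
        buffer.append((state, action))      # keep approved actions as targets
    # disapproved, uncorrected actions are simply not imitated in this toy example

    if len(buffer) >= STATE_DIM:            # refit the policy by least squares
        S = np.array([s for s, _ in buffer])
        A = np.array([a for _, a in buffer])
        W = np.linalg.lstsq(S, A, rcond=None)[0].T

In the actual project, the feedback channel would of course come from decoded brain signals rather than a scripted function, and the policy would be far more expressive than a linear map.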

We will further investigate how a robot can perform mobile manipulation skills, e.g., picking up and placing objects while navigating. Such a capability is a critical requirement, as it can significantly increase efficiency and minimize waiting times for users. Since learning requires a large amount of interaction data, we will employ an approach in which several identical robots collect and aggregate data in parallel.