

Current Projects


High-Level Decision Making for Autonomous Driving

In this project, we strive to learn high-level driving strategies that account for performance, safety guarantees, and comfort. Classical rule-based decision-making systems are limited by their fragility in light of ambiguous and noisy sensor data. Deep Reinforcement Learning (DRL) methods offer an attractive alternative for learning decision policies from data automatically and have shown great potential in a number of domains.
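As a toy illustration of the idea (not the project's actual method), high-level driving decisions can be framed as a small discrete action set whose values are learned with reinforcement learning. The action names and hyperparameters below are purely illustrative, and tabular Q-learning stands in for the deep variants used in practice:

```python
import random
from collections import defaultdict

# Hypothetical high-level maneuvers for a lane-change scenario
ACTIONS = ("keep_lane", "change_left", "change_right")

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step; q maps (state, action) -> value."""
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    td_error = reward + gamma * best_next - q[(state, action)]
    q[(state, action)] += alpha * td_error
    return q[(state, action)]

def select_action(q, state, epsilon, rng=random):
    """Epsilon-greedy selection of a high-level maneuver."""
    if rng.random() < epsilon:
        return rng.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])
```

In DRL approaches such as DQN, the table `q` is replaced by a neural network, but the decision structure — a learned value per discrete high-level maneuver — is the same.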

Partners: BMW Group


Optimization of capacity and traffic on railway as a multi-agent reinforcement learning problem

This research focuses on cooperation between agents, which is critical for scheduling tasks where the goal of optimization is to improve the overall schedule and not just the performance of individual agents. This involves considerations of centralized versus decentralized problem settings, cooperative versus competitive goals, homogeneous and heterogeneous agents acting together, and the ability of the solution to scale to large problem settings.
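One standard ingredient in such cooperative settings is credit assignment: rewarding each agent for its contribution to the shared objective rather than only for its own performance. A minimal sketch using a difference-reward formulation — the delay-based objective and the counterfactual default are illustrative assumptions, not details of the project:

```python
def global_reward(delays):
    """Shared team objective: negative total delay over all trains."""
    return -sum(delays)

def difference_reward(delays, i, default=0.0):
    """Agent i's contribution to the team objective, computed as the
    global reward minus a counterfactual where agent i's delay is
    replaced by a default value."""
    counterfactual = delays[:i] + [default] + delays[i + 1:]
    return global_reward(delays) - global_reward(counterfactual)
```

Sharing the global reward keeps agents aligned with the overall schedule; the difference reward additionally tells each agent how much it personally helped or hurt, which aids learning as the number of agents grows.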

Funded by: Deutsche Bahn 


Constrained Reinforcement Learning in Real World Applications

Intelligence for Cities (I4C) is a joint project between the Albert-Ludwigs-University of Freiburg, the Freiburg-based Fraunhofer Institutes ISE and IPM, and the Sustainability Center Freiburg. I4C is funded by the German Federal Ministry for the Environment, Nature Conservation and Nuclear Safety (BMU) through the funding initiative "KI Lighthouses". The goal of I4C is to improve the adaptation of cities to climate change through a process chain spanning data collection, analysis, and environmental forecasting to concrete measures. Our lab is developing a new Reinforcement Learning-based control algorithm for heating systems that is energy-efficient and applicable across different scenarios, while taking the varying electricity market and user preferences into account.
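A common way to phrase such problems as constrained RL is via Lagrangian relaxation: the controller is penalized for comfort violations, and the penalty weight is adapted by a dual update so that constraints are enforced without hand-tuning. The comfort band, cost units, and learning rate below are illustrative placeholders:

```python
def constrained_reward(energy_cost, room_temp, comfort_band, lam):
    """Reward = negative energy cost minus a weighted comfort violation."""
    low, high = comfort_band
    violation = max(0.0, low - room_temp, room_temp - high)
    return -energy_cost - lam * violation, violation

def dual_update(lam, violation, budget=0.0, lr=0.01):
    """Gradient ascent on the Lagrange multiplier, projected to lam >= 0."""
    return max(0.0, lam + lr * (violation - budget))
```

If the policy keeps violating the comfort band, the multiplier grows and comfort dominates the reward; once violations drop below the budget, it shrinks again and energy efficiency takes over.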


Bayesian Deep Learning for Embedded MPC

Deep learning has brought significant progress in a variety of applications of machine learning in recent years. Currently, the most widely-used method of training these deep networks is maximum likelihood, which only gives a point estimate of the parameters that maximize the likelihood of the input data and does not quantify how certain the model is about its predictions. The uncertainty of the model is, however, a crucial factor in robust and risk-averse control applications. Bayesian Deep Learning approaches offer a promising alternative that allows quantifying model uncertainty explicitly, but many current approaches are difficult to scale, incur high computational overhead, and yield poorly calibrated uncertainties.
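One scalable member of this family is Monte Carlo dropout: dropout is kept active at prediction time, and the spread over repeated stochastic forward passes serves as an uncertainty estimate. A schematic sketch, agnostic to the underlying network (the callable and sample count are illustrative):

```python
import statistics

def mc_predict(stochastic_forward, x, n_samples=100):
    """Estimate predictive mean and uncertainty from repeated stochastic
    forward passes (e.g. a network with dropout left active at test time).

    `stochastic_forward` is any callable returning a scalar prediction;
    each call may give a different value due to internal randomness."""
    samples = [stochastic_forward(x) for _ in range(n_samples)]
    return statistics.mean(samples), statistics.stdev(samples)
```

In a control setting, the returned standard deviation can then feed into the MPC cost, e.g. to penalize actions whose predicted outcomes the model is unsure about.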

Funded by: Embedded learning and optimization for the next generation of smart industrial control systems (ELO-X)


Completed Projects


Early Seizure: Optimized early seizure detection for a closed loop intervention device in epilepsy

Brain-Links Brain-Tools Project

This project aimed at a sensitive and specific seizure detection with properties which allow an implementation in a closed-loop device to treat human epilepsy.

Collaboration with: Epilepsy Center and IMTEK


NeuroBots: Brain-controlled intelligent robotic devices

Brain-Links Brain-Tools Project

Brain-controlled prosthetic devices are a powerful tool for allowing paralyzed or otherwise bodily disabled individuals to regain their freedom to interact naturally with the world. Yet as the systems to be controlled become more and more complex, from single legs or arms over mobile robotic platforms to humanoid surrogates, ways must be found to deal with the increased cognitive load and facilitate control. The central idea of this project is that by endowing a prosthetic device with a certain degree of autonomy and adaptivity, control becomes possible on a higher cognitive level rather than on the lower level of raw motor signals.