Reinforced Holography
Join us in establishing holographic modulation of neuronal cell populations in vivo, guided by reinforcement learning predictions.
We are looking for a highly motivated PhD student to work at the intersection of reinforcement learning and neuroscience!
Summary
There is considerable heterogeneity in the activity patterns among neurons, especially in higher cortical areas. To better understand the role of this functional diversity, it is necessary to dissect neural activity patterns during behavior at the level of individual neurons. However, experimentally addressing the behavioral contributions of functionally distinct but spatially intermingled neurons has been challenging, due to the difficulty of perturbing the activity of individual neurons in a precise spatiotemporal manner in vivo. In this joint project with the Diester Lab, we propose to overcome this challenge by using an inverse reinforcement learning approach to generate predictions from the analysis of neural activity patterns in 2-photon imaging data. In this approach, we first extract a functional representation of the factors driving observed animal behavior, and then link it to the measured neural activity. We use this representation to make precise predictions about expected behavioral changes, based on simulated trials with perturbed neuronal activity.
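To give a flavor of the inverse-RL step, here is a minimal toy sketch of the closed-form idea: if the animal's choices are assumed to follow a Boltzmann policy over Q-values, then differences in Q-values are recoverable directly from observed choice frequencies. This is purely illustrative (the function name and the tabular setting are our own simplification, not the constrained deep variant used in the project):

```python
import numpy as np

def q_differences_from_choices(action_counts, eps=1e-6):
    """Recover Q-value differences from observed choice frequencies,
    assuming a Boltzmann policy pi(a|s) proportional to exp(Q(s, a)).
    Under that assumption: Q(s, a) - Q(s, b) = log pi(a|s) - log pi(b|s).

    action_counts: (n_states, n_actions) array of observed choice counts.
    Returns Q-value differences relative to action 0 in each state.
    """
    counts = action_counts + eps  # avoid log(0) for unobserved actions
    probs = counts / counts.sum(axis=1, keepdims=True)
    log_p = np.log(probs)
    return log_p - log_p[:, [0]]

# Toy data: two states, two actions
counts = np.array([[80, 20],   # state 0: action 0 strongly preferred
                   [10, 90]])  # state 1: action 1 strongly preferred
dq = q_differences_from_choices(counts)
```

The recovered differences then serve as the functional representation that is linked to neural activity in the second step.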
Building on a technology we recently developed, we will train a computational inverse reinforcement learning algorithm to predict the behavior of subjects based on their neuronal activity (Kalweit et al., 2021). This technology is based on Inverse Q-learning (Kalweit et al., 2020) and will allow us to identify neurons that are particularly relevant for specific task aspects (e.g., correct choice, impatience measured as responses before the cue, or late responses). Based on these predictions, we will target the identified cells via our holographic optogenetic stimulation unit. We will develop novel and improved methods for fast data handling, extraction, and analysis to create a near-closed-loop setup in which we read neural activity via 2-photon imaging, extract activity patterns, generate predictions via Inverse Q-learning, and compute spatial masks for the holographic stimulation. With this approach we will be able to causally test the role of individual neurons and address the question of the critical threshold of perturbed neurons.
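The closed-loop logic (read activity, rank neurons by relevance via simulated perturbations, target the top candidates) can be sketched roughly as follows. Everything here is a hypothetical stand-in, including the linear decoder, the synthetic data, and the function names; the actual pipeline would use the trained Inverse Q-learning model as the predictor:

```python
import numpy as np

rng = np.random.default_rng(0)

def relevance_by_ablation(activity, weights):
    """Rank neurons by how much zeroing out their activity changes the
    behavioral prediction, i.e. simulated trials with perturbed activity.
    activity: (n_frames, n_neurons); weights: stand-in linear decoder."""
    base = activity @ weights
    scores = np.empty(activity.shape[1])
    for i in range(activity.shape[1]):
        ablated = activity.copy()
        ablated[:, i] = 0.0  # simulate silencing neuron i
        scores[i] = np.abs(base - ablated @ weights).mean()
    return scores

def stimulation_targets(positions, scores, k=3):
    """Return the (x, y) coordinates of the k most relevant neurons,
    from which a spatial mask for holographic stimulation is built."""
    top = np.argsort(scores)[-k:]
    return positions[top]

# Synthetic 2-photon data: 100 frames x 10 neurons; only three neurons
# actually drive the (stand-in) behavioral readout.
activity = rng.normal(size=(100, 10))
weights = np.zeros(10)
weights[[2, 5, 7]] = [1.5, -2.0, 1.0]
positions = rng.uniform(0, 512, size=(10, 2))  # pixel coords in the FOV

scores = relevance_by_ablation(activity, weights)
targets = stimulation_targets(positions, scores, k=3)
```

In this toy setting, the ablation scores single out exactly the neurons that drive the readout, which is the property the closed loop relies on when selecting cells for stimulation.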
Previous Related Publications
Kalweit G., Kalweit M., Alyahyay M., Jaeckel Z., Steenbergen F., Hardung S., Diester I. and Boedecker J. (2021). NeuRL: Closed-form Inverse Reinforcement Learning for Neural Decoding. ICML 2021 Workshop on Computational Biology.
Kalweit G., Huegle M., Werling M. and Boedecker J. (2020). Deep Inverse Q-learning with Constraints. Advances in Neural Information Processing Systems 33 (NeurIPS 2020).