Neural computations: Learning and dynamics in recurrent networks
Organizers
Manuel Beiran | ENS, Paris, France
Friedrich Schuessler | Technion, Haifa, Israel
Abstract
Trained recurrent neural networks (RNNs) have become an important tool for understanding how biological neural networks perform cognitive tasks. On the one hand, RNNs can help us understand the computations needed to solve a task, and how these are implemented by the network. On the other hand, learning itself can be studied in RNNs, with the goal of uncovering general underlying mechanisms that are also relevant in biology. Both aspects, learning and dynamics, are far from fully understood, but recent years have brought exciting insights.
In this workshop, we will hear about recent work from both perspectives. We will see how the activity of trained RNNs can be framed in the language of dynamical systems, revealing how networks implement computations and multitasking based on low-dimensional activity. We will further hear about the interaction between neural dynamics and learning, and how biological constraints can be incorporated into learning algorithms. Finally, we will see how trained RNNs can be applied to understand neural recordings and make experimental predictions. The workshop will end with a discussion of how insights into learning and dynamics can together form a unified perspective on tasks, learning algorithms, and the necessary underlying computations.
Schedule
time (CEST) | speaker | talk
14:00 | Rémi Monasson (ENS, Paris, France) | Low-Dimensional Manifolds Support Multiplexed Integrations in Recurrent Neural Networks
14:30 | Devika Narain (Erasmus MC, Rotterdam, The Netherlands) | Bayesian computations through latent cortical dynamics
15:00 | Ran Darshan (HHMI Janelia Research Campus, Ashburn, USA) | Learning manifold attractors in heterogeneous networks
15:30 | Guangyu Robert Yang (Columbia University, New York, USA) | NeuroGym: A dataset for studying RNNs across many neuroscience tasks
16:00 | Break (30 min)
16:30 | Kanaka Rajan (Mount Sinai Hospital, New York, USA) | Recurrent Neural Network Models of Adaptive and Maladaptive Learning
17:00 | Alexis Dubreuil (ENS, Paris, France) | Disentangling the roles of dimensionality and sub-populations in neural computation
17:30 | Cristina Savin (New York University, USA) | A Unified Framework of Online Learning Algorithms for Training Recurrent Neural Networks
18:00 | Discussion
18:30 | End