Feb 11 – 14, 2020
Europe/Stockholm timezone

Discovering interpretable models of neural population dynamics from data (this talk will be streamed)

Feb 12, 2020, 3:00 PM
122:026 (Nordita)



Roslagstullsbacken 17, 106 91 Stockholm, Sweden


Tatiana Engel


Significant advances have been made recently to develop powerful machine learning methods for finding predictive structure in neural population recordings. However, most of these techniques trade off flexibility against interpretability. While simple ad hoc models are likely to distort defining features in the data, flexible models (such as artificial neural networks) are difficult to interpret. We developed a flexible yet intrinsically interpretable framework for discovering neural population dynamics from data. In our framework, population dynamics are governed by a non-linear dynamical system defined by a potential function. The activity of each neuron is related to the population dynamics through a unique firing-rate function, which accounts for the heterogeneity of neural responses. The shapes of the potential and firing-rate functions are simultaneously inferred from data to provide both high flexibility and interpretability. Using this framework, we find that good data prediction does not guarantee accurate interpretation of the model, and we propose an alternative strategy for deriving models with correct interpretation. We demonstrate the power of our approach by discovering metastable dynamics in spontaneous spiking activity in primate area V4.
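To make the model class concrete, the generative setup described in the abstract can be sketched as latent dynamics in a potential plus neuron-specific firing-rate functions. The sketch below is purely illustrative: the double-well potential, the sigmoid rate functions, and all parameters are invented here for demonstration, whereas in the actual framework the shapes of the potential and the firing-rate functions are inferred from data.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi_grad(x):
    # Gradient of an assumed double-well potential phi(x) = x**4/4 - x**2/2,
    # whose two wells give metastable states (cf. metastable dynamics in V4).
    return x**3 - x

def simulate(T=5000, dt=1e-3, sigma=0.5, n_neurons=3):
    # Latent Langevin dynamics: dx = -phi'(x) dt + sigma dW
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = (x[t-1] - phi_grad(x[t-1]) * dt
                + sigma * np.sqrt(dt) * rng.standard_normal())
    # Heterogeneous firing-rate functions (assumed sigmoids with different
    # gains) map the shared latent trajectory to each neuron's rate (Hz).
    gains = np.linspace(2.0, 6.0, n_neurons)
    rates = 20.0 / (1.0 + np.exp(-gains[:, None] * x[None, :]))
    # Spikes are Poisson given the rate in each time bin.
    spikes = rng.poisson(rates * dt)
    return x, rates, spikes

x, rates, spikes = simulate()
```

The inference problem the talk addresses runs in the opposite direction: given only the spike trains, recover interpretable estimates of the potential and the per-neuron rate functions.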
