Dimensionality Reduction and Population Dynamics in Neural Data
From Tuesday 11 February 2020 (09:00) to Friday 14 February 2020 (18:00)
Tuesday 11 February 2020
09:15
Introduction
09:15 - 09:30
Room: 122:026
09:30
Identifying latent manifold structure from neural data with Gaussian process models (this talk will be streamed)
Jonathan Pillow
09:30 - 10:30
Room: 122:026
An important problem in neuroscience is to identify low-dimensional structure underlying noisy, high-dimensional spike trains. In this talk, I will discuss recent advances for tackling this problem in single- and multi-region neural datasets. First, I will discuss the Gaussian Process Latent Variable Model with Poisson observations (Poisson-GPLVM), which seeks to identify a low-dimensional nonlinear manifold from spike train data. This model can successfully handle datasets that appear high-dimensional under linear dimensionality reduction methods such as PCA, and we show that it can identify a 2D spatial map underlying hippocampal place cell responses from their spike trains alone. Second, I will discuss recent extensions to Poisson-spiking Gaussian Process Factor Analysis (Poisson-GPFA), which incorporate separate signal and noise dimensions, as well as a multi-region model with coupling between the latent variables governing activity in different regions. This model provides a powerful tool for characterizing the flow of signals between brain areas, and we illustrate its applicability using multi-region recordings from mouse visual cortex.
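To make the model class concrete, here is a minimal generative sketch of a Poisson-GPLVM in NumPy. The latent dimensionality, RBF kernel, and Gaussian-bump tuning curves are illustrative assumptions chosen to mirror the place-cell example, not the specifics of the talk; the inference code is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, D = 500, 40, 2            # time bins, neurons, latent dimensions

# 1) Smooth D-dimensional latent trajectory drawn from a GP prior (RBF kernel).
t = np.arange(T)
K = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / 50.0 ** 2) + 1e-6 * np.eye(T)
x = np.linalg.cholesky(K) @ rng.standard_normal((T, D))     # latents, T x D

# 2) Nonlinear tuning: each neuron fires around a bump in the latent space,
#    analogous to a place field over a 2D spatial map.
centers = rng.uniform(x.min(), x.max(), size=(N, D))
dist2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
rates = 0.05 + 5.0 * np.exp(-dist2 / (2 * 1.0 ** 2))        # spikes per bin

# 3) Poisson observations: the spike-count matrix the model is fit to.
Y = rng.poisson(rates)

# Inference inverts this generative process: given Y alone, recover a
# low-dimensional x (up to a smooth reparameterization) and the tuning maps.
```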
10:30
Coffee break
10:30 - 11:00
Room: 122:026
11:00
Neural manifolds for the stable control of movement (this talk will be streamed)
Sara Solla
11:00 - 12:00
Room: 122:026
Animals, including humans, perform learned actions with remarkable consistency for years after acquiring a skill. What is the neural correlate of this stability? We explore this question from the perspective of neural populations. Recent work suggests that the building blocks of neural function may be the activation of population-wide activity patterns, the neural modes, rather than the independent modulation of individual neurons. These neural modes, the dominant co-variation patterns of population activity, define a low-dimensional neural manifold that captures most of the variance in the recorded neural activity. We refer to the time-dependent activation of the neural modes as their latent dynamics. We hypothesize that the ability to perform a given behavior in a consistent manner requires that the latent dynamics underlying the behavior also be stable. A dynamic alignment method allows us to examine the long-term stability of the latent dynamics despite unavoidable changes in the set of neurons recorded via chronically implanted microelectrode arrays. We use the sensorimotor system as a model of cortical processing, and find remarkably stable latent dynamics for up to two years across three distinct cortical regions, despite ongoing turnover of the recorded neurons. The stable latent dynamics, once identified, allow for the prediction of various behavioral features via mapping models whose parameters remain fixed throughout these long timespans. We conclude that latent cortical dynamics within the task manifold are the fundamental and stable building blocks underlying consistent behavior.
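The abstract does not spell out the alignment procedure; the sketch below shows one common way such an alignment is done in this literature (per-day PCA followed by canonical correlation analysis between the per-day latent trajectories). The function name, the use of scikit-learn, and the assumption of time-matched data are illustrative, not the authors' exact method.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

def align_latents(day1_rates, day2_rates, n_modes=10):
    """Illustrative alignment of two days' latent dynamics.

    day*_rates: (time, neurons) firing-rate matrices from the same task,
    possibly with different neurons recorded on each day. Assumes the two
    days are time-aligned (e.g. trial-averaged over the same conditions).
    Returns the canonical correlations between the two sets of latent
    trajectories; values near 1 indicate stable latent dynamics.
    """
    z1 = PCA(n_modes).fit_transform(day1_rates)   # latent dynamics, day 1
    z2 = PCA(n_modes).fit_transform(day2_rates)   # latent dynamics, day 2
    cca = CCA(n_components=n_modes, max_iter=1000).fit(z1, z2)
    u, v = cca.transform(z1, z2)
    # correlation of each aligned pair of canonical variables
    return np.array([np.corrcoef(u[:, i], v[:, i])[0, 1] for i in range(n_modes)])
```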
12:00
Lunch break
12:00 - 13:30
Room: Entre Restaurant
13:30
Low dimensional manifolds and temporal sequences of neuronal activity in the neocortex (this talk will be streamed)
Arvind Kumar
13:30 - 14:30
Room: 122:026
With recent advances in technology, it has become possible to record hundreds of neurons simultaneously from awake, behaving animals. The analysis of such high-dimensional neuronal activity has revealed two interesting features: (1) neuronal activity is confined to a rather low-dimensional sub-manifold, and animals find it very difficult (if not impossible) to change these low-dimensional intrinsic manifolds; (2) within the manifold, neuronal activity is organized as temporal (and sometimes spatial) sequences. These two properties provide new insights into the representation of information in the brain. In my talk, I will discuss the origin of these two features of neuronal activity. First, I will argue that the low-dimensional manifold of cortical activity is a consequence of the function the network has learned to perform. I will show that within-manifold changes entail small changes in the synaptic weights, while outside-manifold changes require a massive rewiring of the whole network. This observation provides an explanation of why it is difficult to change the intrinsic manifold of neuronal activity. Next, to address how neuronal activity within a manifold is organized into temporal sequences, I will focus on networks with distance-dependent connectivity. For such networks we have found that when (1) neurons project a small fraction of their outputs in a preferred direction and (2) the preferred directions of neighboring neurons are similar, the network can generate temporal sequences without supervised or unsupervised learning. This generative rule implies the need for a 'correlated spatial anisotropic connectivity'. Such connectivity can arise when neighbouring neurons have similar shapes. In addition, I will argue that spatially patchy patterns of neuromodulator release not only allow for the formation of temporal sequences but also provide a biologically plausible way to dynamically change the arrangement of sequences. Finally, I will discuss the implications of these results for brain dysfunction and the control of neuronal activity dynamics.
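As a rough illustration of the connectivity rule described above, the toy construction below wires a 2D sheet of neurons so that each neuron projects a Gaussian-shaped cloud of connections displaced along its preferred direction, with preferred directions varying smoothly across the sheet. The simple smooth gradient used for the direction map and all parameter values are assumptions made for illustration (related published models use, e.g., Perlin-noise direction maps and full spiking dynamics).

```python
import numpy as np

rng = np.random.default_rng(1)
side = 30                                  # neurons on a side x side periodic grid
n = side * side
coords = np.array([(i, j) for i in range(side) for j in range(side)], float)

# Smooth map of preferred projection directions (neighbouring neurons are similar).
phase = 2 * np.pi * (coords[:, 0] + coords[:, 1]) / side
pref = np.stack([np.cos(phase), np.sin(phase)], axis=1)

# Each neuron sends a Gaussian-shaped cloud of connections centred a few grid
# points away along its preferred direction: anisotropic, locally correlated.
shift, sigma, p_max = 3.0, 2.0, 0.15
sources = (coords + shift * pref)[:, None, :]          # displaced centre per source
targets = coords[None, :, :]
d = (targets - sources + side / 2) % side - side / 2   # periodic distances
p_connect = p_max * np.exp(-(d ** 2).sum(-1) / (2 * sigma ** 2))
W = (rng.random((n, n)) < p_connect).astype(float)     # W[i, j] = 1: connection i -> j
np.fill_diagonal(W, 0.0)                               # no self-connections
```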
14:30
Coffee break
14:30 - 15:00
Room: 122:026
15:00
Disentangling the roles of dimensionality and cell classes in neural computation
Srdjan Ostojic
15:00 - 16:00
Room: 122:026
The description of neural computations currently relies on two competing views: (i) a classical single-cell view that relates the activity of individual neurons to sensory or behavioural variables, and focuses on how different cell classes map onto computations; (ii) a more recent population view that instead characterises computations in terms of collective neural trajectories, and focuses on the dimensionality of these trajectories as animals perform tasks. How these two key concepts, cell classes and low-dimensional trajectories, interact to shape neural computations is, however, not yet understood. Here we address this question by combining machine-learning tools for training recurrent neural networks with reverse-engineering and theoretical analyses of network dynamics.
Wednesday 12 February 2020
09:30
A local synaptic update rule for ICA and dimensionality reduction (this talk will be streamed)
Taro Toyoizumi
09:30 - 10:30
Room: 122:026
Humans can separately recognize individual sources when they sense their mixture. We have previously developed the error-gated Hebbian rule (EGHR) for neural networks, which achieves independent component analysis (ICA). The EGHR approximately maximizes the information flow through the network by updating synaptic strength using local information available at each synapse, which also makes it suitable for neuromorphic engineering. The update is described by the product of the presynaptic activity, the postsynaptic activity, and a global factor. If the number of sensors exceeds the number of sources, the EGHR can perform dimensionality reduction in several useful ways in addition to simple ICA. First, how the sources are mixed can depend on the context. The EGHR can solve this multi-context ICA problem by extracting low-dimensional sources from the high-dimensional sensory inputs. Second, if the input dimensionality is much higher than the source dimensionality, the EGHR can accurately perform nonlinear ICA. I will discuss an application of this nonlinear ICA technique to predictive coding of dynamic sources.
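The three-factor form of the update (presynaptic activity x postsynaptic activity x global factor) can be illustrated with a small NumPy sketch. The log-cosh output "energy", the tanh nonlinearity, the constant reference value E0, and the learning parameters are assumptions made for illustration; they follow the general form described above rather than the exact published rule.

```python
import numpy as np

rng = np.random.default_rng(2)
n_src, T = 2, 200_000
lr = 1e-3

# Two independent super-Gaussian (Laplace) sources, linearly mixed into two sensors.
s = rng.laplace(size=(T, n_src))
A = rng.standard_normal((n_src, n_src))
x_all = s @ A.T

W = 0.1 * rng.standard_normal((n_src, n_src))   # unmixing weights to be learned
E0 = 2.0 * n_src                                # reference value for the global factor

def log_cosh(u):                                # numerically stable log(cosh(u))
    a = np.abs(u)
    return a + np.log1p(np.exp(-2 * a)) - np.log(2)

for xt in x_all:
    u = W @ xt                                  # postsynaptic activities
    g = np.tanh(u)                              # nonlinearity matched to a sparse prior
    E = np.sum(log_cosh(u))                     # scalar "energy" of the outputs
    # three-factor update: presynaptic (xt) x postsynaptic (g) x global factor (E0 - E)
    W += lr * (E0 - E) * np.outer(g, xt)

# If learning succeeds, W @ A approaches a scaled permutation matrix, i.e. each
# output recovers one of the original sources.
print(np.round(W @ A, 2))
```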
10:30
Coffee break
10:30 - 11:00
Room: 122:026
11:00
Stereotyped population dynamics in the medial entorhinal cortex
Soledad Gonzalo Cogno
11:00 - 12:00
Room: 122:026
The medial entorhinal cortex (MEC) supports the brain's representation of space with distinct cell types whose firing is tuned to features of the environment (grid, border, and object-vector cells) or navigation (head-direction and speed cells). These functionally distinct cell types are anatomically intermingled in the superficial layers of the MEC. Since no single sensory stimulus can faithfully predict the firing of these cells, and activity patterns are preserved across environments and brain states, attractor network models postulate that spatially tuned firing emerges from specific connectivity motifs among neurons of the MEC. To determine how those connectivity motifs constrain the self-organized activity in the MEC network, we tested mice in a spontaneous locomotion task under sensory-deprived conditions, when activity is likely determined primarily by the intrinsic structure of the network. Using 2-photon calcium imaging, we monitored the activity of large populations of MEC neurons in head-fixed mice running on a wheel in darkness, in the absence of external sensory feedback tuned to navigation. To reveal network dynamics under these conditions, we applied both linear and non-linear dimensionality reduction techniques to the spike matrix of each individual session. In this way we were able to unveil motifs that involve the sequential activation of neurons over epochs of tens of seconds to minutes ("waves"). To characterize the nature of these waves, we split neurons into ensembles of cells and computed the transition probabilities between ensembles. This temporal analysis revealed stereotyped trajectories across multiple ensembles, lasting up to 2-3 minutes. Waves were not found in spike-time-shuffled data. Waves swept through the entire network of active cells with slow temporal dynamics and did not exhibit any anatomical organization. Furthermore, waves were only partially modulated by behavioural features, such as running epochs and speed. Taken together, our results suggest that a large fraction of MEC-L2 neurons participates in common global dynamics that often take the form of stereotyped waves. These activity patterns might progress through multiple subnetworks and couple the activity of neurons with distinct tuning characteristics in the MEC.
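A bare-bones version of the ensemble/transition analysis might look like the sketch below: neurons are grouped into ensembles by the similarity of their activity traces, each time bin is assigned to its most active ensemble, and transitions between consecutive assignments are counted. The clustering method, the "dominant ensemble" assignment, and the parameter values are illustrative assumptions; the analysis in the talk may differ in its details.

```python
import numpy as np
from sklearn.cluster import KMeans

def ensemble_transitions(spikes, n_ensembles=10):
    """Illustrative ensemble/transition analysis.

    spikes: (neurons, time) matrix of deconvolved activity or spike counts.
    Returns the row-normalized matrix of transition probabilities between
    the ensembles that dominate successive time bins.
    """
    # 1) Group neurons into ensembles by the similarity of their z-scored traces.
    z = (spikes - spikes.mean(1, keepdims=True)) / (spikes.std(1, keepdims=True) + 1e-9)
    labels = KMeans(n_clusters=n_ensembles, n_init=10, random_state=0).fit_predict(z)

    # 2) For each time bin, find the ensemble with the highest mean activity.
    ens_activity = np.stack([z[labels == k].mean(0) for k in range(n_ensembles)])
    dominant = ens_activity.argmax(0)              # dominant ensemble per time bin

    # 3) Count transitions between the dominant ensembles of consecutive bins.
    T = np.zeros((n_ensembles, n_ensembles))
    for a, b in zip(dominant[:-1], dominant[1:]):
        T[a, b] += 1
    return T / T.sum(1, keepdims=True).clip(min=1)
```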
12:00
Lunch break
12:00 - 13:30
Room: Entre Restaurant
13:30
Merging E/I balance and low-dimensional dynamics to understand robustness to optogenetic stimulation in motor cortex (this talk will be streamed)
Maneesh Sahani
13:30 - 14:30
Room: 122:026
Targeted optogenetic perturbations are key to investigating functional roles of sub-populations within neural circuits, yet their effects in recurrent networks may be difficult to interpret. Previous work has shown that optogenetic stimulation of excitatory cells in macaque motor cortex creates large perturbations of task-related activity, but has only subtle effects on ongoing or upcoming behaviour, or on the future dynamical evolution of neural population activity. We show that such behaviour can be accounted for within a low-dimensional dynamical system framework if the dynamics are nonnormal, with a nullspace that is well-aligned with the optogenetic perturbation pattern. How might such alignment arise? We hypothesize that circuit-level features such as E/I balance might contribute crucially. To evaluate this hypothesis from neural recordings, we develop a novel approach to fit a high-dimensional discrete-time balanced E/I network that expresses the low-dimensional and smooth dynamics observed in the recorded population responses. We indeed find that balanced networks can naturally create the appropriate non-normal structure to generate robustness to perturbation, while retaining the expressive capacity to recapitulate movement-related dynamics. Ultimately, techniques to establish more explicit links between circuit-level properties and population-level dynamics will be necessary to link neural perturbations, which are delivered in circuit coordinates, to the dynamics of computations.
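A toy linear-dynamics illustration of this robustness mechanism is sketched below (this is not the fitted E/I network from the talk): the dynamics matrix is low-rank and nonnormal, and an optogenetic-like perturbation pattern placed in its nullspace produces a large instantaneous deviation of population activity that has essentially no effect on the subsequent dynamical evolution.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 50, 3                             # neurons, dynamical modes

# Low-rank, nonnormal discrete-time dynamics: activity along the columns of R
# is mapped onto the columns of L and persists; anything orthogonal to R
# (the nullspace of A) is wiped out in a single time step.
L = np.linalg.qr(rng.standard_normal((n, k)))[0]
R = np.linalg.qr(rng.standard_normal((n, k)))[0]
A = 0.95 * L @ R.T                       # x_{t+1} = A x_t

x = rng.standard_normal(n)
perturb = rng.standard_normal(n)
perturb -= R @ (R.T @ perturb)           # optogenetic-like pattern in the nullspace
perturb *= 5.0 / np.linalg.norm(perturb) # large compared to ongoing activity

traj, traj_pert = [x.copy()], [x + perturb]
for _ in range(20):
    traj.append(A @ traj[-1])
    traj_pert.append(A @ traj_pert[-1])

gap = [np.linalg.norm(a - b) for a, b in zip(traj, traj_pert)]
print(np.round(gap, 3))   # large at t = 0, essentially zero from t = 1 onward
```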
14:30
Coffee break
14:30 - 15:00
Room: 122:026
15:00
Discovering interpretable models of neural population dynamics from data (this talk will be streamed)
Tatiana Engel
15:00 - 16:00
Room: 122:026
Significant advances have been made recently in developing powerful machine learning methods for finding predictive structure in neural population recordings. However, most of these techniques involve a compromise between flexibility and interpretability. While simple ad hoc models are likely to distort defining features in the data, flexible models (such as artificial neural networks) are difficult to interpret. We developed a flexible yet intrinsically interpretable framework for discovering neural population dynamics from data. In our framework, population dynamics are governed by a non-linear dynamical system defined by a potential function. The activity of each neuron is related to the population dynamics through its own firing-rate function, which accounts for the heterogeneity of neural responses. The shapes of the potential and firing-rate functions are simultaneously inferred from data to provide both high flexibility and interpretability. Using this framework, we find that good data prediction does not guarantee accurate interpretation of the model, and propose an alternative strategy for deriving models with correct interpretation. We demonstrate the power of our approach by discovering metastable dynamics in spontaneous spiking activity in primate area V4.
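The sketch below illustrates the model class in the forward direction: a one-dimensional latent variable evolves under Langevin dynamics in a double-well potential (giving metastable switching), and each neuron emits Poisson spikes through its own firing-rate function of the latent. The specific potential, the sigmoid firing-rate functions, and the parameters are illustrative assumptions; the framework in the talk infers these functions from data rather than assuming them.

```python
import numpy as np

rng = np.random.default_rng(4)
T, dt, N = 20_000, 1e-3, 30

# Latent dynamics governed by a potential: dx = -phi'(x) dt + sigma dW.
# A double-well potential phi(x) = x^4/4 - x^2/2 produces two metastable states.
def dphi(x):
    return x ** 3 - x

x = np.zeros(T)
for t in range(1, T):
    x[t] = x[t - 1] - dphi(x[t - 1]) * dt + 0.8 * np.sqrt(dt) * rng.standard_normal()

# Each neuron has its own firing-rate function of the latent (here: sigmoids with
# random gains, slopes, and thresholds standing in for the inferred functions).
gain = rng.uniform(5, 30, N)
thresh = rng.uniform(-1, 1, N)
slope = rng.uniform(1, 4, N)
rates = gain / (1 + np.exp(-slope * (x[:, None] - thresh)))
spikes = rng.poisson(rates * dt)          # (T, N) spike-count matrix

# The inference problem is the inverse: recover phi and the per-neuron
# firing-rate functions from `spikes` alone.
```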
Thursday 13 February 2020
09:30
The shape of neural state space
Benjamin Dunn
09:30 - 10:30
Room: 122:026
A number of known neuron types appear to have state spaces with interesting shapes. The rodent head direction system is a nice example of this, with distinct population responses at each angle on the circle. Grid cells are another interesting case, and we expect that there are more. Topological data analysis provides a framework to describe such shapes in data. We will give a brief introduction to these ideas and show a few examples of circular and toroidal state spaces found in electrophysiological data. We will then discuss some of the challenges posed by these data, the approach itself, and how pairwise maximum entropy models might help.
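As a concrete illustration of detecting a circular state space, the sketch below simulates head-direction-like population activity and computes its persistent homology. It assumes the third-party ripser package is available; the tuning model, subsampling, and parameter values are illustrative, not taken from the talk.

```python
import numpy as np
from ripser import ripser            # assumes the third-party `ripser` package

rng = np.random.default_rng(5)

# Toy population with a circular state space: 60 neurons with von-Mises-like
# tuning to a head direction that sweeps around the circle with jitter.
T, N = 2000, 60
theta = (0.01 * np.arange(T) + 0.2 * rng.standard_normal(T)) % (2 * np.pi)
pref = np.linspace(0, 2 * np.pi, N, endpoint=False)
rates = np.exp(2.0 * np.cos(theta[:, None] - pref[None, :]))

# Persistent homology of a subsample of population vectors (real spike counts
# would typically be smoothed or denoised first). One long-lived H1 bar
# indicates a circle; grid-cell modules are expected to show toroidal signatures.
sample = rates[rng.choice(T, 400, replace=False)]
dgms = ripser(sample, maxdim=1)['dgms']
lifetimes = dgms[1][:, 1] - dgms[1][:, 0]
print("longest H1 lifetimes:", np.sort(lifetimes)[-3:])
```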
10:30
Coffee break
10:30 - 11:00
Room: 122:026
11:00
TBD
Sophie Deneve
11:00 - 12:00
Room: 122:026
12:00
Lunch break
12:00 - 13:30
Room: Entre Restaurant
13:30
Multiscale relevance and informative encoding in neuronal spike trains (this talk will be streamed)
Matteo Marsili
13:30 - 14:30
Room: 122:026
Neuronal responses to complex stimuli and tasks can encompass a wide range of time scales. Understanding these responses requires measures that characterize how the information in these response patterns is represented across multiple temporal resolutions. Here we propose a metric -- which we call multiscale relevance (MSR) -- to capture the dynamical variability of the activity of single neurons across different time scales. The MSR is a non-parametric, fully featureless indicator, in that it uses only the time stamps of the firing activity, without resorting to any a priori covariate or invoking any specific structure in the tuning curve for neural activity. When applied to neural data from the mEC and from the ADn and PoS regions of freely-behaving rodents, we found that neurons with low MSR tend to have low mutual information and low firing sparsity across the correlates that are believed to be encoded by the region of the brain where the recordings were made. In addition, neurons with high MSR carry significant information about spatial navigation and allow spatial position or head direction to be decoded as efficiently as by those neurons whose firing activity has high mutual information with the covariate to be decoded, and significantly better than by the set of neurons with high local variations in their interspike intervals. Given these results, we propose that the MSR can be used as a measure to rank and select neurons for their information content without the need to appeal to any a priori covariate.
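A rough sketch of how such a measure can be computed from spike time stamps alone is given below, following the resolution/relevance entropies used in the published MSR work (Cubero, Marsili & Roudi). The set of time scales, the normalization, and the summary by the area under the curve are assumptions made here; treat this as an illustration rather than the definitive implementation.

```python
import numpy as np

def multiscale_relevance(spike_times, t_end, n_scales=30):
    """Illustrative multiscale relevance from spike time stamps alone.

    For each temporal resolution, the spike train is binned; the 'resolution'
    is the entropy of how spikes distribute over bins, and the 'relevance' is
    the entropy of the bin-occupancy frequencies (how many bins contain k
    spikes). The MSR summarizes the relevance-resolution curve across scales.
    """
    spike_times = np.asarray(spike_times, float)
    M = len(spike_times)
    dts = np.geomspace(t_end / 1e4, t_end, n_scales)       # time scales to probe
    H_res, H_rel = [], []
    for dt in dts:
        counts = np.bincount((spike_times / dt).astype(int),
                             minlength=int(np.ceil(t_end / dt)))
        p_bins = counts[counts > 0] / M                     # fraction of spikes per bin
        H_res.append(-np.sum(p_bins * np.log(p_bins)))
        ks, m_k = np.unique(counts[counts > 0], return_counts=True)
        q = ks * m_k / M                # fraction of spikes falling in bins with k spikes
        H_rel.append(-np.sum(q * np.log(q)))
    order = np.argsort(H_res)
    Hs, Hk = np.array(H_res)[order], np.array(H_rel)[order]
    # trapezoidal area under the relevance-vs-resolution curve as a simple summary
    return np.sum(0.5 * (Hk[1:] + Hk[:-1]) * np.diff(Hs))

# Example usage: compare a homogeneous Poisson spike train to a bursty one
# with the same number of spikes.
rng = np.random.default_rng(7)
poisson = np.sort(rng.uniform(0, 600, 1200))
bursty = np.sort((rng.uniform(0, 600, 60)[:, None] + rng.exponential(0.3, (60, 20))).ravel())
print(multiscale_relevance(poisson, 600), multiscale_relevance(bursty, 600))
```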
14:30
Coffee break
14:30 - 15:00
Room: 122:026
15:00
Learning within and outside of the neural manifold (this talk will be streamed)
Barbara Feulner
15:00 - 16:00
Room: 122:026
How the brain controls complex behaviour is still an open question in neuroscience. Foremost, the ability to flexibly adapt movements to new conditions or goals is puzzling. Recent experimental evidence supports the idea of a fixed set of neural covariation patterns, called neural modes, which is flexibly used to create different kinds of movements [1]. The space these neural modes span is called the neural manifold. Another set of studies suggests that fast motor adaptation happens through changes within the original neural manifold, whereas new covariation patterns can be acquired over longer timescales [2,3,4]. Using computational modelling, we explore the underlying constraints on within- and outside-manifold learning from a network perspective. First, we test whether a generic optimization algorithm acting on the recurrent weights is enough to explain the experimental discrepancy between within- and outside-manifold learning. Interestingly, we find that there is no intrinsic limitation favouring within-manifold learning: we find no evidence that the change in recurrent connections is larger for outside-manifold learning than for within-manifold learning. In a next step, we dismiss the assumption of a perfect teacher signal, which is biologically implausible. Instead, we train a feedback model that infers the error signal at the single-neuron level. This error signal is used by the generic algorithm to adapt the recurrent weights accordingly. We find that the feedback model for the within-manifold perturbation can be learned to some extent, whereas it is not possible to infer any meaningful error information at the single-neuron level for the outside-manifold perturbation. When using the learned, imperfect teacher signals, our results are consistent with the experimental findings of Sadtler et al. [2], where monkeys can learn to counteract within-manifold perturbations, but not outside-manifold ones. Our results suggest that the limitation on within- versus outside-manifold learning may lie not in relearning the recurrent dynamics itself, but in learning the error feedback model. However, one of the main assumptions of our work is that the neural manifold is mainly constrained by the recurrent connectivity. It remains to be investigated whether the same holds true if the manifold is predominantly shaped by external drive.
References
1. Gallego, J. A., Perich, M. G., Naufel, S. N., Ethier, C., Solla, S. A., & Miller, L. E. (2018). Cortical population activity within a preserved neural manifold underlies multiple motor behaviors. Nature Communications, 9(1), 4233. 10.1038/s41467-018-06560-z
2. Sadtler, P. T., Quick, K. M., Golub, M. D., Chase, S. M., Ryu, S. I., Tyler-Kabara, E. C., Yu, B. M., & Batista, A. P. (2014). Neural constraints on learning. Nature, 512(7515), 423. 10.1038/nature13665
3. Golub, M. D., Sadtler, P. T., Oby, E. R., Quick, K. M., Ryu, S. I., Tyler-Kabara, E. C., Batista, A. P., Chase, S. M., & Yu, B. M. (2018). Learning by neural reassociation. Nature Neuroscience, 21(4), 607-616. 10.1038/s41593-018-0095-3
4. Oby, E. R., Golub, M. D., Hennig, J. A., Degenhart, A. D., Tyler-Kabara, E. C., Yu, B. M., Chase, S. M., & Batista, A. P. (2019). New neural activity patterns emerge with long-term learning. Proceedings of the National Academy of Sciences, 201820296. 10.1073/pnas.1820296116
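The distinction between within- and outside-manifold perturbations (in the sense of Sadtler et al. [2]) can be made concrete with a small sketch: an "intuitive" decoder reads the cursor out from the top principal components of population activity; a within-manifold perturbation reshuffles how those components drive the cursor, while an outside-manifold perturbation reshuffles how neurons load onto the components, so the required activity patterns fall outside the original manifold. Everything below (the synthetic data, the use of scikit-learn, the specific permutation scheme) is an illustrative assumption, not the authors' model.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
n_neurons, n_modes, T = 80, 10, 5000

# Synthetic activity that lies close to a 10-dimensional neural manifold.
latent = rng.standard_normal((T, n_modes))
mixing = rng.standard_normal((n_modes, n_neurons))
activity = latent @ mixing + 0.2 * rng.standard_normal((T, n_neurons))

pca = PCA(n_modes).fit(activity)
decoder = rng.standard_normal((n_modes, 2))      # "intuitive" BCI map: factors -> cursor

# Within-manifold perturbation: shuffle which factor drives which cursor dimension.
factor_perm = rng.permutation(n_modes)
def within_map(a):
    return pca.transform(a)[:, factor_perm] @ decoder

# Outside-manifold perturbation: shuffle how *neurons* load onto the factors.
neuron_perm = rng.permutation(n_neurons)
def outside_map(a):
    return pca.transform(a[:, neuron_perm]) @ decoder

# Under the perturbed maps, the same activity now drives the cursor differently;
# the animal (or a model network) must find new activity patterns to compensate.
cursor_base, cursor_within, cursor_outside = (
    pca.transform(activity[:5]) @ decoder, within_map(activity[:5]), outside_map(activity[:5]))
```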
17:00
Reception
17:00 - 19:00
Room: Entrance
Friday 14 February 2020
09:30
Strong and weak principles for neural dimension reduction (this talk will be streamed)
Mark Humphries
09:30 - 10:30
Room: 122:026
Large-scale, single-neuron-resolution recordings are inherently high-dimensional, with as many dimensions as neurons. To make sense of them, for many the answer is: reduce the number of dimensions. Here I argue that we can distinguish weak and strong principles of neural dimension reduction. The weak principle is that dimension reduction is a convenient tool for making sense of complex neural data. The strong principle is that dimension reduction moves us closer to how neural circuits actually operate and compute. Elucidating these principles is crucial, for the principle we subscribe to provides radically different interpretations of the same dimension reduction techniques applied to the same data. In this talk, I outline the experimental evidence for each principle, but argue that most well-described neural activity phenomena provide no evidence either way. I also illustrate how we could make either the weak or the strong principle appear to be true based on innocuous-looking analysis decisions. These insights suggest that arguments over low- versus high-dimensional neural activity need better constraints from both experiment and theory.
10:30
Coffee break
10:30 - 11:00
Room: 122:026
11:00
Brain-state modulation of population dynamics and behavior (this talk will be streamed)
Alfonso Renart
11:00 - 12:00
Room: 122:026
During the last few years, it has become apparent that the way information is represented in sensory cortex is strongly dependent on 'brain state'. Brain states represent modes of global coordination in brain activity, and are modulated by the behavioral and neuromodulatory state of the animal. A salient axis of variation in brain state is the Activation/Inactivation continuum, which measures the extent to which local populations display slow, global fluctuations in activity (Inactive/Synchronized state) or not (Active/Desynchronized state). The degree of Activation/Inactivation in the cortex varies strongly within wakefulness, sleep, and anesthesia. I will describe how changes in cortical Activation shape the representation of sounds by populations of neurons in the rat auditory cortex during urethane anesthesia, focusing on the coding of level differences across the two ears. Using principal component analysis, we characterize the geometry of the representations at the population level, showing how the signal and noise subspaces change as a function of brain state. These subspaces tend to orthogonalize with respect to each other, and with respect to the direction modulating all neurons uniformly, as the cortex becomes more desynchronized, leading to overall more accurate representations. Finally, I will describe ongoing work in which we seek to understand whether and how trial-by-trial changes in the degree of synchronization in auditory cortex before a pure tone is presented impact the ability of head-fixed mice to perform frequency discrimination.
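A minimal version of the signal/noise geometry analysis is sketched below: the signal subspace is taken as the top principal components of the stimulus-averaged responses, the noise subspace as the top principal components of the trial-to-trial residuals, and the two are compared via principal angles. The function name, dimensionalities, and data layout are illustrative assumptions; the talk's analysis additionally considers the uniform direction and how these subspaces change with brain state.

```python
import numpy as np
from scipy.linalg import subspace_angles

def signal_noise_subspaces(responses, n_dims=3):
    """Illustration of the signal/noise subspace comparison.

    responses: array of shape (stimuli, trials, neurons), e.g. spike counts
    for different interaural level differences across repeated presentations.
    Returns the principal angles (degrees) between signal and noise subspaces.
    """
    mean_resp = responses.mean(axis=1)                    # (stimuli, neurons)
    residuals = responses - mean_resp[:, None, :]         # trial-to-trial noise

    # Signal subspace: top PCs of the stimulus-averaged responses.
    _, _, Vs = np.linalg.svd(mean_resp - mean_resp.mean(0), full_matrices=False)
    signal_basis = Vs[:n_dims].T

    # Noise subspace: top PCs of the pooled residuals.
    R = residuals.reshape(-1, responses.shape[-1])
    _, _, Vn = np.linalg.svd(R - R.mean(0), full_matrices=False)
    noise_basis = Vn[:n_dims].T

    # Angles near 90 degrees indicate orthogonalized signal and noise subspaces.
    return np.degrees(subspace_angles(signal_basis, noise_basis))
```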
12:00
Lunch break
12:00 - 13:30
Room: Entre Restaurant
13:30
Bayesian time perception through latent cortical dynamics
Devika Narain
13:30 - 14:30
Room: 122:026
We possess the ability to effortlessly and precisely time our actions in anticipation of events in the world. The seemingly effortless precision with which we execute most timing behaviors is remarkable given that information received from the world is often ambiguous and is corrupted by noise as it traverses neural circuitry. Decades of research have shown that we are able to mitigate the effects of such uncertainty by relying on our prior experience with such variables in the world. Bayesian theory provides a principled framework to study how trade-offs between prior knowledge and sensory uncertainty can shape perception, cognition, and motor function. Here we study this problem in the domain of timing to understand how low-dimensional geometries of neural population dynamics support Bayesian computations. In the first part of the talk, using results from electrophysiology and recurrent neural network modeling, I will discuss how cortical populations represent Bayesian behavior in monkeys during a timing task. Our results suggest that prior knowledge establishes curved manifolds of neural activity that warp underlying representations to generate Bayes-optimal estimates. Next, I will discuss how subcortical inputs interact with cortical dynamics to generate time intervals, with an emphasis on the role of context-dependent input. Using in-vivo and in-silico approaches, we find that neural dynamics are temporally stretched or compressed to encode different time intervals. Finally, I will discuss how prior knowledge of temporal statistics could be acquired in a supervised fashion by cerebellar circuitry that is disynaptically connected to frontal cortical regions. Overall, these findings attempt to bridge insights from normative frameworks of Bayesian inference with potential neural implementations for the acquisition, estimation, and production of optimal timing behavior.
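For readers unfamiliar with the normative framework referenced above, the sketch below computes a Bayes least-squares estimate of a time interval from a noisy measurement, with scalar (Weber-like) measurement noise and a uniform prior over the experimental interval range. The prior support, Weber fraction, and interval values are illustrative numbers in the style of standard time-reproduction tasks, not the parameters used in the talk.

```python
import numpy as np

w_m = 0.1                                   # Weber fraction of the measurement noise
prior_support = np.linspace(0.6, 1.0, 401)  # possible intervals (seconds), uniform prior

def bls_estimate(t_m):
    """Bayes least-squares estimate of the interval given a noisy measurement t_m."""
    sd = w_m * prior_support                # scalar noise: sd proportional to the interval
    like = np.exp(-0.5 * ((t_m - prior_support) / sd) ** 2) / sd
    post = like / like.sum()                # uniform prior -> posterior proportional to likelihood
    return np.sum(post * prior_support)     # posterior mean = BLS estimate

# The behavioral signature of Bayesian timing: estimates are biased toward the
# middle of the prior, more strongly for intervals near the edges of the range.
for t in (0.6, 0.8, 1.0):
    print(t, round(bls_estimate(t), 3))
```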
14:30
Coffee break
14:30 - 15:00
Room: 122:026
15:00
N_neurons → ∞ (this talk will be streamed)
Kenneth Harris
15:00 - 16:00
Room: 122:026
Simultaneous recordings from tens of thousands of neurons allow a new framework for characterizing the neural code at large scales. As the number of neurons analyzed increases, population activity approximates a vector in an infinite-dimensional Hilbert space. In this limit, the independent activity of any single neuron is of no consequence, and the neural code reflects only activity dimensions shared across the population. Analyzing the responses of large populations in mouse visual cortex to natural image stimuli revealed an unexpected result: signal variance in the nth dimension decayed as a power law, with exponent just above 1. We proved mathematically that a differentiable representation of a d-dimensional stimulus requires variances decaying faster than n^(-1-2/d). By recording neural responses to stimulus ensembles of varying dimension, we showed this bound is close to saturated. We conclude that the cortical representation of image stimuli is as high-dimensional as possible before becoming non-differentiable.
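The power-law exponent referred to above can be estimated from a measured eigenspectrum by a straight-line fit in log-log coordinates, as in the sketch below. The fitting range, the synthetic spectrum, and the function name are illustrative assumptions; in practice the signal spectrum is first separated from trial-to-trial noise (e.g. with cross-validated PCA), which is not shown here.

```python
import numpy as np

def spectrum_exponent(eigvals, n_min=10, n_max=500):
    """Fit the power-law exponent alpha in lambda_n ~ n^(-alpha).

    eigvals: signal variances of successive population dimensions, sorted in
    decreasing order. The differentiability bound quoted above requires
    alpha > 1 + 2/d for a d-dimensional stimulus ensemble.
    """
    n = np.arange(1, len(eigvals) + 1)
    keep = (n >= n_min) & (n <= n_max) & (eigvals > 0)
    slope, _ = np.polyfit(np.log(n[keep]), np.log(eigvals[keep]), 1)
    return -slope

# Example: a synthetic spectrum decaying as n^(-1.05), close to the bound for
# high-dimensional (large d) stimulus ensembles such as natural images.
lam = np.arange(1, 2000, dtype=float) ** -1.05
print(round(spectrum_exponent(lam), 3))    # approximately 1.05
```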