Dimensionality Reduction and Population Dynamics in Neural Data
From Tuesday, February 11, 2020 (9:00 AM) to Friday, February 14, 2020 (6:00 PM)
Tuesday, February 11, 2020
9:15 AM – 9:30 AM
Introduction
Room: 122:026
9:30 AM – 10:30 AM
Identifying latent manifold structure from neural data with Gaussian process models (this talk will be streamed)
Jonathan Pillow
Room: 122:026
An important problem in neuroscience is to identify low-dimensional structure underlying noisy, high-dimensional spike trains. In this talk, I will discuss recent advances for tackling this problem in single- and multi-region neural datasets. First, I will discuss the Gaussian Process Latent Variable Model with Poisson observations (Poisson-GPLVM), which seeks to identify a low-dimensional nonlinear manifold from spike train data. This model can successfully handle datasets that appear high-dimensional under linear dimensionality reduction methods like PCA, and we show that it can identify a 2D spatial map underlying hippocampal place cell responses from their spike trains alone. Second, I will discuss recent extensions to Poisson-spiking Gaussian Process Factor Analysis (Poisson-GPFA), which incorporate separate signal and noise dimensions as well as a multi-region model with coupling between the latent variables governing activity in different regions. This model provides a powerful tool for characterizing the flow of signals between brain areas, and we illustrate its applicability using multi-region recordings from mouse visual cortex.
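For readers unfamiliar with this model class, a minimal generative sketch of a Poisson-GPLVM follows: a smooth latent trajectory, per-neuron GP tuning over the latent space, and Poisson spike counts. All parameter values are illustrative assumptions, and the GP draw over time is replaced by a smoothed random walk for brevity; fitting, which the talk addresses, would invert this process by maximizing the Poisson likelihood over the latents and kernel hyperparameters.

```python
# Minimal generative sketch of a Poisson-GPLVM (illustration, not the talk's code)
import numpy as np

rng = np.random.default_rng(0)
T, N, D = 500, 30, 2                      # time bins, neurons, latent dimensions

# Latent trajectory: a smooth random walk standing in for a GP draw over time
x = np.cumsum(rng.normal(scale=0.05, size=(T, D)), axis=0)

# Each neuron's log firing rate is a GP over the latent space, evaluated at the
# trajectory points using an RBF (squared-exponential) kernel
def rbf_kernel(a, b, length=0.5, var=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length ** 2)

K = rbf_kernel(x, x) + 1e-5 * np.eye(T)   # jitter for numerical stability
L = np.linalg.cholesky(K)
f = L @ rng.normal(size=(T, N))           # log rates, one column per neuron

# Poisson observations: spike counts given the latent-driven rates
rates = np.exp(f - 2.0)                   # offset keeps counts realistic
spikes = rng.poisson(rates)               # (T, N) spike-count matrix
```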
10:30 AM – 11:00 AM
Coffee break
Room: 122:026
11:00 AM – 12:00 PM
Neural manifolds for the stable control of movement (this talk will be streamed)
Sara Solla
Room: 122:026
Animals, including humans, perform learned actions with remarkable consistency for years after acquiring a skill. What is the neural correlate of this stability? We explore this question from the perspective of neural populations. Recent work suggests that the building blocks of neural function may be the activation of population-wide activity patterns, the neural modes, rather than the independent modulation of individual neurons. These neural modes, the dominant covariation patterns of population activity, define a low-dimensional neural manifold that captures most of the variance in the recorded neural activity. We refer to the time-dependent activation of the neural modes as their latent dynamics. We hypothesize that the ability to perform a given behavior in a consistent manner requires that the latent dynamics underlying the behavior also be stable. A dynamic alignment method allows us to examine the long-term stability of the latent dynamics despite unavoidable changes in the set of neurons recorded via chronically implanted microelectrode arrays. We use the sensorimotor system as a model of cortical processing, and find remarkably stable latent dynamics for up to two years across three distinct cortical regions, despite ongoing turnover of the recorded neurons. The stable latent dynamics, once identified, allow for the prediction of various behavioral features via mapping models whose parameters remain fixed throughout these long timespans. We conclude that latent cortical dynamics within the task manifold are the fundamental and stable building blocks underlying consistent behavior.
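A hedged sketch of the kind of alignment step described above: extract latent dynamics on two days with PCA and rotate them into a common space with CCA. The pipeline details here (PCA then CCA, the mode count, the synthetic data) are assumptions for illustration, not the authors' exact method.

```python
# Align latent dynamics across sessions with different recorded neurons
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

def latent_dynamics(rates, n_modes=10):
    """Project firing rates (time x neurons) onto the leading neural modes."""
    return PCA(n_components=n_modes).fit_transform(rates)

def align_days(rates_day1, rates_day2, n_modes=10):
    """Return latent trajectories from two days rotated into a common space."""
    z1 = latent_dynamics(rates_day1, n_modes)
    z2 = latent_dynamics(rates_day2, n_modes)
    cca = CCA(n_components=n_modes, max_iter=1000)
    return cca.fit_transform(z1, z2)

# Correlations of the aligned modes quantify long-term stability
rng = np.random.default_rng(1)
day1 = rng.poisson(3.0, size=(200, 80)).astype(float)
day2 = rng.poisson(3.0, size=(200, 90)).astype(float)   # different neuron set
z1, z2 = align_days(day1, day2)
r = [np.corrcoef(z1[:, i], z2[:, i])[0, 1] for i in range(z1.shape[1])]
```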
12:00 PM – 1:30 PM
Lunch break
Room: Entre Restaurant
1:30 PM – 2:30 PM
Low-dimensional manifolds and temporal sequences of neuronal activity in the neocortex (this talk will be streamed)
Arvind Kumar
Room: 122:026
With recent advances in technology, it has become possible to record hundreds of neurons simultaneously from awake behaving animals. The analysis of such high-dimensional neuronal activity has revealed two interesting features: (1) neuronal activity is confined to a rather low-dimensional submanifold, and animals find it very difficult (if not impossible) to change these low-dimensional intrinsic manifolds; (2) within the manifold, neuronal activity is organized as temporal (and sometimes spatial) sequences. The two properties provide new insights into the representation of information in the brain. In my talk, I will discuss the origin of these two features of neuronal activity. First, I will argue that the low-dimensional manifold of cortical activity is a consequence of the function the networks have learned to perform. I will show that within-manifold changes entail small changes in the synaptic weights, while outside-manifold changes require a massive rewiring of the whole network. This observation provides an explanation of why it is difficult to change the intrinsic manifold of neuronal activity. Next, to address how neuronal activity within a manifold is organized into temporal sequences, I will focus on networks with distance-dependent connectivity. For such networks, we have found that when (1) neurons project a small fraction of their outputs in a preferred direction and (2) the preferred directions of neighboring neurons are similar, the network can generate temporal sequences without supervised or unsupervised learning. This generative rule implies the need for a 'correlated spatial anisotropic connectivity'. Such connectivity can arise when neighboring neurons have similar shapes. In addition, I will argue that spatially patchy patterns of neuromodulator release not only allow for the formation of temporal sequences but also provide a biologically plausible way to dynamically change the arrangement of sequences. Finally, I will discuss the implications of these results for brain dysfunction and the control of neuronal activity dynamics.
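A toy sketch of the 'correlated spatial anisotropic connectivity' rule: neurons on a 1-D ring project a small extra fraction of output in a shared preferred direction, and a local bump of activity then travels as a sequence. The talk's networks are two-dimensional and spiking; the ring, the rate dynamics, and the adaptation variable that keeps the wave moving are simplifying assumptions of ours.

```python
# Toy rate network: local symmetric connectivity plus a small shared
# anisotropic component produces travelling sequences without learning
import numpy as np

N = 200                                   # neurons on a 1-D ring
shift = 3                                 # shared preferred projection offset
W = np.zeros((N, N))
for i in range(N):
    for d in range(-2, 3):                # local symmetric footprint ...
        W[(i + d) % N, i] += 0.2
    W[(i + shift) % N, i] += 0.3          # ... plus the anisotropic component

r = np.zeros(N)                           # rates
a = np.zeros(N)                           # adaptation (our addition)
r[:5] = 1.0                               # seed a local bump of activity
history = []
for t in range(400):
    inp = W @ r - 2.0 * a                 # recurrent drive minus adaptation
    r = np.clip(inp - 0.1, 0.0, 1.0)      # threshold-linear, saturating rates
    a = 0.9 * a + 0.1 * r                 # slow adaptation suppresses the wake
    history.append(r.copy())
history = np.array(history)               # rows trace a travelling sequence
```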
2:30 PM – 3:00 PM
Coffee break
Room: 122:026
3:00 PM – 4:00 PM
Disentangling the roles of dimensionality and cell classes in neural computation
Srdjan Ostojic
Room: 122:026
The description of neural computations currently relies on two competing views: (i) a classical single-cell view that relates the activity of individual neurons to sensory or behavioural variables, and focuses on how different cell classes map onto computations; (ii) a more recent population view that instead characterises computations in terms of collective neural trajectories, and focuses on the dimensionality of these trajectories as animals perform tasks. How the two key concepts of cell classes and low-dimensional trajectories interact to shape neural computations is, however, not understood at present. Here we address this question by combining machine-learning tools for training recurrent neural networks with reverse-engineering and theoretical analyses of network dynamics.
Wednesday, February 12, 2020
9:30 AM – 10:30 AM
A local synaptic update rule for ICA and dimensionality reduction (this talk will be streamed)
Taro Toyoizumi
Room: 122:026
Humans can separately recognize individual sources when they sense their mixture. We have previously developed the error-gated Hebbian rule (EGHR) for neural networks that achieves independent component analysis (ICA). The EGHR approximately maximizes the information flow through the network by updating synaptic strength using local information available at each synapse, which also makes it suitable for neuromorphic engineering. The update is described by the product of the presynaptic activity, the postsynaptic activity, and a global factor. If the number of sensors is higher than the number of sources, the EGHR can perform dimensionality reduction in some useful ways in addition to simple ICA. First, how the sources are mixed can depend on the context. The EGHR can solve this multi-context ICA problem by extracting low-dimensional sources from the high-dimensional sensory inputs. Second, if the input dimensionality is much higher than the source dimensionality, the EGHR can accurately perform nonlinear ICA. I will discuss an application of this nonlinear ICA technique to predictive coding of dynamic sources.
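A minimal sketch of an EGHR-style update, following the abstract's description (presynaptic activity × a postsynaptic factor × a global factor). The specific nonlinearity, the energy target E0, and the learning rate are assumptions for illustration; consult the EGHR papers for the exact form.

```python
# EGHR-style local learning for a 2x2 linear ICA problem (illustrative sketch)
import numpy as np

rng = np.random.default_rng(3)
n_src, n_obs = 2, 2
A = rng.normal(size=(n_obs, n_src))             # unknown mixing matrix
W = rng.normal(scale=0.1, size=(n_src, n_obs))  # learned unmixing weights

eta, E0 = 1e-3, n_src * 1.5                     # learning rate, target 'energy'

for step in range(100_000):
    s = rng.laplace(size=n_src)                 # independent super-Gaussian sources
    x = A @ s                                   # observed mixture (presynaptic)
    u = W @ x                                   # network output (postsynaptic)
    g = np.tanh(u)                              # smooth stand-in for the derivative
                                                # of the assumed log-prior
    E = np.sum(np.abs(u))                       # 'energy' of the current output
    W += eta * (E0 - E) * np.outer(g, x)        # Hebbian term x global factor

# After training, W @ A should approach a scaled permutation matrix
print(W @ A)
```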
10:30 AM – 11:00 AM
Coffee break
Room: 122:026
11:00 AM – 12:00 PM
Stereotyped population dynamics in the medial entorhinal cortex
Soledad Gonzalo Cogno
Room: 122:026
The medial entorhinal cortex (MEC) supports the brain's representation of space with distinct cell types whose firing is tuned to features of the environment (grid, border, and object-vector cells) or navigation (head-direction and speed cells). These functionally distinct cell types are anatomically intermingled in the superficial layers of the MEC. Since no single sensory stimulus can faithfully predict the firing of these cells, and activity patterns are preserved across environments and brain states, attractor network models postulate that spatially tuned firing emerges from specific connectivity motifs among neurons of the MEC. To determine how those connectivity motifs constrain the self-organized activity in the MEC network, we tested mice in a spontaneous locomotion task under sensory-deprived conditions, when activity is likely determined primarily by the intrinsic structure of the network. Using two-photon calcium imaging, we monitored the activity of large populations of MEC neurons in head-fixed mice running on a wheel in darkness, in the absence of external sensory feedback tuned to navigation. To reveal network dynamics under these conditions, we applied both linear and nonlinear dimensionality reduction techniques to the spike matrix of each individual session. In this way we were able to unveil the presence of motifs that involve the sequential activation of neurons over epochs of tens of seconds to minutes ("waves"). To characterize the nature of these waves, we split neurons into ensembles of cells and computed the transition probabilities between ensembles. This temporal analysis revealed stereotyped trajectories across multiple ensembles, lasting up to 2–3 minutes. Waves were not found in spike-time-shuffled data. Waves swept through the entire network of active cells with slow temporal dynamics and did not exhibit any anatomical organization. Furthermore, waves were only partially modulated by behavioural features, such as running epochs and speed. Taken together, our results suggest that a large fraction of MEC layer 2 neurons participates in common global dynamics that often take the form of stereotyped waves. These activity patterns might progress through multiple subnetworks and couple the activity of neurons with distinct tuning characteristics in the MEC.
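A hedged sketch of the ensemble transition analysis described above: cluster neurons into ensembles, label each time bin by its dominant ensemble, and estimate transition probabilities. The specific choices here (k-means on activity time courses, dominant-ensemble labelling) are assumptions, not the authors' pipeline.

```python
# Transition probabilities between neuronal ensembles from a spike matrix
import numpy as np
from sklearn.cluster import KMeans

def ensemble_transitions(spikes, n_ensembles=5, seed=0):
    """spikes: (time, neurons) matrix of binned activity."""
    # Group neurons by the similarity of their activity time courses
    labels = KMeans(n_clusters=n_ensembles, n_init=10,
                    random_state=seed).fit_predict(spikes.T)
    # Mean activity of each ensemble at each time bin
    act = np.stack([spikes[:, labels == k].mean(axis=1)
                    for k in range(n_ensembles)], axis=1)
    state = act.argmax(axis=1)            # dominant ensemble per time bin
    # Row-normalized count matrix of state -> next-state transitions
    P = np.zeros((n_ensembles, n_ensembles))
    for a, b in zip(state[:-1], state[1:]):
        P[a, b] += 1
    return P / np.maximum(P.sum(axis=1, keepdims=True), 1)
```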
12:00 PM – 1:30 PM
Lunch break
Room: Entre Restaurant
1:30 PM – 2:30 PM
Merging E/I balance and low-dimensional dynamics to understand robustness to optogenetic stimulation in motor cortex (this talk will be streamed)
Maneesh Sahani
Room: 122:026
Targeted optogenetic perturbations are key to investigating the functional roles of subpopulations within neural circuits, yet their effects in recurrent networks may be difficult to interpret. Previous work has shown that optogenetic stimulation of excitatory cells in macaque motor cortex creates large perturbations of task-related activity, but has only subtle effects on ongoing or upcoming behaviour, or on the future dynamical evolution of neural population activity. We show that such behaviour can be accounted for within a low-dimensional dynamical system framework if the dynamics are non-normal, with a nullspace that is well-aligned with the optogenetic perturbation pattern. How might such alignment arise? We hypothesize that circuit-level features such as E/I balance might contribute crucially. To evaluate this hypothesis from neural recordings, we develop a novel approach to fit a high-dimensional discrete-time balanced E/I network that expresses the low-dimensional and smooth dynamics observed in the recorded population responses. We indeed find that balanced networks can naturally create the appropriate non-normal structure to generate robustness to perturbation, while retaining the expressive capacity to recapitulate movement-related dynamics. Ultimately, techniques to establish more explicit links between circuit-level properties and population-level dynamics will be necessary to link neural perturbations, which are delivered in circuit coordinates, to the dynamics of computations.
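A toy sketch of the alignment idea: in non-normal dynamics, a perturbation landing in a weakly amplified direction barely alters the future trajectory, while one landing in an amplified direction has a large effect. The 3-D matrix below is an invented example, not the fitted networks from the talk.

```python
# Effect of perturbation direction under non-normal linear dynamics
import numpy as np

# Non-normal dynamics: activity along e2 is fed forward strongly into e1,
# while e1 itself simply decays (invented 3-D example)
A = np.array([[0.8, 5.0, 0.0],
              [0.0, 0.8, 0.0],
              [0.0, 0.0, 0.8]])

def impact(p, steps=20):
    """Total future deviation caused by an instantaneous perturbation p."""
    x, total = p.astype(float), 0.0
    for _ in range(steps):
        x = A @ x
        total += np.linalg.norm(x)
    return total

print(impact(np.array([1.0, 0.0, 0.0])))  # 'null-like' direction: small effect
print(impact(np.array([0.0, 1.0, 0.0])))  # amplified direction: large effect
```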
2:30 PM – 3:00 PM
Coffee break
Room: 122:026
3:00 PM – 4:00 PM
Discovering interpretable models of neural population dynamics from data (this talk will be streamed)
Tatiana Engel
Room: 122:026
Significant advances have been made recently in developing powerful machine learning methods for finding predictive structure in neural population recordings. However, most of these techniques compromise between flexibility and interpretability. While simple ad hoc models are likely to distort defining features in the data, flexible models (such as artificial neural networks) are difficult to interpret. We developed a flexible yet intrinsically interpretable framework for discovering neural population dynamics from data. In our framework, population dynamics are governed by a nonlinear dynamical system defined by a potential function. The activity of each neuron is related to the population dynamics through a unique firing-rate function, which accounts for the heterogeneity of neural responses. The shapes of the potential and firing-rate functions are simultaneously inferred from data to provide both high flexibility and interpretability. Using this framework, we find that good data prediction does not guarantee accurate interpretation of the model, and we propose an alternative strategy for deriving models with correct interpretation. We demonstrate the power of our approach by discovering metastable dynamics in spontaneous spiking activity in primate area V4.
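A generative sketch of the framework's ingredients: a latent variable governed by a potential plus noise, and per-neuron firing-rate functions mapping the latent to Poisson spikes. The double-well potential and Gaussian tuning curves are assumed examples; in the framework described above, both would be inferred from data rather than fixed.

```python
# Latent Langevin dynamics in a potential, observed through Poisson spiking
import numpy as np

rng = np.random.default_rng(5)

def potential_grad(x):
    # Double-well potential phi(x) = x**4/4 - x**2/2: two metastable states
    return x ** 3 - x

dt, T, N = 0.01, 20_000, 12
x = np.zeros(T)
for t in range(1, T):                     # Langevin dynamics in the potential
    x[t] = (x[t - 1] - potential_grad(x[t - 1]) * dt
            + np.sqrt(2 * 0.5 * dt) * rng.normal())

# Heterogeneous firing-rate functions: each neuron has its own tuning to x
centers = np.linspace(-1.5, 1.5, N)
rates = 20.0 * np.exp(-0.5 * (x[:, None] - centers[None, :]) ** 2 / 0.3)
spikes = rng.poisson(rates * dt)          # (T, N) spike counts per time bin
```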
Thursday, February 13, 2020
9:30 AM – 10:30 AM
The shape of neural state space
Benjamin Dunn
Room: 122:026
A number of known neuron types appear to have interestingly shaped state spaces. The rodent head direction system is a nice example of this, with distinct population responses at each angle on the circle. Grid cells are another interesting case, and we expect that there are more. Topological data analysis provides a framework to describe such shapes in data. We will give a brief introduction to these ideas and show a few examples of circular and toroidal state spaces found in electrophysiological data. We will then discuss some of the challenges posed by these data, our approach, and how pairwise maximum entropy models might help.
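A minimal sketch of the topological-data-analysis workflow: treat population states as a point cloud and compute persistent homology. This assumes the ripser.py package (pip install ripser) and uses synthetic head-direction-like data; it is a workflow illustration, not the talk's analysis.

```python
# Detecting a circular state space with persistent homology
import numpy as np
from ripser import ripser

rng = np.random.default_rng(6)

# Synthetic population states on a ring: cosine-tuned 'head direction' cells
theta = rng.uniform(0, 2 * np.pi, size=500)           # latent angle per state
prefs = np.linspace(0, 2 * np.pi, 20, endpoint=False) # preferred directions
X = np.cos(theta[:, None] - prefs[None, :])           # (states, neurons)
X += 0.1 * rng.normal(size=X.shape)

# One long-lived H1 bar in the persistence diagram indicates a circle
dgms = ripser(X, maxdim=1)['dgms']
h1 = dgms[1]
lifetimes = h1[:, 1] - h1[:, 0]
print('longest H1 lifetime:', lifetimes.max())
```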
10:30 AM – 11:00 AM
Coffee break
Room: 122:026
11:00 AM – 12:00 PM
TBD
Sophie Deneve
Room: 122:026
12:00 PM – 1:30 PM
Lunch break
Room: Entre Restaurant
1:30 PM – 2:30 PM
Multiscale relevance and informative encoding in neuronal spike trains (this talk will be streamed)
Matteo Marsili
Room: 122:026
Neuronal responses to complex stimuli and tasks can encompass a wide range of time scales. Understanding these responses requires measures that characterize how the information in these response patterns is represented across multiple temporal resolutions. Here we propose a metric, which we call multiscale relevance (MSR), to capture the dynamical variability of the activity of single neurons across different time scales. The MSR is a non-parametric, fully featureless indicator in that it uses only the time stamps of the firing activity, without resorting to any a priori covariate or invoking any specific structure in the tuning curve for neural activity. When applied to neural data from the mEC and from the ADn and PoS regions of freely behaving rodents, we found that neurons having low MSR tend to have low mutual information and low firing sparsity across the correlates that are believed to be encoded by the region of the brain where the recordings were made. In addition, neurons with high MSR contain significant information on spatial navigation: they allow spatial position or head direction to be decoded as efficiently as from those neurons whose firing activity has high mutual information with the covariate to be decoded, and significantly better than from the set of neurons with high local variation in their inter-spike intervals. Given these results, we propose that the MSR can be used as a measure to rank and select neurons for their information content without the need to appeal to any a priori covariate.
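A hedged sketch of the multiscale relevance computation as we read it: at each time resolution, compute a 'resolution' entropy H[s] and a 'relevance' H[K] from the binned spike counts, then integrate H[K] over H[s]. The normalisation conventions and the scan over bin counts are assumptions; consult the paper for the exact definitions.

```python
# Multiscale relevance (MSR) from spike time stamps alone (sketch)
import numpy as np

def multiscale_relevance(spike_times, t_max, n_scales=50):
    M = len(spike_times)
    hs_list, hk_list = [], []
    for n_bins in np.unique(np.logspace(0, np.log10(M), n_scales).astype(int)):
        counts, _ = np.histogram(spike_times, bins=n_bins, range=(0, t_max))
        p_bin = counts[counts > 0] / M
        hs = -np.sum(p_bin * np.log(p_bin))          # resolution entropy H[s]
        ks, mk = np.unique(counts[counts > 0], return_counts=True)
        p_k = ks * mk / M                            # fraction of spikes in bins
                                                     # with occupancy k
        hk = -np.sum(p_k * np.log(p_k))              # relevance H[K]
        hs_list.append(hs)
        hk_list.append(hk)
    order = np.argsort(hs_list)
    hs, hk = np.array(hs_list)[order], np.array(hk_list)[order]
    # Area under the H[K]-vs-H[s] curve: the multiscale relevance
    return float(np.sum(0.5 * (hk[1:] + hk[:-1]) * np.diff(hs)))

# Example: a homogeneous Poisson train gives a baseline MSR value
rng = np.random.default_rng(7)
poisson_train = np.sort(rng.uniform(0, 100, 500))
print(multiscale_relevance(poisson_train, 100.0))
```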
2:30 PM – 3:00 PM
Coffee break
Room: 122:026
3:00 PM – 4:00 PM
Learning within and outside of the neural manifold (this talk will be streamed)
Barbara Feulner
Room: 122:026
How the brain controls complex behaviour is still an open question in neuroscience. Foremost, the ability to flexibly adapt movements to new conditions or goals is puzzling. Recent experimental evidence supports the idea of a fixed set of neural covariation patterns, called neural modes, which is flexibly used to create different kinds of movements [1]. The space spanned by these neural modes is called the neural manifold. Another set of studies suggests that fast motor adaptation happens through changes within the original neural manifold, whereas new covariation patterns can be acquired only over longer timescales [2,3,4]. Using computational modelling, we explore the underlying constraints for within- and outside-manifold learning from a network perspective. First, we test whether a generic optimization algorithm acting on the recurrent weights is enough to explain the experimental discrepancy between within- and outside-manifold learning. Interestingly, we find that there is no intrinsic limitation favouring within-manifold learning: we find no evidence that the change in recurrent connections is bigger for outside-manifold learning than for within-manifold learning. In a next step, we dismiss the assumption of a perfect teacher signal, which is biologically implausible. Instead, we train a feedback model that infers the error signal at the single-neuron level. This error signal is used by the generic algorithm to adapt the recurrent weights accordingly. We find that the feedback model for the within-manifold perturbation can be learned to some extent, whereas it is not possible to infer any meaningful error information at the single-neuron level for the outside-manifold perturbation. When using the learned, imperfect teacher signals, our results are consistent with the experimental findings of Sadtler et al. [2], where monkeys could learn to rearrange their neural activity in response to within-manifold perturbations, but not outside-manifold ones. Our results suggest that the limitation for within- versus outside-manifold learning may lie not in relearning the recurrent dynamics itself, but in learning the error feedback model. One of the main assumptions of our work, however, is that the neural manifold is mainly constrained by the recurrent connectivity. It remains to be investigated whether the same holds true if the manifold is predominantly shaped by external drive.
References
1. Gallego, J. A., Perich, M. G., Naufel, S. N., Ethier, C., Solla, S. A., & Miller, L. E. (2018). Cortical population activity within a preserved neural manifold underlies multiple motor behaviors. Nature Communications, 9(1), 4233. 10.1038/s41467-018-06560-z
2. Sadtler, P. T., Quick, K. M., Golub, M. D., Chase, S. M., Ryu, S. I., Tyler-Kabara, E. C., Yu, B. M., & Batista, A. P. (2014). Neural constraints on learning. Nature, 512(7515), 423–426. 10.1038/nature13665
3. Golub, M. D., Sadtler, P. T., Oby, E. R., Quick, K. M., Ryu, S. I., Tyler-Kabara, E. C., Batista, A. P., Chase, S. M., & Yu, B. M. (2018). Learning by neural reassociation. Nature Neuroscience, 21(4), 607–616. 10.1038/s41593-018-0095-3
4. Oby, E. R., Golub, M. D., Hennig, J. A., Degenhart, A. D., Tyler-Kabara, E. C., Yu, B. M., Chase, S. M., & Batista, A. P. (2019). New neural activity patterns emerge with long-term learning. Proceedings of the National Academy of Sciences, 116(30), 15210–15215. 10.1073/pnas.1820296116
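To make the within-/outside-manifold distinction from references [1–4] above concrete, here is a schematic sketch of the two perturbation types in the style of Sadtler et al. [2]: a decoder reads a cursor velocity from latent factors, and the perturbation either re-routes the factors (within-manifold) or the neuron weights (outside-manifold). The decoder setup is an assumption for illustration, not the paper's exact BCI pipeline.

```python
# Within- vs outside-manifold decoder perturbations (schematic)
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(10)
# Correlated population activity: (time, neurons) with low-dimensional structure
activity = rng.normal(size=(1000, 50)) @ rng.normal(size=(50, 50))

pca = PCA(n_components=10).fit(activity)
C = pca.components_                        # (10, 50): neurons -> latent factors
D = rng.normal(size=(2, 10))               # latent factors -> 2-D cursor velocity

# Within-manifold perturbation: permute which factor drives which readout;
# the required activity patterns still lie inside the original manifold
within = D[:, rng.permutation(10)] @ C

# Outside-manifold perturbation: permute the neuron weights, so good control
# requires covariance patterns outside the existing manifold
outside = D @ C[:, rng.permutation(50)]

cursor_within = activity @ within.T        # cursor velocities under each map
cursor_outside = activity @ outside.T
```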
5:00 PM – 7:00 PM
Reception
Room: Entrance
Friday, February 14, 2020
9:30 AM – 10:30 AM
Strong and weak principles for neural dimension reduction (this talk will be streamed)
Mark Humphries
Room: 122:026
Large-scale, single-neuron-resolution recordings are inherently high-dimensional, with as many dimensions as neurons. To make sense of them, for many the answer is: reduce the number of dimensions. Here I argue that we can distinguish weak and strong principles of neural dimension reduction. The weak principle is that dimension reduction is a convenient tool for making sense of complex neural data. The strong principle is that dimension reduction moves us closer to how neural circuits actually operate and compute. Elucidating these principles is crucial, for which one we subscribe to provides radically different interpretations of the same dimension reduction techniques applied to the same data. In this talk, I outline the experimental evidence for each principle, but argue that most well-described neural activity phenomena provide no evidence either way. I also illustrate how we could make either the weak or the strong principle appear to be true based on innocuous-looking analysis decisions. These insights suggest that arguments over low- and high-dimensional neural activity need better constraints from both experiment and theory.
10:30 AM – 11:00 AM
Coffee break
Room: 122:026
11:00 AM – 12:00 PM
Brain-state modulation of population dynamics and behavior (this talk will be streamed)
Alfonso Renart
Room: 122:026
During the last few years, it has become apparent that the way information is represented in sensory cortex is strongly dependent on 'brain state'. Brain states represent modes of global coordination in brain activity, and are modulated by the behavioral and neuromodulatory state of the animal. A salient axis of variation in brain state is the Activation/Inactivation continuum, which measures the extent to which local populations display (Inactive/Synchronized state) or do not display (Active/Desynchronized state) slow, global fluctuations in activity. The degree of Activation/Inactivation in the cortex varies strongly within wakefulness, sleep, and anesthesia. I will describe how changes in cortical Activation shape the representation of sounds by populations of neurons in the rat auditory cortex during urethane anesthesia, focusing on the coding of intensity-level differences across the two ears. Using principal component analysis, we characterize the geometry of representations at the population level, showing how the signal and noise subspaces change as a function of brain state. These subspaces tend to orthogonalize, both with respect to each other and to the direction modulating all neurons uniformly, as the cortex becomes more desynchronized, leading to overall more accurate representations. Finally, I will describe ongoing work in which we seek to understand whether and how trial-by-trial changes in the degree of synchronization in auditory cortex before a pure tone is presented impact the ability of head-fixed mice to perform frequency discrimination.
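One geometry measure mentioned above can be sketched compactly: principal angles between the signal subspace (stimulus-driven variance) and the noise subspace (trial-to-trial residuals). The pipeline details are assumptions for illustration; orthogonalization would show up as angles approaching 90 degrees.

```python
# Principal angles between signal and noise subspaces of population responses
import numpy as np
from scipy.linalg import subspace_angles

def signal_noise_angles(responses, n_dims=3):
    """responses: (stimuli, trials, neurons) array of population responses."""
    mean_resp = responses.mean(axis=1)                       # (stimuli, neurons)
    residuals = (responses - mean_resp[:, None, :]).reshape(
        -1, responses.shape[-1])                             # trial-by-trial noise

    def top_dirs(X, k):
        # Leading principal directions of the (centered) data
        _, _, vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
        return vt[:k].T                                      # (neurons, k)

    sig = top_dirs(mean_resp, n_dims)
    noi = top_dirs(residuals, n_dims)
    return np.degrees(subspace_angles(sig, noi))             # 90 deg = orthogonal

rng = np.random.default_rng(8)
fake = rng.normal(size=(8, 40, 60))       # 8 stimuli, 40 trials, 60 neurons
print(signal_noise_angles(fake))
```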
12:00 PM – 1:30 PM
Lunch break
Room: Entre Restaurant
1:30 PM – 2:30 PM
Bayesian time perception through latent cortical dynamics
Devika Narain
Room: 122:026
We possess the ability to effortlessly and precisely time our actions in anticipation of events in the world. The seemingly effortless precision with which we execute most timing behaviors is remarkable given that information received from the world is often ambiguous and is corrupted by noise as it traverses neural circuitry. Decades of research have shown that we are able to mitigate the effects of such uncertainty by relying on our prior experiences with such variables in the world. Bayesian theory provides a principled framework for studying how tradeoffs between prior knowledge and sensory uncertainty can shape perception, cognition, and motor function. Here we study this problem in the domain of timing, to understand how low-dimensional geometries of neural population dynamics support Bayesian computations. In the first part of the talk, using results from electrophysiology and recurrent neural network modeling, I will discuss how cortical populations represent Bayesian behavior in monkeys during a timing task. Our results suggest that prior knowledge establishes curved manifolds of neural activity that warp underlying representations to generate Bayes-optimal estimates. Next, I will discuss how subcortical inputs interact with cortical dynamics to generate time intervals, with an emphasis on the role of context-dependent input. Using in-vivo and in-silico approaches, we find that neural dynamics are temporally stretched or compressed to encode different time intervals. Finally, I will discuss how prior knowledge of temporal statistics could be acquired in a supervised fashion by cerebellar circuitry that is disynaptically connected to frontal cortical regions. Overall, these findings attempt to bridge insights from normative frameworks of Bayesian inference with potential neural implementations for the acquisition, estimation, and production of optimal timing behavior.
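A compact sketch of the Bayesian observer model that frames this line of work: a prior over intervals combined with scalar (Weber-law) measurement noise yields a Bayes-least-squares estimate that is biased toward the prior mean. The prior range and Weber fraction below are illustrative values, not those from the experiments.

```python
# Bayes-least-squares (BLS) interval estimation under scalar timing noise
import numpy as np

def bls_estimate(t_measured, t_grid, prior, weber=0.15):
    """Posterior-mean estimate of an interval from a noisy measurement."""
    sigma = weber * t_grid                # noise std grows with the true interval
    lik = np.exp(-0.5 * ((t_measured - t_grid) / sigma) ** 2) / sigma
    post = lik * prior
    post /= post.sum()
    return np.sum(t_grid * post)          # posterior mean = BLS estimate

t_grid = np.linspace(0.4, 1.2, 400)       # candidate intervals (seconds)
prior = np.where((t_grid >= 0.6) & (t_grid <= 1.0), 1.0, 0.0)  # uniform prior

# Short intervals are over-estimated, long ones under-estimated (regression
# toward the prior mean), the signature behavior the talk's manifolds explain
for tm in (0.6, 0.8, 1.0):
    print(tm, round(bls_estimate(tm, t_grid, prior), 3))
```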
2:30 PM – 3:00 PM
Coffee break
Room: 122:026
3:00 PM – 4:00 PM
N_neurons → ∞ (this talk will be streamed)
Kenneth Harris
Room: 122:026
Simultaneous recordings from tens of thousands of neurons allow a new framework for characterizing the neural code at large scales. As the number of neurons analyzed increases, population activity approximates a vector in an infinite-dimensional Hilbert space. In this limit, the independent activity of any single neuron is of no consequence, and the neural code reflects only activity dimensions shared across the population. Analyzing the responses of large populations in mouse visual cortex to natural image stimuli revealed an unexpected result: signal variance in the nth dimension decayed as a power law, with exponent just above 1. We proved mathematically that a differentiable representation of a d-dimensional stimulus requires variances decaying faster than n^(-(1+2/d)). By recording neural responses to stimulus ensembles of varying dimension, we showed this bound is close to saturated. We conclude that the cortical representation of image stimuli is as high-dimensional as possible before becoming non-differentiable.
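A sketch of the headline quantity: the power-law exponent of the eigenspectrum of population activity, to be compared against the 1 + 2/d differentiability bound. Synthetic data with a built-in exponent stand in here for the cross-validated signal spectrum used in the study.

```python
# Fit the power-law exponent of a covariance eigenspectrum and compare it
# with the differentiability bound 1 + 2/d
import numpy as np

rng = np.random.default_rng(9)
n_dims, n_samples = 1000, 5000

# Synthetic responses whose covariance eigenvalues follow n^(-1.05)
target_alpha = 1.05
scales = np.arange(1, n_dims + 1) ** (-target_alpha / 2.0)
X = rng.normal(size=(n_samples, n_dims)) * scales[None, :]

# Eigenspectrum of the covariance, largest eigenvalue first
eigvals = np.sort(np.linalg.svd(X - X.mean(0), compute_uv=False) ** 2)[::-1]
eigvals /= n_samples

# Log-log slope over an intermediate range of dimensions gives the exponent
n = np.arange(1, n_dims + 1)
sel = (n >= 10) & (n <= 500)
alpha = -np.polyfit(np.log(n[sel]), np.log(eigvals[sel]), 1)[0]

d = 2                                     # e.g. a 2-D stimulus ensemble
print(f'fitted exponent {alpha:.2f}; differentiability bound {1 + 2/d:.2f}')
```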