Dimensionality Reduction and Population Dynamics in Neural Data

Europe/Stockholm

Room 122:026, Nordita

Roslagstullsbacken 17, 106 91 Stockholm, Sweden
Description

The brain represents and processes information through the activity of many neurons whose firing patterns are correlated with each other in non-trivial ways. These correlations generally imply that the activity of a population of neurons involved in a task admits a lower-dimensional representation. Discovering and understanding such representations are therefore important steps towards understanding the operations of the nervous system, and theoretical and experimental neuroscientists have been making interesting progress on this subject. The aim of this conference is to bring together key players in the effort to develop methods for dimensionality reduction in neural data and to study the population dynamics of networks of neurons from this angle. We aim to review current approaches to the problem, identify the major questions that need to be addressed in the future, and discuss how to move forward with them.

Thanks to our generous speakers, the talks in this meeting will be streamed and can be watched online.

Recorded talks are available here

List of speakers:

  • Sara A. Solla (Northwestern Univ)
  • Jonathan Pillow (Princeton Univ)
  • Matteo Marsili (ICTP)
  • Maneesh Sahani (Gatsby Unit, UCL)
  • Ken Harris (UCL)
  • Arvind Kumar (KTH)
  • Alfonso Renart (Champalimaud Neuroscience Programme)
  • Taro Toyoizumi (Riken)
  • Sophie Deneve (École normale supérieure)
  • Barbara Feulner (Imperial College London)
  • Soledad Gonzalo Cogno (Kavli Institute, NTNU)
  • Srdjan Ostojic (École normale supérieure)
  • Benjamin Dunn (Math Dept, NTNU)
  • Devika Narain (Erasmus University Medical Center)
  • Tatiana Engel (Cold Spring Harbor Laboratory)
  • Mark Humphries (University of Nottingham)

 

Organizers:

  • Yasser Roudi (Kavli Institute, NTNU and Nordita)
  • Soledad Gonzalo Cogno (Kavli Institute, NTNU)
  • John Hertz (Nordita and the Niels Bohr Institute)

 

Sponsored by:

This event is financially supported by the Norwegian Research Council (Centre for Neural Computation grant) and the Hertie Foundation (the Eric Kandel Young Neuroscientist Prize to Yasser Roudi).

 

    • 1
      Introduction
    • 2
      Identifying latent manifold structure from neural data with Gaussian process models (this talk will be streamed)

      An important problem in neuroscience is to identify low-dimensional structure underlying noisy, high-dimensional spike trains. In this talk, I will discuss recent advances for tackling this problem in single- and multi-region neural datasets. First, I will discuss the Gaussian Process Latent Variable Model with Poisson observations (Poisson-GPLVM), which seeks to identify a low-dimensional nonlinear manifold from spike train data. This model can successfully handle datasets that appear high-dimensional to linear dimensionality reduction methods like PCA, and we show that it can identify a 2D spatial map underlying hippocampal place cell responses from their spike trains alone. Second, I will discuss recent extensions to Poisson-spiking Gaussian Process Factor Analysis (Poisson-GPFA), which incorporate separate signal and noise dimensions as well as a multi-region model with coupling between the latent variables governing activity in different regions. This model provides a powerful tool for characterizing the flow of signals between brain areas, and we illustrate its applicability using multi-region recordings from mouse visual cortex.
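      As a rough illustration of the generative picture behind such models, the sketch below (ours, not the speaker's code; the sizes, the RBF tuning choice, and all parameters are illustrative assumptions) simulates a 2-D latent trajectory driving Poisson spike counts through smooth nonlinear rate maps, and shows that linear PCA reports many more than two dimensions:

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      T, N, D = 500, 40, 2          # time bins, neurons, latent dimensions

      # Latent trajectory: a smooth 2-D random walk (stand-in for a GP prior on x)
      x = np.cumsum(rng.normal(0, 0.05, size=(T, D)), axis=0)

      # Each neuron's rate is a smooth nonlinear function of the latent state;
      # here an RBF bump with a random centre (stand-in for a GP-distributed map)
      centres = rng.uniform(x.min(), x.max(), size=(N, D))
      lengthscale, peak_rate = 0.5, 20.0
      sq_dist = ((x[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
      rates = peak_rate * np.exp(-sq_dist / (2 * lengthscale ** 2))   # (T, N)

      # Poisson observations: spike counts per bin given the latent-driven rates
      dt = 0.02
      spikes = rng.poisson(rates * dt)

      # PCA sees many linear dimensions even though the true latent is only 2-D
      s = np.linalg.svd(spikes - spikes.mean(0), compute_uv=False)
      var_explained = (s ** 2) / (s ** 2).sum()
      print("PCA dims for 90% variance:",
            int(np.searchsorted(np.cumsum(var_explained), 0.9)) + 1)
      ```

      A GPLVM-style model would instead infer both the latent trajectory and the nonlinear maps from the counts alone, recovering the 2-D structure.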

      Speaker: Jonathan Pillow
    • 3
      Coffee break
    • 4
      Neural manifolds for the stable control of movement (this talk will be streamed)

      Animals, including humans, perform learned actions with remarkable consistency for years after acquiring a skill. What is the neural correlate of this stability? We explore this question from the perspective of neural populations. Recent work suggests that the building blocks of neural function may be the activation of population-wide activity patterns, the neural modes, rather than the independent modulation of individual neurons. These neural modes, the dominant co-variation patterns of population activity, define a low-dimensional neural manifold that captures most of the variance in the recorded neural activity. We refer to the time-dependent activation of the neural modes as their latent dynamics. We hypothesize that the ability to perform a given behavior in a consistent manner requires that the latent dynamics underlying the behavior also be stable.

      A dynamic alignment method allows us to examine the long-term stability of the latent dynamics despite unavoidable changes in the set of neurons recorded via chronically implanted microelectrode arrays. We use the sensorimotor system as a model of cortical processing, and find remarkably stable latent dynamics for up to two years across three distinct cortical regions, despite ongoing turnover of the recorded neurons. The stable latent dynamics, once identified, allow for the prediction of various behavioral features via mapping models whose parameters remain fixed throughout these long timespans. We conclude that latent cortical dynamics within the task manifold are the fundamental and stable building blocks underlying consistent behavior.
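      A minimal sketch of the alignment idea (ours; the speakers' actual method differs in detail, and all sizes are illustrative): latent trajectories extracted separately from two "days" that share underlying dynamics but have different recorded neurons can be aligned with CCA, and high canonical correlations indicate preserved latent dynamics. Assumes numpy and scikit-learn.

      ```python
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.cross_decomposition import CCA

      rng = np.random.default_rng(1)
      T, D = 1000, 10                       # time points, latent dimensionality

      # Shared latent dynamics observed through different neuron sets on two days
      latent = rng.normal(size=(T, D))
      day1 = latent @ rng.normal(size=(D, 80)) + 0.1 * rng.normal(size=(T, 80))
      day2 = latent @ rng.normal(size=(D, 65)) + 0.1 * rng.normal(size=(T, 65))

      # Project each day onto its own manifold, then align the two latent spaces
      z1 = PCA(n_components=D).fit_transform(day1)
      z2 = PCA(n_components=D).fit_transform(day2)
      cca = CCA(n_components=D, max_iter=1000).fit(z1, z2)
      a1, a2 = cca.transform(z1, z2)

      # High canonical correlations indicate the latent dynamics are preserved
      corrs = [np.corrcoef(a1[:, i], a2[:, i])[0, 1] for i in range(D)]
      print("canonical correlations:", np.round(corrs, 2))
      ```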

      Speaker: Sara Solla
    • 5
      Lunch break, Entre Restaurant (Albanova)
    • 6
      Low dimensional manifolds and temporal sequences of neuronal activity in the neocortex (this talk will be streamed)

      With recent advances in technology, it has become possible to record hundreds of neurons simultaneously from awake, behaving animals. The analysis of such high-dimensional neuronal activity has revealed two interesting features: (1) neuronal activity is confined to a rather low-dimensional sub-manifold, and animals find it very difficult (if not impossible) to change these low-dimensional intrinsic manifolds; (2) within the manifold, neuronal activity is organized as temporal (and sometimes spatial) sequences. These two properties provide new insights into the representation of information in the brain. In my talk, I will discuss the origin of these two features of neuronal activity.
      First, I will argue that the low-dimensional manifold of cortical activity is a consequence of the function the network has learned to perform. I will show that within-manifold changes entail small changes in the synaptic weights, while outside-manifold changes require a massive rewiring of the whole network. This observation provides an explanation of why it is difficult to change the intrinsic manifold of neuronal activity.
      Next, to address how neuronal activity within a manifold is organized into temporal sequences, I will focus on networks with distance-dependent connectivity. For such networks we have found that when (1) neurons project a small fraction of their outputs in a preferred direction and (2) the preferred directions of neighboring neurons are similar, the network can generate temporal sequences without supervised or unsupervised learning. This generative rule implies the need for a 'correlated spatially anisotropic connectivity'. Such connectivity can arise when neighboring neurons have similar shapes. In addition, I will argue that spatially patchy patterns of neuromodulator release not only allow for the formation of temporal sequences but also provide a biologically plausible way to dynamically change the arrangement of sequences. Finally, I will discuss the implications of these results for brain dysfunction and the control of neuronal activity dynamics.
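      A toy illustration of the second mechanism (our sketch under the stated assumptions, not the speaker's model): rate units on a ring with Gaussian distance-dependent connectivity whose peak is shifted by a common offset, playing the role of a shared preferred direction. Iterating the dynamics makes the activity bump travel around the ring, i.e. a temporal sequence emerges without any learning.

      ```python
      import numpy as np

      N = 200
      pos = np.arange(N)

      # Distance-dependent weights shifted by a common offset: every neuron
      # projects slightly more strongly in one direction along the ring, and
      # all neurons share that preferred direction ('correlated anisotropy')
      shift, sigma = 4, 10.0
      d = (pos[None, :] - pos[:, None] - shift + N / 2) % N - N / 2
      W = np.exp(-d ** 2 / (2 * sigma ** 2))
      W -= W.mean()                        # unspecific global inhibition

      r = np.exp(-((pos - N / 2.0) ** 2) / 50.0)   # initial localized bump
      for t in range(91):
          r = np.clip(W @ r, 0, None)      # threshold-linear update
          r = r / (r.max() + 1e-12)        # keep the amplitude bounded
          if t % 30 == 0:
              print(f"step {t:3d}: bump peak at neuron {int(np.argmax(r))}")
      ```

      The printed peak position drifts steadily around the ring: the shifted, shared connectivity alone turns a static bump into a propagating sequence.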

      Speaker: Arvind Kumar
    • 7
      Coffee break
    • 8
      Disentangling the roles of dimensionality and cell classes in neural computation

      The description of neural computations currently relies on two competing views: (i) a classical single-cell view that relates the activity of individual neurons to sensory or behavioural variables, and focuses on how different cell classes map onto computations; (ii) a more recent population view that instead characterises computations in terms of collective neural trajectories, and focuses on the dimensionality of these trajectories as animals perform tasks. How the two key concepts of cell classes and low-dimensional trajectories interact to shape neural computations is, however, not understood at present. Here we address this question by combining machine-learning tools for training recurrent neural networks with reverse-engineering and theoretical analyses of network dynamics.
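      One concrete setting where the two views meet is the low-rank RNN, which this group has used extensively. Below is a minimal rank-one sketch (ours, with illustrative parameters): the population state collapses onto the direction m set by the connectivity, so a single latent variable describes the high-dimensional trajectory.

      ```python
      import numpy as np

      rng = np.random.default_rng(3)
      N, steps, dt = 500, 2000, 0.1

      # Rank-one connectivity J = m n^T / N; the overlap between n and m sets
      # the effective feedback gain and drives a nonzero fixed point
      m = rng.normal(size=N)
      n = 2.0 * m + rng.normal(size=N)
      J = np.outer(m, n) / N

      x = rng.normal(size=N)
      for _ in range(steps):
          x = x + dt * (-x + J @ np.tanh(x))

      # The trajectory ends up (almost) entirely along m: a 1-D latent picture
      x_on_m = (x @ m) / (m @ m) * m
      print("fraction of variance along m:", round((x_on_m @ x_on_m) / (x @ x), 3))
      print("latent kappa = n.tanh(x)/N:", round(n @ np.tanh(x) / N, 3))
      ```

      In such networks, "cell classes" can then be introduced as discrete sub-populations with different statistics of their entries on m and n, which is one way the two levels of description can be connected.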

      Speaker: Srdjan Ostojic
    • 9
      A local synaptic update rule for ICA and dimensionality reduction (this talk will be streamed)

      Humans can separately recognize individual sources when they sense their mixture. We have previously developed the error-gated Hebbian rule (EGHR) for neural networks, which achieves independent component analysis (ICA). The EGHR approximately maximizes the information flow through the network by updating synaptic strengths using only local information available at each synapse, which also makes it suitable for neuromorphic engineering. The update is described by the product of the presynaptic activity, the postsynaptic activity, and a global factor. If the number of sensors is higher than the number of sources, the EGHR can perform dimensionality reduction in some useful ways in addition to simple ICA. First, how the sources are mixed can depend on the context. The EGHR can solve this multi-context ICA problem by extracting low-dimensional sources from the high-dimensional sensory inputs. Second, if the input dimensionality is much higher than the source dimensionality, the EGHR can accurately perform nonlinear ICA. I will discuss an application of this nonlinear ICA technique for predictive coding of dynamic sources.
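      The structure of the update (global factor x postsynaptic term x presynaptic activity) can be sketched in a few lines. This is our reading of the published rule with a Laplace source prior; the constants E0 and eta are illustrative, so treat it as a schematic rather than the speaker's implementation.

      ```python
      import numpy as np

      rng = np.random.default_rng(4)
      T, eta = 200_000, 1e-3

      # Two independent super-Gaussian sources, linearly mixed into two sensors
      s = rng.laplace(size=(T, 2))
      A = np.array([[1.0, 0.6], [0.4, 1.0]])
      xs = s @ A.T

      # EGHR-style update (sketch):  u = W x,  E(u) = sum_i |u_i|,  g(u) = sign(u)
      #   dW = eta * (E0 - E(u)) * g(u) x^T
      # i.e. (global factor) * (postsynaptic term) * (presynaptic activity)
      W = np.eye(2) + 0.1 * rng.normal(size=(2, 2))
      E0 = 2.0                                  # target value of the 'energy'
      for x in xs:
          u = W @ x
          W += eta * (E0 - np.abs(u).sum()) * np.outer(np.sign(u), x)

      # After learning, W @ A should approach a scaled permutation matrix,
      # i.e. each output recovers one source (up to sign and scale)
      print("W @ A =\n", np.round(W @ A, 2))
      ```

      How cleanly the sources separate depends on E0 and the learning rate; the point of the sketch is only that every factor in the update is locally available at the synapse, apart from the single global scalar.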

      Speaker: Taro Toyoizumi
    • 10
      Coffee break
    • 11
      Stereotyped population dynamics in the medial entorhinal cortex

      The medial entorhinal cortex (MEC) supports the brain’s representation of space with distinct cell types whose firing is tuned to features of the environment (grid, border, and object-vector cells) or navigation (head-direction and speed cells). These functionally distinct cell types are anatomically intermingled in the superficial layers of the MEC. Since no single sensory stimulus can faithfully predict the firing of these cells, and activity patterns are preserved across environments and brain states, attractor network models postulate that spatially tuned firing emerges from specific connectivity motifs among neurons of the MEC. To determine how those connectivity motifs constrain the self-organized activity in the MEC network, we tested mice in a spontaneous locomotion task under sensory-deprived conditions, when activity is likely determined primarily by the intrinsic structure of the network. Using 2-photon calcium imaging, we monitored the activity of large populations of MEC neurons in head-fixed mice running on a wheel in darkness, in the absence of external sensory feedback tuned to navigation.

      To reveal network dynamics under these conditions, we applied both linear and non-linear dimensionality reduction techniques to the spike matrix of each individual session. This allowed us to unveil motifs that involve the sequential activation of neurons over epochs of tens of seconds to minutes (“waves”). To characterize the nature of these waves, we split the neurons into ensembles and computed the transition probabilities between ensembles. This temporal analysis revealed stereotyped trajectories across multiple ensembles, lasting up to 2-3 minutes. Waves were not found in spike-time-shuffled data. Waves swept through the entire network of active cells with slow temporal dynamics and did not exhibit any anatomical organization. Furthermore, waves were only partially modulated by behavioural features, such as running epochs and speed. Taken together, our results suggest that a large fraction of MEC-L2 neurons participates in common global dynamics that often take the form of stereotyped waves. These activity patterns might progress through multiple subnetworks and couple the activity of neurons with distinct tuning characteristics in the MEC.
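      A minimal sketch of the ensemble-transition analysis described above (ours; the toy data, clustering choice, and binning are illustrative assumptions, and the real analysis differs in detail):

      ```python
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(5)
      N, T, K = 120, 3000, 6               # neurons, time bins, ensembles

      # Toy spike matrix with slow sequential structure: ensembles take turns
      phase = (np.arange(T) // 500) % K          # which ensemble is active when
      membership = rng.integers(0, K, size=N)    # each neuron joins one ensemble
      rates = 0.02 + 0.2 * (membership[:, None] == phase[None, :])
      spikes = rng.poisson(rates)                # (N, T)

      # Split neurons into ensembles by clustering their activity profiles
      labels = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(spikes)

      # Dominant ensemble per time bin, then empirical transition probabilities
      ens_act = np.vstack([spikes[labels == k].mean(0) for k in range(K)])
      dominant = ens_act.argmax(0)
      P = np.zeros((K, K))
      for a, b in zip(dominant[:-1], dominant[1:]):
          P[a, b] += 1
      P /= np.maximum(P.sum(1, keepdims=True), 1)

      # Diagonal entries reflect dwelling within an ensemble; the largest
      # off-diagonal entry in each row points to the next ensemble in the
      # sequence, i.e. a stereotyped trajectory across ensembles
      print(np.round(P, 2))
      ```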

      Speaker: Soledad Gonzalo Cogno
    • 12
      Lunch break, Entre Restaurant (Albanova)
    • 13
      Merging E/I balance and low-dimensional dynamics to understand robustness to optogenetic stimulation in motor cortex (this talk will be streamed)

      Targeted optogenetic perturbations are key to investigating the functional roles of sub-populations within neural circuits, yet their effects in recurrent networks may be difficult to interpret. Previous work has shown that optogenetic stimulation of excitatory cells in macaque motor cortex creates large perturbations of task-related activity, but has only subtle effects on ongoing or upcoming behaviour, or on the future dynamical evolution of neural population activity. We show that such behaviour can be accounted for within a low-dimensional dynamical system framework if the dynamics are nonnormal, with a nullspace that is well aligned with the optogenetic perturbation pattern. How might such alignment arise? We hypothesize that circuit-level features such as E/I balance might contribute crucially. To evaluate this hypothesis from neural recordings, we develop a novel approach to fit a high-dimensional discrete-time balanced E/I network that expresses the low-dimensional and smooth dynamics observed in the recorded population responses. We indeed find that balanced networks can naturally create the appropriate nonnormal structure to generate robustness to perturbation, while retaining the expressive capacity to recapitulate movement-related dynamics. Ultimately, techniques that establish more explicit links between circuit-level properties and population-level dynamics will be necessary to connect neural perturbations, which are delivered in circuit coordinates, to the dynamics of computations.
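      The core idea, a perturbation landing in a direction that the dynamics does not propagate, can be seen in a two-dimensional toy system (our sketch; W, p, and all sizes are illustrative, not fitted to data):

      ```python
      import numpy as np

      # Linear latent dynamics  dx/dt = (-I + W) x  with a nonnormal W whose
      # nullspace contains the perturbation direction p: the kick adds a large
      # transient to the state but is never propagated by the dynamics
      W = np.array([[0.0, 4.0],
                    [0.0, 0.9]])          # nonnormal: strong feedforward weight
      p = np.array([1.0, 0.0])            # perturbation direction, W @ p = 0

      dt, steps = 0.01, 600
      def run(x0):
          xs, x = [], x0.astype(float)
          for _ in range(steps):
              x = x + dt * (-x + W @ x)
              xs.append(x.copy())
          return np.array(xs)

      x0 = np.array([0.3, 0.5])
      base = run(x0)
      pert = run(x0 + 2.0 * p)             # large optogenetic-like kick along p

      # The kick is large initially, but the trajectories reconverge quickly
      # because the perturbed component simply decays with the leak
      gap = np.linalg.norm(pert - base, axis=1)
      print("distance at t = 0, 1, 3 (a.u.):",
            gap[0].round(3), gap[100].round(3), gap[300].round(3))
      ```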

      Speaker: Maneesh Sahani
    • 14
      Coffee break
    • 15
      Discovering interpretable models of neural population dynamics from data (this talk will be streamed)

      Significant advances have been made recently in developing powerful machine learning methods for finding predictive structure in neural population recordings. However, most of these techniques compromise between flexibility and interpretability. While simple ad hoc models are likely to distort defining features in the data, flexible models (such as artificial neural networks) are difficult to interpret. We developed a flexible yet intrinsically interpretable framework for discovering neural population dynamics from data. In our framework, population dynamics are governed by a non-linear dynamical system defined by a potential function. The activity of each neuron is related to the population dynamics through its own firing-rate function, which accounts for the heterogeneity of neural responses. The shapes of the potential and firing-rate functions are simultaneously inferred from data to provide high flexibility and interpretability. Using this framework, we find that good data prediction does not guarantee accurate interpretation of the model, and propose an alternative strategy for deriving models with correct interpretation. We demonstrate the power of our approach by discovering metastable dynamics in spontaneous spiking activity in primate area V4.
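      A generative sketch of such a model (ours, with an illustrative potential and arbitrary rate functions, not the speakers' inference code): latent dynamics in a double-well potential produce metastable switching, and heterogeneous firing-rate functions map the latent state to Poisson spiking.

      ```python
      import numpy as np

      rng = np.random.default_rng(6)

      # Latent dynamics in a double-well potential phi(x) = x^4/4 - x^2/2:
      #   dx = -phi'(x) dt + sigma dW   (metastable switching between two wells)
      dt, T, sigma = 0.01, 50_000, 0.5
      x = np.empty(T)
      x[0] = -1.0
      for t in range(1, T):
          x[t] = (x[t - 1] + dt * (x[t - 1] - x[t - 1] ** 3)
                  + sigma * np.sqrt(dt) * rng.normal())

      # Each neuron reads out the latent state through its own firing-rate
      # function (heterogeneous sigmoids here); spikes are Poisson given rates
      N = 30
      gains = rng.uniform(2, 6, N)
      thresholds = rng.uniform(-1, 1, N)
      rates = 10 / (1 + np.exp(-gains[None, :] * (x[:, None] - thresholds[None, :])))
      spikes = rng.poisson(rates * dt)

      print("latent sign changes (well-to-well switches, roughly):",
            int(np.sum(np.diff(np.sign(x)) != 0)))
      print("mean spike count per bin:", spikes.mean().round(4))
      ```

      The inference problem the talk addresses runs in the opposite direction: recover the potential and the per-neuron rate functions from the spikes alone.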

      Speaker: Tatiana Engel
    • 16
      The shape of neural state space

      A number of known neuron types appear to have state spaces with interesting shapes. The rodent head-direction system is a nice example of this, with distinct population responses at each angle on the circle. Grid cells are another interesting case, and we expect that there are more. Topological data analysis provides a framework for describing such shapes in data. We will give a brief introduction to these ideas and show a few examples of circular and toroidal state spaces found in electrophysiological data. We will then discuss some of the challenges posed by these data, the approach, and how pairwise maximum entropy models might help.
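      A small sketch of the workflow on synthetic head-direction data (ours; it assumes the third-party ripser package, pip install ripser, and the tuning model is illustrative). Persistent homology should report one long-lived one-dimensional loop (H1), the signature of a ring-shaped state space.

      ```python
      import numpy as np
      from ripser import ripser   # assumes the ripser package is installed

      rng = np.random.default_rng(7)

      # Toy head-direction population: N cells with circular tuning, so the
      # cloud of population vectors should trace out a ring
      N, T = 40, 400
      theta = rng.uniform(0, 2 * np.pi, T)                 # head directions
      pref = np.linspace(0, 2 * np.pi, N, endpoint=False)  # preferred directions
      rates = np.exp(2.0 * (np.cos(theta[:, None] - pref[None, :]) - 1))
      pop = rates + 0.05 * rng.normal(size=rates.shape)    # noisy population vectors

      # Persistent homology up to dimension 1; one dominant H1 bar = one loop
      dgms = ripser(pop, maxdim=1)['dgms']
      persistence = dgms[1][:, 1] - dgms[1][:, 0]          # H1 lifetimes
      print("most persistent H1 lifetimes:", np.round(np.sort(persistence)[-3:], 2))
      ```

      One long H1 lifetime standing clearly above the rest is the topological fingerprint of the circle; a torus (as for grid cells) would instead show two prominent H1 features and one H2 feature.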

      Speaker: Benjamin Dunn
    • 17
      Coffee break
    • 18
      TBD
      Speaker: Sophie Deneve
    • 19
      Lunch break, Entre Restaurant (Albanova)
    • 20
      Multiscale relevance and informative encoding in neuronal spike trains (this talk will be streamed)

      Neuronal responses to complex stimuli and tasks can encompass a wide range of time scales. Understanding these responses requires measures that characterize how the information in these response patterns is represented across multiple temporal resolutions. Here we propose a metric -- which we call multiscale relevance (MSR) -- to capture the dynamical variability of the activity of single neurons across different time scales. The MSR is a non-parametric, fully featureless indicator in that it uses only the time stamps of the firing activity, without resorting to any a priori covariate or invoking any specific structure in the tuning curve for neural activity. When applied to neural data from the medial entorhinal cortex (mEC) and from the anterodorsal thalamic nucleus (ADn) and post-subiculum (PoS) of freely behaving rodents, we found that neurons with low MSR tend to have low mutual information and low firing sparsity across the correlates believed to be encoded by the region of the brain where the recordings were made. In addition, neurons with high MSR carry significant information on spatial navigation and support decoding of spatial position or head direction as efficiently as neurons whose firing activity has high mutual information with the covariate to be decoded, and significantly better than the set of neurons with high local variation in their interspike intervals. Given these results, we propose that the MSR can be used to rank and select neurons by their information content without the need to appeal to any a priori covariate.
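      A compact sketch of how the MSR can be computed from spike time stamps alone (our reading of the definition: the area under the relevance-versus-resolution curve as the bin size is varied; the synthetic spike trains and the scale grid are illustrative):

      ```python
      import numpy as np

      def multiscale_relevance(spike_times, t_total, n_scales=30):
          """Sketch of the MSR: area under relevance vs resolution across scales."""
          M = len(spike_times)
          res, rel = [], []
          for dt in np.geomspace(t_total / 2, t_total / 5000, n_scales):
              counts = np.histogram(spike_times,
                                    bins=np.arange(0, t_total + dt, dt))[0]
              k = counts[counts > 0]
              # resolution: entropy of the distribution of spikes over bins
              res.append(-np.sum(k / M * np.log(k / M)))
              # relevance: entropy over bins grouped by their spike count
              ks, mk = np.unique(k, return_counts=True)
              p = ks * mk / M
              rel.append(-np.sum(p * np.log(p)))
          res, rel = np.array(res), np.array(rel)
          order = np.argsort(res)
          x, y = res[order], rel[order]
          return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))  # trapezoid area

      rng = np.random.default_rng(8)
      t_total = 600.0
      poisson_cell = np.sort(rng.uniform(0, t_total, 600))   # structureless firing
      slow_cell = np.sort(rng.uniform(0, t_total, 600) ** 2 / t_total)  # modulated
      print("MSR, Poisson cell:  ", round(multiscale_relevance(poisson_cell, t_total), 3))
      print("MSR, modulated cell:", round(multiscale_relevance(slow_cell, t_total), 3))
      ```

      The rate-modulated cell keeps structure across many bin sizes and so accumulates a larger area, which is the sense in which the measure is "multiscale" while using no covariate.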

      Speaker: Matteo Marsili
    • 21
      Coffee break
    • 22
      Learning within and outside of the neural manifold (this talk will be streamed)

      How the brain controls complex behaviour is still an open question in neuroscience. Foremost, the ability to flexibly adapt movements to new conditions or goals is puzzling. Recent experimental evidence supports the idea of a fixed set of neural covariation patterns, called neural modes, that is flexibly used to create different kinds of movements [1]. The space these neural modes span is called the neural manifold. Another set of studies suggests that fast motor adaptation happens through changes within the original neural manifold, whereas new covariation patterns can be acquired over longer timescales [2,3,4].
      Using computational modelling, we explore the constraints underlying within- and outside-manifold learning from a network perspective. First, we test whether a generic optimization algorithm acting on the recurrent weights is enough to explain the experimentally observed discrepancy between within- and outside-manifold learning. Interestingly, we find no intrinsic limitation favouring within-manifold learning: the change in recurrent connections is not larger for outside-manifold learning than for within-manifold learning. In a next step, we drop the assumption of a perfect teacher signal, which is biologically implausible. Instead, we train a feedback model that infers the error signal at the single-neuron level. This error signal is used by the generic algorithm to adapt the recurrent weights accordingly. We find that the feedback model for a within-manifold perturbation can be learned to some extent, whereas it is not possible to infer any meaningful error information at the single-neuron level for an outside-manifold perturbation. With the learned, imperfect teacher signals, our results are consistent with the experimental findings of Sadtler et al. [2], where monkeys could learn to rearrange their neural activity in response to within-manifold perturbations, but not to outside-manifold ones.
      Our results suggest that the limitation on within- versus outside-manifold learning may lie not in relearning the recurrent dynamics itself, but in learning the error feedback model (a toy sketch of the two perturbation types follows the references below). However, one of the main assumptions of our work is that the neural manifold is mainly constrained by the recurrent connectivity. It remains to be investigated whether the same holds true if the manifold is predominantly shaped by external drive.
      References
      1. Gallego, J. A., Perich, M. G., Naufel, S. N., Ethier, C., Solla, S. A., & Miller, L. E. (2018). Cortical population activity within a preserved neural manifold underlies multiple motor behaviors. Nature Communications, 9(1), 4233. doi:10.1038/s41467-018-06560-z
      2. Sadtler, P. T., Quick, K. M., Golub, M. D., Chase, S. M., Ryu, S. I., Tyler-Kabara, E. C., Yu, B. M., & Batista, A. P. (2014). Neural constraints on learning. Nature, 512(7515), 423. doi:10.1038/nature13665
      3. Golub, M. D., Sadtler, P. T., Oby, E. R., Quick, K. M., Ryu, S. I., Tyler-Kabara, E. C., Batista, A. P., Chase, S. M., & Yu, B. M. (2018). Learning by neural reassociation. Nature Neuroscience, 21(4), 607-616. doi:10.1038/s41593-018-0095-3
      4. Oby, E. R., Golub, M. D., Hennig, J. A., Degenhart, A. D., Tyler-Kabara, E. C., Yu, B. M., Chase, S. M., & Batista, A. P. (2019). New neural activity patterns emerge with long-term learning. Proceedings of the National Academy of Sciences, 201820296. doi:10.1073/pnas.1820296116
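      The toy sketch promised above (ours, loosely following the logic of refs. [2,3]; all dimensions and the permutation choices are illustrative assumptions): in a BCI-style setup, a within-manifold perturbation re-maps factors the network already produces, while an outside-manifold perturbation requires covariance patterns outside the intrinsic manifold.

      ```python
      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(9)
      N, T, D = 60, 2000, 8

      # Activity confined to a D-dimensional intrinsic manifold plus small noise
      latent = rng.normal(size=(T, D))
      loadings = rng.normal(size=(D, N))
      activity = latent @ loadings + 0.1 * rng.normal(size=(T, N))

      pca = PCA(n_components=D).fit(activity)
      factors = pca.transform(activity)

      # Within-manifold perturbation: permute which factor drives which output;
      # the required patterns still lie inside the manifold the network produces
      within_factors = factors[:, rng.permutation(D)]

      # Outside-manifold perturbation: permute neuron-to-factor weights, so
      # control now needs covariance patterns the network does not generate
      perm = pca.components_[:, rng.permutation(N)]
      outside_factors = (activity - pca.mean_) @ perm.T

      print("activity variance captured by the readout:")
      print("  intact          :", round(float(factors.var()), 2))
      print("  within-manifold :", round(float(within_factors.var()), 2))
      print("  outside-manifold:", round(float(outside_factors.var()), 2))
      ```

      The within-manifold readout captures exactly as much variance as the intact one (re-aiming suffices), whereas the outside-manifold readout captures little, mirroring the learnable/unlearnable asymmetry discussed in the abstract.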

      Speaker: Barbara Feulner
    • 23
      Reception, Entrance (Albanova)
    • 24
      Strong and weak principles for neural dimension reduction (this talk will be streamed)

      Large-scale, single-neuron-resolution recordings are inherently high-dimensional, with as many dimensions as neurons. To make sense of them, for many the answer is: reduce the number of dimensions. Here I argue we can distinguish weak and strong principles of neural dimension reduction. The weak principle is that dimension reduction is a convenient tool for making sense of complex neural data. The strong principle is that dimension reduction moves us closer to how neural circuits actually operate and compute. Elucidating these principles is crucial, for the one we subscribe to provides radically different interpretations of the same dimension reduction techniques applied to the same data. In this talk, I outline the experimental evidence for each principle, but argue that most well-described neural activity phenomena provide no evidence either way. I also illustrate how we could make either the weak or the strong principle appear to be true based on innocuous-looking analysis decisions. These insights suggest that arguments over low- versus high-dimensional neural activity need better constraints from both experiment and theory.
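      One example of an innocuous-looking decision is temporal smoothing: it adds autocorrelation, which by itself can make even unstructured activity look low-dimensional (a sketch, ours; the rates, sizes, and smoothing width are arbitrary):

      ```python
      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      rng = np.random.default_rng(10)
      N, T = 100, 5000
      # Near-independent Poisson activity: no shared low-dimensional structure
      spikes = rng.poisson(0.1, size=(T, N)).astype(float)

      def dims_for_90pct(X):
          X = X - X.mean(0)
          ev = np.linalg.svd(X, compute_uv=False) ** 2
          return int(np.searchsorted(np.cumsum(ev) / ev.sum(), 0.9)) + 1

      # Smoothing each neuron independently adds no across-neuron structure,
      # yet the apparent dimensionality drops because the effective number of
      # independent time samples shrinks
      smooth = gaussian_filter1d(spikes, sigma=20, axis=0)
      print("raw counts:", dims_for_90pct(spikes), "dims for 90% variance")
      print("smoothed:  ", dims_for_90pct(smooth), "dims for 90% variance")
      ```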

      Speaker: Mark Humphries
    • 25
      Coffee break
    • 26
      Brain-state modulation of population dynamics and behavior (this talk will be streamed)

      During the last few years, it has become apparent that the way information is represented in sensory cortex is strongly dependent on 'brain state'. Brain states represent modes of global coordination in brain activity, and are modulated by the behavioral and neuromodulatory state of the animal. A salient axis of variation in brain state is the Activation/Inactivation continuum, which measures the extent to which local populations display slow, global fluctuations in activity (Inactive/Synchronized state) or not (Active/Desynchronized state). The degree of Activation/Inactivation in the cortex varies strongly within wakefulness, sleep, and anesthesia. I will describe how changes in cortical Activation shape the representation of sounds by populations of neurons in the rat auditory cortex during urethane anesthesia, focusing on the coding of intensity differences between the two ears. Using principal component analysis, we characterize the geometry of representations at the population level, showing how the signal and noise subspaces change as a function of brain state. These subspaces tend to orthogonalize, with respect to each other and to the direction modulating all neurons uniformly, as the cortex becomes more desynchronized, leading to overall more accurate representations. Finally, I will describe ongoing work in which we seek to understand whether and how trial-by-trial changes in the degree of synchronization in auditory cortex, before a pure tone is presented, impact the ability of head-fixed mice to perform frequency discrimination.
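      A compact sketch of the subspace comparison (ours; the toy data and axes are illustrative, not the recordings): estimate the signal subspace from condition means, the noise subspace from residuals, and measure the principal angle between them.

      ```python
      import numpy as np

      rng = np.random.default_rng(11)
      N, S, R = 50, 8, 200              # neurons, stimuli, repeats

      # Toy population: tuned mean responses plus correlated trial-to-trial noise
      signal_axes = rng.normal(size=(3, N))
      means = rng.normal(size=(S, 3)) @ signal_axes
      noise_axis = rng.normal(size=N)
      X = np.repeat(means, R, axis=0).astype(float)
      X += np.outer(rng.normal(size=S * R), noise_axis)
      X += 0.5 * rng.normal(size=(S * R, N))

      # Signal subspace: PCA of condition means. Noise subspace: PCA of residuals.
      stim = np.repeat(np.arange(S), R)
      cond_means = np.vstack([X[stim == s].mean(0) for s in range(S)])
      resid = X - cond_means[stim]

      def top_pcs(M, k):
          _, _, vt = np.linalg.svd(M - M.mean(0), full_matrices=False)
          return vt[:k]

      sig, noi = top_pcs(cond_means, 3), top_pcs(resid, 1)

      # Largest cosine between the subspaces gives the smallest principal angle
      overlap = np.linalg.svd(sig @ noi.T, compute_uv=False).max()
      print("signal/noise principal angle: %.1f deg" % np.degrees(np.arccos(overlap)))
      ```

      Tracking this angle across brain states is one way to quantify the orthogonalization described in the abstract: 90 degrees means noise fluctuations do not corrupt the stimulus-coding directions.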

      Speaker: Alfonso Renart
    • 27
      Lunch break, Entre Restaurant (Albanova)
    • 28
      Bayesian time perception through latent cortical dynamics

      We possess the ability to effortlessly and precisely time our actions in anticipation of events in the world. The seemingly effortless precision with which we execute most timing behaviors is remarkable given that information received from the world is often ambiguous and is corrupted by noise as it traverses neural circuitry. Decades of research have shown that we are able to mitigate the effects of such uncertainty by relying on our prior experience with such variables in the world. Bayesian theory provides a principled framework to study how trade-offs between prior knowledge and sensory uncertainty can shape perception, cognition, and motor function. Here we study this problem in the domain of timing, to understand how low-dimensional geometries of neural population dynamics support Bayesian computations. In the first part of the talk, using results from electrophysiology and recurrent neural network modeling, I will discuss how cortical populations represent Bayesian behavior in monkeys during a timing task. Our results suggest that prior knowledge establishes curved manifolds of neural activity that warp underlying representations to generate Bayes-optimal estimates. Next, I will discuss how subcortical inputs interact with cortical dynamics to generate time intervals, with an emphasis on the role of context-dependent input. Using in vivo and in silico approaches, we find that neural dynamics are temporally stretched or compressed to encode different time intervals. Finally, I will discuss how prior knowledge of temporal statistics could be acquired in a supervised fashion by cerebellar circuitry that is disynaptically connected to frontal cortical regions. Overall, these findings bridge insights from normative frameworks of Bayesian inference with potential neural implementations for the acquisition, estimation, and production of optimal timing behavior.
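      The Bayesian computation at the heart of such timing tasks can be written in a few lines (a sketch, ours; the Weber fraction, prior range, and the noise-free measurements are illustrative simplifications):

      ```python
      import numpy as np

      # Bayes least-squares estimate of an interval t from a noisy measurement m,
      # with scalar (Weber-like) noise m ~ N(t, (w t)^2) and a uniform prior
      w, t_min, t_max = 0.15, 0.6, 1.0
      ts = np.linspace(t_min, t_max, 500)            # prior support (seconds)

      def bls_estimate(m):
          like = np.exp(-0.5 * ((m - ts) / (w * ts)) ** 2) / (w * ts)
          post = like / like.sum()
          return (post * ts).sum()                   # posterior mean

      for t_true in (0.6, 0.8, 1.0):
          m = t_true                                 # noise-free measurement, for clarity
          print(f"t = {t_true:.2f}s -> estimate {bls_estimate(m):.3f}s")
      ```

      The estimates are biased toward the middle of the prior, the central-tendency effect that the curved neural manifolds discussed in the talk are proposed to implement.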

      Speaker: Devika Narain
    • 29
      Coffee break
    • 30
      N_neurons → ∞ (this talk will be streamed)

      Simultaneous recordings from tens of thousands of neurons allow a new framework for characterizing the neural code at large scales. As the number of neurons analyzed increases, population activity approximates a vector in an infinite-dimensional Hilbert space. In this limit, the independent activity of any single neuron is of no consequence, and the neural code reflects only activity dimensions shared across the population. Analyzing the responses of large populations in mouse visual cortex to natural image stimuli revealed an unexpected result: signal variance in the nth dimension decayed as a power law, with exponent just above 1. We proved mathematically that a differentiable representation of a d-dimensional stimulus requires variances decaying faster than n^(-1-2/d). By recording neural responses to stimulus ensembles of varying dimension, we showed this bound is close to saturated. We conclude that the cortical representation of image stimuli is as high-dimensional as possible before becoming non-differentiable.
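      A quick numerical companion to the bound (our sketch; the tuning model and fit range are arbitrary): very smooth tuning to a d = 1 stimulus yields a signal spectrum that decays far faster than n^(-1-2/d) = n^(-3), consistent with the theorem, whereas the cortical data sit just at the boundary.

      ```python
      import numpy as np

      rng = np.random.default_rng(12)
      N, P = 500, 2000                    # neurons, stimulus samples (d = 1)

      # Smooth (differentiable) tuning to a 1-D circular stimulus variable
      stim = np.linspace(0, 2 * np.pi, P, endpoint=False)
      pref = rng.uniform(0, 2 * np.pi, N)
      resp = np.exp(8.0 * (np.cos(stim[:, None] - pref[None, :]) - 1))  # (P, N)

      # Eigenspectrum of the stimulus-driven (signal) covariance
      ev = np.linalg.svd(resp - resp.mean(0), compute_uv=False) ** 2
      ev /= ev.sum()

      # Differentiability for d = 1 requires variances to fall faster than n^-3;
      # fit the power-law exponent over a range safely above numerical precision
      n = np.arange(1, 21)
      slope = np.polyfit(np.log(n[3:]), np.log(ev[3:20]), 1)[0]
      print(f"fitted exponent over dims 4-20: {slope:.2f} (bound requires < -3)")
      ```

      The fitted exponent comes out much steeper than -3, as expected for such smooth tuning; the striking empirical finding is that cortex instead hugs the bound, with an exponent just above 1 for high-dimensional natural stimuli.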

      Speaker: Kenneth Harris