CBN (Computational Biology and Neurocomputing) seminars

Spike-Based Probabilistic Computation Underlying Sequence Learning

by Phil Tully (CB/CSC/KTH and Institute for Adaptive and Neural Computation, University of Edinburgh)

Europe/Stockholm
RB35

Description
Shaped by a variety of plasticity mechanisms acting over divergent timescales, recurrently connected excitatory and inhibitory neurons are capable of carrying out a rich set of computations. The attractor-memory paradigm provides a simple yet compelling framework for understanding how network dynamics could enable many functional aspects of neural circuits [1]. In models of this type, each memory is stored in a distributed fashion, represented by increased firing in pools of excitatory neurons, and excitatory activity is locally modulated by lateral inhibitory connections that produce winner-take-all dynamics [2]. However, it remains an open question how stable associations between attractors could be self-organized and maintained through different forms of plasticity despite the deleterious influence of ongoing activity upon stored memories. Furthermore, it is unclear how attractor trajectories could be reliably processed and represented in a sequential fashion, reflecting the temporal nature of sensory stimuli.

We address these issues by modeling intrinsic and synaptic plasticity using a spike-based version of the Bayesian Confidence Propagation Neural Network (BCPNN) learning rule. Supporting the view that neurons can process information in the form of probability distributions, weights are inferred by estimating the posterior likelihood of activation of a postsynaptic cell given evidence in the form of presynaptic activity patterns. Probabilities are estimated on-line using local exponentially weighted moving averages whose time scales are biologically motivated by the cascade of events involved in the induction and maintenance of long-term plasticity.

Several other key ingredients of BCPNN plasticity confer stable associations between distinct network states in the spiking attractor network model. Modulating the presynaptic and postsynaptic trace kinetics to exhibit fast (AMPA-type) and slow (NMDA-type) dynamics shapes the STDP kernel, yielding an asymmetrical weight matrix [3]. Akin to synaptic scaling, emergent synaptic competition together with a stable unimodal weight distribution over long time scales leads to robust sequences of attractor activations. Inference additionally requires modification of a neuronal component, which we interpret as a correlate of intrinsic excitability. Such synaptic [4] and nonsynaptic [5] mechanisms have been shown to be relevant for probabilistic computation.

In broader terms, our model suggests that inference could be a by-product of coupled plasticity mechanisms whose combined functional effects are only partially understood. We demonstrate the feasibility of the model using large-scale network simulations of integrate-and-fire neurons, and explore the ability and capacity of the network to store spatiotemporal patterns of discrete network states with varying degrees of overlap.

[1] Sandberg, A., Lansner, A., Petersson, K. M., & Ekeberg, Ö. (2002). A Bayesian attractor network with incremental learning. Network: Comput. Neural Syst. 13:179-194.
[2] Lundqvist, M., Compte, A., & Lansner, A. (2010). Bistable, irregular firing and population oscillations in a modular attractor memory network. PLoS Comput Biol 6(6).
[3] Kleinfeld, D. (1986). Sequential state generation by model neural networks. Proc Natl Acad Sci USA 83(24):9469-9473.
[4] Keck, C., Savin, C., & Lücke, J. (2012). Feedforward inhibition and synaptic scaling - two sides of the same coin? PLoS Comput Biol 8(3).
[5] Habenschuss, S., Bill, J., & Nessler, B. (2012). Homeostatic plasticity in Bayesian spiking networks as Expectation Maximization with posterior constraints. Adv Neural Inf Process Syst 25:782-790.
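
For readers who want a concrete picture of the on-line probability estimation described in the abstract, the short Python fragment below sketches the general idea: pre- and postsynaptic spike trains are low-pass filtered into traces with fast and slow kinetics, those traces drive exponentially weighted moving averages of activation probabilities, and the synaptic weight and neuronal bias are read out as a log posterior-to-prior ratio and a log prior, respectively. This is a minimal illustrative sketch, not the speaker's implementation; the function name, the time constants, the binary time-binned spike input, and the omission of intermediate trace stages used in the full spike-based BCPNN rule are all simplifying assumptions.

    # Minimal sketch of BCPNN-style on-line probability estimation (assumptions noted above).
    import numpy as np

    def bcpnn_traces(pre_spikes, post_spikes, dt=1.0,
                     tau_z_pre=5.0,      # assumed fast, AMPA-like presynaptic trace (ms)
                     tau_z_post=100.0,   # assumed slow, NMDA-like postsynaptic trace (ms)
                     tau_p=10000.0,      # assumed slow probability-estimation time scale (ms)
                     eps=1e-3):
        """Run exponentially weighted moving averages for one synapse.

        pre_spikes, post_spikes: binary arrays (1 = spike in that time bin).
        Returns the weight w = log(p_ij / (p_i * p_j)) and bias beta = log(p_j)
        at every time step.
        """
        T = len(pre_spikes)
        z_i = z_j = 0.0            # fast spike traces (evidence)
        p_i = p_j = p_ij = eps     # slow probability estimates, floored at eps
        w = np.zeros(T)
        beta = np.zeros(T)
        for t in range(T):
            # low-pass filter the spike trains into pre/post traces
            z_i += dt * (pre_spikes[t] - z_i) / tau_z_pre
            z_j += dt * (post_spikes[t] - z_j) / tau_z_post
            # exponentially weighted moving averages of activation probabilities
            p_i += dt * (z_i - p_i) / tau_p
            p_j += dt * (z_j - p_j) / tau_p
            p_ij += dt * (z_i * z_j - p_ij) / tau_p
            # weight: log ratio of joint to independent activation probability;
            # bias: log prior of postsynaptic activation (intrinsic excitability)
            w[t] = np.log((p_ij + eps**2) / ((p_i + eps) * (p_j + eps)))
            beta[t] = np.log(p_j + eps)
        return w, beta

In this simplified picture, the asymmetry between the fast presynaptic and slow postsynaptic traces is what gives the resulting STDP kernel, and hence the weight matrix, its asymmetric, sequence-promoting structure.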