To enable an iCal export link, your account needs an API key. This key allows other applications to use the link provided to access data from Indico even while you are not logged into the system yourself. Once created, you can manage your key at any time from the 'HTTP API' tab on the 'My Profile' page. Further information about HTTP API keys can be found in the Indico documentation.
In addition to an API key associated with your account, exporting private event information requires the use of a persistent signature. This produces API URLs that do not expire after a few minutes, so while the setting is active, anyone in possession of the link can access the information. It is therefore extremely important that you keep these links private and for your use only. If you suspect that someone else has gained access to a link using this key, immediately create a new key pair on the 'My Profile' page under the 'HTTP API' tab and update your iCalendar links afterwards.
Permanent link for public information only:
Permanent link for all public and protected information:
(CB/CSC/KTH and Institute for Adaptive and Neural Computation, University of Edinburgh)
Shaped by a variety of plasticity mechanisms acting over divergent timescales, recurrently connected excitatory and inhibitory neurons are capable of carrying out a rich set of computations. The attractor-memory paradigm provides a simple yet compelling framework for understanding how network dynamics could enable many functional aspects of neural circuits. In models of this type, each memory is stored in a distributed fashion, represented by increased firing in pools of excitatory neurons, and excitatory activity is locally modulated by lateral inhibitory connections that produce winner-take-all dynamics. However, it remains an open question how stable associations between attractors could be self-organized and maintained through different forms of plasticity despite the deleterious influence of ongoing activity upon stored memories. Furthermore, it is unclear how attractor trajectories could be reliably processed and represented in a sequential fashion that reflects the temporal nature of sensory stimuli.
We address these issues by modeling intrinsic and synaptic plasticity using a spike-based version of the Bayesian Confidence Propagation Neural Network (BCPNN) learning rule. Consistent with the view that neurons process information in the form of probability distributions, the rule infers weights by estimating the posterior probability of activation of a postsynaptic cell given evidence in the form of presynaptic activity patterns. Probabilities are estimated on-line using local exponentially weighted moving averages, with time scales that are biologically motivated by the cascade of events involved in the induction and maintenance of long-term plasticity.
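The on-line estimation described above can be sketched as follows. The log-ratio weight and bias forms follow the standard BCPNN formulation; the variable names, time constant, and regularization constant are illustrative assumptions, not values taken from the simulations reported here.

```python
import math

def bcpnn_update(p_i, p_j, p_ij, s_i, s_j, tau_p=1000.0, dt=1.0):
    """One Euler step of the exponentially weighted moving-average
    probability estimates (a sketch; parameter values are illustrative).

    s_i, s_j : current pre-/postsynaptic activity traces in [0, 1]
    p_i, p_j : running estimates of the unit activation probabilities
    p_ij     : running estimate of the co-activation probability
    tau_p    : learning time constant (ms), sets the averaging window
    """
    k = dt / tau_p
    p_i += k * (s_i - p_i)
    p_j += k * (s_j - p_j)
    p_ij += k * (s_i * s_j - p_ij)
    return p_i, p_j, p_ij

def bcpnn_weight(p_i, p_j, p_ij, eps=1e-8):
    # Weight as the log posterior-to-prior ratio; eps avoids log(0).
    return math.log((p_ij + eps**2) / ((p_i + eps) * (p_j + eps)))

def bcpnn_bias(p_j, eps=1e-8):
    # Intrinsic-excitability term: log prior of postsynaptic activation.
    return math.log(p_j + eps)
```

Correlated pre/post activity drives the weight toward positive values, independent or anticorrelated activity toward negative values, while the bias tracks how often the postsynaptic unit is active regardless of its inputs.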
Several other key ingredients of BCPNN plasticity confer stable associations between distinct network states in the spiking attractor network model. Modulating the presynaptic and postsynaptic trace kinetics to exhibit fast (AMPA-type) and slow (NMDA-type) dynamics shapes the STDP kernel, yielding an asymmetrical weight matrix. Akin to synaptic scaling, emergent synaptic competition together with a stable unimodal weight distribution over long time scales leads to robust sequences of attractor activations. Inference additionally requires modification of a neuronal component, which we interpret as a correlate of intrinsic excitability. Both synaptic and nonsynaptic mechanisms of this kind have been shown to be relevant for probabilistic computation. In broader terms, our model suggests that inference could be a by-product of coupled plasticity mechanisms whose combined functional effects are only partially understood. We demonstrate the feasibility of the model using large-scale network simulations of integrate-and-fire neurons, and explore the capacity of the network to store spatiotemporal patterns of discrete network states with varying degrees of overlap.
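A toy pair-based illustration (not the full spiking rule) of how unequal trace time constants skew the STDP kernel: each spike sets its trace to 1, the trace decays exponentially, and the accumulated product of the two traces stands in for the weight change. The single-pair setup and the specific time constants are assumptions for illustration only.

```python
import math

def trace_overlap(delta, tau_pre=150.0, tau_post=5.0, dt=0.1, T=2000.0):
    """Accumulated product of pre- and postsynaptic spike traces for a
    single spike pair: pre spike at t = 0, post spike at t = delta (ms).
    Positive delta means the pre spike leads. The defaults are
    illustrative stand-ins for a slow (NMDA-like) presynaptic trace and
    a fast (AMPA-like) postsynaptic trace.
    """
    t0 = min(0.0, delta)
    acc = 0.0
    for k in range(int(T / dt)):
        t = t0 + k * dt
        z_pre = math.exp(-t / tau_pre) if t >= 0.0 else 0.0
        z_post = math.exp(-(t - delta) / tau_post) if t >= delta else 0.0
        acc += z_pre * z_post * dt
    return acc
```

With the slow presynaptic trace, pre-before-post pairings leave a much larger overlap than post-before-pre pairings at the same interval, giving the asymmetric kernel that favors one direction of attractor transitions; equal time constants would make the kernel symmetric.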
Sandberg, A., Lansner, A., Petersson, K. M., & Ekeberg, Ö. (2002). A Bayesian attractor network with incremental learning. Network: Comput Neural Syst 13:179-194.
Lundqvist, M., Compte, A., & Lansner, A. (2010). Bistable, irregular firing and population oscillations in a modular attractor memory network. PLoS Comput Biol 6(6).
Kleinfeld, D. (1986). Sequential state generation by model neural networks. Proc Natl Acad Sci USA 83(24):9469-9473.
Keck, C., Savin, C., & Lücke, J. (2012). Feedforward inhibition and synaptic scaling - two sides of the same coin? PLoS Comput Biol 8(3).
Habenschuss, S., Bill, J., & Nessler, B. (2012). Homeostatic plasticity in Bayesian spiking networks as Expectation Maximization with posterior constraints. Adv Neural Inf Process Syst 25:782-790.