Experiences from a GPU implementation of an ANN simulator
(NeuroLogic Sweden AB, Wish-IT AB)
The GPU (Graphics Processing Unit) is a SIMD (Single Instruction, Multiple Data) type of processor that is present in almost all computers today. It was recognized as early as the mid-1990s that it could be used as a general-purpose parallel number-crunching accelerator. Over time the technology has been developed and specialized, and this approach is now most commonly associated with CUDA (Compute Unified Device Architecture), of which the NVIDIA CUDA API (Application Programming Interface) is the best-known implementation. We present experiences from porting an artificial neural network simulator, written in C++ by Anders Lansner, for modelling populations of neurons and the projections between them. We start with a description of the NVIDIA CUDA computational model and then outline what needs to be considered when porting existing code to CUDA or writing new CUDA code. The task is not fundamentally hard, but there are many pitfalls on the way to running code that produces a verifiable result. We give some practical hints, and report the scaling behaviour and speedup that can readily be achieved. We also discuss how a rethinking of the original computational model could push performance toward the limits of the available GPUs.