
Bernstein Conference 2016 · Satellite Workshops

High-Performance Computing in Neuroscience (Peyser, Wachtler)


Neuroscience is grappling with problems of increasing complexity and scale, as exemplified by projects such as the Human Brain Project. At the same time, a new generation of exascale supercomputers is becoming available, along with significant computational infrastructure directed towards the neuroscience community. Applications such as computationally intensive simulations and neuroinformatic analyses of large data sets were often originally designed for desktop PCs or small clusters. To tackle massive neuroscientific problems on High-Performance Computing (HPC) systems, theoretical tools need to be developed to efficiently explore these newly accessible scales, and software needs to be extended with new algorithms, libraries and tools.

In this workshop, we would like to discuss neuroscientific problems that are becoming tractable through HPC, as well as issues in HPC that are of particular concern to neuroscientists. We will cover simulators adapted to HPC, from morphologically detailed simulators such as NEURON to neural mass models such as The Virtual Brain, which have been scaled up for extremely large parameter sweeps, as well as analysis tools adapted to HPC and large-scale atlases such as BigBrain. Additionally, issues such as storage at scale and emerging architectures such as neuromorphic hardware will be discussed.
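To give a flavour of the kind of parameter sweep mentioned above: the common pattern is to enumerate every combination of model parameters and assign a disjoint subset to each compute rank of an HPC system. The sketch below is a hypothetical, simulator-agnostic illustration (the parameter names are invented for the example, and the round-robin partitioning is just one of several possible schemes, not the method of any specific simulator mentioned here).

```python
from itertools import product

def sweep_points(param_grid):
    """Enumerate every combination in a parameter grid (dict: name -> list of values)."""
    names = sorted(param_grid)
    for values in product(*(param_grid[n] for n in names)):
        yield dict(zip(names, values))

def partition(points, rank, n_ranks):
    """Round-robin assignment of sweep points to a single compute rank."""
    return [p for i, p in enumerate(points) if i % n_ranks == rank]

# Hypothetical conductance/current parameters, 2 * 2 * 3 = 12 combinations in total.
grid = {"g_Na": [100.0, 120.0], "g_K": [30.0, 36.0], "I_ext": [5.0, 10.0, 15.0]}
points = list(sweep_points(grid))

# With 4 ranks, each rank would simulate 3 of the 12 parameter combinations.
mine = partition(points, rank=0, n_ranks=4)
```

On a real machine, `rank` and `n_ranks` would come from the job launcher (e.g. an MPI communicator), and each rank would run its subset of simulations independently; at the extreme scales discussed in the workshop, the same pattern applies, only with far larger grids and rank counts.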