Three years ago the United States government launched the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative to accelerate the development and application of novel technologies that will give us a better understanding of how the brain works.
Since then, dozens of technology firms, academic institutions, scientists and others have been developing new tools that give researchers unprecedented opportunities to explore how the brain processes, utilizes, stores and retrieves information. But without a coherent strategy to analyze, manage and understand the data generated by these new technologies, advancements in the field will be limited.
For this reason, an international team of interdisciplinary researchers—including mathematicians, computer scientists, physicists and experimental and computational neuroscientists—was assembled to develop a plan for managing, analyzing and sharing neuroscience data. Their recommendations were published in a recent issue of Neuron.
To maximize the return on investments in global neuroscience initiatives, the researchers argue that the international neuroscience community should adopt an integrated strategy for data management and analysis. This coordination would make workflows reproducible, which in turn would allow researchers to build on one another's work.
As a first step, the authors recommend that researchers from all facets of neuroscience agree on standard descriptions and file formats for products derived from data analysis and simulations. After that, the researchers should work with computer scientists to develop hardware and software ecosystems for archiving and sharing data.
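To make the idea of a "standard description" concrete, here is a minimal sketch of what a shared metadata descriptor for a recording dataset might look like, with a simple validation step. The field names and required set are purely illustrative assumptions, not the scheme proposed in the paper or any published community standard.

```python
import json

# Hypothetical required fields for a shared dataset descriptor;
# these names are illustrative, not an agreed-upon standard.
REQUIRED_FIELDS = {"dataset_id", "species", "modality",
                   "sampling_rate_hz", "file_format"}

def missing_fields(descriptor: dict) -> list:
    """Return a sorted list of required fields absent from the descriptor."""
    return sorted(REQUIRED_FIELDS - descriptor.keys())

record = {
    "dataset_id": "lab42-session-007",
    "species": "mouse",
    "modality": "extracellular-ephys",
    "sampling_rate_hz": 30000,
    "file_format": "hdf5",
}

print(missing_fields(record))        # [] -> descriptor is complete
print(json.dumps(record, indent=2))  # serialized in a portable format
```

Agreeing on even a small required-field set like this is what lets one lab's archived data be discovered and reused by another without a phone call.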
The authors suggest an ecosystem similar to the one used by the physics community to share data collected by experiments like the Large Hadron Collider (LHC). In that model, each research group maintains its own local repository of physiological or simulation data that it has collected or generated. But eventually, all of this information should also be included in "meta-repositories" that are accessible to the greater neuroscience community. Files in the meta-repositories should be in a common format, and the repositories would ideally be hosted by an open-science supercomputing facility.
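The local-repository-plus-meta-repository arrangement can be sketched as a simple aggregation step: each lab contributes its catalog, and entries already in the agreed common format are indexed centrally while nonconforming ones are flagged for conversion. All names, formats, and catalog entries below are invented for illustration.

```python
# Hypothetical sketch of building a shared "meta-repository" index
# from per-lab catalogs; the common format choice is an assumption.
COMMON_FORMAT = "hdf5"

lab_a = [{"dataset_id": "a-001", "file_format": "hdf5"},
         {"dataset_id": "a-002", "file_format": "matlab"}]
lab_b = [{"dataset_id": "b-001", "file_format": "hdf5"}]

def build_meta_index(*local_repos):
    """Index datasets in the common format; list the rest for conversion."""
    index, needs_conversion = {}, []
    for repo in local_repos:
        for entry in repo:
            if entry["file_format"] == COMMON_FORMAT:
                index[entry["dataset_id"]] = entry
            else:
                needs_conversion.append(entry["dataset_id"])
    return index, needs_conversion

index, todo = build_meta_index(lab_a, lab_b)
print(sorted(index))  # datasets visible to the whole community
print(todo)           # datasets each lab must convert before sharing
```

The design point is that the meta-repository stores only conforming, commonly formatted entries, so downstream tools never need lab-specific readers.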
Because novel technologies are producing unprecedented amounts of data, the researchers also propose that neuroscientists collaborate with mathematicians to develop new approaches for data analysis and modify existing analysis tools to run on supercomputers. To maximize these collaborations, the analysis tools should be open-source and should integrate with brain-scale simulations, they say.
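Adapting an analysis tool to a supercomputer usually means making it data-parallel: the same per-channel or per-trial computation runs concurrently over many pieces of the dataset. The toy example below illustrates that pattern with a thread pool and synthetic data; on a real HPC system the same structure would map onto MPI ranks or a process pool, and the "analysis" here is just a mean amplitude, not any method from the paper.

```python
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

# Synthetic multichannel recording: 8 channels of 1000 samples each.
channels = {f"ch{i:02d}": [i + 0.1 * t for t in range(1000)]
            for i in range(8)}

def analyze(item):
    """Toy per-channel analysis: mean amplitude of one channel."""
    name, samples = item
    return name, mean(samples)

# Run the same analysis over all channels concurrently; on a cluster
# this loop would be distributed across nodes instead of threads.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(analyze, channels.items()))

print(len(results))  # one result per channel
```

Because each channel is analyzed independently, the workload scales out simply by adding workers, which is exactly what makes such tools a good fit for supercomputing facilities.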
Paper: “High-Performance Computing in Neuroscience for Data-Driven Discovery, Integration, and Dissemination”
Reprinted from materials provided by Berkeley Lab.