Collaborative Minds Bringing Sounds to Brain Data in Yearlong Project
Data from functional magnetic resonance imaging (fMRI) have provided eye-popping pictures of the way the brain is wired, and allowed neuroscientists and laypeople alike to view intricate anatomical and functional connections between regions of the brain. But what if a new tool could be applied to MRI and other data, allowing researchers to listen to the way the brain works and how its connections are forged?
An emerging effort to “sonify” imaging data is taking root at UT Dallas’ Center for Vital Longevity, in the lab of Dr. Gagan Wig. The approach, now funded by the Defense Advanced Research Projects Agency (DARPA), allows data to be represented by sounds from which a trained listener might be able to discern patterns of brain connectivity not readily seen in available visualization strategies.
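The general idea of sonification can be illustrated with a toy sketch: map each connection in a small network to a tone whose pitch and loudness track the connection's strength, so that stronger links stand out to the ear. The matrix, frequency range, and mapping below are purely illustrative assumptions, not the scheme used by Wig's team.

```python
import math

def sonify_connectivity(matrix, base_freq=220.0, duration=0.5, rate=8000):
    """Illustrative sonification: turn each connection (strength 0..1)
    in a symmetric connectivity matrix into a short sine tone.

    Stronger connections get a higher pitch and a louder tone; tones are
    concatenated so a listener can scan the network edge by edge.
    """
    samples = []
    n = len(matrix)
    for i in range(n):
        for j in range(i + 1, n):       # upper triangle: each edge once
            w = matrix[i][j]
            if w <= 0:
                continue
            freq = base_freq * (1.0 + w)  # strength 0..1 -> 220..440 Hz
            for k in range(int(duration * rate)):
                samples.append(w * math.sin(2 * math.pi * freq * k / rate))
    return samples

# A toy 3-region network: one strong connection and one weak one.
toy = [[0.0, 0.9, 0.0],
       [0.9, 0.0, 0.3],
       [0.0, 0.3, 0.0]]
audio = sonify_connectivity(toy)  # two tones, 0.5 s each at 8 kHz
```

The resulting sample list could be written to a WAV file or streamed to a synthesizer; the point is only that network structure can be carried by pitch and rhythm rather than by a picture.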
Wig, an assistant professor in the School of Behavioral and Brain Sciences, is working to translate data to sight and sound with his UT Dallas colleagues: Dr. Roger Malina, Arts and Technology Distinguished Chair; Scot Gresham-Lancaster, assistant professor in the sound design program in the School of Arts, Technology, and Emerging Communication; and a mix of scientists, computer programmers and artists.
The yearlong effort is designed to create a dynamic prototype tool that will enable exploration of brain connections in a three-dimensional interactive video game environment.
The approach can be likened to the 1960s sci-fi movie Fantastic Voyage, in which a small medical crew is shrunk to microscopic size and injected into the bloodstream of an injured scientist. Traveling by miniature submarine to repair a blood clot in his brain, the crew witnesses a stunning panorama of sights and sounds from the inner workings of the human vascular system.
“We have largely tried to understand how brain networks function by visualizing them,” Wig said. “Certain insights, however, might be contained in a neural ‘song’ or signature that allows researchers to discern distinct rhythms or patterns of brain networks that might in themselves reflect sonic signatures, much like a chorus or the way a beehive might hum rhythmically with activity.”
The team’s initial data set is functional brain connectivity MRI information collected from a large sample of healthy adults ranging in age from 20 to 89 years, taken while they were at rest. Initial results, based on analyzing the data using mathematical analysis and visualization of brain networks, were recently reported by Wig’s group in the Proceedings of the National Academy of Sciences. In addition, the team is placing particular emphasis on building a user interface that can incorporate a broader range of data sets, allowing use of data beyond those studied by Wig’s lab.
“The project aims both to create data exploration software for the use of the scientists, but also to enable performance,” Malina said. Gresham-Lancaster, with collaborating artists Tim Perkis and Andrew Blanton, will “perform” the data in art settings.
“Our new approach for representing complex patterns from information will have applications in medical research and also extend beyond the neuroscience domain,” Wig said. “The yearlong project will conclude with the development of a prototype that could be called or analogized to a data stethoscope that allows us to compare brain networks of healthy and unhealthy individuals,” just as standard stethoscopes are used by physicians to take vital signs and quickly determine abnormalities in the heart and lungs.