AI, VR, MRI and Skull Caps: Technology Turning Brain Research on its Head

By: Lee Rickwood

December 23, 2022

Toronto has the brain on its mind.

Medical technologies, including wearable brain-imaging gear, new AI and VR programs, and advanced ‘hyperscanning’ applications, are being used to peer inside the brain while it operates.

At leading hospitals and universities in the city, imaging devices using radio waves and magnetic fields are helping medical technicians create detailed 3-D images of the internal structures of the brain.

And they’re moving outside traditional brain-scanning devices like MRI to develop complementary technologies that can be used in real-world situations to track the brain activity of more than one individual at a time. That opens up remarkable possibilities for new research, including clinical trials aimed at improving one person’s empathy for another.

Many neuroscience researchers want to see what are called mirror neurons, a type of brain cell, in action. Such cells are believed to respond in the same way, in a synchronous or sympathetic fashion, whether we perform an action ourselves or watch someone else do the same thing.

At Ontario Tech University, researchers are building on their experience using brain scanning devices to wrap their heads around the brain’s socio-emotional processing patterns, including the way we understand, relate to and empathize with others.

Faculty of Social Science and Humanities members Dr. Matthew Shane and Dr. Bobby Stojanoski have received nearly $400,000 in grant money to work with a new device there, called a functional near-infrared spectroscopy (fNIRS) system.

Their big Canada Foundation for Innovation John R. Evans Leaders Fund award will fund a rather small device: a mesh cap that fits over the head.

Through a variety of sensors that attach to the cap, brain activity can be indexed by tracking blood flow.
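To give a sense of the arithmetic involved, here is a minimal, purely illustrative sketch (not Ontario Tech’s actual processing pipeline) of how fNIRS light readings at two wavelengths can be converted into changes in oxygenated and deoxygenated hemoglobin using the modified Beer-Lambert law; the coefficients, distances and signal values below are made-up placeholders.

# Illustrative sketch only: converting fNIRS light intensities at two
# wavelengths into hemoglobin concentration changes (modified Beer-Lambert law).
# Extinction coefficients, path lengths and DPF values are placeholders.
import numpy as np

def hemoglobin_changes(intensity, baseline, source_detector_cm=3.0, dpf=(6.0, 5.0)):
    """intensity, baseline: arrays of shape (2, n_samples), one row per wavelength."""
    # Change in optical density relative to baseline light levels
    delta_od = -np.log10(intensity / baseline)
    # Placeholder extinction coefficients [HbO, HbR] per wavelength (1/(mM*cm))
    ext = np.array([[1.5, 3.8],    # ~760 nm: deoxygenated hemoglobin absorbs more
                    [2.5, 1.8]])   # ~850 nm: oxygenated hemoglobin absorbs more
    # Effective optical path length per wavelength
    path = source_detector_cm * np.array(dpf)[:, None]
    # Solve the 2x2 system at every sample: delta_od / path = ext @ [dHbO, dHbR]
    d_hb = np.linalg.solve(ext, delta_od / path)
    return d_hb[0], d_hb[1]   # (dHbO, dHbR) time courses

# Example with simulated light levels for one optode pair
t = np.linspace(0, 60, 600)
baseline = np.ones((2, t.size))
intensity = baseline * (1 - 0.01 * np.sin(2 * np.pi * t / 20))  # slow hemodynamic dip
d_hbo, d_hbr = hemoglobin_changes(intensity, baseline)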

Two young children sit at a table, playing with coloured objects. They wear ‘skull caps’ connected by wires to a computer-like device in the background.

Near-infrared spectroscopy caps measure brain function through a variety of sensors. The caps allow patients to move naturally in a clinical setting and will help researchers discover neural mechanisms related to social interactions.

The portable nature of the fNIRS cap means it can be taken into the field, to psychiatric or forensic facilities, for example, to evaluate people who can’t get to a big in-hospital MRI machine. The ‘skull cap’ makes evaluation of brain processes in real-world situations a real-world possibility.

“Humans are inherently social creatures, and every aspect of our mental lives, including various cognitive and emotional processes, exist within a social context,” Dr. Stojanoski explained in a release describing the new device. “Yet, most studies examining the human brain and mind are done with a focus on the individual in isolation [inside the tube-like structure of an MRI machine]. The new fNIRS system will allow us to discover the set of neural mechanisms that give rise to complex and naturalistic social interactions.”

One exciting possibility afforded by the fNIRS system is conducting what is referred to as ‘hyperscanning’ studies, during which neural activity is simultaneously recorded from two interacting individuals to evaluate the degree to which their brains synchronize. Neural synchrony or mirroring has been associated with shared experiences, classroom-based learning outcomes, and clinician-patient engagement. Planned hyperscanning projects at Ontario Tech will assess brain activity underlying real-world social-cognitive processes in clinical and average populations.
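As a rough illustration of what a synchrony measure can look like (the planned Ontario Tech analyses may well differ), the sketch below computes a sliding-window correlation between the same recording channel from two participants; the sampling rate, window length and simulated signals are all assumptions.

# Illustrative sketch only: one simple way to quantify "neural synchrony" in a
# hyperscanning session is a sliding-window correlation between matching
# channels recorded from two interacting people.
import numpy as np

def sliding_synchrony(sig_a, sig_b, fs=10.0, window_s=30.0, step_s=5.0):
    """Windowed Pearson correlation between two equal-length 1-D signals."""
    win, step = int(window_s * fs), int(step_s * fs)
    scores = []
    for start in range(0, len(sig_a) - win + 1, step):
        a = sig_a[start:start + win]
        b = sig_b[start:start + win]
        scores.append(np.corrcoef(a, b)[0, 1])
    return np.array(scores)

# Example with two simulated hemoglobin time courses (10 Hz, 5 minutes)
rng = np.random.default_rng(0)
shared = np.sin(2 * np.pi * 0.05 * np.arange(3000) / 10.0)   # shared slow rhythm
person_a = shared + 0.5 * rng.standard_normal(3000)
person_b = shared + 0.5 * rng.standard_normal(3000)
print(sliding_synchrony(person_a, person_b).mean())          # closer to 1 = more in sync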

Another group of Toronto-area neurological researchers and clinicians is combining virtual-reality technology with functional MRI (fMRI) to assess brain activity in a potentially life-threatening real-world situation: driving, and in particular driving ability after brain damage.

By implementing a novel virtual-reality technique that places a driving simulator within a traditional functional magnetic resonance imaging scanner, the team of eight researchers from area hospitals and universities, including Sunnybrook, U of T and St. Michael’s, can measure cerebellar brain activity during everyday driving tasks in a safe environment.

Safety, sympathy and empathy seem like admirable assets in any driving situation, so it is fascinating to think about combining the two above-mentioned technologies and research scenarios.

Another potential but purely speculative partnership imagines the imaging output from portable brain-scan technology, like that being deployed at Ontario Tech University, being put to work at the BRAIN-TO lab, part of Toronto’s University Health Network.

Graphic of a human brain with the CN Tower in the middle.

BRAIN-TO (for Brain Research in Advanced Imaging and Neuromodeling – Toronto) is home to a multidisciplinary team led by Dr. Kâmil Uludağ that is pushing the technical limits of magnetic resonance imaging to improve both clinical research and cognitive neuroscience in humans.

Team members are making the invisible visible as they develop and apply advanced artificial intelligence methods to human fMRI data, unleashing new ways to identify models of brain connectivity and imaging-based biomarkers: tell-tale signs of disease or degeneration in the brain or the spine. They call their approach, built on “neurobiologically-plausible artificial intelligence (AI) models”, a neuroscience revolution in our understanding of human brain function.
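For readers curious what a ‘model of brain connectivity’ can start from, here is a small, purely illustrative Python sketch (not the BRAIN-TO pipeline) that builds a functional connectivity matrix by correlating fMRI time courses across brain regions, then flattens it into the kind of feature vector an AI model could consume; the region count and data are simulated.

# Illustrative sketch only: a functional connectivity matrix from fMRI
# region-of-interest time courses, flattened into per-subject features.
import numpy as np

def connectivity_matrix(roi_timeseries):
    """roi_timeseries: array of shape (n_timepoints, n_regions)."""
    return np.corrcoef(roi_timeseries, rowvar=False)   # (n_regions, n_regions)

rng = np.random.default_rng(42)
bold = rng.standard_normal((300, 90))          # 300 fMRI volumes, 90 brain regions
conn = connectivity_matrix(bold)

# Upper-triangle values are a common per-subject feature vector for AI models
features = conn[np.triu_indices_from(conn, k=1)]
print(features.shape)                           # (4005,) connectivity features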

In a clinical lab, a patient wears a skull cap with multiple sensors and wires while a clinician watches in the background.

Photo credit: Baburov (Own work) [CC BY-SA 4.0], via Wikimedia Commons

Imagine what else is plausible in techno-mediated neuroscience, following word from researchers at the University of Toronto Scarborough about their techniques to translate brain signals into visible images.

Imagine seeing what someone else just saw – in their mind!

Led by Professor Adrian Nestor, a research team there used electroencephalography, or EEG technology, to record a person’s electrical brain activity when that person looked at pictures of faces. By mapping and manipulating those signals, an image of the face just seen could then be digitally recreated by a computer algorithm. The work was published in the journal eNeuro.
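In very broad strokes, and emphatically not Professor Nestor’s published method, the idea can be sketched as learning a mapping from EEG features to a compressed face representation and then decoding that representation back into pixels; every array shape and model choice below is an assumption made for illustration.

# Loose schematic only: map EEG features to a low-dimensional face code,
# then invert the code back into an image. All data here are random stand-ins.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_trials, n_eeg_features, img_pixels = 200, 640, 32 * 32

faces = rng.random((n_trials, img_pixels))              # stand-in face images (flattened)
eeg = rng.standard_normal((n_trials, n_eeg_features))   # stand-in EEG feature vectors

# 1. Compress face images into a small set of components
pca = PCA(n_components=20).fit(faces)
face_codes = pca.transform(faces)

# 2. Learn an EEG -> face-code mapping on most trials
model = Ridge(alpha=1.0).fit(eeg[:-10], face_codes[:-10])

# 3. Predict codes for held-out trials and decode them back into images
reconstructed = pca.inverse_transform(model.predict(eeg[-10:]))
print(reconstructed.shape)   # (10, 1024): ten reconstructed 32x32 face images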

Using technology that captures signals in the brain, what that brain saw (what it was processing in the visual cortex) could be pictured elsewhere.

Even outside Toronto.

 

-30-

