
Imagine being able to control machines by thinking.
This communication link is known as a brain-machine interface, and a new algorithm developed in Professor Brokoslaw Laschowski’s Computational Neuroscience Lab could soon make these interfaces more accurate and efficient.
For brain-machine interfaces to work, an algorithm is needed to predict or “decode” human behaviour — such as speech or movement — from patterns of neural activity in the brain. This brain activity can be measured using functional MRI, electroencephalograms, or implanted electrodes, such as those developed by Neuralink.
Today, brain-decoding algorithms exist, but they have significant limitations.
“Brain activity is highly subject-specific,” says Laschowski, a research scientist at the University Health Network and the University of Toronto Robotics Institute, and an assistant professor (status) in the Faculty of Applied Science & Engineering.
“Neural population activity in the brain varies considerably between and within subjects. That’s why building a universal brain-decoding algorithm is so challenging.”
Most brain-decoding algorithms are optimized for individual subjects and tasks, requiring additional data collection and model retraining for each scenario, which is time-consuming and impedes clinical translation. Researchers in the Computational Neuroscience Lab are exploring ways to improve generalization.
“There’s also an interesting phenomenon known as negative transfer,” says Laschowski.
“In machine learning, the standard practice to improve model performance is to increase the size and diversity of the training dataset. However, due to negative transfer, increasing dataset diversity can sometimes degrade performance, leading to counterintuitive results where models trained on smaller datasets outperform those trained on larger ones,” he says.
“This is why source selection for multi-subject brain decoding is important.”
In a new study published on bioRxiv, Laschowski and Aidan Dempster (EngSci 2T5), now a PhD student in robotics at the University of Michigan, developed a new computational framework that minimizes negative transfer in brain decoding by reframing source selection as a mixture model parameter estimation problem. Rather than being included or excluded outright, each source subject contributes through a continuous mixture weight.
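The preprint is the definitive reference, but the core idea of continuous mixture weights can be sketched in a few lines of Python. The weighting scheme, loss function, and variable names below are illustrative assumptions, not the authors’ implementation:

```python
import numpy as np

# Illustrative sketch (not the authors' code): each source subject
# contributes to the decoder's training objective through a continuous
# mixture weight w_i, with w_i >= 0 and sum(w_i) = 1, instead of being
# included (w_i = 1) or excluded (w_i = 0) outright.

def weighted_decoding_loss(decoder_params, subject_data, weights, loss_fn):
    """Aggregate per-subject decoding losses under the mixture weights."""
    return sum(
        w_i * loss_fn(decoder_params, X_i, y_i)
        for (X_i, y_i), w_i in zip(subject_data, weights)
    )

# Example: three source subjects, where the second is down-weighted
# rather than dropped entirely.
weights = np.array([0.5, 0.1, 0.4])
assert np.all(weights >= 0) and np.isclose(weights.sum(), 1.0)
```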
To calculate these weights, they developed a novel convex optimization algorithm based on the Generalized Method of Moments. Because it uses model performance metrics as the generalized moment functions, the algorithm also aligns more closely with the mathematical foundations of domain adaptation theory, strengthening its optimality guarantees.
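For intuition, here is a minimal, hypothetical sketch of what a GMM-style estimate of the mixture weights could look like as a convex program: the weights are constrained to the probability simplex, and a quadratic form of moment residuals built from per-subject performance metrics is minimized. The metric matrix, targets, weighting matrix, and use of cvxpy are assumptions for illustration, not the study’s actual algorithm:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_subjects, n_moments = 8, 3

M = rng.normal(size=(n_subjects, n_moments))   # per-subject performance metrics (assumed)
target = np.zeros(n_moments)                   # desired moment values (assumed)
W = np.eye(n_moments)                          # GMM weighting matrix (assumed)

w = cp.Variable(n_subjects, nonneg=True)       # one mixture weight per source subject
g = M.T @ w - target                           # sample moment conditions g(w)
objective = cp.Minimize(cp.quad_form(g, W))    # GMM criterion: g(w)^T W g(w)
constraints = [cp.sum(w) == 1]                 # weights lie on the probability simplex

cp.Problem(objective, constraints).solve()
print(np.round(w.value, 3))                    # estimated mixture weights
```

Because the objective is quadratic in the weights and the simplex constraints are linear, the problem is convex and can be solved to a global optimum, which is part of what makes a convex formulation attractive here.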
When tested on a brain-decoding dataset of more than 105 subjects, their algorithm achieved state-of-the-art performance while using 62% less training data, suggesting that performance gains stem from reduced negative transfer.
“These findings challenge the dominant practice in machine learning, which focuses on developing and using large-scale datasets for training,” says Laschowski.
“Our study shows that quality, not just quantity, is important when selecting source subjects to train a machine learning model for brain decoding.”
A preliminary version of their algorithm received the Best Poster Award at the 2024 Toronto Robotics Conference.
These brain-decoding algorithms can be used in a variety of applications.
One such example is a new interdisciplinary collaboration between Laschowski and Professor Hugh Liu (UTIAS), exploring how brain-decoding algorithms can be used to control and interact with autonomous drones.
“My lab specializes in computational neuroscience, and his lab specializes in autonomous flight systems,” says Laschowski. “Together, we’re exploring how to combine our expertise to build something novel and advance our understanding of brains and machines.”
In addition to brain-machine interfaces, his algorithms are also being used to support research in computational neuroscience, such as studying the underlying mechanisms and computations in the brain that give rise to the mind.
“What is the mind? What is thinking? Can we build an artificial brain? These are the sorts of grand questions that drive my research program. Our long-term mission is to reverse-engineer the human brain and discover fundamental principles of learning and intelligence,” says Laschowski.
“Understanding how the brain works is perhaps the greatest scientific question of all time.”
– This story was originally published on the University of Toronto’s Faculty of Applied Science & Engineering news site on October 9 by the U of T Robotics Institute.