Why Machine Learning is Key to Neuroscience

When I completed my undergraduate degree in Neurobiology, machine learning was something I had little exposure to. I knew it powered many of the systems we encountered, but I hand-waved it away—“the system does the signal processing and feature detection”—as if that was enough explanation. I wanted to know why things worked, but not badly enough to get swallowed into the math-heavy world of computational neuroscience. 

This curiosity never left me, and in fact grew stronger when I first saw real, live EEG readouts on a monitor. Readouts of my own brain activity. The shaky waveforms held some subtle patterns and some obvious ones (my eye blinks registered loudly on a channel recording from a frontal electrode), but to my untrained eye they were largely inscrutable. How were we able to make sense of all the jiggling squiggles on screen? What could we learn from each recording electrode? What did it mean that the waves were wider on this electrode than on the one next to it?

As we’ve previously discussed, BCIs can not only restore but also potentially augment human ability. They allow people to control prosthetic limbs and perform manual tasks like typing without lifting a finger. These systems started out simple and have grown much more complex. But BCIs are just one aspect of the future of neuroscience.

Within the brain, cells communicate with each other via electrochemical signaling. Non-invasive techniques such as electroencephalography (EEG) and invasive ones like electrocorticography (ECoG) capture these signals. The recordings can then be fed into algorithms that map neural patterns to specific functions. For instance, a person with ALS might use an EEG-based BCI to control a computer cursor by imagining moving their hand. The system detects the brain signals linked with that thought, machine learning models interpret them, and the cursor moves. Thoughts become actions in the external world.
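To make this pipeline concrete, here is a minimal sketch of the decode step on synthetic data. Everything in it is an assumption for illustration: real systems use multi-channel recordings, careful filtering, and trained classifiers, not a single simulated channel and a mean threshold. It leans on one real phenomenon, though: imagining movement suppresses the ~10 Hz mu rhythm over motor cortex, so band power can separate "imagining" from "rest."

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Power in the [lo, hi] Hz band via the FFT (a crude spectral estimate)."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    return psd[(freqs >= lo) & (freqs <= hi)].sum()

rng = np.random.default_rng(0)
fs, n_samples = 250, 500  # 2-second windows at 250 Hz (made-up parameters)

def make_trial(imagined):
    # Motor imagery suppresses the ~10 Hz mu rhythm, so "rest" trials get a
    # strong 10 Hz component and "imagery" trials a weak one, plus noise.
    t = np.arange(n_samples) / fs
    mu_amp = 0.5 if imagined else 3.0
    return rng.normal(0, 1, n_samples) + mu_amp * np.sin(2 * np.pi * 10 * t)

trials = [make_trial(i % 2 == 0) for i in range(100)]
labels = np.array([i % 2 == 0 for i in range(100)])

powers = np.array([band_power(tr, fs, 8, 12) for tr in trials])
threshold = powers.mean()        # crude decision boundary
preds = powers < threshold       # low mu power -> "imagined movement"

accuracy = np.mean(preds == labels)
print(f"decoding accuracy: {accuracy:.2f}")
```

In a real BCI, the `preds` output would be translated into a cursor command on every window; here the point is just the shape of the loop: record, extract a feature, classify.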

The implication of this is major. While older BCI systems sent information one way, from brain to device, newer systems could allow bidirectional flow: the device offers feedback based on the recorded brain activity, which the brain then processes through the senses (touch and sound, for example).

Big Data Problems

The brain contains over 86 billion neurons, and understanding the signaling between them is like trying to dig out a hole with plastic spoons. Recording techniques like EEG generate enormous amounts of data, far too much to mark up by hand. This is where machine learning shines: algorithms have helped researchers parse terabytes of neural data at a higher rate, and with greater accuracy, than manual analysis alone.
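What does "parsing" a recording too large to hold in memory look like in practice? One common pattern is to memory-map the file and scan it in chunks. The sketch below is purely illustrative: the file, channel count, and artifact-detection rule (flag windows with implausibly large amplitude) are all invented for the example.

```python
import os
import tempfile
import numpy as np

# Build a toy "recording" on disk: 8 channels, 60 s at 250 Hz, with one
# large artifact injected on channel 2. All parameters are made up.
fs, n_channels, n_seconds = 250, 8, 60
rng = np.random.default_rng(3)
data = rng.normal(0, 10, (n_channels, fs * n_seconds))
data[2, 5000:5100] += 500  # the injected artifact

path = os.path.join(tempfile.mkdtemp(), "rec.npy")
np.save(path, data.astype(np.float32))

# Memory-map the file: chunks are read lazily, so memory use stays flat
# even if the recording were terabytes instead of megabytes.
rec = np.load(path, mmap_mode="r")
chunk = fs * 5  # process 5 seconds at a time
flagged = []
for start in range(0, rec.shape[1], chunk):
    block = np.asarray(rec[:, start:start + chunk])
    # Flag chunks whose peak amplitude is implausibly large
    if np.abs(block).max() > 200:
        flagged.append(start // fs)

print("artifact near seconds:", flagged)
```

Real pipelines use far richer detectors than a single amplitude threshold, but the chunked-scan structure is the part that scales.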

Machine learning also aids real-time decoding. Researchers at UCSF were able to decode speech from brain activity with up to 97% accuracy. How does it work? Electrodes capture recordings from the brain while a person imagines speaking, and ML models translate these signals in real time.
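"Real time" here means the decoder runs on a sliding window of the incoming stream, emitting a decision every few tens of milliseconds. This sketch is not the UCSF method; it is a generic streaming loop with an invented two-state decoder (a fixed band-power threshold) standing in for their trained models.

```python
import numpy as np
from collections import deque

fs = 250      # sampling rate in Hz (assumed)
window = 125  # 0.5 s decoding window
hop = 25      # emit a decision every 100 ms

def decode(buffer):
    """Toy decoder: compare 10 Hz band power against a fixed threshold."""
    x = np.asarray(buffer)
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    mu = psd[(freqs >= 8) & (freqs <= 12)].sum()
    return "rest" if mu > 5000 else "active"

# Simulate a stream: 1 s of rest (strong 10 Hz rhythm) then 1 s of "intent"
rng = np.random.default_rng(1)
t = np.arange(fs) / fs
stream = np.concatenate([
    3.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, fs),  # rest
    rng.normal(0, 1, fs),                                     # intent
])

buf = deque(maxlen=window)  # ring buffer of the most recent samples
decisions = []
for i, sample in enumerate(stream):
    buf.append(sample)
    if len(buf) == window and i % hop == 0:
        decisions.append(decode(buf))
```

A speech decoder replaces the two labels with phonemes or words and the threshold with a deep network, but the latency budget comes from the same window-and-hop arithmetic.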

Mapping the Territory

Connectomics, the study of how various brain regions connect and interact, is another major field transformed by machine learning. Deep learning algorithms make it possible to model connectivity in pathways that may be involved in emotion, cognition, and more. These models not only show how information flows through the brain but may also help us diagnose neurodegenerative diseases much earlier. This is why machine learning has earned its place in neuroscience as a valuable tool.
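A tiny taste of what "modeling connectivity" means: estimate how strongly regions co-activate, then treat the result as a graph. In the sketch below the region names, the shared-driver construction, and the "emotion-related cluster" are all hypothetical; the real analysis step shown, pairwise correlation of regional time series, is the standard starting point for functional connectivity.

```python
import numpy as np

rng = np.random.default_rng(2)
n_regions, n_timepoints = 5, 1000
regions = ["PFC", "ACC", "Amygdala", "Hippocampus", "V1"]  # hypothetical ROIs

# Synthetic activity: give the first three regions a shared driver so they
# form a correlated cluster, standing in for a functional pathway.
common = rng.normal(0, 1, n_timepoints)
signals = rng.normal(0, 1, (n_regions, n_timepoints))
signals[:3] += 2.0 * common

# Functional connectivity = pairwise correlation of regional time series
conn = np.corrcoef(signals)
np.fill_diagonal(conn, 0)  # ignore trivial self-connections

# Node strength: how connected each region is overall
strength = conn.sum(axis=1)
for name, s in zip(regions, strength):
    print(f"{name:12s} strength = {s:.2f}")
```

Deep learning enters downstream of a matrix like `conn`: graph-based models learn which connectivity patterns distinguish, say, healthy from early-degenerating brains.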

Understanding this capability, and the shift towards machine learning, is critical because the work is foundational for new markets, especially in neurotech, brain-computer interfaces, and even cognitive enhancement.

Take care,
Eashan
