The Review – Notes from the Field

First Edition – Phone Drain, Movies for Mice, and Brain Replays

Sometimes, when researching, I come across interesting studies, papers, or articles. The Review is a roundup of these links with a brief explanation for your reference. These will be a little more technical in nature but I figure curious minds will delight in them. Enjoy!

Turns out, the mere presence of your phone is enough to sap your cognitive capacity.

The researchers ran the study with college students, comparing different conditions: phone in a pocket, on the table, powered on or off. The effect was remarkable. The further out of sight the phone was, like left outside the lecture hall, the better students scored on cognitive measures relative to the control group. The takeaway? Keep your phone as far away from you as possible when working to really hit your flow state. I've tried leaving my phone in another room a few times while working, and it genuinely felt easier to get things done. Try it and tell me if you notice a difference!

Experiments on rodents like mice and rats often raise questions about ethics and animal welfare. Maybe in the future, we won't need as many of them. Researchers used recordings from 135,000 neurons across 14 mice watching action movies to train a foundation model, a kind of digital replica of the mouse visual cortex. Once trained, the model could simulate how a new mouse's brain would respond to both familiar and unfamiliar stimuli, even with just a few minutes of recordings from that new animal.

That's crazy! What's powerful here is that the model didn't just replicate activity. It also predicted things like a neuron's cell type and dendritic structure. This points to a future where neuroscience experiments can be run in silico: faster, cheaper, and more flexible than what's possible in the lab. Not to mention needing far fewer live animals!
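
If you're curious what that "digital twin" workflow might look like, here's a toy sketch in PyTorch. To be clear, this is my own simplification, not the paper's architecture: the class names, dimensions, and training loop are all invented for illustration. The pattern to notice is a big shared "core" trained once on the multi-mouse corpus, plus a tiny per-neuron readout you can fit from just a few minutes of a new mouse's recordings.

```python
# Toy sketch of the "digital twin" pattern (hypothetical names/sizes, not the
# paper's architecture): a shared core is pretrained on the big multi-mouse
# corpus, then a tiny per-neuron readout is fit for each new mouse.
import torch
import torch.nn as nn

class SharedCore(nn.Module):
    """Stand-in for the foundation model trained on the 14-mouse recordings."""
    def __init__(self, n_pixels=1024, n_features=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_pixels, 256), nn.ReLU(),
            nn.Linear(256, n_features), nn.ReLU(),
        )

    def forward(self, frames):           # frames: (time, n_pixels)
        return self.net(frames)          # -> (time, n_features)

core = SharedCore()
for p in core.parameters():              # pretend pretraining already happened
    p.requires_grad_(False)

# A "new" mouse: only a short recording is available.
n_neurons, n_timepoints = 50, 600        # roughly a few minutes of imaging
frames = torch.rand(n_timepoints, 1024)             # flattened movie frames
responses = torch.rand(n_timepoints, n_neurons)     # recorded neural activity

# Fit only the lightweight per-neuron readout on the new animal's data.
readout = nn.Linear(128, n_neurons)
opt = torch.optim.Adam(readout.parameters(), lr=1e-2)
for step in range(200):
    loss = nn.functional.mse_loss(readout(core(frames)), responses)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The fitted twin can now simulate this mouse's responses to unseen movies.
new_movie = torch.rand(100, 1024)
simulated = readout(core(new_movie))     # (100 timepoints, 50 neurons)
```

The heavy lifting lives in the shared core; fitting a new animal is just a small regression problem, which is why so little data can go such a long way.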

It's possible to "decode" brain activity to reconstruct what someone is seeing, but the catch is efficiency. It used to take dozens of hours of recordings per person, and if you've worked with big data before, you know that it gets unwieldy and expensive really fast.

Each brain is unique, so how do we reconcile that variability when building reconstruction models?

The team behind MindEye2 uses the idea of a "shared latent space" as a clever solution. By training on several individuals' brain activity and learning to map each person's data into a common space, they can use a single neural network that generalizes better, even to people who weren't part of the original training set. It becomes a better universal decoder because it accounts for each person's uniqueness without having to start from scratch.
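
Here's a toy sketch of that pattern in PyTorch. Again, this is my own illustration, not MindEye2's actual code; the class names, dimensions, and voxel counts are all made up. The idea to notice: every subject gets only a thin linear adapter into the shared space, while the expensive decoder is trained once and shared by everyone.

```python
# Toy sketch of the shared-latent-space pattern (hypothetical names/sizes,
# not MindEye2's actual code): each subject gets a thin linear adapter from
# their own voxel space into a common latent space, and a single decoder is
# trained on everyone's data at once.
import torch
import torch.nn as nn

SHARED_DIM = 256

class SharedSpaceDecoder(nn.Module):
    def __init__(self, voxel_counts, out_dim=512):
        super().__init__()
        # Per-subject adapters handle each brain's unique size and geometry.
        self.adapters = nn.ModuleDict({
            name: nn.Linear(n_voxels, SHARED_DIM)
            for name, n_voxels in voxel_counts.items()
        })
        # One decoder shared across all subjects; here it just emits a
        # generic embedding (the real pipeline targets image representations).
        self.decoder = nn.Sequential(
            nn.Linear(SHARED_DIM, 1024), nn.GELU(),
            nn.Linear(1024, out_dim),
        )

    def forward(self, subject, voxels):
        return self.decoder(self.adapters[subject](voxels))

# Subjects can have different voxel counts; only their adapters differ.
model = SharedSpaceDecoder({"subj01": 15000, "subj02": 13000})
scan = torch.rand(8, 15000)              # a batch of subj01 fMRI patterns
embedding = model("subj01", scan)        # -> (8, 512) in the shared space

# Onboarding a brand-new person means fitting one small linear layer,
# not retraining the whole decoder.
model.adapters["subj03"] = nn.Linear(14000, SHARED_DIM)
```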

This breakthrough means MindEye2 can use just 1 hour of fMRI data to make image reconstructions like the one seen in Figure 1, below. Look at the improvement over MindEye1!

That's all for today, thanks for reading! I love researching and answering your questions, so if you have a topic you're curious about, please send it my way. I'll consider looking into it.

Take care,
Eashan
