
One Step Closer to a Mind Reader

Brain-computer interfaces (BCIs) have long been the subject of research and development aimed at translating brain activity into natural language. However, previous BCIs required invasive procedures in which electrodes were implanted in the brain. Noninvasive methods such as electroencephalography (EEG) have been tried, but they produce only a few words or phrases rather than coherent language.

Recently, Alexander Huth and his colleagues at the University of Texas at Austin developed a new noninvasive method using functional magnetic resonance imaging (fMRI) and GPT-1 (an early predecessor of GPT-4). Using fMRI to track neural activity is not a new idea: it does so by monitoring changes in blood flow. On its own, however, an fMRI scan does not tell us what a person is hearing or thinking.

To overcome this limitation, the team trained an AI model to match the words a person was hearing with scans of the brain's language-related regions. Once trained, the model can predict what a person is hearing from new brain images. GPT-1 was used to propose likely words, so the model only needed to score a small set of candidate words and select among them. The system captures the gist of the story a person hears, though not every word. The method also worked for people who imagined telling a story and for those who watched a silent video.
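
To make the two-stage idea above concrete, here is a minimal sketch in Python. It is not the authors' code: the feature extractor, the candidate-proposal function, and all names and sizes are hypothetical placeholders. It simply (1) fits a ridge-regression "encoding model" that predicts voxel responses from word features, and (2) at decode time scores candidate words, as a language model might propose them, by how closely their predicted brain responses match the observed scan.

import numpy as np

rng = np.random.default_rng(0)

N_VOXELS = 50      # voxels from language-related regions (toy size, hypothetical)
N_FEATURES = 16    # dimensionality of the word-feature vectors (toy size, hypothetical)

def word_features(word):
    # Hypothetical stand-in for a real word embedding (e.g., taken from a language model).
    seed = abs(hash(word)) % (2 ** 32)
    return np.random.default_rng(seed).standard_normal(N_FEATURES)

# 1) Training: learn a linear map (ridge regression) from word features to voxel responses.
train_words = ["once", "upon", "a", "time", "there", "was", "a", "storyteller"]
X = np.stack([word_features(w) for w in train_words])                        # (n_words, N_FEATURES)
true_map = rng.standard_normal((N_FEATURES, N_VOXELS))
Y = X @ true_map + 0.1 * rng.standard_normal((len(train_words), N_VOXELS))   # simulated scans

lam = 1.0  # ridge penalty
W = np.linalg.solve(X.T @ X + lam * np.eye(N_FEATURES), X.T @ Y)

# 2) Decoding: a language model proposes candidate words; keep the one whose
#    predicted brain response best matches the observed scan.
def propose_candidates(context):
    # Hypothetical stand-in for GPT-style next-word proposals given the story so far.
    return ["storyteller", "dragon", "river", "time"]

def best_candidate(context, observed_scan):
    scores = {}
    for cand in propose_candidates(context):
        predicted = word_features(cand) @ W
        scores[cand] = -np.linalg.norm(predicted - observed_scan)
    return max(scores, key=scores.get)

# Toy usage: decode the word paired with the last simulated scan
# (should typically recover "storyteller").
print(best_candidate(train_words[:-1], Y[-1]))

Restricting the search to a few language-model-proposed candidates is what makes the decoding tractable: the encoding model only has to compare a handful of predicted responses against the scan rather than search over the whole vocabulary.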

The potential applications of this noninvasive method are numerous. For example, it might help people with brain injuries, stroke, or paralysis to communicate. It is important to note, however, that brain activity patterns differ from person to person, so the model must be trained separately for each individual. This breakthrough in BCI research opens the door to significant advances in our ability to understand and communicate with people who have difficulty expressing themselves through traditional means. While further research is needed to refine the method, the results so far are promising and exciting.

Link: Research Article
Contact: Email
Location: Texas, US

Summary

The purpose
To use brain scans to predict the stories that are heard
The idea 
Train an AI model with brain scans and the stories that are heard

Further Possibilities


1. Use GPT-4 instead of GPT-1 in the method
2. Use BCI and generative AI to generate content
3. Develop an operating system based on BCI
4. Develop a wearable, BCI-based computer
5. Use BCI to train AI models with human language and thoughts

Questions


1. HMW combine other signals with noninvasive brain images to predict thoughts?
2. HMW protect privacy with brain signals?
3. HMW communicate with the Internet through BCI?

