
Scientists have developed a decoder that converts brain activity to text

Researchers at the University of Texas at Austin have developed a semantic decoder that converts brain activity into text. The artificial intelligence system, which is non-invasive and requires no surgical implants, could provide a new communication tool for people who cannot physically speak. The decoder is trained by having a participant listen to hours of podcasts in an fMRI scanner; after training, it can generate text based solely on brain activity.

A new artificial intelligence system called a semantic decoder can translate a person’s brain activity, recorded while the person listens to a story or silently imagines telling one, into a continuous stream of text. The system, developed by researchers at the University of Texas at Austin, could help people who are mentally aware but physically unable to speak, such as those debilitated by a stroke, communicate intelligibly again.

The study, published today (May 1) in the journal Nature Neuroscience, was led by Jerry Tang, a doctoral student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin. The work relies in part on a transformer model, similar to the ones that power OpenAI’s ChatGPT and Google’s Bard.

Unlike other speech decoding systems in development, this one does not require surgical implants, making it non-invasive. Participants are also not restricted to words from a prescribed list. Brain activity is measured with an fMRI scanner after extensive training of the decoder, during which a person listens to hours of podcasts in the scanner. Later, provided the participant is willing to have their thoughts decoded, listening to a new story or imagining telling a story allows the machine to generate the corresponding text from brain activity alone.
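To make the training-then-decoding process more concrete, the following is a minimal, self-contained Python sketch of one common pattern in fMRI language decoding: fit an encoding model that predicts voxel responses from word features during training, then rank candidate word sequences by how well their predicted responses match newly recorded activity. This is a toy illustration, not the study’s code; the ridge regression, the random feature vectors standing in for language-model embeddings, and the scoring function are simplifying assumptions.

```python
# Toy sketch of an "encoding model + candidate scoring" fMRI decoder.
# Not the study's code: the shapes, the ridge regression, and the random
# "word features" (stand-ins for language-model embeddings) are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# --- Training: learn a linear map from word features to voxel responses. ---
n_timepoints, n_features, n_voxels = 200, 16, 50
word_features = rng.normal(size=(n_timepoints, n_features))  # features of heard words
true_weights = rng.normal(size=(n_features, n_voxels))       # unknown "brain" mapping
fmri_train = word_features @ true_weights + 0.1 * rng.normal(size=(n_timepoints, n_voxels))

lam = 1.0  # ridge penalty
W = np.linalg.solve(
    word_features.T @ word_features + lam * np.eye(n_features),
    word_features.T @ fmri_train,
)

# --- Decoding: rank candidate word sequences by how well the encoding model's
# predicted responses match the newly observed activity. ---
def score_candidate(candidate_features: np.ndarray, observed: np.ndarray) -> float:
    """Higher is better: negative squared error between predicted and observed."""
    return -float(np.sum((candidate_features @ W - observed) ** 2))

observed = word_features[:5] @ true_weights  # pretend newly recorded activity
candidates = {
    "similar guess": word_features[:5] + 0.05 * rng.normal(size=(5, n_features)),
    "unrelated guess": rng.normal(size=(5, n_features)),
}
for name, feats in candidates.items():
    print(f"{name}: {score_candidate(feats, observed):.1f}")
```

In a full system, the candidate word sequences would be proposed by a language model, which is where transformer models of the kind mentioned above come in.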

“For a non-invasive method, this is a real leap forward compared to what has been done before, which is typically single words or short sentences,” Huth said. “We’re getting the model to decode continuous language over extended periods of time with complicated ideas.”

The result is not a literal transcription. Instead, the researchers designed the system to capture the gist of what was said or thought, albeit imperfectly. About half the time, when the decoder had been trained on a participant’s brain activity, the machine produced text that closely (and sometimes exactly) matched the intended meaning of the original words.

For example, in the experiments, the thoughts of a participant listening to a speaker say, “I don’t have a driver’s license yet,” were translated as, “He hasn’t even started learning to drive yet.” And hearing the words, “I didn’t know whether to scream, cry or run. Instead, I said, ‘Leave me alone!’” was decoded as, “Started to scream and cry, and then she just said, ‘I told you to leave me alone.’”

Beginning with an earlier version of the paper that appeared online as a preprint, the researchers also addressed the potential for misuse of the technology. The paper explains that decoding worked only with cooperative participants who had willingly taken part in training the decoder. Results for people the decoder had not been trained on were unintelligible, and if trained participants later put up resistance, for example by thinking other thoughts, the results were likewise unusable.

Source: Port Altele
