Health Tech
LOS ANGELES, May 2 (Xinhua) -- U.S. researchers have developed a new artificial intelligence system that might help people who are mentally conscious yet physically unable to speak, such as those incapacitated by strokes.
The system, called a semantic decoder, can translate a person's brain activity -- recorded while the person listens to a story or silently imagines telling one -- into a continuous stream of text, according to the study published Monday in the journal Nature Neuroscience.
Unlike other language decoding systems in development, this system does not require subjects to have surgical implants, making the process noninvasive.
The newly developed decoder does not reconstruct speech word-for-word; instead, it recovers the "gist" of what the user is hearing, according to the study.
The system was developed by researchers at The University of Texas at Austin (UT Austin).
"For a noninvasive method, this is a real leap forward compared to what's been done before, which is typically single words or short sentences," said Alex Huth, an assistant professor of neuroscience and computer science at UT Austin. "We're getting the model to decode continuous language for extended periods of time with complicated ideas."