
Researchers connect the brain implant to the voice synthesizer computer. (Photo by Noah Berger/Courtesy of UC Berkeley)
A team of researchers in California says it has developed a way to restore naturalistic speech to people with severe paralysis using brain-computer interfaces (BCIs).
At UC Berkeley and UC San Francisco, researchers used AI-based modeling to develop a streaming method that synthesizes brain signals into audible speech in near-real time. The researchers say it marks a critical step toward enabling communication for people who have lost the ability to speak. They published their findings in Nature Neuroscience.
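The streaming element is the key shift: rather than waiting for a full sentence of neural data before producing audio, the system decodes in short windows as data arrives. The sketch below is a minimal illustration of that chunked decoding loop, not the study's actual code; the simulated 64-channel features, window size, and stand-in linear decoder are all assumptions for demonstration.

```python
import numpy as np

# Minimal illustration of chunked, streaming decoding (not the study's
# code). Simulated neural windows are decoded one at a time, so audio
# could be synthesized as data arrives rather than after a full sentence.

N_CHANNELS = 64   # assumed number of recording channels
N_ACOUSTIC = 20   # assumed acoustic feature dimension
WINDOW = 16       # neural samples per decoding window

rng = np.random.default_rng(0)
decoder_weights = rng.normal(size=(N_CHANNELS, N_ACOUSTIC))  # stand-in decoder

def decode_window(neural_window):
    """Map one window of neural features to acoustic features."""
    return neural_window @ decoder_weights  # shape (WINDOW, N_ACOUSTIC)

def neural_stream(n_windows):
    """Simulate windows of multichannel neural activity arriving live."""
    for _ in range(n_windows):
        yield rng.normal(size=(WINDOW, N_CHANNELS))

for i, window in enumerate(neural_stream(5)):
    acoustic = decode_window(window)  # would feed a vocoder in practice
    print(f"window {i}: decoded {acoustic.shape} acoustic frames")
```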
In a post on UC Berkeley’s website, Gopala Anumanchipalli, assistant professor of electrical engineering and computer sciences at UC Berkeley and the study’s co-principal investigator, likened the approach to the Alexa and Siri offerings from Amazon and Apple, respectively. Using a similar type of algorithm, Anumanchipalli says, they found a way to decode neural data and enable near-synchronous voice streaming.
“The result is more naturalistic, fluent speech synthesis,” said Anumanchipalli.
Co-lead author Cheol Jun Cho, a UC Berkeley Ph.D. student in electrical engineering and computer sciences, says the neuroprosthesis works by sampling neural data from the motor cortex, the part of the brain that controls speech production. It then uses AI to decode that brain activity into speech.
“We are essentially intercepting signals where the thought is translated into articulation and in the middle of that motor control,” Cho said. “So what we’re decoding is after a thought has happened, after we’ve decided what to say, after we’ve decided what words to use and how to move our vocal-tract muscles.”
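In other words, the decoder sits downstream of intent and upstream of the paralyzed muscles. The toy sketch below illustrates that staging only; every function is a hypothetical placeholder rather than the study's implementation, with the BCI tapping the pipeline at the motor-command stage, not at the level of thoughts or word choice.

```python
# Toy staging of the pipeline Cho describes (all names are hypothetical
# placeholders, not the study's implementation). The BCI records at the
# motor-command stage: after intent and word choice, before the muscles.

def form_intent(text):
    """Upstream of the decoder: deciding what to say. Not decoded."""
    return text.split()

def motor_commands(words):
    """Stand-in for motor-cortex activity driving the vocal tract."""
    return [len(w) for w in words]  # toy articulatory features

def decode_to_speech(commands):
    """The neuroprosthesis's role: map motor signals to audio frames."""
    return sum(commands)  # placeholder for synthesized frame count

n_frames = decode_to_speech(motor_commands(form_intent("hello world")))
print(f"synthesized {n_frames} audio frames")
```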