Paralyzed Man’s Brain Waves Get Turned Into Sentences on Computer, Scientists ‘Thrilled’ Beyond Words

Researchers at UC San Francisco have successfully developed a “speech neuroprosthesis” that has allowed a man with severe paralysis to communicate in sentences, translating signals from his brain to the vocal tract directly into words that appear as text on a screen.

The achievement builds on more than a decade of effort by UCSF neurosurgeon Edward Chang to develop technology that enables people with paralysis to communicate even if they cannot speak for themselves.

“To our knowledge, this is the first successful demonstration of direct whole-word decoding of the brain activity of someone who is paralyzed and unable to speak,” said Chang, lead author of the study. “It holds great promise for restoring communication by harnessing the brain’s natural speech machinery.”

Every year thousands of people lose the ability to speak due to a stroke, accident or illness. With further development, the approach described in this study could one day allow these people to communicate fully.

Translating brain signals into speech

Previous work in the field of communication neuroprosthetics has focused on restoring communication through spelling-based approaches that type out letters one by one.

Chang’s study differs from these efforts in a critical way: His team is translating signals intended to control the muscles of the vocal system to speak words, rather than signals to move the arm or hand to enable writing.

Chang said that this approach takes advantage of the natural and fluid aspects of speech and promises faster and more organic communication.

“With speech, we typically communicate information at a very high rate, up to 150 to 200 words per minute,” he said, noting that spelling-based approaches using typing, writing, and cursor control are considerably slower and more laborious. “Going straight to words, as we are doing here, has great advantages because it is closer to how we normally speak.”

Over the past decade, Chang’s progress toward this goal was facilitated by patients at the UCSF Epilepsy Center undergoing neurosurgery to identify the source of their seizures using sets of electrodes placed on the surface of their brains.

These patients, all of whom had normal speech, volunteered to have their brain recordings analyzed for speech-related activity. Initial success with these volunteer patients paved the way for the current trial in people with paralysis.

Previously, Chang and his colleagues at UCSF’s Weill Institute for Neurosciences mapped the cortical activity patterns associated with the vocal tract movements that produce each consonant and vowel.

To translate those findings into whole-word speech recognition, David Moses, PhD, a postdoctoral engineer in Chang’s lab, developed new methods for real-time decoding of those patterns, along with statistical language models to improve accuracy.

But their success in decoding speech in participants who could speak did not guarantee that the technology would work in a person whose vocal tract is paralyzed. “Our models needed to learn the mapping between complex patterns of brain activity and intended speech,” Moses said. “That poses a great challenge when the participant cannot speak.”

Furthermore, the team did not know whether the brain signals that control the vocal tract would remain intact for people who have not been able to move their vocal muscles for many years. “The best way to find out if this might work was to try it,” Moses said.

The first 50 words

To investigate the potential of this technology in patients with paralysis, Chang teamed up with colleague Karunesh Ganguly, an associate professor of neurology, to launch a study known as “BRAVO” (Brain-Computer Interface Restoration of Arm and Voice).

The first participant in the trial is a man in his 30s who suffered a devastating stroke more than 15 years ago that severely damaged the connection between his brain and his vocal tract and limbs.

Since his injury, he has had extremely limited head, neck, and limb movements, and communicates by using a stylus attached to a baseball cap to mark letters on a screen.

The participant, who asked to be referred to as BRAVO1, worked with the researchers to create a 50-word vocabulary that Chang’s team could recognize from brain activity using advanced computer algorithms. The vocabulary, which includes words like “water,” “family,” and “good,” was sufficient to create hundreds of sentences expressing concepts applicable to BRAVO1’s daily life.

For the study, Chang surgically implanted a high-density electrode array over BRAVO1’s speech motor cortex. After the participant’s full recovery, the team recorded 22 hours of neural activity in this brain region across 48 sessions spanning several months. In each session, BRAVO1 attempted to say each of the 50 vocabulary words many times while the electrodes recorded the brain signals from his speech cortex.

Translating attempted speech into text

To translate the patterns of recorded neural activity into specific words, the other two lead authors of the study used custom neural network models, which are forms of artificial intelligence. When the participant tried to speak, these networks distinguished subtle patterns in brain activity to detect speech attempts and identify which words he was trying to say.
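The study itself does not include code, but the two-stage structure described here (first detect that a speech attempt is happening, then classify which of the 50 words was attempted) can be sketched in a few lines. The Python snippet below is purely illustrative, not the study’s actual networks: the vocabulary subset, channel count, and random-projection “models” standing in for the trained neural networks are all invented for this sketch.

```python
import numpy as np

VOCAB = ["water", "family", "good"]  # stand-in for the study's 50-word vocabulary

rng = np.random.default_rng(0)

# Hypothetical stand-ins for trained models: in the study these were custom
# neural networks; random projections play that role here purely for illustration.
N_CHANNELS = 128  # illustrative electrode count, not the study's array size
W_detect = rng.normal(size=N_CHANNELS)
W_classify = rng.normal(size=(len(VOCAB), N_CHANNELS))

def detect_attempt(window: np.ndarray) -> bool:
    """Stage 1: flag a window of neural activity that looks like a speech attempt."""
    score = 1.0 / (1.0 + np.exp(-window @ W_detect))  # sigmoid score
    return bool(score > 0.5)

def word_probabilities(window: np.ndarray) -> np.ndarray:
    """Stage 2: assign a probability to each vocabulary word for a detected attempt."""
    logits = W_classify @ window
    exp = np.exp(logits - logits.max())  # softmax over the vocabulary
    return exp / exp.sum()

# Simulated stream of neural-activity windows
for t in range(5):
    window = rng.normal(size=N_CHANNELS)
    if detect_attempt(window):
        probs = word_probabilities(window)
        print(f"t={t}: attempt detected -> {VOCAB[int(probs.argmax())]}")
```

Separating detection from classification in this way means the system only has to answer the harder “which word?” question during the windows where the easier “is he trying to speak?” question already came back positive.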

To test their approach, the team first presented BRAVO1 with short sentences constructed from the 50 vocabulary words and asked him to try to say them several times. As he made his attempts, the words were decoded from his brain activity and appeared, one by one, on a screen.

The team then went on to prompt him with questions like “How are you doing today?” and “Do you want some water?” As before, BRAVO1’s attempted speech appeared on the screen: “I’m fine,” and “No, I’m not thirsty.”

The team found that the system could decode words from brain activity at a rate of up to 18 words per minute with an accuracy of up to 93 percent (median 75 percent).

Contributing to this success was a language model that Moses applied, which implemented an “auto-correct” function similar to those used by consumer text-messaging and speech-recognition software.
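To give a feel for how such an auto-correct can work, here is a minimal, hypothetical sketch: a toy bigram language model rescores the decoder’s per-word probabilities with a Viterbi search, so that a noisy decoder step can be overruled by a more plausible word sequence. The tiny vocabulary and every probability below are invented for illustration; the study’s actual language model is not reproduced here.

```python
import numpy as np

VOCAB = ["i", "am", "good", "thirsty", "not"]

# Hypothetical per-word probabilities from the neural decoder for a
# three-word utterance (rows: time steps, columns: VOCAB). Invented numbers;
# note step 1 is ambiguous between "am" (0.40) and "not" (0.30).
decoder_probs = np.array([
    [0.70, 0.10, 0.05, 0.05, 0.10],
    [0.20, 0.40, 0.05, 0.05, 0.30],
    [0.05, 0.05, 0.60, 0.25, 0.05],
])

# Toy bigram language model: P(next word | previous word), with a small
# floor probability for unseen pairs.
bigram = {("i", "am"): 0.6, ("am", "good"): 0.5, ("am", "thirsty"): 0.3,
          ("i", "not"): 0.05, ("not", "good"): 0.2}

def lm(prev: str, word: str) -> float:
    return bigram.get((prev, word), 0.01)

def decode(probs: np.ndarray) -> list:
    """Viterbi search for the jointly most probable word sequence: the
    language model can overrule a noisy decoder step (the auto-correct effect)."""
    n_steps, n_words = probs.shape
    best = np.log(probs[0])                       # best log-prob ending in each word
    back = np.zeros((n_steps, n_words), dtype=int)
    for t in range(1, n_steps):
        new_best = np.empty(n_words)
        for j in range(n_words):
            scores = best + np.log([lm(VOCAB[i], VOCAB[j]) for i in range(n_words)])
            back[t, j] = int(scores.argmax())     # remember the best predecessor
            new_best[j] = scores.max() + np.log(probs[t, j])
        best = new_best
    path = [int(best.argmax())]                   # backtrace from the best final word
    for t in range(n_steps - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [VOCAB[i] for i in reversed(path)]

print(decode(decoder_probs))  # -> ['i', 'am', 'good']
```

In this example the decoder alone finds step 1 ambiguous, but because P(“am” | “i”) is far higher than P(“not” | “i”), the combined search settles on “i am good,” which is exactly the kind of correction a consumer keyboard’s auto-correct performs.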

Moses characterized the trial’s early results, which appear in the New England Journal of Medicine, as a proof of principle. “We were thrilled to see the accurate decoding of a variety of meaningful sentences,” he said. “We have shown that it is actually possible to facilitate communication in this way and that it has the potential to be used in conversational settings.”

Looking ahead, Chang and Moses said they will expand the trial to include more participants affected by severe paralysis and communication deficits. Currently, the team is working to increase the number of words in the available vocabulary, as well as to improve the speed of speech.

Both said that while the study focused on a single participant and a limited vocabulary, those limitations do not diminish the achievement. “This is an important technological milestone for a person who cannot communicate naturally,” Moses said, “and it demonstrates the potential of this approach to give a voice to people with severe paralysis and loss of speech.”

(WATCH the video on this research below.)

Source: University of California San Francisco

SHARE this fascinating breakthrough with friends on social media…


