After Ann Johnson suffered a stroke 18 years ago, she became paralyzed and lost the ability to speak. Now, with the help of a brain implant and artificial intelligence, she is able to communicate verbally again through a digital avatar.
In a study published last week in the journal Nature, researchers implanted an array of electrodes on the surface of Ann’s brain to transmit her brain activity to computers. There, A.I. algorithms translate the signals into words. After a brief delay, the on-screen avatar speaks Ann’s words aloud and mirrors her emotions with facial expressions.
“There’s nothing that can convey how satisfying it is to see something like this actually work in real time,” Edward Chang, a co-author of the study and a neurosurgeon at the University of California, San Francisco (UCSF), said at a news briefing, per NBC News’ Aria Bendix.
“This is quite a jump from previous results,” Nick Ramsey, a neuroscientist at the University of Utrecht in the Netherlands who did not contribute to the study, tells the Guardian’s Hannah Devlin. “We’re at a tipping point.”
Ann currently communicates using a device that lets her type words on a screen by moving her head, according to a statement from UCSF. That device produces only 14 words per minute; spoken human conversation, by contrast, averages about 160. But with the new interface, which Ann can use only as part of the study, she can produce 78 words per minute, bringing her closer to a natural speaking cadence. The device deciphered her intended speech with around 75 percent accuracy.
The interface marks a major advance over an earlier iteration from the same research team, which translated intended speech into text at a rate of 15 words per minute, writes Wired’s Emily Mullin.
The improved system relies on an implant with 253 electrodes placed over parts of the brain important for communication. Before Ann’s stroke, these brain regions sent signals to muscles involved in speech, like those in the larynx, lips and tongue. Now, a cable plugged into a port on Ann’s head transports the signals to computers.
Next, A.I. translates these signals into phonemes, the individual units of sound that make up words. It then combines the phonemes into words. The digital avatar that speaks the words was designed to look like Ann, and its voice was trained to sound like hers using clips of her speaking in her wedding video. The avatar’s face also moves and visually expresses emotions based on Ann’s brain signals.
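The study itself relies on deep-learning models trained on Ann’s recorded brain activity, and its code is not reproduced here. But the two-stage idea it describes, sound units first and words second, can be sketched in a few lines of Python. Everything below is a hypothetical stand-in: the decoded phoneme stream and the tiny lexicon are invented for illustration, not taken from the study.

```python
# Toy illustration of the two-stage decoding idea: neural-signal
# features -> phonemes -> words. The phoneme stream below is a
# made-up stand-in for the output of a trained classifier.
decoded_phonemes = ["HH", "AH", "L", "OW", " ", "W", "ER", "L", "D"]

# A tiny hypothetical lexicon mapping phoneme sequences to words
# (the real system drew on a 1,024-word conversational vocabulary).
LEXICON = {
    ("HH", "AH", "L", "OW"): "hello",
    ("W", "ER", "L", "D"): "world",
}

def phonemes_to_words(phonemes):
    """Group phonemes at word boundaries and look each group up."""
    words, current = [], []
    for p in phonemes + [" "]:  # trailing boundary flushes the last word
        if p == " ":
            if current:
                words.append(LEXICON.get(tuple(current), "<unknown>"))
                current = []
        else:
            current.append(p)
    return words

print(" ".join(phonemes_to_words(decoded_phonemes)))  # -> "hello world"
```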
“The simple fact of hearing a voice similar to your own is emotional,” Ann told the researchers after the study, according to Nature News’ Miryam Naddaf.
Ann had to train with the interface for weeks, silently speaking the same phrases over and over so that it could learn her brain’s signals. Through these trials, the algorithm was taught to recognize words from a 1,024-word conversational vocabulary.
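That fixed word bank matters: restricting the decoder to a known vocabulary means noisy output can be snapped to the nearest valid word. The study used trained language models for this step; the sketch below only illustrates the underlying idea using Python’s standard difflib and an assumed, tiny stand-in word list.

```python
# Minimal sketch of vocabulary-constrained correction: snap a noisy
# decoded word to the closest entry in a fixed word bank. This is
# not the study's method, just an illustration of why a fixed
# vocabulary helps accuracy.
from difflib import get_close_matches

# Hypothetical stand-in for the 1,024-word conversational vocabulary.
WORD_BANK = ["hello", "world", "water", "thank", "you", "family"]

def snap_to_vocabulary(noisy_word, word_bank=WORD_BANK):
    """Return the closest word in the bank, or the input if none match."""
    matches = get_close_matches(noisy_word, word_bank, n=1, cutoff=0.6)
    return matches[0] if matches else noisy_word

print(snap_to_vocabulary("helo"))   # -> "hello"
print(snap_to_vocabulary("famly"))  # -> "family"
```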
“She’s extremely dedicated and hardworking,” Kaylo Littlejohn, a co-author of the study and an electrical engineer at UCSF, tells Fortune’s Erin Prater. “She’s willing to record as long as needed, and she really understands that her efforts will go toward creating a speech neuroprosthesis that many people who have this kind of disability will be able to use.”
In a second study from different researchers, published the same day in Nature, a woman who lost the ability to speak due to amyotrophic lateral sclerosis (ALS) used a different brain-computer interface, one that translates intended speech into text. The interface decoded her speech at a rate of 62 words per minute, with a 23.8 percent error rate on a vocabulary of 125,000 words.
“It is now possible to imagine a future where we can restore fluid conversation to someone with paralysis, enabling them to freely say whatever they want to say with an accuracy high enough to be understood reliably,” Frank Willett, a co-author of the second paper and research scientist at Stanford University, said at a press briefing, per Wired.
Still, these interfaces have only been tested on a couple of people, notes Nature News. “We have to be careful with over-promising wide generalizability to large populations,” Judy Illes, a neuroethicist at the University of British Columbia in Canada who wasn’t involved in either study, tells the publication. “I’m not sure we’re there yet.”
Additionally, if the devices are to be useful in daily life, they’ll need to be wireless, unlike the one Ann uses, and small enough to be portable, Littlejohn tells Fortune. But the researchers hope that, in the near future, their technology will lead to an FDA-approved communication system.
“These advancements bring us much closer to making this a real solution for patients,” Chang says in the statement.