New AI System Translates Human Brain Signals Into Text With Up to 97% Accuracy

The world is only just getting used to the power and sophistication of virtual assistants made by companies like Amazon and Google, which can decode our speech with eerie precision compared with what the technology was capable of only a few years ago.

In truth, however, a far more impressive and mind-boggling milestone may be just around the corner, one that makes speech recognition seem almost like child's play: artificial intelligence (AI) systems that can translate our brain activity into fully formed text, without hearing a single word uttered.

It's not entirely science fiction. Brain-machine interfaces have advanced by leaps and bounds over recent decades, proceeding from animal models to human participants, and are, in fact, already attempting this very kind of thing.

Just not with much accuracy yet, researchers from the University of California San Francisco explain in a new study.

To see if they could improve upon that, a team led by neurosurgeon Edward Chang of UCSF’s Chang Lab used a new method to decode the electrocorticogram: the record of electrical impulses that occur during cortical activity, picked up by electrodes implanted in the brain.
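To picture what that signal looks like to a computer, here is a minimal sketch in Python of one common preprocessing step for ECoG speech decoding: band-passing the raw electrode voltages into the high-gamma range and taking their amplitude envelope. The sampling rate, channel count, frequency band, and the high_gamma_envelope helper are illustrative assumptions, not details taken from the study.

```python
# Hypothetical sketch: the article doesn't detail the preprocessing, but ECoG
# speech-decoding pipelines commonly band-pass the raw voltages into the
# high-gamma range (~70-150 Hz). All names and parameter values here are
# illustrative assumptions, not the study's actual code.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 1000          # assumed sampling rate, in Hz
N_CHANNELS = 128   # assumed electrode count on the implanted grid

def high_gamma_envelope(ecog, fs=FS, band=(70.0, 150.0)):
    """Return the per-channel high-gamma amplitude envelope.

    ecog: array of shape (n_channels, n_samples) -- raw cortical voltages.
    """
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, ecog, axis=-1)       # zero-phase band-pass
    return np.abs(hilbert(filtered, axis=-1))      # analytic amplitude

# Example with simulated noise standing in for a real recording:
rng = np.random.default_rng(0)
fake_ecog = rng.standard_normal((N_CHANNELS, 10 * FS))  # 10 s of "activity"
envelope = high_gamma_envelope(fake_ecog)
print(envelope.shape)  # (128, 10000): one envelope per electrode
```

The result is a channels-by-time matrix of activity levels, the kind of input a decoder can learn from.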

In the study, four patients with epilepsy had been fitted with the implants to monitor seizures caused by their medical condition, and the UCSF team ran a side experiment: having the participants read and repeat a number of set sentences aloud while the electrodes recorded their brain activity.

This data was then fed into a neural network that analysed patterns in the brain activity corresponding to certain speech signatures, such as vowels, consonants, or mouth movements, based on audio recordings of the…
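The description suggests a sequence-to-sequence setup, in which one network summarises the neural recording and another unrolls text from that summary. The sketch below, written with the PyTorch library, shows that general shape; the EcogToText class, layer sizes, and vocabulary size are assumptions for illustration, not the researchers' actual architecture.

```python
# A minimal sketch of the kind of sequence-to-sequence model such a study
# might use: an encoder RNN reads the ECoG feature sequence and a decoder RNN
# emits text tokens. This is an illustrative assumption based on the article's
# description, not the authors' published architecture or code.
import torch
import torch.nn as nn

class EcogToText(nn.Module):
    def __init__(self, n_channels=128, hidden=256, vocab_size=2000):
        super().__init__()
        # Encoder: compress the multichannel neural time series.
        self.encoder = nn.GRU(n_channels, hidden, batch_first=True)
        # Decoder: unroll a word sequence from the encoder's summary state.
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, ecog, tokens):
        # ecog:   (batch, time, n_channels) high-gamma features
        # tokens: (batch, seq_len) previously emitted word indices
        _, state = self.encoder(ecog)          # final state summarises the signal
        emb = self.embed(tokens)
        dec_out, _ = self.decoder(emb, state)  # condition decoding on that summary
        return self.out(dec_out)               # (batch, seq_len, vocab_size) logits

# Shape check with dummy data:
model = EcogToText()
logits = model(torch.randn(2, 500, 128), torch.zeros(2, 12, dtype=torch.long))
print(logits.shape)  # torch.Size([2, 12, 2000])
```

In practice such a model would be trained on pairs of recordings and transcripts, so the decoder learns which neural patterns line up with which words.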
