Comment: Artificial intelligence turns brain activity into speech

People who have lost the ability to speak after a stroke or disease can use their eyes or make other small movements to control a cursor or select on-screen letters. (Cosmologist Stephen Hawking tensed his cheek to trigger a switch mounted on his glasses.) But if a brain-computer interface could re-create their speech directly, they might regain much more: control over tone and inflection, for example, or the ability to interject in a fast-moving conversation.

Servick, K. (2019). Artificial intelligence turns brain activity into speech. Science.

To be clear, this research doesn’t describe the artificial recreation of imagined speech, i.e. the internal speech each of us hears as the private monologue of our own subjective experience. Rather, it maps the electrical activity in the areas of the brain responsible for articulating speech as the participant reads aloud or listens to speech being played back to them. Nonetheless, it’s an important step for patients who have suffered damage to the parts of the brain responsible for speaking.
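To make the shape of that pipeline concrete, here is a toy sketch of the general decode-and-synthesise idea. Everything in it is illustrative: the random "neural activity", the linear decoder, and the sine-wave vocoder are stand-ins for the recorded cortical signals, trained neural-network decoders, and speech synthesisers the actual research uses, and none of the names or numbers come from the paper.

```python
# Illustrative sketch only: stand-ins for recording, decoding, and synthesis.
import numpy as np

FS_AUDIO = 16_000          # assumed audio sample rate (Hz)
N_ELECTRODES = 64          # hypothetical number of recording electrodes
N_ACOUSTIC_FEATURES = 32   # hypothetical size of the decoded acoustic representation

rng = np.random.default_rng(0)

def record_neural_activity(n_frames: int) -> np.ndarray:
    """Stand-in for electrodes over speech areas: one feature vector per 10 ms frame."""
    return rng.standard_normal((n_frames, N_ELECTRODES))

def decode_to_acoustics(neural: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Stand-in for a trained decoder mapping neural features to acoustic features."""
    return neural @ weights

def synthesise_audio(acoustic: np.ndarray) -> np.ndarray:
    """Stand-in vocoder: turn each frame's decoded features into a short burst of audio."""
    samples_per_frame = FS_AUDIO // 100                 # 10 ms of audio per frame
    t = np.arange(samples_per_frame) / FS_AUDIO
    chunks = []
    for frame in acoustic:
        freq = 100 + 50 * abs(frame[0])                  # pitch driven by one decoded feature
        chunks.append(np.sin(2 * np.pi * freq * t) * np.tanh(abs(frame[1])))
    return np.concatenate(chunks)

# End to end: brain activity -> decoded acoustic features -> audible waveform.
decoder_weights = rng.standard_normal((N_ELECTRODES, N_ACOUSTIC_FEATURES))
neural = record_neural_activity(n_frames=200)            # roughly two seconds of activity
audio = synthesise_audio(decode_to_acoustics(neural, decoder_weights))
print(f"Synthesised {audio.size / FS_AUDIO:.1f} s of audio from {neural.shape} neural frames")
```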

I also couldn’t help but get excited about the following: once electrical signals from the brain are converted into digital information (as they must be here, in order to do the analysis and speech synthesis), why not also transmit that digital information over Wi-Fi? If it’s possible for me to understand the words you intend to say, without you using your muscles of articulation to actually say them, how long will it be before you can send those words to me over a wireless connection?
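The transmission step really is the easy part: once the decoded output exists as ordinary digital data, sending it across a network is standard engineering. The sketch below is purely hypothetical and has nothing to do with any existing brain-computer interface; the host, port, and payload format are placeholders I've invented to illustrate the point.

```python
# Hypothetical sketch: shipping already-decoded words to a listener over TCP.
import json
import socket

def send_decoded_words(words: list[str], host: str = "127.0.0.1", port: int = 9000) -> None:
    """Serialise decoded words as JSON and push them to a listening peer."""
    payload = json.dumps({"decoded_words": words}).encode("utf-8")
    with socket.create_connection((host, port)) as conn:
        conn.sendall(payload)

# e.g. send_decoded_words(["hello", "world"])  # requires something listening on port 9000
```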