Our Thoughts Can Be Translated Into Intelligible Speech

Using the power of a speech synthesizer and artificial intelligence, engineers converted brain signals directly into speech

Neuroengineers from Columbia University have created a system that translates human thoughts into intelligible, recognizable speech. By monitoring a person's brain activity, they were able to reconstruct the words the person had heard with unprecedented clarity.

The technology combines the power of speech synthesizers and artificial intelligence; its details were published in Scientific Reports on January 29, 2019. The research team has shown that, with the right technology, people's thoughts can be decoded and understood by any listener.

It has previously been shown that distinct patterns of activity appear in the human brain when a person speaks, listens to someone else's speech, or even imagines listening. Earlier attempts to decode brain signals relied on simple computer models that analyzed sound spectrograms, but those efforts failed to produce intelligible speech. The research team led by Nima Mesgarani therefore turned to the vocoder, a computer algorithm that can synthesize speech after being trained on recordings of people talking.

Mesgarani, together with the neurosurgeon Ashesh Dinesh Mehta, taught the vocoder to interpret brain activity. The researchers asked epilepsy patients to listen to speech from different speakers, measured the patterns of the patients' brain activity, and used those neural patterns to train the vocoder. The sound the vocoder produced in response to the patients' brain signals was then analyzed and cleaned up by neural networks. The end result was a robotic-sounding voice reciting the original words in a fairly understandable way; intelligibility was assessed at about 75%.
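To make the pipeline above more concrete, here is a minimal, purely illustrative sketch of the general idea: a model learns a mapping from neural-activity features to the acoustic parameters a vocoder needs, so that new brain signals can be turned back into audio. This is not the team's actual code; the file names and the synthesize_with_vocoder call are hypothetical placeholders, and a simple scikit-learn regressor stands in for the deep networks used in the study.

```python
# Illustrative sketch only: maps neural features to vocoder parameters.
import numpy as np
from sklearn.neural_network import MLPRegressor

# X: brain-activity features, one row per time frame (frames x channels)
# Y: vocoder parameters of the speech the patient heard (frames x params)
X_train = np.load("neural_features_train.npy")   # hypothetical file
Y_train = np.load("vocoder_params_train.npy")    # hypothetical file

# A neural network learns the mapping from neural patterns
# to the acoustic parameters the vocoder needs.
decoder = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=500)
decoder.fit(X_train, Y_train)

# At test time, new brain signals are decoded into vocoder parameters,
# which a vocoder would then turn into audible (robotic-sounding) speech.
X_test = np.load("neural_features_test.npy")     # hypothetical file
predicted_params = decoder.predict(X_test)
# audio = synthesize_with_vocoder(predicted_params)  # hypothetical vocoder call
```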

This breakthrough could potentially be used to develop new ways for computers to communicate directly with the human brain. It could also help patients who have lost the ability to speak due to injury or disease, e.g. those recovering from a stroke or living with amyotrophic lateral sclerosis.

Dr. Mesgarani, the senior author of the paper, described the scenario the team is working toward: "If the wearer thinks 'I need a glass of water,' our system could take the brain signals generated by that thought, and turn them into synthesized, verbal speech." "This would be a game changer. It would give anyone who has lost their ability to speak, whether through injury or disease, the renewed chance to connect to the world around them," he added.

Author: Alena Snezhnaya
