How Artificial Intelligence will Help People with Speech Disorders

Last Updated February 28, 2020

For years, people have tried to break down communication barriers for those with speech disorders by developing new speech technologies. Luckily, we are getting closer to a real solution.

That’s thanks to a recent discovery by scientists at the Indian Institute of Technology Madras (IIT Madras).

At present, people who have lost their ability to speak can use a fairly limited piece of technology that allows them to select words and letters by minimal movements controlling a cursor on a screen. A speech synthesizer then processes the text.

This is how the famous physicist Stephen Hawking, who had amyotrophic lateral sclerosis, was able to give lectures.

However, this tool is far from perfect. Its main disadvantage is speed, or rather the lack of it: a user can produce only about 10 words per minute, while unimpaired speakers manage roughly 150.
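To see why such interfaces are slow, here is a toy model of single-switch scanning, a common way these minimal-movement selection systems work. Everything in it, from the half-second scan interval to the press times, is an illustrative assumption rather than a measurement of any real device.

```python
# A toy model of single-switch scanning text entry: a highlight cycles
# through the letters, and the user presses a switch to select the one
# currently highlighted. All timing values are illustrative assumptions.
LETTERS = "abcdefghijklmnopqrstuvwxyz "
SCAN_INTERVAL = 0.5  # seconds the highlight rests on each letter (assumed)

def selected_letter(press_time: float) -> str:
    """Return the letter highlighted press_time seconds after the scan
    (re)starts, when the user activates the switch."""
    index = int(press_time // SCAN_INTERVAL) % len(LETTERS)
    return LETTERS[index]

# Spelling "hi": each press time is measured from the scan restarting.
press_times = [3.6, 4.2]  # 'h' is the 8th letter, 'i' the 9th
print("".join(selected_letter(t) for t in press_times))  # -> hi
```

Even in this idealized version, reaching a letter late in the alphabet costs many seconds per character, which is why real systems add word prediction and frequency-based letter orderings to claw back some speed.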

To provide a more seamless vehicle for communication, scientists are developing solutions that recognize brain signals and synthesize them into speech faster and more accurately.

Scientists at the University of California, San Francisco recently developed an artificial speech system. It uses artificial intelligence to imitate the part of the brain responsible for converting electrical brain signals into speech commands, then sends those commands to a simulated speech apparatus that produces audible speech.
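In outline, that is a two-stage pipeline: decode brain activity into movement commands for the vocal tract, then turn those commands into sound. The sketch below reduces both stages to random placeholder functions purely to show the data flow; the electrode count, time steps, and linear maps are assumptions, not details of the UCSF system.

```python
# A schematic sketch of the two-stage decoding idea described above. Both
# stages are random linear placeholders standing in for trained neural
# networks; the shapes and names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def decode_articulation(brain_signals: np.ndarray) -> np.ndarray:
    """Stage 1 (placeholder): neural activity -> vocal-tract movement
    commands such as jaw, tongue, and lip trajectories."""
    return brain_signals @ rng.standard_normal((brain_signals.shape[1], 8))

def synthesize_speech(articulation: np.ndarray) -> np.ndarray:
    """Stage 2 (placeholder): movement commands -> audio samples."""
    return articulation @ rng.standard_normal((articulation.shape[1], 1))

brain_signals = rng.standard_normal((100, 64))  # 100 time steps, 64 electrodes (assumed)
audio = synthesize_speech(decode_articulation(brain_signals))
print(audio.shape)  # (100, 1): one audio value per time step in this toy
```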

Unfortunately, this technology may only help people who were previously able to speak.

Can AI help people with speech disorders to communicate?

Indian researcher Dr. Vishal Nandigana recently made a groundbreaking discovery in speech science.

Researchers at IIT Madras have developed a solution that can transform the brain signals of people with speech disorders into complete spoken English sentences.

The technology decodes the brain’s electrical signals using physical laws and mathematical transformations, such as the Fourier transform, which breaks a signal down into the frequencies it contains. The decoded signals are then converted into data.
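As a rough illustration of that step, the snippet below shows how a Fourier transform pulls frequency content out of a sampled electrical signal. Everything here, from the sampling rate to the synthetic signal itself, is assumed for the example; the article does not describe the researchers’ pipeline at this level of detail.

```python
# A minimal sketch of frequency-domain analysis with the Fourier transform,
# applied to a synthetic stand-in for a recorded electrical signal.
import numpy as np

sampling_rate = 1000  # samples per second (assumed)
duration = 2.0        # seconds of recording (assumed)
t = np.linspace(0.0, duration, int(sampling_rate * duration), endpoint=False)

# Stand-in signal: two tones plus measurement noise.
signal = (np.sin(2 * np.pi * 12 * t)           # 12 Hz component
          + 0.5 * np.sin(2 * np.pi * 40 * t)   # 40 Hz component
          + 0.2 * np.random.randn(t.size))     # noise

# Fourier transform: move from the time domain to the frequency domain.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(signal.size, d=1.0 / sampling_rate)

# The dominant frequencies are the "data" a downstream model could interpret.
strongest = freqs[np.argsort(np.abs(spectrum))[-2:]]
print("Strongest frequency components (Hz):", sorted(strongest))  # ~[12.0, 40.0]
```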

But there’s still work to do. Before this speech data can be interpreted, more research is needed to translate the electrically controlled ion current signals into specific messages.

Once scientists obtain enough electrophysiological data from neurologists, they should be able to recognize what people with speech disorders want to say with much greater ease.
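To make that concrete in the simplest possible terms: once recordings are labeled with what the speaker intended, decoding becomes a pattern-matching problem. The nearest-centroid toy below is an assumed stand-in for whatever model the researchers ultimately train; the feature vectors and three-word vocabulary are invented for the example.

```python
# A toy decoder that maps signal features to intended words by nearest-
# centroid matching. The labeled "training" vectors and the vocabulary are
# invented; real decoding models would be far more sophisticated.
import numpy as np

# Hypothetical labeled data: one frequency-feature vector per word.
known_patterns = {
    "yes":   np.array([0.9, 0.1, 0.2]),
    "no":    np.array([0.1, 0.8, 0.3]),
    "water": np.array([0.2, 0.3, 0.9]),
}

def decode(features: np.ndarray) -> str:
    """Return the word whose stored pattern is closest to the measurement."""
    return min(known_patterns,
               key=lambda word: np.linalg.norm(known_patterns[word] - features))

measurement = np.array([0.85, 0.15, 0.25])  # a new, unlabeled recording (made up)
print(decode(measurement))  # -> yes
```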

Algorithms Will Decipher the Signals of Nature

Another interesting application of this research is interpreting signals sent by nature.

Scientists point to photosynthesis and to plants’ responses to weather and other external phenomena. The data signals plants send could potentially be read as messages.

In the future, people might be able to interpret nature’s reactions. This could help predict dangerous weather events and natural disasters, such as monsoons, earthquakes, floods, and tsunamis.

All of this can be done with the help of artificial intelligence and deep learning algorithms. Although these technologies are currently only in the laboratory stage, they give hope that we’ll reach solutions for some of life’s most complicated challenges.
