For years, people have tried to break down communication barriers for those with speech disorders by developing new speech technologies. Luckily, we are getting closer to a real solution thanks to the latest discovery by scientists from the Indian Institute of Technology in Madras.
At present, people who have lost the ability to speak can use a fairly limited piece of technology that allows them to select words and letters with minimal movements that control a cursor on a screen. The resulting text is then processed by a speech synthesizer. This is how the famous physicist Stephen Hawking, who suffered from amyotrophic lateral sclerosis, was able to give lectures.
However, this tool is far from perfect. Its main disadvantage is speed—or rather, the lack of it. The user can produce only about 10 words per minute, while unimpaired speakers average around 150.
To provide a more seamless vehicle for communication, scientists are developing solutions that recognize brain signals and synthesize them into speech faster and more accurately.
Scientists at the University of California in San Francisco recently developed an artificial speech system. The system used artificial intelligence to imitate the part of the brain responsible for converting electrical brain signals into speech commands, then sent those commands to a speech apparatus to produce audible speech.
Unfortunately, this technology may only be helpful for people who were previously able to speak.
Can AI help people with speech disorders to communicate?
However, Indian researcher Dr. Vishal Nandigana recently made a ground-breaking discovery in speech science.
Researchers from Madras have developed a solution that can transform the brain signals of people with speech disorders into complete spoken English sentences.
The technology decodes the brain’s electrical signals using physical laws and mathematical transformations, such as the Fourier transform. These brain signals are then converted into data.
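The article doesn't describe the exact pipeline, but as a rough illustration of the kind of mathematical transformation involved: the Fourier transform decomposes a time-varying electrical signal into its frequency components, which can then be treated as data. A minimal sketch in Python, using a synthetic signal as a stand-in for real neural recordings:

```python
import numpy as np

# Synthetic stand-in for a recorded electrical signal: a 12 Hz
# oscillation plus a weaker 30 Hz component, sampled at 250 Hz
# for 2 seconds.
fs = 250
t = np.arange(0, 2, 1 / fs)
signal = np.sin(2 * np.pi * 12 * t) + 0.4 * np.sin(2 * np.pi * 30 * t)

# Fourier transform: decompose the signal into frequency components.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# The strongest frequency component is the 12 Hz oscillation.
dominant = freqs[np.argmax(np.abs(spectrum))]
print(dominant)  # → 12.0
```

In a real system, features like these—extracted from many recording channels—would be the raw material a decoding model works with.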
But there's still work to be done. For this speech data to be interpreted, more research is needed to transform the electrically controlled ion current signals into specific messages.
Once scientists obtain enough electrophysiological data from neurologists, they should be able to recognize what people with speech disorders want to say with much greater ease.
Algorithms will decipher the signals of nature
Another interesting application of this research is interpreting signals sent by nature.
Scientists point to photosynthesis or a plant’s response to weather and external phenomena. Data signals sent by plants can potentially be read as messages.
In the future, people might be able to interpret nature’s reactions, which would help to predict dangerous weather phenomena and natural disasters, such as monsoons, earthquakes, floods, and tsunamis.
All of this can be done with the help of artificial intelligence and deep learning algorithms. Although these technologies are currently only in the laboratory stage, they give hope that we’ll reach solutions for some of life’s most complicated challenges.