| Literature DB >> 36073078 |
Milka C Madahana, Katijah Khoza-Shangase, Nomfundo Moroe, Daniel Mayombo, Otis Nyandoro, John Ekoru.
Abstract
BACKGROUND: The emergence of the coronavirus disease 2019 (COVID-19) pandemic has heightened communication as one of the critical aspects in the implementation of interventions. Delays in the relaying of vital information by policymakers have the potential to be detrimental, especially for the hearing impaired.
Keywords: COVID-19; South Africa; artificial intelligence; hearing impaired; machine learning; sign language; speech; text; translation
Year: 2022 PMID: 36073078 PMCID: PMC9452925 DOI: 10.4102/sajcd.v69i2.915
Source DB: PubMed Journal: S Afr J Commun Disord ISSN: 0379-8046
FIGURE 1: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses flow diagram describing the process of study selection.
TABLE 1: Summary of studies included in the scoping review documenting evidence on AI-based real-time speech-to-text to sign language as a solution for the hearing impaired during coronavirus disease 2019.
| Author(s) & date | Publication title | Context or country | Aim | Technology | Conclusion | Recommendations |
|---|---|---|---|---|---|---|
| Ezhumalai et al. | Speech to sign language translator for hearing impaired | India | To design an application that converts speech and text input into a sequence of sign language visuals. Speech recognition converts the input audio to text, which is then translated into sign language. | The system translates each word received as input into Indian sign language. Natural language processing: filler words such as 'is', 'are', 'was' and 'were', which hardly contribute to context in sign language, are removed from the speech or sentence. Root words: words in gerund, plural or adjective form are reduced to their root words, which aids effective conversion into sign language. Data set: the system has a large data set of Indian sign language words to map to the text recognised from speech, so it can help deaf people across India understand most speech or text. | Speech to sign language translation is a necessity in the modern era of online communication for hearing-impaired people. It will bridge the communication gap between hearing and hearing-impaired people. | Future work is to develop a chat application incorporating this sign language translation system, which can be used in team meeting applications as a live translator feature. A sign language to text translation option can also be added. |
| Papastratis et al. | Artificial intelligence technologies for sign language | - | To provide a comprehensive review of state-of-the-art methods in sign language capturing, recognition, translation and representation, pinpointing their advantages and limitations. | Systematic literature search covering: sign language recognition (continuous and isolated), sign language translation, sign language representation, realistic avatars, sign language production and applications. | Most existing works deal with sign language recognition, while sign language capturing and translation methods are still not thoroughly explored. | Improvements can still be achieved in the accuracy of sign language recognition and production systems. |
| Harkude et al. | Audio to sign language translation for deaf people | India | To develop a communication system for deaf people. | Audio input is captured using the Python PyAudio module and converted from audio to text via a microphone. A dependency parser analyses sentence grammar and obtains the relationships between words. Text to sign language: speech recognition using the Google Speech application programming interface (API); text pre-processing using NLP; dictionary-based machine translation; an ISL generator that produces the ISL form of the input sentence using ISL grammar rules; and generation of sign language with a signing avatar. | A sign language translator is useful in many settings: schools, colleges, hospitals, universities, airports and courts. It makes communication between a hearing person and a hard-of-hearing person easier. | Future work is to develop an application that news channels can use to display sign language in one corner of the screen for deaf viewers. Currently only DD News offers this, using a human interpreter who signs along with the live speech, so an automated system would be a useful alternative for news channels. The authors also look forward to expanding the project by including facial expressions in the system. |
| Baumgärtner et al. | Automated sign language translation: the role of artificial intelligence now and in the future | - | To develop systems for automated sign language recognition and generation. | Two kinds of approaches generate avatar animations: motion capturing, in which human movements are tracked and mapped to an avatar, and keyframe animation, in which the entire animation is computer-generated. | The mentioned approaches have potential. In the future, these technologies could enable sign language users to access personal assistants, use text-based systems, search sign language video content and use automated real-time translation when human interpreters are not available. | Taking this concept further, a daily-life application based on smartphone technologies could be developed to automatically translate speech to sign language and vice versa. A range of spoken and signed languages could be supported, and the signer might additionally be able to choose or individualise the signing avatar. |
| Shezi and Ade-Ibijola | Deaf Chat: a speech-to-text communication aid for hearing deficiency | South Africa | To introduce a model and a tool (Deaf Chat) for communicating with hearing-impaired individuals based on artificial intelligence. | Deaf Chat uses speaker diarisation techniques to recognise and classify speakers before sending converted speech-to-text from one user to another. The Android Studio integrated development environment (IDE) was used to develop the tool, and the International Business Machines (IBM) Corporation Watson API was used to build the application. | This paper presented a new model and a software prototype for facilitating communication with hearing-impaired individuals. | Implement the model and design in an iOS and web-based version of Deaf Chat. Additional features could include quick access to emergency services for hearing-impaired users. |
| Shinde and Dandona | Two-way sign language converter for the speech impaired | India | To propose a system that enables two-way conversation between speech-impaired and vocal individuals. | Phase I: converting a stream of input hand gestures to the relevant semantic text, as well as audio output, in real time. | The prototype successfully demonstrates a solution to bridge the communication gap. It recognises 320 words and converts them to hand gestures with 100% accuracy, and it can break up sentences and display appropriate hand gestures for keywords in the sentence. | - |
AI, Artificial intelligence; DHH, deaf or hard of hearing; NLP, natural language processing; ISL, Indian Sign Language.
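The text pre-processing described for the Ezhumalai et al. system (removing filler words that carry little meaning in sign language, then reducing remaining words to a root form) can be sketched as follows. The filler-word list and the suffix-stripping rules here are illustrative assumptions, not the authors' actual implementation, which would use a proper stemmer and a curated stop-word list.

```python
# Sketch of NLP pre-processing before word-to-sign lookup:
# 1) drop filler words, 2) reduce each remaining word to a root form.
# FILLER_WORDS and the suffix rules are illustrative assumptions.

FILLER_WORDS = {"is", "are", "was", "were", "am", "be", "been", "the", "a", "an"}

def to_root(word: str) -> str:
    """Very small suffix stripper standing in for a real stemmer."""
    for suffix in ("ing", "es", "s", "ed"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess_for_signing(sentence: str) -> list[str]:
    """Lowercase the sentence, drop filler words, reduce words to roots."""
    words = sentence.lower().split()
    return [to_root(w) for w in words if w not in FILLER_WORDS]
```

For example, `preprocess_for_signing("The students are walking")` keeps only the content words and reduces them to `["student", "walk"]`, the units a sign dictionary would be indexed by.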
FIGURE 2: Illustration of the automatic speech recognition process.
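Once the speech recognition stage has produced text, several of the reviewed systems apply dictionary-based machine translation: each recognised word is looked up in a sign dictionary and mapped to a stored sign animation. A minimal sketch of that lookup step, with a letter-by-letter fingerspelling fallback for out-of-vocabulary words, might look like this. The dictionary contents, file names, and the `fingerspell` fallback are hypothetical illustrations, not any reviewed system's actual data.

```python
# Sketch of dictionary-based word-to-sign translation with a
# fingerspelling fallback for unknown words. All file names and
# dictionary entries below are illustrative assumptions.

SIGN_DICTIONARY = {
    "hello": "sign_clip_hello.mp4",
    "help": "sign_clip_help.mp4",
    "doctor": "sign_clip_doctor.mp4",
}

def fingerspell(word: str) -> list[str]:
    """Fall back to one sign clip per letter for out-of-vocabulary words."""
    return [f"letter_{ch}.mp4" for ch in word]

def translate(words: list[str]) -> list[str]:
    """Map each word to its sign clip, fingerspelling unknown words."""
    clips: list[str] = []
    for word in words:
        if word in SIGN_DICTIONARY:
            clips.append(SIGN_DICTIONARY[word])
        else:
            clips.extend(fingerspell(word))
    return clips
```

The fallback matters in practice: medical and policy terms that emerged during COVID-19 would often be out of vocabulary, and fingerspelling keeps the message intact rather than silently dropping the word.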
FIGURE 3: Block diagram for extracting mel-frequency cepstral coefficients.
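At the heart of the MFCC extraction pipeline in Figure 3 is the warping of linear frequency onto the perceptual mel scale before the filterbank stage. The widely used HTK-style conversion formulas (with constants 2595 and 700) can be written as a short sketch; this shows the mel warping only, not the full framing, FFT, filterbank and DCT chain of the figure.

```python
import math

# Mel-scale conversion used when building the mel filterbank in MFCC
# extraction. These are the common HTK-style formulas; a full MFCC
# pipeline would apply them to FFT bin frequencies.

def hz_to_mel(hz: float) -> float:
    """Convert a frequency in hertz to mels."""
    return 2595.0 * math.log10(1.0 + hz / 700.0)

def mel_to_hz(mel: float) -> float:
    """Inverse mapping, mels back to hertz."""
    return 700.0 * (10.0 ** (mel / 2595.0) - 1.0)
```

The scale is calibrated so that 1000 Hz maps to roughly 1000 mel, compressing higher frequencies the way human pitch perception does; filterbank centre frequencies are spaced evenly in mel, hence densely at low frequencies where speech cues concentrate.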
FIGURE 4: An illustration of the implementation plan and expected results.