An Intelligent Embedded System Using Natural Language Processing for Deaf People
Journal of the ACS Advances in Computer Science
Volume 15, Issue 1, 2024
Document Type: Original Article
DOI: 10.21608/asc.2024.376772
Author
Mohamed Hussein
High Institute for Computers and Information Technology, AL-Shorouk Academy
Abstract
Natural language processing (NLP) is the branch of computer science, and more specifically of artificial intelligence (AI), concerned with giving computers the ability to understand text and spoken words in much the same way human beings can. NLP combines computational linguistics (rule-based modeling of human language) with statistical, machine learning, and deep learning models. Together, these technologies enable computers to process human language in the form of text or voice data and to 'understand' its full meaning, complete with the speaker's or writer's intent and sentiment. Deaf individuals and those who are hard of hearing often face challenges in completing basic daily tasks, such as interacting with others. While some solutions exist, they have significant limitations: lip reading conveys only about 30% of spoken words, and sign language interpreters are in short supply. These challenges contribute to high unemployment rates and mental health issues in the deaf community. To address them, this paper presents the design of smart glasses that serve as assistive technology for people with hearing disabilities. The glasses provide real-time speech transcription and format it for display, allowing wearers to understand what is going on around them and interact with others. The glasses attach to regular prescription glasses and are highly effective in achieving their purpose.
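The abstract does not describe an implementation, but the display-formatting step it mentions (fitting a live transcript onto a small lens display) can be sketched roughly as follows. The helper name `format_for_display` and the width/line-count parameters are illustrative assumptions, not details from the paper:

```python
import textwrap


def format_for_display(transcript: str, width: int = 24, max_lines: int = 3) -> list[str]:
    """Wrap a transcribed utterance for a small heads-up display.

    Wraps the text to `width` characters per line and keeps only the
    most recent `max_lines` lines, so the newest speech stays visible
    on a screen with limited space. Both parameters are hypothetical
    display constraints, not values from the paper.
    """
    lines = textwrap.wrap(transcript, width=width)
    return lines[-max_lines:]


# Example: a long utterance is wrapped, and only the newest lines are kept.
lines = format_for_display(
    "the glasses attach to regular prescription glasses and display speech"
)
```

In a real device this function would sit between the speech-recognition output and the display driver, called once per recognized phrase.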