AI-Based Framework for Real-Time Recognition of Arabic and English Sign Languages
SVU-International Journal of Engineering Sciences and Applications
Volume 6, Issue 2, December 2025, Pages 195-203
Document Type: Original research articles
DOI: 10.21608/svusrc.2025.388195.1287
Authors
Asma Gamal Seliem*1; Shaimaa Elembaby1; Mohamed Elshayeb2
1Modern University for Technology & Information (MTI)
2Faculty of Engineering, Biomedical Department, Modern University for Technology & Information (MTI)
Abstract
Over 300 sign languages are used worldwide, posing challenges for effective communication between deaf and hearing individuals. This study presents a bilingual sign language recognition (SLR) system that uses deep learning to enhance accessibility for the Deaf and Hard of Hearing (DHH) community. The system processes real-time video input, leveraging MediaPipe for hand and body landmark extraction. For static gesture classification (e.g., alphabet recognition), a Support Vector Machine (SVM) with a linear kernel is employed; for dynamic gesture sequences (e.g., word-level recognition), a Long Short-Term Memory (LSTM) network models the temporal patterns. The models were trained on large-scale datasets of Arabic and English sign languages, achieving recognition accuracies exceeding 99% for English letters and over 93% for selected Arabic words. The training dataset combines Kaggle images with real-time videos, while the test dataset uses independent real-time videos not seen during training. The system supports sign-to-text translation as well as voice- and text-to-sign conversion through avatars or image sequences, promoting inclusive, real-time communication across linguistic boundaries.
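The paper itself does not include code, but the pipeline the abstract describes (MediaPipe landmarks feeding a linear SVM for static letters and an LSTM for word-level sequences) can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: the names (landmarks_from_frame, build_lstm, classify_stream), the 30-frame window, and the 64-unit LSTM are assumptions, and only hand landmarks are extracted here, whereas the paper also uses body landmarks.

```python
import cv2
import mediapipe as mp
import numpy as np
from sklearn.svm import SVC
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

SEQ_LEN = 30     # assumed frames per dynamic-gesture window
N_FEATURES = 63  # 21 MediaPipe hand landmarks x (x, y, z)

def landmarks_from_frame(frame, hands):
    """Extract a flat landmark vector from one BGR frame, or None if no hand."""
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    pts = results.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in pts], dtype=np.float32).ravel()

# Static path: linear-kernel SVM over single-frame landmark vectors (letters).
static_clf = SVC(kernel="linear")
# static_clf.fit(X_letters, y_letters)  # X: (n_samples, 63), y: letter labels

def build_lstm(num_words):
    """Dynamic path: LSTM over fixed-length landmark sequences (words)."""
    model = Sequential([
        LSTM(64, input_shape=(SEQ_LEN, N_FEATURES)),
        Dense(num_words, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def classify_stream(num_words):
    """Capture webcam frames and route landmarks to both classifiers.

    Both models must be trained (or have weights loaded) before their
    predictions are meaningful; the calls here only show the data flow.
    """
    dynamic_clf = build_lstm(num_words)
    window = []
    cap = cv2.VideoCapture(0)
    with mp.solutions.hands.Hands(max_num_hands=1) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            vec = landmarks_from_frame(frame, hands)
            if vec is None:
                continue
            # letter = static_clf.predict(vec[None, :])[0]   # static gesture
            window.append(vec)
            if len(window) == SEQ_LEN:                       # dynamic gesture
                probs = dynamic_clf.predict(np.stack(window)[None, ...])
                window.clear()
    cap.release()
```

The split mirrors the abstract's design choice: a single frame of landmarks suffices for a static alphabet sign, so a lightweight linear SVM is enough, while word-level signs unfold over time and need a sequence model such as an LSTM.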
Keywords
Deaf and Hard of Hearing (DHH); Long Short-Term Memory; MediaPipe; Sign Language Recognition (SLR)