Enhancement of Quality and Accuracy of a Speech Recognition System Using a Multimodal Audio-Visual Speech Signal
The Egyptian Journal of Language Engineering
Article 3, Volume 4, Issue 2, September 2017, Pages 27-40
Document Type: Original Article
DOI: 10.21608/ejle.2017.59430
Authors
Eslam Eid Elmaghraby 1; Amr Gody 2; Mohamed Hashem Farouk 3
1 Communication and Electronics Engineering Department, Faculty of Engineering, Fayoum University
2 Faculty of Engineering, Fayoum University
3 Engineering Mathematics & Physics Department, Faculty of Engineering, Cairo University
Abstract
Most developments in speech-based automatic recognition have relied on acoustic speech as the sole input signal, disregarding its visual counterpart. However, recognition based on acoustic speech alone can suffer from deficiencies that prevent its use in many real-world applications, particularly under adverse conditions. Combining the auditory and visual modalities promises higher recognition accuracy and robustness than can be obtained with a single modality. Multimodal recognition is therefore acknowledged as a vital component of the next generation of spoken language systems. This paper aims to build a connected-words audio-visual speech recognition (AV-ASR) system for the English language that uses both acoustic and visual speech information to improve recognition performance. Initially, Mel-frequency cepstral coefficients (MFCCs) are used to extract the audio features from the speech files. For the visual counterpart, Discrete Cosine Transform (DCT) coefficients are used to extract the visual features from the speaker's mouth region, and Principal Component Analysis (PCA) is used for dimensionality reduction. These visual features are then concatenated with the traditional audio ones, and the resulting feature vectors are used to train hidden Markov model (HMM) parameters with word-level acoustic models. The system has been developed using the Hidden Markov Model Toolkit (HTK), which uses HMMs for recognition. The potential of the suggested approach is demonstrated by a preliminary experiment on the GRID sentence database, one of the largest databases available for audio-visual recognition, which contains continuous English voice commands for a small-vocabulary task. The experimental results show that the proposed AV-ASR system exhibits a higher recognition rate than an audio-only recognizer and demonstrates robust performance. An increase in success rate of 4% for the grammar-based word recognition system over all speakers is achieved in the speaker-independent test.
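The feature pipeline described in the abstract (2-D DCT of the mouth region, PCA reduction, then concatenation with per-frame MFCCs) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the mouth crops and MFCC vectors here are random stand-ins, the 32x32 crop size, the 10x10 low-frequency DCT block, and the 10 PCA components are assumed parameters, and PCA is implemented via an SVD of the mean-centred feature matrix.

```python
import numpy as np
from scipy.fftpack import dct

def visual_features(mouth_roi, keep=10):
    """2-D DCT of a grayscale mouth crop; keep the low-frequency block."""
    coeffs = dct(dct(mouth_roi, axis=0, norm='ortho'), axis=1, norm='ortho')
    # Top-left coefficients carry most of the mouth-shape energy
    return coeffs[:keep, :keep].ravel()

def pca_reduce(X, n_components):
    """Project frames x features matrix onto its leading principal axes."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(0)
n_frames = 50
# Stand-ins for real data: 13 MFCCs per audio frame, 32x32 mouth crops
audio_feats = rng.standard_normal((n_frames, 13))
mouth_rois = rng.standard_normal((n_frames, 32, 32))

visual_raw = np.array([visual_features(r) for r in mouth_rois])  # (50, 100)
visual_pca = pca_reduce(visual_raw, n_components=10)             # (50, 10)

# Concatenated audio-visual observation vectors, one per frame,
# as would be fed to HMM training (e.g. via HTK's parameter files)
av_feats = np.concatenate([audio_feats, visual_pca], axis=1)     # (50, 23)
print(av_feats.shape)
```

In a real system the frame rates of the two streams must first be aligned (video is typically 25-30 fps versus a 100 Hz acoustic frame rate), usually by interpolating the visual features before concatenation.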
Keywords
AV-ASR; HMM; HTK; MFCC; DCT; PCA; MATLAB; GRID