Abstract This study introduces an automatic speech recognition system for people with speech disorders based on both acoustic and visual components. Face and mouth regions are detected using the Viola-Jones algorithm. The acoustic and visual input features are concatenated into a single feature vector. The system is tested on isolated English words spoken by dysarthric speakers from the UA-Speech database. Results of our proposed system indicate that visual features are highly effective, improving accuracy by 7.91% in speaker-dependent experiments and by 3% in speaker-independent experiments.
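The fusion step described above (concatenating the acoustic and visual streams into one feature vector) can be sketched as follows. This is a minimal illustration only: the feature dimensions and the random stand-in data are assumptions, not values from the study, and in the actual system the visual stream would come from mouth regions detected with the Viola-Jones algorithm rather than random vectors.

```python
import numpy as np

# Stand-in per-frame feature streams (dimensions are illustrative assumptions,
# not taken from the study).
n_frames = 120
acoustic = np.random.rand(n_frames, 39)  # e.g. 39-dim acoustic features per frame
visual = np.random.rand(n_frames, 20)    # e.g. 20-dim mouth-region features per frame

# Early (feature-level) fusion: concatenate the two streams frame by frame
# into one audio-visual feature vector per frame.
audio_visual = np.concatenate([acoustic, visual], axis=1)

print(audio_visual.shape)  # (120, 59): each frame now carries both modalities
```

The concatenated vectors would then be passed to the recognizer in place of the acoustic features alone; this feature-level fusion is what allows the visual stream to compensate for acoustic ambiguity in disordered speech.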