Abstract

In this thesis, an isolated-word recognizer for Arabic is implemented, which can be useful in fields such as command-and-control systems. The thesis studies several combined-classifier techniques and compares them with individual classifiers; an enhanced Boosting algorithm that outperforms the original one is also implemented. Research on Arabic speech recognition is limited compared with research on English speech recognition.

The work starts with creating an Arabic word database consisting of 10 words. Manual endpoint detection is used to separate the words from background noise. Three feature extraction methods are examined: Linear Prediction Coding (LPC), Mel Frequency Cepstrum Coefficients (MFCC), and the real cepstrum. Experiments showed that the MFCC features are more discriminative than the others.

Neural networks and a discrete Hidden Markov Model are examined as individual classifiers. The LVQ and back-propagation classifiers achieve 80.6% and 93.3% correct classification on a test set of 10 words, while the discrete Hidden Markov Model achieves only 69%.

Ensemble methods, or committees of learning machines, can often improve the performance of a system compared with a single learning machine. The thesis studies the effect of combined-classifier architecture on the overall performance of the Arabic speech recognition system by proposing five different combination architectures and comparing their performance. The architectures based on ensemble approaches are found to outperform the modular approaches: on the test data, the best ensemble-based architecture gives 94.3% correct classification, while the best modular-based architecture gives 79.3%.

Boosting is another combined-classifier technique; it is a general method for improving the performance of almost any learning algorithm. AdaBoost.M1 is used to handle the multi-class classification problem and is examined with neural networks as base classifiers.
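For reference, the final decision rule of standard AdaBoost.M1 (which EAdaBoost later modifies) can be sketched as follows: each trained weak classifier t votes for its predicted class with weight log(1/beta_t), where beta_t = err_t / (1 - err_t) is derived from its weighted training error. The function below is an illustrative sketch of that voting step only, not the thesis's implementation; names and inputs are hypothetical.

```python
import numpy as np

def adaboost_m1_vote(predictions, betas, n_classes):
    """Standard AdaBoost.M1 final decision (illustrative sketch).

    predictions: class label predicted by each weak classifier for one sample.
    betas: beta_t = err_t / (1 - err_t) for each weak classifier (0 < beta < 1).
    Each classifier votes for its predicted class with weight log(1 / beta_t),
    so more accurate classifiers (smaller beta) carry more weight.
    """
    weights = np.log(1.0 / np.asarray(betas, dtype=float))
    scores = np.zeros(n_classes)
    for pred, w in zip(predictions, weights):
        scores[pred] += w
    # The predicted class is the one with the largest total weighted vote.
    return int(np.argmax(scores))
```

A single very accurate classifier can outvote several weak ones: with betas (0.1, 0.4, 0.4), the first classifier's weight log(10) exceeds the combined log(2.5) + log(2.5) of the other two.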
The boosted back-propagation and LVQ classifiers achieve correct classification of 94.6% and 94%, respectively. The AdaBoost.M1 algorithm is then enhanced (EAdaBoost) by modifying the computation of the final decision so that it depends on two confidence measures for each weak classifier. This modification improves performance by about 2% compared with the original AdaBoost.M1 algorithm.

Key words: Arabic speech recognition, MFCC, neural networks, discrete Hidden Markov Models, combined classifiers, Boosting, AdaBoost.M1, EAdaBoost.
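As a closing illustration of the MFCC front end that the abstract reports as most discriminative, the computation can be sketched as frame -> window -> power spectrum -> mel filterbank -> log -> DCT. This is a generic, minimal sketch: the frame size, hop, sample rate, and filter counts below are assumed values, not the settings used in the thesis.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sample_rate=8000, frame_len=256, frame_step=128,
         n_filters=20, n_coeffs=12):
    """Minimal MFCC sketch (illustrative parameters, not the thesis's)."""
    n_frames = 1 + (len(signal) - frame_len) // frame_step
    window = np.hamming(frame_len)
    nfft = frame_len
    # Triangular filters spaced uniformly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sample_rate / 2), n_filters + 2)
    bins = np.floor((nfft + 1) * mel_to_hz(mel_pts) / sample_rate).astype(int)
    fbank = np.zeros((n_filters, nfft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fbank[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fbank[i - 1, k] = (r - k) / max(r - c, 1)
    # DCT-II basis: decorrelates the log filterbank energies; keep c1..c12.
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(1, n_coeffs + 1), n + 0.5) / n_filters)
    feats = np.zeros((n_frames, n_coeffs))
    for t in range(n_frames):
        frame = signal[t * frame_step : t * frame_step + frame_len] * window
        power = np.abs(np.fft.rfft(frame, nfft)) ** 2
        feats[t] = dct @ np.log(fbank @ power + 1e-10)
    return feats
```

The log compression and DCT are what distinguish MFCC from a raw spectrogram: the log matches perceived loudness roughly, and the DCT yields compact, largely decorrelated coefficients suited to the vector quantization and classifier inputs used in the recognizer.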