Title
Brain Computer Interface for Human Emotion Recognition based on Machine Learning Techniques
Author
Mohamed, Asmaa Hammad El-sayed.
Preparation Committee
Researcher / Asmaa Hammad El-sayed Mohamed
Supervisor / Abdelmgeid Amin Ali
Supervisor / Essam Halim Houssein Abdel-Halim
Subject
Computer science - Congresses. Operations research - Congresses.
Publication Date
2024.
Number of Pages
133 p.
Language
English
Degree
Doctorate
Specialization
Computer Science
Approval Date
1/7/2024
Place of Approval
Minia University - Faculty of Computers and Information - Computer Science
Abstract

Affective computing, a subfield of artificial intelligence, detects, processes, interprets, and mimics human emotions. Thanks to the continued advancement of portable non-invasive human sensor technologies, such as brain-computer interfaces (BCIs), emotion recognition has piqued the interest of researchers from a variety of domains. Facial expressions, speech, behavior (gesture/posture), and physiological signals can all be used to identify human emotions. However, the first three may be unreliable because people may hide their true emotions consciously or unconsciously (so-called social masking). Physiological signals can provide more accurate and objective emotion recognition. Electroencephalogram (EEG) signals respond in real time and are more sensitive to changes in affective state than peripheral neurophysiological signals. Thus, EEG signals can reveal important features of emotional states.
Over the past few decades, EEG-based emotion recognition has emerged as one of the most important topics in healthcare, education, information sharing, gaming, and many other domains. EEG signals can be used to classify various emotions efficiently, but there is currently no standard set of features for doing so. Including all conceivable EEG features could improve classification performance, but it can also result in excessive dimensionality and degraded performance due to redundant information and inefficiency. Another issue is that the majority of emotion analysis techniques in use today rely on manually derived features and classical machine learning. Deep learning is an end-to-end technique that can automatically extract and classify EEG features. Unfortunately, most deep learning models for EEG-based emotion recognition still extract features manually, and their accuracy is not sufficiently high. Additionally, work remains to be done on efficiently fusing the distinct spatial and temporal information of EEG signals to improve emotion recognition performance. These issues have motivated the development of automated models that maximize the performance of EEG emotion classification.
Thesis objective
•Present a comprehensive review that:
–Surveys emotion recognition methods that rely on multichannel EEG signal-based BCIs.
–Provides an overview of what has been accomplished in this area.
–Provides an overview of the datasets and methods used to elicit emotional states.
–Reviews various EEG feature extraction, feature selection/reduction, machine learning, and deep learning methods.
–Discusses the EEG rhythms that are strongly linked to emotions, as well as the relationship between distinct brain areas and emotions.
–Discusses several human emotion recognition studies that use EEG data and compares different machine and deep learning algorithms.
–Suggests several challenges and future research directions in the recognition and classification of human emotional states using EEG.
•Introduce a new feature selection method to identify the optimal subset of EEG features that maximizes emotion recognition performance.
•Develop a new end-to-end deep learning model for emotion identification that learns the temporal features of EEG data while capturing its spatial aspects.
Thesis Methodology
•The first proposed model addresses the issue of high dimensionality: an enhanced version of the Coati Optimization Algorithm (COA), called eCOA, is proposed for global optimization and for selecting the best subset of EEG features for human emotion recognition. Specifically, COA suffers from local optima, imbalanced exploitation ability, and inadequate diversity, like other metaheuristic methods. The proposed eCOA combines COA with the RUNge Kutta Optimizer (RUN): the Scale Factor (SF) and Enhanced Solution Quality (ESQ) mechanisms from RUN are applied to resolve these shortcomings of COA. The eCOA algorithm has been extensively evaluated on the CEC'22 test suite and two EEG emotion recognition datasets, DEAP and DREAMER. Furthermore, eCOA is applied to binary and multi-class classification of emotions in the dimensions of valence, arousal, and dominance using a multi-layer perceptron neural network (MLPNN). A sketch of the wrapper-style feature-subset evaluation appears after this list.
•The second proposed model builds an end-to-end procedure, requiring no prior knowledge, that automatically captures both spatial and temporal features from EEG signals to identify emotions. We suggest a new deep learning architecture that combines a time-frequency convolutional neural network (TFCNN), a bidirectional gated recurrent unit (BiGRU), and a self-attention mechanism (SAM) to categorize emotions from EEG signals and automatically extract features. The first step uses the continuous wavelet transform (CWT), which responds readily to temporal frequency variations within EEG recordings, as a layer inside the network ahead of the convolutional layers, creating 2D scalogram images from EEG signals for time-series and spatial representation learning. Second, to encode more discriminative features representing emotions, the two-dimensional (2D) CNN, BiGRU, and SAM are trained on these scalograms simultaneously to capture the appropriate information from spatial, local, temporal, and global aspects. Lastly, EEG signals are categorized into several emotional states. This network can learn the temporal dependencies of EEG emotion signals with the BiGRU, extract local spatial features with the TFCNN, and improve recognition accuracy with the SAM, which explores global signal correlations by reassigning weights to emotion features. An architecture sketch also follows this list.
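To make the first model's wrapper setup concrete, the following is a minimal Python sketch of how a binary feature mask can be scored by a classifier during metaheuristic feature selection. The eCOA/RUN update equations themselves are not reproduced here; the trade-off weight alpha, the MLP size, and the toy data are illustrative assumptions, not values from the thesis.

```python
# Minimal sketch: wrapper-style fitness for metaheuristic feature selection.
# A candidate solution is a binary mask over EEG features; fitness balances
# classification error against the fraction of features kept (minimized).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def fitness(mask: np.ndarray, X: np.ndarray, y: np.ndarray,
            alpha: float = 0.99) -> float:
    """Score a binary feature mask: low error and few features are better."""
    if mask.sum() == 0:              # an empty subset is invalid
        return 1.0                   # worst possible fitness
    X_sub = X[:, mask.astype(bool)]  # keep only the selected feature columns
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
    acc = cross_val_score(clf, X_sub, y, cv=5).mean()
    ratio = mask.sum() / mask.size   # fraction of features retained
    return alpha * (1.0 - acc) + (1.0 - alpha) * ratio

# Example: score one random candidate mask on synthetic stand-in data.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 40))       # 120 trials x 40 EEG features (toy data)
y = rng.integers(0, 2, size=120)     # binary valence labels (toy data)
mask = rng.integers(0, 2, size=40)
print(fitness(mask, X, y))
```

A metaheuristic such as eCOA would iterate over a population of such masks, keeping the one with the lowest fitness.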
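Likewise, here is a minimal PyTorch sketch of a TFCNN + BiGRU + self-attention classifier of the kind the second model describes. Layer sizes, the scalogram resolution, and the use of precomputed scalogram segments (rather than an in-network CWT layer) are assumptions made for brevity, not the thesis architecture verbatim.

```python
# Minimal sketch: 2D-CNN over scalogram segments -> BiGRU -> self-attention.
import torch
import torch.nn as nn

class EmotionNet(nn.Module):
    def __init__(self, n_classes: int = 2, hidden: int = 64):
        super().__init__()
        # TFCNN part: 2D convolutions over each (freq x time) scalogram
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),                      # -> 32*4*4 = 512 per segment
        )
        # BiGRU part: temporal dependencies across the segment sequence
        self.gru = nn.GRU(512, hidden, batch_first=True, bidirectional=True)
        # SAM part: self-attention reweights segment features globally
        self.attn = nn.MultiheadAttention(2 * hidden, num_heads=2,
                                          batch_first=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                      # x: (batch, seq, 1, 64, 64)
        b, s = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, s, -1)  # spatial features
        h, _ = self.gru(feats)                 # temporal features
        a, _ = self.attn(h, h, h)              # global correlations
        return self.fc(a.mean(dim=1))          # pool segments, classify

# Example: a batch of 8 trials, each split into 10 scalogram segments.
model = EmotionNet(n_classes=3)
logits = model(torch.randn(8, 10, 1, 64, 64))
print(logits.shape)                            # torch.Size([8, 3])
```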
Results
For the first proposed model, the proposed eCOA algorithm obtained the best performance on all classification problems in both the DEAP and DREAMER datasets, as follows:
•For the DEAP dataset:
The proposed eCOA algorithm reaches accuracies of 85.17%, 91.99%, 95.05%, and 89.53% for arousal, valence, dominance, and four-class emotions, respectively.
Additionally, eCOA attains average selected-feature proportions of 0.0166, 0.0167, 0.0159, and 0.016 for valence, arousal, dominance, and four-class emotions, respectively.
•For the DREAMER dataset:
The proposed eCOA algorithm reaches accuracies of 95.21%, 94.08%, 94.08%, and 87.39% for arousal, valence, dominance, and four-class emotions, respectively.
In addition, eCOA attains average selected-feature proportions of 0.0174, 0.016, 0.0131, and 0.012 for valence, arousal, dominance, and four-class emotions, respectively.
The second proposed model was evaluated on three different classification tasks: one with two target classes (positive and negative), one with three target classes (positive, neutral, and negative), and one with four target classes (boring, calm, horror, and funny) using the SEED and GAMEEMO datasets.
•Based on the comprehensive experimental results, the suggested approach achieved emotion recognition accuracies of 97.3%, 96.2%, and 93.8% on the two-, three-, and four-class tasks, respectively, which are 3.99%, 1.98%, and 3.47% higher than existing approaches working on the same datasets across different subjects.
Future work
The future work can be summarized in the following points:
•Traditionally, actual emotion classes have been labeled by thresholding subjective rating data at a predetermined value. Unfortunately, determining the appropriate threshold is difficult. A novel approach is to consider the valence and arousal dimensions at the same time and then apply data clustering methods to find the actual emotion classes (a brief sketch of this idea follows this list).
•Emotion models with more dimensions must be developed. Currently, the two-dimensional emotion model is widely employed, but multi-class emotion recognition necessitates the development of higher-dimensional emotion models. For example, accumulated analysis of a subject's context information could predict the 'stance' dimension in a three-dimensional emotion model (i.e., arousal, stance, and valence).
•To monitor temporal emotional fluctuations in real time, traditional time series analysis approaches must be integrated with machine learning techniques.
•We need to create more datasets that employ active elicitation techniques, such as video games, because they better imitate "real-life" experiences and are more effective at inducing emotion.
•Given the eCOA efficiency results, multimodal approaches can be applied to further improve EEG emotion recognition accuracy. Additionally, automatic pattern detection in medical image data can be performed using the suggested eCOA. Integrating filtering and segmentation may also be a future direction for eCOA, because preprocessing is likewise necessary for pattern recognition.
•Our deep learning framework may be used for other applications, such as EEG classification for epilepsy diagnosis and sleep staging, and it may be enhanced by integrating EEG with facial expressions, EMG, and ECG via multimodal training.
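As a brief illustration of the relabeling idea raised in the first point above, the following Python sketch clusters subjective valence/arousal ratings instead of thresholding them. The four-cluster choice, the 1-9 rating scale, and the synthetic ratings are illustrative assumptions only.

```python
# Minimal sketch: derive emotion classes by clustering (valence, arousal)
# ratings rather than splitting each dimension at a fixed threshold.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
ratings = rng.uniform(1, 9, size=(200, 2))   # (valence, arousal), 1-9 scale
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(ratings)
print(np.bincount(labels))                   # trials per discovered class
```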