Abstract

Emotion recognition from electroencephalography (EEG) signals is a crucial frontier in human-computer interaction, yet it remains a formidable research challenge. Existing methods typically use only a fraction of the available EEG channels (often around 18 of 32) to extract features indicative of emotional states, and they rely primarily on the two-dimensional valence-arousal model, which limits how accurately emotions can be discerned. This thesis introduces a framework that moves beyond these constraints by adopting a three-dimensional model spanning valence, arousal, and dominance (VAD), enabling finer-grained identification of emotions. Using EEG signals from the DEAP database, the framework can also assign emotions when no discrete labels are available. Evaluated with Support Vector Machine (SVM), K-Nearest Neighbors (K-NN), and Multilayer Perceptron (MLP) classifiers, it achieves accuracies of 90.19%, 91.91%, and 89.86% for valence, arousal, and dominance classification, respectively. The results underscore the effectiveness of time-domain statistical and power features extracted from EEG data in distinguishing between diverse emotional states. Moreover, the three-dimensional framework can separate emotions that a traditional two-dimensional model maps to the same point and therefore cannot tell apart.

Emotion recognition via EEG has meanwhile attracted growing interest across applications including human-computer interaction, mental health assessment, and affective computing. This endeavor, however, faces inherent challenges arising from the intricate and noisy nature of EEG signals.
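The pipeline summarized above, extracting time-domain statistical and power features per channel and feeding them to SVM, K-NN, and MLP classifiers, can be sketched as follows. This is a minimal illustration, not the thesis's actual code: the random array stands in for DEAP-style epochs (trials × channels × samples), the binary labels are placeholders for high/low valence, and the specific features (per-channel mean, standard deviation, and average power) and classifier hyperparameters are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in for DEAP-style data: 120 trials, 32 channels, 512 samples
X_raw = rng.standard_normal((120, 32, 512))
y = rng.integers(0, 2, size=120)  # placeholder binary labels (e.g. high/low valence)

def time_domain_features(epochs):
    """Per-channel mean, standard deviation, and average power of each epoch."""
    mean = epochs.mean(axis=-1)
    std = epochs.std(axis=-1)
    power = (epochs ** 2).mean(axis=-1)
    # Concatenate into one feature vector per trial: (trials, 3 * channels)
    return np.concatenate([mean, std, power], axis=1)

X = time_domain_features(X_raw)  # shape (120, 96)

# Compare the three classifier families mentioned in the abstract
for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("K-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("MLP", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```

On random synthetic data the accuracies hover around chance; the reported figures (90% and above) come from real DEAP recordings with meaningful labels.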
Existing strategies typically rely on hand-crafted feature extraction followed by machine learning classifiers, which often fail to capture subtle emotional cues and may require extensive manual feature engineering. In response, this study presents an approach that harnesses convolutional neural networks (CNNs) for EEG-based emotion recognition.