Extracting features using computational cerebellar model for emotion classification
Main Authors:
Format: Conference or Workshop Item
Language: English
Published: 2013
Subjects:
Online Access: http://irep.iium.edu.my/38073/
http://irep.iium.edu.my/38073/1/Extracting_features_using_computational_cerebellar_model_for_emotion_classification.pdf
http://irep.iium.edu.my/38073/4/38073_Extracting%20features%20using_Scopus.pdf
Summary: Several feature extraction techniques have been employed to extract features from EEG signals for classifying emotions. Such techniques are neither constructed from an understanding of EEG and brain function nor inspired by an understanding of emotional dynamics; hence, the resulting features are difficult to interpret and yield low classification performance. In this study, a new feature extraction technique using the Cerebellar Model Articulation Controller (CMAC) is proposed. The features are extracted from the weights of a data-driven self-organizing feature map, which are adjusted during training to minimize the error between the desired output and the calculated output. A Multi-Layer Perceptron (MLP) classifier is then employed to classify four emotions: fear, happiness, sadness, and calm. Experimental results show that the average accuracy of classifying emotions from EEG signals recorded from 12 children aged 4 to 6 years ranges from 84.18% to 89.29%. In addition, classification performance for features derived from other techniques, namely Power Spectrum Density (PSD), Kernel Density Estimation (KDE), and Mel-Frequency Cepstral Coefficients (MFCC), is presented as a benchmark for comparison. The proposed approach yields accuracy improvements of 33.77% to 55% over the respective comparison features, indicating that CMAC-derived features coupled with an MLP have potential for competitive emotion recognition accuracy.
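The abstract gives no implementation details, but the pipeline it describes (train a CMAC on the EEG signal, take the adapted weights as the feature vector, classify with an MLP) can be sketched roughly as below. The tile-coding CMAC variant, the next-sample prediction target, the `cmac_features` helper, and all parameter values are illustrative assumptions rather than the authors' method; scikit-learn supplies the MLP.

```python
# Hypothetical sketch of the pipeline in the abstract: a CMAC is trained on
# each EEG segment, and its learned weights serve as features for an MLP.
# Targets, parameters, and names are assumptions, not the authors' code.
import numpy as np
from sklearn.neural_network import MLPClassifier

class CMAC:
    """Minimal tile-coding CMAC with one scalar input and scalar output."""
    def __init__(self, n_tilings=8, n_tiles=32, lr=0.1, lo=-3.0, hi=3.0):
        self.n_tilings, self.n_tiles, self.lr = n_tilings, n_tiles, lr
        self.lo, self.hi = lo, hi
        self.w = np.zeros((n_tilings, n_tiles))  # weights double as features

    def _active(self, x):
        # Map x to one tile per tiling; each tiling is offset slightly.
        span = (self.hi - self.lo) / self.n_tiles
        for t in range(self.n_tilings):
            off = t * span / self.n_tilings
            idx = int((x - self.lo + off) / span)
            yield t, int(np.clip(idx, 0, self.n_tiles - 1))

    def predict(self, x):
        return sum(self.w[t, i] for t, i in self._active(x))

    def train(self, xs, ys, epochs=5):
        for _ in range(epochs):
            for x, y in zip(xs, ys):
                err = y - self.predict(x)  # desired minus calculated output
                for t, i in self._active(x):
                    self.w[t, i] += self.lr * err / self.n_tilings

def cmac_features(signal):
    """Fit a CMAC to predict the next EEG sample; return flattened weights."""
    m = CMAC()
    m.train(signal[:-1], signal[1:])
    return m.w.ravel()

# Usage on synthetic stand-ins for per-trial EEG segments (4 emotion labels).
rng = np.random.default_rng(0)
X = np.array([cmac_features(rng.standard_normal(256)) for _ in range(40)])
y = rng.integers(0, 4, size=40)  # fear / happiness / sadness / calm
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)
print(clf.score(X, y))
```

The one-step-ahead prediction target is only a stand-in for the "desired output" the abstract mentions: under that assumption the CMAC weights encode local signal dynamics, which is one plausible way weight vectors could act as emotion-discriminative features.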