Speech emotion verification system (SEVS) based on MFCC for real time applications
Main Authors: |
Format: | Conference or Workshop Item
Language: | English
Published: | IET Digital Library, 2008
Subjects: |
Online Access: | http://irep.iium.edu.my/38169/
http://irep.iium.edu.my/38169/1/Speech_Emotion_Verification_System_%28SEVS%29_based_on_MFCC_for_real_time_applications.pdf
Summary: | Humans recognize speech emotions by extracting features from the speech signals received through the cochlea and then passing the information on for processing. In this paper we propose the use of Mel-Frequency Cepstral Coefficients (MFCC) to extract speech emotion information, providing both frequency- and time-domain information for analysis. Since features extracted using the MFCC simulate the function of the human cochlea, a neural network (NN) and fuzzy neural network algorithms, namely the Multi-Layer Perceptron (MLP), the Adaptive Network-based Fuzzy Inference System (ANFIS), and the Generic Self-organizing Fuzzy Neural Network (GenSoFNN), were used to verify the different emotions. Experimental results show the potential of these techniques to detect and distinguish three basic emotions from speech in real-time applications, based on features extracted using MFCC.
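As a rough illustration of the MFCC feature-extraction stage described in the summary, below is a minimal from-scratch sketch in NumPy: frame the signal, window it, take the power spectrum, apply a triangular mel filterbank, log-compress, and decorrelate with a DCT-II. The frame length, hop size, filter count, and coefficient count used here are common defaults and are assumptions for illustration, not values taken from the paper.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced evenly on the mel scale."""
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):           # rising slope
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):          # falling slope
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mfcc(signal, sr=16000, frame_len=400, hop=160,
         n_fft=512, n_filters=26, n_ceps=13):
    """Compute one row of n_ceps cepstral coefficients per frame."""
    n_frames = 1 + max(0, len(signal) - frame_len) // hop
    window = np.hamming(frame_len)
    fb = mel_filterbank(n_filters, n_fft, sr)
    ceps = np.zeros((n_frames, n_ceps))
    n = np.arange(n_filters)
    for t in range(n_frames):
        frame = signal[t * hop : t * hop + frame_len] * window
        power = np.abs(np.fft.rfft(frame, n_fft)) ** 2 / n_fft
        log_mel = np.log(np.maximum(fb @ power, 1e-10))  # floor avoids log(0)
        # DCT-II of the log mel energies; keep the lowest n_ceps coefficients
        for c in range(n_ceps):
            ceps[t, c] = np.sum(
                log_mel * np.cos(np.pi * c * (2 * n + 1) / (2 * n_filters)))
    return ceps
```

The resulting per-frame coefficient matrix is the kind of feature sequence that a classifier such as an MLP (or the fuzzy neural models named in the summary) would then be trained on; production systems typically use an optimized library implementation rather than this loop-based sketch.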