Show simple item record

dc.contributor.advisor  Kepuska, V.
dc.contributor.author  Najafi, Mary
dc.date.accessioned  2018-06-27T20:27:19Z
dc.date.available  2018-06-27T20:27:19Z
dc.date.created  2018-07
dc.date.issued  2018-05
dc.date.submitted  July 2018
dc.identifier.uri  http://hdl.handle.net/11141/2514
dc.description  Thesis (M.S.) - Florida Institute of Technology, 2018
dc.description.abstract  This thesis presents a tandem Speech Emotion Recognition (SER) system that differentiates 8 archetypal emotions using two different types of acoustic features as inputs to Artificial Neural Network (ANN) models. The two feature types fed into the classifiers capture the degree of excitement and the pleasantness of speech. The time-based characteristics of speech are incorporated into the feature extraction method by monitoring the trend of local features over time. Thus, two global features are proposed, derived from Teager Energy Operator (TEO)-based features (TEOg) and from spectral features, the Mel-Frequency Cepstral Coefficients (MFCCg). In this study we established a tandem system of two hierarchies that follows a cognitive model: emotions are separated based on the amount of stress in the voice, using Teager Energy Operator-Critical Band-Autocorrelation-Envelope (TEO-CB-Auto-Env) features, and on the pleasantness of the emotion, using MFCC features. We also proposed a baseline measurement of recognition based on the current feature vectors and compared the baseline against the tandem system to demonstrate the superiority of the proposed tandem system over non-hierarchical systems. Moreover, we compared our results with the recognition rates reported in several of the cited articles. Additionally, inspired by the cognitive model, a hybrid tandem system is defined in which the first hierarchy takes TEOg as input to its classifier and the two models in the second hierarchy take the MFCCg features at their input layers. This system is compared with a tandem system that uses only MFCCg feature vectors in both hierarchies, in terms of effectiveness and efficiency. Our experiments show that the former system is more efficient, whereas the latter tandem system gives a higher recognition rate.
In our system, we used a binary-class Multi-Layer Perceptron (MLP) for the first hierarchy and two multi-class MLPs for the second. Considering only the audio part, classification is performed on three emotion-based datasets: Surrey Audio-Visual Expressed Emotion (SAVEE), Berlin Database of Emotional Speech (Emo-DB), and the eNTERFACE Audio-Visual Emotion Database (eNTERFACE). The systems are speaker- and gender-independent, and Unweighted Accuracy (UA) is used for evaluation. At its best, our tandem system given only MFCCg returns prediction rates of 77.26%, 71.42%, and 66.49% on the Emo-DB, SAVEE, and eNTERFACE datasets, respectively, whereas the same measurement using the hybrid features (second best) yields 75.067%, 67.596%, and 65.197%.
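The abstract's tandem arrangement (a binary MLP routing each utterance to one of two multi-class MLPs) can be sketched as follows. This is an illustrative sketch only, not the author's implementation: the network is untrained (random weights), and the feature dimensionalities (8 for the TEO-based vector, 13 for the MFCC-based vector) and hidden-layer size are assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer MLP forward pass: tanh hidden layer, softmax output."""
    h = np.tanh(x @ W1 + b1)
    z = h @ W2 + b2
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # stable softmax
    return e / e.sum(axis=-1, keepdims=True)

def make_mlp(n_in, n_hidden, n_out):
    """Randomly initialized weights; a real system would train these."""
    return (rng.normal(scale=0.1, size=(n_in, n_hidden)), np.zeros(n_hidden),
            rng.normal(scale=0.1, size=(n_hidden, n_out)), np.zeros(n_out))

# Hierarchy 1: binary MLP separating stressed vs. non-stressed speech
# (TEO-based global features assumed as input, dimension 8 here).
router = make_mlp(n_in=8, n_hidden=16, n_out=2)

# Hierarchy 2: one multi-class MLP per branch (MFCC-based global features
# assumed as input, dimension 13 here), each covering 4 of the 8 emotions.
branches = [make_mlp(n_in=13, n_hidden=16, n_out=4) for _ in range(2)]

def tandem_predict(teo_feat, mfcc_feat):
    """Route with hierarchy 1, then classify within the chosen branch."""
    b = int(np.argmax(mlp_forward(teo_feat, *router)))
    c = int(np.argmax(mlp_forward(mfcc_feat, *branches[b])))
    return b, c  # (branch index, emotion index within that branch)

b, c = tandem_predict(rng.normal(size=8), rng.normal(size=13))
```

The hybrid variant described above corresponds to feeding TEO-based features to the router and MFCC-based features to both second-hierarchy models, as in this sketch; the MFCC-only variant would use the MFCC vector at both stages.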
dc.format.mimetype  application/pdf
dc.language.iso  en_US
dc.rights  CC BY 4.0
dc.rights.uri  http://creativecommons.org/licenses/by/4.0/legalcode
dc.title  Speech Emotion Recognition using Connectionist Models in a Tandem System
dc.type  Thesis
dc.date.updated  2018-06-08T15:40:56Z
thesis.degree.name  Master of Science in Computer Engineering
thesis.degree.level  Masters
thesis.degree.discipline  Computer Engineering
thesis.degree.department  Electrical and Computer Engineering
thesis.degree.grantor  Florida Institute of Technology
dc.type.material  text


