Document Type: Original Paper
Computational Neuroscience Laboratory, Department of Biomedical Engineering, Faculty of Electrical Engineering, Sahand University of Technology, Tabriz, Iran.
Electrical and Computer Engineering, University of Tabriz, Tabriz, Iran.
To extract and combine information from different modalities, fusion techniques are commonly applied to improve system performance. In this study, we aimed to examine the effectiveness of fusion techniques in emotion recognition.
Materials and Methods
Electrocardiogram (ECG) and galvanic skin response (GSR) signals of 11 healthy female students (mean age: 22.73±1.68 years) were collected while the subjects listened to emotional music clips. For multi-resolution analysis of the signals, the wavelet transform (Coiflet 5, decomposition level 14) was used. Moreover, a novel feature-level fusion method was employed, in which the low-frequency sub-band coefficients of the GSR signals and the high-frequency sub-band coefficients of the ECG signals were fused to reconstruct a new feature. To reduce the dimensionality of the feature vector, the absolute values of several statistical indices were calculated and used as the input to a probabilistic neural network (PNN) classifier. To describe emotions, a two-dimensional model (the four quadrants of the valence and arousal dimensions), valence-based emotional states, and emotional arousal were applied.
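The decomposition-and-fusion pipeline above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the authors' implementation: the paper uses a Coiflet 5 wavelet at level 14, whereas here a Haar filter pair and a shallower decomposition are used for brevity, and the GSR/ECG signals are random placeholders rather than recorded data.

```python
import numpy as np

def dwt_step(x, lo, hi):
    # one DWT level: filter with low-/high-pass kernels, then downsample by 2
    a = np.convolve(x, lo)[1::2]
    d = np.convolve(x, hi)[1::2]
    return a, d

def wavedec(x, lo, hi, level):
    # multi-level decomposition: returns final approximation and
    # the detail coefficients ordered from finest to coarsest
    details = []
    a = x
    for _ in range(level):
        a, d = dwt_step(a, lo, hi)
        details.append(d)
    return a, details

# Haar filter pair as a simple stand-in (the paper uses Coiflet 5, level 14)
lo = np.array([1.0, 1.0]) / np.sqrt(2.0)
hi = np.array([1.0, -1.0]) / np.sqrt(2.0)

rng = np.random.default_rng(0)
gsr = rng.standard_normal(4096)   # placeholder GSR epoch
ecg = rng.standard_normal(4096)   # placeholder ECG epoch

level = 6
gsr_approx, _ = wavedec(gsr, lo, hi, level)    # low-frequency GSR content
_, ecg_details = wavedec(ecg, lo, hi, level)   # high-frequency ECG content

# feature-level fusion: combine GSR low-frequency coefficients with the
# finest (high-frequency) ECG detail coefficients into one feature vector
fused = np.concatenate([gsr_approx, ecg_details[0]])

# dimensionality reduction via statistical indices of the absolute values
feats = np.array([np.mean(np.abs(fused)), np.std(fused),
                  np.max(np.abs(fused)), np.median(np.abs(fused))])
```

The specific statistical indices shown (mean, standard deviation, maximum, median of absolute values) are assumptions; the abstract does not list which indices were used.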
Results
The highest recognition rates were obtained with a PNN spread parameter of sigma = 0.01. A mean classification rate of 100% was achieved by applying the proposed fusion methodology, whereas accuracy rates of 97.90% and 97.20% were attained for the GSR and ECG signals alone, respectively.
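The role of the spread parameter sigma in a PNN can be illustrated with a generic Parzen-window implementation. This is a textbook-style sketch, not the authors' classifier; the feature vectors below are synthetic toy clusters, with only the value sigma = 0.01 taken from the reported results.

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.01):
    # Minimal probabilistic neural network: every training sample is a
    # pattern unit; a class score is the mean Gaussian-kernel response
    # of that class's pattern units, and the highest score wins.
    classes = np.unique(y_train)
    preds = []
    for x in np.atleast_2d(X_test):
        d2 = np.sum((X_train - x) ** 2, axis=1)   # squared distances
        k = np.exp(-d2 / (2.0 * sigma ** 2))      # kernel with spread sigma
        scores = [k[y_train == c].mean() for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

# toy illustration: two tight, well-separated feature clusters
rng = np.random.default_rng(1)
X0 = rng.normal(0.00, 0.002, size=(10, 2))
X1 = rng.normal(0.05, 0.002, size=(10, 2))
X_train = np.vstack([X0, X1])
y_train = np.array([0] * 10 + [1] * 10)

pred = pnn_predict(X_train, y_train, X_train, sigma=0.01)
acc = float(np.mean(pred == y_train))
```

A small sigma makes the kernels narrow, so classification is driven by the nearest training samples; overly small values can leave test points with near-zero response from every class, which is why the spread must be tuned.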
Conclusion
Compared with previously published articles in the field of emotion recognition using musical stimuli, the proposed methodology yielded promising results.