
Domain Adversarial for Acoustic Emotion Recognition

Mohammed Abdelwahab; Carlos Busso
Open Access
  • Published: 01 Dec 2018
  • Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Abstract
The performance of speech emotion recognition is affected by differences in data distribution between the train (source domain) and test (target domain) sets used to build and evaluate the models. This is a common problem, as multiple studies have shown that the performance of emotion classifiers drops when they are exposed to data that does not match the distribution used to build them. The mismatch becomes especially clear when the training and testing data come from different domains, causing a large gap between validation and test performance. Due to the high cost of annotating new data and the abundance ...
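The title refers to domain-adversarial training, in which a shared feature extractor is trained so that an emotion classifier succeeds while a domain classifier (source vs. target) fails, typically via a gradient reversal layer in the style of Ganin et al. (2016). The sketch below is a minimal, assumed PyTorch illustration of that general idea only; the feature dimensionality, layer sizes, number of emotion classes, and the DANN class itself are hypothetical and are not taken from this article.

```python
import torch
import torch.nn as nn

# Gradient reversal layer: identity in the forward pass, flips the
# gradient sign (scaled by lambd) in the backward pass.
class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DANN(nn.Module):
    """Hypothetical domain-adversarial network for acoustic features."""
    def __init__(self, num_features=1582, num_emotions=4, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        # Shared feature extractor over utterance-level acoustic features
        self.feature = nn.Sequential(
            nn.Linear(num_features, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
        )
        # Emotion classifier, trained on labeled source-domain data
        self.emotion = nn.Linear(256, num_emotions)
        # Domain classifier (source vs. target), fed reversed gradients
        self.domain = nn.Linear(256, 2)

    def forward(self, x):
        h = self.feature(x)
        emo_logits = self.emotion(h)
        dom_logits = self.domain(GradReverse.apply(h, self.lambd))
        return emo_logits, dom_logits

# Training loss (sketch): cross-entropy on emotion labels for source
# utterances plus cross-entropy on domain labels for source and target;
# the reversed gradient pushes the features to be domain-invariant.
```

In practice the adversarial weight lambd is often ramped up gradually during training so the domain classifier does not dominate early epochs; that schedule is an assumption here, not a detail reported in this article.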
Subjects
free text keywords: Speech and Hearing, Media Technology, Linguistics and Language, Signal Processing, Acoustics and Ultrasonics, Instrumentation, Electrical and Electronic Engineering, Electrical Engineering and Systems Science - Audio and Speech Processing, Computer Science - Sound, Task analysis, Speech processing, Artificial intelligence, Classifier, Deep learning, Pattern recognition, Adversarial system, Test data, Artificial neural network, Computer science, Data modeling