USC-EMO-MRI: An emotional speech production database

Authors: Jangwon Kim, Asterios Toutios, Yoon-Chul Kim, Yinghua Zhu, Sungbok Lee, Shrikanth Narayanan
Updated: Mon 05 May 2014
Source: https://sail.usc.edu/span/usc-emo-mri/
Type: speech-database
Languages: english
Keywords: emotion, speech-production, MRI, real-time-MRI, english
Open Access: yes
License: CC
Citation: Kim, J., Toutios, A., Lee, S., and Narayanan, S. (2020). Vocal tract shaping of emotional speech. Computer, Speech and Language, 64; Kim, J., Toutios, A., Kim, Y-C., Zhu, Y., Lee, S., & Narayanan, S. (2014). USC-EMO-MRI corpus: An emotional speech production database recorded by real-time magnetic resonance imaging. 10th International Seminar on Speech Production (ISSP), Cologne, Germany, 226-229.
Summary:

USC-EMO-MRI is an emotional speech production database comprising real-time magnetic resonance imaging (rtMRI) data with synchronized speech audio from five male and five female actors, each producing a passage and a set of sentences in multiple repetitions while enacting four target emotions (neutral, happy, angry, sad). The database includes emotion quality evaluations from at least ten listeners for each speaker's data. The database and companion software tools are freely available to the research community.