Speech and Language Resource Bank
The CreativeIT database: data from 16 actors, male and female, recorded during affective dyadic interactions lasting 2-10 minutes each. The interactions are improvised and of two types: two-sentence exercises and paraphrases. (A metadata sketch in Python follows this entry.)
Authors: Angeliki Metallinou, Zhaojun Yang, Chi-Chun Lee, Carlos Busso, Sharon Carnicke, Shrikanth Narayanan
Updated: 2015-04-17
Source: https://sail.usc.edu/CreativeIT/
Keywords: dyadic-interactions, speech, gestures, motion-capture, emotion, english
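
As a quick illustration of how one might organize the session metadata described in the entry above, the following Python sketch defines a record for a single dyadic interaction. The schema, field names, interaction labels, and file paths are illustrative assumptions, not part of the CreativeIT release.

from dataclasses import dataclass
from enum import Enum
from pathlib import Path

class InteractionType(Enum):
    """The two improvised interaction types described in the entry above."""
    TWO_SENTENCE = "2-sentence"
    PARAPHRASE = "paraphrase"

@dataclass
class DyadicSession:
    """One recorded dyadic interaction between two actors (hypothetical schema)."""
    actor_a: str                  # actor identifier, e.g. "M01" (illustrative)
    actor_b: str                  # actor identifier, e.g. "F03" (illustrative)
    interaction: InteractionType
    duration_min: float           # sessions run roughly 2-10 minutes
    audio_path: Path              # per-session speech recording (hypothetical path)
    mocap_path: Path              # per-session motion-capture file (hypothetical path)

session = DyadicSession(
    actor_a="M01",
    actor_b="F03",
    interaction=InteractionType.PARAPHRASE,
    duration_min=4.5,
    audio_path=Path("creativeit/session_07/audio.wav"),  # hypothetical layout
    mocap_path=Path("creativeit/session_07/mocap.bvh"),  # hypothetical layout
)
print(f"{session.actor_a}+{session.actor_b}: {session.interaction.value}, "
      f"{session.duration_min} min")
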
IEMOCAP: an acted, multimodal, and multispeaker database containing approximately 12 hours of audiovisual data, including video, speech, motion capture of the face, and text transcriptions. (A loading sketch follows this entry.)
Authors: Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jeannette N. Chang, Sungbok Lee, Shrikanth Narayanan
Updated: 2008-11-09
Source: https://sail.usc.edu/iemocap/
Keywords: emotions, behavior, speech, gesture, motion-capture, english
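
To make the modality pairing concrete, the sketch below shows one way to pair an audio file with its text transcription when working with a multimodal corpus such as the one above, using only the Python standard library. The directory layout and file names are assumptions for illustration and do not reflect the actual IEMOCAP release structure.

import wave
from pathlib import Path

# Hypothetical paths: substitute real session directories after
# obtaining the corpus from the source URL above.
WAV_PATH = Path("iemocap/session1/dialog/example.wav")
TRANSCRIPT_PATH = Path("iemocap/session1/transcriptions/example.txt")

def wav_duration_seconds(path: Path) -> float:
    """Duration of a PCM WAV file, computed from frame count and sample rate."""
    with wave.open(str(path), "rb") as wav:
        return wav.getnframes() / wav.getframerate()

def main() -> None:
    # Pair the audio file with its transcription, one line per utterance.
    duration = wav_duration_seconds(WAV_PATH)
    lines = TRANSCRIPT_PATH.read_text(encoding="utf-8").splitlines()
    print(f"{WAV_PATH.name}: {duration:.1f} s, {len(lines)} transcript lines")

if __name__ == "__main__":
    main()
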