USC CreativeIT database of multimodal dyadic interactions
Authors: | Angeliki Metallinou, Zhaojun Yang, Chi-Chun Lee, Carlos Busso, Sharon Carnicke, Shrikanth Narayanan |
---|---|
Updated: | Fri 17 April 2015 |
Source: | https://sail.usc.edu/CreativeIT/ |
Type: | multimodal-database |
Languages: | english |
Keywords: | dyadic-interactions, speech, gestures, motion-capture, emotion, english |
Open Access: | yes |
License: | https://sail.usc.edu/CreativeIT/Data_Release_Form_CreativeIT.pdf |
Publications: | Metallinou et al. (2016) |
Citation: | Metallinou, A., Yang, Z., Lee, C.-C., Busso, C., Carnicke, S., & Narayanan, S. (2016). The USC CreativeIT database of multimodal dyadic interactions: From speech and full body motion capture to continuous emotional annotations. Language Resources and Evaluation, 50(3), 497-521. |
Summary: | For each recording, we provide detailed audiovisual and text information, consisting of the audio and video of both interlocutors, the full-body Motion Capture data of one of the interlocutors, and the text transcription of the interaction. Additionally, for each actor-recording, we provide discrete and time-continuous annotations of dimensional emotion labels from multiple annotators. |
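
The per-recording contents listed in the summary map naturally onto a small container type. The Python sketch below is illustrative only: the class, field names, file layout, and the example label names are assumptions made for clarity, not the official CreativeIT release format (the actual formats are specified in the documentation at the source URL).

```python
from __future__ import annotations

from dataclasses import dataclass, field
from pathlib import Path


@dataclass
class CreativeITRecording:
    """One dyadic interaction: two actors, one of whom wears MoCap markers.

    All fields are hypothetical stand-ins for the database's actual file
    organization and annotation schema.
    """

    recording_id: str
    audio: dict[str, Path]    # per-interlocutor audio files
    video: dict[str, Path]    # per-interlocutor video files
    mocap: Path               # full-body Motion Capture data of one interlocutor
    transcript: Path          # text transcription of the interaction
    # Per actor-recording, from multiple annotators (label names assumed):
    discrete_labels: dict[str, float] = field(default_factory=dict)
    continuous_traces: dict[str, list[float]] = field(default_factory=dict)


# Hypothetical usage; paths and IDs do not reflect the real release layout.
rec = CreativeITRecording(
    recording_id="session01_take02",
    audio={"actor_A": Path("audio/A.wav"), "actor_B": Path("audio/B.wav")},
    video={"actor_A": Path("video/A.avi"), "actor_B": Path("video/B.avi")},
    mocap=Path("mocap/actor_A.c3d"),
    transcript=Path("text/session01_take02.txt"),
)
```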