USC CreativeIT database of multimodal dyadic interactions

Authors: Angeliki Metallinou, Zhaojun Yang, Chi-Chun Lee, Carlos Busso, Sharon Carnicke, Shrikanth Narayanan
Updated: Fri 17 April 2015
Type: multimodal-database
Languages: English
Keywords: dyadic-interactions, speech, gestures, motion-capture, emotion, english
Open Access: yes
Publications: Metallinou et al. (2016)
Citation: Metallinou, A., Yang, Z., Lee, C-C., Busso, C., Carnicke, S., & Narayanan, S. (2016). The USC CreativeIT database of multimodal dyadic interactions: From speech and full body motion capture to continuous emotional annotations. Language Resources and Evaluation, 50(3), 497-521.

For each recording, we provide detailed audiovisual and text information: the audio and video of both interlocutors, the full-body Motion Capture data of one of the interlocutors, and the text transcription of the interaction. Additionally, for each actor recording, we provide discrete and time-continuous annotations of dimensional emotion labels from multiple annotators.