ArtLex-en: Acoustic and EMA data on thousands of words and syllables spoken by a single speaker of American English
Authors: | Charles Redmon, Seulgi Shin, Panying Rong |
---|---|
Updated: | Tue 10 March 2020 |
Source: | https://gitlab.com/chredmon/ku-artlex_eng1 |
Type: | speech-database |
Languages: | english |
Keywords: | electromagnetic-articulography, acoustics, lexicon, english |
Open Access: | yes |
License: | MIT |
Publications: | Redmon, Shin, and Rong (2019) |
Citation: | Redmon, C., Shin, S., & Rong, P. (2019). "KU-ArtLex: A single-speaker EMA database for modeling the articulatory structure of the lexicon." Proceedings of the International Congress of Phonetic Sciences. |
Summary: | Articulatory data (6-channel electromagnetic articulography, EMA) were recorded for over 20,000 English words, along with two repetitions each of 1,200 controlled CVC and VCV syllables, from a single male native speaker of Midwestern American English. The primary aim of the database is to provide a window onto the articulatory structure of the lexicon: which gestural profiles distinguish words in English, and what constitutes minimality from an articulatory standpoint (i.e., how, if at all, minimal pairs are defined). Further, because the database is open access, multiple research groups can run comparable perception experiments testing how well word recognition patterns are predicted by articulatory and acoustic profiles, helping to clarify broader theoretical debates on how the speech signal is encoded. |
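
As a rough illustration of how EMA trajectories from a database like this might be summarized into simple gestural profiles, the sketch below computes displacement range and peak velocity for one sensor channel. The file layout, column names, and sampling rate are assumptions for the example only; they are not specified by the database description above, so adapt them to the actual export format in the repository.

```python
# Minimal sketch: per-token gestural summary from an EMA trajectory file.
# Paths, column names, and the sampling rate are hypothetical assumptions.
import numpy as np
import pandas as pd

SAMPLE_RATE_HZ = 250  # assumed EMA sampling rate; check the repository's documentation


def gesture_profile(csv_path: str, channel: str = "TT_z") -> dict:
    """Summarize one articulatory channel (e.g., tongue-tip height) for a word token."""
    # Assumed columns: time plus sensor coordinates such as TT_x, TT_z, TB_x, TB_z, ...
    df = pd.read_csv(csv_path)
    z = df[channel].to_numpy(dtype=float)
    vel = np.gradient(z) * SAMPLE_RATE_HZ  # velocity in position units per second
    return {
        "duration_s": len(z) / SAMPLE_RATE_HZ,
        "displacement_range": float(z.max() - z.min()),
        "peak_velocity": float(np.abs(vel).max()),
    }


# Example usage (hypothetical file name):
# profile = gesture_profile("ema/word_00042.csv", channel="TT_z")
# print(profile)
```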