open · documented

Nasalization from Acoustic Features (NAF)

R code implementing a methodology for the automatic measurement of vowel nasalization from acoustic data.

Authors:  Christopher Carignan
Updated:  2021-02-04
Source:  https://github.com/ChristopherCarignan/NAF/
Keywords:  nasalization, phonetics, machine-learning, MFCC, acoustics, R
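The NAF entry pairs MFCC features with machine learning. Purely as a rough illustration of that general idea (not Carignan's actual R implementation), the sketch below trains a toy logistic-regression classifier on synthetic "MFCC-like" vectors for oral vs. nasal vowels and reads the classifier's posterior probability as a continuous nasalization score; all data, dimensions, and hyperparameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for 13-dimensional MFCC frames:
# oral vowels centered at 0, nasal vowels shifted to 1 (hypothetical data).
oral = rng.normal(loc=0.0, scale=1.0, size=(200, 13))
nasal = rng.normal(loc=1.0, scale=1.0, size=(200, 13))
X = np.vstack([oral, nasal])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Logistic regression fit by plain gradient descent.
w = np.zeros(13)
b = 0.0
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid of the linear score
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

def nasalization(mfcc_frame):
    """Posterior probability of 'nasal' for one frame (0 = oral-like, 1 = nasal-like)."""
    return 1.0 / (1.0 + np.exp(-(mfcc_frame @ w + b)))

print(nasalization(np.full(13, 1.0)))  # nasal-like frame: should score above 0.5
print(nasalization(np.full(13, 0.0)))  # oral-like frame: should score below 0.5
```

Reading the posterior as a gradient measure, rather than a hard oral/nasal label, is what makes this style of approach useful for quantifying *degree* of nasalization over time.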

open · documented

SemDis

SemDis uses advances in natural language processing to automatically determine how closely associated two texts are. Higher SemDis scores indicate that the texts are less related, i.e., that they express more semantically distant ideas or concepts.

Authors:  Dan Johnson & Roger Beaty
Updated:  2020-02-18
Source:  http://semdis.wlu.psu.edu/
Keywords:  language, semantics, word-recognition, English
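To make the scoring direction concrete (higher = less related), here is a toy sketch of distance between averaged word vectors. The embeddings below are made up for illustration; SemDis itself uses trained semantic spaces and its own scoring pipeline, not these vectors.

```python
import numpy as np

# Toy word embeddings (hypothetical; for illustration only).
embeddings = {
    "dog":  np.array([0.9, 0.1, 0.0]),
    "cat":  np.array([0.8, 0.2, 0.1]),
    "bank": np.array([0.1, 0.9, 0.3]),
    "loan": np.array([0.0, 0.8, 0.4]),
}

def text_vector(text):
    """Average the embeddings of the known words in a text."""
    vecs = [embeddings[w] for w in text.lower().split() if w in embeddings]
    return np.mean(vecs, axis=0)

def semantic_distance(text_a, text_b):
    """1 minus cosine similarity: higher values mean less related texts."""
    a, b = text_vector(text_a), text_vector(text_b)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - cos

print(semantic_distance("dog cat", "cat dog"))    # same content -> near 0
print(semantic_distance("dog cat", "bank loan"))  # unrelated content -> larger
```

The key design point is that distance, not similarity, is reported, so a pair of unrelated texts scores higher than a pair of near-paraphrases.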

open  

Auditory Grouping Cues

Here, we derive auditory grouping cues by measuring and summarizing statistics of natural sound features.

Authors:  Wiktor Młynarski & Josh H. McDermott
Updated:  2019-12-10
Source:  http://mcdermottlab.mit.edu/grouping_statistics/index.html
Keywords:  sensory, audition, frequency, harmony

open  

Illusory Texture Demos

Here you will find some sound examples demonstrating the phenomenon of "illusory sound texture."

Authors:  Richard McWalter and Josh McDermott
Updated:  2019-11-18
Source:  http://mcdermottlab.mit.edu/textcont.html
Keywords:  sound-texture, perception, speech, music

open  

Chinese Readability Index Explorer (CRIE)

The Chinese Readability Index Explorer (CRIE) is composed of four subsystems and incorporates 82 multilevel linguistic features. CRIE performs the major tasks of segmentation, syntactic parsing, and feature extraction.

Authors:  Yao-Ting Sung, Tao-Hsing Chang, Wei-Chun Lin, Kuan-Sheng Hsieh, & Kuo-En Chang
Updated:  2019-02-18
Source:  http://www.chinesereadability.net/CRIE/?LANG=CHT
Keywords:  linguistics, syntax, phonetics, machine-learning, Chinese
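As a hedged illustration of the "feature extraction" step only, the sketch below computes two surface-level readability features from Chinese text. The feature names and formulas are my own toy choices; CRIE's 82 features additionally rely on word segmentation and syntactic parsing, which this sketch does not attempt.

```python
import re

def surface_features(text):
    """Two toy surface-level readability features for Chinese text."""
    # Split on common Chinese end-of-sentence punctuation.
    sentences = [s for s in re.split(r"[。！？]", text) if s]
    chars = [c for s in sentences for c in s]
    return {
        # Average characters per sentence: longer sentences -> harder text.
        "mean_sentence_length": len(chars) / len(sentences),
        # Share of distinct characters: a crude proxy for lexical variety.
        "unique_char_ratio": len(set(chars)) / len(chars),
    }

feats = surface_features("今天天氣很好。我們去公園玩。")
print(feats)  # two sentences of six characters each
```

Real readability indices combine many such features (lexical, syntactic, semantic, cohesion-level) into trained models rather than reporting them raw.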

open  

Model-Matched Sounds

Cochleograms and sound files are shown for example stimuli from the model-matching experiment.

Authors:  Sam V. Norman-Haignere & Josh H. McDermott
Updated:  2018-12-03
Source:  http://mcdermottlab.mit.edu/svnh/model-matching/Stimuli_from_Model-Matching_Experiment.html
Keywords:  audition, sensory, auditory-cortex, neuroscience, English

open  

Inharmonic Speech Segregation

Inharmonic speech demos showing sound segregation.

Authors:  Sara Popham, Dana Boebinger, Dan P. W. Ellis, Hideki Kawahara, & Josh H. McDermott
Updated:  2018-05-29
Source:  http://mcdermottlab.mit.edu/inharmonic_speech_examples/index.html
Keywords:  sound-sources, brain, frequency, harmonics, English

open  

Texture-Time Averaging

Audio files showing the adaptive and selective time-averaging of auditory scenes.

Authors:  Richard McWalter & Josh McDermott
Updated:  2018-05-07
Source:  http://mcdermottlab.mit.edu/textint.html
Keywords:  perception, sensory-input, audition

open  

Schema Learning for the Cocktail Party Problem

The cocktail party problem requires listeners to infer individual sound sources from mixtures of sounds.

Authors:  Kevin J.P. Woods & Josh H. McDermott
Updated:  2018-04-03
Source:  http://mcdermottlab.mit.edu/schema_learning/index.html
Keywords:  sound-sources, audition, schema

open · documented

Combinatorial Expressive Speech Engine

C.L.E.E.S.E. (Combinatorial Expressive Speech Engine) is a tool designed to generate an infinite number of natural-sounding, expressive variations around an original speech recording.

Authors:  Juan José Burred & Emmanuel Ponsot
Updated:  2018-03-18
Source:  http://cream.ircam.fr/?p=521
Keywords:  language, speech, pitch, French, English, Japanese
