Efficient Headphone Screen

Conducting online auditory experiments? This headphone test helps you efficiently screen out participants who probably aren't using headphones.

Authors:  Alice E. Milne, Roberta Bianco, Katarina C. Poole, Sijia Zhao, Andrew J. Oxenham, Alexander J. Billig, Maria Chait
Updated:  2021-07-28
Source:  https://app.gorilla.sc/openmaterials/100917
Keywords:  audition, experiment, stimuli, noise, English


ANL Frequency-Following Responses

A set of tools to analyze and interpret auditory steady-state responses, particularly the subcortical kind commonly known as frequency-following responses (FFRs).

Authors:  Hari Bharadwaj, Hao Lu, Lenny Varghese
Updated:  2021-04-04
Source:  https://github.com/SNAPsoftware/ANLffr
Keywords:  audition, frequency, Python, phase-locking, EEG



StimuliApp

StimuliApp is a free app designed to create psychophysical tests with precise timing on iOS and iPadOS devices.

Authors:  Rafael Marin-Campos, Joseph Dalmau, Albert Compte, Daniel Linares
Updated:  2020-12-24
Source:  https://www.stimuliapp.com/
Keywords:  psychophysics, experiment, audition, visual-stimulation


Headphone Check

This code implements a headphone screening task intended to facilitate web-based experiments employing auditory stimuli.

Authors:  Kevin J.P. Woods, Max H. Siegel, James Traer, Josh H. McDermott, Ray Gonzalez, Kelsey Allen
Updated:  2020-06-02
Source:  https://github.com/mcdermottLab/HeadphoneCheck
Keywords:  audition, experiment, stimuli, pure-tone


Auditory English Lexicon Project

The Auditory English Lexicon Project (AELP) is a multi-talker, multi-region psycholinguistic database of 10,170 spoken words and 10,170 spoken nonwords.

Authors:  Winston D. Goh, Melvin J. Yap, Qian Wen Chee
Updated:  2019-12-24
Source:  https://inetapps.nus.edu.sg/aelp/
Keywords:  psycholinguistics, database, lexicon, audition, semantics, English


Auditory Grouping Cues

Here, we derive auditory grouping cues by measuring and summarizing statistics of natural sound features.

Authors:  Wiktor Młynarski & Josh H. McDermott
Updated:  2019-12-10
Source:  http://mcdermottlab.mit.edu/grouping_statistics/index.html
Keywords:  sensory, audition, frequency, harmony


Model-Matched Sounds

Cochleograms and sound files are shown for example stimuli from the model-matching experiment.

Authors:  Sam V. Norman-Haignere & Josh H. McDermott
Updated:  2018-12-03
Source:  http://mcdermottlab.mit.edu/svnh/model-matching/Stimuli_from_Model-Matching_Experiment.html
Keywords:  audition, sensory, auditory-cortex, neuroscience, English


Texture-Time Averaging

Audio files showing the adaptive and selective time-averaging of auditory scenes.

Authors:  Richard McWalter & Josh McDermott
Updated:  2018-05-07
Source:  http://mcdermottlab.mit.edu/textint.html
Keywords:  perception, sensory-input, audition


Schema Learning for the Cocktail Party Problem

The cocktail party problem requires listeners to infer individual sound sources from mixtures of sounds.

Authors:  Kevin J.P. Woods & Josh H. McDermott
Updated:  2018-04-03
Source:  http://mcdermottlab.mit.edu/schema_learning/index.html
Keywords:  sound-sources, audition, schema



A large-scale statistical analysis of real-world acoustics, revealing strong regularities of reverberation in natural scenes.

Authors:  James Traer & Josh H. McDermott
Updated:  2016-11-29
Source:  http://mcdermottlab.mit.edu/Reverb/ReverbSummary.html
Keywords:  sound, audition, periphery
