Multiresolution spectrotemporal analysis of complex sounds

Tai-Shih Chi*, Powen Ru, Shihab A. Shamma

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

A computational model of auditory analysis is described that is inspired by psychoacoustical and neurophysiological findings in early and central stages of the auditory system. The model provides a unified multiresolution representation of the spectral and temporal features likely critical in the perception of sound. Simplified, more specifically tailored versions of this model have already been validated by successful application in the assessment of speech intelligibility [Elhilali et al., Speech Commun. 41(2-3), 331-348 (2003); Chi et al., J. Acoust. Soc. Am. 106, 2719-2732 (1999)] and in explaining the perception of monaural phase sensitivity [R. Carlyon and S. Shamma, J. Acoust. Soc. Am. 114, 333-348 (2003)]. Here we provide a more complete mathematical formulation of the model, illustrating how complex signals are transformed through various stages of the model, and relating it to comparable existing models of auditory processing. Furthermore, we outline several reconstruction algorithms to resynthesize the sound from the model output so as to evaluate the fidelity of the representation and the contribution of different features and cues to the sound percept.
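To make the abstract's "multiresolution representation" concrete, the sketch below implements the general rate-scale idea: a time-frequency representation is analyzed by a bank of two-dimensional modulation filters, each tuned to a temporal rate (in Hz) and a spectral scale (in cycles/octave). This is an illustrative sketch, not the authors' model: the paper builds on a cochlear filter bank and specific cortical seed filters, whereas here an ordinary STFT spectrogram and separable Gaussian passbands stand in, and the function name rate_scale_analysis and every parameter value are assumptions made for this example.

```python
# A minimal sketch of the rate-scale idea, NOT the paper's model: an STFT
# spectrogram stands in for the cochlear/auditory spectrogram, and separable
# Gaussian passbands in the 2D modulation domain stand in for the paper's
# seed filters. All names and parameter values are illustrative assumptions.
import numpy as np
from scipy.signal import stft

def rate_scale_analysis(x, fs, rates=(2.0, 8.0, 32.0), scales=(0.5, 2.0, 8.0)):
    """Magnitude outputs of a toy 2D spectrotemporal modulation filter bank.

    rates  : temporal modulation centers in Hz
    scales : spectral modulation centers in cycles/octave
    """
    nperseg, hop = 512, 128
    f, t, S = stft(x, fs=fs, nperseg=nperseg, noverlap=nperseg - hop)

    # Resample onto a log-frequency axis so "scale" is in cycles/octave,
    # roughly mimicking a tonotopic axis (assumes fs covers 100 Hz-6.4 kHz).
    bins_per_oct, n_oct, f_lo = 24, 6, 100.0
    log_f = f_lo * 2.0 ** (np.arange(n_oct * bins_per_oct) / bins_per_oct)
    spec = np.stack([np.interp(log_f, f, np.abs(S[:, k]))
                     for k in range(S.shape[1])], axis=1)
    spec = np.log1p(spec)  # crude compressive nonlinearity

    # 2D FFT: axis 0 -> spectral modulation (cyc/oct), axis 1 -> temporal (Hz).
    F = np.fft.fft2(spec)
    omega = np.fft.fftfreq(spec.shape[0], d=1.0 / bins_per_oct)  # cyc/oct
    w = np.fft.fftfreq(spec.shape[1], d=hop / fs)                # Hz

    out = {}
    for rate in rates:
        for scale in scales:
            # Gaussian passband around (scale, rate); the full model also
            # separates upward/downward sweeps by selecting FFT quadrants.
            g_s = np.exp(-((np.abs(omega) - scale) ** 2) / (2 * (scale / 2) ** 2))
            g_r = np.exp(-((np.abs(w) - rate) ** 2) / (2 * (rate / 2) ** 2))
            out[(rate, scale)] = np.abs(np.fft.ifft2(F * (g_s[:, None] * g_r[None, :])))
    return log_f, t, out

if __name__ == "__main__":
    # A 440 Hz tone with 4 Hz amplitude modulation: the 4 Hz rate channel
    # should carry noticeably more energy than the 32 Hz channel.
    fs = 16000
    tt = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 440 * tt) * (1 + 0.5 * np.sin(2 * np.pi * 4 * tt))
    _, _, ch = rate_scale_analysis(x, fs, rates=(4.0, 32.0), scales=(1.0,))
    for key, v in sorted(ch.items()):
        print(key, round(float(v.mean()), 4))
```

Note that the paper goes further and derives reconstruction algorithms that invert the representation back to sound; inverting this toy version would additionally require phase recovery, which the sketch above does not attempt.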

Original language: English
Pages (from-to): 887-906
Number of pages: 20
Journal: Journal of the Acoustical Society of America
Volume: 118
Issue number: 2
DOIs
State: Published - 1 Aug 2005
