A robust adaptive speech enhancement system for vehicular applications

Jwu-Sheng Hu*, Chieh Cheng Cheng, Wei Han Liu, Chia Hsing Yang

*Corresponding author for this work

Research output: Contribution to journal › Article

10 Scopus citations


This work proposes and implements a novel and robust adaptive speech enhancement system containing both time-domain and frequency-domain beamformers based on an H∞ filtering approach, which provides a clean, undisturbed speech waveform and improves the speech recognition rate in vehicle environments. Microphone-array data-acquisition hardware is also designed and implemented for the proposed system. Traditional multidimensional noise reduction methods require mutually matched microphones, but this requirement is impractical for consumer applications from a cost standpoint. To overcome this issue, the proposed system adapts to the mismatch dynamics and maintains the theoretical performance, allowing mismatched microphones to be used in an array. Furthermore, to achieve high speech recognition performance, the speech recognizer usually must be retrained for each vehicle environment because of differing noise characteristics and channel effects. Because the H∞ filtering approach makes no assumptions about noise and disturbance, the proposed system is robust to modeling error in the channel recovery process. Consequently, experimental results in real vehicles show that the proposed frequency-domain beamformer delivers satisfactory speech recognition performance without retraining the speech recognizer.
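The robustness claim above rests on a known property of H∞-optimal adaptive filtering: the (normalized) LMS update minimizes the worst-case energy gain from disturbances to estimation error, so no statistical noise model is assumed. As an illustration only (not the authors' implementation — the function name `lms_enhance` and the parameters `n_taps` and `mu` are hypothetical), a minimal adaptive noise canceller built on this H∞-optimal update might look like:

```python
import numpy as np

def lms_enhance(primary, reference, n_taps=16, mu=0.05):
    """Illustrative adaptive noise canceller: estimate the noise
    component of `primary` from the correlated `reference` channel
    and subtract it. The normalized LMS step used here is known to
    be H-infinity optimal, i.e. robust to unmodeled disturbances,
    which is the property the paper's approach relies on."""
    w = np.zeros(n_taps)           # adaptive filter weights
    out = np.zeros(len(primary))   # enhanced output (error signal)
    for n in range(n_taps - 1, len(primary)):
        x = reference[n - n_taps + 1:n + 1][::-1]  # recent reference taps
        e = primary[n] - w @ x                     # enhanced sample
        w += mu * e * x / (x @ x + 1e-8)           # normalized LMS update
        out[n] = e
    return out
```

In a two-microphone setup, `primary` would carry speech plus noise that reached the cabin through an unknown channel, while `reference` observes the noise source; after convergence the output retains mostly the speech component.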

Original language: English
Pages (from-to): 1069-1077
Number of pages: 9
Journal: IEEE Transactions on Consumer Electronics
Issue number: 3
State: Published - 1 Aug 2006


  • Automatic speech recognition
  • H∞ filtering
  • Human machine interaction
  • Speech enhancement

