Quasi-Bayes linear regression for sequential learning of hidden Markov models

Jen-Tzung Chien*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

20 Scopus citations


This paper presents an online/sequential linear regression adaptation framework for hidden Markov model (HMM) based speech recognition. Our aim is to sequentially improve a speaker-independent speech recognition system so that it handles nonstationary environments via linear regression adaptation of HMMs. A quasi-Bayes linear regression (QBLR) algorithm is developed to perform the sequential adaptation, where the regression matrix is estimated using quasi-Bayes (QB) theory. In the estimation, we specify the prior density of the regression matrix as a matrix variate normal distribution and derive a pooled posterior density belonging to the same distribution family. Accordingly, the optimal regression matrix can be calculated easily. The reproducible prior/posterior pair also provides a meaningful mechanism for sequential learning of the prior statistics: at each sequential epoch, only the updated prior statistics and the currently observed data are required for adaptation. The proposed QBLR is a general framework with maximum likelihood linear regression (MLLR) and maximum a posteriori linear regression (MAPLR) as special cases. Experiments on supervised and unsupervised speaker adaptation demonstrate that sequential adaptation using QBLR is efficient and asymptotically approaches the recognition performance of batch learning using MLLR and MAPLR.
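The core mechanism described in the abstract — a conjugate prior on the regression parameters whose posterior stays in the same family, so each epoch's posterior becomes the next epoch's prior — can be sketched with ordinary Bayesian linear regression. This is an illustrative simplification, not the paper's full matrix-variate QBLR for HMM mean vectors: the function names and the fixed noise variance `sigma2` are assumptions for the sketch.

```python
# Sketch of the reproducible prior/posterior (conjugate) update behind
# sequential Bayesian regression. A Gaussian prior N(m0, P0) on the
# weight vector is folded together with new data (X, y); the resulting
# Gaussian posterior N(mn, Pn) serves as the prior for the next epoch,
# so only the updated statistics and current data are ever needed.
import numpy as np

def sequential_update(m0, P0, X, y, sigma2=1.0):
    """One sequential epoch of the conjugate Gaussian update."""
    P0_inv = np.linalg.inv(P0)
    # Posterior covariance: combine prior precision with data precision.
    Pn = np.linalg.inv(P0_inv + X.T @ X / sigma2)
    # Posterior mean, which is also the MAP point estimate.
    mn = Pn @ (P0_inv @ m0 + X.T @ y / sigma2)
    return mn, Pn

# Two epochs of data from y = 2x + noise; the posterior of epoch 1
# is reused as the prior of epoch 2.
rng = np.random.default_rng(0)
m, P = np.zeros(1), np.eye(1) * 10.0  # broad initial prior
for _ in range(2):
    X = rng.normal(size=(50, 1))
    y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=50)
    m, P = sequential_update(m, P, X, y, sigma2=0.01)
print(m)  # posterior mean approaches the true slope 2.0
```

With a very broad prior the first-epoch estimate behaves like maximum likelihood, while an informative prior pulls it toward MAP-style estimates, mirroring how MLLR and MAPLR arise as special cases of the sequential framework.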

Original language: English
Pages (from-to): 268-278
Number of pages: 11
Journal: IEEE Transactions on Speech and Audio Processing
Issue number: 5
State: Published - 1 Jul 2002


Keywords:
  • Conjugate prior distribution
  • Linear regression model
  • Quasi-Bayes estimate
  • Sequential learning
  • Speaker adaptation

