Abstract
Statistical n-gram language modeling is popular for speech recognition and many other applications. The conventional n-gram suffers from the insufficiency of modeling long-distance language dependencies. This paper presents a novel approach focusing on mining long-distance word associations and incorporating these features into language models based on linear interpolation and maximum entropy (ME) principles. We highlight the discovery of the associations of multiple distant words from the training corpus. A mining algorithm is exploited to recursively merge the frequent word subsets and efficiently construct the set of association patterns. By combining the features of association patterns into n-gram models, the association pattern n-grams are estimated, with a special realization as the trigger-pair n-gram, where only the associations of two distant words are considered. In the experiments on Chinese language modeling, we find that the incorporation of association patterns significantly reduces the perplexities of n-gram models. The incorporation using ME outperforms that using linear interpolation. The association pattern n-gram is superior to the trigger-pair n-gram. The perplexities are further reduced using more association steps. Furthermore, the proposed association pattern n-grams not only elevate document classification accuracies but also improve speech recognition rates.
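The abstract's mining step (recursively merging frequent word subsets into association patterns) and the linear-interpolation combination can be illustrated with a minimal Apriori-style sketch. This is not the authors' exact algorithm; the function names, the support threshold, and the interpolation weight `lam` are illustrative assumptions.

```python
from itertools import combinations
from collections import Counter

def mine_association_patterns(documents, min_support=2, max_size=3):
    """Apriori-style sketch: start from frequent single words and
    recursively merge frequent k-word subsets into (k+1)-word
    candidate patterns, keeping those that meet min_support."""
    # Level 1: words frequent across documents (one count per document)
    counts = Counter(w for doc in documents for w in set(doc))
    level = {frozenset([w]) for w, c in counts.items() if c >= min_support}
    patterns = set(level)
    size = 1
    while level and size < max_size:
        # Merge pairs of frequent k-patterns into (k+1)-word candidates
        candidates = {a | b for a in level for b in level
                      if len(a | b) == size + 1}
        support = Counter()
        for doc in documents:
            words = set(doc)
            for cand in candidates:
                if cand <= words:      # pattern occurs in this document
                    support[cand] += 1
        level = {c for c, s in support.items() if s >= min_support}
        patterns |= level
        size += 1
    return patterns

def interpolated_prob(p_ngram, p_pattern, lam=0.7):
    """Linear interpolation of a baseline n-gram probability with an
    association-pattern probability (lam is a hypothetical weight)."""
    return lam * p_ngram + (1 - lam) * p_pattern
```

The ME alternative mentioned in the abstract would instead combine the pattern features as constraints in an exponential model rather than as a fixed convex mixture.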
Original language | English |
---|---|
Pages (from-to) | 1719-1728 |
Number of pages | 10 |
Journal | IEEE Transactions on Audio, Speech and Language Processing |
Volume | 14 |
Issue number | 5 |
DOIs | |
State | Published - 1 Sep 2006 |
Keywords
- Association pattern
- Data mining
- Language model
- Long distance association
- Maximum entropy and trigger pairs