Bayesian sparse topic model

Jen-Tzung Chien*, Ying Lan Chang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

20 Scopus citations

Abstract

This paper presents a new Bayesian sparse learning approach to select salient lexical features for sparse topic modeling. Bayesian learning based on latent Dirichlet allocation (LDA) is performed by incorporating spike-and-slab priors. In this sparse LDA (sLDA), the spike distribution is used to select salient words, while the slab distribution establishes the latent topic model over the selected relevant words. A variational inference procedure is developed to estimate the prior parameters of sLDA. In experiments on document modeling with LDA and sLDA, we find that the proposed sLDA not only reduces the model perplexity but also reduces memory and computation costs. The Bayesian feature selection method effectively identifies relevant topic words for building a sparse topic model.
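As a rough illustration of how a spike-and-slab prior can induce sparsity in a topic-word distribution, the following sketch uses a Bernoulli selection variable as the spike and a Dirichlet restricted to the selected words as the slab. The symbols (b_kv, pi_k, beta_k, alpha, gamma) are generic notation introduced here for illustration and are not necessarily the parameterization used in the paper.

\[
  b_{kv} \mid \pi_k \sim \mathrm{Bernoulli}(\pi_k), \qquad
  \beta_k \mid b_k \sim \mathrm{Dirichlet}\bigl(\alpha\,\mathbf{1}_{\{v:\, b_{kv}=1\}}\bigr),
\]
\[
  \theta_d \sim \mathrm{Dirichlet}(\gamma), \qquad
  z_{dn} \mid \theta_d \sim \mathrm{Mult}(\theta_d), \qquad
  w_{dn} \mid z_{dn}, \beta \sim \mathrm{Mult}(\beta_{z_{dn}}),
\]

where the Bernoulli spike decides whether word v is salient for topic k, the Dirichlet slab places probability mass only on the selected words, and the document-level generative steps follow standard LDA.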

Original language: English
Pages (from-to): 375-389
Number of pages: 15
Journal: Journal of Signal Processing Systems
Volume: 74
Issue number: 3
DOIs
State: Published - 1 Jan 2014

Keywords

  • Bayesian sparse learning
  • Feature selection
  • Topic model
