This paper presents a new Bayesian sparse learning approach to select salient lexical features and build a sparse topic model (sTM). Bayesian learning is performed by incorporating spike-and-slab priors, so that words with spiky distributions are filtered out and those with slab distributions are selected as features for estimating a topic model (TM) based on latent Dirichlet allocation. A variational inference procedure is developed to train the sTM parameters. In experiments on document modeling with the TM and sTM, we find that the proposed sTM not only reduces the model perplexity but also reduces the memory and computation costs. The Bayesian feature selection method effectively identifies the representative topic words for building a sparse learning model.
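The core idea of spike-and-slab selection can be illustrated with a minimal sketch. This is not the paper's variational procedure; it assumes a toy two-component Gaussian mixture in which each word weight is explained either by a narrow "spike" at zero (noise, filtered out) or a broad "slab" (salient, selected), and a word is kept when its posterior probability of belonging to the slab exceeds a threshold:

```python
import math

def gaussian_pdf(x, mean, std):
    """Density of a univariate Gaussian at x."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def slab_probability(w, pi=0.3, spike_std=0.05, slab_std=1.0):
    """Posterior probability that weight w came from the broad slab component.

    pi is the (assumed) prior probability of the slab; spike_std and slab_std
    are illustrative widths for the spike and slab components.
    """
    spike = (1.0 - pi) * gaussian_pdf(w, 0.0, spike_std)
    slab = pi * gaussian_pdf(w, 0.0, slab_std)
    return slab / (spike + slab)

def select_features(weights, threshold=0.5):
    """Keep words whose slab responsibility exceeds the threshold."""
    return [slab_probability(w) > threshold for w in weights]

# Toy word weights: three near zero (spiky/noise), two clearly nonzero (salient).
weights = [0.01, -0.03, 0.02, 1.2, -0.8]
mask = select_features(weights)
print(sum(mask), "of", len(weights), "words selected")  # → 2 of 5 words selected
```

In the paper's setting the selection is done jointly with topic inference rather than by this simple thresholding, but the mechanism is the same: spike-dominated words are pruned, which is what shrinks the model and lowers memory and computation costs.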