In real-world information systems, unlabeled data are abundant while labeled data are scarce. It is challenging to construct an adaptive model that classifies a large number of documents spanning different domains. Classifiers trained on a source domain often perform poorly on test data from a target domain due to the domain mismatch. In this study, we build a topic-bridged latent Dirichlet allocation (TLDA) model from a variety of labeled and unlabeled documents and perform transfer learning for document classification. The shift in word distributions across domains is compensated by bridging the latent topics of the source and target data, which are drawn from Dirichlet priors. A variational inference procedure is performed for semi-supervised learning. In experiments on text categorization with the 20 Newsgroups dataset, the proposed TLDA model achieved higher classification performance than the other methods.
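The core idea of bridging domains through shared latent topics can be illustrated with a minimal sketch. This is not the TLDA model itself: it substitutes scikit-learn's standard `LatentDirichletAllocation` (trained on the union of source and target documents so both domains share one topic space) for the paper's joint variational procedure, and the toy corpora and labels below are invented placeholders, not the 20 Newsgroups splits.

```python
# Hypothetical sketch: a shared topic space as a bridge between domains.
# Fit LDA on source + target documents together, train a classifier on
# source topic proportions, and predict labels for target documents.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

# Toy corpora standing in for labeled source and unlabeled target data.
source_docs = ["the engine and wheels of the car",
               "car engine repair and fuel",
               "players scored in the hockey game",
               "the hockey team won the game"]
source_labels = [0, 0, 1, 1]          # 0 = autos, 1 = hockey (illustrative)
target_docs = ["fuel and engine of a motorcycle",
               "the baseball team played a game"]

vec = CountVectorizer()
X_all = vec.fit_transform(source_docs + target_docs)   # shared vocabulary

# Latent topics inferred over the combined corpus act as the domain bridge;
# in TLDA these topics are drawn from Dirichlet priors tied across domains.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X_all)

theta_src = lda.transform(vec.transform(source_docs))  # topic proportions
theta_tgt = lda.transform(vec.transform(target_docs))

# Classify in topic space, where source and target are directly comparable.
clf = LogisticRegression().fit(theta_src, source_labels)
pred = clf.predict(theta_tgt)
print(pred)
```

Because both domains are projected into the same low-dimensional topic space, the classifier trained on source labels can be applied to target documents despite their differing word distributions.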