Semi-Supervised Text Classification with Universum Learning

Chien-Liang Liu, Wen Hoar Hsaio, Chia-Hoang Lee, Tao Hsing Chang, Tsung Hsun Kuo

Research output: Contribution to journal › Article

38 Scopus citations

Abstract

Universum, a collection of nonexamples that do not belong to any class of interest, has become a new research topic in machine learning. This paper devises a semi-supervised learning with Universum algorithm based on the boosting technique, focusing on situations where only a few labeled examples are available. We also show that the training error of AdaBoost with Universum is bounded by the product of the normalization factors, and that the training error drops exponentially fast when each weak classifier is slightly better than random guessing. Finally, the experiments use four data sets with several combinations. Experimental results indicate that the proposed algorithm benefits from Universum examples and outperforms several alternative methods, particularly when insufficient labeled examples are available. When the number of labeled examples is too small to estimate the parameters of the classification functions, the Universum can be used to approximate the prior distribution of the classification functions. The experimental results can be explained using the concept of Universum introduced by Vapnik, that is, Universum examples implicitly specify a prior distribution on the set of classification functions.
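The abstract's exact algorithm is not reproduced here, but the idea of boosting with Universum data can be sketched. The following is a minimal, hypothetical illustration, not the authors' method: it runs standard binary AdaBoost with decision stumps, and incorporates each Universum example by adding it twice, once with label +1 and once with label -1 (following Vapnik's view of Universum points as maximally contradictory), so the weight update, with its normalization factor Z_t, penalizes confident predictions on Universum data. All function names and the duplication scheme are assumptions for illustration.

```python
import numpy as np

def stump_predict(X, feat, thresh, polarity):
    # Decision stump: threshold a single feature, output +1 or -1.
    return polarity * np.where(X[:, feat] <= thresh, 1.0, -1.0)

def fit_stump(X, y, w):
    # Exhaustively pick the stump minimizing the weighted error.
    best = (1.0, 0, 0.0, 1)
    for feat in range(X.shape[1]):
        for thresh in np.unique(X[:, feat]):
            for polarity in (1, -1):
                pred = stump_predict(X, feat, thresh, polarity)
                err = np.sum(w[pred != y])
                if err < best[0]:
                    best = (err, feat, thresh, polarity)
    return best

def adaboost_universum(X_lab, y_lab, X_univ, n_rounds=10):
    # Hypothetical scheme: each Universum example appears twice with
    # contradictory labels, so any confident prediction on it is
    # penalized in the exponential weight update.
    X = np.vstack([X_lab, X_univ, X_univ])
    y = np.concatenate([y_lab, np.ones(len(X_univ)), -np.ones(len(X_univ))])
    w = np.ones(len(X)) / len(X)
    ensemble = []
    for _ in range(n_rounds):
        err, feat, thresh, polarity = fit_stump(X, y, w)
        if err >= 0.5:  # weak learner no better than random guessing
            break
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)  # classifier weight
        pred = stump_predict(X, feat, thresh, polarity)
        w *= np.exp(-alpha * y * pred)  # AdaBoost weight update
        w /= w.sum()                    # normalization factor Z_t
        ensemble.append((alpha, feat, thresh, polarity))
    return ensemble

def predict(ensemble, X):
    # Sign of the weighted vote of all stumps in the ensemble.
    score = np.zeros(len(X))
    for alpha, feat, thresh, polarity in ensemble:
        score += alpha * stump_predict(X, feat, thresh, polarity)
    return np.sign(score)

# Toy usage: two separable classes plus Universum points near the boundary.
rng = np.random.default_rng(0)
X_lab = np.vstack([rng.normal(2, 0.5, (20, 2)), rng.normal(-2, 0.5, (20, 2))])
y_lab = np.concatenate([np.ones(20), -np.ones(20)])
X_univ = rng.normal(0, 0.5, (10, 2))  # nonexamples near the decision boundary
ens = adaboost_universum(X_lab, y_lab, X_univ, n_rounds=20)
acc = np.mean(predict(ens, X_lab) == y_lab)
```

Because each Universum point carries both labels, any stump misclassifies exactly one of its two copies, so Universum data contributes a fixed baseline to the weighted error and pushes the learner toward functions that stay uncertain on the nonexamples.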

Original language: English
Article number: 7051235
Pages (from-to): 462-473
Number of pages: 12
Journal: IEEE Transactions on Cybernetics
Volume: 46
Issue number: 2
DOIs
State: Published - 1 Feb 2016

Keywords

  • AdaBoost
  • learning with Universum
  • text classification

