An RSVM based two-teachers-one-student semi-supervised learning algorithm

Chien Chung Chang*, Hsing Kuo Pao, Yuh-Jye Lee

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review



Based on the reduced support vector machine (RSVM), we propose a multi-view algorithm, two-teachers-one-student (2T1S), for semi-supervised learning (SSL). With RSVM, unlike typical multi-view methods, different reduced sets suggest different views in the kernel feature space rather than in the input space. No label information is needed to select the reduced sets, which makes it possible to apply RSVM to SSL. Our algorithm blends the concepts of co-training and consensus training. Through co-training, the classifiers generated from two views can "teach" the classifier from the remaining view, and this process is repeated for each teachers-student combination. Through consensus training, predictions that agree across more than one view give higher confidence when labeling unlabeled data. The results show that the proposed 2T1S achieves high cross-validation accuracy, even compared with training on all available label information.
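The co-training loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the three views are simulated here as disjoint input-feature subsets with a standard RBF-kernel SVM, whereas the paper derives its views from different reduced sets in the kernel feature space via RSVM. All names (`views`, `pools`, the round count) are illustrative choices.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Simulated data: 3 "views" as disjoint feature subsets (an assumption;
# the paper obtains views from reduced sets in kernel feature space).
X, y = make_classification(n_samples=300, n_features=12, random_state=0)
views = [list(range(0, 4)), list(range(4, 8)), list(range(8, 12))]

n_labeled = 30
L = np.arange(n_labeled)            # labeled indices
U = np.arange(n_labeled, len(X))    # "unlabeled" indices (labels hidden)

# Each view keeps its own labeled pool, which grows with pseudo-labels.
pools = [(list(L), list(y[L])) for _ in views]

for _ in range(5):                  # a few co-training rounds
    clfs = [SVC(kernel="rbf", gamma="scale").fit(np.asarray(X[idx])[:, v], lab)
            for v, (idx, lab) in zip(views, pools)]
    preds = [clf.predict(X[U][:, v]) for clf, v in zip(clfs, views)]
    for s in range(3):              # each view takes a turn as the student
        t1, t2 = [t for t in range(3) if t != s]
        agree = preds[t1] == preds[t2]   # consensus of the two teachers
        idx, lab = pools[s]
        for u, p in zip(U[agree], preds[t1][agree]):
            if u not in idx:        # student receives the agreed pseudo-label
                idx.append(int(u))
                lab.append(int(p))

# Accuracy of one student's classifier on the originally unlabeled points
acc = float(np.mean(clfs[0].predict(X[U][:, views[0]]) == y[U]))
print(len(pools[0][0]) > n_labeled)  # pseudo-labels were added
```

The key 2T1S ingredient is that a point is pseudo-labeled for the student only when both teachers agree, so label noise from any single view is filtered by consensus before it propagates.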

Original language: English
Pages (from-to): 57-69
Number of pages: 13
Journal: Neural Networks
State: Published - 1 Jan 2012


  • Co-training
  • Consensus training
  • Multi-view
  • Reduced set
  • Semi-supervised learning
  • Support vector machines

