Maximum-margin sparse coding

Chien-Liang Liu, Wen Hoar Hsaio*, Bin Xiao, Chun Yu Chen, Wei Liang Wu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

This work devises a maximum-margin sparse coding algorithm that jointly considers reconstruction loss and hinge loss in the model. The sparse representation combined with a maximum-margin constraint is analogous to the kernel trick and maximum-margin properties of the support vector machine (SVM), providing a basis for the proposed algorithm to perform well in classification tasks. The key idea behind the proposed method is to use labeled and unlabeled data to learn discriminative representations and model parameters simultaneously, making it easier to classify data in the new space. We propose to use block coordinate descent to learn all the components of the proposed model and give detailed derivations of the update rules for the model variables. A theoretical analysis of the convergence of the proposed MMSC algorithm is provided based on Zangwill's global convergence theorem. Additionally, most previous studies on dictionary learning suggest using an overcomplete dictionary to improve classification performance, but this is computationally intensive when the dimension of the input data is large. We conduct experiments on several real data sets, including the Extended YaleB, AR face, and Caltech101 data sets. The experimental results indicate that the proposed algorithm outperforms the comparison algorithms without using an overcomplete dictionary, providing flexibility in dealing with high-dimensional data sets.
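The abstract describes an objective that combines a reconstruction term, a sparsity penalty, and a hinge loss, optimized by block coordinate descent over the dictionary, the sparse codes, and the classifier. The sketch below is not the paper's derived update rules; it is a simplified, hypothetical illustration of that alternating scheme (an ISTA step for the codes, a regularized least-squares step for the dictionary, and a subgradient step for a linear classifier on the codes). All function names and step sizes are assumptions for illustration only.

```python
import numpy as np

def soft_threshold(Z, t):
    # Proximal operator of the l1 norm; zeroes out small entries (sparsity).
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def mmsc_sketch(X, y, n_atoms=8, lam=0.1, C=1.0, n_iter=50, seed=0):
    """Toy block-coordinate-descent sketch (NOT the paper's exact updates) of
        min_{D,A,w} ||X - D A||_F^2 + lam*||A||_1 + C * sum_i hinge(y_i * w^T a_i)
    X: (d, n) data matrix, y: (n,) labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    D = rng.standard_normal((d, n_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True)   # unit-norm atoms
    A = np.zeros((n_atoms, n))
    w = np.zeros(n_atoms)
    for _ in range(n_iter):
        # --- block A (sparse codes): ISTA step on the reconstruction term ---
        step = 1.0 / (np.linalg.norm(D, 2) ** 2 + 1e-8)  # 1 / Lipschitz const.
        G = D.T @ (D @ A - X)
        A = soft_threshold(A - step * G, step * lam)
        # Hinge subgradient on codes: margin violators are pushed toward w.
        viol = y * (w @ A) < 1.0
        A[:, viol] += step * C * np.outer(w, y[viol])
        # --- block D (dictionary): regularized least squares + renormalize ---
        D = X @ A.T @ np.linalg.pinv(A @ A.T + 1e-6 * np.eye(n_atoms))
        D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-8)
        # --- block w (classifier): subgradient step on the hinge loss ---
        viol = y * (w @ A) < 1.0
        grad_w = -C * (A[:, viol] @ y[viol])
        w -= 0.1 * grad_w
    return D, A, w
```

Because the classifier is trained on the codes while the codes are pulled toward the classifier's margin, the learned representation becomes discriminative, which is the joint-learning idea the abstract emphasizes; the paper additionally proves convergence of its actual updates via Zangwill's theorem, which this toy sketch does not attempt.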

Original language: English
Pages (from-to): 340-350
Number of pages: 11
Journal: Neurocomputing
Volume: 238
DOIs
State: Published - 17 May 2017

Keywords

  • Block coordinate descent
  • Maximum-margin
  • Sparse coding
