A comparative study on attention-based rate adaptation for scalable video coding

Chia Ming Tsai*, Chia Wen Lin, Weisi Lin, Wen-Hsiao Peng

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Scopus citations

Abstract

We conduct subjective tests to evaluate the performance of scalable video coding with different spatial-domain bit-allocation methods, visual attention models, and motion feature extractors from the literature. For spatial-domain bit allocation, we use the selective enhancement and quality layer assignment methods. For characterizing visual attention, we use the motion attention model and the perceptual quality significance map. For motion features, we adopt motion vectors from hierarchical B-picture coding and optical flow. Experimental results show that a more accurate visual attention model leads to better perceptual quality. When combined with a visual attention model, the selective enhancement method achieves better subjective quality than quality layer assignment when an ROI receives a sufficient bit allocation and its texture is not complex. The quality layer assignment method is better suited to region-wise quality enhancement because of its frame-based allocation nature.
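To make the comparison concrete, the core idea behind attention-driven spatial bit allocation can be sketched as follows. This is a minimal illustrative example, not the paper's actual selective enhancement or quality layer assignment algorithms: it assumes hypothetical per-macroblock attention scores (as a saliency model might produce) and simply distributes a frame's bit budget in proportion to them.

```python
# Hypothetical sketch (not the paper's implementation): attention-weighted
# spatial bit allocation, in the spirit of selective enhancement, where
# macroblocks with higher visual-attention scores receive more bits.

def allocate_bits(attention, frame_budget):
    """Distribute a frame's bit budget over macroblocks in proportion
    to their non-negative attention scores."""
    total = sum(attention)
    if total == 0:
        # No salient region detected: fall back to uniform allocation.
        return [frame_budget / len(attention)] * len(attention)
    return [frame_budget * a / total for a in attention]

# Example: 4 macroblocks, 1000-bit budget; the ROI (score 0.6)
# receives the largest share.
bits = allocate_bits([0.1, 0.6, 0.2, 0.1], 1000)
print(bits)  # → [100.0, 600.0, 200.0, 100.0]
```

In this toy form, the ROI's share grows with its attention score, which mirrors the abstract's observation that selective enhancement helps most when the ROI receives a sufficient bit allocation.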

Original language: English
Title of host publication: 2009 IEEE International Conference on Image Processing, ICIP 2009 - Proceedings
Publisher: IEEE Computer Society
Pages: 969-972
Number of pages: 4
ISBN (Print): 9781424456543
DOIs
State: Published - 1 Jan 2009
Event: 2009 IEEE International Conference on Image Processing, ICIP 2009 - Cairo, Egypt
Duration: 7 Nov 2009 - 10 Nov 2009

Publication series

Name: Proceedings - International Conference on Image Processing, ICIP
ISSN (Print): 1522-4880

Conference

Conference: 2009 IEEE International Conference on Image Processing, ICIP 2009
Country: Egypt
City: Cairo
Period: 7/11/09 - 10/11/09

Keywords

  • Perceptual coding
  • Scalable video coding
  • Video adaptation
  • Visual attention model

