Video adaptation for small display based on content recomposition

Wen-Huang Cheng*, Chia Wei Wang, Ja Ling Wu

*Corresponding author for this work

Research output: Contribution to journal › Article

85 Scopus citations

Abstract

Browsing quality videos on small hand-held devices is a common scenario in pervasive media environments. In this paper, we propose a novel framework for video adaptation based on content recomposition. Our objective is to provide effective small-size videos that emphasize the important aspects of a scene while faithfully retaining the background context. This is achieved by explicitly separating the manipulation of different video objects. A generic video attention model is developed to extract user-interest objects, in which a high-level combination strategy is proposed for fusing three types of visual attention features: intensity, color, and motion. Based on knowledge of media aesthetics, a set of aesthetic criteria is presented, according to which these objects are reintegrated with the directly resized background to optimally match specific screen sizes. Experimental results demonstrate the efficiency and effectiveness of our approach.
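To make the described pipeline concrete, below is a minimal, illustrative Python sketch of the recomposition idea: three attention maps (intensity, color, motion) are fused into a saliency map, the most salient region is extracted as the user-interest object, and it is pasted back onto a directly resized background. The feature definitions, the maximum-based fusion, the saliency threshold, and the enlargement factor are all assumptions made for illustration; they are not the paper's actual attention model or aesthetic criteria.

```python
import numpy as np
import cv2


def _normalize(m):
    # Scale a map to [0, 1] so the three features are comparable before fusion.
    return (m - m.min()) / (m.max() - m.min() + 1e-6)


def intensity_map(frame):
    # Luminance contrast against a blurred surround (assumed definition).
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    surround = cv2.GaussianBlur(gray, (31, 31), 0)
    return np.abs(gray - surround)


def color_map(frame):
    # Red-green and blue-yellow opponency as a crude color-contrast cue.
    b, g, r = [c.astype(np.float32) for c in cv2.split(frame)]
    return np.abs(r - g) + np.abs(b - (r + g) / 2)


def motion_map(frame, prev_frame):
    # Plain frame differencing as a stand-in for true motion attention.
    g1 = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    g0 = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    return cv2.absdiff(g1, g0).astype(np.float32)


def recompose(frame, prev_frame, target_size, thresh=0.6):
    """Recompose `frame` for a small screen of `target_size` = (width, height)."""
    # 1) Fuse the three attention features into one saliency map
    #    (pixel-wise maximum is an assumption, not the paper's strategy).
    maps = [intensity_map(frame), color_map(frame), motion_map(frame, prev_frame)]
    saliency = _normalize(np.maximum.reduce([_normalize(m) for m in maps]))

    # 2) Background: direct resize of the whole frame to the target screen.
    background = cv2.resize(frame, target_size)

    # 3) Bounding box of the most salient region (the "user-interest object").
    ys, xs = np.where(saliency > thresh)
    if xs.size == 0:
        return background
    x0, x1, y0, y1 = xs.min(), xs.max(), ys.min(), ys.max()

    # 4) Reinsert the object, moderately enlarged, near its original relative
    #    position; the paper's aesthetic placement criteria are omitted here.
    sx, sy = target_size[0] / frame.shape[1], target_size[1] / frame.shape[0]
    scale = min(1.5 * sx,
                target_size[0] / max(1, x1 - x0),
                target_size[1] / max(1, y1 - y0))
    rw = max(1, int((x1 - x0) * scale))
    rh = max(1, int((y1 - y0) * scale))
    roi = cv2.resize(frame[y0:y1 + 1, x0:x1 + 1], (rw, rh))
    px = min(int(x0 * sx), target_size[0] - rw)
    py = min(int(y0 * sy), target_size[1] - rh)
    background[py:py + rh, px:px + rw] = roi
    return background
```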

Original language: English
Pages (from-to): 43-58
Number of pages: 16
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 17
Issue number: 1
DOIs
State: Published - 1 Jan 2007

Keywords

  • Content recomposition
  • Media aesthetics
  • Region of interest
  • Video adaptation
  • Visual attention model
