A NOVEL VIDEO ANNOTATION FRAMEWORK USING NEAR-DUPLICATE SEGMENT DETECTION

Chien Li Chou, Hua-Tsung Chen, Chun Chieh Hsu, Suh-Yin Lee

Research output: Conference contribution › peer-reviewed

Abstract

Traditional video annotation approaches focus on annotating keyframes, shots, or the whole video with semantic keywords. However, the extraction of keyframes and shots lacks semantic meaning, and it is hard to describe a video covering multiple topics with only a few keywords. Therefore, we propose a novel video annotation framework using near-duplicate segment detection, which not only preserves but also purifies the semantic meaning of the target annotation units. A hierarchical near-duplicate segment detection method is proposed to efficiently localize near-duplicate segments at the frame level. Videos containing near-duplicate segments are clustered, and the keyword distributions of the clusters are analyzed. Finally, keywords ranked according to their keyword distribution scores are annotated onto the obtained annotation units. Comprehensive experiments demonstrate the effectiveness of the proposed video annotation framework and near-duplicate segment detection method.
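The final ranking step described above can be illustrated with a minimal sketch. This is not the paper's actual scoring function: here the "keyword distribution score" is assumed to be simply the fraction of videos in a cluster whose metadata contains the keyword, and the cluster itself (videos sharing a near-duplicate segment) is given as input.

```python
from collections import Counter

def rank_cluster_keywords(cluster_keywords, top_k=3):
    """Rank keywords for one cluster of videos by a distribution score.

    cluster_keywords: list of keyword lists, one per video in the cluster.
    The score here is the fraction of videos in the cluster tagged with
    the keyword (an assumed stand-in for the paper's distribution score).
    """
    n_videos = len(cluster_keywords)
    # Count each keyword at most once per video.
    counts = Counter(kw for kws in cluster_keywords for kw in set(kws))
    scores = {kw: c / n_videos for kw, c in counts.items()}
    # Annotate the cluster's annotation unit with the top-ranked keywords.
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Hypothetical cluster: videos sharing a near-duplicate segment,
# each carrying its own user-supplied tags.
cluster = [
    ["goal", "soccer", "replay"],
    ["soccer", "goal", "highlight"],
    ["goal", "interview"],
]
print(rank_cluster_keywords(cluster, top_k=2))  # ['goal', 'soccer']
```

Keywords common across the cluster ("goal", "soccer") rank above video-specific ones, which is the intuition behind annotating the shared near-duplicate segment with the cluster's dominant keywords.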
Original language: English
Title of host publication: IEEE International Conference on Multimedia & Expo Workshops (ICMEW)
Publication status: Published - 2015