Automatic segmentation and summarization for videos taken with smart glasses

Yen Chia Chiu, Li Yi Liu, Tsai-Pei Wang*

*Corresponding author for this work

Research output: Contribution to journal › Article

Abstract

This paper discusses the automatic segmentation and extraction of important segments of videos taken with Google Glass. Using information from both the video images and the additional sensor data recorded concurrently, we devise methods that automatically divide a video into coherent segments and estimate the importance of each segment. Such information then enables the automatic generation of a video summary that contains only the important segments. The features used include color, image detail, motion, and speech. We then train multi-layer perceptrons for the two tasks (segmentation and importance estimation) according to human annotations. We also present a systematic evaluation procedure that compares the automatic segmentation and importance estimation results with those given by multiple users, and demonstrate the effectiveness of our approach.
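To make the two-task design concrete, below is a minimal sketch of the pipeline the abstract describes: one MLP for per-frame segment-boundary detection and one for per-frame importance estimation, followed by keeping the highest-scoring segments. The scikit-learn models, feature dimensionality, network sizes, and aggregation by segment mean are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of the two-MLP approach from the abstract (assumed configuration).
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for per-frame features (color, image detail, motion,
# speech activity); one row per video frame.
n_frames, n_features = 1000, 16
X = rng.normal(size=(n_frames, n_features))

# Human annotations (synthetic here): 1 where a segment boundary occurs,
# plus a per-frame importance score in [0, 1].
boundary_labels = (rng.random(n_frames) < 0.02).astype(int)
importance_scores = rng.random(n_frames)

# Task 1: segmentation, framed as per-frame boundary classification.
seg_mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
seg_mlp.fit(X, boundary_labels)

# Task 2: importance estimation, framed as per-frame regression; a segment's
# importance is aggregated (here, averaged) over its frames.
imp_mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
imp_mlp.fit(X, importance_scores)

# Summarization: cut at predicted boundaries, rank segments by predicted
# importance, and keep the top-ranked ones for the summary.
boundaries = np.flatnonzero(seg_mlp.predict(X))
segments = np.split(np.arange(n_frames), boundaries)
ranked = sorted(segments, key=lambda s: imp_mlp.predict(X[s]).mean(), reverse=True)
summary_frames = np.concatenate(ranked[:3])
```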

Original language: English
Pages (from-to): 12679-12699
Number of pages: 21
Journal: Multimedia Tools and Applications
Volume: 77
Issue number: 10
DOIs
State: Published - 1 May 2018

Keywords

  • Egocentric video
  • Google Glass
  • Smart glasses
  • Video abstraction
  • Video diary
  • Video segmentation
  • Video summarization
