Clustering Trajectories in Heterogeneous Representations for Video Event Detection

Wei Cheng Wang, Yen-Yu Lin, Hsin Wei Cheng, Chun Rong Huang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Trajectories are robust and widely used in surveillance video event analysis because they encode spatial and temporal evidence simultaneously. Clustering the trajectories in a video can therefore reveal representative events, so how trajectories are represented is essential to video event detection. However, no single representation of trajectories suffices for increasingly complex video analysis tasks. To address this issue, this paper presents a hierarchical clustering algorithm for grouping trajectories in multiple heterogeneous representations. Our method can not only group trajectories of highly similar events but also distinguish rare events from dominant ones. Experimental results show that our method retrieves both dominant and rare events more accurately than state-of-the-art methods.
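As a rough illustration of the general idea (not the authors' algorithm), one can build a pairwise-distance matrix per trajectory representation, normalize and combine them, and run agglomerative (hierarchical) clustering on the combined distance. The toy data, the two representations (raw coordinates and first-order differences), and all parameter choices below are assumptions for the sketch:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
# Toy data: 20 "trajectories", each a flattened sequence of 20 coordinates.
trajs = rng.normal(size=(20, 20))
trajs[10:] += 5.0  # offset the second half to form a separable event group

# Two hypothetical heterogeneous representations of the same trajectories:
# raw coordinates and first-order differences (a crude motion descriptor).
rep_a = trajs
rep_b = np.diff(trajs, axis=1)

# One condensed distance matrix per representation; normalize each so that
# no single representation dominates, then sum into a combined distance.
d_a = pdist(rep_a)
d_b = pdist(rep_b)
combined = d_a / d_a.max() + d_b / d_b.max()

# Hierarchical (average-linkage) clustering on the combined distance,
# cut into two flat clusters.
Z = linkage(combined, method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
```

With the offset used above, the two synthetic groups separate cleanly; in practice the normalization and linkage choices would be tuned to the actual trajectory descriptors.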

Original language: English
Title of host publication: 2018 IEEE International Conference on Image Processing, ICIP 2018 - Proceedings
Publisher: IEEE Computer Society
Pages: 933-937
Number of pages: 5
ISBN (Electronic): 9781479970612
DOIs
State: Published - 29 Aug 2018
Event: 25th IEEE International Conference on Image Processing, ICIP 2018 - Athens, Greece
Duration: 7 Oct 2018 - 10 Oct 2018

Publication series

Name: Proceedings - International Conference on Image Processing, ICIP
ISSN (Print): 1522-4880

Conference

Conference: 25th IEEE International Conference on Image Processing, ICIP 2018
Country: Greece
City: Athens
Period: 7/10/18 - 10/10/18

Keywords

  • Event detection
  • Multiple feature representations
  • Trajectory clustering
  • Video surveillance
