Static2Dynamic: Video Inference From a Deep Glimpse

Yu-Ying Yeh, Yen-Cheng Liu, Wei-Chen Chiu, Yu-Chiang Frank Wang

Research output: Contribution to journal › Article

Abstract

In this article, we address a novel and challenging task of video inference, which aims to infer video sequences from given non-consecutive video frames. Taking such frames as anchor inputs, our focus is to recover possible video sequence outputs that are consistent with the observed anchor frames at their associated time steps. With the proposed Stochastic and Recurrent Conditional GAN (SR-cGAN), we are able to preserve visual content across video frames, with the additional ability to handle possible temporal ambiguity. In the experiments, we show that our SR-cGAN not only produces preferable video inference results but can also be applied to the related tasks of video generation, video interpolation, video inpainting, and video prediction.
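The abstract gives only a high-level description of SR-cGAN. Below is a minimal, hypothetical PyTorch sketch of the general idea it describes: a recurrent generator conditioned on encoded anchor frames, with a stochastic latent code injected to model temporal ambiguity. All names and design choices here (RecurrentGenerator, feat_dim, z_dim, anchor_times, the nearest-preceding-anchor conditioning) are assumptions for illustration, not the authors' architecture; the discriminator and the adversarial training loop are omitted.

    # Hypothetical sketch only; the paper's actual SR-cGAN architecture may differ.
    import torch
    import torch.nn as nn

    class RecurrentGenerator(nn.Module):
        """Generates a video sequence conditioned on encoded anchor frames and a
        per-sequence stochastic code z (intended to capture temporal ambiguity)."""
        def __init__(self, frame_channels=3, feat_dim=128, z_dim=32, hidden_dim=256):
            super().__init__()
            # Frame encoder: maps a 64x64 anchor frame to a feature vector.
            self.encoder = nn.Sequential(
                nn.Conv2d(frame_channels, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, feat_dim, 4, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            # Recurrent core: evolves a hidden state across time steps.
            self.rnn = nn.GRUCell(feat_dim + z_dim, hidden_dim)
            # Frame decoder: maps the hidden state back to a 64x64 frame.
            self.decoder = nn.Sequential(
                nn.Linear(hidden_dim, 128 * 8 * 8), nn.ReLU(),
                nn.Unflatten(1, (128, 8, 8)),
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, frame_channels, 4, stride=2, padding=1),
                nn.Tanh(),
            )

        def forward(self, anchors, anchor_times, seq_len, z):
            # anchors: (B, K, C, 64, 64) non-consecutive observed frames
            # anchor_times: K increasing time indices where anchors are observed
            # z: (B, z_dim) stochastic code sampled from N(0, I)
            B, K = anchors.shape[:2]
            feats = self.encoder(anchors.view(B * K, *anchors.shape[2:])).view(B, K, -1)
            h = torch.zeros(B, self.rnn.hidden_size, device=anchors.device)
            frames, k = [], 0
            for t in range(seq_len):
                # Condition each step on the nearest preceding anchor's features.
                if k + 1 < K and t >= anchor_times[k + 1]:
                    k += 1
                h = self.rnn(torch.cat([feats[:, k], z], dim=1), h)
                frames.append(self.decoder(h))
            return torch.stack(frames, dim=1)  # (B, T, C, 64, 64)

Sampling different z values for the same anchors would then yield different plausible in-between sequences, which is the role the abstract attributes to the stochastic component:

    G = RecurrentGenerator()
    anchors = torch.randn(2, 3, 3, 64, 64)  # 2 sequences, 3 anchor frames each
    video = G(anchors, anchor_times=[0, 7, 15], seq_len=16, z=torch.randn(2, 32))
    print(video.shape)  # torch.Size([2, 16, 3, 64, 64])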

Keywords

  • adversarial learning
  • Generative adversarial networks
  • generative model
  • Interpolation
  • Stochastic processes
  • Task analysis
  • video inference
  • Video sequences
  • Video synthesis
  • Visualization
