Automatic 3-D depth recovery from a single urban-scene image

Chen Yu Tseng*, Sheng-Jyh Wang

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Scopus citations

Abstract

In this paper, we focus on recovering a 3-D depth map from a single image via ground-vertical boundary analysis. First, we generate a ground map from the input image based on the spectral matting method, followed by spatial geometric inference. After that, we derive the depth information for the ground-vertical boundaries. Unlike conventional approaches, which generally use plane models to reconstruct a 3-D structure that fits the estimated boundaries, we infer a dense depth map by solving a Maximum-A-Posteriori (MAP) estimation problem. In this MAP problem, we use a generalized spatial-coherence prior model based on the Matting Laplacian (ML) matrix to provide a more robust solution for depth inference. We demonstrate that this approach produces more plausible depth maps for cluttered scenes.
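The abstract does not spell out the form of the MAP problem, but a common way such a formulation reduces in practice is to a quadratic objective: a Matting-Laplacian smoothness prior plus a data term that ties pixels on the ground-vertical boundaries to their estimated depths, which yields a sparse linear system. The sketch below illustrates that reduction only; it is not the authors' implementation, and the function name `infer_dense_depth`, the weight `lam`, and the quadratic form itself are assumptions made for illustration.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla


def infer_dense_depth(L, boundary_depth, boundary_mask, lam=100.0):
    """Infer a dense per-pixel depth vector from sparse boundary depths.

    Assumed MAP objective (illustrative, not from the paper):
        d* = argmin_d  d^T L d + lam * (d - d_hat)^T C (d - d_hat)
    where L is an (n x n) Matting-Laplacian-style spatial-coherence prior,
    d_hat holds depths estimated at ground-vertical boundary pixels,
    and C is a diagonal 0/1 indicator of those pixels.
    """
    # Diagonal data-term weights: 1 at boundary pixels, 0 elsewhere.
    C = sp.diags(boundary_mask.astype(np.float64))
    # Setting the gradient of the objective to zero gives the sparse
    # linear system (L + lam*C) d = lam * C d_hat.
    A = (L + lam * C).tocsc()
    b = lam * (boundary_mask * boundary_depth)
    return spla.spsolve(A, b)
```

With `L` precomputed from the input image (for example with a closed-form matting Laplacian) and the boundary depths flattened to per-pixel vectors, the returned vector can be reshaped to the image size to give the dense depth map.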

Original language: English
Title of host publication: 2012 IEEE Visual Communications and Image Processing, VCIP 2012
DOIs
State: Published - 1 Dec 2012
Event: 2012 IEEE Visual Communications and Image Processing, VCIP 2012 - San Diego, CA, United States
Duration: 27 Nov 2012 - 30 Nov 2012

Publication series

Name: 2012 IEEE Visual Communications and Image Processing, VCIP 2012

Conference

Conference: 2012 IEEE Visual Communications and Image Processing, VCIP 2012
Country: United States
City: San Diego, CA
Period: 27/11/12 - 30/11/12

Keywords

  • 3-D depth estimation
  • Matting Laplacian

