Shape-from-focus depth reconstruction with a spatial consistency model

Chen Yu Tseng, Sheng-Jyh Wang

Research output: Contribution to journal › Article › peer-review

9 Scopus citations

Abstract

This paper presents a maximum a posteriori (MAP) framework that incorporates a spatial consistency prior model for depth reconstruction in the shape-from-focus (SFF) process. Existing SFF techniques, which reconstruct a dense 3-D depth map from multifocus image frames, usually perform poorly over low-contrast regions and usually require a large number of frames to achieve satisfactory results. To overcome these problems, a new depth reconstruction process is proposed that estimates the depth values by solving an MAP estimation problem with the inclusion of a spatial consistency model. This consistency model assumes that, within a local region, the depth value of each pixel can be roughly predicted by an affine transformation of the image features at that pixel. A local learning process is proposed to construct the consistency model directly from the multifocus image sequence. By adopting this model, the depth values can be inferred more robustly, especially over low-contrast regions. In addition, to improve computational efficiency, a cell-based version of the MAP framework is proposed. Experimental results on real and synthesized image data demonstrate an effective improvement in accuracy and robustness compared with existing approaches. The results also show that the proposed method maintains strong performance even when only a few image frames are used.
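The pipeline described in the abstract can be sketched minimally: a per-pixel focus measure computed across the multifocus stack yields an initial depth estimate (the standard SFF baseline), and a local least-squares affine fit of depth onto image features illustrates the idea behind the spatial consistency prior. The focus measure (sum-modified-Laplacian), the global fit, and the feature choice below are illustrative assumptions, not the paper's exact formulation or its MAP inference.

```python
import numpy as np

def modified_laplacian(img):
    # Sum-modified-Laplacian focus measure: a common SFF choice
    # (an assumption here; not necessarily the measure used in the paper).
    ml = np.abs(2 * img - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0)) \
       + np.abs(2 * img - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1))
    return ml

def sff_depth(stack):
    # stack: (F, H, W) multifocus frames.
    # Baseline SFF: per pixel, the depth index is the frame that
    # maximizes the focus measure.
    fm = np.stack([modified_laplacian(frame) for frame in stack])
    return fm.argmax(axis=0)

def local_affine_prediction(features, depth):
    # Illustrates the consistency model's assumption that, within a
    # local region, depth is roughly an affine function of image
    # features: fit depth ~ W @ features + b by least squares.
    # features: (N, d) feature vectors, depth: (N,) depth values.
    X = np.column_stack([features, np.ones(len(features))])
    coef, *_ = np.linalg.lstsq(X, depth, rcond=None)
    return X @ coef
```

In a full implementation, the affine fit would be learned per local region from the image sequence and combined with the focus-measure likelihood inside the MAP objective; this sketch only separates the two ingredients to show their roles.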

Original language: English
Article number: 6971052
Pages (from-to): 2063-2076
Number of pages: 14
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 24
Issue number: 12
DOIs
State: Published - 1 Dec 2014

Keywords

  • 3-D reconstruction
  • depth estimation
  • depth map
  • shape-from-focus (SFF)

