Robust Multiview Synthesis For Wide-Baseline Camera Arrays
This publication appears in: IEEE Transactions on Multimedia
Authors: B. Ceulemans, S. Lu, G. Lafruit and A. Munteanu
Number of Pages: 14
Publication Year: 2017
In many advanced multimedia systems, multiview content offers greater immersion than classical stereoscopy. The sense of immersion is increased substantially by providing motion parallax in addition to stereopsis, which drives both the so-called free-navigation and super-multiview technologies. However, acquiring, storing, processing and transmitting this type of content remains challenging. This paper presents a novel multiview-interpolation framework for wide-baseline camera arrays. The proposed method comprises several novel components, including point-cloud-based filtering, improved de-ghosting, multi-reference color blending, and depth-aware MRF-based disocclusion inpainting. The method is robust against depth errors caused by quantization and by smoothing across object boundaries. Furthermore, the available input color and depth information is maximally exploited while preventing the propagation of unreliable information to virtual viewpoints. Experimental results show that the proposed method outperforms the state-of-the-art View Synthesis Reference Software (VSRS 4.1) both objectively and subjectively, based on a visual assessment on a high-end light-field 3D display.
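To illustrate the multi-reference color blending idea mentioned in the abstract, the following is a minimal sketch, not the paper's actual formulation: each reference view is assumed to have been warped to the virtual viewpoint with a validity mask marking disoccluded pixels, and contributions are weighted by inverse baseline distance. The function name, the weighting scheme, and the array layout are illustrative assumptions.

```python
import numpy as np

def blend_views(warped, valid, baselines, eps=1e-8):
    """Blend N reference views already warped to a virtual viewpoint.

    warped    : (N, H, W, 3) float array of warped reference colors
    valid     : (N, H, W) bool array, False where a warped pixel is missing
    baselines : (N,) distances from each reference camera to the virtual view

    Returns the blended (H, W, 3) image and an (H, W) bool mask of pixels
    that no reference could fill (candidates for disocclusion inpainting).
    """
    # Inverse-baseline weights: closer reference cameras contribute more
    # (an assumed heuristic, not the paper's blending weights).
    w = (1.0 / np.asarray(baselines, dtype=np.float64)).reshape(-1, 1, 1)
    w = w * valid  # zero out contributions from disoccluded pixels

    total = w.sum(axis=0)                      # per-pixel weight sum
    blended = (w[..., None] * warped).sum(axis=0) / np.maximum(total, eps)[..., None]
    holes = total == 0                         # no reference covered this pixel
    return blended, holes
```

With two equally distant references the result is a plain average; unequal baselines bias the blend toward the nearer reference, and pixels invalid in every reference are flagged for the subsequent inpainting stage.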