In many advanced multimedia systems, multiview content can offer greater immersion than classical stereoscopy. The sense of immersion is increased substantially by offering motion parallax in addition to stereopsis. This drives both the so-called free-navigation and super-multiview technologies. However, it is currently still challenging to acquire, store, process, and transmit this type of content. This paper presents a novel multiview-interpolation framework for wide-baseline camera arrays. The proposed method comprises several novel components, including point-cloud-based filtering, improved de-ghosting, multi-reference color blending, and depth-aware MRF-based disocclusion inpainting. The method is robust against depth errors caused by quantization and by smoothing across object boundaries. Furthermore, the available input color and depth information is maximally exploited while preventing the propagation of unreliable information to virtual viewpoints. The experimental results show that the proposed method outperforms the state-of-the-art View Synthesis Reference Software (VSRS 4.1) both objectively and subjectively, based on a visual assessment on a high-end light-field 3D display.
Ceulemans, B, Lu, S-P, Lafruit, G & Munteanu, A 2017, 'Robust Multiview Synthesis For Wide-Baseline Camera Arrays', IEEE Transactions on Multimedia. https://doi.org/10.1109/TMM.2018.2802646
Ceulemans, B., Lu, S.-P., Lafruit, G., & Munteanu, A. (Accepted/In press). Robust Multiview Synthesis For Wide-Baseline Camera Arrays. IEEE Transactions on Multimedia. https://doi.org/10.1109/TMM.2018.2802646
@article{a8be186cf2934734a8b8e4c92d95a38e,
  title = "Robust Multiview Synthesis For Wide-Baseline Camera Arrays",
  abstract = "In many advanced multimedia systems, multiview content can offer greater immersion than classical stereoscopy. The sense of immersion is increased substantially by offering motion parallax in addition to stereopsis. This drives both the so-called free-navigation and super-multiview technologies. However, it is currently still challenging to acquire, store, process, and transmit this type of content. This paper presents a novel multiview-interpolation framework for wide-baseline camera arrays. The proposed method comprises several novel components, including point-cloud-based filtering, improved de-ghosting, multi-reference color blending, and depth-aware MRF-based disocclusion inpainting. The method is robust against depth errors caused by quantization and by smoothing across object boundaries. Furthermore, the available input color and depth information is maximally exploited while preventing the propagation of unreliable information to virtual viewpoints. The experimental results show that the proposed method outperforms the state-of-the-art View Synthesis Reference Software (VSRS 4.1) both objectively and subjectively, based on a visual assessment on a high-end light-field 3D display.",
  author = "Beerend Ceulemans and Shao-Ping Lu and Gauthier Lafruit and Adrian Munteanu",
  year = "2017",
  doi = "10.1109/TMM.2018.2802646",
  language = "English",
  journal = "IEEE Transactions on Multimedia",
  issn = "1520-9210",
  publisher = "Institute of Electrical and Electronics Engineers Inc.",
}