We propose a calibration methodology for a novel type of inward-looking spherical light field acquisition device consisting of a moving CMOS camera with two angular degrees of freedom around an object. We designed a calibration cube covered with ChArUco markers to resolve viewpoint ambiguity. Our calibration model includes 20 unknowns describing the spherical parameters and the camera's geometrical and internal properties. These parameters are jointly optimized to minimize the reprojection error of the calibration cube's markers across multiple viewpoints, resulting in a model that can predict the camera's pose for any other viewpoint. We successfully tested the calibrated model in a photogrammetry experiment followed by the synthesis of novel views from the resulting depth maps. Results show that the reconstructed image is highly accurate when compared to a real-life capture of the same view.
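As an illustration of the joint optimization described in the abstract, the sketch below fits a spherical camera model (two gantry angles plus a radius) together with pinhole intrinsics by minimizing the reprojection error of known calibration-target corners over several viewpoints. It is a minimal sketch, not the authors' method: the six-unknown parameter set, the synthetic target corners, and all names (pose_from_angles, residuals) are illustrative assumptions standing in for the paper's 20-unknown model; only OpenCV's projectPoints/Rodrigues and SciPy's least_squares are real library calls.

# Hypothetical sketch: jointly fit spherical-gantry and camera parameters by
# minimizing ChArUco-corner reprojection error. Parameterization and data are
# assumptions for illustration, not the authors' actual 20-unknown model.
import numpy as np
import cv2
from scipy.optimize import least_squares

def pose_from_angles(theta, phi, radius):
    """Camera pose on a sphere of given radius, looking at the origin."""
    # Camera center in world coordinates (spherical parameterization).
    c = radius * np.array([np.sin(theta) * np.cos(phi),
                           np.sin(theta) * np.sin(phi),
                           np.cos(theta)])
    # Rotation whose z-axis points from the camera toward the origin.
    z = -c / np.linalg.norm(c)
    up = np.array([0.0, 0.0, 1.0])
    x = np.cross(up, z)
    if np.linalg.norm(x) < 1e-8:          # camera at a pole: pick any x-axis
        x = np.array([1.0, 0.0, 0.0])
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    R = np.stack([x, y, z])               # world -> camera rotation
    t = -R @ c                            # world -> camera translation
    return R, t

def residuals(params, angles, object_points, observed_pixels):
    """Stacked reprojection errors over all viewpoints."""
    fx, fy, cx, cy, radius, k1 = params
    K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1.0]])
    dist = np.array([k1, 0, 0, 0, 0], dtype=float)
    errs = []
    for (theta, phi), pix in zip(angles, observed_pixels):
        R, t = pose_from_angles(theta, phi, radius)
        rvec, _ = cv2.Rodrigues(R)
        proj, _ = cv2.projectPoints(object_points, rvec, t, K, dist)
        errs.append((proj.reshape(-1, 2) - pix).ravel())
    return np.concatenate(errs)

if __name__ == "__main__":
    # Synthetic corners on one face of a cube (illustrative stand-in for ChArUco).
    object_points = np.array([[x, y, 0.05] for x in np.linspace(-0.04, 0.04, 4)
                                            for y in np.linspace(-0.04, 0.04, 4)])
    angles = [(0.6, 0.0), (0.6, 1.0), (1.0, 0.5), (1.2, 2.0)]
    true = np.array([800.0, 800.0, 320.0, 240.0, 0.5, -0.05])
    observed = []
    for theta, phi in angles:
        R, t = pose_from_angles(theta, phi, true[4])
        rvec, _ = cv2.Rodrigues(R)
        K = np.array([[true[0], 0, true[2]], [0, true[1], true[3]], [0, 0, 1.0]])
        proj, _ = cv2.projectPoints(object_points, rvec, t, K,
                                    np.array([true[5], 0, 0, 0, 0]))
        observed.append(proj.reshape(-1, 2))
    x0 = np.array([700.0, 700.0, 300.0, 220.0, 0.45, 0.0])   # rough initial guess
    fit = least_squares(residuals, x0, args=(angles, object_points, observed))
    print("recovered parameters:", fit.x)

Once calibrated this way, the fitted parameters give the camera pose for any commanded pair of angles, which is what enables the photogrammetry and view-synthesis experiments described above.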
Bolsée, Q, Darwish, W, Bonatto, D, Lafruit, G & Munteanu, A 2020, A Device for Capturing Inward-Looking Spherical Light Fields. in International Conference on 3D Imaging. 2020 edn, 9376346, 2020 International Conference on 3D Immersion, IC3D 2020 - Proceedings, IEEE, pp. 1-5, International Conference on 3D Imaging 2020, Brussels, Belgium, 15/12/20. https://doi.org/10.1109/IC3D51119.2020.9376346
Bolsée, Q., Darwish, W., Bonatto, D., Lafruit, G., & Munteanu, A. (2020). A Device for Capturing Inward-Looking Spherical Light Fields. In International Conference on 3D Imaging (2020 ed., pp. 1-5). Article 9376346 (2020 International Conference on 3D Immersion, IC3D 2020 - Proceedings). IEEE. https://doi.org/10.1109/IC3D51119.2020.9376346
@inproceedings{81c5b55391a440568513342b33c655c8,
title = "A Device for Capturing Inward-Looking Spherical Light Fields",
abstract = "We propose a calibration methodology for a novel type of inward-looking spherical light field acquisition device consisting of a moving CMOS camera with two angular degrees of freedom around an object. We designed a calibration cube covered with ChArUco markers to resolve viewpoint ambiguity. Our calibration model includes 20 unknowns describing the spherical parameters and the camera's geometrical and internal properties. These parameters are jointly optimized to minimize the reprojection error of the calibration cube's markers across multiple viewpoints, resulting in a model that can predict the camera's pose for any other viewpoint. We successfully tested the calibrated model in a photogrammetry experiment followed by the synthesis of novel views from the resulting depth maps. Results show that the reconstructed image is highly accurate when compared to a real-life capture of the same view.",
author = "Quentin Bols{\'e}e and Walid Darwish and Daniele Bonatto and Gauthier Lafruit and Adrian Munteanu",
year = "2020",
month = dec,
day = "15",
doi = "10.1109/IC3D51119.2020.9376346",
language = "English",
isbn = "978-1-6654-4782-9",
series = "2020 International Conference on 3D Immersion, IC3D 2020 - Proceedings",
publisher = "IEEE",
pages = "1--5",
booktitle = "International Conference on 3D Imaging",
edition = "2020",
note = "International Conference on 3D Imaging 2020, IC3D; Conference date: 15-12-2020 through 15-12-2020",
}