Depth Image-Based Rendering (DIBR) is the key technique in many multi-view 3D applications to synthesize virtual views using texture and depth information. However, DIBR induces distortions that degrade the visual quality of experience; therefore, image quality assessment (IQA) methods are essential to evaluate the quality of the synthesized views. The characteristics of DIBR-related distortions differ from those of traditional video coding distortions, and conventional objective IQA methods often fail to provide accurate quality predictions for synthesized views. In this letter, we propose a new Full-Reference (FR) objective metric for the evaluation of DIBR-synthesized views. We use a feature matching method at feature (key) points of the reference and synthesized images to quantify local differences. Moreover, global quality loss is computed in shift-compensated views by measuring the gradient difference in image superpixels. Performance evaluation on three public datasets shows the effectiveness of the proposed model. A software release of the proposed method is available at https://gitlab.com/etro/ssdi
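The global-quality term described in the abstract (gradient differences measured over image superpixels in shift-compensated views) could be sketched in greatly simplified form as follows. This is a hypothetical illustration, not the authors' SSDI implementation: square blocks stand in for superpixels, and no shift compensation or feature matching is performed.

```python
import numpy as np

def block_gradient_difference(ref, syn, block=8):
    """Toy global-quality term: mean absolute gradient-magnitude
    difference per image block (a crude stand-in for superpixels).
    `ref` and `syn` are 2-D grayscale arrays of equal shape."""
    def grad_mag(img):
        # np.gradient returns [d/drow, d/dcol] for a 2-D array
        gy, gx = np.gradient(img.astype(float))
        return np.hypot(gx, gy)

    gr, gs = grad_mag(ref), grad_mag(syn)
    h, w = ref.shape
    diffs = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            diffs.append(np.abs(gr[y:y + block, x:x + block] -
                                gs[y:y + block, x:x + block]).mean())
    # Lower values indicate closer gradient structure (0 for identical images)
    return float(np.mean(diffs))
```

In the actual method, superpixel segmentation adapts the regions to image content, which square blocks cannot capture; this sketch only shows the per-region gradient-comparison idea.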
Mahmoudpour, S & Schelkens, P 2020, 'Synthesized View Quality Assessment Using Feature Matching and Superpixel Difference', IEEE Signal Processing Letters, vol. 27, 9198131, pp. 1650-1654. https://doi.org/10.1109/LSP.2020.3024109
Mahmoudpour, S., & Schelkens, P. (2020). Synthesized View Quality Assessment Using Feature Matching and Superpixel Difference. IEEE Signal Processing Letters, 27, 1650-1654. Article 9198131. https://doi.org/10.1109/LSP.2020.3024109
@article{bbd3b4dfaf4e462ea7e74d99b2c048ac,
title = "Synthesized View Quality Assessment Using Feature Matching and Superpixel Difference",
abstract = "Depth Image-Based Rendering (DIBR) is the key technique in many multi-view 3D applications to synthesize virtual views using texture and depth information. However, DIBR induces distortions that degrade the visual quality of experience; therefore, image quality assessment (IQA) methods are essential to evaluate the quality of the synthesized views. The characteristics of DIBR-related distortions differ from those of traditional video coding distortions, and conventional objective IQA methods often fail to provide accurate quality predictions for synthesized views. In this letter, we propose a new Full-Reference (FR) objective metric for the evaluation of DIBR-synthesized views. We use a feature matching method at feature (key) points of the reference and synthesized images to quantify local differences. Moreover, global quality loss is computed in shift-compensated views by measuring the gradient difference in image superpixels. Performance evaluation on three public datasets shows the effectiveness of the proposed model. A software release of the proposed method is available at https://gitlab.com/etro/ssdi",
author = "Saeed Mahmoudpour and Peter Schelkens",
year = "2020",
month = jan,
day = "1",
doi = "10.1109/LSP.2020.3024109",
language = "English",
volume = "27",
pages = "1650--1654",
journal = "IEEE Signal Processing Letters",
issn = "1070-9908",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
}