Deligiannis, N, Mota, J, Cornelis, B, Rodrigues, M & Daubechies, I 2017, 'Multi-Modal Dictionary Learning for Image Separation With Application In Art Investigation', IEEE Transactions on Image Processing, vol. 26, no. 2, 7725950, pp. 751-764. https://doi.org/10.1109/TIP.2016.2623484
@article{3ebf5ffbe49540d8a78083b0088132ae,
title = "Multi-Modal Dictionary Learning for Image Separation With Application In Art Investigation",
abstract = "In support of art investigation, we propose a new source separation method that unmixes a single X-ray scan acquired from double-sided paintings. In this problem, the X-ray signals to be separated have similar morphological characteristics, which brings previous source separation methods to their limits. Our solution is to use photographs taken from the front- and back-side of the panel to drive the separation process. The crux of our approach relies on the coupling of the two imaging modalities (photographs and X-rays) using a novel coupled dictionary learning framework able to capture both common and disparate features across the modalities using parsimonious representations; the common component captures features shared by the multi-modal images, whereas the innovation component captures modality-specific information. As such, our model enables the formulation of appropriately regularized convex optimization procedures that lead to the accurate separation of the X-rays. Our dictionary learning framework can be tailored both to a single- and a multi-scale framework, with the latter leading to a significant performance improvement. Moreover, to improve further on the visual quality of the separated images, we propose to train coupled dictionaries that ignore certain parts of the painting corresponding to craquelure. Experimentation on synthetic and real data—taken from digital acquisition of the Ghent Altarpiece (1432)—confirms the superiority of our method against the state-of-the-art morphological component analysis technique that uses either fixed or trained dictionaries to perform image separation.",
keywords = "coupled dictionary learning, multi-modal data analysis, multi-scale image decomposition, Source separation",
author = "Nikolaos Deligiannis and Jo{\~a}o Mota and Bruno Cornelis and Miguel Rodrigues and Ingrid Daubechies",
year = "2017",
month = feb,
doi = "10.1109/TIP.2016.2623484",
language = "English",
volume = "26",
pages = "751--764",
journal = "IEEE Transactions on Image Processing",
issn = "1057-7149",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "2",
}