Publication Details

2023 24th International Conference on Digital Signal Processing (DSP)

Contribution to Book Anthology


Explainable AI is important for improving transparency, accountability, trust, and ethical considerations in AI systems, and for enabling users to make informed decisions based on the outputs of these systems. It provides insight into the factors that drive a particular machine learning model's prediction. In the context of deep learning models, invariance refers to the property whereby diverse input transformations, such as data augmentations, result in similar feature spaces and predictions. The aim of this work is to unveil which invariant features the model has learned. We propose a method, coined Pixel Invariance, which measures the invariance of each pixel of the input. Our investigation analyzes four self-supervised models, as these models are pre-trained to learn invariance to input transformations. We additionally apply quantitative evaluation measures to assess the faithfulness, reliability, and confidence of the explanation maps, and analyze the four self-supervised models both qualitatively and quantitatively.
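The abstract does not detail how a per-pixel invariance score is computed, so the following is only a minimal toy sketch of the general idea: apply several input transformations, measure how far the (hypothetical) model's features move, and attribute that feature distance back to the pixels each transformation changed. All function names (`toy_model`, `pixel_invariance_map`) and the attribution scheme are illustrative assumptions, not the authors' method.

```python
import numpy as np

def toy_model(x):
    # Hypothetical feature extractor: a fixed random linear projection
    # (seeded so every call uses the same weights).
    rng = np.random.default_rng(0)
    w = rng.standard_normal((x.shape[-1], 8))
    return x.reshape(-1, x.shape[-1]) @ w

def pixel_invariance_map(image, transforms, model):
    """Toy per-pixel invariance proxy (assumed scheme, not the paper's).

    For each transform, compare features of the original and transformed
    image, then spread the feature distance uniformly over the pixels the
    transform actually changed. The map is inverted so that higher values
    mean a pixel is more invariant under the transformations.
    """
    base = model(image)
    sensitivity = np.zeros(image.shape[:2])
    for t in transforms:
        aug = t(image)
        dist = np.linalg.norm(model(aug) - base)
        changed = np.any(aug != image, axis=-1)  # pixels the transform touched
        if changed.any():
            sensitivity[changed] += dist / changed.sum()
    return 1.0 / (1.0 + sensitivity)

# Example: a 4x4 RGB image with a horizontal flip and a brightness shift.
img = np.random.default_rng(1).random((4, 4, 3))
flip = lambda x: x[:, ::-1, :]
bright = lambda x: np.clip(x * 1.2, 0.0, 1.0)
inv_map = pixel_invariance_map(img, [flip, bright], toy_model)
print(inv_map.shape)  # (4, 4): one invariance score per pixel
```

A real implementation would use the self-supervised backbone's features and the same augmentation family it was pre-trained with; the per-pixel attribution here is deliberately the simplest choice that yields a pixel-resolution map.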
