Contrastive Language-Image Pretraining (CLIP) performs zero-shot image classification by mapping images and textual class representations into a shared embedding space and then retrieving the class closest to the image. This work provides a new approach for interpreting CLIP models for image classification through the lens of mutual knowledge between the two modalities. Specifically, we ask: what concepts do both the vision and language CLIP encoders learn in common that influence the joint embedding space, causing points to be closer or further apart? We answer this question via textual concept-based explanations, show their effectiveness, and perform an analysis encompassing a pool of 13 CLIP models varying in architecture, size, and pretraining dataset. We explore these different aspects in relation to mutual knowledge and analyze zero-shot predictions. Our approach demonstrates an effective and human-friendly way of understanding the zero-shot classification decisions of CLIP.
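To make the zero-shot pipeline described in the abstract concrete, the sketch below encodes an image and a set of class prompts into CLIP's shared embedding space and picks the nearest class. It is a minimal illustration, not the paper's interpretation method: it assumes the open_clip library, an OpenAI-pretrained ViT-B/32 checkpoint, and illustrative class names and image path, none of which are specified by the record above.

```python
# Minimal sketch of CLIP zero-shot classification (illustrative; assumes open_clip).
import torch
import open_clip
from PIL import Image

# Model and checkpoint choice are assumptions for this example.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="openai"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")

# Textual class representations: one prompt per candidate class (illustrative names).
class_names = ["dog", "cat", "airplane"]
prompts = [f"a photo of a {c}" for c in class_names]

image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # hypothetical image path
text = tokenizer(prompts)

with torch.no_grad():
    # Map both modalities into the shared embedding space and normalize.
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)

    # Retrieve the class whose text embedding is closest to the image embedding.
    similarity = image_features @ text_features.T
    prediction = class_names[similarity.argmax(dim=-1).item()]

print(prediction)
```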
Sammani, F & Deligiannis, N 2024, Interpreting and Analyzing CLIP’s Zero-Shot Image Classification via Mutual Knowledge. in 38th Conference on Neural Information Processing Systems. pp. 1-35, 38th Conference on Neural Information Processing Systems (NeurIPS 2024), Vancouver, Canada, 10/12/24. <https://neurips.cc/virtual/2024/calendar>
Sammani, F., & Deligiannis, N. (2024). Interpreting and Analyzing CLIP’s Zero-Shot Image Classification via Mutual Knowledge. In 38th Conference on Neural Information Processing Systems (pp. 1-35). https://neurips.cc/virtual/2024/calendar
@inproceedings{a4b6382680594d19bcc93c15923d343f,
title = "Interpreting and Analyzing CLIP{\textquoteright}s Zero-Shot Image Classification via Mutual Knowledge",
abstract = "Contrastive Language-Image Pretraining (CLIP) performs zero-shot image classification by mapping images and textual class representation into a shared embedding space, then retrieving the class closest to the image. This work provides a new approach for interpreting CLIP models for image classification from the lens of mutual knowledge between the two modalities. Specifically, we ask: what concepts do both vision and language CLIP encoders learn in common that influence the joint embedding space, causing points to be closer or further apart? We answer this question via an approach of textual concept-based explanations, showing their effectiveness, and perform an analysis encompassing a pool of 13 CLIP models varying in architecture, size and pretraining datasets. We explore those different aspects in relation to mutual knowledge, and analyze zero-shot predictions. Our approach demonstrates an effective and human-friendly way of understanding zero-shot classification decisions with CLIP.",
author = "Fawaz Sammani and Nikos Deligiannis",
year = "2024",
language = "English",
pages = "1--35",
booktitle = "38th Conference on Neural Information Processing Systems",
note = "38th Conference on Neural Information Processing Systems (NeurIPS 2024), NeurIPS 2024 ; Conference date: 10-12-2024 Through 15-12-2024",
url = "https://neurips.cc/virtual/2024/calendar",
}