Sammani, F, Mukherjee, T & Deligiannis, N 2022, NLX-GPT: A Model for Natural Language Explanations in Vision and Vision-Language Tasks. in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2022-June, IEEE, pp. 8322-8332, 2022 Conference on Computer Vision and Pattern Recognition, New Orleans, United States, 19/06/22. https://doi.org/10.1109/CVPR52688.2022.00814
@inproceedings{7344eaf415174f93994331f744034716,
title = "NLX-GPT: A Model for Natural Language Explanations in Vision and Vision-Language Tasks",
abstract = "Natural language explanation (NLE) models aim at explaining the decision-making process of a black-box system via generating natural language sentences which are human-friendly, high-level and fine-grained. Current NLE models explain the decision-making process of a vision or vision-language model (a.k.a., task model), e.g., a VQA model, via a language model (a.k.a., explanation model), e.g., GPT. Other than the additional memory resources and inference time required by the task model, the task and explanation models are completely independent, which disassociates the explanation from the reasoning process made to predict the answer. We introduce NLX-GPT, a general, compact and faithful language model that can simultaneously predict an answer and explain it. We first conduct pre-training on large-scale data of image-caption pairs for general understanding of images, and then formulate the answer as a text prediction task along with the explanation. Without region proposals nor a task model, our resulting overall framework attains better evaluation scores, contains far fewer parameters and is 15{\texttimes} faster than the current SoA model. We then address the problem of evaluating the explanations, which can often be generic, data-biased and can come in several forms. We therefore design two new evaluation measures: (1) explain-predict and (2) retrieval-based attack, a self-evaluation framework that requires no labels. Code is at: https://github.com/fawazsammani/nlxgpt.",
author = "Fawaz Sammani and Tanmoy Mukherjee and Nikos Deligiannis",
note = "Funding Information: This research has been supported by the Research Foundation - Flanders (FWO) under Project G0A4720N. Publisher Copyright: {\textcopyright} 2022 IEEE.; 2022 Conference on Computer Vision and Pattern Recognition; Conference date: 19-06-2022 through 24-06-2022",
year = "2022",
month = jun,
doi = "10.1109/CVPR52688.2022.00814",
language = "English",
isbn = "978-1-6654-6947-0",
series = "Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition",
publisher = "IEEE",
pages = "8322--8332",
booktitle = "IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)",
}