Publication Details
Overview

European Signal Processing Conference

Contribution to Book Anthology

Abstract 

This paper presents a study of explainable AI methods applied to video anomaly detection. Specifically, we put forward a multidimensional evaluation protocol that assesses attribution methods along three axes: the correctness of the explanations, their plausibility with respect to ground-truth anomaly data, and the robustness of the explanations across multiple time frames. We compute these metrics for common gradient-based and perturbation-based explanation techniques, which we use to explain a 3D-CNN-based classifier trained on real video data. Our results show that each method generally involves trade-offs in explanation performance, including the higher computational cost associated with video data. In particular, gradient-based methods achieve higher robustness across multiple frames, whereas perturbation-based methods achieve higher model-fidelity scores.
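
As an illustration of the kind of measurements such a protocol involves, the sketch below shows how a vanilla gradient attribution for a 3D-CNN video classifier and a simple frame-to-frame robustness score could be computed. This is a minimal sketch in PyTorch, not the authors' implementation: the model interface, the clip shape, and the cosine-similarity robustness measure are illustrative assumptions.

import torch
import torch.nn.functional as F


def gradient_saliency(model, clip, target_class):
    # Vanilla gradient attribution for one video clip.
    # clip: tensor of shape (1, C, T, H, W) -- batch, channels, frames, height, width.
    # Returns a (T, H, W) saliency map (max |gradient| over colour channels).
    clip = clip.clone().requires_grad_(True)
    score = model(clip)[0, target_class]            # scalar logit for the target class
    score.backward()
    return clip.grad.abs().amax(dim=1).squeeze(0)   # reduce channels, drop batch dim


def temporal_robustness(saliency):
    # Mean cosine similarity between saliency maps of consecutive frames;
    # higher values mean the explanation changes little from frame to frame.
    flat = saliency.flatten(start_dim=1)            # (T, H*W)
    return F.cosine_similarity(flat[:-1], flat[1:], dim=1).mean().item()


# Hypothetical usage (load_pretrained_3dcnn is a placeholder, not a real API):
# model = load_pretrained_3dcnn().eval()
# clip = torch.randn(1, 3, 16, 112, 112)            # 16 RGB frames of 112x112 pixels
# saliency = gradient_saliency(model, clip, target_class=1)
# print("temporal robustness:", temporal_robustness(saliency))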
