The purpose of Explainable AI (XAI) research is to ensure that machine learning is no longer a black box: models should be able to explain the rationale behind their decisions, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future. Furthermore, the resulting XAI methods should be combined with state-of-the-art human-machine interface techniques to turn a model's output into an explanatory dialogue that the end user can understand and act on. Explainability can help developers verify that a system is working as expected, may be necessary to meet regulatory standards, and can be important in allowing those affected by a decision to challenge or change that outcome.
Deep unfolding is expected to bridge the gap between analytical methods and deep learning-based methods by designing deep neural networks (DNNs) as unrolled iterations of optimization algorithms. Because each layer corresponds to one iteration of a known algorithm, deep unfolding networks are interpretable by construction, and they have been shown to outperform the traditional optimization-based methods from which they are derived.
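As a concrete illustration of the idea (not the project's specific architecture), the sketch below unrolls the iterative shrinkage-thresholding algorithm (ISTA) for sparse coding into a small PyTorch network, in the style of LISTA (Gregor and LeCun, 2010). The names W, S, and theta are illustrative choices: W plays the role of (1/L) Aᵀ and S the role of I − (1/L) AᵀA in the original update, so every learned weight can still be read back in terms of the algorithm, which is what makes unfolded networks transparent.

```python
import torch
import torch.nn as nn

def soft_threshold(x, theta):
    # Proximal operator of the L1 norm: shrinks entries toward zero.
    return torch.sign(x) * torch.relu(torch.abs(x) - theta)

class UnfoldedISTA(nn.Module):
    """Each layer mirrors one ISTA iteration:
    x_{k+1} = soft(W y + S x_k, theta_k), with W, S, theta learned."""
    def __init__(self, m, n, n_layers=8):
        super().__init__()
        self.W = nn.Linear(m, n, bias=False)  # analogue of (1/L) A^T
        self.S = nn.Linear(n, n, bias=False)  # analogue of I - (1/L) A^T A
        self.theta = nn.Parameter(0.1 * torch.ones(n_layers, n))  # per-layer thresholds
        self.n_layers = n_layers

    def forward(self, y):
        x = soft_threshold(self.W(y), self.theta[0])
        for k in range(1, self.n_layers):
            x = soft_threshold(self.W(y) + self.S(x), self.theta[k])
        return x

# Usage: estimate a sparse code x from measurements y = A x + noise.
m, n = 64, 256
model = UnfoldedISTA(m, n)
y = torch.randn(32, m)   # a batch of 32 measurement vectors
x_hat = model(y)         # sparse estimates after n_layers "iterations"
```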
The overarching objective of the project is twofold: to design explainable AI models and to develop explanations for them. In constructing explainable models, prior knowledge about the data (including sparsity, low-rank structure, and side information) is used to capture the hidden structure in the data and to make the networks more transparent. Moreover, to gain further insight into the models' internal workings, we will develop better tools for post-hoc interpretation and explanation that ensure good alignment with the actual decision process, and apply these to the newly developed networks. Ultimately, we expect this to yield models that are not only explainable but also better performing, surpassing the state of the art on various classification and regression tasks, and we will evaluate the resulting models for visual interpretation and explanation.
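One elementary example of the kind of post-hoc tool this step refers to is gradient-based saliency, which attributes a decision to input features. A minimal sketch, assuming a trained classifier `model` that maps an input batch to per-class scores (the function name and output shape are assumptions for illustration):

```python
import torch

def saliency_map(model, x, target_class):
    # Gradient of the target-class score w.r.t. the input:
    # large magnitudes mark features that most influence the decision.
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[0, target_class]  # score of one sample in the batch
    score.backward()
    return x.grad.abs()

# Usage (hypothetical classifier and input):
# attribution = saliency_map(classifier, image.unsqueeze(0), target_class=3)
```

Checking whether such attributions align with the known iteration structure of an unfolded network is one way to test that the explanation reflects the actual decision process rather than an artifact of the method.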