Project Details
Project description 

Deep neural networks (DNNs) have delivered tremendous performance improvements across a wide range of applications. However, they have one important shortcoming: they are often considered black boxes, as their inner processes and generalization capabilities are not fully understood. In this project, we aim to tackle this problem by developing a new framework for AI that is explainable and interpretable. Two complementary research directions will be investigated. First, we argue that knowledge about the data structure should be incorporated into the design of DNNs, i.e. prior to network training, leading to network transparency: networks that are more interpretable by design. We will apply this mostly to inverse problems such as image denoising, super-resolution and inpainting. Second, we will develop trustworthy methods for post-hoc interpretation and explanation, which analyze the behavior of a network after it has been trained. This will be demonstrated on image classification as well as object detection problems. We expect both strategies to reinforce one another, leading not only to more explainable models, but also to better-performing ones, outperforming the current state of the art.
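To make the first direction concrete: in an inverse problem, prior knowledge about the data (e.g. that natural signals are mostly smooth) can be built into the estimator itself, so the reconstruction is interpretable by design. The following is a minimal sketch of that idea using classical Tikhonov-regularized denoising on a 1-D signal; it is an illustrative analogue chosen by the editor, not the project's actual method, and the signal, noise level and regularization weight are arbitrary.

```python
import numpy as np

def tikhonov_denoise(y, lam):
    """Solve min_x ||x - y||^2 + lam * ||D x||^2, where D is the
    first-difference operator encoding a smoothness prior on x."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)      # (n-1, n) finite-difference matrix
    A = np.eye(n) + lam * D.T @ D       # normal equations of the objective
    return np.linalg.solve(A, y)

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))
noisy = clean + 0.3 * rng.normal(size=64)
denoised = tikhonov_denoise(noisy, lam=5.0)
```

The smoothness prior is explicit in the objective, so one can read off exactly what structural assumption drives the reconstruction, in contrast to a generic black-box denoiser.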
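For the second direction, a common family of post-hoc explanations is gradient-based saliency: the gradient of a trained model's score with respect to its input highlights which input entries most influence the prediction. Below is a minimal sketch of this idea on a toy linear classifier with a sigmoid output, so it runs without any deep-learning framework; the weights and input are random placeholders, not a trained network from the project.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def saliency(weights, x):
    """Gradient of the class score w.r.t. the input: entries with large
    magnitude mark input components that most affect the prediction."""
    score = sigmoid(weights @ x)
    # chain rule for sigmoid(w @ x): d score / d x = score*(1-score)*w
    return score * (1.0 - score) * weights

rng = np.random.default_rng(0)
w = rng.normal(size=16)   # stand-in for "trained" weights (4x4 image, flattened)
x = rng.normal(size=16)   # stand-in input image
s = saliency(w, x)
```

For a real DNN the same gradient would be obtained via automatic differentiation; for this linear toy model the saliency map is simply proportional to the weight vector, which makes the explanation easy to verify by hand.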

Runtime: 2020 - 2024