
Master theses

Current and past ideas and concepts for Master Theses.

How can we explain a prediction of a graph based deep model?

Subject

The problem of explainability in machine learning (i.e., explaining a neural network's decision) has recently gained importance with the introduction of the GDPR regulation in the EU. Explaining deep models is strongly related to understanding how deep neural networks (we are particularly interested in CNNs and graph neural networks, GNNs) work internally. In this master thesis, we will focus on explainable deep learning tools. Specifically, the main goal is to investigate existing techniques and develop novel explainability tools to be applied to graph neural networks.

The goal of an explainable deep GNN model in a classification task is to explain why the model chose a specific class as output. For instance, this can be applied in weather forecasting (rain/sun, and why), prediction of people density in a street (busy/not busy, and why, e.g. for coronavirus measures), social media analysis, and many other tasks. Explainable GNNs can also be applied to regression tasks such as natural phenomena prediction (e.g., pollutant concentrations or temperature). Although explainability techniques have been developed for other machine learning methods, few exist for deep learning and even fewer for GNNs. We would like to review the existing methodologies to provide a general benchmark and an improved model. A minimal example of one explanation technique is sketched below.
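To make the task concrete, here is a minimal sketch of the simplest family of explanation techniques: gradient-based saliency, where the score of the predicted class is backpropagated to the input node features and the gradient magnitudes serve as per-node importance estimates (the idea behind the sensitivity-analysis baselines discussed in the references below). The toy model, graph, and dimensions are illustrative assumptions, not part of the proposal.

```python
# A minimal sketch of a gradient-based ("saliency") explanation for a toy
# graph classifier, in plain PyTorch. Model, graph, and feature sizes are
# illustrative assumptions only.
import torch
import torch.nn as nn

class TinyGCN(nn.Module):
    """One graph-convolution layer, mean pooling, and a linear classifier."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)
        self.cls = nn.Linear(hid_dim, n_classes)

    def forward(self, x, adj):
        # Simple propagation: average each node's neighbourhood, then transform.
        h = torch.relu(self.lin(adj @ x))
        return self.cls(h.mean(dim=0))  # graph-level logits

# Toy graph: 4 nodes, 3 features, row-normalised adjacency with self-loops.
x = torch.randn(4, 3, requires_grad=True)
adj = torch.tensor([[1., 1., 0., 0.],
                    [1., 1., 1., 0.],
                    [0., 1., 1., 1.],
                    [0., 0., 1., 1.]])
adj = adj / adj.sum(dim=1, keepdim=True)

model = TinyGCN(in_dim=3, hid_dim=8, n_classes=2)
logits = model(x, adj)
pred = logits.argmax()

# Backpropagate the predicted class score to the input node features:
# the gradient magnitude is a per-node, per-feature importance estimate.
logits[pred].backward()
node_importance = x.grad.abs().sum(dim=1)
print("predicted class:", pred.item())
print("node importance:", node_importance)
```

The methods in the references below (GNNExplainer, GraphLIME) replace this raw gradient with learned masks or local surrogate models, but they answer the same question: which nodes, edges, or features drove the prediction.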

Kind of work

In this thesis, we will work on different datasets for analyzing and benchmarking explainability techniques for deep graph models. Specifically, we will focus on (i) investigating existing baselines for the task, (ii) identifying issues present in current state-of-the-art approaches, and (iii) alleviating those issues by developing new neural network architectures with better explainability. The student can choose from our existing datasets or propose one of their own; one candidate benchmark metric is sketched below.
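As one example of how such a benchmark can be quantified, the sketch below computes a simple fidelity score: the drop in the predicted class probability when the nodes an explainer flags as important are masked out (fidelity is among the metrics used in the literature below, e.g. by Pope et al.). The function signature and masking fraction are assumptions for illustration; any explainer producing per-node scores (saliency, GNNExplainer, GraphLIME, ...) could be plugged in.

```python
# A hedged sketch of a "fidelity" benchmark for graph explainers: how much
# does the predicted class probability drop when the nodes an explainer
# marks as important are masked out? `model`, inputs, and `frac` are
# illustrative assumptions.
import torch

def fidelity(model, x, adj, node_importance, frac=0.25):
    """Higher fidelity = masking the 'important' nodes hurts the model more,
    i.e. the explanation points at what the model actually relies on."""
    with torch.no_grad():
        probs = torch.softmax(model(x, adj), dim=-1)
        pred = probs.argmax()

        # Zero out the features of the top-`frac` most important nodes.
        k = max(1, int(frac * x.size(0)))
        top_nodes = node_importance.topk(k).indices
        x_masked = x.clone()
        x_masked[top_nodes] = 0.0

        probs_masked = torch.softmax(model(x_masked, adj), dim=-1)
        return (probs[pred] - probs_masked[pred]).item()

# Example, reusing the toy model and saliency scores from the sketch above:
# print("fidelity:", fidelity(model, x.detach(), adj, node_importance))
```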

Framework of the Thesis

F. Baldassarre and H. Azizpour, "Explainability Techniques for Graph Convolutional Networks," 2019.
P. E. Pope et al., "Explainability Methods for Graph Convolutional Neural Networks," 2019.
R. Ying et al., "GNNExplainer: Generating Explanations for Graph Neural Networks," 2019.
Q. Huang et al., "GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks," 2020.

Number of Students

1-2 students

Expected Student Profile

Proven programming experience (e.g., Python).
Background in machine learning.
Prior experience with state-of-the-art machine learning frameworks (e.g., TensorFlow, PyTorch).

Promotor

Prof. Dr. Ir. Nikolaos Deligiannis

+32 (0)2 629 1683

ndeligia@etrovub.be


Supervisor

Miss Esther Rodrigo

+32 (0)2 629 2930

erodrigo@etrovub.be

