Event
Public PhD defence of Xiangyu Yang on October 15
 
 

On October 15, 2024 at 16:00, Xiangyu Yang will defend their PhD thesis, entitled “Leveraging Deep Learning Models for Big Data Analytics”.

Everybody is invited to attend the presentation in room D.0.05 or to follow it online via this link.

Abstract 

With the exponential growth of data generated daily from social media, e-commerce, and various digital interactions, the need to effectively harness and leverage this vast expanse of information is more critical than ever. In this context, Deep Learning (DL), a subfield of Artificial Intelligence (AI), has emerged as a transformative force, delivering unparalleled capabilities in pattern recognition, data analysis, and predictive modeling. Deep learning uses the large amounts of available data as fuel for training and significantly impacts fields ranging from healthcare to finance, enabling advanced applications in natural language processing (NLP), computer vision (CV), and recommender systems (RS).

This thesis delves into the essential role of AI in leveraging big data, focusing on information extraction from social media, deep learning model explainability, and the development of explainable recommender systems. With the vast, ever-growing volume of data, extracting meaningful insights from unstructured social media data becomes increasingly complex, necessitating cutting-edge AI solutions. Concurrently, the reliance on deep learning models for critical decisions brings explainability to the forefront, emphasizing the importance of developing transparent methods that foster user trust. Furthermore, the demand for recommender systems that provide understandable textual explanations has surged, highlighting the need for explainable systems that align with user preferences and decision-making processes.

This thesis advances the field through three key contributions. First, we establish two traffic-related datasets from social media, annotated for comprehensive traffic event detection. Employing BERT-based models, we tackle this detection problem via text classification and slot filling, demonstrating these models’ efficacy in parsing social media for traffic-related information. Our second contribution introduces LRP-based methods to explain deep conditional random fields, with successful applications in fake news detection and image segmentation. Lastly, we present an innovative personalized explainable recommender system that integrates user and item context into a language model, producing textual explanations that enhance system transparency.

 
 
Related content 
 
 
Public PhD defence of Eden Teshome Hunde on November 15
Event, 12 November 2024

Public PhD defence of Boris Joukovsky on November 7
Event, 4 November 2024

Public PhD defence of Yuqing Yang on October 25
Event, 14 October 2024