“Signal Processing in the AI era” was the tagline of this year’s IEEE International Conference on Acoustics, Speech and Signal Processing, which took place in Rhodes, Greece.
In this context, Brent de Weerdt, Xiangyu Yang, Boris Joukovsky, Alex Stergiou and Nikos Deligiannis presented ETRO’s research during poster sessions and oral presentations, with novel ways to process and understand graph, video, and audio data. Nikos Deligiannis chaired a session on Graph Deep Learning, attended the IEEE T-IP Editorial Board Meeting, and had the opportunity to meet with collaborators from the VUB-Duke-UGent-UCL joint lab.
Featured articles:

Design and realisation of the CCD cameras (Abbe Comparator and Rainbow light spectrum analyser)
Five young academics have been chosen to take on administrative tasks for a year in addition to their academic work to support the rector and vice-rectors. From ETRO, civil engineer Jeroen Van Schependom will take care of the vice-rectorate Research Policy.
These five academic staff members will have the opportunity, with the current management team, to develop their leadership potential and inspire the rectoral policy team. They will devote one day a week within their current tenure to this new role. Each will work closely with the rector or a vice-rector in a specific policy area to gain a tangible view of what leadership and policymaking means in practice.
“By giving young academics the opportunity to hone their policy competencies and weigh in on VUB policy, the university aims to increase its policy capability. The voice and views of our younger colleagues are absolutely essential. After all, they are also the leaders of the future,” says rector Caroline Pauwels.
On March 31, 2021 at 16:00 Panagiotis Tsinganos will defend his PhD entitled “Multi-channel EMG pattern classification based on deep learning”.
Everybody is invited to attend the presentation online via https://upatras-gr.zoom.us/j/98941099749?pwd=ZmdQZkxRYllIaVRDKzJrVHM2L2krQT09
In recent years, the huge body of data generated by applications in domains like social networks and healthcare has paved the way for the development of high-performance models. Deep learning has transformed the field of data analysis by dramatically improving the state of the art in various classification and prediction tasks. Combined with advancements in electromyography, it has given rise to new hand gesture recognition applications, such as human-computer interfaces, sign language recognition, robotics control and rehabilitation games.
The purpose of this thesis is to develop novel methods for electromyography signal analysis based on deep learning for the problem of hand gesture recognition. Specifically, we focus on methods for data preparation and on developing accurate models even when few data are available. Electromyography signals are in general one-dimensional time series with a rich frequency content. Various feature sets have been proposed in the literature; however, due to the stochastic nature of the signals, the performance of the developed models depends on the combination of features and classifier. The end-to-end training scheme of deep learning models, on the other hand, reduces the effort needed to find the best features and classification model, yet a suitable preprocessing of the signals is still required. Another problem is that variations in gesture duration, sensor placement and muscle physiology require continuous adaptation of the trained models using newly recorded data.
The implementation is based on surface electromyography sensors, which provide the input to end-to-end deep learning pipelines that process and classify the electromyography signals. Preprocessing and data preparation techniques for electromyograms are examined, while data augmentation and transfer learning approaches allow developing personalised models even when few data are available. Building on their successful application in other domains, deep learning models enable systems that generalise easily to new users. The use of electromyography sensors is important because the developed system can detect whether any unwanted compensatory movements are performed, which is impossible with typical vision-based interfaces.
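To make the data preparation step concrete, the sketch below shows a common way to segment a multi-channel surface EMG recording into overlapping windows and normalise them before classification. It is a minimal illustration of the kind of preprocessing described above, not the actual pipeline from the thesis; the window and step sizes are assumptions for the example.

```python
import numpy as np

def segment_emg(signal, window, step):
    """Slice a multi-channel EMG recording (samples x channels)
    into overlapping windows, a common preparation step before
    feeding the data to a deep network."""
    n_samples = signal.shape[0]
    starts = range(0, n_samples - window + 1, step)
    return np.stack([signal[s:s + window] for s in starts])

def normalize(windows):
    """Per-window zero-mean, unit-variance normalisation."""
    mean = windows.mean(axis=1, keepdims=True)
    std = windows.std(axis=1, keepdims=True) + 1e-8
    return (windows - mean) / std

# Example: 1 s of 8-channel sEMG at 1 kHz, 200 ms windows, 50 ms step
emg = np.random.randn(1000, 8)
windows = normalize(segment_emg(emg, window=200, step=50))
print(windows.shape)  # (17, 200, 8)
```

Each resulting window can then be treated as one training example, which is also the point where augmentation (e.g. adding noise or shifting windows) would be applied to compensate for scarce data.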
The advancements proposed in this thesis have been evaluated on publicly available data repositories. However, since the models are trained in an end-to-end fashion, they can easily be adapted to different setups.
After the conservation and restoration project for Jan Van Eyck’s masterpiece, the Royal Institute for Cultural Heritage documented both sides of the painting with hundreds of macro photographs. Universum Digitalis then algorithmically assembled those images to produce gigapixel images of the artwork. The painting was previously digitized in 2015 using the same scientific protocol. Universum Digitalis seamlessly aligned both acquisitions, enabling a unique pixel-level comparison before and after restoration.
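Once two acquisitions are co-registered, a pixel-level comparison reduces to a per-pixel difference between the aligned images. The snippet below is a deliberately simplified sketch of that idea on small grayscale arrays; the actual gigapixel alignment and comparison by Universum Digitalis is far more involved.

```python
import numpy as np

def difference_map(before, after):
    """Per-pixel absolute difference between two co-registered
    grayscale acquisitions (values in [0, 1]) of the same painting."""
    assert before.shape == after.shape, "images must be aligned first"
    return np.abs(after.astype(float) - before.astype(float))

# Toy example: a uniform surface with one retouched pixel
before = np.full((4, 4), 0.5)
after = before.copy()
after[1, 1] = 0.9  # simulated retouched area
diff = difference_map(before, after)
print(diff.max())  # largest change, at the retouched pixel
```

Thresholding such a difference map is one simple way to highlight retouched or cleaned regions between the 2015 and post-restoration acquisitions.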
Comparison of the front and backside before and after restoration.
The restored painting will be exhibited at The Louvre Museum during the exhibition “Revoir Van Eyck – La Vierge du chancelier Rolin” from March 20th to June 17th, 2024. In parallel with the exhibition’s opening, the gigapixel images produced by Universum Digitalis will be made publicly accessible on the Closer to Van Eyck website.
https://www.louvre.fr/en/what-s-on/exhibitions/a-new-look-at-jan-van-eyck

On July 2, 2021 at 16:30 Abel Diaz Berenguer will defend his PhD entitled “Learning to predict human behavior in crowded scenes”.
Automatically understanding human behavior is one of the most fundamental research topics towards socially aware vision-based autonomous systems. There is increasing interest in incorporating the social signal perspective into the learning system pipeline. This dissertation focuses on developing and incorporating computational mechanisms from Computer Vision and Machine Learning to automatically analyze and predict human behavior in crowded scenes. Our research specifically addresses public safety assisted by autonomous video surveillance systems, aiming to decrease the human labor dedicated to video monitoring.
Our research efforts concentrate on the information processing pipeline of learning systems that cope with human trajectory prediction and human behavior analysis in crowded scenes. We contribute to human trajectory prediction in crowded scenes with (i) a novel latent variable model, aware of human-human and human-context interactions, that predicts plausible trajectories, and (ii) a novel latent location-velocity recurrent model that predicts feasible future trajectories. Towards detecting anomalous human behavior, we adopt two unsupervised approaches, based on the scene’s dominant behavior and on the underlying properties of trajectories, to address trajectory-based anomaly detection. In addition, we contribute (iii) a supervised approach that learns discriminative sequence-based feature representations to recognize whether video sequences depict violent human behavior.
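To give a feel for the location-velocity view of trajectory prediction, the sketch below implements the standard constant-velocity baseline that learned models such as those in the thesis are typically compared against. It is an illustrative baseline only, not the latent recurrent model described above.

```python
import numpy as np

def constant_velocity_forecast(track, horizon):
    """Extrapolate a 2-D trajectory (T x 2 array of positions) by
    repeating the last observed velocity -- a standard baseline for
    human trajectory prediction in crowded scenes."""
    velocity = track[-1] - track[-2]           # last observed step
    steps = np.arange(1, horizon + 1)[:, None]
    return track[-1] + steps * velocity        # horizon x 2 future positions

# A pedestrian moving with a steady step of (1.0, 0.5) per frame
observed = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0]])
future = constant_velocity_forecast(observed, horizon=3)
print(future)  # [[3. 1.5] [4. 2. ] [5. 2.5]]
```

Learned models improve on this baseline precisely where it fails: when pedestrians turn, slow down, or adjust their path to avoid others, which is what the interaction-aware latent models are designed to capture.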
Extensive experiments on publicly available datasets demonstrate the effectiveness of our proposals.