“Signal Processing in the AI era” was the tagline of this year’s IEEE International Conference on Acoustics, Speech and Signal Processing, which took place in Rhodes, Greece.
In this context, Brent de Weerdt, Xiangyu Yang, Boris Joukovsky, Alex Stergiou and Nikos Deligiannis presented ETRO’s research during poster sessions and oral presentations, with novel ways to process and understand graph, video, and audio data. Nikos Deligiannis chaired a session on Graph Deep Learning, attended the IEEE T-IP Editorial Board Meeting, and had the opportunity to meet with collaborators from the VUB-Duke-Ugent-UCL joint lab.
Featured articles:

On February 8, 2023 at 16.00, Pratap Renukaswamy will defend his PhD entitled “PLL Modulation and Mixed-Signal Calibration Techniques for FMCW Chirp Synthesis”.
Everybody is invited to attend the presentation in room D.2.01, or through this link.
Radar sensors have evolved in the past decade from bulky systems to integrated solutions, driven by many applications in varying domains. Radar sensors are key components in self-driving cars, providing robust sensing capabilities in every weather condition. They allow contactless monitoring of vital signs such as breathing and heart rate. One of the latest applications is gesture recognition in recent smartphones.
The signals used in radar sensors are modulated signals: Frequency-Modulated Continuous-Wave (FMCW) is today the most widely used modulation. Here, a carrier frequency is linearly modulated to reach a maximum over a specified period. This waveform is called a chirp.
The key component to realize this is a frequency-chirping Phase-Locked Loop (PLL), which generates a clean sine wave with linearly increasing frequency. Many of the key performance criteria of the radar system are determined by the quality of the generated FMCW source. Any nonlinearity in the frequency-versus-time curve causes errors in the detected distance and speed. Any noise in the system prevents the detection of small targets hidden in the noise floor. The total available bandwidth (the difference between maximum and minimum frequency) that can be generated determines the range resolution of the radar: several GHz of bandwidth are required to detect targets with cm accuracy.
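The relation between chirp parameters and range resolution described above can be sketched numerically. The parameters below are illustrative placeholders, not the values used in the thesis:

```python
import numpy as np

# Illustrative FMCW chirp parameters (hypothetical, not the thesis's values)
f_start = 9.0e9              # chirp start frequency: 9 GHz
bandwidth = 2.0e9            # swept bandwidth B: 2 GHz
t_chirp = 100e-6             # chirp duration: 100 microseconds
slope = bandwidth / t_chirp  # frequency slope in Hz/s

# Range resolution is set by the swept bandwidth alone: dR = c / (2B).
# With B = 2 GHz this gives 7.5 cm, illustrating why several GHz of
# bandwidth are needed for cm-level accuracy.
c = 3.0e8
range_resolution = c / (2 * bandwidth)  # 0.075 m

# Instantaneous frequency of an ideal linear chirp, sampled over one period
t = np.linspace(0.0, t_chirp, 1001)
f_inst = f_start + slope * t

# The chirp phase is the integral of the instantaneous frequency:
# phi(t) = 2*pi*(f0*t + slope*t^2/2); the transmitted signal is cos(phi(t)).
phase = 2 * np.pi * (f_start * t + 0.5 * slope * t**2)
```

Any deviation of `f_inst` from this ideal linear ramp is exactly the nonlinearity that degrades distance and speed estimates, which is what the calibration techniques in the thesis target.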
To address these challenges, this thesis presents a PLL modulation architecture and circuit blocks for low-power, high-performance chirp synthesis, verified using two 28 nm CMOS prototype chips. The designs further push the performance of FMCW PLLs by combining innovative mixed-signal processing and calibration techniques with a Charge-Integrating Digital-to-Analog Converter (QDAC) as a key building block. The 10 GHz sub-sampling PLL prototype achieves a 23 MHz/µs chirp slope with 28 kHz rms FM error while consuming less than 12 mW, and the 16 GHz duty-cycled charge-pump PLL design achieves a 29 MHz/µs slope with an rms FM error below 41 kHz while consuming less than 16.5 mW.
Kick-off of the VLIR-UOS Short Initiative project NEST: Non-intrusive devices for Telemedicine.
The event occurred at the Escuela Superior Politécnica del Litoral (ESPOL) in Guayaquil (Ecuador) on the 20th of January. The kick-off event consisted of a seminar describing the project’s objectives for a broad audience. In the afternoon, training on the PPG EduKit took place for students, technical assistants, and professors of the engineering programs at ESPOL.

On June 14, 2022 at 14.00, Iman Marivani will defend his PhD entitled “Deep Unfolding Designs for Multimodal Image Restoration and Fusion”.
Everybody is invited to attend the presentation live in room K.2.56 (Building K, Humanities, Sciences & Engineering Campus) or online via this link.
Big datasets contain correlated heterogeneous data acquired by diverse modalities, e.g., photography, multispectral and infrared imaging, as well as computed tomography (CT), X-radiography, and ultrasound sensors in medical imaging and non-destructive testing. While some modalities can easily be captured in high resolution, in practice others are more susceptible to environmental noise and are mainly available in low resolution due to time constraints and the cost per pixel of the corresponding sensors. Hence, multimodal image restoration, which refers to the reconstruction of one modality guided by another, and multimodal image fusion, that is, the fusion of images from different sources into a single, more comprehensive one, are important computer vision problems. In this PhD research, we focus on designing deep unfolding networks for multimodal image restoration and fusion.
Analytical methods for image restoration and fusion rely on solving complex optimization problems at training and inference, making them computationally expensive. Deep learning methods can learn a nonlinear mapping between the input and the desired output from data, delivering high accuracy at a low computational cost during inference. However, existing deep models behave like black boxes and do not incorporate any prior knowledge. Recently, deep unfolding introduced the idea of integrating domain knowledge in the form of signal priors, e.g., sparsity, into single-modal neural network architectures. In this thesis, we present multimodal deep unfolding designs based on coupled convolutional sparse coding for multimodal image restoration and fusion. We propose two formulations of multimodal image restoration as coupled convolutional sparse coding problems. The first formulation assumes that the representations of the guidance modality are provided and fixed, while the second allows intermediate refinements of both modalities to produce a more suitable guidance representation for the reconstruction. We design two categories of multimodal CNNs by adopting two optimization techniques, i.e., proximal algorithms and the method of multipliers, for solving the corresponding sparse coding problems. We also design a multimodal image fusion model based on the second formulation. Our deep unfolding models are extensively evaluated on several benchmark multimodal image datasets for the applications of multimodal image super-resolution and denoising, as well as multi-focus and multi-exposure image fusion.
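The core idea of deep unfolding, mapping each iteration of a proximal algorithm onto a network layer, can be sketched in a minimal single-modal, non-convolutional form. The sketch below unfolds ISTA (iterative soft-thresholding) in the style of LISTA; the matrices `W_e` and `S` would normally be learned from data, and the thesis's coupled convolutional multimodal formulation is considerably more elaborate. All dimensions and the dictionary are made up for illustration:

```python
import numpy as np

def soft_threshold(x, theta):
    # Proximal operator of the l1 norm: shrinks values toward zero,
    # promoting sparsity in the representation
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unfolded_ista(y, W_e, S, theta, n_layers=10):
    """Each 'layer' is one proximal-gradient iteration.
    In a trained deep unfolding network, W_e, S, and theta are
    learnable parameters, possibly different per layer."""
    z = soft_threshold(W_e @ y, theta)
    for _ in range(n_layers - 1):
        z = soft_threshold(W_e @ y + S @ z, theta)
    return z

# Toy setup: derive W_e and S from a random dictionary D (fixed here,
# not learned), so each layer is exactly one ISTA step for
# min_z 0.5*||y - D z||^2 + lam*||z||_1
rng = np.random.default_rng(0)
m, n = 8, 16
D = rng.standard_normal((m, n))
L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
W_e = D.T / L
S = np.eye(n) - (D.T @ D) / L
y = rng.standard_normal(m)
z = unfolded_ista(y, W_e, S, theta=0.1 / L)
```

The multimodal designs in the thesis replace the matrix products with convolutions and couple two such sparse codes, one per modality, so that the guidance modality shapes the reconstruction of the target modality.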
eTailor, an Innoviris-funded project of Treedy’s and ETRO-VUB, develops a full-body scanner that estimates your size without you even having to take your clothes off. It will be deployed in Decathlon shops (and not only there) worldwide.
The IP behind this technology is partly shared between VUB and Treedy’s. A joint VUB-Treedy’s patent was recently granted, covering the technology that estimates the body shape under clothing and automatically takes measurements of each scanned person.
eTailor is an example of how an industrial project should run: it achieves both academic and industrial excellence.
The motivation letter is very important: it should clearly describe your background, experience, and career goals.