“Signal Processing in the AI era” was the tagline of this year’s IEEE International Conference on Acoustics, Speech and Signal Processing, taking place in Rhodes, Greece.
In this context, Brent de Weerdt, Xiangyu Yang, Boris Joukovsky, Alex Stergiou and Nikos Deligiannis presented ETRO’s research during poster sessions and oral presentations, with novel ways to process and understand graph, video, and audio data. Nikos Deligiannis chaired a session on Graph Deep Learning, attended the IEEE T-IP Editorial Board Meeting, and had the opportunity to meet with collaborators from the VUB-Duke-UGent-UCL joint lab.
Featured articles:

On August 31 2021 at 16.00 Ayyoub Ahar will defend his PhD entitled “Visual Quality Assessment and Analysis for Digital Holography”. A link to the online defence will be made available later.
Since its invention in 1948, holography has held the promise of full-parallax 3D visualization. An immersive 3D experience requires dynamic holography that can continuously deliver a wide field of view and full parallax at high spatial resolution. Yet only in recent years has holography returned to the forefront of 3D visualization technologies: several signal processing challenges, alongside hardware bottlenecks, have prevented the technology from being used effectively for visualization, especially for 3D scenes at macroscopic scale and beyond.
One of the core remaining challenges is the quality assessment of holograms and the perceptual quality analysis of their reconstructions. Despite its vital role in steering the other components of the holographic processing pipeline, visual quality assessment of holograms has yet to reach its primary milestones. The main contributing issues include the presence of speckle noise, the lack of comprehensive, perceptually annotated holographic datasets, the complexity of fidelity measurements on complex-valued data, and the difficulty of predicting the perceptual quality of a 3D scene rendered from the heavily noisy fringe patterns of the holographic complex wavefield. Furthermore, efficient representation of holograms via novel, customized mathematical transforms and algorithms is an active topic, because the mathematical properties and statistics of holographic content diverge substantially from those of natural photographic imagery. Although a handful of experiments have measured the effect of quantization on the reconstruction quality of holographic signals, little formal information is available on how the human visual system perceives reconstruction errors. Moreover, knowledge from current 2D and 3D perception research can only partially be extrapolated to a holographic setting, and mature rendering devices are still missing. These complications make parameterizing the quality perception of digital holograms a highly exploratory, high-risk process.
From a global perspective, this research track covers the components needed for (1) modeling the behavior of the human visual system through psychovisual experiments, (2) subjective quality testing procedures, (3) performance analysis of the available quality measures on holographic content, and (4) the design of related perceptual quality metrics. Along the way, several intermediary issues are also addressed in order to fulfil these objectives. Consequently, this dissertation provides several necessary building blocks for designing cutting-edge perceptual quality prediction algorithms and paves the way for further advances in this new topic.
On June 21 2023 at 16.00, Nicolas Ospitia Patino will defend his PhD entitled “Unraveling Textile-Reinforced Cementitious Composites by Means of Multimodal Sensing Techniques”.
Everybody is invited to attend the presentation in room D.0.08.
Textile Reinforced Cementitious (TRC) sandwich composites are innovative construction materials composed of two slender TRC facings and a thick, thermally and acoustically insulating core. Their non-corrosive nature allows for slender structures, reducing the amount of cement used and therefore the negative impact on the environment. The sandwich technology brings superior bending resistance while preserving the lightweight nature of the composite. Despite their numerous advantages, TRC sandwich composites exhibit complex and possibly unpredictable fracture behavior and suffer from manufacturing issues such as a weak interlaminar bond. They therefore need status verification at the different stages of their service life: during manufacturing (curing), as a final product (manufacturing defects), and during use (damage accumulation). To date, there is no reliable non-invasive inspection protocol that assesses the curing of the cementitious facings, provides quality control, and monitors damage.
Throughout this study, a combination of Non-Destructive Testing (NDT) techniques is employed to establish a protocol that monitors the composite from the hardening of the cementitious facings, through quality control, to damage characterization. Electromagnetic millimeter-wave (MMW) spectrometry is employed for the first time in this kind of material to monitor the hydration of cementitious media, provide quality control, and characterize damage. Additionally, passive and active elastic-wave-based NDT techniques, namely Acoustic Emission (AE) and ultrasound, are used in combination with Digital Image Correlation (DIC) to characterize the material over its lifetime and to benchmark MMW spectrometry. This thesis summarizes the results of an extensive experimental campaign, highlighting the innovative contributions. Previously unknown relations between electromagnetic properties measured by MMW and mechanical properties measured by ultrasound are revealed, owing to the common hydration reaction that dictates both permittivity and stiffness development. AE during proof loading reveals the effect of manufacturing defects through the local stress-field variations they impose under mechanical testing. In addition, cracking and debonding leave a strong fingerprint on the electromagnetic transmission, enabling a multi-spectral methodology for the structural health monitoring (SHM) of such innovative components during their lifetime.
For the second consecutive year, two students of the Master’s program in Biomedical Engineering won ie-net awards. Florence Muller took first place among the nominated Flemish students of the faculties of engineering sciences, bio-engineering sciences, industrial engineering sciences and applied engineering sciences, and Kristýna Holková won the third prize.
Each nominated graduate of one of these faculties must belong to the top 20% of their faculty. The best 45 candidates (15 bio-engineers, 15 civil engineers and 15 industrial engineers) are admitted to the final round after evaluation by the jury. The engineers with the highest scores win the ie-net prizes.
Congratulations to our alumni Florence Muller and Kristýna Holková!
https://www.ugent.be/ea/nl/actueel/nieuws/ie-net-prijzen-florence-muller

Knowledge Engineering in Diagnostic Imaging – a major project on AI in medical image analysis, and the foundation of ETRO’s internal ICT computer network.
Fairchild became the first company to produce a commercial charge-coupled device.
On April 21 2023 at 14.00, Quentin Bolsée will defend his PhD entitled “Calibration and Preprocessing of Light Field and Multiview Depth Systems”.
Everybody is invited to attend the presentation in room I.2.01 or via this link.
Recently, there has been increasing demand for high-quality 3D content, yet a gap remains between this demand and what real-time depth sensors can deliver. Active sensors such as Time-of-Flight cameras still produce excessively noisy data, while passive technology (photogrammetry, light fields) coupled with depth estimation is nowhere near real time and still leaves information missing for challenging scenes. Deep learning has shown promising results in both areas, although the actual properties of physical sensors are almost always neglected.
In this work, the properties of multiview depth camera setups are thoroughly examined with the aim of producing a high-quality geometry acquisition system. First, a novel calibration step is proposed for the global optimization of the multiple camera parameters, using a custom 3D object covered with ChArUco markers. The noise models are then discussed, and a residual-learning convolutional neural network is shown to greatly reduce this noise. When merging the results from several cameras, a novel refinement step is applied with a PointNet-like neural network constrained to shift 3D points along their viewing rays. This provides a correction to the depth map that preserves the pixel structure while harnessing the properties of natural 3D surfaces and observations from other cameras. Combined with the preprocessing by the convolutional neural network and flying-pixel removal, this approach is shown to outperform state-of-the-art noise removal methods in both the depth-map and 3D domains.
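The geometric idea behind the ray-constrained refinement can be sketched in a few lines of NumPy. This is an illustrative sketch, not the thesis’ implementation: `K` is an assumed pinhole intrinsic matrix, and the per-pixel offset `delta` stands in for the PointNet-like network’s predicted correction.

```python
import numpy as np

def backproject(depth, K):
    """Back-project a depth map into 3D points, one per pixel, along each viewing ray."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(float)  # homogeneous pixel coords
    rays = pix @ np.linalg.inv(K).T        # one viewing ray per pixel (z-component = 1)
    return rays * depth[..., None]         # scale each ray by its depth

def refine_along_rays(depth, K, delta):
    """Shift each 3D point along its own viewing ray by a depth offset.

    `delta` is a stand-in for the network's per-pixel prediction; because the
    correction moves points only along their rays, the depth map's pixel
    structure is preserved.
    """
    return backproject(depth + delta, K)
```

Constraining the correction to one scalar per ray is what keeps the refined point cloud in one-to-one correspondence with the depth-map pixels.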
In the second part of the thesis, the properties of light field systems are discussed, and a new geometrical model is proposed for calibrating the microlens arrays in modern light field cameras. Unlike previous works, lens distortion parameters are added to the description of the microlens, leading to a non-constant baseline in the virtual camera array. The calibrated model is shown to outperform the state of the art when applied to stereo-matching depth estimation. The topic of depth estimation is studied further by showcasing a new 3D-convolution-based neural network successfully applied to synthetic light field datasets. Its main advantage is a significant reduction in the number of training parameters, achieved by treating the camera index as a third dimension and exploiting its isotropy. Finally, a motorized 2-DOF device for spherical light field acquisition is presented and calibrated with a 3D object similar to the one previously described for multiview depth systems. Global optimization of the sphere and camera parameters leads to sub-pixel accuracy and high-quality depth estimation. These results are confirmed by comparing a captured image with its reconstruction from neighboring virtual cameras using depth-based view synthesis.
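Why adding distortion to the microlens description yields a non-constant baseline can be illustrated with a minimal 1D sketch. The single radial coefficient `k1` and the function below are hypothetical simplifications, not the thesis’ actual camera model:

```python
import numpy as np

def microlens_centers(n, pitch, k1):
    """Positions of n virtual camera centers along one axis of a microlens array.

    An ideal array has a regular grid with constant pitch; warping it with a
    single (hypothetical) radial-distortion coefficient k1 makes the effective
    baseline between neighboring virtual cameras vary with distance from the
    array center.
    """
    idx = np.arange(n) - (n - 1) / 2.0
    x = idx * pitch              # ideal, constant-pitch positions
    return x * (1.0 + k1 * x**2)  # radially distorted positions

```

With `k1 = 0` the pairwise baselines `np.diff(microlens_centers(...))` are all equal to the pitch; any nonzero `k1` makes them depend on position, which is exactly the effect a stereo-matching pipeline must account for.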