“Signal Processing in the AI era” was the tagline of this year’s IEEE International Conference on Acoustics, Speech and Signal Processing, which took place in Rhodes, Greece.
In this context, Brent de Weerdt, Xiangyu Yang, Boris Joukovsky, Alex Stergiou and Nikos Deligiannis presented ETRO’s research during poster sessions and oral presentations, with novel ways to process and understand graph, video, and audio data. Nikos Deligiannis chaired a session on Graph Deep Learning, attended the IEEE T-IP Editorial Board Meeting, and had the opportunity to meet with collaborators from the VUB-Duke-Ugent-UCL joint lab.
Featured articles:
On May 25 2022 at 10.30, Mathias Polfliet will defend his PhD entitled “Advances in Groupwise Image Registration”.
Everybody is invited to attend the presentation live (in room Prof. A. Queridozaal, Faculty building Erasmus MC, ’s-Gravendijkwal 230, 3015 CE Rotterdam) or online via this link.
This thesis deals with advances in groupwise image registration. Image registration remains an important task in medical image analysis. Whereas most methods are designed for the registration of two images (pairwise registration), there is an increasing interest in simultaneously aligning more than two images using groupwise registration given the increasing availability of medical imaging data, both at the individual and the population level. Groupwise image registration has shown promise in a number of applications dealing with large quantities of data, among others to increase registration accuracy and robustness, to improve the transformation smoothness and to reduce the methodological bias compared to pairwise registrations. However, directly comparing groupwise registrations to conventional repeated pairwise registrations is difficult due to several confounding factors impacting the algorithm. In this thesis, as a first contribution, we rigorously evaluate two registration methodologies in several experiments and investigate the differences in performance. Secondly, we fill a gap in current literature on efficient (dis)similarity measures for multimodal groupwise image registration. These two contributions are distributed over four chapters.
In Chapter 3, we investigate several registration approaches for the alignment of CT and MRI acquisitions of the mandible in patients with oral squamous cell carcinoma. A comparison is made between rigid and non-rigid approaches with symmetric and asymmetric transformation strategies. The results suggest improved registration accuracy for a symmetric transformation strategy compared to an asymmetric one; however, the differences were not statistically significant (p=0.054). For this clinical application, we conclude that a rigid registration method is the recommended approach.
In Chapter 4, we investigate different template images for groupwise registration based on mutual information. Here, the template serves as a representative image to which every image in the group is compared in terms of (dis)similarity. We show that the entropy of the template image can contribute counter-intuitively to the global dissimilarity value. Additionally, we show that groupwise and repeated pairwise approaches can achieve equivalent registration accuracy.
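To make the role of the template concrete, here is a minimal sketch of a mutual-information-based groupwise dissimilarity. The function names, the histogram binning, and the plain-average aggregation are illustrative assumptions, not the thesis implementation:

```python
import numpy as np

def entropy(hist):
    """Shannon entropy (in nats) of a normalized histogram."""
    p = hist[hist > 0]
    return -np.sum(p * np.log(p))

def mutual_information(image_a, image_b, bins=32):
    """Mutual information between two images via their joint intensity histogram."""
    joint, _, _ = np.histogram2d(image_a.ravel(), image_b.ravel(), bins=bins)
    joint /= joint.sum()
    h_a = entropy(joint.sum(axis=1))   # H(A), marginal over rows
    h_b = entropy(joint.sum(axis=0))   # H(B), marginal over columns
    h_ab = entropy(joint)              # H(A, B), joint
    return h_a + h_b - h_ab            # MI(A, B) = H(A) + H(B) - H(A, B)

def groupwise_dissimilarity(images, template, bins=32):
    """Negated average MI of each group image with the template:
    lower dissimilarity corresponds to better alignment."""
    return -np.mean([mutual_information(im, template, bins) for im in images])
```

Note that the template's entropy H(T) enters every pairwise term, which is precisely how the template can contribute counter-intuitively to the global dissimilarity value.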
In Chapter 5, a novel similarity measure is introduced for multimodal groupwise registration. The conditional template entropy is the negated average of the pairwise conditional entropies between each image of the group and a template image, which is constructed based on principal component analysis. We show improved or equivalent accuracy compared to other state-of-the-art (dis)similarity measures for multimodal groupwise registration and repeated pairwise registration. Furthermore, groupwise registration vastly outperforms repeated pairwise registration in terms of the transitivity error, which can be interpreted as a measure of the consistency of the transformations in a groupwise setting.
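A minimal sketch of the idea follows. The first principal component stands in for the thesis's PCA-based template construction, and the averaging is unweighted; both are simplifying assumptions, so the actual method may differ in its details:

```python
import numpy as np

def conditional_entropy(image, template, bins=32):
    """H(I | T) = H(I, T) - H(T), estimated from the joint histogram (in nats)."""
    joint, _, _ = np.histogram2d(image.ravel(), template.ravel(), bins=bins)
    joint /= joint.sum()
    p_joint = joint[joint > 0]
    h_joint = -np.sum(p_joint * np.log(p_joint))
    p_t = joint.sum(axis=0)            # marginal of the template
    p_t = p_t[p_t > 0]
    h_t = -np.sum(p_t * np.log(p_t))
    return h_joint - h_t

def conditional_template_entropy(images):
    """Similarity: negated average conditional entropy of each group image
    given a PCA-derived template (here: the first principal component)."""
    stack = np.stack([im.ravel() for im in images])   # (n_images, n_voxels)
    centered = stack - stack.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    template = vt[0].reshape(images[0].shape)         # stand-in PCA template
    return -np.mean([conditional_entropy(im, template) for im in images])
```

A perfectly predictable pair gives zero conditional entropy (an image conditioned on itself), which is why maximizing this similarity drives the group toward alignment.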
In Chapter 6, to further improve the efficiency of multimodal groupwise registration, we propose a novel dissimilarity measure that is especially adept at registering large groups of images. The measure is formulated as the second-smallest eigenvalue of the generalized eigenvalue problem arising in the description of Laplacian eigenmaps. We show that the measure's computation time depends only weakly on the number of images in the group, and that it achieves equivalent or improved registration accuracy compared to state-of-the-art groupwise (dis)similarity measures.
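As a sketch of the computation: each voxel becomes a point whose coordinates are its intensities across the group, so the size of the eigenvalue problem depends on the number of voxels rather than the number of images. The dense Gaussian affinity graph and the kernel width below are illustrative assumptions; the thesis's exact graph construction may differ:

```python
import numpy as np

def laplacian_eigenmaps_dissimilarity(images, sigma=1.0):
    """Second-smallest eigenvalue of the generalized problem L v = lambda D v,
    computed via the equivalent symmetric normalized Laplacian."""
    # Each voxel is a point in R^n, with n the number of images in the group.
    x = np.stack([im.ravel() for im in images], axis=1)    # (n_voxels, n_images)
    d2 = np.square(x[:, None, :] - x[None, :, :]).sum(-1)  # pairwise squared distances
    w = np.exp(-d2 / (2.0 * sigma**2))                     # Gaussian affinities
    np.fill_diagonal(w, 0.0)
    deg = w.sum(axis=1)                                    # degree matrix D (diagonal)
    d_inv_sqrt = 1.0 / np.sqrt(deg)
    # Eigenvalues of (D - W, D) equal those of I - D^{-1/2} W D^{-1/2}.
    lap_sym = np.eye(len(x)) - (w * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    return np.linalg.eigvalsh(lap_sym)[1]                  # second-smallest eigenvalue
```

Since only the second-smallest eigenvalue is needed, a practical implementation could use a sparse k-nearest-neighbour graph and a partial eigensolver instead of the dense computation shown here.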
To summarize, in this work we evaluate groupwise approaches against repeated pairwise approaches and show mostly equivalent registration accuracy and robustness, with improved transitivity for groupwise registration. Furthermore, we recommend the proposed dissimilarity measure based on Laplacian eigenmaps for large groups of images, given its superior or equivalent registration accuracy compared to other measures and its superior scaling of execution time with the number of images in the group.
PROJECT: INTOWALL: detection of leaks and insulation in walls
Our buildings account for 35% of CO2 emissions and 40% of energy use, and 75% of them are in need of energy renovation. To renovate effectively, one needs to know the status of the current insulation and whether there are any water leaks. The Vrije Universiteit Brussel, together with a consortium of partners, is developing a unique, patented technique to look inside walls without breaking them: with a compact, mobile radar system you can “read” your walls. This project is part of the Smart Hub Cleantech and the climate ambitions of the province and the municipalities.
Partners: Vrije Universiteit Brussel in collaboration with WTCB, Green Energy Park, BAM, ING, Flux50 and Alter Reim.
Happy kids visited the ETRO “Build your climate-proof LEGO city” booth at CurieuCity, which was also broadcast on Bruzz TV this weekend.
https://curieucity.brussels/nl/build-your-climate-resistant-city-of-the-future/
On September 17 2021 at 16.00, Jakub Ceranka will defend his PhD entitled “Advancements in Whole-Body Multi-Modal MRI: Towards Computer-Aided Diagnosis of Metastatic Bone Disease”.
Everybody is invited to attend the online presentation via this Teams link.
Cancer that begins in an organ, such as the lungs, breast or prostate, and then spreads to the bone or other organs, marks the beginning of metastatic disease. The confident detection of metastatic bone disease and the reliable assessment of tumour load and treatment response are essential to improve patients’ quality of life and increase life expectancy. Magnetic resonance imaging (MRI) has been successfully used for the monitoring of metastatic bone disease. Anatomical whole-body sequences offer excellent resolution and sensitivity for the detection of neoplastic cells within the bone marrow. In combination with spatially prealigned functional diffusion-weighted whole-body MRI and apparent diffusion coefficient maps, this allows for a focused, efficient, multi-parametric and holistic evaluation of total tumour volume, diffusion volume and treatment response. One of the major challenges of radiological reading of whole-body MRI in the clinical routine is the large amount of data to be reviewed, which makes lesion detection and quantification demanding for a radiologist and prone to error. Additionally, whole-body MR images are often corrupted by multiple spatial and intensity artifacts, which degrade the performance of medical image processing algorithms.
This PhD thesis proposes a number of contributions in the medical image processing domain aimed at improving the quality and extending the usability of whole-body multi-modal MRI in the clinical routine. These include spatial groupwise image registration (to align multiple MRI modalities), multi-atlas segmentation (to define the skeleton region of interest), image standardization (to map MRI intensities into comparable ranges) and a deep learning framework for the detection and segmentation of metastatic bone disease, the pathology of choice for this work. Combined, the proposed contributions provide the building blocks for a fully automated computer-aided diagnosis (CAD) system for the detection and segmentation of metastatic bone disease using whole-body multi-modal MRI. Finally, an ablation study describing the impact of the different CAD system components on detection and segmentation accuracy is provided.
On February 9 2023 at 16.00, Nastaran Nourbakhsh will defend her PhD entitled “Automated Extraction of Body Biometrics Based on Deep Learning”.
Everybody is invited to attend the presentation in room D.2.01 or via this link.
3D Anthropometric measurement extraction is of paramount importance for several applications such as clothing design, online garment shopping, and medical diagnosis, to name a few. State-of-the-art 3D anthropometric measurement extraction methods estimate the measurements either through some landmarks found on the input scan or by fitting a template to the input scan using optimization-based techniques. Finding landmarks is very sensitive to noise and missing data. Template-based methods address this problem, but the employed optimization-based template fitting algorithms are computationally very complex and time-consuming. To address the limitations of existing methods, we propose two automatic measurement extraction frameworks: AM-DL and Anet.
To the best of our knowledge, AM-DL (Anthropometric Measurement extraction based on Deep Learning) is the first approach for automatic, contact-less anthropometric measurement extraction based on deep learning. A novel module dubbed Multi-scale EdgeConv is proposed to learn local features from point clouds at multiple scales. Multi-scale EdgeConv can be directly integrated into other neural networks for various tasks, e.g., the classification of point clouds. We exploit this module to design an encoder-decoder architecture that learns to deform a template model to fit a given scan. The measurement values are then calculated on the deformed template model. Experimental results on the synthetic ModelNet40 dataset and on real scans demonstrate that the proposed method outperforms state-of-the-art methods and performs sufficiently close to a professional tailor. However, this method requires a post-processing step for transferring and refining the measurements from the template to the deformed template.
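EdgeConv itself is a known point-cloud operation (from dynamic graph CNNs): per-edge features built from a point and its offsets to its k nearest neighbours are mapped through a learned layer and max-pooled. A multi-scale variant can be sketched by running it at several neighbourhood sizes and concatenating the results. Everything below (function names, the single linear layer with ReLU, the choice of scales) is an illustrative assumption, not the thesis's exact module:

```python
import numpy as np

def knn_indices(points, k):
    """Indices of the k nearest neighbours of each point (excluding itself)."""
    d2 = np.square(points[:, None, :] - points[None, :, :]).sum(-1)
    np.fill_diagonal(d2, np.inf)
    return np.argsort(d2, axis=1)[:, :k]

def edge_conv(points, k, weight):
    """EdgeConv: per-edge features [x_i, x_j - x_i] -> linear map + ReLU
    -> max over the k neighbours."""
    idx = knn_indices(points, k)                        # (n, k)
    neighbours = points[idx]                            # (n, k, d)
    center = np.repeat(points[:, None, :], k, axis=1)   # (n, k, d)
    edge_feat = np.concatenate([center, neighbours - center], axis=-1)  # (n, k, 2d)
    return np.maximum(edge_feat @ weight, 0.0).max(axis=1)              # (n, c_out)

def multi_scale_edge_conv(points, scales, weights):
    """Hypothetical multi-scale variant: EdgeConv at several values of k,
    with the per-scale features concatenated."""
    return np.concatenate(
        [edge_conv(points, k, w) for k, w in zip(scales, weights)], axis=-1)
```

In AM-DL, features of this kind would feed the encoder-decoder that deforms the template model toward the input scan.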
The second proposed method is Anet, a deep neural network that fits a template to the input scan and outputs the reconstructed body as well as the corresponding measurements. Unlike existing template-based anthropometric measurement extraction methods, including AM-DL, the proposed approach does not need to transfer and refine the measurements from the template to the deformed template, making it faster and more accurate. A novel loss function, specifically developed for 3D anthropometric measurement extraction, is introduced. Additionally, two large datasets of complete and partial front-facing scans are proposed and used in training. This results in two models, dubbed Anet-complete and Anet-partial, which extract the body measurements from complete and partial front-facing scans, respectively. Experimental results on synthesized data as well as on real 3D scans captured by a photogrammetry-based scanner, an Azure Kinect sensor, and the very recent TrueDepth camera system demonstrate that the proposed approach systematically outperforms the state-of-the-art methods in terms of accuracy and robustness.
The human hand and foot are among the most mechanically complex human body parts: the hand consists of 34 muscles and 27 bones, a quarter of the bones in the human body, which means the hand can take on varied shapes and complicated poses. The human foot includes 26 bones, 33 joints and over 100 muscles, ligaments and tendons. This makes scanning and measuring hundreds of thousands of real hand and foot subjects very time-consuming, inaccurate, and difficult. Therefore, we propose two methods trained separately for foot and hand measurement extraction. To this end, we adapt AM-DL and Anet to hand and foot measurement extraction by adjusting the proposed loss function and synthesizing a new large set of synthetic hand and foot samples. Experimental results on both synthetic data and real scans captured by the Occipital Structure Sensor Mark I and Pro demonstrate that the proposed methods outperform the state-of-the-art methods in terms of accuracy and speed.