Glioblastoma is the most common and most aggressive form of primary brain tumors. As the margins for maximal safe resection and radiotherapy planning are currently based on the contrast-enhancing (CE) lesion defined on magnetic resonance imaging (MRI), tumor cells that infiltrate the healthy tissue beyond the CE component are not targeted during treatment and become a source of tumor recurrence, to which the majority of patients ultimately fall victim. Despite this understanding, the detection of infiltration on medical imaging remains elusive. This work aimed to leverage the power of artificial intelligence to reveal complex data patterns for achieving a better segmentation of glioblastoma and its infiltration from multi-modal medical imaging.

Initially, an intuitive, probabilistic segmentation approach was explored to challenge the conventional and omnipresent deterministic segmentation of glioma on MRI, whose binary labels do not properly reflect the underlying biology of a tumor with such diffuse growth. This brain tissue mapping model, based on conventional MRI sequences, including T1-weighted (T1), contrast-enhanced T1-weighted (T1CE), T2-weighted (T2) and T2-fluid-attenuated inversion recovery (FLAIR), was evaluated in terms of segmentation accuracy of the tumor and its subregions, and in the visualization of possible infiltration. While the model achieved good accuracy for detection of the whole tumor, lower accuracy was found for tumor subregions compared to state-of-the-art deep learning (DL) models. Visual inspection of tumor probability maps revealed high probability values within the CE lesion, with lower values extending into the surrounding edema region. This pattern aligns with our hypothesis that these probability maps potentially reflect cell density and can model gradual tissue transitions. However, interpretation remained ambiguous due to the limited segmentation accuracy and the lack of validation possibilities.

Subsequently, we explored DL models for deterministic segmentation tasks. While state-of-the-art automated glioblastoma segmentation is traditionally performed on a four-modality input, we postulate that information redundancy is present within combinations of these images, possibly reducing the performance of these models. In addition, the risk of encountering missing data rises as the number of required input modalities increases. Therefore, through the evaluation of segmentation accuracy and epistemic uncertainty of multiple segmentation models, differing only in their number and combinations of input modalities, the relevance of each modality to glioblastoma segmentation was brought to light. Results showed that T1CE and FLAIR were sufficient to reach accuracies comparable to the four-modality model, and can serve as a minimal-input alternative to the full-input configuration. While additional modalities beyond these did not improve – and even deteriorated – accuracy, their presence was found to reduce segmentation uncertainty.

Although, according to multiple biopsy-controlled studies, positron emission tomography (PET) with the amino acid tracer O-(2-[18F]fluoroethyl)-L-tyrosine ([18F]FET) reportedly allows a better estimation of the tumor extent compared to the CE boundaries found on MRI, the automated segmentation of the lesion on this imaging modality is ill studied. Therefore, we explored the use of deep learning as a robust and automated method for glioblastoma segmentation from this imaging modality. While results comparable to the current state of the art were obtained, our results indicate that the lack of a reproducible ground truth restricts the maximum achievable accuracy for automated glioblastoma segmentation on [18F]FET PET for all networks.

The previous works lay the foundation for the integration of information from both MRI and [18F]FET PET, which allows a more comprehensive characterization of the tumor's composition and an estimation of the infiltrative region. To demonstrate this vision, we explored the simultaneous segmentation of labels defined on MRI and [18F]FET PET, with the PET-positive lesion beyond the CE region functioning as a surrogate for infiltration labeling. Moreover, we investigated the possibility of predicting such a label from MRI alone, allowing a better definition of the tumor's extent while eliminating the need for PET acquisition. Although the results indicate that the investigated approaches can partially compensate for the absence of PET information, prediction based solely on MRI does not achieve the needed accuracy, supporting the inclusion of [18F]FET PET acquisition in glioblastoma management.

In conclusion, this study highlights the potential of artificial intelligence to enhance glioblastoma imaging analysis and explores tools that build towards improved detection of infiltration. By examining probabilistic and deep learning models, we identified critical MRI modalities and explored [18F]FET PET's ability to better delineate a tumor's extent. Our findings set the stage for a combined MRI-PET approach, aiming to improve tumor characterization and guide more effective clinical strategies.
De Sutter, S 2025, 'Segmentation of glioblastoma from multi-modal medical imaging: Towards revealing tumor infiltration', Vrije Universiteit Brussel, Brussels.
De Sutter, S. (2025). Segmentation of glioblastoma from multi-modal medical imaging: Towards revealing tumor infiltration. [PhD Thesis, Vrije Universiteit Brussel]. Crazy Copy Center Productions.
@phdthesis{9a74e654f95240c08998738e89cd2d28,
title = "Segmentation of glioblastoma from multi-modal medical imaging: Towards revealing tumor infiltration",
abstract = "Glioblastoma is the most common and most aggressive form of primary brain tumors. As the margins for maximal safe resection and radiotherapy planning are currently based on the contrast-enhancing (CE) lesion defined on magnetic resonance imaging (MRI), tumor cells that infiltrate the healthy tissue beyond the CE component are not targeted during treatment and become a source of tumor recurrence, to which the majority of patients ultimately fall victim. Despite this understanding, the detection of infiltration on medical imaging remains elusive. This work aimed to leverage the power of artificial intelligence to reveal complex data patterns for achieving a better segmentation of glioblastoma and its infiltration from multi-modal medical imaging. Initially, an intuitive, probabilistic segmentation approach was explored to challenge the conventional and omnipresent deterministic segmentation of glioma on MRI, whose binary labels do not properly reflect the underlying biology of a tumor with such diffuse growth. This brain tissue mapping model, based on conventional MRI sequences, including T1-weighted (T1), contrast-enhanced T1-weighted (T1CE), T2-weighted (T2) and T2-fluid-attenuated inversion recovery (FLAIR), was evaluated in terms of segmentation accuracy of the tumor and its subregions, and in the visualization of possible infiltration. While the model achieved good accuracy for detection of the whole tumor, lower accuracy was found for tumor subregions compared to state-of-the-art deep learning (DL) models. Visual inspection of tumor probability maps revealed high probability values within the CE lesion, with lower values extending into the surrounding edema region. This pattern aligns with our hypothesis that these probability maps potentially reflect cell density and can model gradual tissue transitions. However, interpretation remained ambiguous due to the limited segmentation accuracy and the lack of validation possibilities. Subsequently, we explored DL models for deterministic segmentation tasks. While state-of-the-art automated glioblastoma segmentation is traditionally performed on a four-modality input, we postulate that information redundancy is present within combinations of these images, possibly reducing the performance of these models. In addition, the risk of encountering missing data rises as the number of required input modalities increases. Therefore, through the evaluation of segmentation accuracy and epistemic uncertainty of multiple segmentation models, differing only in their number and combinations of input modalities, the relevance of each modality to glioblastoma segmentation was brought to light. Results showed that T1CE and FLAIR were sufficient to reach accuracies comparable to the four-modality model, and can serve as a minimal-input alternative to the full-input configuration. While additional modalities beyond these did not improve – and even deteriorated – accuracy, their presence was found to reduce segmentation uncertainty. Although, according to multiple biopsy-controlled studies, positron emission tomography (PET) with the amino acid tracer O-(2-[18F]fluoroethyl)-L-tyrosine ([18F]FET) reportedly allows a better estimation of the tumor extent compared to the CE boundaries found on MRI, the automated segmentation of the lesion on this imaging modality is ill studied. Therefore, we explored the use of deep learning as a robust and automated method for glioblastoma segmentation from this imaging modality. While results comparable to the current state of the art were obtained, our results indicate that the lack of a reproducible ground truth restricts the maximum achievable accuracy for automated glioblastoma segmentation on [18F]FET PET for all networks. The previous works lay the foundation for the integration of information from both MRI and [18F]FET PET, which allows a more comprehensive characterization of the tumor{\textquoteright}s composition and an estimation of the infiltrative region. To demonstrate this vision, we explored the simultaneous segmentation of labels defined on MRI and [18F]FET PET, with the PET-positive lesion beyond the CE region functioning as a surrogate for infiltration labeling. Moreover, we investigated the possibility of predicting such a label from MRI alone, allowing a better definition of the tumor{\textquoteright}s extent while eliminating the need for PET acquisition. Although the results indicate that the investigated approaches can partially compensate for the absence of PET information, prediction based solely on MRI does not achieve the needed accuracy, supporting the inclusion of [18F]FET PET acquisition in glioblastoma management. In conclusion, this study highlights the potential of artificial intelligence to enhance glioblastoma imaging analysis and explores tools that build towards improved detection of infiltration. By examining probabilistic and deep learning models, we identified critical MRI modalities and explored [18F]FET PET{\textquoteright}s ability to better delineate a tumor{\textquoteright}s extent. Our findings set the stage for a combined MRI-PET approach, aiming to improve tumor characterization and guide more effective clinical strategies.",
author = "{De Sutter}, Selene",
year = "2025",
language = "English",
isbn = "9789464948851",
publisher = "Crazy Copy Center Productions",
address = "Brussels, Belgium",
school = "Vrije Universiteit Brussel",
}