Dhont, J, Verellen, D, Mollaert, I, Vanreusel, V & Vandemeulebroucke, J 2020, 'RealDRR - Rendering of realistic digitally reconstructed radiographs using locally trained image-to-image translation', Radiotherapy and Oncology, vol. 153, pp. 213-219. https://doi.org/10.1016/j.radonc.2020.10.004
@article{839e8d7b9f4e4fa3a5febfd88dc8fb02,
title = "RealDRR - Rendering of realistic digitally reconstructed radiographs using locally trained image-to-image translation",
abstract = "Introduction: Digitally reconstructed radiographs (DRRs) represent valuable patient-specific pre-treatment training data for tumor tracking algorithms. However, current rendering methods produce DRRs with limited similarity to real X-ray images, require time-consuming measurements and/or are computationally expensive. In this study, we present RealDRR, a novel framework for highly realistic and computationally efficient DRR rendering. Materials and methods: RealDRR consists of two components applied sequentially to render a DRR. First, a raytracer performs forward projection from 3D CT data to a 2D image. Second, a conditional Generative Adversarial Network (cGAN) translates the 2D forward projection into a realistic 2D DRR. The planning CT and CBCT projections from a CIRS thorax phantom and 6 radiotherapy patients (3 prostate, 3 brain) were split into training and test sets to evaluate the intra-patient, inter-patient and inter-anatomical-region generalization performance of the trained framework. Several image similarity metrics, as well as a verification based on template matching, were computed between the rendered DRRs and the respective CBCT projections in the test sets, and the results were compared to those of a current state-of-the-art DRR rendering method. Results: When trained on 800 CBCT projection images from two patients and tested on a third, unseen patient from either anatomical region, RealDRR outperformed the current state-of-the-art with statistical significance on all metrics (two-sample t-test, p < 0.05). Once trained, the framework is able to render 100 highly realistic DRRs in under two minutes. Conclusion: A novel framework for realistic and efficient DRR rendering was proposed. As the framework requires only a reasonable amount of computational resources, its internal parameters can be tailored to imaging systems and protocols through on-site training on retrospective imaging data.",
keywords = "Deep learning, Imaging, Radiotherapy, Motion management",
author = "Jennifer Dhont and Dirk Verellen and Isabelle Mollaert and Verdi Vanreusel and Jef Vandemeulebroucke",
note = "Publisher Copyright: {\textcopyright} 2020 Elsevier B.V.",
year = "2020",
month = dec,
doi = "10.1016/j.radonc.2020.10.004",
language = "English",
volume = "153",
pages = "213--219",
journal = "Radiotherapy and Oncology",
issn = "0167-8140",
publisher = "Elsevier Ireland Ltd",
}