Deep learning-based low-dose CT simulator for non-linear reconstruction methods.
Publication year
2024
Source
Medical Physics, 51, 9 (2024), pp. 6046-6060
ISSN
Annotation
01 September 2024
Publication type
Article / Letter to editor
Organization
Medical Imaging
Journal title
Medical Physics
Volume
vol. 51
Issue
iss. 9
Page start
p. 6046
Page end
p. 6060
Subject
Medical Imaging - Radboud University Medical Center
Abstract
BACKGROUND: Computer algorithms that simulate lower-dose computed tomography (CT) images from clinical-dose images are widely available. However, most operate in the projection domain and assume access to the reconstruction method. Access to commercial reconstruction methods is often not available in medical research, making image-domain noise simulation methods useful. However, the introduction of non-linear reconstruction methods, such as iterative and deep learning-based reconstruction, makes noise insertion in the image domain intractable, as the noise textures cannot be determined analytically.

PURPOSE: To develop a deep learning-based image-domain method to generate low-dose CT images from clinical-dose CT (CDCT) images for non-linear reconstruction methods.

METHODS: We propose a fully image-domain method utilizing a series of three convolutional neural networks (CNNs), which, respectively, denoise CDCT images, predict the standard deviation map of the low-dose image, and generate the noise power spectra (NPS) of local patches throughout the low-dose image. All three models have U-net-based architectures and are partly or fully three-dimensional. As a use case for this study, and without loss of generality, we use paired low-dose and clinical-dose brain CT scans. A dataset of 326 paired scans was retrospectively obtained. All images were acquired with a wide-area-detector clinical system and reconstructed using its standard clinical iterative algorithm. Each pair was registered using rigid registration to correct for motion between acquisitions. The data was randomly partitioned into training (251 samples), validation (25 samples), and test (50 samples) sets. The performance of each of the three CNNs was validated separately. For the denoising CNN, the local standard deviation decrease and bias were determined. For the standard deviation map CNN, the real and estimated standard deviations were compared locally. Finally, for the NPS CNN, the NPS of the synthetic and real low-dose noise were compared inside and outside the skull. Two proof-of-concept denoising studies were performed to determine whether the performance of a CNN-based or a gradient-based denoising filter differed between the synthetic and the real low-dose data.

RESULTS: The denoising network decreased the noise in the cerebrospinal fluid by a median factor of 1.71 and introduced a median bias of +0.7 HU. The network for standard deviation map estimation had a median error of +0.1 HU. The NPS estimation network captured the anisotropic and shift-variant nature of the noise structure, showing good agreement between the synthetic and real low-dose noise and their corresponding power spectra. The two proof-of-concept denoising studies showed only a minimal difference in the standard deviation improvement ratio between the synthetic and real low-dose CT images, with median differences of 0.0 and +0.05 for the CNN-based and gradient-based filters, respectively.

CONCLUSION: The proposed method demonstrated good performance in generating synthetic low-dose brain CT scans without access to the projection data or the reconstruction method. Because the method can generate multiple low-dose image realizations from one clinical-dose image, it is useful for validation, optimization, and repeatability studies of image-processing algorithms.
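The record contains no code, so the following is a minimal sketch of the pipeline the abstract describes, under stated assumptions: `denoise_cnn`, `std_cnn`, and `nps_cnn` are stand-ins for the paper's trained U-nets (here just callables mapping arrays to arrays), and the names `sample_nps_noise`, `simulate_low_dose`, the patch size of 32, and the non-overlapping blockwise recombination are all illustrative choices, not the authors' implementation.

```python
import numpy as np

def sample_nps_noise(nps, rng):
    """Draw one noise realization whose power spectrum follows `nps`.

    White Gaussian noise is shaped in the Fourier domain by the square
    root of the (non-negative) local noise power spectrum, then
    normalized to unit standard deviation so that a separate standard
    deviation map can set the local noise magnitude.
    """
    white = rng.standard_normal(nps.shape)
    shaped = np.fft.ifftn(np.fft.fftn(white) * np.sqrt(np.maximum(nps, 0.0))).real
    return shaped / max(shaped.std(), 1e-8)

def simulate_low_dose(cdct, denoise_cnn, std_cnn, nps_cnn, patch=32, seed=0):
    """Image-domain low-dose simulation with a three-network pipeline.

    Assumptions: the three callables stand in for the trained U-nets;
    `nps_cnn` returns a local NPS with the same shape as its input
    patch; volume dimensions are multiples of `patch`. A production
    implementation would use overlapping patches with blending to
    avoid seams at patch borders, which is omitted here for brevity.
    """
    rng = np.random.default_rng(seed)
    clean = denoise_cnn(cdct)    # noise-free estimate of the anatomy
    std_map = std_cnn(cdct)      # per-voxel low-dose noise magnitude (HU)
    noise = np.zeros_like(cdct, dtype=float)
    nz, ny, nx = cdct.shape
    for z in range(0, nz, patch):
        for y in range(0, ny, patch):
            for x in range(0, nx, patch):
                sl = (slice(z, z + patch), slice(y, y + patch), slice(x, x + patch))
                local_nps = nps_cnn(cdct[sl])           # shift-variant local NPS
                noise[sl] = sample_nps_noise(local_nps, rng)
    return clean + std_map * noise   # one synthetic low-dose realization
```

Calling `simulate_low_dose` with different `seed` values yields multiple low-dose realizations of the same clinical-dose scan, which is the property the conclusion highlights for repeatability studies.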
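For the validation step that compares the NPS of synthetic and real low-dose noise, a standard ensemble periodogram estimate could be used; the sketch below assumes noise-only patches (e.g., a low-dose image minus its denoised counterpart) and an illustrative voxel size, and the function name `estimate_nps` is hypothetical.

```python
import numpy as np

def estimate_nps(noise_patches, voxel_size=(0.5, 0.5, 0.5)):
    """Ensemble estimate of the noise power spectrum from noise-only patches.

    Each patch is mean-subtracted, its squared DFT magnitude is averaged
    over the ensemble, and the result is scaled by voxel volume divided
    by the number of voxels (the usual periodogram normalization).
    """
    patches = np.asarray(noise_patches, dtype=float)   # shape: (n, pz, py, px)
    patches -= patches.mean(axis=(1, 2, 3), keepdims=True)
    periodograms = np.abs(np.fft.fftn(patches, axes=(1, 2, 3))) ** 2
    scale = np.prod(voxel_size) / patches[0].size
    return periodograms.mean(axis=0) * scale
```

Applied to matched patch ensembles inside and outside the skull, this gives the kind of synthetic-versus-real NPS comparison the abstract reports.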
This item appears in the following Collection(s)
- Academic publications [246326]
- Electronic publications [133950]
- Faculty of Medical Sciences [93294]
- Open Access publications [107433]