Abstract
Motion artifacts are a major factor that can degrade the diagnostic performance of computed tomography (CT) images. In particular, motion artifacts become considerably more severe when an imaging system requires a long scan time, such as in dental CT or cone-beam CT (CBCT) applications, where patients undergo both rigid and non-rigid motion. To address this problem, we propose a new real-time technique for motion artifact reduction that utilizes a deep residual network with an attention module. Our attention module was designed to increase the model capacity by amplifying or attenuating the residual features according to their importance. We trained and evaluated the network by creating four benchmark datasets with rigid motions or with both rigid and non-rigid motions under a step-and-shoot fan-beam CT (FBCT) or a CBCT. Each dataset provides pairs of motion-corrupted CT images and their ground-truth CT images. The strong modeling power of the proposed network allowed us to successfully handle motion artifacts from the two CT systems under various motion scenarios in real time, and the proposed model demonstrated clear performance benefits. In addition, we compared our model with Wasserstein generative adversarial network (WGAN)-based models and a deep residual network (DRN)-based model, which are among the most powerful techniques for CT denoising and natural RGB image deblurring, respectively. Based on extensive analysis and comparisons using the four benchmark datasets, we confirmed that our model outperformed these competitors. Our benchmark datasets and implementation code are available at https://github.com/youngjun-ko/ct_mar_attention.
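The abstract describes an attention module that rescales residual features by learned importance weights before they are added back to the identity path. The following is a minimal PyTorch sketch of such a residual block with channel attention; the layer sizes, reduction ratio, and module layout are illustrative assumptions rather than the authors' exact architecture (the actual implementation is in the linked repository).

```python
# Minimal sketch: residual block whose residual branch is reweighted
# per channel by an attention module (squeeze-and-excitation style).
# All hyperparameters here are assumptions for illustration only.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze spatial dimensions
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),  # per-channel importance weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Amplify or attenuate each feature channel by its learned weight.
        return x * self.fc(self.pool(x))


class AttentionResidualBlock(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.attention = ChannelAttention(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reweighted residual features are added back to the input.
        return x + self.attention(self.body(x))


if __name__ == "__main__":
    block = AttentionResidualBlock(64)
    y = block(torch.randn(1, 64, 128, 128))  # e.g. a 128x128 feature map
    print(y.shape)  # torch.Size([1, 64, 128, 128])
```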
| Original language | English |
|---|---|
| Article number | 101883 |
| Journal | Medical Image Analysis |
| Volume | 67 |
| DOIs | |
| Publication status | Published - 2021 Jan |
Bibliographical note
Funding Information: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea funded by the MSIP (2020R1A4A1016619, 2019R1A2C2006123, 2018M3A9H6081483), by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (2020-0-01361, Artificial Intelligence Graduate School Program (YONSEI UNIVERSITY)), and by the Korea Medical Device Development Fund grant funded by the Korea government (the Ministry of Science and ICT, the Ministry of Trade, Industry and Energy, the Ministry of Health & Welfare, Republic of Korea, the Ministry of Food and Drug Safety) (Project Number: 202011D06).
Publisher Copyright:
© 2020 Elsevier B.V.
All Science Journal Classification (ASJC) codes
- Radiological and Ultrasound Technology
- Radiology, Nuclear Medicine and Imaging
- Computer Vision and Pattern Recognition
- Health Informatics
- Computer Graphics and Computer-Aided Design