Paper | 10 March 2020
An unsupervised deep learning approach for 4DCT lung deformable image registration
Yabo Fu, Yang Lei, Tonghe Wang, Kristin Higgins, Jeffrey D. Bradley, Walter J. Curran, Tian Liu, Xiaofeng Yang
Abstract
Traditional deformable image registration (DIR) algorithms such as optical flow and demons are iterative and slow, especially for large 4D-CT datasets. To register 4D-CT lung images quickly enough for treatment planning and target definition, the computational speed of current DIR methods needs to be improved. Deep learning-based DIR methods that predict the transformation directly are promising alternatives for 4D-CT DIR. In this study, we propose to integrate a dilated inception module (DIM) and self-attention gates (Self-AGs) into a deep learning framework for 4D-CT lung DIR. To overcome the shortage of manually aligned 'ground truth' training datasets, the network was designed to train in an unsupervised manner. Instead of using only the fixed and moving images as input, we also included the gradient images of the fixed and moving images in the x, y, and z directions to provide the network with additional information for transformation prediction. The DIM extracts multi-scale structural features for robust feature learning. Self-AGs were applied at multiple scales throughout the encoding and decoding pathways to highlight the structural feature differences between the moving and fixed images. The network was trained using pairs of 3D image patches extracted from two random phases of a single 4D-CT image set. The loss function of the proposed network contains three parts: an image similarity loss, an adversarial loss, and a regularization loss. The network was trained and tested on 25 patients' 4D-CT datasets using five-fold cross validation. The proposed method was evaluated using mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and normalized cross correlation (NCC) between the deformed image and the fixed image. MAE, PSNR and NCC were 19.2±6.5, 35.4±3.0 and 0.995±0.002, respectively. Target registration errors (TREs) were calculated using manually selected landmark pairs. The average TRE was 3.38 ± 2.36 mm, comparable to that of traditional DIR algorithms. In summary, the proposed method achieved performance comparable to traditional DIRs while being orders of magnitude faster (less than a minute).
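As a concrete illustration of the input construction described in the abstract, the following is a minimal sketch of stacking the fixed and moving volumes together with their x, y, and z gradient images into a multi-channel network input. It assumes PyTorch tensors and finite-difference gradients; the function names and channel ordering are illustrative assumptions, not the authors' released code.

```python
import torch

def spatial_gradients(vol):
    """Finite-difference gradients of a (D, H, W) volume along z, y, x."""
    gz, gy, gx = torch.gradient(vol)  # one tensor per spatial axis
    return gz, gy, gx

def build_input(fixed, moving):
    """Stack fixed, moving, and their x/y/z gradient images as channels.

    fixed, moving: (D, H, W) tensors -> returns an (8, D, H, W) tensor
    (2 intensity channels + 3 gradient channels per image).
    """
    channels = [fixed, moving]
    channels += list(spatial_gradients(fixed))
    channels += list(spatial_gradients(moving))
    return torch.stack(channels, dim=0)
```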
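The dilated inception module is described as extracting multi-scale structural features. A common way to realize this idea, shown below as a hedged sketch rather than the authors' architecture, is to run parallel 3D convolutions with increasing dilation rates and fuse the concatenated branches; the channel counts and dilation rates here are assumptions.

```python
import torch
import torch.nn as nn

class DilatedInceptionModule(nn.Module):
    """Parallel 3D convolutions with increasing dilation rates, concatenated
    and fused to capture multi-scale structural features (illustrative sketch)."""
    def __init__(self, in_ch, out_ch, rates=(1, 2, 3)):
        super().__init__()
        branch_ch = out_ch // len(rates)
        self.branches = nn.ModuleList([
            nn.Conv3d(in_ch, branch_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        self.fuse = nn.Conv3d(branch_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.fuse(feats)
```

With kernel size 3, setting padding equal to the dilation rate keeps the spatial size of each branch identical, so the branch outputs can be concatenated directly.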
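The three-part objective (image similarity, adversarial, and regularization losses) can be sketched as follows. This is a minimal illustration assuming an NCC similarity term, a gradient-smoothness regularizer on the predicted deformation vector field, and a generator-side adversarial term; the loss weights and helper names are hypothetical and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def ncc_loss(warped, fixed, eps=1e-5):
    """1 - normalized cross correlation between warped and fixed volumes (B, C, D, H, W)."""
    w = warped - warped.mean(dim=(2, 3, 4), keepdim=True)
    f = fixed - fixed.mean(dim=(2, 3, 4), keepdim=True)
    ncc = (w * f).sum(dim=(2, 3, 4)) / (
        torch.sqrt((w ** 2).sum(dim=(2, 3, 4)) * (f ** 2).sum(dim=(2, 3, 4))) + eps
    )
    return 1.0 - ncc.mean()

def smoothness_loss(dvf):
    """L2 penalty on spatial gradients of the deformation vector field (B, 3, D, H, W)."""
    dz = dvf[:, :, 1:, :, :] - dvf[:, :, :-1, :, :]
    dy = dvf[:, :, :, 1:, :] - dvf[:, :, :, :-1, :]
    dx = dvf[:, :, :, :, 1:] - dvf[:, :, :, :, :-1]
    return (dz ** 2).mean() + (dy ** 2).mean() + (dx ** 2).mean()

def total_loss(warped, fixed, dvf, disc_logits_on_warped,
               w_sim=1.0, w_adv=0.1, w_reg=0.5):
    """Image similarity + adversarial (generator side) + regularization (illustrative weights)."""
    sim = ncc_loss(warped, fixed)
    adv = F.binary_cross_entropy_with_logits(
        disc_logits_on_warped, torch.ones_like(disc_logits_on_warped))
    reg = smoothness_loss(dvf)
    return w_sim * sim + w_adv * adv + w_reg * reg
```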
© (2020) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Yabo Fu, Yang Lei, Tonghe Wang, Kristin Higgins, Jeffrey D. Bradley, Walter J. Curran, Tian Liu, and Xiaofeng Yang "An unsupervised deep learning approach for 4DCT lung deformable image registration", Proc. SPIE 11313, Medical Imaging 2020: Image Processing, 113132T (10 March 2020); https://doi.org/10.1117/12.2549031
KEYWORDS: Image registration, Lung, Computed tomography, Cancer, Network architectures, Radiotherapy, Signal attenuation