Purpose: Image registration, the process of aligning images, is a fundamental task in medical image analysis. While many tasks in the field, such as image segmentation, are now handled almost entirely with deep learning and exceed the accuracy of conventional algorithms, deformable image registration is often still performed with conventional methods. Deep learning methods for medical image registration have only recently reached the accuracy of conventional algorithms, and they often rely on a weakly supervised learning scheme that uses multilabel image segmentations during training. Creating such detailed annotations is very time-consuming.
Approach: We propose a weakly supervised learning scheme for deformable image registration. By computing the loss function from bounding box labels alone, we can train an image registration network for large-displacement deformations without densely labeled images. We evaluate our model on interpatient three-dimensional abdominal CT and MRI images.
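The abstract does not give the exact loss formulation, but the idea of supervising registration with bounding boxes instead of dense segmentations can be sketched as follows: rasterize the fixed and (warped) moving boxes into binary masks and penalize their overlap mismatch, e.g. with a Dice loss. The function names and box format (`z0, y0, x0, z1, y1, x1`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def box_mask(shape, box):
    """Rasterize an axis-aligned bounding box (z0, y0, x0, z1, y1, x1)
    into a binary volume mask. (Hypothetical helper, not from the paper.)"""
    m = np.zeros(shape, dtype=np.float32)
    z0, y0, x0, z1, y1, x1 = box
    m[z0:z1, y0:y1, x0:x1] = 1.0
    return m

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between two (possibly soft) masks: 0 = perfect overlap."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Toy example: the fixed-image box vs. a moving-image box after a
# (hypothetically predicted) deformation that shifted it by one voxel.
shape = (16, 16, 16)
fixed_box = box_mask(shape, (4, 4, 4, 12, 12, 12))
warped_box = box_mask(shape, (5, 5, 5, 13, 13, 13))
loss = dice_loss(warped_box, fixed_box)  # nonzero: boxes only partially overlap
```

In a real training loop the moving box would be warped by the network's predicted deformation field (e.g. via a spatial transformer) before the loss is evaluated, so the gradient drives the field toward aligning the annotated structures.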
Results: The results show an improvement of ∼10% (for CT images) and ∼20% (for MRI images) over the unsupervised method. When the reduced annotation effort is taken into account, our method also outperforms weakly supervised training with detailed image segmentations.
Conclusion: We show that the performance of image registration methods can be enhanced with little annotation effort using our proposed method.
The quality of the segmentation of organs and pathological tissues has improved significantly in recent years through deep learning approaches, which are typically trained with fully supervised learning. Such methods require an immense amount of fully labeled training data, which is, especially in medicine, costly to generate. To overcome this issue, weakly supervised training methods are used, because they do not need fully labeled ground truth data. Weakly supervised learning has already gained importance for object localization and, more recently, for the segmentation of pathological tissues. However, currently available approaches still require additional anatomical information. In this paper, we present a weakly supervised segmentation method that needs neither ground truth segmentations nor additional anatomical information. Our method consists of three classification networks in the sagittal, axial, and coronal directions that decide whether a slice contains the structure to be segmented. We then use the class activation maps of the classification outputs to generate a combined segmentation. Our network was trained for the challenging task of pancreas segmentation on the publicly available TCIA pancreas dataset, reaching per-slice Dice scores of up to 0.86 and an overall Dice score of up to 0.53.