Registration of retinal images is an important technique for facilitating the diagnosis and treatment of many eye diseases. Recent studies have shown that deep learning methods can be used for image registration and are usually faster than conventional registration methods. However, it is not trivial to obtain ground truth for supervised methods, and popular unsupervised methods do not perform well on retinal images. Therefore, we present a weakly supervised learning method for affine registration of fundus images. The framework consists of multiple steps: rigid registration, overlap calculation, and affine registration. In addition, we introduce a keypoint matching loss to replace the similarity-metric losses commonly used in unsupervised methods. On a fundus image dataset covering multiple eye diseases, our framework achieves more accurate registration results than state-of-the-art deep learning approaches.
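The abstract does not give the exact form of the keypoint matching loss, but a minimal sketch of one plausible formulation (in PyTorch) is shown below: the predicted affine transform is applied to keypoints from the moving image and the mean distance to the matched keypoints in the fixed image is penalized. The function and parameter names are illustrative assumptions, not taken from the paper.

```python
import torch

def keypoint_matching_loss(affine_params, moving_kpts, fixed_kpts):
    """Mean Euclidean distance between matched keypoints after applying
    the predicted 2x3 affine transform to the moving-image keypoints.

    affine_params: (B, 2, 3) predicted affine matrices
    moving_kpts, fixed_kpts: (B, N, 2) matched keypoint coordinates
    """
    ones = torch.ones_like(moving_kpts[..., :1])               # (B, N, 1)
    homog = torch.cat([moving_kpts, ones], dim=-1)             # (B, N, 3)
    warped = torch.bmm(homog, affine_params.transpose(1, 2))   # (B, N, 2)
    return torch.norm(warped - fixed_kpts, dim=-1).mean()
```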
Glaucoma is a leading cause of irreversible blindness. Accurate optic disc (OD) and optic cup (OC) segmentation in fundus images is beneficial to glaucoma screening and diagnosis. Recently, convolutional neural networks have demonstrated promising progress in joint OD and OC segmentation in fundus images. However, OC segmentation remains challenging due to its low contrast and blurred boundary. In this paper, we propose an improved U-shape based network to jointly segment the OD and OC. There are three main contributions: (1) Efficient channel attention (ECA) blocks are embedded into the proposed network to avoid dimensionality reduction and capture cross-channel interaction in an efficient way. (2) A multiplexed dilation convolution (MDC) module is proposed to extract target features at various sizes and preserve more spatial information. (3) Three global context extraction (GCE) modules are used in the network. By introducing multiple GCE modules between the encoder and decoder, the global semantic information flow from high-level stages can be gradually guided to different stages. The proposed method was tested on 240 fundus images. Compared with U-Net, Attention U-Net, SegNet and FCNs, the proposed method reaches mean Dice similarity coefficients of 96.20% for the OD and 90.00% for the OC, outperforming the above networks.
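For reference, a minimal PyTorch sketch of an efficient channel attention block in the general ECA-Net style (global average pooling followed by a 1D convolution across channels, with no dimensionality reduction) is given below. The kernel size and class name are assumptions; the paper's exact configuration may differ.

```python
import torch
import torch.nn as nn

class ECABlock(nn.Module):
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        # 1D conv over the channel dimension: captures local cross-channel
        # interaction without reducing the number of channels
        self.conv = nn.Conv1d(1, 1, kernel_size,
                              padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                        # x: (B, C, H, W)
        y = x.mean(dim=(2, 3))                   # global average pooling -> (B, C)
        y = self.conv(y.unsqueeze(1))            # conv across channels -> (B, 1, C)
        y = self.sigmoid(y).squeeze(1)           # channel weights -> (B, C)
        return x * y.view(x.size(0), -1, 1, 1)   # rescale feature maps
```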
At present, high myopia has become a worldwide focus among eye diseases because of its increasing prevalence. Linear lesions are an important clinical sign in the pathological changes of high myopia. Indocyanine green angiography (ICGA) is considered the "ground truth" for the diagnosis of linear lesions, but it is invasive and may cause adverse reactions such as allergy, dizziness, and even shock in some patients. Therefore, it is urgent to find a non-invasive imaging modality to replace ICGA for the diagnosis of linear lesions. Multi-color scanning laser (MCSL) imaging is a non-invasive imaging technique that can reveal linear lesions more clearly than other non-invasive techniques, such as color fundus imaging and red-free fundus imaging, and even some invasive ones, such as fundus fluorescein angiography (FFA). To the best of our knowledge, there are no studies focusing on linear lesion segmentation in MCSL images. In this paper, we propose a new U-shape based segmentation network, named SGCNet, with a multi-scale and global context fusion (SGCF) block to segment linear lesions in MCSL images. The multi-scale features and global context information extracted by the SGCF block are fused with learnable parameters to obtain richer high-level features. Four-fold cross validation was adopted to evaluate the performance of the proposed method on 86 MCSL images from 57 high myopia patients. The IoU, Dice, Sensitivity and Specificity coefficients are 0.494±0.109, 0.654±0.104, 0.676±0.131 and 0.998±0.002, respectively. Experimental results indicate the effectiveness of the proposed network.
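The abstract describes the SGCF block only at a high level; a minimal sketch in the same spirit (multi-scale dilated convolution branches plus a global-context branch, fused with learnable weights) is shown below. The dilation rates, the 1x1 global branch, and all names are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleGlobalFusion(nn.Module):
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        # parallel branches with different dilation rates capture multi-scale context
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations
        ])
        self.global_branch = nn.Conv2d(channels, channels, 1)
        # one learnable fusion weight per branch (multi-scale + global)
        self.weights = nn.Parameter(torch.ones(len(dilations) + 1))

    def forward(self, x):                                    # x: (B, C, H, W)
        feats = [b(x) for b in self.branches]
        # global context: pooled descriptor broadcast back to full resolution
        g = self.global_branch(F.adaptive_avg_pool2d(x, 1))
        feats.append(g.expand_as(x))
        w = torch.softmax(self.weights, dim=0)
        return sum(w[i] * f for i, f in enumerate(feats))
```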