High-resolution imaging is essential for understanding retinal diseases. Adaptive optics scanning light ophthalmoscopy (AOSLO) achieves cellular-level resolution by correcting the eye's optical aberrations, but its resolution remains limited by the diffraction of light. Here, we combine annular pupil illumination with sub-Airy disk confocal pinhole detection to surpass the diffraction limit. With the improved resolution, both the rod photoreceptor and foveal cone mosaics were more readily identifiable in the living human eye.
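For orientation, the diffraction limit referenced above can be written with the standard Rayleigh relation for the lateral resolution of a confocal system (a textbook relation; the example wavelength and numerical aperture below are illustrative assumptions, not parameters reported in this work):

\[
r_{\text{lateral}} \approx \frac{0.61\,\lambda}{\mathrm{NA}_{\text{eye}}}
\]

For instance, an imaging wavelength near 790 nm and an eye numerical aperture of roughly 0.2 (a dilated pupil) give a limit of about 2.4 µm, on the order of rod and foveal cone spacing. Annular pupil illumination narrows the central lobe of the illumination point-spread function, and a sub-Airy-disk pinhole makes the effective point-spread function approximately the product of the illumination and detection functions, which is why the combination can resolve structure finer than this figure.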
Defects in retinal pigment epithelial (RPE) cells, which nourish the neurosensory photoreceptor cells of the retina, contribute to many blinding diseases. Recently, the combination of adaptive optics (AO) imaging with indocyanine green (ICG) has enabled RPE cells to be visualized directly in patients' eyes, making it possible to monitor cellular status in real time. However, RPE cells visualized using AO-ICG have ambiguous boundaries and minimal intracellular contrast, making it difficult for computer algorithms to identify cells based solely on image appearance. Here, we demonstrate the importance of providing spatial information to deep learning networks. We used a training dataset containing 1,633 AO images and a separate dataset containing 250 images for validation. Whereas the original LinkNet was unable to reliably identify low-contrast RPE cells, our proposed spatially aware LinkNet, which has direct access to additional spatial information about the hexagonal arrangement of RPE cells (auxiliary spatial constraints), achieved substantially better results. The overall precision, recall, and F1 score of the spatially aware deep learning method were 92.1±4.3%, 88.2±5.7%, and 90.0±3.8% (mean±SD), respectively, significantly better than those of the original LinkNet at 92.0±4.6%, 57.9±13.3%, and 70.0±10.6% (p<0.05). These experimental results demonstrate that the auxiliary spatial constraints are the key factor in improving RPE identification accuracy. Explicit incorporation of spatial constraints into existing deep learning networks may be useful for images with known spatial regularity but little intensity information at cell boundaries.
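As a minimal illustration of how auxiliary spatial constraints might be supplied to a segmentation network (a hedged sketch, not the implementation described above; the function names, layer sizes, and the choice of a distance-map prior are assumptions), one option is to encode approximate cell-center geometry as an extra input channel:

```python
# Hedged sketch: feed "auxiliary spatial constraints" to a LinkNet-style
# segmentation network as an extra input channel encoding expected cell-center
# geometry. Names, sizes, and the distance-map prior are illustrative
# assumptions, not the published implementation.
import numpy as np
import torch
import torch.nn as nn
from scipy.ndimage import distance_transform_edt

def spatial_prior(centers_mask: np.ndarray) -> np.ndarray:
    """Distance-to-nearest-center map, normalized to [0, 1].

    centers_mask marks approximate RPE cell centers (e.g. from local-maxima
    detection on the AO-ICG image)."""
    dist = distance_transform_edt(~centers_mask.astype(bool))
    return (dist / (dist.max() + 1e-8)).astype(np.float32)

class TwoChannelSegNet(nn.Module):
    """Stand-in for a LinkNet-like encoder-decoder accepting image + prior."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # per-pixel cell-boundary logit
        )
    def forward(self, x):
        return self.net(x)

# Usage: stack the AO-ICG image with its spatial prior before the forward pass.
image = np.random.rand(256, 256).astype(np.float32)        # placeholder AO image
centers = np.zeros((256, 256), dtype=bool)
centers[8::16, 8::16] = True                                # placeholder cell centers
x = torch.from_numpy(np.stack([image, spatial_prior(centers)]))[None]  # (1, 2, H, W)
boundary_logits = TwoChannelSegNet()(x)
```

In practice, the prior channel could be built from the expected hexagonal packing of RPE cells rather than a placeholder grid, and the stand-in network would be replaced by a full LinkNet encoder-decoder.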
Fluorescence microscopy has transformed our understanding of modern biology. Recently, this technology was translated to the clinic using adaptive optics enhanced indocyanine green ophthalmoscopy, which enables retinal pigment epithelial cells to be fluorescently labeled and imaged in the living human eye. Monitoring these cells across longitudinal images on a time scale of months is important for understanding blinding diseases, but it remains challenging due to distortions caused by inherent eye motion, substantial visit-to-visit image displacements, and weak cell boundaries intrinsic to fluorescence data. This paper introduces a stochastically consistent superpixel method to address these issues. First, large-displacement optical flow is estimated by embedding global image displacements, derived from a set of maximally stable extremal regions, into a variational framework. Next, the optical flow is used to initialize bilateral Gaussian processes that model superpixel movements. Finally, a generative probabilistic framework is developed to create consistent superpixels constrained by a maximum-likelihood criterion. The consistent superpixels were evaluated on images from 11 eyes imaged longitudinally over 3-12 months. Validation datasets revealed high accuracy across time points despite visit-to-visit changes.
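A rough sketch of the first step, estimating a coarse global visit-to-visit displacement from maximally stable extremal regions so it can seed a large-displacement variational flow, might look like the following (an assumed illustration using OpenCV's MSER detector; the greedy centroid matching and median estimator are simplifications, not the paper's formulation):

```python
# Hedged sketch: estimate a coarse global visit-to-visit shift from matched MSER
# centroids; such a shift could seed a large-displacement variational optical
# flow. Function names, the greedy matching, and the median estimator are
# illustrative assumptions, not the paper's formulation.
import cv2
import numpy as np

def mser_global_shift(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Median (dx, dy) between MSER centroids of two 8-bit grayscale images."""
    mser = cv2.MSER_create()
    centroids = []
    for img in (img_a, img_b):
        regions, _ = mser.detectRegions(img)
        centroids.append(np.array([r.mean(axis=0) for r in regions]))  # (x, y) per region
    ca, cb = centroids
    shifts = []
    for p in ca:                                   # greedy nearest-centroid matching
        d = np.linalg.norm(cb - p, axis=1)
        shifts.append(cb[d.argmin()] - p)
    return np.median(np.array(shifts), axis=0)     # robust global displacement estimate

# Usage (assumed inputs): two co-cropped 8-bit images from different visits,
# simulated here with bright blobs standing in for fluorescent cells.
visit_0 = np.zeros((512, 512), dtype=np.uint8)
for cx, cy in [(100, 150), (300, 200), (250, 400), (420, 330)]:
    cv2.circle(visit_0, (cx, cy), 20, 255, -1)     # synthetic bright "cells"
visit_1 = np.roll(visit_0, (12, -7), axis=(0, 1))  # known visit-to-visit shift
print(mser_global_shift(visit_0, visit_1))         # approximately (-7, 12) in (x, y)
```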