By measuring and correcting sample-induced aberrations, adaptive optics (AO) enables noninvasive imaging of subcellular structures in living organisms with two-photon (2P) fluorescence microscopy. We will introduce CoCoA-2P, a self-supervised machine-learning algorithm that simultaneously estimates aberrations and recovers 3D structural information from a single 2P image stack, without requiring external training datasets. We will showcase applications of CoCoA-2P to high-resolution in vivo structural imaging of the mouse brain and eye lenses.
Optical microscopy with adaptive optics (AO) allows high-resolution, noninvasive imaging of subcellular structures in living organisms. As alternatives to hardware-based AO methods, supervised deep-learning approaches have recently been developed to estimate optical aberrations. However, these approaches are often limited in their generalizability by discrepancies between training and imaging settings. Moreover, a corrective device is still required to compensate for the aberrations and obtain high-resolution images. Here we describe a deep self-supervised learning approach for simultaneous aberration estimation and structural information recovery from a single 3D image stack acquired by widefield microscopy. The approach uses coordinate-based neural representations to represent highly complex structures. We experimentally validated our approach against direct-wavefront-sensing-based AO in the same samples and showed that it is applicable to in vivo mouse brain imaging.
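The abstracts above do not include an implementation, but the core idea admits a compact sketch. The toy below is not the authors' released code: the pupil-mode basis, network sizes, and optimization settings are illustrative assumptions, and a 2D image stands in for the 3D stack. It jointly optimizes a coordinate-based network representing the sample and a small set of pupil-phase coefficients representing the aberration, so that the sample rendered through the aberrated point-spread function matches the single measured image:

```python
# Sketch only: joint self-supervised fit of structure + aberration (PyTorch).
import torch
import torch.nn as nn

def pupil_basis(size=64):
    """Toy low-order pupil modes (tilts, defocus, astigmatism) standing in
    for a proper Zernike basis."""
    y, x = torch.meshgrid(torch.linspace(-1, 1, size),
                          torch.linspace(-1, 1, size), indexing="ij")
    r2 = x**2 + y**2
    modes = torch.stack([x, y, 2 * r2 - 1, x**2 - y**2, 2 * x * y])
    modes[:, r2 > 1] = 0.0              # restrict to a circular pupil
    return modes, (r2 <= 1).float()

def psf_from_coeffs(coeffs, modes, aperture):
    """Incoherent PSF from a parameterized pupil phase via Fourier optics."""
    phase = (coeffs[:, None, None] * modes).sum(0)
    pupil = aperture * torch.exp(1j * phase)
    field = torch.fft.fftshift(torch.fft.fft2(torch.fft.ifftshift(pupil)))
    psf = field.abs() ** 2
    return psf / psf.sum()

class CoordinateMLP(nn.Module):
    """Coordinate-based representation: (x, y) -> fluorescence density."""
    def __init__(self, hidden=128, n_freqs=6):
        super().__init__()
        self.register_buffer("freqs", 2.0 ** torch.arange(n_freqs) * torch.pi)
        self.net = nn.Sequential(
            nn.Linear(2 * 2 * n_freqs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus())    # nonnegative density

    def forward(self, xy):                          # xy: (N, 2) in [-1, 1]
        ang = xy[..., None] * self.freqs            # positional encoding
        enc = torch.cat([ang.sin(), ang.cos()], dim=-1).flatten(-2)
        return self.net(enc).squeeze(-1)

def fit(measured, steps=500, size=64):
    """measured: (size, size) aberrated image; returns structure + coefficients."""
    modes, aperture = pupil_basis(size)
    coeffs = torch.zeros(modes.shape[0], requires_grad=True)
    model = CoordinateMLP()
    opt = torch.optim.Adam(list(model.parameters()) + [coeffs], lr=1e-3)
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, size),
                            torch.linspace(-1, 1, size), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    for _ in range(steps):
        opt.zero_grad()
        density = model(grid).reshape(size, size)
        psf = psf_from_coeffs(coeffs, modes, aperture)
        # circular convolution of density with the (centered) PSF via FFT
        pred = torch.fft.ifft2(torch.fft.fft2(density) *
                               torch.fft.fft2(torch.fft.ifftshift(psf))).real
        loss = ((pred - measured) ** 2).mean()
        loss.backward()
        opt.step()
    return model, coeffs.detach()
```

Because the forward model is differentiable end to end, the single measured image supervises both unknowns at once, which is what makes the approach self-supervised: no external training dataset or corrective hardware enters the loop.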
Deep learning has proven to be an efficient and robust method for many computational imaging systems. The advantages of machine learning, as a rule, are that it is fast (at least in its supervised form, once training is complete) and exceedingly effective at capturing regularizing priors. Here, we focus the discussion on noninvasive three-dimensional (3D) object reconstruction. One then faces the additional dilemma of choosing the appropriate model of light-matter interaction inside the specimen, i.e. the forward operator. We describe the three applicable stages of approximation: weak scattering with weak diffraction (the Radon-transform regime), weak scattering with strong diffraction, and strong scattering. We then survey machine learning approaches for each of these models and examine the consequences of oversimplifying the choice of forward operator.
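To make the middle regime concrete, here is a hedged sketch of a multi-slice beam-propagation forward operator, a standard model for weak scattering with strong diffraction (all parameter values are illustrative). In the Radon regime the diffraction step would be dropped, reducing the model to line integrals of the refractive-index contrast, while strong scattering calls for a full scattering model such as the Lippmann-Schwinger equation.

```python
# Sketch of a multi-slice (beam propagation) forward model in NumPy.
import numpy as np

def angular_spectrum_step(field, dz, wavelength, dx):
    """Propagate a 2D complex field by distance dz (angular spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fx2 = fx[:, None] ** 2 + fx[None, :] ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(0.0, wavelength ** -2 - fx2))
    prop = np.exp(1j * kz * dz)
    prop[fx2 > wavelength ** -2] = 0.0     # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * prop)

def multislice_forward(delta_n_slices, dz, wavelength, dx):
    """Thin phase screens separated by free-space diffraction.
    delta_n_slices: (n_slices, n, n) refractive-index contrast per slice."""
    k0 = 2 * np.pi / wavelength
    field = np.ones(delta_n_slices.shape[1:], dtype=complex)   # plane-wave input
    for slice_dn in delta_n_slices:
        field *= np.exp(1j * k0 * slice_dn * dz)   # weak-scattering phase screen
        field = angular_spectrum_step(field, dz, wavelength, dx)
    return np.abs(field) ** 2                      # detected intensity
```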
Each image metric captures different characteristics of images. For instance, similarity metrics such as the Structural Similarity Index Metric (SSIM) or the Pearson Correlation Coefficient (PCC) use the correlation between two images to quantify their similarity, while error metrics such as Mean Absolute Error (MAE) or Root-Mean-Square Error (RMSE) compute the pixel-wise error between them under different norms. Because each metric highlights different aspects, the choice of metric for an application depends on the characteristics of the images. In this paper, we show tomographic reconstructions of dense, layered binary-phase objects; because the objects are binary, we propose the Probability of Error (PE) as an image metric for assessing the reconstructions, in contrast to other metrics that are not constrained to a binary range of values. PE is equivalent to the Bit-Error Rate (BER) in digital communications, since both signals of interest are binary and we are interested in the bit-wise deviation of the reconstructions from their corresponding ground-truth images.
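A direct reading of the proposed metric can be written in a few lines (the function name and default threshold below are mine, not the paper's): binarize the reconstruction and report the fraction of pixels that disagree with the ground truth, exactly as BER counts flipped bits.

```python
# Probability of Error (PE) for binary objects, analogous to BER.
import numpy as np

def probability_of_error(reconstruction, ground_truth, threshold=0.5):
    """PE = fraction of mismatched pixels after binarizing the reconstruction.
    ground_truth is assumed binary (0/1); lower is better, 0 means perfect."""
    binarized = (np.asarray(reconstruction) >= threshold).astype(np.uint8)
    truth = np.asarray(ground_truth).astype(np.uint8)
    return float(np.mean(binarized != truth))
```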
In high-contrast imaging applications, such as the direct imaging of exoplanets, a coronagraph is used to suppress the light from an on-axis star so that a dimmer, off-axis object can be imaged. To maintain a high-contrast dark region in the image, optical aberrations in the instrument must be minimized. The use of phase-contrast-based Zernike Wavefront Sensors (ZWFS) to measure and correct for aberrations has been studied for large segmented-aperture telescopes, and ZWFS are planned for the coronagraph instrument on the Roman Space Telescope (RST). ZWFS enable subnanometer wavefront-sensing precision, but their response is nonlinear. Lyot-based Low-Order Wavefront Sensors (LLOWFS) are an alternative technique, in which light rejected by a coronagraph's Lyot stop is used for linear measurement of small wavefront displacements. Recently, the use of Deep Neural Networks (DNNs) for phase retrieval from intensity measurements has been demonstrated in several optical configurations. In a LLOWFS system, the use of DNNs rather than linear regression has been shown to greatly extend the sensor's usable dynamic range. In this work, we investigate the use of two types of machine learning algorithms to extend the dynamic range of the ZWFS. We present static and dynamic deep-learning architectures for single- and multi-wavelength measurements, respectively. Using simulated ZWFS intensity measurements, we validate the network training technique and present phase reconstruction results. We show an increase in the capture range of the ZWFS by a factor of 3.4 with a single wavelength and 4.5 with four wavelengths.
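For context, below is a minimal simulation of the ZWFS measurement itself, assuming the classical phase-contrast model in which a small focal-plane dimple shifts the phase of the PSF core by an angle theta (parameters are illustrative; the abstract's DNN architectures are not reproduced here). For small aberrations the output intensity varies nearly linearly with the pupil phase, and that linearity breaks down as the phase grows, which is the dynamic-range limitation the networks are meant to address.

```python
# Sketch of a Zernike wavefront sensor (phase-contrast) intensity measurement.
import numpy as np

def zwfs_intensity(pupil_phase, aperture, theta=np.pi / 2, dimple_radius_px=2):
    """Simulate the pupil-plane intensity of a ZWFS.
    pupil_phase: (n, n) phase aberration in radians; aperture: (n, n) 0/1 mask."""
    n = pupil_phase.shape[0]
    field = aperture * np.exp(1j * pupil_phase)
    # Focal-plane field and a dimple mask covering the PSF core
    focal = np.fft.fftshift(np.fft.fft2(field))
    yy, xx = np.indices((n, n)) - n // 2
    dimple = (xx ** 2 + yy ** 2) <= dimple_radius_px ** 2
    # Phase-contrast mask: exp(i*theta) inside the dimple, 1 outside
    mask = np.where(dimple, np.exp(1j * theta), 1.0)
    out = np.fft.ifft2(np.fft.ifftshift(focal * mask))
    return np.abs(out) ** 2
```

The interference between the phase-shifted PSF core and the rest of the field converts pupil phase into intensity; a regression model (linear or DNN) then maps such intensity maps back to wavefront coefficients.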