Digital holography (DH) is a dependable method for observing micro/nano structures and 3D distributions by jointly recording amplitude and phase information. In recent years, pixel super-resolved (PSR) techniques have improved the space-bandwidth product (SBP) of holography. By introducing measurement diversity, PSR phase retrieval approaches overcome the resolution limit imposed by finite sensor pixel size. However, existing wavefront-modulation PSR techniques usually require dozens or even hundreds of randomly generated phase masks, resulting in time-consuming measurement and reconstruction. Reducing the amount of data saves time but degrades accuracy and noise robustness. In this paper, we propose a novel PSR holography method with complementary patterns. Specifically, we use one pair of patterns whose values are exactly complementary, while the remaining patterns are randomly generated binary (0-1) phase patterns. This pair guarantees the integrity of the target information contained in the diffraction intensity data set. In addition, the method effectively improves resolution with limited data, speeding up both measurement and reconstruction. A series of simulations demonstrates the effectiveness of complementary patterns, achieving more than a 3 dB improvement in PSNR compared with purely random phase patterns.
KEYWORDS: Neural networks, Wavefronts, Coherence imaging, Biological imaging, Data modeling, Holography, Convolution, Super resolution, Image restoration, Education and training
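The complementary-pattern idea from the first abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the mask count, field size, and the assumption of binary 0/π phase modulation are illustrative choices. The key property is that a 0-1 mask and its complement jointly leave every pixel unmodulated in at least one measurement.

```python
import numpy as np

rng = np.random.default_rng(0)
n, num_masks = 64, 4  # illustrative sizes, not the paper's settings

# Random binary (0-1) phase masks; the first two form a complementary pair
masks = [rng.integers(0, 2, (n, n)) for _ in range(num_masks)]
masks[1] = 1 - masks[0]  # exactly complementary in value

# Every pixel takes phase 0 in at least one mask of the pair, so the pair
# jointly passes the complete target information into the intensity data set
assert np.all((masks[0] == 0) | (masks[1] == 0))

# Apply the masks as binary 0/pi phase modulation to a complex target wavefront
target = rng.random((n, n)) * np.exp(1j * 2 * np.pi * rng.random((n, n)))
modulated = [target * np.exp(1j * np.pi * m) for m in masks]
```

In a full PSR pipeline, each modulated field would then be propagated to the sensor plane and downsampled to low-resolution intensities before reconstruction.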
Large-scale computational imaging can provide a remarkable space-bandwidth product beyond the limit of conventional optical systems. In coherent imaging (CI), the joint reconstruction of amplitude and phase further expands the information throughput and enables label-free observation of biological samples at the micro- or even nano-scale. Existing large-scale CI techniques usually require multiple scanning/modulation steps to guarantee measurement diversity and long exposure times to achieve a high signal-to-noise ratio. Such cumbersome procedures restrict clinical applications such as rapid, low-phototoxicity cell imaging. In this work, a complex-domain-enhancing neural network for large-scale CI, termed CI-CDNet, is proposed for various large-scale CI modalities with satisfactory reconstruction quality and efficiency. CI-CDNet exploits the latent coupling between amplitude and phase (such as their shared structural features), realizing a multidimensional representation of the complex wavefront. This cross-field characterization framework provides strong generalization and robustness across coherent modalities, allowing high-quality and efficient imaging under extremely short exposure times and small data volumes. We apply CI-CDNet to various large-scale CI modalities, including Kramers–Kronig-relations holography, Fourier ptychographic microscopy, and lensless coded ptychography. A series of simulations and experiments validates that CI-CDNet can reduce exposure time and data volume by more than one order of magnitude. We further demonstrate that the high-quality reconstruction of CI-CDNet benefits subsequent high-level semantic analysis.
Deep learning shows great potential for super-resolution microscopy, offering visualization of biological structures with unprecedented detail and high flexibility. An effective pathway toward this goal is structured illumination microscopy (SIM) augmented by deep learning, owing to its ability to double the resolution beyond the diffraction limit in real time. Although deep-learning-based SIM works effectively, it is generally a black box whose latent principles are difficult to explain. The generated super-resolution biological structures may therefore contain unconvincing information for clinical diagnosis. This limitation impedes further applications in safety-critical fields such as medical imaging. In this paper, we report a reliable deep-learning-based SIM technique with uncertainty maps. These uncertainty maps characterize imperfections arising from various disturbances, such as measurement noise, model error, incomplete training data, and out-of-distribution test data. Specifically, we employ a Bayesian convolutional neural network to quantify uncertainty and explore its application in SIM. The backbone of the reported network combines U-Net and ResNet, taking three low-resolution images from different structured-illumination angles as inputs. The outputs are high-resolution images with double the resolution of the numerical-aperture limit, together with pixel-wise confidence intervals for the reconstructed images. A series of simulations and experiments validates that the reported uncertainty quantification framework offers reliable uncertainty maps and high-fidelity super-resolution images. Our work may promote practical applications of deep-learning-based super-resolution microscopy.