Unidirectional imagers form images of input objects only in one direction, e.g., from field-of-view (FOV) A to FOV B, while blocking the image formation in the reverse direction, from FOV B to FOV A. Here, we report unidirectional imaging under spatially partially coherent light and demonstrate high-quality imaging only in the forward direction (A → B) with high power efficiency, while distorting the image formation in the backward direction (B → A), which also suffers from low power efficiency. Our reciprocal design features a set of spatially engineered linear diffractive layers that are statistically optimized for partially coherent illumination with a given phase correlation length. Our analyses reveal that when illuminated by a partially coherent beam with a correlation length of ≥∼1.5λ, where λ is the wavelength of light, diffractive unidirectional imagers achieve robust performance, exhibiting asymmetric imaging performance between the forward and backward directions—as desired. A partially coherent unidirectional imager designed with a smaller correlation length of <1.5λ still supports unidirectional image transmission but with a reduced figure of merit. These partially coherent diffractive unidirectional imagers are compact (axially spanning <75λ), polarization-independent, and compatible with various types of illumination sources, making them well-suited for applications in asymmetric visual information processing and communication.
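A minimal sketch of how spatially partially coherent illumination with a prescribed phase correlation length can be emulated numerically (all function names and parameter values here are illustrative assumptions, not the authors' implementation): independent random phase screens with a Gaussian correlation of the chosen width are applied to the input field, each realization is propagated, and the output intensities are averaged.

```python
import numpy as np

def angular_spectrum(u, dx, wavelength, z):
    """Free-space propagation of field u over distance z (angular spectrum method)."""
    n = u.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)  # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(u) * H)

def correlated_phase_screen(n, dx, corr_len, rng):
    """Random phase screen with lateral correlation length ~corr_len,
    obtained by Gaussian low-pass filtering white noise in the Fourier domain."""
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    filt = np.exp(-((np.pi * corr_len) ** 2) * (FX**2 + FY**2) / 2.0)
    noise = rng.standard_normal((n, n))
    phase = np.real(np.fft.ifft2(np.fft.fft2(noise) * filt))
    return 2 * np.pi * phase / phase.std()  # scale to a full 2*pi excursion

def partially_coherent_intensity(field, dx, wavelength, corr_len, z, n_real=32, seed=0):
    """Time-averaged output intensity under partially coherent light:
    average |propagated field|^2 over independent phase-screen realizations."""
    rng = np.random.default_rng(seed)
    acc = np.zeros(field.shape)
    for _ in range(n_real):
        screen = correlated_phase_screen(field.shape[0], dx, corr_len, rng)
        u = field * np.exp(1j * screen)
        acc += np.abs(angular_spectrum(u, dx, wavelength, z)) ** 2
    return acc / n_real
```

In a design loop, each diffractive layer's phase profile would be updated so that this ensemble-averaged intensity reproduces the input image in the forward direction while being penalized in the backward direction; only the illumination statistics are sketched here.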
We present subwavelength imaging of amplitude- and phase-encoded objects based on a solid-immersion diffractive processor designed through deep learning. Subwavelength features from the objects are resolved by the collaboration between a jointly-optimized diffractive encoder and decoder pair. We experimentally demonstrated the subwavelength-imaging performance of solid-immersion diffractive processors using terahertz radiation and achieved all-optical reconstruction of subwavelength phase features of objects (with linewidths of ~λ/3.4, where λ is the wavelength) by transforming them into magnified intensity images at the output field-of-view. Solid-immersion diffractive processors would provide cost-effective and compact solutions for applications in bioimaging, sensing, and material inspection, among others.
We introduce a diffractive super-resolution display system combining an electronic encoder and a diffractive decoder network to project super-resolved images using a low-resolution spatial light modulator (SLM). This deep learning-enabled display system achieves ~4x super-resolution, corresponding to a ~16x increase in the space-bandwidth product, which was also experimentally demonstrated using 3D-fabricated diffractive decoders that operate at the THz spectrum. The design principles of this diffractive super-resolution display were also used to project high-resolution color images using a low-resolution SLM. Diffractive super-resolution image projection paves the way for developing compact, low-power, and computationally efficient high-resolution image and video display systems.
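The ~16x space-bandwidth gain follows from pixel-count bookkeeping: a 4x pixel super-resolution in each lateral dimension yields 4² = 16x more independent output samples than the SLM provides. A toy sketch (hypothetical helper, not the authors' code) of what a low-resolution SLM presents to the diffractive decoder, where each modulator pixel covers a block of physical samples:

```python
import numpy as np

def slm_field(low_res_phase, factor=4):
    """Emulate a low-resolution SLM: replicate each modulator pixel over a
    factor-by-factor block of physical samples (one phase value per block)."""
    return np.kron(low_res_phase, np.ones((factor, factor)))
```

The jointly trained electronic encoder chooses `low_res_phase` such that, after propagation through the diffractive decoder, the projected intensity exhibits features finer than a single SLM pixel.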
We present an all-optical image denoiser based on spatially-engineered diffractive layers. Following a one-time training process using a computer, this analog processor composed of fabricated passive layers achieves real-time image denoising by processing input images at the speed of light and synthesizing the denoised results within its output field-of-view, completely bypassing digital processing. Remarkably, these designs achieve high output diffraction efficiencies of up to 40%, while maintaining excellent denoising performance. The effectiveness of this diffractive image denoiser was experimentally validated at the terahertz spectrum, successfully removing salt-only noise from intensity images using a 3D-fabricated denoiser that axially spans <250 wavelengths.
We directly transfer optical information around arbitrarily-shaped, fully-opaque occlusions that partially or entirely block the line-of-sight between the transmitter and receiver apertures. An electronic neural network (encoder) produces an encoded phase representation of the optical information to be transmitted. Despite being obstructed by the opaque occlusion, this phase-encoded wave is decoded by a diffractive optical network at the receiver. We experimentally validated our framework in the terahertz spectrum by communicating images around different opaque occlusions using a 3D-printed diffractive decoder. This scheme can operate at any wavelength and be adopted for various applications in emerging free-space communication systems.
KEYWORDS: Free space optics, Diffusers, Education and training, Deep learning, 3D modeling, Optical transmission, Neural networks, Mathematical optimization, Light sources and illumination, Image transmission
We report an optical diffractive decoder with an electronic encoder network to facilitate the accurate transmission of optical information of interest through unknown random phase diffusers along the optical path. This hybrid electronic-optical model was trained via supervised learning and comprises a convolutional neural network-based encoder and jointly-trained passive diffractive layers. After their joint training using deep learning, our hybrid model can accurately transfer optical information even in the presence of unknown phase diffusers, generalizing to new random diffusers never seen before. We experimentally validated this framework using a 3D-printed diffractive network, axially spanning <70λ, where λ=0.75mm is the illumination wavelength.
Free-space optical information transfer through diffusive media is critical in many applications, such as biomedical devices and optical communication, but remains challenging due to random, unknown perturbations in the optical path. We demonstrate an optical diffractive decoder with electronic encoding to accurately transfer the optical information of interest, corresponding to, e.g., any arbitrary input object or message, through unknown random phase diffusers along the optical path. This hybrid electronic-optical model, trained using supervised learning, comprises a convolutional neural network-based electronic encoder and successive passive diffractive layers that are jointly optimized. After their joint training using deep learning, our hybrid model can transfer optical information through unknown phase diffusers, demonstrating generalization to new random diffusers never seen before. The resulting electronic-encoder and optical-decoder model was experimentally validated using a 3D-printed diffractive network that axially spans <70λ, where λ = 0.75 mm is the illumination wavelength in the terahertz spectrum, carrying the desired optical information through random unknown diffusers. The presented framework can be physically scaled to operate at different parts of the electromagnetic spectrum, without retraining its components, and would offer low-power and compact solutions for optical information transfer in free space through unknown random diffusive media.
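A hedged sketch of the forward model described above (NumPy, with illustrative parameters; the random phase masks below merely stand in for the jointly optimized diffractive layers, and the CNN encoder is represented only by its output phase pattern): the phase-encoded message passes through an unknown random diffuser and is then decoded all-optically by successive passive layers into an output intensity image.

```python
import numpy as np

def propagate(u, dx, wavelength, z):
    """Angular-spectrum free-space propagation of field u over distance z."""
    n = u.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    return np.fft.ifft2(np.fft.fft2(u) * np.exp(1j * kz * z) * (arg > 0))

def diffractive_decoder(u, phase_masks, dx, wavelength, dz):
    """Successive passive, phase-only diffractive layers separated by dz."""
    for mask in phase_masks:
        u = propagate(u, dx, wavelength, dz)
        u = u * np.exp(1j * mask)
    return propagate(u, dx, wavelength, dz)

def transmit(encoded_phase, diffuser_phase, phase_masks, dx, wavelength, dz):
    """Forward model: a phase-encoded message passes through an unknown random
    diffuser, then is decoded all-optically into an output intensity image."""
    u = np.exp(1j * (encoded_phase + diffuser_phase))
    return np.abs(diffractive_decoder(u, phase_masks, dx, wavelength, dz)) ** 2
```

During training, gradients of an image-fidelity loss on this output intensity would be backpropagated jointly into the CNN encoder (which produces `encoded_phase`) and the layer phases, with a fresh random `diffuser_phase` drawn at each iteration so that the trained pair generalizes to diffusers never seen before.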
We report an electronic encoder (formed by a convolutional neural network) and a diffractive decoder (formed by spatially-structured diffractive layers) that are jointly optimized using deep learning to project super-resolved images at the output plane using a low-resolution spatial-light modulator (SLM). This diffractive super-resolution display performs ~4x pixel super-resolution, corresponding to a ~16x increase in the space-bandwidth product. This diffractive display was experimentally demonstrated using 3D-printed diffractive decoders operating at the THz spectrum. Diffractive super-resolution image displays can be used to build compact, low-power, and computationally efficient HR projectors operating at visible wavelengths and other parts of the electromagnetic spectrum.
We present a field-portable and high-throughput imaging flow-cytometer, which performs phenotypic analysis of microalgae using image processing and deep learning. This computational cytometer weighs ~1.6 kg and captures holographic images of water samples containing microalgae, flowing in a microfluidic channel at a rate of 100 mL/h. Automated analysis is performed by extracting the spatial and spectral features of the reconstructed images to automatically identify/count the target algae within the sample, using image processing and convolutional neural networks. Changes within the measured features and the composition of the microalgae can be rapidly analyzed to reveal even minute deviations from the normal state of the population.
Current state-of-the-art technology for in-vitro diagnostics employs laboratory tests such as ELISA, which consist of multi-step procedures and give results in an analog format. The results of these tests are interpreted from the color change in a set of diluted samples in a multi-well plate. However, detecting minute changes in color is challenging and can lead to false interpretations. A technique that allows individual counting of specific binding events would overcome such challenges. Digital imaging has recently been applied to diagnostics. SPR is one technique that allows quantitative measurements; however, its limit of detection is on the order of nM, whereas the detection limit already achieved with analog techniques is around pM. Optical techniques that are simple to implement and offer better sensitivities therefore have great potential for medical diagnostics. Interference microscopy is one such tool that has been investigated over the years in the optics field; most studies have been performed in a confocal geometry, observing each individual nanoparticle separately. Here, we achieve wide-field imaging of individual nanoparticles over a large field-of-view (~166 μm × 250 μm) on a microarray-based sensor chip in a fraction of a second. We tested the sensitivity of our technique on dielectric nanoparticles because they exhibit optical properties similar to those of viruses and cells, and we can detect non-resonant dielectric polystyrene nanoparticles of 100 nm. Moreover, we apply post-processing to further enhance particle visibility.