Optical computing is considered a promising solution for the growing demand for parallel computing in various cutting-edge fields that require high integration and high-speed computational capacity. We propose an optical computation architecture called diffraction casting (DC) for flexible and scalable parallel logic operations. In DC, a diffractive neural network is designed for single instruction, multiple data (SIMD) operations. This approach allows for the alteration of logic operations simply by changing the illumination patterns. Furthermore, it eliminates the need for encoding and decoding of the input and output, respectively, by introducing a buffer around the input area, facilitating end-to-end all-optical computing. We numerically demonstrate DC by performing all 16 logic operations on two arbitrary 256-bit parallel binary inputs. Additionally, we showcase several distinctive attributes inherent in DC, such as the benefit of cohesively designing the diffractive elements for SIMD logic operations that assure high scalability and high integration capability. Our study offers a design architecture for optical computers and paves the way for a next-generation optical computing paradigm.
Reservoir computing is a powerful tool for creating digital twins of a target system. Such digital twins can both predict future values of a chaotic time series to high accuracy and reconstruct the general properties of a chaotic attractor. In this contribution, we show that their ability to learn the dynamics of a complex system extends to systems with multiple coexisting attractors, here a four-dimensional extension of the well-known Lorenz chaotic system.
Even parts of the phase space that were not in the training set can be explored with the help of a properly trained reservoir computer. This includes entirely separate attractors, which we call "unseen". Training on a single noisy trajectory is sufficient. Because reservoir computers are substrate-agnostic, this allows the creation of conjugate autonomous reservoir computers for any target dynamical system.
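A minimal echo-state-network sketch of this idea, here trained on the classic three-dimensional Lorenz system rather than the four-dimensional extension discussed above; the reservoir size, spectral radius, input scaling, and ridge parameter are all illustrative choices, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def lorenz_trajectory(n, dt=0.01):
    """Integrate the classic 3-D Lorenz system with forward Euler."""
    x = np.array([1.0, 1.0, 1.0])
    out = np.empty((n, 3))
    for i in range(n):
        dx = np.array([10.0 * (x[1] - x[0]),
                       x[0] * (28.0 - x[2]) - x[1],
                       x[0] * x[1] - 8.0 / 3.0 * x[2]])
        x = x + dt * dx
        out[i] = x
    return out

# Echo state network: fixed random reservoir, trained linear readout.
N, rho, sigma, ridge = 300, 0.9, 0.5, 1e-6
W_in = sigma * rng.uniform(-1, 1, (N, 3))
W = rng.uniform(-1, 1, (N, N))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # set spectral radius

data = lorenz_trajectory(5000)
data = (data - data.mean(0)) / data.std(0)       # normalize each coordinate

r = np.zeros(N)
states = np.empty((len(data) - 1, N))
for t in range(len(data) - 1):
    r = np.tanh(W @ r + W_in @ data[t])
    states[t] = r

# Discard the washout transient, then fit a ridge-regression readout
# that predicts the next (normalized) state from the reservoir state.
washout = 200
X, Y = states[washout:], data[washout + 1:]
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ Y)

pred = X @ W_out
rmse = np.sqrt(np.mean((pred - Y) ** 2))
print("one-step RMSE (normalized units):", rmse)
```

Run autonomously (feeding predictions back as inputs), such a trained readout is what lets the reservoir act as a stand-alone surrogate of the target dynamics.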
Computational imaging is a powerful imaging framework that combines optics and information science. In this talk, I will present our research activities related to computational imaging with scattering media and machine learning.
Label-free optical imaging is valuable for studying fragile biological phenomena in which the chemical and/or optical damage associated with exogenous labelling of biomolecules is unwanted. Molecular vibrational imaging (MVI) and quantitative phase imaging (QPI) are the two most established label-free imaging methods, providing biochemical and morphological information about the sample, respectively. While these methods have enabled numerous important biological analyses through intensive technological development over the past twenty years, their inherent limitations remain unresolved. In this contribution, we present a unified imaging scheme that bridges the technological gap between MVI and QPI, achieving simultaneous, in-situ integration of the two complementary label-free contrasts using the mid-infrared (MIR) photothermal effect. Our method is a super-resolution MIR imaging technique in which vibrational resonances induced by wide-field MIR excitation, and the resulting photothermal refractive-index changes, are detected and localized with a spatial resolution determined by a visible-light-based QPI system. We demonstrate the applicability of this method, termed MV-sensitive QPI (MV-QPI), to live-cell imaging. Our MV-QPI method could allow quantitative mapping of subcellular biomolecular distributions within the global cellular morphology in a label-free and damage-free manner, providing more comprehensive pictures of complex and fragile biological activities.
In this paper, several methods based on a data-centric approach to optical sensing and imaging are summarized, and their potential capabilities for miscellaneous problems are presented. First, the framework of the data-centric approach is briefly explained with a generalized formulation of the optical sensing and imaging process. The essential idea is to apply machine learning to estimate the inverse of the target sensing and imaging process using mathematical models. Once such an estimate is obtained, the input object and the resulting output signals can be related through the mathematical model. Based on this framework, several problems in optical sensing and imaging are demonstrated: single-shot super-resolution in diffractive imaging, computer-generated holography based on deep learning, and wavefront sensing using deep learning. These examples go beyond simple imaging and represent sophisticated methods in general optical sensing and imaging. The data-centric approach is expected to be useful for a wide range of problems in applied optics.
Machine learning is an efficient tool for estimating the input signals of nonlinear systems from their outputs. In the case of object observation through scattering media with coherent illumination, the output signals can be speckle patterns that are highly disordered versions of the inputs. Even in such cases, machine learning is effective for retrieving the input information. In this method, a large number of input-output signal pairs are used for training, after which untrained input signals can be retrieved from observed output signals. We demonstrated object recognition, imaging, and focusing through scattering media and confirmed the effectiveness of the presented method.
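A toy numerical sketch of this training scheme, with an assumed random complex transmission matrix standing in for the scattering medium and ridge regression standing in for the learned model; the sizes, object model, and regularization are illustrative, not the demonstrated configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

n_bits, n_pix, n_train, n_test = 8, 256, 3000, 200

# Fixed complex transmission matrix standing in for the scattering medium.
T = (rng.standard_normal((n_pix, n_bits))
     + 1j * rng.standard_normal((n_pix, n_bits))) / np.sqrt(2)

def speckle(x):
    """Coherent propagation through the medium; the camera records intensity only."""
    return np.abs(x @ T.T) ** 2

# Random binary objects and their speckle observations (training + test pairs).
X = rng.integers(0, 2, (n_train + n_test, n_bits)).astype(float)
Y = speckle(X)

# Ridge regression from speckle intensities back to the binary object.
A = np.hstack([Y[:n_train], np.ones((n_train, 1))])   # bias term
W = np.linalg.solve(A.T @ A + 1e-3 * np.eye(n_pix + 1), A.T @ X[:n_train])

# Retrieval of untrained (unseen) objects from their observed speckles.
A_test = np.hstack([Y[n_train:], np.ones((n_test, 1))])
pred = (A_test @ W > 0.5).astype(float)
acc = np.mean(pred == X[n_train:])
print("bit accuracy on unseen objects:", acc)
```

The linear readout used here is only a baseline; the same pipeline applies with deeper learned models when the speckle-to-object mapping is more strongly nonlinear.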
A data-centric method is introduced for object observation through scattering media. A large number of training pairs are used to characterize the relation between the object and the observation signals based on machine learning. Using this method, object information can be retrieved even from strongly disturbed signals. As potential applications, object recognition, imaging, and focusing through scattering media were demonstrated.
We introduce two of our research activities related to computational imaging through scattering media. The first topic is holographic imaging with coded diffraction. The second topic is sensing through scattering media based on machine learning. Compared with conventional systems, our approaches can simplify the optical setups by means of computational assistance.
We propose and demonstrate an optical component that overcomes critical limitations in our previously demonstrated high-speed multispectral videography, a method in which an array of periscopes placed in a prism-based spectral shaper is used to achieve snapshot multispectral imaging with a frame rate limited only by that of the image-recording sensor. The demonstrated component consists of a slicing mirror incorporated into a 4f relay lens system, which we refer to as a spectrum slicer (SS). Owing to its simple design, we can easily increase the number of spectral channels without adding fabrication complexity while preserving the capability for high-speed multispectral videography. We present a theoretical framework for the SS and demonstrate its experimental utility for spectral imaging by monitoring a dynamic colorful event in real time through five different visible spectral windows.
The combination of optical encoding and algorithmic decoding provides high-performance, highly functional imaging modalities known as computational imaging. Multi-aperture optics has been effectively utilized as an optical encoder, enabling novel imaging methods and systems. In this paper, two instances of computational imaging using multi-aperture optics are presented, with different types of apertures embedded in the optical system. A compound-eye imager is a flexible and versatile imaging system composed of multiple image fields, which we applied to intra-oral diagnostics. Single-shot phase imaging capable of capturing a large complex field is achieved by multi-aperture optics with a coded aperture. In both cases, the encoded signals are processed to retrieve the desired information within the framework of computational imaging.
Artificial compound-eye optics has been used for three-dimensional information acquisition and display. It also enables a diversity of coded imaging processes in each elemental optic. In this talk, we introduce our single-shot compound-eye imaging system, which observes multi-dimensional information, including depth, spectrum, and polarization, based on compressive sensing. Furthermore, it can be applied to increase the dynamic range and the field of view. We also demonstrate extended depth-of-field (DOF) cameras based on compound-eye optics. These extended-DOF cameras implement phase modulations physically or computationally to increase the focusing range.
KEYWORDS: Integral imaging, Cameras, Imaging systems, 3D image processing, 3D acquisition, Compressed sensing, Sensors, Digital holography, Optical filters, Stereoscopy
In this keynote address paper, we present an overview of our previously published work on using compressive sensing in multi-dimensional imaging. We examine a variety of multi-dimensional imaging approaches and applications, including 3D multimodal imaging integrated with polarimetric and multispectral imaging, integral imaging, and digital holography.
We present a framework of multi-dimensional compound-eye imaging with the thin observation module by bound optics (TOMBO) and its applications. In the system, each sub-optic is equipped with optical coding elements that shear or weight the multi-dimensional object information along the axial direction. The encoded information is integrated onto the detectors in the sub-optics, and the object is reconstructed by a compressive sensing algorithm. The framework can be applied to various optical information acquisition tasks. We describe several applications of the framework, including spectral imaging and polarization imaging.
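The compressive reconstruction step can be illustrated with a generic sparse-recovery sketch; the random measurement matrix and the ISTA solver below are illustrative stand-ins for the system's actual coded multi-aperture model, with all sizes chosen for demonstration:

```python
import numpy as np

rng = np.random.default_rng(2)

n, m, k = 128, 48, 5          # signal length, measurements, sparsity
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.standard_normal(k)

# Random matrix standing in for the coded multi-aperture measurement model.
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true                # detector integrates the encoded information

# ISTA: iterative shrinkage-thresholding for min ||Ax - y||^2 + lam * ||x||_1
lam, step = 0.01, 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(2000):
    g = x - step * A.T @ (A @ x - y)                      # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print("relative reconstruction error:", err)
```

The key point is that far fewer measurements than unknowns (here 48 versus 128) suffice when the object is sparse in a suitable basis, which is what makes single-shot multi-dimensional acquisition feasible.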
We have proposed a compact three-dimensional shape-measurement system for intraoral diagnosis, in which multiwavelength pattern projectors based on diffractive optical elements (DOEs) are integrated into the lens gaps of a compound-eye camera. We built a prototype module with blue and green pattern projectors on both sides of the compound-eye camera to increase the in-plane spatial resolution. With the two projectors, the stripe pitch was reduced to 0.73 mm on average from about 1.4 mm for a single wavelength. The root-mean-square (rms) error of the measured depth map of a plane board was 0.27 mm at a distance of 40 mm. The rms errors for the measured gums and teeth of a plaster figure and of an examinee were 0.37 and 0.40 mm, respectively.
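The depth measurement in such projector-camera systems follows the standard triangulation relation z = f * b / d, and the rms figures quoted above are the usual accuracy metric. A sketch of both, in which the focal length, baseline, working distance, and detection jitter are assumed illustrative values and not the module's calibration:

```python
import numpy as np

# Assumed geometry for illustration only (not the prototype's calibration).
f_mm, baseline_mm = 1.3, 5.0

def depth_from_disparity(disparity_mm):
    """Standard projector-camera triangulation: z = f * b / d."""
    return f_mm * baseline_mm / disparity_mm

# A flat board at ~40 mm produces a nominal disparity; stripe-detection
# jitter on the sensor perturbs it, which maps into depth error.
z_true = 40.0
d_true = f_mm * baseline_mm / z_true
noise = np.random.default_rng(3).normal(0.0, 0.002, 1000)
z_meas = depth_from_disparity(d_true + noise)

rms = np.sqrt(np.mean((z_meas - z_true) ** 2))
print(f"RMS depth error: {rms:.2f} mm")
```

The sketch shows why a finer stripe pitch helps: smaller localization jitter on the detected pattern translates directly into a smaller rms depth error.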
TOMBO (Thin Observation Module by Bound Optics) is a compound-eye imaging system inspired by the visual organs of insects. TOMBO has various advantages over conventional imaging systems; however, to demonstrate its applicability as an imaging system, high-resolution imaging is essential. In this study, a TOMBO system with an irregular lens-array arrangement is proposed, and a high-resolution imaging method that integrates super-resolution processing with depth acquisition of three-dimensional objects is presented. The proposed system improves image resolution for distant objects because it alleviates the degeneration of sampling points on such objects caused by the regular lens-array arrangement of the conventional TOMBO system. The experimental TOMBO has a lens focal length of 1.3 mm, a lens pitch of 0.5 mm, an aperture diameter of 0.5 mm, 3 × 3 units, 160 × 160 pixels per unit, and a pixel pitch of 3.125 μm. The target planar object is located 5 m from the TOMBO system. Simulation results show that the coverage ratio of the sampling points, the PSNR of the super-resolved image, and the depth-estimation error for the object are improved by 50%, 3 dB, and 56%, respectively. Experimental results show that the depth-estimation error for a planar object located at 3.2 m is 18% and that the contrast at 123 lp/mm at the center of a unit is improved by 0.38 with super-resolution processing.
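The PSNR metric used here to score super-resolved images is the standard peak signal-to-noise ratio; a minimal sketch with synthetic illustrative images (not the study's data):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(4)
ref = rng.integers(0, 256, (64, 64))                       # synthetic reference
noisy = np.clip(ref + rng.normal(0, 8, ref.shape), 0, 255)  # degraded version
val = psnr(ref, noisy)
print(f"PSNR: {val:.1f} dB")
```

A 3 dB PSNR gain, as reported in the simulation, corresponds to roughly halving the mean-squared reconstruction error.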
The TOMBO (Thin Observation Module by Bound Optics) is a compound-eye imaging system inspired by biological visual systems. The image of an object captured by the TOMBO system is composed of multiple images observed from multiple viewpoints. Owing to the disparities between the individual images, the object distance can be measured. In this paper, we propose a novel method for 3D information acquisition using the TOMBO system. The conventional image reconstruction method for the TOMBO system assumes that a planar object is located at a specific distance; therefore, if the actual and assumed object distances differ, a correct reconstructed image is not obtained. To reconstruct correct images of 3D objects, we execute the image reconstruction process for several candidate object distances. The distance at which high-frequency components are successfully reconstructed is taken as the object distance. Using the distances of all objects, we can generate a composite image focused on the objects. Moreover, object extraction is demonstrated using the measured object distances and the composite image. We reduce the processing time by adapting the processing to a GPU (graphics processing unit). Experimental results indicate the effectiveness of the proposed method.
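The distance-selection step (reconstructing at several candidate distances and keeping the one with the strongest high-frequency content) can be sketched as follows. Gaussian defocus is an assumed stand-in for the actual reconstruction blur, and the scene, distances, and blur scale are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

def gaussian_blur(img, sigma):
    """Separable Gaussian blur via direct 1-D convolutions."""
    if sigma <= 0:
        return img.copy()
    radius = int(3 * sigma) + 1
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def high_freq_energy(img):
    """Focus measure: energy of a discrete Laplacian response."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return np.mean(lap ** 2)

scene = rng.standard_normal((64, 64))     # textured planar object
z_true = 1.2                              # true distance (assumed units)
candidates = np.arange(0.4, 2.01, 0.2)

# Each candidate distance yields a "reconstruction" whose sharpness depends
# on how far the assumed distance is from the true one.
stack = [gaussian_blur(scene, 2.0 * abs(z - z_true)) for z in candidates]
scores = [high_freq_energy(im) for im in stack]
z_est = candidates[int(np.argmax(scores))]
print("estimated distance:", z_est)
```

Because this focus measure is computed independently per candidate (and per image region), it parallelizes naturally, which is consistent with the GPU adaptation mentioned above.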