Computational optical imaging combines computationally designed illumination, optics, and processing algorithms. Some of these novel optical systems are applied to capturing multi-dimensional information, others to displaying that information to the user. Both capture and display devices have been studied for a long time, and a few are now slowly maturing toward commercial applications.
Many 3D displays have been proposed over the years, but only some of them promise true 3D perception to humans. In this talk, I will focus on one such technique, integral imaging, also known as light field display, exploring how these displays provide 3D information and discussing the enabling technologies required for their success. In parallel with developing these displays, understanding how humans perceive 3D is also important and has to be taken into account when designing them. I will highlight these issues for integral displays and show how they have the potential to provide accurate and comfortable 3D experiences.
Once the displays themselves are in place, the obvious next question is how to generate the content they can show. While rendering computer-generated content is the easier route, capturing and converting real-world content for these displays is not trivial. I will show examples of capture methods for integral displays and discuss two specific methods for capturing 3D information from real-world scenes and showing it on integral displays.
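As a concrete illustration of how integral-imaging capture relates to display content, the raw image behind a lenslet array can be rearranged into a grid of perspective views by a pure pixel shuffle. The sketch below is a minimal example assuming an idealized, perfectly aligned array with s x s sensor pixels per lenslet; the function name and dimensions are illustrative, not from the talk.

```python
import numpy as np

def extract_views(raw, s):
    """Rearrange a lenslet-array capture into an s x s grid of perspective views.

    raw : 2D array whose height and width are multiples of s; each s x s
          pixel block sits behind one lenslet (idealized, aligned optics).
    Returns shape (s, s, H//s, W//s): views[u, v] is the perspective view
    formed by taking pixel (u, v) under every lenslet.
    """
    h, w = raw.shape
    blocks = raw.reshape(h // s, s, w // s, s)
    # Move the intra-lenslet coordinates (u, v) to the front so that each
    # (u, v) slice collects the same pixel from every lenslet.
    return blocks.transpose(1, 3, 0, 2)

# Example: a synthetic 512 x 512 capture with 8 x 8 pixels per lenslet.
views = extract_views(np.random.rand(512, 512), 8)
print(views.shape)  # (8, 8, 64, 64); views[4, 4] is the central view
```

Real capture pipelines additionally have to handle lenslet misalignment, vignetting, and spatial-angular resolution trade-offs, which is part of what makes real-world content conversion non-trivial.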
Single-molecule-based super-resolution fluorescence microscopy has recently been developed to surpass the diffraction
limit by roughly an order of magnitude. These methods depend on the ability to precisely and accurately measure the
position of a single-molecule emitter, typically by fitting its emission pattern with a symmetric model (e.g., a centroid or
2D Gaussian). However, single-molecule emission patterns are not isotropic; they depend strongly on the orientation of the
molecule's transition dipole moment, as well as on its z-position. Failure to account for this can result in localization
errors on the order of tens of nm for in-focus images, and ~50-200 nm for molecules at modest defocus. The latter range
becomes especially important for three-dimensional (3D) single-molecule super-resolution techniques, which typically
employ depths-of-field of up to ~2 μm. To address this issue, we report the simultaneous measurement of precise and
accurate 3D single-molecule position and 3D dipole orientation using the Double-Helix Point Spread Function (DH-PSF)
microscope. We are thus able to significantly reduce dipole-induced position errors, bringing the standard deviation
of lateral localization from ~2x worse than the photon-limited precision (48 nm vs. 25 nm) to within 5 nm of that
precision. Furthermore, by averaging many estimates of orientation, we are able to reduce the lateral standard
deviation from 116 nm (~4x worse than the 28 nm precision) to 34 nm (within 6 nm of it).
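To put these numbers in context, the photon-limited lateral precision is commonly approximated, to leading order, as the PSF width divided by the square root of the detected photon count, and averaging M independent estimates shrinks random error by sqrt(M) while leaving systematic bias untouched. A toy sketch with illustrative numbers (not the paper's data):

```python
import numpy as np

def photon_limited_precision(psf_sigma_nm, n_photons):
    """Leading-order photon-limited localization precision:
    sigma_loc ~ psf_sigma / sqrt(N), ignoring pixelation and background."""
    return psf_sigma_nm / np.sqrt(n_photons)

# A ~250 nm PSF width and 100 photons give 25 nm, the scale quoted above.
print(photon_limited_precision(250.0, 100))  # -> 25.0

# Averaging M independent estimates reduces the random spread by sqrt(M),
# but a dipole-induced *bias* would not average away, which is why the
# anisotropic emission pattern itself must be modeled correctly.
rng = np.random.default_rng(0)
estimates = rng.normal(0.0, 25.0, size=(10_000, 16))  # M = 16 per molecule
print(estimates.std(), estimates.mean(axis=1).std())  # ~25 nm vs ~6.25 nm
```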
Double-helix point spread functions (DH-PSFs) have been used to extend localization and super-resolution microscopy
to three dimensions. The current DH-PSF design provides a long depth-of-field for 3D imaging; however, it is not optimal
for imaging under high-background conditions. We present a method to design unconventional DH-PSFs with control over
characteristics such as efficiency, transverse spread, and Fisher information. This allows tailoring the PSFs to specific
applications. In particular, we introduce a design suitable for 3D localization-based super-resolution imaging under
typical background conditions and demonstrate imaging with resolution one order of magnitude beyond the diffraction
limit over a depth-of-field of 1.2 μm.
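The role of Fisher information in this design space can be made concrete: for Poisson-distributed pixel counts, the information a PSF carries about a position parameter is the sum over pixels of the squared model derivative divided by the expected count, and the Cramer-Rao bound is its inverse square root. The sketch below uses a toy Gaussian PSF purely for illustration; the actual design optimizes over the full DH-PSF model.

```python
import numpy as np

def crlb_x(psf, dx, background):
    """Cramer-Rao lower bound on x-localization for Poisson pixel data.

    psf        : 2D array of expected signal photons per pixel
    dx         : pixel size (units of the returned bound)
    background : expected background photons per pixel

    Fisher information about x: I = sum_k (d mu_k/dx)^2 / mu_k,
    with mu_k = psf_k + background; the bound is sqrt(1 / I).
    """
    dmu_dx = np.gradient(psf, dx, axis=1)   # finite-difference derivative
    info = np.sum(dmu_dx**2 / (psf + background))
    return np.sqrt(1.0 / info)

# Toy Gaussian PSF: 15 x 15 grid, 100 nm pixels, 1000 signal photons.
y, x = np.mgrid[-7:8, -7:8] * 100.0
psf = np.exp(-(x**2 + y**2) / (2 * 130.0**2))
psf *= 1000.0 / psf.sum()
print(crlb_x(psf, 100.0, 0.0))   # no background
print(crlb_x(psf, 100.0, 10.0))  # bound degrades as background rises
```

The background term in the denominator is exactly why a long depth-of-field PSF that spreads photons over a large footprint loses precision fastest under high background, motivating the tailored designs described above.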
Point spread function engineering with a double-helix (DH) phase mask has recently been used in a joint computational-optical
approach to determine depth and intensity information from fluorescence images. In this study,
theoretically-determined DH-PSFs, computed from a model that incorporates different amounts of depth-induced
spherical aberration (SA) due to refractive-index mismatch in the three-dimensional imaging layers, are evaluated
by comparison to empirically-determined DH-PSFs measured from quantum dots. The theoretically-determined
DH-PSFs show a trend that captures the main effects observed in the empirically-determined DH-PSFs. Calibration
curves computed from these DH-PSFs show that SA slows the rate of rotation of the DH-PSF, which results
in: 1) an extended range of rotation; and 2) asymmetric rotation ranges as the focus is moved in opposite directions.
Thus, for accurate particle localization, different calibration curves need to be known for different amounts of SA.
Results also show that the DH-PSF is less sensitive to SA than the conventional PSF. Based on this result, it is expected
that fewer depth-variant (DV) DH-PSFs than conventional DV PSFs will be required for 3D computational microscopy
imaging in the presence of SA.
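In practice, the calibration curves take the form of fitted mappings from lobe angle to axial position, one per SA condition. A hedged sketch with hypothetical numbers (the rotation rates below are illustrative only, not measured values):

```python
import numpy as np

def fit_calibration(angle_deg, z_um, order=3):
    """Fit a polynomial calibration curve mapping DH-PSF lobe angle to z.
    A separate curve is needed per spherical-aberration condition, since
    SA changes the rotation rate and range."""
    return np.polynomial.Polynomial.fit(angle_deg, z_um, order)

# Hypothetical calibration data for two SA conditions (illustrative values):
angles = np.linspace(-80.0, 80.0, 17)
cal_no_sa = fit_calibration(angles, angles / 90.0)  # ~90 deg per micron
cal_sa = fit_calibration(angles, angles / 70.0)     # SA slows the rotation

# The same measured angle maps to different depths under the two conditions,
# so using the wrong curve introduces a systematic axial error.
print(cal_no_sa(45.0), cal_sa(45.0))  # ~0.50 um vs ~0.64 um
```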
Double-helix point-spread functions (DH-PSFs), a result of PSF engineering, are used for super-resolution microscopy.
The DH-PSF design features two dominant lobes in the image plane that rotate with the axial (z) position of
the point light source. The center of the DH-PSF gives the precise XY location of the point source, while the orientation
of the lobes gives the axial location. In this paper we investigate the effect of spherical aberrations on the DH-PSF.
Physical parameters such as the objective lens used, the size of the particle, the refractive index of the medium, and the
depth (i.e., the location within the underlying object) all contribute to the amount of spherical aberration. DH-PSFs with
spherical aberration are computed for different imaging conditions. Three-dimensional images of computer-generated
objects were simulated using both space-invariant and depth-variant approaches, and different approaches to estimating
the intensity and location of points from these images were investigated. Our results show that the DH-PSF is susceptible
to spherical aberration, which leads to an apparent shift in the location of the point source that grows with the aberration
and is comparable to that of the conventional PSF. Estimation algorithms such as depth-variant expectation maximization
(DVEM) can be used to obtain estimates of the true underlying object from images obtained with DH-PSFs.
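For reference, the space-invariant special case of EM deconvolution for Poisson data is the Richardson-Lucy iteration; DVEM generalizes this scheme by replacing the single convolution with a depth-variant imaging model (one PSF per depth stratum). A minimal 2D sketch of that baseline, for illustration rather than as the paper's implementation:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=50):
    """Richardson-Lucy (EM) deconvolution for Poisson-distributed images.

    image : 2D float array of photon counts
    psf   : 2D PSF normalized to sum to 1
    """
    psf_flip = psf[::-1, ::-1]                   # adjoint of the convolution
    estimate = np.full_like(image, image.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)  # guard against zeros
        estimate *= fftconvolve(ratio, psf_flip, mode="same")
    return estimate
```

In the depth-variant extension, the blurred estimate is instead formed as a sum of per-depth convolutions, which is what allows the SA-dependent PSF changes described above to enter the imaging model.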