The optical designer arranges lenses to make light do what is required, whether for imaging or projection, using optical design software such as OpticStudio or CODE V. Over the past few years, this software has gained a number of advanced capabilities, including complex functions such as simulating what the observer's eye will see from the exact shape of the source (an OLED panel, for example) in the case of a 3D display, as well as new optimization techniques. Here we present the most recent advances where advanced or customized functionality of optical design software meets 3D imaging and display systems.
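As a rough, hypothetical sketch of the kind of source-to-eye simulation mentioned above (not taken from any particular package; the thin-lens eye model, pupil size, and distances are illustrative values only), one can trace rays from an extended OLED patch through an ideal eye onto a retina grid:

import numpy as np

def retinal_image(source_pts, source_lum, z_obj, f_eye=0.017,
                  pupil_r=0.002, z_retina=0.017, n_rays=200, res=128):
    """Accumulate ray hits on a retina grid (all lengths in meters)."""
    img = np.zeros((res, res))
    half = 0.0005                            # half-width of the simulated retinal patch
    for (xs, ys), lum in zip(source_pts, source_lum):
        # sample ray intersections uniformly over the pupil
        r = pupil_r * np.sqrt(np.random.rand(n_rays))
        th = 2 * np.pi * np.random.rand(n_rays)
        xp, yp = r * np.cos(th), r * np.sin(th)
        # ideal thin lens: every ray from (xs, ys) is redirected toward the
        # paraxial image point at distance z_img behind the lens
        z_img = 1.0 / (1.0 / f_eye - 1.0 / z_obj)
        xi, yi = -xs * z_img / z_obj, -ys * z_img / z_obj
        # propagate from the pupil plane to the retina plane
        t = z_retina / z_img
        xr = xp + (xi - xp) * t
        yr = yp + (yi - yp) * t
        u = np.clip(((xr + half) / (2 * half) * res).astype(int), 0, res - 1)
        v = np.clip(((yr + half) / (2 * half) * res).astype(int), 0, res - 1)
        np.add.at(img, (v, u), lum / n_rays)
    return img

# e.g. a small square OLED patch viewed from 0.5 m
pts = [(x, y) for x in np.linspace(-2e-3, 2e-3, 9)
              for y in np.linspace(-2e-3, 2e-3, 9)]
img = retinal_image(pts, np.ones(len(pts)), z_obj=0.5)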
Recently, integral-imaging-based light field display methods, which are potentially capable of rendering correct or nearly correct focus cues and addressing the vergence-accommodation conflict, have been explored and demonstrated in head-mounted displays. Despite their promising potential, the visual performance and potential visual artifacts of viewing light field displays have not been fully investigated. In this talk, we present a systematic investigation of how to evaluate and characterize the visual effects as well as visual artifacts of a light field display by analyzing the simulated perceived retinal image. We will also demonstrate prototypes using time-multiplexing methods to address some of the visual artifacts or limitations.
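For context, a first-order way to reason about focus cues is the paraxial blur-circle estimate below; this small-defocus approximation and its parameter values are illustrative and are not the authors' retinal-image simulation:

def retinal_blur_diameter(z_point, z_acc, pupil_d=4e-3, f_eye=17e-3):
    """Blur-circle diameter on the retina (m) for a point rendered at depth
    z_point while the eye accommodates to z_acc (small-defocus approximation)."""
    defocus_diopters = abs(1.0 / z_point - 1.0 / z_acc)
    return pupil_d * f_eye * defocus_diopters

# a point rendered at 0.5 m viewed while the eye accommodates to 2 m
print(retinal_blur_diameter(0.5, 2.0))    # ~1e-4 m, i.e. roughly 100 um of blur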
Event-Based Stereo Systems (EBSS) offer 3D imaging at very high frame rates, high dynamic range, and low power consumption. EBSSs are based on 2D Dynamic Vision Sensors (DVSs), also known as neuromorphic cameras, which differ from traditional frame-based cameras by capturing and transmitting only changes in pixel-level brightness. A straightforward approach to capturing 3D events is to use two DVSs in a stereoscopic arrangement. A more cost-effective and robust way is to use a single static DVS with appropriate optics to capture the 3D event scene. This talk will give an overview of the optical-algorithmic solutions we developed for capturing 3D stereo events with a single DVS.
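As an illustrative sketch only (the side-by-side split-view optics, the time-coincidence matching, and all parameter values are our assumptions, not the authors' pipeline), temporally coincident events from two views recorded on one sensor can be paired and converted to depth via the usual disparity relation:

def depth_from_event_pairs(ev_left, ev_right, focal_px, baseline_m,
                           dt_max=1e-4, dy_max=1):
    """Greedily pair left/right events that are close in time and row,
    then convert pixel disparity to depth (meters)."""
    matches = []
    for xl, yl, tl in ev_left:
        cand = [(xr, yr, tr) for xr, yr, tr in ev_right
                if abs(tr - tl) < dt_max and abs(yr - yl) <= dy_max and xl > xr]
        if not cand:
            continue
        xr, yr, tr = min(cand, key=lambda e: abs(e[2] - tl))   # closest in time
        disparity = xl - xr                                    # pixels
        matches.append((xl, yl, focal_px * baseline_m / disparity))
    return matches

# toy events (x, y, t) with a constant 5-pixel disparity
left  = [(40, 10, 0.0010), (41, 10, 0.0021)]
right = [(35, 10, 0.0010), (36, 10, 0.0021)]
print(depth_from_event_pairs(left, right, focal_px=500, baseline_m=0.1))  # depth ~10 m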
I will present a novel light field system that leverages an event camera to achieve 3D imaging at kHz rates.
A lensless camera is a typical application of computational optical sensing and imaging, and many kinds of lensless cameras have been proposed. In general, the compactness comes at the cost of preprocessing (a priori information) that is time-consuming and computationally expensive. We have proposed a method to reduce this computational cost and a method to reduce memory usage. In this talk, after a brief review of our idea, we present some preliminary experimental results, including refocusing, that confirm the idea.
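For orientation, a common baseline reconstruction for lensless measurements (shown only for context; it is not the authors' cost- or memory-reduction method) is Wiener deconvolution with a calibrated point spread function, and refocusing then amounts to reconstructing with the PSF calibrated for a different depth plane:

import numpy as np

def wiener_reconstruct(measurement, psf, reg=1e-2):
    """FFT-based Wiener deconvolution; measurement and psf are same-size 2D arrays."""
    H = np.fft.fft2(np.fft.ifftshift(psf))          # transfer function of the mask/PSF
    Y = np.fft.fft2(measurement)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + reg)     # regularized inverse filter
    return np.real(np.fft.ifft2(X))

# refocusing = reconstructing with the PSF of another depth plane,
# e.g. (hypothetical arrays):
# recon_near = wiener_reconstruct(meas, psf_at_30cm)
# recon_far  = wiener_reconstruct(meas, psf_at_1m)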
Single-cell analysis, or cytometry, is a ubiquitous tool in the biomedical sciences. Most cytometers use fluorescent probes to ascertain the presence or absence of targeted molecules, whereas biophysical parameters such as cell density, refractive index, and water content are difficult to obtain. We present quantitative phase imaging (QPI) as an effective technique to quantify the absolute intracellular water content of single cells at video rate, using an assumption of spherical cellular geometry. Our study demonstrates the utility of QPI for rapid intracellular water quantification and shows a path forward for identifying biophysical mechanisms using label-free imaging. We further demonstrate the use of two complementary techniques, quantitative phase imaging and Brillouin spectroscopy, as a label-free image cytometry platform capable of measuring more than a dozen biophysical properties of individual cells simultaneously. Our system will unlock new avenues of research in biophysics, cell biology, and medicine.
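As a hedged illustration of the underlying relations (generic literature constants, an assumed wavelength, and a simple homogeneous-sphere model, not the authors' calibration or algorithm), the water volume fraction can be estimated from the measured phase at the cell center as follows:

import numpy as np

LAMBDA = 532e-9      # illumination wavelength (m); assumed value
N_WATER = 1.333      # refractive index of water; medium assumed water-like
ALPHA = 0.19e-3      # specific refraction increment (m^3/kg), ~0.19 mL/g
V_BAR = 0.73e-3      # partial specific volume of the dry content (m^3/kg)

def water_volume_fraction(phase_center, radius):
    """Estimate the intracellular water volume fraction from the measured
    phase (rad) at the cell center, assuming a homogeneous sphere of the
    given radius (m) immersed in a water-like medium."""
    thickness = 2.0 * radius                              # chord through the center
    n_cell = N_WATER + phase_center * LAMBDA / (2 * np.pi * thickness)
    dry_conc = (n_cell - N_WATER) / ALPHA                 # dry-mass concentration (kg/m^3)
    return 1.0 - dry_conc * V_BAR                         # remaining volume is water

print(water_volume_fraction(phase_center=6.0, radius=5e-6))   # ~0.8 for these numbers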
Traditionally, the resolving power of passive optical imaging systems was understood to be determined by the Rayleigh resolution limit. However, a rigorous analysis of the two-point resolution problem using quantum information theory has demonstrated that the Rayleigh limit is not fundamental. In fact, we now know that the fundamental quantum optical resolution limit can be achieved by spatial-mode demultiplexing (SPADE) or mode-sorting measurements. In this talk, I will discuss our work toward a broader understanding and analysis of the quantum limits of passive optical imaging in the sub-Rayleigh domain (i.e., optical super-resolution) for more complex scenes, such as point-source constellations and continuous line sources, including the use of adaptive imaging and applications to coronagraphs and multiple-aperture imaging systems.
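For background, the standard two-point result that motivates this line of work (due to Tsang and co-workers, not the new results in this talk) can be reproduced numerically: the per-photon Fisher information of direct imaging collapses as the separation shrinks (Rayleigh's curse), while the quantum limit attainable with SPADE stays at 1/(4*sigma^2) for a Gaussian PSF of intensity width sigma:

import numpy as np

SIGMA = 1.0                                   # PSF intensity width (arbitrary units)
x = np.linspace(-10, 10, 4001)                # image-plane coordinate

def direct_imaging_fi(s, ds=1e-4):
    """Per-photon Fisher information of direct imaging for the separation s
    of two equal-brightness incoherent point sources with a Gaussian PSF."""
    def p(sep):
        psf = lambda u: np.exp(-u ** 2 / (2 * SIGMA ** 2)) / np.sqrt(2 * np.pi * SIGMA ** 2)
        return 0.5 * (psf(x - sep / 2) + psf(x + sep / 2))
    dp = (p(s + ds) - p(s - ds)) / (2 * ds)           # numerical derivative w.r.t. s
    return np.sum(dp ** 2 / p(s)) * (x[1] - x[0])     # integrate the classical FI

quantum_fi = 1.0 / (4 * SIGMA ** 2)           # separation-independent quantum limit
for s in (2.0, 0.5, 0.1):
    print(s, direct_imaging_fi(s), quantum_fi)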
Mixed-pixel spectral interference is one of the main obstacles in multispectral data exploitation. In this work, we present results for a 4D, 7-channel multispectral SWIR lidar capable of simultaneous spatial and spectral mixed-pixel discrimination. Our unique multispectral lidar system resolves target spectra as a function of depth, eliminating the mixed-pixel problem. Results are demonstrated at the laboratory scale with multiple intermediate obscurations and varying target spectra.
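A toy numerical illustration of the principle (with made-up reflectances, ranges, and pulse width, not the experimental parameters) shows how per-channel range profiles separate two spectra that would otherwise blend into a single mixed pixel:

import numpy as np

C = 3e8                                          # speed of light (m/s)
t = np.linspace(0, 200e-9, 2000)                 # receiver time axis (s)

def pulse(t0, width=3e-9):
    """Gaussian return pulse centered at time t0."""
    return np.exp(-((t - t0) ** 2) / (2 * width ** 2))

# hypothetical reflectances of two surfaces in 7 SWIR channels
leaf_spec  = np.array([0.60, 0.55, 0.50, 0.45, 0.30, 0.20, 0.15])
paint_spec = np.array([0.20, 0.25, 0.30, 0.40, 0.50, 0.55, 0.60])
r_leaf, r_paint = 6.0, 21.0                      # ranges of the two surfaces (m)

# each spectral channel records its own range profile for the same pixel
channels = [leaf_spec[k] * pulse(2 * r_leaf / C) +
            paint_spec[k] * pulse(2 * r_paint / C) for k in range(7)]

# reading every channel at the two range gates separates the two spectra
gate_leaf  = np.argmin(np.abs(t - 2 * r_leaf / C))
gate_paint = np.argmin(np.abs(t - 2 * r_paint / C))
print(np.round([ch[gate_leaf] for ch in channels], 2))    # ~leaf_spec
print(np.round([ch[gate_paint] for ch in channels], 2))   # ~paint_spec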
We present an overview of recent developments in the field of machine learning for 2D and 3D data processing. These include deep networks with attention mechanisms and transformers, new strategies for segmentation and object extraction, and methods for novel view synthesis. We will discuss the relevance of these techniques for processing data obtained from both active and passive 3D imaging systems.
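As a reminder of what the attention mechanisms mentioned above compute (textbook scaled dot-product attention, not any specific architecture from the overview), each query attends to all keys, which is part of what makes transformers natural for unordered 3D point sets as well as 2D feature maps:

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q: (n, d), K: (m, d), V: (m, dv) -> (n, dv)."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])                    # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)             # softmax over keys
    return weights @ V

# e.g. 100 points with 64-dimensional features attending to each other
feats = np.random.randn(100, 64)
out = scaled_dot_product_attention(feats, feats, feats)        # shape (100, 64)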