Holographic displays are widely regarded as the pinnacle of three-dimensional (3D) visualization technology. In conventional pipelines, real objects must first be photographed or converted into 3D models, which are then processed by neural networks or sophisticated algorithms to generate 3D holograms, an indirect and computationally demanding procedure. To address this challenge, we propose an end-to-end 3D hologram generation strategy that integrates the Transport of Intensity Equation (TIE) phase retrieval technique with the Double Phase-Amplitude Coding (DPAC) method. Under coherent illumination, phase-only holograms containing depth information can be generated directly by using a camera to capture two out-of-focus amplitude maps of the object wave as it propagates to the hologram plane. The TIE module recovers the phase from the two out-of-focus amplitude maps, and DPAC then encodes the result as a phase-only hologram. We further conduct simulations that validate the phase retrieval capability of the TIE on complex holograms and demonstrate the feasibility of the proposed strategy.
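The pipeline above combines two standard building blocks: TIE phase retrieval from a pair of defocused intensity captures, followed by double phase-amplitude coding of the recovered complex field. Below is a minimal NumPy sketch of both steps, assuming the common uniform-intensity simplification of the TIE inverted with an FFT-based Poisson solver and a checkerboard DPAC layout; the function names, parameters, and regularization constants are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def tie_phase(I_minus, I_plus, dz, wavelength, pitch, eps=1e-9):
    """Recover the phase from two defocused intensity images via the TIE,
    using the uniform-intensity simplification
        laplacian(phi) = -(2*pi/lambda) * (1/I0) * dI/dz,
    inverted with an FFT-based Poisson solver."""
    dIdz = (I_plus - I_minus) / (2.0 * dz)          # axial intensity derivative
    I0 = np.maximum(0.5 * (I_plus + I_minus), eps)  # in-focus intensity estimate
    rhs = -(2.0 * np.pi / wavelength) * dIdz / I0
    ny, nx = rhs.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    k2 = (2.0 * np.pi) ** 2 * (FX**2 + FY**2)       # Laplacian is -k2 in Fourier space
    denom = np.where(k2 > eps, -k2, 1.0)
    phi_hat = np.fft.fft2(rhs) / denom
    phi_hat[k2 <= eps] = 0.0                        # the DC phase term is undetermined
    return np.real(np.fft.ifft2(phi_hat))

def dpac(field):
    """Double phase-amplitude coding: A*exp(i*phi) equals the average of the
    two unit-amplitude phasors exp(i*(phi +/- arccos(A))); interleaving them
    on a checkerboard yields a phase-only hologram."""
    A = np.abs(field)
    A = A / (A.max() + 1e-12)                       # normalize so arccos is defined
    phi = np.angle(field)
    offset = np.arccos(A)
    yy, xx = np.indices(field.shape)
    checker = (xx + yy) % 2 == 0
    return np.where(checker, phi + offset, phi - offset)
```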
In holographic near-eye displays, expanding the eyebox without compromising the field of view (FOV) is crucial to the user experience. Current technologies are constrained by the conservation of optical etendue, which makes it difficult to achieve a large eyebox and a wide FOV simultaneously. This paper presents a novel portable augmented reality holographic near-eye display system that expands the exit pupil without reducing the FOV by means of exit pupil scanning. The system replaces conventional eyepieces and beam splitters with holographic optical elements, employs point-source illumination instead of collimated illumination, and uses an off-axis angular spectrum diffraction propagation model between parallel planes tailored to human visual characteristics. This approach effectively mitigates the trade-off between FOV and eyebox. Compared with traditional systems, the proposed design relaxes this trade-off in simulations and reduces the form factor, offering a promising route toward practical holographic near-eye display applications.
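For context, one common way to realize an off-axis angular spectrum propagation between parallel planes is to shift the spatial-frequency grid by the carrier frequency set by the off-axis illumination angle. The sketch below illustrates that idea; the angle convention, the hard evanescent-wave cutoff, and all parameter names are assumptions for illustration rather than the paper's exact formulation.

```python
import numpy as np

def offaxis_asm(u0, z, wavelength, pitch, theta_x=0.0, theta_y=0.0):
    """Propagate field u0 over distance z between parallel planes with the
    angular spectrum method, centering the frequency grid on the carrier
    set by the off-axis angles (theta_x, theta_y); evanescent components
    are dropped."""
    ny, nx = u0.shape
    fx = np.fft.fftfreq(nx, d=pitch) + np.sin(theta_x) / wavelength
    fy = np.fft.fftfreq(ny, d=pitch) + np.sin(theta_y) / wavelength
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2       # (kz / 2*pi)^2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0.0, np.exp(1j * kz * z), 0.0)
    return np.fft.ifft2(np.fft.fft2(u0) * H)
```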
Learning-based computer-generated holography (CGH) has great potential for real-time, multi-depth holographic displays. However, most existing algorithms use only the amplitude of the target image as the dataset to simplify the algorithmic pipeline, and do not adequately incorporate the angular spectrum method (ASM), which can compute multi-plane propagation, into the neural network. Here, we propose a multi-depth diffraction model-driven neural network (MD-Holo). MD-Holo uses the weights of a pre-trained ResNet34 to initialize the encoder stage of the complex-amplitude generating network so that it extracts more general features. Motion-blurred, Gaussian-filtered, lens-blurred, and low-pass-filtered images are added to the training data to accommodate a wider range of inputs. Compared with the super-resolution DIV2K dataset alone, the enhanced dataset enables both the generation of high-fidelity super-resolution images and generalization to a wider variety of images. Simulations and optical experiments show that MD-Holo reconstructs multi-depth images with high quality and few artifacts.
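The abstract names two concrete ingredients: an encoder initialized from pre-trained ResNet34 weights, and ASM propagation built into the model so reconstructions can be supervised at multiple depths. The PyTorch sketch below illustrates both under stated assumptions; the trunk truncation, the asm_kernel helper, the loss form, and all shapes and hyperparameters are hypothetical, not MD-Holo's actual architecture.

```python
import torch
import torch.nn as nn
import torchvision.models as models

def asm_kernel(n, pitch, wavelength, z, device="cpu"):
    """Precompute the angular-spectrum transfer function for distance z."""
    f = torch.fft.fftfreq(n, d=pitch, device=device)
    fx, fy = torch.meshgrid(f, f, indexing="xy")
    arg = 1.0 / wavelength**2 - fx**2 - fy**2
    kz = 2.0 * torch.pi * z * torch.sqrt(arg.clamp(min=0.0))
    zero = torch.zeros_like(kz, dtype=torch.complex64)
    return torch.where(arg > 0, torch.exp(1j * kz), zero)

class ResNetEncoder(nn.Module):
    """Encoder trunk initialized from pre-trained ResNet34, as the abstract
    describes; expects a 3-channel input and drops the pooling/fc head."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
        self.trunk = nn.Sequential(*list(backbone.children())[:-2])

    def forward(self, x):
        return self.trunk(x)

def multi_depth_loss(phase_hologram, targets, kernels):
    """Propagate a phase-only hologram to each depth with the precomputed
    ASM kernels and penalize the amplitude error plane by plane."""
    field = torch.exp(1j * phase_hologram)
    loss = 0.0
    for target, H in zip(targets, kernels):
        recon = torch.fft.ifft2(torch.fft.fft2(field) * H).abs()
        loss = loss + torch.mean((recon - target) ** 2)
    return loss / len(targets)
```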