KEYWORDS: Holograms, 3D modeling, OpenGL, Light sources and illumination, Computer generated holography, 3D image reconstruction, Complex amplitude, 3D displays, Holography
Holographic three-dimensional (3D) display is one of the current research hotspots in the field of true 3D display. It records and reconstructs the wavefront information of a 3D scene and is able to provide all the depth cues required by the human eye. The point-source method is the most accurate way to calculate computer-generated holograms (CGHs) for holographic 3D display. However, it usually requires a large amount of computation, especially for full-parallax CGHs, which involve rendering the 3D scene from each perspective and superposing the complex amplitudes of all point sources at every pixel on the hologram plane. Serial calculation on central processing units (CPUs) is time-consuming and can hardly meet practical demands. In this paper, OpenGL shaders are utilized to realize fast calculation of CGHs. Taking advantage of the highly parallel architecture of the graphics processing unit (GPU), OpenGL shaders are used to process the data of 3D objects, compute the amplitudes of the sampling points, and implement superposition and phase extraction in parallel. Phase-only holograms are obtained by this method, and images of the scene are reconstructed from the generated holograms to verify its correctness. Comparing the time taken by MATLAB and by OpenGL to generate holograms of three different 3D models under the same conditions shows that the calculation method based on OpenGL shaders proposed in this paper significantly shortens the computation time; the computation speed is increased by up to 600 times. It is expected to achieve real-time calculation of full-parallax CGHs and promote practical applications of holographic 3D display.
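For illustration, the per-pixel computation that the shaders parallelize can be sketched as a CPU reference in Python/NumPy. This is a minimal sketch of point-source superposition followed by phase extraction (illustrative only, not the authors' shader code); `points`, `amps` and the geometry parameters are hypothetical inputs.

```python
import numpy as np

def point_source_hologram(points, amps, pitch, wavelength, nx, ny):
    """Superpose spherical waves from object points and keep only the phase.

    points: (M, 3) array of object-point coordinates (meters)
    amps:   (M,)   array of point amplitudes
    pitch:  hologram pixel pitch (meters)
    """
    k = 2 * np.pi / wavelength
    # Hologram-plane pixel coordinates (hologram placed at z = 0)
    x = (np.arange(nx) - nx / 2) * pitch
    y = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(x, y)

    field = np.zeros((ny, nx), dtype=np.complex128)
    for (px, py, pz), a in zip(points, amps):
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += a / r * np.exp(1j * k * r)   # spherical-wave contribution

    return np.angle(field)                    # phase-only hologram in [-pi, pi]
```

On a GPU, each hologram pixel (one element of `field`) is computed independently, which is what makes the shader-based parallelization effective.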
KEYWORDS: OpenGL, 3D displays, Signal attenuation, Bone, 3D modeling, Virtual reality, Visualization, Data transmission, Data processing, Graphics processing units, Light
In recent years, with the rapid development of emerging technologies such as virtual reality (VR) and augmented reality (AR), 3D display technology has attracted widespread attention. As a promising true 3D display technology, multi-layer 3D display is not only able to reproduce the light field, provide complete depth cues, and restore the real scene, but also offers an excellent viewing and interactive experience, and is widely used in virtual reality, medical imaging, and game development. However, although multi-layer 3D display technology has made significant progress in displaying static scenes, it still faces challenges in displaying dynamic 3D scenes. To overcome these challenges, this study adopts OpenGL as the core technology to achieve dynamic drawing, real-time updating, and rendering of graphics and models. At the same time, CUDA is used, exploiting the parallel computing capability of the GPU, to achieve high-frame-rate light field decomposition, reaching 16.7 frames per second for dynamic scenes at 1920×1080 resolution. This provides users with a more stunning and realistic viewing experience. These innovative methods promote the further development of multi-layer 3D display technology and provide new possibilities and technical support for the display of dynamic 3D scenes. The results not only help to improve the user experience, but also open up a new research direction for the development of virtual reality technology.
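As an illustration of what a light field decomposition of this kind computes, the following is a minimal 1-D, two-layer sketch that fits the layers' log-transmittances by least-squares gradient descent (an assumed attenuation-layer model and a generic solver, not necessarily the decomposition or the CUDA kernels used in this work).

```python
import numpy as np

def decompose_two_layers(L, d, n_iter=200, lr=0.5):
    """Fit a 1-D two-layer attenuation display to a target light field.

    L[v, x] is the desired intensity (in (0, 1]) of the ray leaving front-layer
    pixel x in view v; the same ray crosses the rear layer at pixel x + d*v
    (clamped to the panel). In the log domain the multiplicative layer model
    becomes linear, so plain gradient descent on the squared error suffices.
    """
    n_views, n_pix = L.shape
    target = np.log(np.clip(L, 1e-4, 1.0))
    a = np.zeros(n_pix)      # log-transmittance of the front layer
    b = np.zeros(n_pix)      # log-transmittance of the rear layer
    idx = np.clip(np.arange(n_pix)[None, :] + d * np.arange(n_views)[:, None],
                  0, n_pix - 1).astype(int)
    cnt = np.zeros(n_pix)                       # rays hitting each rear pixel
    np.add.at(cnt, idx, 1.0)
    for _ in range(n_iter):
        resid = a[None, :] + b[idx] - target    # per-ray reconstruction error
        grad_b = np.zeros(n_pix)
        np.add.at(grad_b, idx, resid)
        a = np.minimum(a - lr * resid.sum(axis=0) / n_views, 0.0)
        b = np.minimum(b - lr * grad_b / np.maximum(cnt, 1.0), 0.0)
    return np.exp(a), np.exp(b)                 # layer transmittances in (0, 1]
```

Because every ray's residual can be evaluated independently, this kind of update maps naturally onto GPU threads, which is what enables the high frame rates reported above.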
Among current true 3D display technologies, multi-layer 3D displays based on the principle of the compressive light field have the advantages of high resolution, simple structure, and faithful restoration of depth cues, demonstrating enormous research value and application potential. In recent years, multi-layer 3D displays have attracted increasing attention from researchers and some progress has been made in improving their performance. However, there are still limitations, such as the color deviation issue, which causes unnatural colors in the reconstructed scene. In this paper, we propose using a customized look-up table (LUT) to alleviate the color deviation problem of multi-layer displays. For each display layer, we measured the response curves of the RGB channels at different input gray levels. We then compared them with those of a commercial standard display, so that each value within the gray range of the three channels could be corrected to a target output response, and the corrected values were used to build the look-up table. Using the customized LUT, we successfully corrected the color deviation in our multi-layer display system. Finally, we demonstrated a 3D scene with natural colors, proving the effectiveness of our method in correcting the color deviation of multi-layer light field displays.
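A minimal sketch of the LUT construction, assuming hypothetical per-channel response measurements `measured_resp` and a reference response `target_resp` sampled at each of the 256 input gray levels (monotonic responses are assumed):

```python
import numpy as np

def build_channel_lut(measured_resp, target_resp):
    """Build a 256-entry LUT so that a layer's output matches the reference.

    measured_resp[g]: measured luminance of this layer/channel for input gray g
    target_resp[g]:   luminance of the reference (standard) display for gray g
    Returns lut such that displaying lut[g] on the layer approximates target_resp[g].
    """
    gray = np.arange(256)
    # For each desired target luminance, find the input gray level whose
    # measured response is closest (responses assumed monotonically increasing).
    lut = np.interp(target_resp, measured_resp, gray)
    return np.clip(np.round(lut), 0, 255).astype(np.uint8)

# Usage: one LUT per RGB channel per display layer, applied before display:
# corrected_channel = lut[image_channel]
```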
After decades of development, artificial neural networks have become one of the most important research directions in artificial intelligence, with a wide range of applications and great value in computer vision, natural language processing, and other fields. Today, most applied artificial neural networks run on von Neumann electronic hardware. As the semiconductor process approaches its physical limit, performance growth encounters bottlenecks and the power consumption problem is difficult to solve, limiting the application and further development of deep learning. Optical neural networks provide a way to break through this bottleneck thanks to their high speed, high parallelism, and low power consumption. At present, it is difficult for most optical neural networks to extend the network depth, which limits their performance. In this paper, a multilayer optoelectronic hybrid convolutional neural network with an optical 4f-system recurrent structure is proposed. The electronic convolutional layer is replaced by an optical convolutional layer based on the 4f system, and the depth of the neural network is extended by the recurrent structure of the 4f system to improve its performance. Experiments show that the recognition accuracy of the proposed hybrid neural network on the CIFAR-10 dataset is close to that of a corresponding electronic neural network. This work provides a possible way to build deeper optoelectronic hybrid convolutional neural networks for dealing with complicated problems.
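The optical convolution performed by a 4f system amounts to a Fourier-plane multiplication; the following minimal numerical analogue (not the authors' optical implementation) illustrates the operation that the optical layer replaces, and how a recurrent pass through the same system extends depth.

```python
import numpy as np

def conv_4f(image, kernel):
    """Numerical analogue of a 4f optical convolution layer.

    The first lens Fourier-transforms the input field, a mask in the Fourier
    plane multiplies it by the kernel's transfer function, and the second lens
    transforms back -- i.e. a circular convolution via FFT.
    """
    H = np.fft.fft2(kernel, s=image.shape)      # kernel padded to image size
    return np.real(np.fft.ifft2(np.fft.fft2(image) * H))

# A recurrent use of the same 4f system simply feeds the (nonlinearly
# processed) output back as the next input, e.g.
#   x = conv_4f(np.maximum(x, 0.0), kernel)
# repeated for several passes before the electronic classification layers.
```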
The principle of computer-generated holograms (CGHs) for holographic displays is to use the computer to simulate the propagation of light and obtain the wavefront information of an object on the hologram plane. In recent years, algorithms for CGHs based on physical diffraction models have been gradually developed and improved. Hologram calculation methods based on points, lines, polygons, and layers have been proposed, named the point-source method, the wireframe method, the polygon method, and the layer method, respectively. All four methods simulate and accumulate the physical diffraction process of their elements, i.e. points, lines, polygons, and layers. The numbers of elements differ greatly among the methods, and the reconstructed images have their own features; the type and number of elements affect the calculation speed and reconstruction quality of the hologram. This paper introduces the principles and research status of these four methods and analyzes their characteristics. The total calculation time and the average per-element calculation time of the four methods are compared through numerical experiments using the same model, and the calculation speed and reconstructed image quality of the different methods are evaluated by the equivalent-sample-point method. This paper aims to provide guidance and a reference for future research on hologram calculation.
The mass spectrometer is one of the most important instruments in the field of modern analysis. Despite efforts to increase efficiency, it remains a challenge to deploy convolutional neural networks in mass spectrometers due to tight power budgets. In this paper, we propose a hybrid optical-electronic convolutional neural network to achieve fast and accurate classification and identification of mass spectra. The optical convolutional layer is realized by a folded 4f system. Our prototype with a single convolutional layer achieves 96.5% classification accuracy on an experimentally acquired lipid dataset, and a more complicated prototype with one additional fully-connected layer achieves 100% accuracy. The proposed hybrid optical-electronic convolutional neural network might enable non-professionals to analyze mass spectra without accumulating experimental experience or performing complicated calculations.
In recent years, three-dimensional (3D) display has received widespread attention. Light field display technology based on multi-layer translucent structures enables observers to directly perceive 3D scenes without wearing any auxiliary equipment, and has the advantages of high resolution and low cost. The use of liquid crystal panels as the translucent structures makes dynamic 3D display possible. However, interactive 3D display can hardly be achieved due to the unacceptably long time needed to generate each frame of a 3D animation. In this paper, we reduce the time consumed per frame by optimizing the acquisition of the four-dimensional light field and the calculation of the multi-layer LCD images. With the help of the powerful rendering capabilities of OpenGL, we obtain the light field information of 3D scenes in less than 0.5 s. The light field is rapidly decomposed into multiple LCD images using the parallel computing of graphics processing units. Human-computer interaction is realized with a Kinect sensor. A 3D display system based on multi-layer LCDs is built, the information flow between the various components of the system is created, and interactive 3D display is implemented.
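Light-field acquisition of this kind typically renders the scene from a regular grid of viewpoints; the sketch below generates OpenGL-style view matrices for such a grid (hypothetical spacing and viewing distance, not the exact camera configuration used here).

```python
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Build a 4x4 OpenGL-style view matrix looking from eye toward target."""
    eye, target, up = map(np.asarray, (eye, target, up))
    f = target - eye
    f = f / np.linalg.norm(f)
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)
    u = np.cross(s, f)
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye
    return view

def viewpoint_grid(n_u, n_v, spacing, distance):
    """View matrices for an n_u x n_v camera grid sampling the 4D light field."""
    views = []
    for j in range(n_v):
        for i in range(n_u):
            eye = ((i - (n_u - 1) / 2) * spacing,
                   (j - (n_v - 1) / 2) * spacing,
                   distance)
            views.append(look_at(eye, target=(0.0, 0.0, 0.0)))
    return views
```

Each view matrix drives one off-screen OpenGL render pass; the resulting images together sample the 4D light field that is then decomposed into the LCD layer images.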
Three-dimensional (3D) display technology, which aims at presenting near-realistic 3D images to the observer without any auxiliary devices, has drawn great attention from both academia and industry in recent years. The 3D display based on multi-layer translucent structures is a new parallax-based 3D display model. Compared with conventional parallax barriers and integral imaging, it can effectively maintain the light-energy utilization and image resolution of the system while expanding the screen depth, and can display a realistic virtual 3D scene. In this paper, we implement a flat 3D display using multi-layer translucencies. Based on our flat 3D display prototype, we further extend it to a spatial 3D display using the mirror faces of a square pyramid, which allows observers to see virtual 3D objects in the air from different directions.
An optical-electronic integrated neural co-processor plays a vital part in optical neural networks and is mainly realized by optical interconnects. Because of the accuracy requirement and the long-term goal of integration, optical interconnects should be efficient and compact. Traditional solutions for optical interconnects use either holograms recorded in crystals or zone plates based on the law of Fresnel diffraction. However, the holographic method cannot meet the efficiency requirement, and zone plates are too bulky to allow miniaturization of the optical neural unit. Thus, this paper aims to find a replacement for the holographic method or the zone plate with sufficient diffraction efficiency and a smaller size. Metasurfaces are composed of subwavelength-spaced phase shifters at the interface of a medium. They allow unprecedented control of light properties and enable versatile functionalities in a planar structure. In this paper, a nanostructure is presented for optical interconnects, and the light-splitting ability and simulated crosstalk of the nanostructure and a zone plate are compared.
Photomultiplier tubes (PMTs) are the most common photoelectric conversion apparatus used as photon counters. Because of the sensitivity of PMTs to interference, calibration is necessary when PMTs are applied. Traditional calibration solutions are either based on the inverse square law of illumination or use light-emitting diodes (LEDs) as standard light sources. However, these solutions require rigorous experimental techniques, and the emission spectrum of LEDs does not cover the entire detection spectrum. In this paper, a calibration method is presented that uses a customized standard light source able to provide a full spectrum of weak light, from the dark-count level to the saturation level of the PMTs. The photon counter in a light-shielding cavity is connected, via an optical fiber, to the customized standard light source equipped with an intensity detector. The calibration process is discussed, and experimental results with a chemical reference substance are also presented for comparison.
Visible light communication (VLC) based on light-emitting diodes has been regarded as an effective complement to radio-frequency signal transmission. The color filter in a VLC system plays a pivotal role in boosting the signal-to-noise ratio. In this paper, a tri-band color transmission filter whose bandwidths are consistent with the LED's roughly 30 nm bandwidth is designed based on guided-mode resonance, incorporating a sub-wavelength aluminum grating on a slab dielectric waveguide made of titanium dioxide on a silica substrate. The parameters of the grating structure, including the grating period, duty cycle, grating thickness, and waveguide thickness, are optimized by employing a particle swarm optimization toolbox. The far-field spectrum is calculated by rigorous coupled-wave analysis to verify the effectiveness of the designed filter. The center wavelengths of the three transmission bands are 440 nm, 530 nm, and 630 nm, and the full-width-at-half-maximum (FWHM) bandwidths of the three bands are about 30 nm, consistent with the LED bandwidth.
Due to their low energy consumption, high efficiency, and fast switching speed, light-emitting diodes (LEDs) have been used as a new light source in optical wireless communication. To ensure uniform lighting and a uniform signal-to-noise ratio (SNR) during data transmission, diffractive optical elements (DOEs) can be employed as optical antennas. Different from lasers, LEDs have low temporal and spatial coherence, and the impact of this on the far-field diffraction patterns of DOEs remains unclear. Thus, mathematical models of the far-field diffraction intensity for an LED with a finite spectral bandwidth and source size are first derived in this paper. Then the relation between the source size and the uniformity of the top-hat beam profile is simulated for LEDs with and without the spectral bandwidth taken into account. The results indicate that when the size of the LED is much smaller than that of the reshaped beam, the uniformity of the reshaped beam obtained with a light source of finite spectral bandwidth is significantly better than that obtained with monochromatic light. However, once the size exceeds a certain threshold, the uniformity of the reshaped beams of the two LED models is almost the same, and the influence of the spectral bandwidth can be ignored. Finally, the reshaped beam profiles are measured with a CCD camera for LED areas of 0.5×0.5 mm² and 1×1 mm², and the experimental results agree with the simulations.
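One common way to write such a model (a generic incoherent-superposition form included for illustration; the exact expressions derived in the paper may differ) is
\[
I_{\mathrm{LED}}(\mathbf{u}) \;=\; \int S(\lambda)\!\int P(\mathbf{s})\, I_{\lambda}\big(\mathbf{u}-M\mathbf{s}\big)\,\mathrm{d}^{2}\mathbf{s}\,\mathrm{d}\lambda ,
\]
where \(I_{\lambda}\) is the far-field diffraction intensity produced by an on-axis monochromatic point source, \(S(\lambda)\) the normalized emission spectrum, \(P(\mathbf{s})\) the intensity distribution over the emitting area, and \(M\) the geometric factor mapping a source offset to a shift of the diffraction pattern. The finite source size thus smears the pattern spatially, while the spectral bandwidth smears it through the wavelength dependence of \(I_{\lambda}\).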
Many kinds of optimization algorithms have been applied to the design of diffractive optical elements (DOEs) for beam shaping. However, only the selected sampling points are controlled by these optimization algorithms, while the intensity distribution at the other points on the output plane can remain far from the ideal distribution. In our previous research, the non-selected points were well controlled by a hybrid algorithm merging hill-climbing with simulated annealing, but that hybrid algorithm is time-consuming. In this paper, a new hybrid algorithm merging the Gerchberg-Saxton algorithm with a gradient method is presented. Because an iterative algorithm is used, the optimization time is greatly reduced. The intensity distribution at the non-selected points as well as at the selected points is well controlled, and good beam-shaping performance is obtained. Finally, the experimental results demonstrate the good performance of this algorithm.
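A minimal sketch of such a hybrid loop for Fourier-domain beam shaping, with a hypothetical `target` intensity, a boolean mask `selected` marking the chosen sampling points, and a simple gradient step on those points (illustrative only, not the authors' exact formulation):

```python
import numpy as np

def gs_gradient_doe(target, selected, n_gs=50, n_grad=20, lr=0.1):
    """Hybrid Gerchberg-Saxton + gradient design of a phase-only DOE.

    target:   desired output-plane intensity (Fourier-domain beam shaping)
    selected: boolean mask of the sampling points to be refined by gradients
    Returns the DOE phase.
    """
    amp_in = np.ones_like(target)                   # uniform illumination
    amp_out = np.sqrt(target)
    phase = 2 * np.pi * np.random.rand(*target.shape)

    # Stage 1: standard GS iterations to control the whole output plane.
    for _ in range(n_gs):
        far = np.fft.fft2(amp_in * np.exp(1j * phase))
        far = amp_out * np.exp(1j * np.angle(far))  # impose target amplitude
        near = np.fft.ifft2(far)
        phase = np.angle(near)                      # keep phase-only constraint

    # Stage 2: gradient refinement of the selected sampling points only.
    for _ in range(n_grad):
        far = np.fft.fft2(amp_in * np.exp(1j * phase))
        err = np.zeros_like(far)
        err[selected] = (np.abs(far) ** 2 - target)[selected] * far[selected]
        # Gradient of the summed squared intensity error w.r.t. the DOE phase
        grad = 4 * np.imag(np.conj(amp_in * np.exp(1j * phase))
                           * np.fft.ifft2(err)) * target.size
        phase -= lr * grad / (np.abs(grad).max() + 1e-12)
    return phase
```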
Recently, interferometric null-testing with a computer-generated hologram has been proposed as a non-contact, high-precision solution to freeform optics metrology. However, the interferometric solution has some typical disadvantages, such as strong sensitivity to table vibrations or temperature fluctuations, which hinders its use outside strictly controlled laboratory conditions. Phase retrieval presents a viable alternative to interferometry for measuring wavefronts and can provide a more compact, less expensive, and more stable experimental setup. In this work, we propose a novel solution to freeform metrology based on phase retrieval and a computer-generated hologram (CGH). The CGH is designed by ray tracing so as to compensate for the aspheric aberration of the freeform element. With careful alignment of the CGH and the freeform element in the testing system, several defocused intensity images can be captured for phase retrieval. In this paper, experimental results for a freeform surface with an 18×18 mm² rectangular aperture (peak-to-valley asphericity of 193 μm) are reported and compared with the measurement results given by the interferometric solution, so as to evaluate the validity of our approach.
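A minimal sketch of the multi-plane phase-retrieval step, assuming angular-spectrum propagation between the defocus planes and a Gerchberg-Saxton-type amplitude constraint (illustrative only; the paper's algorithmic details may differ):

```python
import numpy as np

def angular_spectrum(field, dz, wavelength, pitch):
    """Propagate a complex field by distance dz with the angular-spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # evanescent terms dropped
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

def multiplane_phase_retrieval(images, dzs, wavelength, pitch, n_iter=100):
    """Recover the wavefront from defocused intensity images.

    images[m] is the measured intensity at defocus dzs[m] relative to a common
    reference plane; a Gerchberg-Saxton-type loop cycles through the planes,
    enforcing the measured amplitudes at each one.
    """
    field = np.sqrt(images[0]).astype(np.complex128)   # start at the first plane
    z = dzs[0]
    for _ in range(n_iter):
        for img, dz in zip(images, dzs):
            field = angular_spectrum(field, dz - z, wavelength, pitch)
            z = dz
            field = np.sqrt(img) * np.exp(1j * np.angle(field))  # amplitude constraint
    # Propagate back to the reference plane and return the retrieved phase.
    return np.angle(angular_spectrum(field, -z, wavelength, pitch))
```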
Fast optimization algorithms for the design of diffractive optical elements (DOEs) for beam shaping are often based on the fast Fourier transform (FFT), and the requirement of the sampling theorem must be met when the FFT is used to calculate the light intensity. Limited by fabrication technology, the pixel size of a DOE cannot be made arbitrarily small. For beam shaping in the Fresnel diffraction domain, given that the sampling interval of the DOE is fixed, if the diffraction distance is too short, the FFT algorithms no longer satisfy the sampling theorem and the beam-shaping results degrade. In this paper, the disadvantages of the FFT algorithms in the near Fresnel diffraction domain are discussed, and an area division method is proposed for DOE design. The simulation and experimental results show the validity of the proposed area division method.
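As background for this limitation (a standard sampling bound for the single-FFT Fresnel transform, not a description of the area division method itself): the Fresnel chirp \(\exp\!\big(i\pi x^{2}/\lambda z\big)\) sampled at the DOE pixel size \(\Delta x\) over an aperture of \(N\) pixels is adequately sampled only if its maximum local frequency stays below the Nyquist limit,
\[
\frac{N\Delta x}{2\lambda z} \;\le\; \frac{1}{2\Delta x}
\quad\Longrightarrow\quad
z \;\ge\; \frac{N\,\Delta x^{2}}{\lambda},
\]
so for shorter propagation distances the chirp is under-sampled and FFT-based propagation degrades, which is the near-Fresnel regime addressed here.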
To deal with the problem of phase-shifting interferometry with different unknown phase shifts, some specially designed algorithms have been put forward by previous researchers, such as the advanced iterative algorithm (AIA) and the principal component analysis (PCA) demodulation algorithm. This paper proposes a novel solution to this problem. Firstly, the captured phase-shifting interferograms are differenced to remove the additive background term. Then the trigonometric functions of the modulated phase are extracted with a blind signal separation method. Simulations and experiments have been carried out to validate the feasibility of the proposed algorithm, involving both open and closed fringe patterns. Comparison results with the AIA and PCA algorithms are also provided.
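The role of the differencing step can be seen from the standard interferogram model (a textbook identity, included here only for clarity): with \(I_n(x,y) = A(x,y) + B(x,y)\cos[\varphi(x,y)+\delta_n]\), the difference of two frames is
\[
I_n - I_m \;=\; -2B\,\sin\!\Big(\varphi+\tfrac{\delta_n+\delta_m}{2}\Big)\sin\!\Big(\tfrac{\delta_n-\delta_m}{2}\Big),
\]
so the additive background \(A\) is eliminated, and the differenced frames are sinusoidal functions of the modulated phase that the subsequent blind signal separation can disentangle.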
We propose a holographic 3D display system which can produce images with adjustable viewing parameters and eliminated zero-order interruption. The 3D scene is generated with a 3D CAD tool, and the point-source algorithm is used to generate the holograms. A two-step model is introduced in the computation to generate precise Fourier holograms. A phase-only spatial light modulator (SLM) is used in the optical reconstruction, which can replay clear images of 3D diffusive objects. During optical reconstruction, the viewing angle and image size of the system can be adjusted by changing the parameters of the replay lens. A filter is introduced in the replay system to eliminate the zero-order interruption and increase the 3D image quality. Optical experiments are performed, and the results show that the proposed holographic display system can produce noiseless 3D image reconstructions.
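For orientation, the standard Fourier-hologram replay relations behind this adjustability (generic textbook relations, not measured values of this system) are
\[
D \;\approx\; \frac{\lambda f}{p},\qquad \theta \;\approx\; 2\arcsin\!\Big(\frac{\lambda}{2p}\Big),
\]
where \(p\) is the SLM pixel pitch, \(f\) the focal length of the replay lens, \(D\) the lateral extent of the replay field in the focal plane, and \(\theta\) the full diffraction angle of the SLM. Since the space-bandwidth product of the SLM is fixed, enlarging \(f\) increases the image size at the cost of the effective viewing angle, which is the trade-off exploited when changing the replay-lens parameters.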
KEYWORDS: Remote sensing, Data storage, Holography, 3D optical data storage, Computer programming, Compact discs, Digital video discs, Telecommunications, Numerical simulations, Precision measurement
To handle various error patterns in the holographic data storage (HDS) channel, including random errors, burst errors, and inhomogeneously distributed errors, a three-dimensional error correction with matched interleaving (3DEC-MI) scheme is proposed in this paper. The 3DEC-MI scheme combines the advantages of the three-dimensional error correction scheme and the matched interleaving scheme, makes full use of prior knowledge of the error patterns in the HDS channel, distributes errors more uniformly, and decodes data iteratively in three dimensions. It is able to eliminate the influence of the non-uniform distribution of errors within a page and across pages, overcome the effects of burst errors, correct random errors, and effectively reduce the symbol error rate (SER) of the HDS channel.
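To illustrate the general role of interleaving (a generic block interleaver, not the matched interleaver designed in this work), the sketch below shows how a burst of channel errors is scattered after deinterleaving:

```python
import numpy as np

def block_interleave(symbols, rows, cols):
    """Write symbols row by row into a rows x cols block, read column by column."""
    assert len(symbols) == rows * cols
    return np.asarray(symbols).reshape(rows, cols).T.reshape(-1)

def block_deinterleave(symbols, rows, cols):
    """Inverse of block_interleave."""
    return np.asarray(symbols).reshape(cols, rows).T.reshape(-1)

# A burst of consecutive channel errors hits adjacent interleaved symbols, but
# after deinterleaving those errors end up roughly `cols` positions apart, so
# each error-correction codeword sees only a few of them.
data = np.arange(24)
tx = block_interleave(data, rows=4, cols=6)
tx[5:9] = -1                                   # burst error on the channel
rx = block_deinterleave(tx, rows=4, cols=6)
print(np.where(rx == -1)[0])                   # corrupted positions, now scattered
```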
KEYWORDS: Holography, Diffraction, Sensors, Point spread functions, Holograms, Spatial light modulators, Optical simulations, Data storage, Modulation, Holographic data storage systems
The wavelength and defocus margins of a collinear holographic data storage system are theoretically analyzed based on the first Born approximation and scalar diffraction theory. Explicit expressions are presented for the decay of the diffracted signal at the center of the detector plane with the shift of the reading wavelength and with the defocus of the disc. The expressions predict that the defocus margin is independent of the media thickness, while a thicker disc leads to a narrower wavelength margin. Simulation results show that the wavelength margin of the collinear holographic scheme is larger than that of the conventional 2-axis holographic scheme. The influences of the properties of the reference pattern on both margins are also discussed.
KEYWORDS: Signal to noise ratio, Spatial light modulators, Sensors, Data modeling, Data storage, Holography, Volume holography, Signal detection, Detector arrays, Holographic data storage systems
To compensate for misregistration between a detector array and a spatial light modulator in page-oriented volume holographic data storage, a method based on a three-pixel model is proposed against sub-pixel misalignment. Several methods for pixel-mismatch compensation are reviewed. The quadratic two-pixel method is inapplicable when the local shift is negative or the size of the aperture is relatively small. The inter-pixel crosstalk model is revised and an improved three-pixel model is developed, which can be used to compensate arbitrarily misaligned data pages. The compensation method uses prior information about the pixels on the input spatial light modulator (SLM), and recursive solutions are carried out to recover the real values of the SLM pixels. Both simulation and experimental results show that the signal-to-noise ratio (SNR) can be approximately doubled by using the compensation method based on the three-pixel model. The proposed method is appropriate for both positive and negative pixel shifts and has an equalization-like effect, which effectively improves the SNR.
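As a rough 1-D illustration of recursive recovery under a three-pixel crosstalk model (hypothetical weights and a dark-border boundary assumption, not the model derived in the paper):

```python
import numpy as np

def detect_three_pixel(x, w):
    """Simulate detector readings: each detector pixel mixes three neighboring
    SLM pixels with weights w = (w_left, w_center, w_right)."""
    xp = np.pad(x.astype(float), 1)                  # dark (zero) page borders
    return w[0] * xp[:-2] + w[1] * xp[1:-1] + w[2] * xp[2:]

def recover_recursive(d, w, x0):
    """Recover SLM pixel values from readings d, given the first pixel x0 as
    prior information; requires w[2] != 0 and a dark border left of the page."""
    n = len(d)
    x = np.zeros(n)
    x[0] = x0
    x[1] = (d[0] - w[1] * x[0]) / w[2]               # left neighbor is dark
    for k in range(1, n - 1):
        x[k + 1] = (d[k] - w[0] * x[k - 1] - w[1] * x[k]) / w[2]
    return x

# Hypothetical crosstalk weights; in practice they would be derived from the
# sub-pixel shift and the aperture of the imaging system.
w = (0.1, 0.7, 0.2)
x_true = np.random.randint(0, 2, 64).astype(float)
d = detect_three_pixel(x_true, w)
x_rec = recover_recursive(d, w, x0=x_true[0])
print(np.allclose(x_rec, x_true))                    # exact in the noise-free case
```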
A cross-shaped aperture is proposed for the holographic data storage system (HDSS). Based on a non-symmetric HDSS model, numerical simulations are carried out to compare the sensitivity of the cross-shaped aperture to pixel shift, magnification error, and noise level with that of the ordinary square aperture. The simulation results show that an equivalent or lower bit error rate can be achieved with the optimized cross-shaped aperture than with the square aperture, while the area of the cross-shaped aperture is 20 percent smaller than that of the corresponding square aperture. Thereby the multiplexing spacing can be reduced and the areal density can be increased in the HDSS. Experimental results on the performance of the cross-shaped aperture from a custom-built HDSS are presented.