Differential artery-vein analysis is valuable for early detection of diabetic retinopathy (DR) and other eye diseases. As an emerging optical coherence tomography (OCT) imaging modality, OCT angiography (OCTA) provides capillary-level resolution for accurate examination of retinal vasculature. However, differential artery-vein analysis in OCTA, particularly in the macular region, where blood vessels are small, is challenging. In coordination with an automatic vessel tracking algorithm, we report here the feasibility of using near infrared OCT oximetry to guide artery-vein classification in OCTA of the macular region.
KEYWORDS: In vivo imaging, Retina, Super resolution, Retinal scanning, Rods, Information operations, Line scan image sensors, Spatial resolution, Video, Physiology
Rod-dominated transient retinal phototropism (TRP) has been observed in freshly isolated retinas, promising a noninvasive biomarker for objective assessment of retinal physiology. However, in vivo mapping of TRP is challenging due to its subcellular signal magnitude and fast time course. We report here a virtually structured detection-based super-resolution ophthalmoscope to achieve subcellular spatial resolution and millisecond temporal resolution for in vivo imaging of TRP. Spatiotemporal properties of in vivo TRP were characterized corresponding to variable light intensity stimuli, confirming that TRP is tightly correlated with early stages of phototransduction.
Rod-dominated transient retinal phototropism (TRP) has been observed in freshly isolated retinas, promising a noninvasive biomarker for high resolution assessment of retinal physiology. However, in vivo mapping of TRP is challenging due to its fast time course and subcellular signal magnitude. By developing a line-scanning, virtually structured detection-based super-resolution ophthalmoscope, we report here in vivo observation of TRP in frog retina. The time course and magnitude of TRP were characterized in vivo using variable light stimulus intensities.
We demonstrated the feasibility of using a holographic waveguide for eye tracking. A custom-built holographic waveguide, a 20 mm x 60 mm x 3 mm flat glass substrate with integrated in- and out-couplers, was used for the prototype development. The in- and out-couplers, photopolymer films with holographic fringes, induced total internal reflection in the glass substrate. Diffractive optical elements were integrated into the in-coupler to serve as an optical collimator. The waveguide captured images of the anterior segment of the eye right in front of it and guided the images to a processing unit distant from the eye. The vector connecting the pupil center (PC) and the corneal reflex (CR) of the eye was used to compute eye position in the socket. An eye model, made of a high quality prosthetic eye, was used for prototype validation. The benchtop prototype demonstrated a linear relationship between the angular eye position and the PC/CR vector over a range of 60 horizontal degrees and 30 vertical degrees at a resolution of 0.64-0.69 degrees/pixel by simple pixel count. The uncertainties of the measurements at different angular positions were within 1.2 pixels, which indicated that the prototype exhibited a high level of repeatability. These results confirmed that holographic waveguide technology could be a feasible platform for developing a wearable eye tracker. Further development can lead to a compact, see-through eye tracker that allows continuous monitoring of eye movement during real-life tasks and thus benefits the diagnosis of oculomotor disorders.
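For illustration, a minimal sketch of how the PC/CR vector can be converted to angular eye position is given below, assuming the linear relationship reported above; the calibration gain, reference vector, and function names are hypothetical, not taken from the prototype software.

# Illustrative sketch (not the authors' code): mapping the pupil-center /
# corneal-reflex (PC/CR) vector to angular eye position, assuming a linear
# relationship and a hypothetical calibration gain of ~0.67 degrees per pixel
# (within the 0.64-0.69 degrees/pixel range quoted above).

import numpy as np

DEG_PER_PIXEL_H = 0.67  # assumed horizontal calibration gain (deg/pixel)
DEG_PER_PIXEL_V = 0.67  # assumed vertical calibration gain (deg/pixel)

def eye_angle(pupil_center, corneal_reflex, reference_vector=(0.0, 0.0)):
    """Estimate horizontal/vertical gaze angles (degrees) from pixel coordinates.

    pupil_center, corneal_reflex: (x, y) pixel coordinates in the waveguide image.
    reference_vector: PC/CR vector measured at the primary (straight-ahead) position.
    """
    pc = np.asarray(pupil_center, dtype=float)
    cr = np.asarray(corneal_reflex, dtype=float)
    dx, dy = (pc - cr) - np.asarray(reference_vector, dtype=float)
    return dx * DEG_PER_PIXEL_H, dy * DEG_PER_PIXEL_V

# Example: a 15-pixel horizontal shift of the PC/CR vector maps to about 10 degrees.
print(eye_angle((120.0, 64.0), (105.0, 64.0)))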
Speckle formation is a limiting factor when using coherent sources for imaging and sensing, but can provide useful information about the motion of an object. Illumination sources with tunable spatial coherence are therefore desirable as they can offer both speckled and speckle-free images. Efficient methods of coherence switching have been achieved with a solid-state degenerate laser, and here we demonstrate a semiconductor-based degenerate laser system that can be switched between a large number of mutually incoherent spatial modes and few-mode operation.
Our system is designed around a semiconductor gain element and overcomes barriers presented by previous low spatial coherence lasers. The gain medium is an electrically pumped vertical external cavity surface emitting laser (VECSEL) with a large active area. The use of a degenerate external cavity enables either distributing the laser emission over a large (~1000) number of mutually incoherent spatial modes or concentrating the emission into a few modes by using a pinhole in the Fourier plane of the self-imaging cavity. To demonstrate the unique potential of spatial coherence switching for multimodal biomedical imaging, we use both low and high spatial coherence light generated by our VECSEL-based degenerate laser for imaging embryo heart function in Xenopus, an important animal model of heart disease. The low-coherence illumination is used for high-speed (100 frames per second) speckle-free imaging of dynamic heart structure, while the high-coherence emission is used for laser speckle contrast imaging of the blood flow.
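As context for the flow measurement, below is a minimal sketch of spatial laser speckle contrast processing under common assumptions (a 7x7 sliding window, a raw speckle frame as input); it illustrates the general technique rather than the authors' processing pipeline.

# Minimal sketch of spatial laser speckle contrast imaging (LSCI); the window
# size and the synthetic test frame are illustrative assumptions.

import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(raw_frame, window=7):
    """Local speckle contrast K = sigma / mean over a sliding window.

    Lower K indicates stronger blurring of the speckle pattern, i.e. faster flow.
    """
    img = raw_frame.astype(float)
    mean = uniform_filter(img, size=window)
    mean_sq = uniform_filter(img * img, size=window)
    var = np.clip(mean_sq - mean * mean, 0.0, None)  # guard against round-off
    return np.sqrt(var) / (mean + 1e-12)

# Example with a synthetic frame standing in for a raw speckle image.
frame = np.random.gamma(shape=1.0, scale=100.0, size=(256, 256))
K = speckle_contrast(frame)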
KEYWORDS: Super resolution, In vivo imaging, Spatial frequencies, Retinal scanning, Image resolution, Retina, Eye, Spatial resolution, Signal to noise ratio, Microscopy
High resolution is important for sensitive detection of subtle distortions of retinal morphology at an early stage of eye diseases. We demonstrate virtually structured detection (VSD) as a feasible method to achieve in vivo super-resolution ophthalmoscopy. A line-scanning strategy was employed to achieve a super-resolution imaging speed up to 127 frames/s with a frame size of 512×512 pixels. The proof-of-concept experiment was performed on anesthetized frogs. VSD-based super-resolution images reveal individual photoreceptors and nerve fiber bundles unambiguously. Both image contrast and signal-to-noise ratio are significantly improved due to the VSD implementation.
We present a high speed, phase-sensitive, line-scanning reflectance confocal interference microscope. We achieved rapid confocal imaging using a fast line-scan camera and quantitative phase imaging using off-axis digital holography on a 1D, line-by-line basis. In our prototype system, a He-Ne laser (~1.2 mW) was used to demonstrate the principle of operation. Using a 20 kHz line scan rate (1024 pixels per line scan), we achieved a video-rate frame rate of 20 Hz for 1024x500 pixel en-face confocal images (20 MHz total pixel rate). Using an objective lens with an NA of 0.65, we achieved axial and lateral resolutions of ~3.5 micrometers and ~0.8 micrometers, respectively. By z-stack imaging of a custom silicon target with a stepped structure, we confirmed that the axial sectioning of the interference microscope is similar to that of a traditional line-scan confocal microscope (our microscope with the reference arm blocked). The utility of phase-sensitive holographic detection in line-scan confocal imaging was demonstrated in two ways. First, using a custom axial height phantom fabricated by chrome deposition, we demonstrated variations in phase corresponding to heights in the 100 nm range with a contrast-to-noise ratio of ~31 dB. Second, we demonstrated digital refocusing of an out-of-focus holographic image. The mechanism of confocality in our line-scan system is 1D physical pinholing. Our ongoing work aims to add an additional mechanism of confocality by using low spatial coherence sources to impose interferometric pinholing.
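A minimal sketch of one way to demodulate a single off-axis interference line into amplitude and phase is shown below; the carrier position, filter width, and synthetic test line are assumptions for illustration, not the prototype's actual parameters.

# Illustrative sketch of line-by-line off-axis holographic demodulation: keep one
# sideband of the line's FFT, shift it to baseband, and inverse transform.

import numpy as np

def demodulate_line(line, carrier_bin, half_width):
    """Recover the complex field of one line scan from an off-axis interference line.

    line: 1D real-valued interference record (e.g., 1024 pixels).
    carrier_bin: index of the off-axis carrier peak in the FFT of the line.
    half_width: half-width (in bins) of the band-pass window around the carrier.
    """
    spectrum = np.fft.fft(line)
    sideband = np.zeros_like(spectrum)
    lo, hi = carrier_bin - half_width, carrier_bin + half_width + 1
    sideband[lo:hi] = spectrum[lo:hi]           # keep one sideband only
    sideband = np.roll(sideband, -carrier_bin)  # shift the carrier to baseband
    field = np.fft.ifft(sideband)
    return np.abs(field), np.angle(field)       # amplitude and phase along the line

# Example: synthetic line with a 100-cycle carrier modulated by a weak phase object.
x = np.arange(1024)
phase = 0.5 * np.exp(-((x - 512) / 50.0) ** 2)
line = 1.0 + np.cos(2 * np.pi * 100 * x / 1024 + phase)
amp, phi = demodulate_line(line, carrier_bin=100, half_width=40)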
A digital adaptive optics line-scanning confocal imaging (DAOLCI) system is proposed by applying digital holographic adaptive optics to a digital form of the line-scanning confocal imaging system. In DAOLCI, each line scan is recorded as a digital hologram, which gives access to the complex optical field from one slice of the sample through digital holography. This complex optical field contains both the information from one slice of the sample and the optical aberration of the system, thus allowing us to compensate for the effect of the optical aberration, which can be sensed from a complex guide star hologram. After numerical aberration compensation, the corrected optical fields of a sequence of line scans are stitched into the final corrected confocal image. In DAOLCI, a numerical slit is applied to realize confocality at the sensor end. The width of this slit can be adjusted to control the image contrast and speckle noise for scattering samples. DAOLCI dispenses with hardware such as the Shack–Hartmann wavefront sensor and deformable mirror, as well as the closed-loop feedback adopted in conventional adaptive optics confocal imaging systems, thus reducing optomechanical complexity and cost. Numerical simulations and proof-of-principle experiments are presented that demonstrate the feasibility of this idea.
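A schematic sketch of the core numerical steps is given below under simplifying assumptions: the aberration is modeled as a one-dimensional pupil phase estimated from a guide star field, and the numerical slit is modeled as a simple row mask. This is an illustration of the idea, not the published DAOLCI pipeline.

# Simplified sketch (not the published DAOLCI code): guide-star-based aberration
# compensation for one line-scan field, followed by a digital confocal slit.

import numpy as np

def correct_line(line_field, guidestar_field):
    """Remove the pupil-plane aberration estimated from a guide star field."""
    pupil_phase = np.angle(np.fft.fft(guidestar_field))        # aberration estimate
    spectrum = np.fft.fft(line_field)
    return np.fft.ifft(spectrum * np.exp(-1j * pupil_phase))   # conjugate-phase correction

def numerical_slit(camera_rows, center_row, half_width=2):
    """Keep only rows near the focused line image: a digital confocal slit.

    camera_rows: 2D complex field recorded for one line scan (rows x columns).
    Widening half_width trades confocal sectioning for lower speckle noise.
    """
    kept = camera_rows[center_row - half_width:center_row + half_width + 1, :]
    return kept.sum(axis=0)   # collapse the slit into one corrected image line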
KEYWORDS: Digital holography, Holograms, Optical fibers, 3D image processing, 3D metrology, 3D image reconstruction, Microscopy, Holography, Profiling, Particles
Three-dimensional profiling and tracking by digital holography microscopy (DHM) provide label-free and quantitative analysis of the characteristics and dynamic processes of objects, since DHM can record real-time data for microscale objects and produce a single hologram containing all the information about their three-dimensional structures. Here, we have utilized DHM to visualize suspended microspheres and microfibers in three dimensions, and record the four-dimensional trajectories of free-swimming cells in the absence of mechanical focus adjustment. The displacement of microfibers due to interactions with cells in three spatial dimensions has been measured as a function of time at subsecond and micrometer levels in a direct and straightforward manner. It has thus been shown that DHM is a highly efficient and versatile means for quantitative tracking and analysis of cell motility.
KEYWORDS: Digital holography, Holograms, 3D metrology, Particles, Holography, Profiling, 3D image processing, Microscopy, 3D image reconstruction, Optical tracking
Digital holographic microscopy (DHM) is a potent tool for three-dimensional imaging and tracking. We present a review of the state of the art of DHM for three-dimensional profiling and tracking, with emphasis on DHM techniques, reconstruction criteria for three-dimensional profiling and tracking, and their applications in various branches of science, including biomedical microscopy, particle imaging velocimetry, micrometrology, and holographic tomography, to name but a few. First, several representative DHM configurations are summarized and brief descriptions of DHM processes are given. Then we describe and compare the reconstruction criteria used to obtain three-dimensional profiles and four-dimensional trajectories of objects. Details of simulated and experimental evidence for DHM techniques and related reconstruction algorithms applied to particles, biological cells, fibers, etc., with different shapes, sizes, and conditions are also provided. The review concludes with a summary of techniques and applications of three-dimensional imaging and four-dimensional tracking by DHM.
We are developing adaptive optics systems for aberration corrections in retinal imaging based on digital holography.
Compared to existing technologies of adaptive optics, our systems do not have hardware components such as lenslet
arrays or deformable mirrors. Instead, wavefront sensing and correction are done by acquisition and numerical
manipulation of optical phase by digital holography, thereby substantially reducing hardware complexity and
introducing new imaging capabilities. Experimental results are presented to demonstrate the capabilities of this imaging system.
KEYWORDS: Fourier transforms, Integral transforms, Wave propagation, Systems modeling, Free space, Free space optics, Digital holography, Reconstruction algorithms, Convolution, Algorithm development
The linear canonical transform (LCT) is a parameterized linear integral transform that generalizes many well-known transforms, such as the Fourier transform (FT), the fractional Fourier transform (FRT), and the Fresnel transform (FST). These integral transforms are of great importance in wave propagation problems because they are solutions of the wave equation under a variety of circumstances. In optics, the LCT can be used to model paraxial free-space propagation and other quadratic-phase systems such as lenses and graded-index media. A number of algorithms have been presented to compute the LCT quickly. When they are used to compute the LCT, the sampling period in the transform domain depends on that in the signal domain. This drawback limits their applicability in some cases, such as color digital holography. In this paper, a fast-Fourier-transform-based direct integration algorithm (FFT-DI) for the LCT is presented. The FFT-DI is a fast computational method for direct integration (DI) of the LCT. It removes the dependency of the sampling period in the transform domain on that in the signal domain. Simulations and experimental results are presented to validate this idea.
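For reference, one common parameterization of the LCT is reproduced below; conventions vary, and this is not necessarily the convention adopted in the paper.

\[
  \mathcal{L}_{(a,b,c,d)}\{f\}(u) \;=\; \frac{1}{\sqrt{i\,b}}\,
  \int_{-\infty}^{\infty}
  \exp\!\left[\frac{i\pi}{b}\bigl(a t^{2} - 2 u t + d u^{2}\bigr)\right] f(t)\,\mathrm{d}t,
  \qquad ad - bc = 1,\; b \neq 0 .
\]

In this convention, \((a,b,c,d) = (0,1,-1,0)\) reduces the LCT to the Fourier transform, \((\cos\theta,\sin\theta,-\sin\theta,\cos\theta)\) gives the fractional Fourier transform of angle \(\theta\), and \((1,\lambda z,0,1)\) models Fresnel propagation over a distance \(z\), each up to a constant phase factor.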
Lensless Fourier transform digital holography has been widely employed in microscopic imaging. It enables quantitative phase analysis for both reflection and transmission objects, with the phase image obtained in the numerical reconstruction procedure. The in-focus reconstruction distance can be determined from the extremum of an autofocusing criterion function, which is commonly applied to find the in-focus amplitude image of the object; the reconstruction distance for the phase image is then taken to be equal to that for the amplitude image. When the object is a pure phase sample, such as a living cell, the minimum value of the autofocusing criterion function should be found to determine the in-focus reconstruction distance. In practice, however, the in-focus amplitude image is often not an ideal uniform bright field, so this method results in some deviation. In this contribution, two derivative-based criterion functions are applied to the phase image directly to accomplish in-focus phase contrast imaging, which is more intuitive and precise. In our experiments, a lensless Fourier transform digital holography setup is first established, and living cervical carcinoma cells are then imaged. The phase aberration is corrected by a two-step algorithm. The final autofocusing results verify the algorithm proposed in this paper.
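A minimal sketch of the general idea, applying a derivative-based sharpness criterion to a stack of phase reconstructions, is given below; the squared-gradient metric and the choice of a maximum as the extremum are illustrative assumptions, since the two specific criterion functions of the paper are not reproduced here.

# Illustrative sketch: a generic derivative-based autofocus criterion applied
# directly to reconstructed phase images at candidate reconstruction distances.

import numpy as np

def gradient_criterion(phase_image):
    """Sum of squared finite differences of the phase image (sharpness metric)."""
    gy, gx = np.gradient(phase_image)
    return np.sum(gx * gx + gy * gy)

def best_focus(phase_stack, distances):
    """Pick the distance whose phase image gives the largest criterion value.

    phase_stack: sequence of phase images reconstructed at the candidate distances.
    Whether the extremum is a maximum or minimum depends on the criterion chosen;
    a maximum of the squared-gradient metric is assumed here.
    """
    scores = [gradient_criterion(p) for p in phase_stack]
    return distances[int(np.argmax(scores))], scores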
The Rayleigh-Sommerfeld formula (RS) has proved accurate for evaluating diffraction of the optical field from a planar aperture. Thus the FFT-based direct integration method for the RS (FFT-DIRS) can provide a more exact reconstructed image from sampling points of the diffracted field of the object than the numerical method for the Fresnel formula (FR), which is an approximation of the RS. Although the FFT-DIRS has been proposed and studied in the literature, an important problem remains to be solved, namely the effect of sampling on it. Sampling of the object's diffracted field leads to a periodic or quasi-periodic shifting of the reconstructed image. If these spatial replicas overlap, the desired image cannot be recovered without aliasing noise. The overlapping period therefore plays an important role in employing the FFT-DIRS in practical applications. In this paper, a formula for this overlapping period is obtained through the relationship between the RS and the FR. The validity of this formula at different distances is then investigated with experimental results.
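As background, under the Fresnel approximation the replica spacing produced by sampling follows a standard result; the exact RS-based expression derived in the paper may differ.

\[
  X_{\mathrm{rep}} \;\approx\; \frac{\lambda z}{\Delta x},
\]

where \(\Delta x\) is the sampling pitch of the diffracted field, \(\lambda\) the wavelength, and \(z\) the reconstruction distance; the replicas do not overlap as long as the reconstructed image extent stays below this spacing.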
The expressions for the reconstructed field from the sampled diffracted wave produced by illuminating an object are derived using different diffraction integrals in digital holography. The numerical reconstruction methods that truncate and sample this field are compared in terms of overlapping quality, accuracy, pixel resolution, computation window, and speed. The fast Fourier transform (FFT)-based direct integration method for the Fresnel integral and the modified FFT-based direct integration method for the Rayleigh-Sommerfeld integral have similar overlapping quality and can flexibly control pixel resolution and computation window size. Meanwhile, the FFT-based angular spectrum method is superior to the FFT-based convolution method in accuracy and speed. Experimental results are presented to verify these conclusions.
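For concreteness, a minimal sketch of the FFT-based angular spectrum method mentioned above is given below; the grid size, pixel pitch, wavelength, and the simple suppression of evanescent components are placeholder assumptions, not parameters from the experiments.

# Minimal sketch of FFT-based angular spectrum propagation of a sampled
# monochromatic field; evanescent components are simply zeroed out.

import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, distance):
    """Propagate a sampled complex field by `distance` via the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    propagating = arg > 0                               # keep propagating waves only
    H = np.zeros_like(field, dtype=complex)
    H[propagating] = np.exp(2j * np.pi * distance * np.sqrt(arg[propagating]))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example with placeholder parameters: 633 nm wavelength, 6.45 um pixels, 5 mm distance.
hologram_field = np.ones((512, 512), dtype=complex)     # stands in for a recorded field
image = angular_spectrum_propagate(hologram_field, 633e-9, 6.45e-6, 5e-3)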