In Fourier-domain optical coherence tomography (FD-OCT), image reconstruction has been extensively studied. This paper addresses the trade-off between reconstruction time and image quality inherent to optimization-based methods by proposing an unsupervised deep learning-based approach. Different from existing learning-based methods, the proposed unsupervised method incorporates a neural network as an inverse solver and eliminates the need for large sets of training pairs. A proof-of-concept simulation was conducted, comparing our method with an iterative optimization technique based on stochastic gradient descent (SGD). Results show that the proposed method attains image quality only slightly below that of SGD while enabling real-time reconstruction at 0.008 s per B-scan (125 frames per second). In contrast, SGD took 0.32 s per B-scan, 40 times slower. This deep learning-based method has significant potential for real-time image reconstruction and display in future FD-OCT.
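A minimal sketch (not the authors' implementation) of the unsupervised idea described above: a network acts as the inverse solver by mapping measured spectra to A-lines, a simplified Fourier forward model maps the A-lines back to spectra, and a data-consistency loss is minimized so no ground-truth images are required. The network `ReconNet`, the cosine forward model, and all sizes are illustrative assumptions.

```python
import torch

n_k, n_z = 1024, 512                                     # spectral samples, depth pixels (assumed)
k = torch.linspace(0.0, 1.0, n_k)                        # normalized wavenumber axis
z = torch.arange(n_z, dtype=torch.float32)
A = torch.cos(2 * torch.pi * k[:, None] * z[None, :])    # simplified forward model: spectrum = A @ a_line

class ReconNet(torch.nn.Module):                         # toy inverse-solver network
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(n_k, 1024), torch.nn.ReLU(),
            torch.nn.Linear(1024, n_z))
    def forward(self, spectrum):
        return self.net(spectrum)

net = ReconNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
for spectra in torch.randn(100, 8, n_k).unbind(0):       # stand-in for batches of measured fringes
    a_lines = net(spectra)                               # predicted reflectivity profiles
    loss = torch.mean((a_lines @ A.T - spectra) ** 2)    # data-consistency loss, no labels needed
    opt.zero_grad(); loss.backward(); opt.step()
```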
Deep learning-based computer-generated holography (CGH) has recently demonstrated tremendous potential in three-dimensional (3D) displays and yielded impressive display quality. However, current CGH techniques are mostly limited to generating and transmitting holograms with a resolution of 1080p, which is far from the ultra-high resolution (16K+) required for practical virtual reality (VR) and augmented reality (AR) applications to support a wide field of view and large eye box. One major obstacle in current CGH frameworks lies in the limited memory available on consumer-grade GPUs, which cannot accommodate the generation of high-definition holograms. Moreover, existing hologram compression rates can hardly permit the transmission of high-resolution holograms over a 5G communication network, which is crucial for mobile applications. To overcome these challenges, we propose an efficient joint framework for hologram generation and transmission to drive the development of consumer-grade high-definition holographic displays. Specifically, for hologram generation, we propose a plug-and-play module that includes a pixel shuffle layer and a lightweight holographic super-resolution network, enabling current CGH networks to generate high-definition holograms. For hologram transmission, we present an efficient holographic transmission framework based on foveated rendering. In simulations, we achieved the generation and transmission of holograms with a 4K resolution for the first time on an NVIDIA GeForce RTX 3090 GPU. We believe the proposed framework could be a viable approach to the ever-growing data issue in holographic displays.
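A minimal sketch, under assumed channel counts, of the plug-and-play idea described above: a pixel-shuffle layer rearranges channel depth into spatial resolution, followed by a lightweight super-resolution head so an existing CGH network can output higher-definition phase-only holograms. `HoloSRHead` is an illustrative name, not the paper's network; with scale=2 the same module would upscale a 1080p output toward 4K.

```python
import torch
import torch.nn as nn

class HoloSRHead(nn.Module):
    def __init__(self, in_ch=64, scale=2):
        super().__init__()
        self.expand = nn.Conv2d(in_ch, in_ch * scale ** 2, 3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)            # (C*r^2, H, W) -> (C, r*H, r*W)
        self.refine = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1))              # single-channel phase-only output
    def forward(self, feat):
        # bound the output to [-pi, pi] so it can be used directly as a phase map
        return torch.pi * torch.tanh(self.refine(self.shuffle(self.expand(feat))))

feat = torch.randn(1, 64, 270, 480)                      # example low-resolution CGH feature map
hologram = HoloSRHead()(feat)                            # -> (1, 1, 540, 960), i.e. 2x finer phase map
```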
Dispersion compensation is an important topic in optical coherence tomography (OCT), since system- and sample-induced dispersion can blur the image and degrade the axial resolution. Common numerical compensation methods rely on manual selection of parameters, and there is no universally accepted standard for determining the dispersion-free state. In this work, we propose a method that automatically compensates for dispersion using the fractional Fourier transform (FrFT) and provides new insight into defining the sharpness metric. We exploit the sparsity of the image in the FrFT domain and find the optimal FrFT order by minimizing the corresponding L1-norm. The effectiveness and robustness of the proposed method are confirmed in both numerical simulations and experiments on human skin and retina.
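A minimal sketch of the order search described above: pick the fractional order whose transform of the fringe has the smallest L1-norm (i.e., is sparsest). The discrete fractional Fourier transform routine `frft` is assumed to be supplied by the user (e.g., a chirp-based implementation); it is not part of NumPy/SciPy, and the search bounds are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def sparsity_cost(order, fringe, frft):
    """L1-norm of the FrFT of one spectral fringe at the given fractional order."""
    return np.sum(np.abs(frft(fringe, order)))

def find_optimal_order(fringe, frft, lo=0.8, hi=1.2):
    """Search the fractional order around 1 (the ordinary FT) that yields the sparsest A-line."""
    res = minimize_scalar(sparsity_cost, args=(fringe, frft),
                          bounds=(lo, hi), method="bounded")
    return res.x
```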
In the measurement of tissues using Fourier-domain optical coherence tomography (FD-OCT), speckle patterns from dynamic and static components often exhibit distinct characteristics: the former can be reduced through incoherent averaging, while the latter cannot. However, in conventional Monte Carlo (MC) simulations of FD-OCT, the speckle patterns of dynamic and static regions cannot be distinguished because of the random spatial distribution of scattering events across the entire simulated phantom. To tackle this issue, we propose a hybrid phantom model for MC-based realistic simulation of speckle in FD-OCT. In simulations using the proposed model, static tissue within the 3D structure is modeled as a swarm of fixed particles loosely packed in the background medium. Once a photon is emitted into the static tissue model, it keeps moving until it encounters a fixed particle and undergoes scattering. On the other hand, the spatial distribution of scattering points in the dynamic medium is still assumed to be random, so the photon's step size is sampled from the wavelength-dependent scattering coefficient. Compared to conventional MC simulations, speckles simulated with the proposed model at different time points exhibit a higher spatial correlation in the static structures, which allows them to remain after incoherent averaging. In contrast, speckles in the dynamic component de-correlate across multiple simulations. Future work involves leveraging this method to simulate dynamic OCT and linking structural information with speckle patterns to solve inverse problems.
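A minimal numerical sketch of the two sampling rules described above: in the dynamic medium the free path is drawn from an exponential distribution set by the scattering coefficient, whereas in the static model the photon travels in a straight line until it reaches one of the fixed particles. The ray-marching step, particle radius, and distance cap are illustrative simplifications, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def step_dynamic(mu_s):
    """Exponentially distributed step size, s = -ln(U)/mu_s (dynamic medium)."""
    return -np.log(rng.random()) / mu_s

def step_static(pos, direction, particles, radius, ds=1e-4, max_dist=1.0):
    """March along the ray until the photon enters a fixed particle (static tissue)."""
    travelled = 0.0
    while travelled < max_dist:
        pos = pos + ds * direction
        travelled += ds
        if np.any(np.linalg.norm(particles - pos, axis=1) < radius):
            break
    return travelled
```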
Peripapillary atrophy (PPA), an aberrant retinal finding frequently present in older individuals or people with myopia, may indicate the severity of glaucoma or myopia. Effective segmentation of PPA in fundus images is therefore particularly beneficial for diagnosis. Deep learning is now frequently used for PPA segmentation. However, previous segmentation algorithms often confuse PPA with its neighboring tissue, the optic disc (OD), and report a PPA region even when no PPA is present in the fundus image. To address these problems, we propose an improved segmentation network based on multi-task learning that combines detection and segmentation of PPA. We analyze the shortcomings of widely used loss functions and define a modified one to guide the training of the network. We design a three-class segmentation task by introducing OD information, forcing the network to learn the characteristics that distinguish OD from PPA. Evaluation on a clinical dataset shows that our method achieves an average Dice coefficient of 0.8854 in PPA segmentation, outperforming UNet and TransUNet, two state-of-the-art methods, by 24.4% and 10.6%, respectively.
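A minimal sketch, not the paper's exact loss, of a multi-task objective that pairs a PPA-presence detection head with three-class (background / OD / PPA) segmentation. The cross-entropy-plus-Dice segmentation term and the weighting factor are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def dice_loss(prob, target_onehot, eps=1e-6):
    inter = (prob * target_onehot).sum(dim=(2, 3))
    union = prob.sum(dim=(2, 3)) + target_onehot.sum(dim=(2, 3))
    return 1.0 - ((2 * inter + eps) / (union + eps)).mean()

def multitask_loss(seg_logits, seg_labels, det_logits, det_labels, w_det=0.5):
    """seg_logits: (B,3,H,W); seg_labels: (B,H,W) in {0,1,2}; det_*: PPA present or not."""
    prob = seg_logits.softmax(dim=1)
    onehot = F.one_hot(seg_labels, num_classes=3).permute(0, 3, 1, 2).float()
    seg_term = F.cross_entropy(seg_logits, seg_labels) + dice_loss(prob, onehot)
    det_term = F.binary_cross_entropy_with_logits(det_logits, det_labels)
    return seg_term + w_det * det_term
```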
Current learning-based computer-generated holography (CGH) algorithms often adopt convolutional neural network (CNN)-based architectures. However, these CNN-based non-iterative methods mostly underperform state-of-the-art (SOTA) iterative algorithms such as stochastic gradient descent (SGD) in terms of display quality. Inspired by the global attention mechanism of the Vision Transformer (ViT), we propose a novel unsupervised autoencoder-based ViT for generating phase-only holograms. Specifically, for the encoding part, we use Uformer to generate the holograms. For the decoding part, we use the angular spectrum method (ASM) instead of a learnable network to reconstruct the target images. To validate the effectiveness of the proposed method, numerical simulations and optical reconstructions are performed to compare our proposal against both iterative algorithms and CNN-based techniques. In the numerical simulations, the PSNR and SSIM of the proposed method are 26.78 dB and 0.832, which are 4.02 dB and 0.09 higher than those of the CNN-based method, respectively. Moreover, the proposed method produces fewer speckles and exhibits higher display quality than other CGH methods in optical experiments. We suggest the improvement can be ascribed to the ViT's global attention mechanism, which is better suited to learning the cross-domain mapping from the image (spatial) domain to the hologram (Fourier) domain. We believe the proposed ViT-based CGH algorithm could be a promising candidate for future real-time high-fidelity holographic displays.
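A minimal sketch of the angular spectrum method used as the fixed (non-learnable) decoder: it numerically propagates the phase-only hologram to the target plane so a reconstruction loss can be computed against the target image. Wavelength, pixel pitch, and propagation distance are example values, not the paper's settings.

```python
import torch

def asm_propagate(phase, wavelength=520e-9, pitch=8e-6, z=0.2):
    """Propagate a phase-only hologram (H, W) over distance z via the angular spectrum method."""
    H, W = phase.shape
    field = torch.exp(1j * phase)                              # unit-amplitude, phase-only field
    fx = torch.fft.fftfreq(W, d=pitch)
    fy = torch.fft.fftfreq(H, d=pitch)
    FY, FX = torch.meshgrid(fy, fx, indexing="ij")
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    kz = 2 * torch.pi * torch.sqrt(torch.clamp(arg, min=0.0))
    transfer = torch.exp(1j * z * kz) * (arg > 0).to(torch.complex64)   # evanescent waves suppressed
    recon = torch.fft.ifft2(torch.fft.fft2(field) * transfer)
    return recon.abs()                                          # reconstructed amplitude at the target plane
```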
Iterative methods can provide high-quality image reconstruction for Fourier-domain optical coherence tomography (FD-OCT) by solving an inverse problem. Compared with regular IFFT-based reconstruction, a more accurate estimate can be obtained iteratively by integrating prior knowledge; however, this is often more time-consuming. To address the computational burden, we propose a fast iterative method for FD-OCT image reconstruction empowered by GPU acceleration. An iterative scheme is adopted, including a forward model and an inverse solver, and large-scale parallelism of OCT image reconstruction is performed over B-scans. We deployed the framework on an NVIDIA GeForce RTX 3090 graphics card, which enables parallel processing. Using the widely adopted toolkit PyTorch, the inverse problem of OCT image reconstruction is solved with the stochastic gradient descent (SGD) algorithm. To validate the effectiveness of the proposed method, we compare the computational time and image quality with other iterative approaches, including the ADMM, AR, and RFIAA methods. The proposed method provides a speed-up of roughly 1,500 times with image quality comparable to that of ADMM reconstruction. The results indicate the potential for high-quality real-time volumetric OCT image reconstruction via iterative algorithms.
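A minimal sketch (assumed forward model and hyper-parameters, not the authors' code) of batched iterative reconstruction with PyTorch: all A-lines of a B-scan are reconstructed in parallel on the GPU by minimizing a data-fidelity term plus an L1 prior with SGD, and the same pattern extends to batching whole B-scans.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
n_k, n_z, n_alines = 1024, 512, 500

k = torch.linspace(0.0, 1.0, n_k, device=device)
z = torch.arange(n_z, dtype=torch.float32, device=device)
A = torch.cos(2 * torch.pi * k[:, None] * z[None, :])         # simplified forward model

spectra = torch.randn(n_alines, n_k, device=device)           # stand-in for one B-scan of fringes
x = torch.zeros(n_alines, n_z, device=device, requires_grad=True)
opt = torch.optim.SGD([x], lr=1e-3, momentum=0.9)

for _ in range(200):                                           # all A-lines updated in parallel
    residual = x @ A.T - spectra                               # data-fidelity residual
    loss = (residual ** 2).mean() + 1e-3 * x.abs().mean()      # + L1 sparsity prior
    opt.zero_grad(); loss.backward(); opt.step()

b_scan = x.detach().abs()                                      # reconstructed B-scan (n_alines x n_z)
```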
In previous Monte Carlo (MC) studies modeling Fourier-domain optical coherence tomography (FD-OCT), results obtained at a single wavelength are often used to reconstruct the image despite FD-OCT's broadband nature. Here, we propose a novel image simulator for full-wavelength MC simulation of FD-OCT based on Mie theory, which combines the inverse discrete Fourier transform (IDFT) with a probability distribution-based signal pre-processing step to eliminate the excess noise in IDFT-based image reconstruction caused by scattering events that lack signals at certain wavelengths. Compared with the conventional method, the proposed simulator is more accurate and better preserves wavelength-dependent features.
Inpainting shadowed regions cast by superficial blood vessels in retinal optical coherence tomography (OCT) images is critical for accurate and robust machine analysis and clinical diagnosis. Traditional sequence-based approaches, such as propagating neighboring information to gradually fill in the missing regions, are cost-effective, but they produce less satisfactory results when dealing with larger missing regions and texture-rich structures. Emerging deep learning-based methods such as encoder-decoder networks have shown promising results in natural image inpainting tasks. However, they typically require long network training times and large datasets, which makes them difficult to apply to the often small medical datasets. To address these challenges, we propose a novel multi-scale shadow inpainting framework for OCT images that synergistically applies sparse representation and deep learning: sparse representation is used to extract features from a small number of training images for further inpainting and to regularize the image after multi-scale image fusion, while a convolutional neural network (CNN) is employed to enhance the image quality. During image inpainting, we divide preprocessed input images into different branches based on the shadow width to harvest complementary information from different scales. Finally, a sparse representation-based regularization module is designed to refine the generated content after multi-scale feature aggregation. Experiments are conducted to compare our proposal with both traditional and deep learning-based techniques on synthetic and real-world shadows. Results demonstrate that our method achieves favorable image inpainting in terms of visual quality and quantitative metrics, especially when wide shadows are present.
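A minimal sketch of the sparse-representation regularization idea (not the paper's module): a patch dictionary is learned from a small training set, and inpainted patches are then re-expressed with a few dictionary atoms to suppress implausible content. Patch size, number of atoms, and sparsity level are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def sparse_regularize(train_img, inpainted_img, patch=8, n_atoms=64, n_nonzero=5):
    # learn a patch dictionary from a small amount of training data
    train_patches = extract_patches_2d(train_img, (patch, patch), max_patches=2000, random_state=0)
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=n_nonzero,
                                       random_state=0).fit(train_patches.reshape(len(train_patches), -1))
    # re-express the inpainted image with sparse codes over the learned dictionary
    patches = extract_patches_2d(inpainted_img, (patch, patch)).reshape(-1, patch * patch)
    codes = dico.transform(patches)                           # sparse codes via OMP
    recon = (codes @ dico.components_).reshape(-1, patch, patch)
    return reconstruct_from_patches_2d(recon, inpainted_img.shape)
```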
Here, we analytically study the signal digitization procedure in FD-OCT and propose a novel mixed-signal framework to model its time-domain image formation. It turns out that FD-OCT is a shift-variant system if the conventional IDFT-based technique is used to reconstruct the A-lines. Specifically, both the amplitude and phase responses of the system depend on the axial location of the input sample. We believe this finding provides new insight into FD-OCT image reconstruction and can guide researchers in developing better reconstruction algorithms in the future.
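A minimal numerical illustration related to the shift-variance claim (a simplified demo, not the paper's analysis): fringes of a single reflector sampled uniformly in wavelength (hence non-uniformly in wavenumber) are reconstructed with a plain IDFT, and the reconstructed peak broadens and weakens as the reflector moves deeper, i.e. the response depends on axial position. Source bandwidth and depths are example values.

```python
import numpy as np

n = 2048
wl = np.linspace(800e-9, 880e-9, n)             # uniform wavelength sampling
k = 2 * np.pi / wl                               # non-uniform wavenumber axis

for depth in (0.2e-3, 1.0e-3, 2.0e-3):           # reflector depths (path-length difference / 2)
    fringe = np.cos(2 * k * depth)               # interference fringe of a single reflector
    a_line = np.abs(np.fft.ifft(fringe))         # conventional IDFT reconstruction
    peak = a_line[: n // 2].max()
    width = np.sum(a_line[: n // 2] > peak / 2)  # crude FWHM in pixels
    print(f"depth {depth * 1e3:.1f} mm: peak {peak:.3f}, FWHM ~ {width} px")
```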
FD-OCT is a widely used technology that can provide high-resolution 3D reconstructions. Conventional OCT uses IDFT-based reconstruction, which offers an FFT-limited axial resolution. Recently, several optimization-based methods have reportedly improved the resolution, but undesired noise artifacts may appear in their results and degrade the image quality. In this work, we propose an iterative error reduction method to remove the artifacts as well as improve the resolution. A numerical simulation is designed to validate our algorithm. Two reconstruction methods, IDFT reconstruction and l1 minimization, are selected for a comparative study. Specifically, we conduct the simulation at four different noise levels. The results show that our proposed method greatly suppresses the artifacts and obtains a faithful reconstruction even when the SNR is reduced.
Accurate retinal layer segmentation, especially of the peripapillary retinal nerve fiber layer (RNFL), is critical for the diagnosis of ophthalmic diseases. However, due to the complex morphologies of the peripapillary region, most existing methods focus on segmenting the macular region and cannot be directly applied to peripapillary retinal optical coherence tomography (OCT) images. In this paper, we propose a novel graph convolutional network (GCN)-assisted segmentation framework based on a U-shaped neural network for peripapillary retinal layer segmentation in OCT images. We argue that the strictly stratified structure of the retinal layers, together with the centered optic disc, makes an ideal target for a GCN. Specifically, a graph reasoning block is inserted between the encoder and decoder of the U-shaped network to conduct spatial reasoning. In this way, the peripapillary retina in OCT images is segmented into nine layers, including the RNFL. The proposed method was trained and tested on our collected dataset of peripapillary retinal OCT images. Experimental results show that our segmentation method outperforms other state-of-the-art methods. In particular, compared with ReLayNet, the average and RNFL Dice coefficients are improved by 1.2% and 2.6%, respectively.
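A minimal sketch, inspired by generic graph-reasoning modules rather than the paper's exact block, of a layer that can sit between the encoder and decoder of a U-shaped network: CNN features are softly projected onto a small set of graph nodes, a lightweight reasoning step (a stand-in for a graph convolution) mixes node features, and the result is projected back and fused residually. All dimensions are illustrative.

```python
import torch
import torch.nn as nn

class GraphReasoningBlock(nn.Module):
    def __init__(self, channels=256, nodes=32, node_dim=128):
        super().__init__()
        self.to_nodes = nn.Conv2d(channels, nodes, 1)        # soft assignment of pixels to nodes
        self.reduce = nn.Conv2d(channels, node_dim, 1)
        self.gcn = nn.Sequential(nn.Linear(node_dim, node_dim), nn.ReLU(),
                                 nn.Linear(node_dim, node_dim))
        self.expand = nn.Conv2d(node_dim, channels, 1)

    def forward(self, x):                                     # x: (B, C, H, W)
        B, C, H, W = x.shape
        assign = self.to_nodes(x).flatten(2).softmax(dim=-1)  # (B, N, H*W)
        feat = self.reduce(x).flatten(2)                      # (B, D, H*W)
        nodes = torch.bmm(assign, feat.transpose(1, 2))       # (B, N, D) graph nodes
        nodes = self.gcn(nodes)                               # reasoning over nodes
        back = torch.bmm(assign.transpose(1, 2), nodes)       # (B, H*W, D)
        back = back.transpose(1, 2).reshape(B, -1, H, W)      # (B, D, H, W)
        return x + self.expand(back)                          # residual fusion with the input

out = GraphReasoningBlock()(torch.randn(2, 256, 32, 32))      # same shape as the input
```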
Optical Coherence Tomography (OCT) has established itself as an important tool for studying the role of cilia in Mucociliary Clearance (MCC) due to its ability to observe the cilia's temporal characteristics over a large field of view. To obtain useful, quantitative measures of this dynamic morphology, the ciliated layer of tissue needs to be segmented from other static components. This is currently accomplished using Speckle Variance processing, a technique whose success relies on subjective thresholding and which lacks selectivity against other sources of speckle noise. We present a modified, frequency-constrained version of Robust Principal Component Analysis (RPCA), which we call Frequency Constrained RPCA (FC-RPCA), as an alternative method for dynamic segmentation of cilia from time-varying OCT B-scans. Rooted in sparse representation theory, FC-RPCA decomposes a stack of images in time into a low-rank (static) and a sparse (dynamic) matrix. The sparse matrix represents the segmented ciliated layer because of the sparse frequency spectrum exhibited by the cilia's characteristic beating pattern. This novel algorithm introduces an additional feature, a user-defined frequency constraint on the sparse component, which prevents other sources of speckle noise, such as slow-moving mucus clouds at the tissue surface, from being segmented with the cilia. The algorithm was used to segment motile cilia in 17 datasets of ex vivo human ciliated epithelium with high accuracy. Furthermore, FC-RPCA requires no parameter tuning across datasets, demonstrating its capability as a robust tool for processing large volumes of data. When compared with the standard Speckle Variance method, FC-RPCA performed with improved accuracy and selectivity.
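A minimal sketch of the plain RPCA decomposition underlying FC-RPCA: the stack of B-scans (one flattened frame per row) is split into a low-rank (static) and a sparse (dynamic) matrix by alternating singular-value thresholding and entrywise soft thresholding. The frequency constraint described above is omitted here; lambda and mu follow common principal-component-pursuit defaults, and the iteration count is illustrative.

```python
import numpy as np

def rpca(M, n_iter=100):
    lam = 1.0 / np.sqrt(max(M.shape))
    mu = 0.25 * M.size / np.abs(M).sum()
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    soft = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * soft(s, 1.0 / mu)) @ Vt                   # singular-value thresholding
        S = soft(M - L + Y / mu, lam / mu)                 # entrywise soft thresholding
        Y = Y + mu * (M - L - S)                           # dual variable update
    return L, S                                            # static and dynamic components

frames = np.random.rand(200, 64 * 64)                      # 200 B-scans, each flattened to one row
low_rank, sparse = rpca(frames)
```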
Subcellular resolution is required for OCT to portray microstructural information of myocardial tissue that is comparable to histology. Compared with intrinsic intensity contrast, a functional OCT system can provide contrast related to tissue composition. We present a high-resolution (HR) cross-polarization OCT (CP-OCT) system that provides functional contrast of human myocardial tissue in a one-shot measurement. The system is implemented based on our previously reported high-resolution, long-imaging-range OCT system with minimal modification. It features a broadband supercontinuum source and single-channel, one-shot detection with moderate signal processing. The system has an axial resolution of 3.07 μm and is capable of producing accurate polarization information after calibrating the reconstruction performance with a quarter-wave plate. The orthogonal polarization channels are multiplexed to fit within one imaging range. Following CP-OCT detection, the retardation can be reconstructed from the complex signals, and the depolarization effect can be depicted by the channel intensity ratio. Tissue specimens from ten fresh human hearts are used to demonstrate the capability of the CP-OCT contrasts. By analyzing the intrinsic and functional OCT contrasts of fresh human myocardial tissue against histology slides, we show that various tissue structures and tissue types of the myocardium, such as fibrosis and ablated lesions, can be better depicted by the functional contrasts. We also suggest the possibility of using A-line features from the two orthogonal polarization channels to distinguish normal myocardium, fibrotic myocardium, and ablated lesions. This may serve as a rapid and cost-efficient solution for assessment of the myocardium and further facilitate automatic tissue classification.
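A minimal sketch of the standard cross-polarization contrast computations referred to above (not necessarily the paper's calibrated pipeline): phase retardation from the amplitude ratio of the two orthogonal channels, and a channel intensity ratio as a simple depolarization-related contrast. Inputs are assumed to be complex-valued A-scans from the co- and cross-polarized channels.

```python
import numpy as np

def cp_oct_contrasts(a_co, a_cross, eps=1e-12):
    """a_co, a_cross: complex OCT signals from the co- and cross-polarized channels."""
    retardation = np.arctan2(np.abs(a_cross), np.abs(a_co))         # radians, 0 .. pi/2
    intensity_ratio = 10 * np.log10((np.abs(a_cross) ** 2 + eps) /
                                    (np.abs(a_co) ** 2 + eps))      # channel ratio in dB
    return retardation, intensity_ratio
```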
Phase-resolved optical coherence tomography (OCT), a functional extension of OCT, provides depth-resolved phase information as an extra contrast. In cardiology, changes in mechanical properties have been associated with tissue remodeling and disease progression. Here we present the capability of profiling structural deformation of a sample in vivo using a highly stable swept-source OCT system. The system, operating at 1300 nm, has an A-line acquisition rate of 200 kHz. We measured the phase noise floor to be 6.5 pm ± 3.2 pm by placing a cover slip in the sample arm while blocking the reference arm. We then conducted a vibrational frequency test by measuring the phase response of a polymer membrane stimulated by a pure-tone acoustic wave from 10 kHz to 80 kHz. The measured frequency response agreed with the known stimulation frequency with an error of < 0.005%. We further measured the phase response of 7 fresh swine hearts, obtained from Green Village Packing Company, in a mechanical stretching test within 24 hours of sacrifice. The heart tissue was cut into 1 mm slices and fixed on two motorized stages. We acquired 100,000 consecutive M-scans while the sample was stretched at a constant velocity of 10 µm/s. The depth-resolved phase image shows a linear phase response over time at each depth, but the slope varies among tissue types. Our future work includes refining the experimental protocol to quantitatively measure the elastic modulus of tissue in vivo and building a tissue classifier based on depth-resolved phase information.
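A minimal sketch of the standard phase-resolved displacement calculation implied above: the phase difference between consecutive M-scan acquisitions at each depth is converted to axial displacement via delta_z = lambda0 * delta_phi / (4 * pi * n). The center wavelength and refractive index are example values.

```python
import numpy as np

def phase_to_displacement(m_scan, wavelength=1300e-9, n_medium=1.38):
    """m_scan: complex A-lines over time, shape (n_time, n_depth)."""
    dphi = np.angle(m_scan[1:] * np.conj(m_scan[:-1]))        # wrapped phase difference between frames
    return wavelength * dphi / (4 * np.pi * n_medium)         # axial displacement per time step (meters)
```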
The ciliated epithelium is important to the human respiratory system because it clears mucus that contains harmful microorganisms and particulate matter. We report the ex vivo visualization of human trachea/bronchus ciliated epithelium and the induced flow, characterized using spectral-domain optical coherence tomography (SD-OCT). A total of 17 samples from 7 patients were imaged. Samples were obtained from the Columbia University Department of Anesthesiology's tissue bank. After excision, the samples were placed in oxygenated Gibco Medium 199 solution at 4°C until imaging and were maintained at 36.7°C throughout the experiment. The imaging protocol included obtaining 3D volumes and 200 consecutive B-scans parallel to the head-to-feet direction (superior-inferior axis) of the airway, using a Thorlabs Telesto system (1300 nm, 28 kHz A-line rate) and a custom-built high-resolution SD-OCT system (800 nm, 32 kHz A-line rate). After imaging, samples were processed for H&E histology. Speckle variance of the time-resolved datasets demonstrates significant contrast at the ciliated epithelium sites. Flow images were also obtained after injecting 10 µm polyester beads into the solution, which show bead trajectories near the ciliated epithelium. In contrast, flow images taken in the orthogonal plane show no bead trajectories. This observation is in line with our expectation that cilia drive flow predominantly along the superior-inferior axis. We also observed the protective function of the mucus, shielding the epithelium from the invasion of foreign objects such as microspheres. Further studies will focus on the cilia's physiological response to environmental changes such as drug administration and physical injury.
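A minimal sketch of the speckle variance contrast used above: the per-pixel intensity variance over the stack of consecutive B-scans highlights dynamic regions such as the ciliated epithelium. Any normalization or thresholding applied afterwards is left out and would be study-specific.

```python
import numpy as np

def speckle_variance(b_scans):
    """b_scans: intensity B-scans acquired over time, shape (n_frames, H, W)."""
    return np.var(b_scans, axis=0)     # high values indicate temporal decorrelation, i.e. motion
```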
Functional extensions to optical coherence tomography (OCT) provide useful imaging contrasts that are complementary to conventional OCT. Our goal is to characterize tissue types within the myocardium that arise from remodeling and therapy. High-speed imaging is necessary to extract mechanical properties and the dynamics of fiber orientation changes in a beating heart.
Functional extensions of OCT such as polarization-sensitive OCT and optical coherence elastography (OCE) require high phase stability of the system, which is a drawback of current mechanically tuned swept-source OCT systems. Here we present a high-speed functional imaging platform that includes an ultrahigh-phase-stability swept source equipped with a KTN deflector from NTT-AT. The swept source requires no mechanical movement during wavelength sweeping; it is electrically tuned. The inter-sweep phase variance of the system was measured to be less than 300 ps at a path length difference of ~2 mm.
The axial resolution of the system is 20 µm and the -10 dB fall-off depth is about 3.2 mm. The sample arm has an 8 mm × 8 mm field of view with a lateral resolution of approximately 18 µm, and it uses a two-axis MEMS mirror that is programmable and capable of scanning arbitrary patterns at a sampling rate of 50 kHz.
Preliminary imaging results showed differences in polarization properties and imaging penetration between ablated and normal myocardium. In the future, we will conduct dynamic stretching experiments on strips of human myocardial tissue to characterize mechanical properties using OCE. With high-speed imaging at 200 kHz and an all-fiber design, we will work towards catheter-based functional imaging.
We introduce a fluorescence imaging method capable of detecting fluorescent micro-particles over an ultra-wide field of view of 19 cm × 28 cm using a modified flatbed scanner. We added a custom-designed absorbing emission filter and a computer-controlled two-dimensional LED array, and modified the driver of the scanner to maximize the sensitivity, exposure time, and gain for fluorescent detection of micro-objects. This high-throughput fluorescence imaging device, used in conjunction with a microfluidic sample holder, enables rapid screening of fluorescent micro-objects inside more than 2.2 mL of optically dense media (i.e., whole blood) in under 5 minutes. The device is sensitive enough to detect fluorescently labeled cells and generates images with an effective pixel count of 2.2 gigapixels.