The COVID-19 pandemic prompted the need for rapid detection of the SARS-CoV-2 virus and potentially other pathogens. In this study, we report a rapid, label-free optical detection method for SARS-CoV-2 aimed at detecting the virus in patients' breath condensate. In the published pre-clinical study, we show that, through phase imaging with computational specificity (PICS), we can detect and classify SARS-CoV-2 versus other viruses (H1N1, HAdV, and ZIKV) with 96% accuracy, within a minute of sample collection. PICS combines ultrasensitive quantitative phase imaging (QPI) with advanced deep-learning algorithms to detect and classify viral particles. The second stage of our project, currently under development, involves clinical validation of the proposed testing technique. Breath samples collected from patients in the clinic will be imaged with QPI, and a U-Net model trained on these samples will identify SARS-CoV-2 in the sample within a minute.
In this study, we use phase imaging with computational specificity (PICS) to detect single adenovirus and SARS-CoV-2 particles. These viruses are sub-diffraction particles, with a maximum diameter of approximately 120 nm, which implies that we cannot fully visualize their internal structure. However, due to the very high spatial sensitivity of spatial light interference microscopy (SLIM), with a path-length sensitivity of 0.3 nm, we can detect and localize individual viruses and, furthermore, classify them with high accuracy using deep learning.
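To make the detection step concrete, below is a minimal sketch of how individual sub-diffraction particles could be localized in a reconstructed phase map by thresholding against the background phase noise; the 3-sigma threshold, minimum blob size, and synthetic test image are illustrative assumptions, not values from the study.

```python
# Minimal sketch of particle localization in a reconstructed phase map,
# assuming the phase image is a 2D float array (radians). The 3-sigma
# threshold and minimum blob size are illustrative choices.
import numpy as np
from scipy import ndimage

def localize_particles(phase_map, sigma_factor=3.0, min_pixels=4):
    """Return (row, col) centroids of candidate sub-diffraction particles."""
    background = np.median(phase_map)
    noise = np.std(phase_map)                     # background phase fluctuation
    mask = phase_map > background + sigma_factor * noise
    labels, n = ndimage.label(mask)               # connected components
    centroids = []
    for i in range(1, n + 1):
        blob = labels == i
        if blob.sum() >= min_pixels:              # reject single-pixel noise
            centroids.append(ndimage.center_of_mass(phase_map, labels, i))
    return centroids

# Example with a synthetic phase map containing two weak point scatterers
rng = np.random.default_rng(0)
phase = rng.normal(0.0, 0.002, (256, 256))        # sub-nanometer path-length noise
phase[100:102, 50:52] += 0.05
phase[200:202, 180:182] += 0.05
print(localize_particles(phase))
```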
Quantitative phase imaging (QPI), with its capability to capture intrinsic contrast within transparent samples, has emerged as an important imaging method for biomedical research. However, due to its label-free nature, QPI lacks specificity and thus faces limitations in complex cellular systems. In previous work, we proposed phase imaging with computational specificity (PICS), an AI-enhanced imaging approach that advances QPI by using deep learning to provide specificity. Here we show that PICS can be applied to study individual cell behavior and cellular dry-mass changes across the phases of the cell cycle. Cell cycle information is traditionally obtained by fluorescence microscopy with markers such as the Fluorescence Ubiquitin Cell Cycle Indicator (FUCCI). Our work shows that, using deep learning, we can train a neural network to accurately predict the cell cycle phase (G1, S, or G2) of each individual cell.
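As an illustration of the classification step, the following is a minimal sketch of training a network to predict the cell cycle phase (G1, S, or G2) from single-cell phase crops, with labels assumed to come from FUCCI fluorescence; the small CNN, the hyperparameters, and the random stand-in data are placeholders rather than the architecture used in our work.

```python
# Minimal sketch of a 3-class cell-cycle classifier (G1/S/G2) on single-cell
# crops from phase maps, with labels derived from FUCCI fluorescence.
import torch
import torch.nn as nn

class CellCycleNet(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                      # x: (B, 1, H, W) phase crops
        return self.classifier(self.features(x).flatten(1))

model = CellCycleNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on random tensors standing in for real data
crops = torch.randn(8, 1, 64, 64)              # single-cell phase crops
labels = torch.randint(0, 3, (8,))             # 0=G1, 1=S, 2=G2 (FUCCI-derived)
loss = criterion(model(crops), labels)
loss.backward()
optimizer.step()
```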
Quantitative phase imaging (QPI) has been widely applied to characterizing cells and tissues. Spatial light interference microscopy (SLIM) is a highly sensitive QPI method; however, as a phase-shifting technique, it is limited in acquisition rate to at most 15 frames per second. Diffraction phase microscopy (DPM), on the other hand, is a single-shot method with the advantage of a common-path geometry, but laser-based DPM systems are plagued by spatial noise due to speckles and multiple reflections. Here, we propose using deep learning to produce SLIM-quality phase maps from single-shot DPM images. We constructed a deep learning model based on U-Net and trained it on over 1,000 pairs of DPM and SLIM images. On the test set, we observed that the model learned to remove the speckles in DPM and overcame the background phase noise. We implemented the neural network inference into the live acquisition software, which allows us to acquire single-shot DPM images and infer SLIM-quality images from them in real time.
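A minimal sketch of the DPM-to-SLIM translation training is shown below, assuming co-registered image pairs; the compact two-level U-Net, the L1 loss, and the random stand-in tensors are illustrative choices and do not reproduce the exact model reported here.

```python
# Minimal sketch of training a U-Net-style network to map single-shot DPM
# phase images to SLIM-quality phase maps from co-registered pairs.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(1, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(64, 32)              # 64 = 32 skip + 32 upsampled
        self.out = nn.Conv2d(32, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.out(d1)

model = TinyUNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative step with random tensors standing in for a DPM/SLIM pair
dpm = torch.randn(4, 1, 128, 128)              # speckled single-shot DPM phase
slim = torch.randn(4, 1, 128, 128)             # low-noise SLIM ground truth
loss = nn.functional.l1_loss(model(dpm), slim)
loss.backward()
optimizer.step()
```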
We demonstrate that a live-dead cell assay can be conducted in a label-free manner using quantitative phase imaging and deep learning. We apply the concept of our newly developed phase imaging with computational specificity (PICS) to digitally stain for the live/dead markers. HeLa cell cultures mixed with fluorescent viability reagents (ReadyProbes, ThermoFisher) were imaged for 24 hours by spatial light interference microscopy (SLIM) and fluorescence microscopy. Based on the ratio of the two fluorescence signals, semantic segmentation maps were generated to label the state of each cell as live, injured, or dead. We trained an EfficientNet to infer cell viability from SLIM images, with the semantic maps as ground truth. Evaluated on the test set, the trained network achieved F1 scores of 73.4%, 97.0%, and 94.3% in identifying live, injured, and dead cells, respectively.
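The following sketch illustrates how such a viability classifier could be fine-tuned and scored, assuming torchvision's efficientnet_b0 as the backbone and per-class F1 from scikit-learn; the single-channel handling, hyperparameters, and random stand-in data are assumptions, not the paper's exact training setup.

```python
# Minimal sketch of fine-tuning an EfficientNet to classify cell viability
# (live / injured / dead) from SLIM images, with labels taken from the
# fluorescence-derived semantic maps.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0
from sklearn.metrics import f1_score

model = efficientnet_b0(weights=None)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 3)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# SLIM phase images are single-channel; repeat to 3 channels for the backbone
phase = torch.randn(8, 1, 224, 224).repeat(1, 3, 1, 1)
labels = torch.randint(0, 3, (8,))             # 0=live, 1=injured, 2=dead

loss = criterion(model(phase), labels)
loss.backward()
optimizer.step()

# Per-class F1 on a held-out batch (random stand-in data here)
model.eval()
with torch.no_grad():
    preds = model(phase).argmax(dim=1)
print(f1_score(labels.numpy(), preds.numpy(), average=None))
```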
We propose synthetic aperture gradient light interference microscopy (SA-GLIM) as a solution to the computational complexity of standard Fourier ptychographic microscopy (FPM). The new system combines direct phase measurements from GLIM at various illumination angles with a synthetic aperture reconstruction method to produce high-resolution, large field-of-view (FOV) quantitative phase maps. Using a 5× objective lens (NA = 0.15), SA-GLIM generates phase maps with a spatial resolution of 850 nm and a FOV of approximately 1.7 × 1.7 mm². We tested the performance using a mixture of polystyrene beads (1 μm and 3 μm in diameter); the smaller beads are easily resolved in the final image. Compared with standard FPM, SA-GLIM records substantially fewer low-resolution images, which makes the data throughput highly efficient.
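For intuition, the sketch below shows one simple way a synthetic-aperture reconstruction can be assembled: each angular phase measurement is converted to a complex field, its spectrum is placed at the corresponding offset in an enlarged Fourier grid, and overlapping regions are averaged. The function, offsets, and upsampling factor are hypothetical and greatly simplified relative to the actual SA-GLIM reconstruction.

```python
# Conceptual sketch of a synthetic-aperture reconstruction from phase maps
# acquired at several illumination angles, assuming each measurement is an
# unwrapped phase image and the illumination tilt is known as a
# spatial-frequency offset (in pixels of the enlarged Fourier grid).
import numpy as np

def synthesize(phase_maps, freq_offsets, upsample=2):
    """Stitch angular measurements into one higher-resolution phase map."""
    h, w = phase_maps[0].shape
    H, W = upsample * h, upsample * w
    spectrum = np.zeros((H, W), dtype=complex)
    counts = np.zeros((H, W))
    for phi, (dy, dx) in zip(phase_maps, freq_offsets):
        field = np.exp(1j * phi)                       # complex field from phase
        F = np.fft.fftshift(np.fft.fft2(field))
        y0, x0 = H // 2 - h // 2 + dy, W // 2 - w // 2 + dx
        spectrum[y0:y0 + h, x0:x0 + w] += F            # place at its passband
        counts[y0:y0 + h, x0:x0 + w] += 1
    spectrum[counts > 0] /= counts[counts > 0]         # average overlaps
    hi_res = np.fft.ifft2(np.fft.ifftshift(spectrum))
    return np.angle(hi_res)

# Example: five illumination tilts of a synthetic sample
phi = np.random.rand(256, 256)
maps = [phi] * 5
offsets = [(0, 0), (0, 80), (0, -80), (80, 0), (-80, 0)]
print(synthesize(maps, offsets).shape)                 # (512, 512)
```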
Fluorescence microscopy has proven to be a valid method of classifying sperm by characteristics such as sex. However, the fluorescent labels have been observed to increase oxidative stress and introduce undesired bias. We show that spatial light interference microscopy (SLIM), a quantitative phase imaging (QPI) method that reveals the intrinsic contrast of cell structures, is well suited to the study of sperm. To enable high-throughput sperm quality assessment using QPI, we propose a new analysis method based on deep learning and the U-Net architecture. We show that our model achieves satisfactory precision and accuracy and that it can be integrated within our image acquisition software for near real-time analysis.
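Below is a minimal sketch of how trained-model inference might sit inside a live acquisition loop for near real-time analysis; the camera interface (grab_frame, is_running, display_overlay) is a hypothetical placeholder for whatever the acquisition software exposes, and the model is assumed to be a trained U-Net-style segmentation network.

```python
# Minimal sketch of near real-time inference inside an acquisition loop.
# The camera object and its methods are hypothetical placeholders.
import numpy as np
import torch

def run_live_analysis(model, camera, device="cpu"):
    model.eval().to(device)
    with torch.no_grad():
        while camera.is_running():
            frame = camera.grab_frame()                    # 2D phase image
            x = torch.from_numpy(frame.astype(np.float32))
            x = x.unsqueeze(0).unsqueeze(0).to(device)     # (1, 1, H, W)
            mask = model(x).argmax(dim=1).squeeze(0)       # per-pixel class
            camera.display_overlay(mask.cpu().numpy())     # show segmentation
```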
Microscopic imaging modalities can be classified into two categories: those that form contrast from external agents such as dyes, and label-free methods that generate contrast from the object’s unmodified structure. While label-free methods such as brightfield, phase contrast, or quantitative phase imaging (QPI) are substantially easier to use, as well as non-toxic, their lack of specificity leads many researchers to turn to labels for insights into biological processes, despite limitations due to photobleaching and phototoxicity. The label-free image may contain the structures of interest, but it is often difficult or time-consuming to distinguish these structures from their surroundings. Here we summarize our recent progress in shattering this tradeoff, by using machine learning to perform automated segmentation on label-free, intrinsic contrast, quantitative phase images.
Histological staining of tissue samples is one of the most helpful tools in the diagnosis and prognosis of various cancers. However, before a histopathologist can examine a slide, the tissue must first undergo a series of time-consuming preparation steps, including staining to visually differentiate features in the sample.
In this study, we use a label-free method to generate a virtually stained microscopic image from a single spatial light interference microscopy (SLIM) image of an unlabeled tissue sample, thereby eliminating the need for standard histochemical staining.
This novel approach will render histopathological practices faster and more cost-effective, while providing medically relevant dry mass information associated with SLIM images.
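As a pointer to how the dry mass information associated with SLIM images is obtained, the sketch below applies the standard QPI relation m = λ/(2πγ) ∫φ dA, with a refraction increment γ of about 0.2 mL/g; the wavelength and pixel size are illustrative values, not calibration data from this study.

```python
# Minimal sketch of the dry-mass calculation from a SLIM phase map, using
# m = lambda / (2*pi*gamma) * integral(phi dA), gamma ~ 0.2 mL/g (0.2 um^3/pg).
# Wavelength and pixel size below are illustrative, not measured values.
import numpy as np

def dry_mass_pg(phase_map, wavelength_um=0.55, pixel_um=0.3, gamma_um3_per_pg=0.2):
    """Total dry mass (picograms) in the field of view from phase (radians)."""
    area_per_pixel = pixel_um ** 2                      # um^2
    integral = phase_map.sum() * area_per_pixel         # rad * um^2
    return wavelength_um / (2 * np.pi * gamma_um3_per_pg) * integral

phase = np.full((100, 100), 0.5)                        # flat 0.5 rad region
print(f"{dry_mass_pg(phase):.1f} pg")
```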