The use of photo-activated fluorescent molecules to create long sequences of low-emitter-density, diffraction-limited images enables high-precision emitter localization. However, this comes at the cost of lengthy imaging times, limiting temporal resolution. In recent years, a variety of approaches have been suggested to reduce imaging times, ranging from classical optimization and statistical algorithms to deep learning methods. Classical methods often rely on prior knowledge of the optical system, require heuristic parameter tuning, or do not achieve sufficient performance. Deep learning methods proposed to date tend to generalize poorly outside the specific distribution they were trained on and to yield black-box solutions that are hard to interpret. In this paper, we suggest combining a recent high-performing classical method, SPARCOM, with model-based deep learning via the algorithm unfolding approach, which uses an iterative algorithm to design a compact neural network that incorporates domain knowledge. We show that the resulting network, Learned SPARCOM (LSPARCOM), requires far fewer layers and parameters and can be trained on a single field of view. Nonetheless, it yields results comparable or superior to those obtained by SPARCOM, with no heuristic parameter determination or explicit knowledge of the point spread function, and generalizes better than standard deep learning techniques. Thus, we believe LSPARCOM will find broad use in single-molecule localization microscopy of biological structures, and pave the way to interpretable, efficient live-cell imaging in a broad range of settings.
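For readers unfamiliar with algorithm unfolding, the following is a minimal NumPy sketch of a generic LISTA-style unfolded iteration, not the actual LSPARCOM architecture: each "layer" reproduces one soft-thresholded ISTA step, with the matrices W_e and W_s and the per-layer thresholds playing the role of the parameters that training would learn. All names and dimensions here are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, theta):
    # Proximal operator of the L1 norm (a one-sided variant is natural when
    # the unknowns are non-negative intensities).
    return np.maximum(np.abs(x) - theta, 0.0) * np.sign(x)

def unfolded_ista_forward(y, W_e, W_s, thetas):
    """Forward pass of a LISTA-style unfolded network (generic sketch).

    y      : measurement vector
    W_e    : input weight, playing the role of step_size * A^T
    W_s    : lateral weight, playing the role of I - step_size * A^T A
    thetas : per-layer soft-threshold values (one entry per unfolded layer)
    """
    x = soft_threshold(W_e @ y, thetas[0])
    for theta in thetas[1:]:
        x = soft_threshold(W_e @ y + W_s @ x, theta)
    return x

# Example with random parameters, just to show the shapes involved:
rng = np.random.default_rng(0)
n_low, n_high, n_layers = 64, 256, 10
y = rng.standard_normal(n_low)
W_e = 0.01 * rng.standard_normal((n_high, n_low))
W_s = 0.01 * rng.standard_normal((n_high, n_high))
x_hat = unfolded_ista_forward(y, W_e, W_s, thetas=np.full(n_layers, 0.1))
```

Because the number of unfolded layers is fixed and small, and each layer mirrors a model-based iteration, the resulting network is compact and easier to interpret than a generic deep architecture.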
Breakthroughs in the field of chemistry have enabled surpassing the classical optical diffraction limit by utilizing photo-activated fluorescent molecules. In the single-molecule localization microscopy (SMLM) approach, a sequence of diffraction-limited images, produced by a sparse set of emitting fluorophores with minimally overlapping point-spread functions, is acquired, allowing the emitters to be localized with high precision by simple post-processing. However, the low-emitter-density concept requires lengthy imaging times to achieve full coverage of the imaged specimen on the one hand and minimal overlap on the other. Thus, this concept in its classical form has low temporal resolution, limiting its application to slow-changing specimens. In recent years, a variety of approaches have been suggested to reduce imaging times by allowing the use of higher emitter densities. One of these methods is the sparsity-based approach for super-resolution microscopy from correlation information of high emitter-density frames, dubbed SPARCOM, which exploits sparsity in the correlation domain while assuming that the blinking emitters are uncorrelated over time and space, yielding both high temporal and spatial resolution. However, SPARCOM has only been formulated for the two-dimensional setting, where the sample is assumed to be an infinitely thin single layer, making it unsuitable for most biological specimens. In this work, we present an extension of SPARCOM to the more challenging three-dimensional scenario, in which we recover a volume, rather than an image, from the set of recorded frames.
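As a concrete illustration of the correlation-domain idea (a sketch under our own assumptions, not the paper's implementation), the statistic that sparsity-based recovery can operate on is the zero-lag temporal autocorrelation, i.e. the per-pixel variance, of the recorded movie; in the 3-D extension the unknown emitter map lives on a volumetric grid rather than a planar one.

```python
import numpy as np

def zero_lag_correlation(frames):
    """frames : (T, H, W) stack of high-density, diffraction-limited frames.

    Returns the per-pixel temporal variance, i.e. the diagonal of the
    empirical autocorrelation at zero time lag.  If the emitters blink
    independently in time and space, this statistic is a weighted sum of
    squared, shifted PSFs, so a sparse emitter map can be sought on a finer
    grid from it (a volumetric grid, in the 3-D extension).
    """
    centered = frames - frames.mean(axis=0, keepdims=True)
    return (centered ** 2).mean(axis=0)
```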
We present highly parallel and efficient algorithms for real-time reconstruction of quantitative three-dimensional (3-D) refractive-index maps of biological cells without labeling, as obtained from the interferometric projections acquired by tomographic phase microscopy (TPM). The new algorithms are implemented on the graphics processing unit (GPU) of the computer using the CUDA programming environment. The reconstruction process includes two main parts. First, we performed parallel complex wave-front reconstruction of the TPM-based interferometric projections acquired at various angles. The complex wave-front reconstructions are carried out on the GPU in parallel, minimizing the calculation time of the required Fourier transforms and phase unwrapping. Next, we implemented the 3-D refractive-index map retrieval on the GPU in parallel, using the TPM filtered back-projection algorithm. The incorporation of inherently parallel algorithms with a programming environment such as Nvidia's CUDA makes it possible to obtain a real-time processing rate and enables a high-throughput platform for label-free 3-D cell visualization and diagnosis.
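As an illustration of the first processing stage, the sketch below demodulates a single off-axis interferogram by cropping the +1 cross-correlation term in the Fourier domain. The carrier location and window size (carrier_row, carrier_col, half_width) are hypothetical parameters of this sketch; in a GPU implementation the same FFTs would be dispatched for all projection angles in parallel (e.g., through CUDA's FFT libraries), followed by 2-D phase unwrapping and filtered back projection.

```python
import numpy as np

def extract_wavefront(interferogram, carrier_row, carrier_col, half_width):
    """Minimal sketch of off-axis interferogram demodulation.

    Returns the complex wavefront whose (still wrapped) phase would next be
    unwrapped and fed to the filtered back-projection stage.
    """
    # Centered 2-D spectrum of the recorded interferogram.
    F = np.fft.fftshift(np.fft.fft2(interferogram))
    # Crop the +1 term around the spatial carrier frequency.
    r0, r1 = carrier_row - half_width, carrier_row + half_width
    c0, c1 = carrier_col - half_width, carrier_col + half_width
    cropped = F[r0:r1, c0:c1]
    # Back to the spatial domain: demodulated (and downsampled) complex field.
    return np.fft.ifft2(np.fft.ifftshift(cropped))
```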