Significance: Quantitative phase imaging (QPI) can visualize cellular morphology and measure dry mass. Automated segmentation of QPI imagery is desirable for tracking neuron growth. Convolutional neural networks (CNNs) have provided state-of-the-art results for image segmentation. Improving the amount and robustness of training data is often crucial to improving CNN output on novel samples, but acquiring enough labeled data can be labor intensive. Data augmentation and simulation can be used to address this, but it is unclear whether low-complexity data can result in useful network generalization.
Aim: We trained CNNs on abstract images of neurons and on augmented images of real neurons. We then benchmarked the resulting models against human labeling.
Approach: We used a stochastic simulation of neuron growth to guide abstract QPI image and label generation. We then tested the segmentation performance of networks trained on augmented data and networks trained on simulated data against manual labeling established via consensus of three human labelers.
Results: We show that training on augmented real data resulted in a model that achieved the best Dice coefficients in our group of CNNs. The largest percent difference in dry mass estimation with respect to the ground truth was driven by segmentation errors of cell debris and phase noise. The error in dry mass when considering the cell body alone was similar between the CNNs. Neurite pixels only accounted for ∼6% of the total image space, making them a difficult feature to learn. Future efforts should consider methods for improving neurite segmentation quality.
Conclusions: Augmented data outperformed the simulated abstract data for this testing set. The quality of segmentation of neurites was the key difference in performance between the models. Notably, even humans performed poorly when segmenting neurites. Further work is needed to improve the segmentation quality of neurites.
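As a concrete reference, the Dice coefficient used above to benchmark the models can be computed directly from binary segmentation masks. This is a minimal numpy sketch; the function name and the convention for two empty masks are illustrative choices, not taken from the paper:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between two binary masks (nonzero = foreground)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total
```

A prediction sharing 2 foreground pixels with a 3-pixel ground truth while predicting 2 pixels total scores 2·2/(3+2) = 0.8.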
Morphological changes in neurons can denote cell health, growth, and death in response to environmental stressors. Quantitative phase imaging (QPI) has been used to assess neuronal network mass over time, which reveals such changes. High-quality segmentation of cells in QPI is necessary to extract dry mass effectively. Neural networks are effective at segmentation but require vast amounts of data to train. Previously, we trained neural networks to segment neurons using simulated images generated from a biological neuron growth model. Images were simulated by approximating cell bodies as ellipsoids and neurites as thin rectangular regions. The simplicity of the neuron images limited the quality of segmentation, especially around neurites, which exhibit weak phase signals. In this work, improved segmentation quality is demonstrated by increasing the complexity of the simulation. Namely, a data set of 5000 training images is procedurally generated by cropping cells from a sample of ten images. Cells are randomly placed, scaled, and rotated into scenes of random noise and of background generated by our microscope. After training the network, its performance is tested on 100 images independent of the training data. This resulted in an improved Dice coefficient between the network output and the ground truth compared with our previous best-performing model.
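The procedural generation described above — cropped cells randomly placed, scaled, and rotated into noisy scenes — can be sketched as follows. This is a simplified stand-in: rotations are limited to multiples of 90 degrees, scaling to integer repetition, and the background is plain Gaussian noise rather than a microscope-derived background; none of the ranges below are the paper's values:

```python
import numpy as np

rng = np.random.default_rng(0)

def compose_scene(cell_crops, shape=(256, 256), noise_sigma=0.05):
    """Paste randomly transformed cell crops into a noisy background,
    returning the scene and its ground-truth label mask."""
    scene = rng.normal(0.0, noise_sigma, shape)   # random-noise background
    mask = np.zeros(shape, dtype=np.uint8)        # ground-truth label image
    for crop in cell_crops:
        patch = np.rot90(crop, k=int(rng.integers(0, 4)))  # random rotation
        s = int(rng.integers(1, 3))                        # random scale factor
        patch = np.kron(patch, np.ones((s, s)))            # nearest-neighbor upscale
        h, w = patch.shape
        if h >= shape[0] or w >= shape[1]:
            continue                                       # too large after scaling
        y = int(rng.integers(0, shape[0] - h))             # random placement
        x = int(rng.integers(0, shape[1] - w))
        scene[y:y+h, x:x+w] += patch
        mask[y:y+h, x:x+w] |= (patch > 0).astype(np.uint8)
    return scene, mask
```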
Morphological changes in neurons are closely related to neurological disorders. Quantitative phase imaging (QPI) has been used to assess neuronal changes over time, using mass-sensitive contrast to quantitatively track network growth. QPI requires high-quality segmentation of neurons in order to measure neuron cell body and neurite mass distributions. Neural networks are the state of the art for segmentation but require thousands of images in order to generalize well. However, recent work on network functionality has shown that networks generalize by learning simple functions. Whether low data complexity hinders this remains to be seen. Here we test this by simulating low-complexity data, specifically QPI images of neurons simulated using a neuronal growth model. We show segmentation results when the trained network is applied to lab-acquired data.
Deep networks trained on one kind of data tend to perform poorly on data beyond their training set. We believe this is because data sets tend to focus too directly on a specific task. We circumvent this by simulating various sums of sinusoidal signals, with and without envelopes, along with blurred spike trains. We then add varied noise to these signals during training to allow the networks to learn a denoising technique. Without using any real Raman or Brillouin data, our network successfully denoises and removes low-frequency drifts from real experimentally acquired Raman and Brillouin data.
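A minimal sketch of this kind of synthetic training-pair generation, assuming Gaussian-blurred spike trains for the peaks and a sinusoidal low-frequency drift; the peak shapes, drift model, and noise levels are illustrative stand-ins rather than the paper's exact signal models:

```python
import numpy as np

rng = np.random.default_rng(1)

def synth_spectrum(n=1024, n_peaks=5):
    """Return a (clean, noisy) pair of spectrum-like signals for
    training a denoiser without any real measured data."""
    x = np.linspace(0.0, 1.0, n)
    clean = np.zeros(n)
    for _ in range(n_peaks):
        center = rng.uniform(0.1, 0.9)          # random peak position
        width = rng.uniform(0.002, 0.02)        # random peak width
        clean += rng.uniform(0.2, 1.0) * np.exp(-0.5 * ((x - center) / width) ** 2)
    drift = 0.3 * np.sin(2 * np.pi * rng.uniform(0.5, 2.0) * x)  # low-frequency drift
    noise = rng.normal(0.0, 0.05, n)                             # additive noise
    return clean, clean + drift + noise
```

The network is then trained to map the corrupted signal back to the clean target, so it implicitly learns both denoising and drift removal.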
The optical activity of Raman scattering provides insight into the absolute configuration and conformation of chiral molecules. Applications of Raman optical activity (ROA) are limited by long integration times due to a relatively low sensitivity of the scattered light to chirality (typically 10⁻³ to 10⁻⁵). We apply ROA techniques to hyper-Raman scattering using incident circularly polarized light and a right-angle scattering geometry. We explore the sensitivity of hyper-Raman scattering to chirality as compared to spontaneous Raman optical activity. Using an excitation wavelength of around 532 nm, photobleaching is minimized while the hyper-Raman scattering benefits from electronic resonant enhancement. For S/R-2-butanol and L/D-tartaric acid, we were unable to detect the hyper-Raman optical activity at the sensitivity level of 1%. We also explored parasitic thermal effects, which can be mitigated by varying the repetition rate of the laser source used for excitation of hyper-Raman scattering.
Monte Carlo simulations (MCSs) allow for the estimation of photon propagation through media given knowledge of the geometry and optical properties. Previous research has demonstrated that the inverse of this problem may be solved as well, where neural networks trained on photon distributions can be used to estimate refractive index, scattering, and absorption coefficients. To extend this work, time-dependent MCSs are used to generate data sets of photon propagation through various media. These simulations were treated as stacks of 2D images in time and used to train convolutional networks to estimate tissue parameters. To find potential features that drive network performance on this task, networks were randomly generated, then trained and validated using 4-fold cross-validation. The top 10 consistently performing networks typically emphasized convolutional chains and convolutional chains ending in max pooling.
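A toy illustration of the forward Monte Carlo problem — far simpler than a time-dependent MCS with geometry — samples photon path lengths to absorption in a homogeneous medium from the scattering and absorption coefficients. The isotropic, infinite-medium assumptions here are for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)

def photon_path_lengths(mu_s, mu_a, n_photons=10000):
    """Sample each photon's total path length until absorption in a
    homogeneous medium. mu_s, mu_a: scattering/absorption coefficients [1/mm]."""
    mu_t = mu_s + mu_a                      # total interaction coefficient
    lengths = np.zeros(n_photons)
    alive = np.ones(n_photons, dtype=bool)
    while alive.any():
        n_alive = int(alive.sum())
        # free path between interactions: exponential with rate mu_t
        step = -np.log(1.0 - rng.random(n_alive)) / mu_t
        lengths[alive] += step
        # each interaction is absorption with probability mu_a / mu_t
        absorbed = rng.random(n_alive) < (mu_a / mu_t)
        idx = np.flatnonzero(alive)
        alive[idx[absorbed]] = False
    return lengths
```

With mu_a = 0.5/mm the mean sampled path length converges to 1/mu_a = 2 mm, which is the kind of statistical relationship a network can learn to invert.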
Raman imaging continues to grow in popularity as a label-free technique for characterizing the underlying chemical structure of biological materials, both in vitro and in vivo. While Raman spectra demonstrate high chemical specificity, spontaneous Raman scattering is an inherently weak process and requires prohibitively long acquisition times. When Raman is utilized to image highly scattering cellular environments, integration times can be on the order of several minutes to hours. Recently developed compressed sensing techniques can greatly improve hyperspectral Raman acquisition times by randomly under-sampling the spatial dimensions. A digital micromirror device (DMD) is used to spatially encode the image plane. The encoded image is then propagated to a spectrometer where the spectral components are produced by shearing one spatial dimension. Several reconstruction algorithms have been developed that can then be used to recover the original image. Here, we will present single-shot, 2D Raman imaging of CHO cells using a compressed hyperspectral Raman microscope. This system provides an order of magnitude improvement on traditional hyperspectral acquisition rates. Single-shot compressed hyperspectral Raman images can reveal biochemical changes due to short-lifetime dynamic processes. These improvements will allow imaging of samples that metabolize quickly, rapidly oxidize, or are physically altered under experimental conditions.
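The random spatial under-sampling step can be illustrated as a binary DMD pattern applied to the image plane. This sketch deliberately omits the spectrometer shearing and the compressed-sensing reconstruction stages, and the 50% keep fraction is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(3)

def dmd_encode(image, keep_fraction=0.5):
    """Randomly under-sample an image's spatial dimensions, as a binary
    DMD micromirror pattern would before the light reaches the spectrometer."""
    pattern = rng.random(image.shape) < keep_fraction  # random on/off mirrors
    return image * pattern, pattern
```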
Scanning confocal Raman spectroscopy was applied for detecting and identifying topically applied ocular pharmaceuticals on rabbit corneal tissue. Raman spectra for Cyclosporin A, Difluprednate, and Dorzolamide were acquired together with Raman spectra from rabbit corneas with an unknown amount of applied drug. Kernel principal component analysis (KPCA) was then used to explore a transform that can describe the acquired set of Raman spectra. Using this transform, we observe some spectral similarity between cornea spectra and Cyclosporin A, with little similarity to Dorzolamide and Difluprednate. Further investigation is needed to identify why these differences occur.
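KPCA itself reduces to building a kernel Gram matrix over the spectra, centering it in feature space, and eigendecomposing. A minimal numpy sketch, assuming an RBF kernel with an illustrative `gamma` (the kernel and parameters actually used for the cornea spectra are not specified here):

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Project rows of X (samples x features) onto the top kernel
    principal components using an RBF kernel."""
    # pairwise squared distances and the RBF Gram matrix
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    K = np.exp(-gamma * d2)
    # center the Gram matrix in feature space
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one
    # eigendecompose the symmetric centered kernel, largest first
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[order], vecs[:, order]
    # scale unit eigenvectors so rows are the projected coordinates
    return vecs * np.sqrt(np.maximum(vals, 0.0))
```

Spectra that cluster together in the projected coordinates are similar under the chosen kernel, which is how similarity between cornea and drug spectra can be assessed.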
Identification and analysis of laser-induced lesions on the retina can be challenging in both the research and clinical settings depending on the age of a lesion and the imaging modality used for detection. Previous research exploring retinal damage thresholds utilized the consensus of an expert panel to confirm energies required for minimal visible lesions, a method that includes some subjectivity. Because of this, there is a desire to develop an image processing architecture to accurately locate retinal laser lesions in images generated from clinically relevant modalities. Issues such as imaging aberrations inducing circular artifacts, perceived stretch in lesions, and differences in the appearance of lesions across the dataset preclude use of traditional image processing tools. A database containing images of laser lesions has been developed in order to provide a reference for researchers and clinicians. In this work, we explored using various Convolutional Neural Network (CNN) architectures and preprocessing techniques to more objectively identify and analyze retinal laser lesions. Specifically, we developed frequency domain filtering techniques in order to emphasize lesion qualities. We consider this task to be one of image segmentation to make the networks somewhat size invariant. Since the lesions account for a small fraction of the image pixels, we implemented an intersection-based loss function. We evaluated the performance of our trained networks against more complicated architecture variants. Additionally, we trained a network to segment and classify lesions as the result of photochemical, photomechanical, or photothermal damage.
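One common intersection-based loss suited to such class imbalance is the soft Dice loss, sketched below; the exact loss used in the work may differ, and the epsilon smoothing term is a standard but illustrative choice:

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: pred holds foreground probabilities in [0, 1],
    target is a binary mask. Small foreground regions still contribute
    strongly, unlike plain per-pixel cross-entropy."""
    intersection = np.sum(pred * target)
    denom = np.sum(pred) + np.sum(target)
    return 1.0 - (2.0 * intersection + eps) / (denom + eps)
```

A perfect prediction gives a loss near 0; a prediction disjoint from the target gives a loss near 1, regardless of how few pixels the lesion occupies.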