Purpose: The diagnosis of primary bone tumors is challenging because the initial complaints are often non-specific, yet early detection of bone cancer is crucial for a favorable prognosis. Lesions may be found incidentally on radiographs obtained for other reasons, but these early indications are often missed. We propose an automatic algorithm for detecting bone lesions in conventional radiographs to facilitate early diagnosis. Detecting lesions in such radiographs is challenging for two reasons. First, the prevalence of bone cancer is very low, so any method must show high precision to avoid a prohibitive number of false alarms. Second, radiographs taken in health maintenance organizations (HMOs) or emergency departments (EDs) exhibit inherent diversity due to different X-ray machines, technicians, and imaging protocols, which poses a major challenge to any automatic analysis method.

Approach: We propose training an off-the-shelf object detection algorithm to detect lesions in radiographs. The novelty of our approach stems from a dedicated preprocessing stage that directly addresses the diversity of the data. The preprocessing consists of self-supervised region-of-interest detection using a vision transformer (ViT), and a foreground-based histogram equalization that enhances contrast in the relevant regions only.

Results: We evaluate our method via a retrospective study that analyzes bone tumors on radiographs acquired from January 2003 to December 2018 under diverse acquisition protocols. Our method obtains 82.43% sensitivity at a 1.5% false-positive rate and surpasses existing preprocessing methods. For lesion detection, it achieves 82.5% accuracy and an IoU of 0.69.

Conclusions: The proposed preprocessing method enables effective handling of the inherent diversity of radiographs acquired in HMOs and EDs.
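The foreground-based equalization idea can be sketched as follows. This is a minimal illustration, assuming the foreground mask has already been produced (in the paper, by the self-supervised ViT stage); it is not the authors' implementation.

```python
import numpy as np

def foreground_histogram_equalization(img, mask):
    """Equalize intensities using statistics of the foreground only.

    img  : 2-D uint8 array (e.g., a radiograph).
    mask : boolean array, True at foreground (region-of-interest) pixels.
    """
    fg = img[mask]
    # Histogram and CDF computed from foreground pixels only,
    # so background (air, collimator edges) cannot skew the contrast.
    hist = np.bincount(fg.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-12)
    lut = np.round(255 * cdf).astype(np.uint8)
    out = img.copy()
    out[mask] = lut[fg]  # remap only the foreground pixels
    return out
```

Restricting the histogram to the mask is what makes the enhancement robust to varying amounts of background across acquisition protocols.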
The use of photo-activated fluorescent molecules to create long sequences of low emitter-density diffraction-limited images enables high-precision emitter localization. However, this is achieved at the cost of lengthy imaging times, limiting temporal resolution. In recent years, a variety of approaches have been suggested to reduce imaging times, ranging from classical optimization and statistical algorithms to deep learning methods. Classical methods often rely on prior knowledge of the optical system and require heuristic adjustment of parameters, or do not achieve sufficient performance. Deep learning methods proposed to date tend to suffer from poor generalization outside the specific distribution they were trained on, and tend to yield black-box solutions that are hard to interpret. In this paper, we suggest combining a recent high-performing classical method, SPARCOM, with model-based deep learning, using the algorithm-unfolding approach, which relies on an iterative algorithm to design a compact neural network that incorporates domain knowledge. We show that the resulting network, Learned SPARCOM (LSPARCOM), requires far fewer layers and parameters, and can be trained on a single field of view. Nonetheless, it yields comparable or superior results to those obtained by SPARCOM, with no heuristic parameter determination or explicit knowledge of the point spread function, and is able to generalize better than standard deep learning techniques. Thus, we believe LSPARCOM will find broad use in single-molecule localization microscopy of biological structures, and pave the way to interpretable, efficient live-cell imaging in a broad range of settings.
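The unfolding idea can be illustrated with a LISTA-style network in which each "layer" mimics one soft-thresholded ISTA iteration. In this sketch the matrices are simply initialized from a known ISTA iteration rather than learned; LSPARCOM's actual architecture operates in the correlation domain and differs in detail.

```python
import numpy as np

def soft_threshold(x, theta):
    """Elementwise shrinkage operator used in every unfolded layer."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unfolded_ista(y, We, S, thetas):
    """Unrolled network: each layer is a soft-thresholded linear update.

    In learned unfolding, We, S and the per-layer thresholds `thetas`
    are trainable parameters; here they are fixed for illustration.
    """
    x = soft_threshold(We @ y, thetas[0])
    for theta in thetas[1:]:
        x = soft_threshold(We @ y + S @ x, theta)
    return x
```

Because each layer corresponds to one interpretable iteration, a few tens of layers can replace thousands of classical iterations, which is the source of the compactness claimed for unfolded networks.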
Breakthroughs in the field of chemistry have enabled surpassing the classical optical diffraction limit by utilizing photo-activated fluorescent molecules. In the single-molecule localization microscopy (SMLM) approach, a sequence of diffraction-limited images, produced by a sparse set of emitting fluorophores with minimally overlapping point-spread functions, is acquired, allowing the emitters to be localized with high precision by simple post-processing. However, the low emitter-density concept requires lengthy imaging times to achieve, on the one hand, full coverage of the imaged specimen and, on the other, minimal overlap. Thus, this concept in its classical form has low temporal resolution, limiting its application to slow-changing specimens. In recent years, a variety of approaches have been suggested to reduce imaging times by allowing the use of higher emitter densities. One of these methods is the sparsity-based approach for super-resolution microscopy from correlation information of high emitter-density frames, dubbed SPARCOM, which utilizes sparsity in the correlation domain while assuming that the blinking emitters are uncorrelated over time and space, yielding both high temporal and spatial resolution. However, SPARCOM has only been formulated for the two-dimensional setting, where the sample is assumed to be an infinitely thin single layer, and is thus unsuitable for most biological specimens. In this work, we present an extension of SPARCOM to the more challenging three-dimensional scenario, where we recover a volume from a set of recorded frames, rather than an image.
It has been shown that sub-diffraction structures can be resolved in acoustic-resolution photoacoustic imaging thanks to norm-based iterative reconstruction algorithms exploiting prior knowledge of the point spread function (PSF) of the imaging system. Here, we demonstrate that super-resolution is still achievable when the receiving ultrasonic probe has far fewer elements than used conventionally (8 against 128). To this end, a proof-of-concept experiment was conducted. A microfluidic circuit containing five parallel microchannels (channel width 40 μm, center-to-center distance 180 μm) filled with dye was exposed to 5 ns laser pulses (λ=532 nm, fluence=3.0 mJ/cm², PRF=100 Hz). Photoacoustic signals generated by the sample were captured by a linear ultrasonic array (128 elements, pitch=0.1 mm, fc=15 MHz) connected to an acquisition device. The forward problem is modelled in matrix form as Y=AX, where Y is the measured photoacoustic signal and X is the object to reconstruct. The matrix A contains the PSFs at all points of the reconstruction grid, and was derived from a single PSF acquired experimentally for a 10-μm-wide microchannel. For the reconstruction, we used a sparsity-based minimization algorithm. While the conventional image obtained by beamforming the signals measured with all 128 elements of the probe cannot resolve the individual microchannels, our sparsity-based reconstruction leads to super-resolved images with only 8 elements of the probe (regularly spaced over the full probe aperture), with an image quality comparable to that obtained with all 128 elements. These results pave the way towards super-resolution in 3D photoacoustic imaging with sparse transducer arrays.
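The forward model Y=AX can be illustrated in 1-D: each column of A is the measured PSF shifted to one position of the reconstruction grid. Reconstruction then amounts to solving a sparsity-regularized least-squares problem over X; the paper's exact solver is not detailed here, and this construction is only a sketch.

```python
import numpy as np

def build_psf_matrix(psf, grid_size):
    """Columns of A are copies of the (here 1-D) PSF, one per grid
    position, so the measurement model reads y = A @ x."""
    m = grid_size + len(psf) - 1
    A = np.zeros((m, grid_size))
    for j in range(grid_size):
        A[j:j + len(psf), j] = psf
    return A
```

Because the PSF is measured experimentally (here, from a single 10-μm channel), no analytical model of the imaging system is required to assemble A.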
Most compressed sensing (CS) theory to date focuses on incoherent sensing, that is, on sensing matrices whose columns are highly uncorrelated. However, sensing systems with naturally occurring correlations arise in many applications, such as signal detection, motion detection, and radar. Moreover, in these applications it is often not necessary to know the support of the signal exactly; instead, small errors in the support and signal are tolerable. Despite the abundance of work utilizing incoherent sensing matrices, for this type of tolerant recovery we suggest that coherence is actually beneficial. We promote the use of coherent sampling when tolerant support recovery is acceptable, and demonstrate its advantages empirically. In addition, we provide a first step towards theoretical analysis by considering a specific reconstruction method for selected signal classes.
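Coherence here refers to the largest correlation between distinct columns of the sensing matrix. A small sketch of how it is measured, using the standard mutual-coherence definition (not anything specific to this paper):

```python
import numpy as np

def mutual_coherence(A):
    """Maximum absolute inner product between distinct normalized columns.

    Incoherent sensing asks for this value to be small; the abstract
    argues that for tolerant support recovery, large values can help.
    """
    An = A / np.linalg.norm(A, axis=0, keepdims=True)
    G = np.abs(An.T @ An)          # Gram matrix of normalized columns
    np.fill_diagonal(G, 0.0)       # ignore each column's self-correlation
    return G.max()
```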
Low-rank matrix recovery addresses the problem of recovering an unknown low-rank matrix from few linear measurements. Nuclear-norm minimization is a tractable approach with a recent surge of strong theoretical backing. Analogous to the theory of compressed sensing, these results have required random measurements. For example, m ≥ Cnr Gaussian measurements are sufficient to recover any rank-r n × n matrix with high probability. In this paper we address the theoretical question of how many measurements are needed via any method whatsoever, tractable or not. We show that for a family of random measurement ensembles, m ≥ 4nr − 4r² measurements are sufficient to guarantee, with probability one, that no rank-2r matrix lies in the null space of the measurement operator. This is a necessary and sufficient condition for uniform recovery of all rank-r matrices by rank minimization. Furthermore, this value of m precisely matches the dimension of the manifold of all rank-2r matrices. We also prove that for a fixed rank-r matrix, m ≥ 2nr − r² + 1 random measurements are enough to guarantee recovery using rank minimization. These results give a benchmark against which the efficacy of nuclear-norm minimization may be compared.
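Nuclear-norm minimization replaces the (intractable) rank with the sum of singular values. Its basic computational primitive is singular-value thresholding, the proximal operator of the nuclear norm; the sketch below shows both, as standard operations rather than any solver from the paper.

```python
import numpy as np

def nuclear_norm(X):
    """Sum of singular values: the convex surrogate for rank."""
    return np.linalg.svd(X, compute_uv=False).sum()

def svt(X, tau):
    """Singular-value thresholding: shrink each singular value by tau.

    This is the proximal operator of tau * ||.||_* and the core step of
    many iterative nuclear-norm minimization algorithms.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return U @ np.diag(s) @ Vt
```

Shrinking singular values rather than matrix entries is what biases iterates toward low rank, mirroring how soft thresholding biases vectors toward sparsity in compressed sensing.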
KEYWORDS: Ultrasonography, Transducers, Signal processing, Analog electronics, Modulation, Image processing, Signal detection, Acoustics, Signal to noise ratio, Statistical analysis
Recent developments in medical treatment techniques place challenging demands on ultrasound imaging systems in terms of both image quality and raw data size. Traditional sampling methods result in very large amounts of data, increasing the demands on processing hardware and limiting flexibility in the post-processing stages.

In this paper, we apply compressed sensing (CS) techniques to analog ultrasound signals, following the recently developed Xampling framework. The result is a system with significantly reduced sampling rates and, in turn, significantly reduced data size, while maintaining the quality of the resulting images.
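The digital recovery stage of such a system can be illustrated with a generic greedy CS solver such as orthogonal matching pursuit (OMP); the Xampling front end itself is an analog sub-Nyquist acquisition scheme and is not reproduced here, so this is only a sketch of the reconstruction side.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select the k columns of A
    most correlated with the residual, then least-squares fit on them."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        cols = A[:, support]
        coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
        residual = y - cols @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

Given a measurement matrix that models the reduced-rate acquisition, a solver of this kind recovers the sparse signal representation from far fewer samples than classical Nyquist-rate processing would use.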
We address the problem of motion blur removal from an image sequence acquired by a sensor with a nonlinear response. Motion blur removal in purely linear settings has been studied extensively in the past. In practice, however, sensors exhibit nonlinearities, which must also be compensated for. In this paper, we study the problem of joint motion blur removal and nonlinearity compensation. Two naive approaches are to apply the inverse of the nonlinearity either before or after a deblurring stage. These strategies require a preliminary motion-estimation stage, which may be inaccurate for complex motion fields. Moreover, even when the motion parameters are known, we provide theoretical arguments, and show through simulations, that these methods yield unsatisfactory results. In this work, we propose an efficient iterative algorithm for joint nonlinearity compensation and motion blur removal. Our approach relies on a recently developed theory for nonlinear and nonideal sampling setups, and does not require knowledge of the motion responsible for the blur. We show through experiments the effectiveness of our method compared with alternative approaches.