Purpose: Deep learning has demonstrated excellent performance in enhancing noisy or degraded biomedical images. However, many of these models require access to a noise-free version of the images to provide supervision during training, which limits their utility. Here, we develop an algorithm (noise2Nyquist) that leverages the fact that Nyquist sampling provides guarantees about the maximum difference between adjacent slices in a volumetric image, which allows denoising to be performed without access to clean images. We aim to show that our method is more broadly applicable and more effective than other self-supervised denoising algorithms on real biomedical images, and that it provides comparable performance to algorithms that need clean images during training.
Approach: We first provide a theoretical analysis of noise2Nyquist and an upper bound for denoising error based on sampling rate. We then demonstrate its effectiveness at denoising a simulated example as well as real fluorescence confocal microscopy, computed tomography, and optical coherence tomography images.
Results: We find that our method has better denoising performance than existing self-supervised methods and is applicable to datasets for which clean versions are not available. Our method achieved peak signal-to-noise ratio (PSNR) within 1 dB and structural similarity (SSIM) index within 0.02 of supervised methods. On medical images, it outperforms existing self-supervised methods by an average of 3 dB in PSNR and 0.1 in SSIM.
Conclusion: noise2Nyquist can be used to denoise any volumetric dataset sampled at at least the Nyquist rate, making it useful for a wide variety of existing datasets.
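The abstract's core idea is that, under Nyquist sampling, adjacent slices of a volume share nearly the same underlying signal while their noise realizations are independent, so a neighboring slice can stand in for a clean training target. As a minimal sketch of how such self-supervised training pairs could be constructed (a hypothetical helper illustrating the idea, not the authors' implementation):

```python
import numpy as np

def make_adjacent_slice_pairs(volume):
    """Build self-supervised (input, target) pairs from a noisy volume.

    Each noisy slice is paired with its neighbor along the slice axis as
    the regression target. If the volume is sampled at the Nyquist rate,
    the signal in adjacent slices is nearly identical while the noise is
    independent, so regressing one slice onto the next approximates
    supervised denoising (the noise2noise argument).
    Hypothetical helper for illustration only.
    """
    inputs = volume[:-1]   # slices 0 .. N-2
    targets = volume[1:]   # slices 1 .. N-1
    return inputs, targets

# Toy volume: 8 slices of 16x16 whose signal varies slowly along the slice
# axis, corrupted by independent Gaussian noise per slice.
rng = np.random.default_rng(0)
signal = np.linspace(0.0, 1.0, 8)[:, None, None] * np.ones((8, 16, 16))
noisy = signal + 0.1 * rng.standard_normal(signal.shape)

x, y = make_adjacent_slice_pairs(noisy)
print(x.shape, y.shape)  # (7, 16, 16) (7, 16, 16)
```

In practice the pairs would feed a denoising network; the sketch only shows the pairing step, which is the part the Nyquist argument licenses.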
We present the initial results of two unsupervised out-of-distribution (OOD) detection algorithms, designed to flag dermoscopic images of lesions from classes not seen during training. When evaluated on the ISIC 2019 dataset, using 6 classes as in-distribution and 2 as OOD, the scores from our algorithms produced AUROCs of 0.694/0.642. The images in ISIC 2019 mainly come from two datasets, HAM and BCN. When restricting our evaluation to images from HAM only, the AUROC was 0.758/0.765; when considering only images from BCN, the AUROC dropped to 0.645/0.504.
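The abstract reports AUROC as the detection metric. As a minimal illustration of how such a figure is computed from per-image OOD scores (the ranking form of AUROC; not the authors' evaluation pipeline), the statistic can be written directly as a pairwise comparison:

```python
import numpy as np

def auroc(in_scores, ood_scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly drawn OOD example receives a higher
    OOD score than a randomly drawn in-distribution example, with ties
    counted as half. Equivalent to integrating the ROC curve."""
    in_scores = np.asarray(in_scores, dtype=float)
    ood_scores = np.asarray(ood_scores, dtype=float)
    diff = ood_scores[:, None] - in_scores[None, :]  # all OOD/in-dist pairs
    return float(np.mean(diff > 0) + 0.5 * np.mean(diff == 0))

# Toy check: OOD scores that partially overlap the in-distribution ones.
score = auroc([0.1, 0.4], [0.3, 0.9])
print(score)  # 0.75
```

An AUROC of 0.5 corresponds to chance-level ranking, which is why the 0.504 result on the BCN subset indicates the detector fails there.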