There is currently substantial interest in understanding batch reading as a way to improve performance in breast-cancer screening, and in identifying the mechanisms underlying any performance effects. We evaluated batch reading of digital breast tomosynthesis (DBT) images for breast cancer screening using observational data acquired at the University of Pittsburgh Medical Center (UPMC). We studied batches of screening exams defined by completion-time differences between sequentially interpreted cases: a completion-time difference exceeding a threshold marked the start of a new batch. After exclusions, the data consisted of 121,652 exams from 15 readers, with a total of 1,081 cancers. We found that the inter-exam time threshold used for batch definition introduces a selection bias that has a large impact on the cancer rate of the first case in a batch. For the smallest threshold (<1 minute), essentially all cases are defined as the first case of a new batch, and the first-case cancer rate equaled the overall cancer rate of the data, 8.9/1000. As the threshold increased to 4-5 minutes, the cancer rate of the first case in a batch rose to nearly double the overall rate, 16.0/1000. This threshold excluded many non-cancer cases, which are typically read in 2-3 minutes for DBT, while still capturing most cancer cases, which take longer to complete. At a 10-minute completion-time difference, the first-case cancer rate decreased to 12.6/1000 and stabilized. We argue that this increase in cancer rate is likely due to readers terminating batch reading upon encountering a difficult case. Our results demonstrate a clear selection bias in batches defined by inter-exam time, and suggest adjusting for cancer rate to reduce the effect of this bias.
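The batch definition described above (a new batch starts whenever the gap between consecutive completion times exceeds a threshold) can be sketched as follows; the function name and interface are illustrative, not taken from the study:

```python
from datetime import datetime, timedelta

def assign_batches(completion_times, threshold_minutes):
    """Assign sequentially interpreted exams to batches.

    A gap between consecutive completion times exceeding the threshold
    starts a new batch; the exam after the gap becomes the batch's
    first case. (Hypothetical helper illustrating the batch definition.)
    """
    batch_ids = [0] * len(completion_times)
    batch = 0
    for i in range(1, len(completion_times)):
        gap = completion_times[i] - completion_times[i - 1]
        if gap > timedelta(minutes=threshold_minutes):
            batch += 1  # this exam opens a new batch
        batch_ids[i] = batch
    return batch_ids
```

With a 5-minute threshold, a 17-minute gap splits the sequence into two batches; with a sub-minute threshold, nearly every DBT exam (typically 2-3 minutes apart) becomes the first case of its own batch, which is why the first-case cancer rate then equals the overall rate.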
KEYWORDS: Medical imaging, Digital breast tomosynthesis, Visualization, Digital mammography, Digital imaging, Image processing, Breast, Tomosynthesis, Mammography, Imaging systems
Purpose: Radiologists and other image readers spend prolonged periods inspecting medical images. The visual system can rapidly adapt or adjust sensitivity to the images that an observer is currently viewing, and previous studies have demonstrated that this can lead to pronounced changes in the perception of mammogram images. We compared these adaptation effects for images from different imaging modalities to explore both general and modality-specific consequences of adaptation in medical image perception.
Approach: We measured perceptual changes induced by adaptation to images acquired by digital mammography (DM) or digital breast tomosynthesis (DBT), which have both similar and distinct textural properties. Participants (nonradiologists) adapted to images from the same patient acquired from each modality or for different patients with American College of Radiology Breast Imaging Reporting and Data System (BI-RADS) classification of dense or fatty tissue. The participants then judged the appearance of composite images formed by blending the two adapting images (i.e., DM versus DBT or dense versus fatty in each modality).
Results: Adaptation to either modality produced similar significant shifts in the perception of dense and fatty textures, reducing the salience of the adapted component in the test images. In side-by-side judgments, a modality-specific adaptation effect was not observed. However, when the images were directly fixated during adaptation and testing, so that the textural differences between the modalities were more visible, significantly different changes in the sensitivity to the noise in the images were observed.
Conclusions: These results confirm that observers can readily adapt to the visual properties or spatial textures of medical images in ways that can bias their perception of the images, and that adaptation can also be selective for the distinctive visual features of images acquired by different modalities.
Retinal examination using a direct ophthalmoscope is preferred over other techniques for screening purposes because of its portability and high magnification, despite issues with power sustainability and cost. With an increasing number of low-cost, sustainable devices available on the market, it is important to assess their efficacy. We compared three devices - the Arclight ophthalmoscope, a D-Eye attached to an iPhone 6, and the conventional Heine K180 ophthalmoscope - in terms of ease of examination, ease of use, field of view, color rendition, patient comfort, length of examination, and closeness to the eye. Two trained optometrists examined 26 undilated eyes, graded the ease of retinal examination and ease of use, and assessed the vertical cup:disc ratio (VCDR). Patients reported their comfort level in terms of glare produced by the light source, length of examination, and closeness to the eye. The examiners had good agreement for all assessments. VCDR assessment was not possible in 10/26 (38.4%) of the examinations, in 3/26 (11.5%) of examinations with Arclight, and in 0/26 of examinations with D-Eye. Ease-of-use scores were higher for Arclight and D-Eye than for Heine. D-Eye had a relatively larger field of view than the other two devices. Heine ranked first in color rendition. The luminance of the high-beam setting of Arclight was more than twice that of Heine and D-Eye; nevertheless, patients reported uncomfortable glare with Heine (14/26, 53.8%), significant glare with Arclight (16/26, 61.5%), and some or no glare with D-Eye. The examination time was shortest with D-Eye. Overall, D-Eye scored best on most evaluation items, followed by Arclight.
Fundus cameras are the current clinical standard for capturing retinal images, which are used to diagnose a variety of sight-threatening conditions. Traditional fundus cameras are not easily transported, making them unsuitable for field use, and they are expensive. For these reasons, a variety of technologies have been developed, such as the D-EYE Digital Ophthalmoscope (D-EYE Srl, Padova, Italy), which is compatible with various cellphone cameras. This paper compares the image quality of the Nidek RS-330 OCT Retina Scan Duo (Nidek, Tokyo, Japan) and the D-EYE paired with an iPhone 6 (Apple, Cupertino, USA). Twenty-one participants were enrolled in the study, of whom 14 underwent nonmydriatic and mydriatic imaging with the D-EYE and the Nidek, and seven underwent nonmydriatic imaging with the D-EYE and the Nidek. The images were co-registered and cropped so that the region of interest was equal in both the D-EYE and Nidek images, as the D-EYE had a smaller field of view. Using the Nidek image as the reference, objective full-reference image quality analysis was performed, yielding metrics such as the structural similarity index and peak signal-to-noise ratio. The image quality of the D-EYE was found to be limited by the attached iPhone camera, and lower than that of the Nidek. Quantifying the differences between the D-EYE and Nidek allows targeted development of smartphone camera attachments that can help bridge the current gap in image quality.
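The two full-reference metrics named above can be computed from first principles. A minimal sketch follows, assuming float images scaled to a known data range; note that `global_ssim` is a single-window simplification (means and variances over the whole image), whereas the standard SSIM averages this quantity over local sliding windows:

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    # Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE).
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(x, y, data_range=1.0):
    # Single-window SSIM: luminance, contrast, and structure comparison
    # using global statistics. C1/C2 are the usual stabilizing constants.
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images give SSIM = 1 and infinite PSNR; a uniform offset of 0.1 on a [0, 1] image gives an MSE of 0.01 and hence a PSNR of exactly 20 dB.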
The direct ophthalmoscope, a handheld device, gives a highly magnified image of the retina. However, the sustainability of its power source and its cost are limitations given the usage demand. We compared a low-cost, solar-powered Arclight ophthalmoscope with a standard ophthalmoscope, the Heine K180, in terms of ease of examination, ease of use, field of view, color rendition, and patient comfort. Two clinically trained optometrists examined 28 patients, graded the ease of retinal examination, ease of use, patient comfort, and length of examination (scale 1-4), and assessed the cup:disc ratio, an important diagnostic parameter for glaucoma. The examiners had good agreement for all assessments. Of a total of 78 examinations, only 8 (10.3%) did not result in a cup:disc ratio measurement in the undilated-pupil condition using both devices. Ease of use was scored higher for Arclight than for Heine, but the difference was not statistically significant. In conditions such as large discs, the Arclight made examination easier because of its larger field of view. Color rendition was better with the Heine device. In undilated pupils, patients often reported significant glare with Heine; post-dilation, however, they reported more glare with Arclight than with Heine (73% versus 55%). The performance of the Arclight was comparable to that of the Heine, and it can be considered a low-cost alternative to the standard direct ophthalmoscope, especially for large-scale patient examinations in developing countries where cost is a factor.
Optical coherence tomography (OCT) images provide several indicators, e.g., the shape and thickness of different retinal layers, which can be used for various clinical and non-clinical purposes. We propose an automated classification method to identify different ocular diseases based on local binary pattern features. The database consists of normal and diseased human-eye SD-OCT images. We use a multiphase approach to build our classifier, comprising preprocessing, meta-learning, and active learning. Preprocessing handles missing features in the images by replacing them with the mean or median of the corresponding feature. All features are then passed through a correlation-based feature subset selection algorithm to retain the most informative features and omit the less informative ones. A meta-learning approach combines an SVM and a random forest to obtain a more robust classifier, and active learning further strengthens the classifier around the decision boundary. Preliminary experimental results indicate that our method is able to differentiate between normal and non-normal retinas with an area under the ROC curve (AUC) of 98.6%, and to diagnose three common retina-related diseases, i.e., age-related macular degeneration, diabetic retinopathy, and macular hole, with AUCs of 100%, 95%, and 83.8%, respectively. These results indicate better performance than most previous works in the literature.
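The local binary pattern features underlying this classifier encode each pixel's texture as a byte recording which of its eight neighbours are at least as bright as it, then summarise an image by the histogram of these codes. A basic sketch is below; the paper's pipeline may well use a rotation-invariant or uniform LBP variant instead:

```python
import numpy as np

def lbp_histogram(image):
    """Normalised histogram of basic 8-neighbour LBP codes (0-255).

    Illustrative sketch of the texture feature; real OCT pipelines
    typically use uniform/rotation-invariant variants.
    """
    center = image[1:-1, 1:-1]
    codes = np.zeros(center.shape, dtype=int)
    # clockwise neighbour offsets starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = image.shape
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = image[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes += (neighbour >= center).astype(int) << bit
    hist = np.bincount(codes.ravel(), minlength=256)
    return hist / hist.sum()
```

A perfectly flat region yields code 255 everywhere (every neighbour equals the centre), so its histogram has all mass in the last bin; such per-image histograms are what feed the SVM/random-forest meta-learner described above.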
Accurate segmentation of spectral-domain optical coherence tomography (SD-OCT) images helps diagnose retinal pathologies and facilitates the study of their progression or remission. Manual segmentation depends on clinical expertise and is highly time-consuming. Furthermore, poor image contrast due to the high reflectivity of some retinal layers, together with heavy speckle noise, poses severe challenges to automated segmentation algorithms. The first step towards retinal OCT segmentation, therefore, is to create a noise-free image with edge details preserved, as achieved by wavelet-domain image reconstruction preceded by bilateral filtering. In this context, the current study compares image denoising with a simple Gaussian filter against wavelet-based denoising, to help investigators decide whether an advanced denoising technique is necessary for accurate graph-based intraretinal layer segmentation. A statistical comparison between the mean thicknesses of the six layers segmented by the algorithm and those reported in a previous study shows non-significant differences for five of the layers (p > 0.05) and a significant difference for one layer (p = 0.04) when denoising with the Gaussian filter. When bilateral filtering and wavelet-based denoising are applied before boundary delineation, layer-thickness differences between the two algorithms are non-significant for all six retinal layers (p > 0.05). However, this minor improvement in accuracy comes at the expense of a substantial increase in computation time (∼10 s when run on a specific CPU) and logical complexity. It is therefore debatable whether one should opt for advanced denoising techniques over a simple Gaussian filter when implementing graph-based OCT segmentation algorithms.
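The "simple Gaussian filter" baseline referred to above is just separable Gaussian smoothing: convolve a normalised 1-D kernel along the rows, then along the columns. A self-contained sketch (reflective border padding; kernel radius of 3σ is a common convention, not taken from the paper):

```python
import numpy as np

def gaussian_filter2d(image, sigma=1.0):
    # Build a normalised 1-D Gaussian kernel truncated at 3 sigma.
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    kernel /= kernel.sum()
    # Pad reflectively, then convolve rows and columns separately;
    # 'valid' mode returns the original image size after padding.
    padded = np.pad(image.astype(float), radius, mode='reflect')
    rows = np.apply_along_axis(np.convolve, 1, padded, kernel, mode='valid')
    return np.apply_along_axis(np.convolve, 0, rows, kernel, mode='valid')
```

Because the kernel sums to one, a constant image passes through unchanged, while white speckle-like noise is attenuated, which is the trade-off (blur versus noise suppression) the study weighs against wavelet-based denoising.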
Retinal layer shape and thickness are among the main indicators in the diagnosis of ocular diseases. We present an active contour approach to localize the intra-retinal boundaries of eight retinal layers in OCT images. The initial locations of the active contour curves are determined using a Viterbi dynamic programming method. The main energy function is the Chan-Vese active contour model without edges. A boundary term with an adaptive weight is added to the energy function to help the curves converge more precisely to the retinal layer edges in the final iterations, after the curves have evolved towards the boundaries. A wavelet-based denoising method removes speckle from the OCT images while preserving important details and edges. The performance of the proposed method was tested on a set of healthy and diseased-eye SD-OCT images. Comparing the proposed method against manual segmentation by an optometrist, our method obtained averages of 95.29%, 92.78%, 95.86%, 87.93%, 82.67%, and 90.25% for accuracy, sensitivity, specificity, precision, Jaccard index, and Dice similarity coefficient, respectively, over all segmented layers. These results demonstrate the robustness of the proposed method in determining the locations of the different retinal layers.
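The core of the Chan-Vese "active contours without edges" model is its piecewise-constant data term: the region inside and outside the contour are each summarised by a mean intensity, and the contour moves so that pixels sit with the closer mean. A minimal sketch of one such update, with the length/curvature regularisation and the paper's adaptive boundary term deliberately omitted:

```python
import numpy as np

def chan_vese_step(image, mask):
    """One data-term update of the piecewise-constant Chan-Vese model.

    Recomputes the mean intensity inside (c1) and outside (c2) the
    current region, then reassigns each pixel to the closer mean.
    (Sketch only: regularisation terms are left out.)
    """
    c1 = image[mask].mean()
    c2 = image[~mask].mean()
    new_mask = (image - c1) ** 2 < (image - c2) ** 2
    return new_mask, c1, c2
```

Iterating this step until the mask stops changing yields a two-phase segmentation; in the full model, the curvature term additionally keeps the evolving layer boundaries smooth.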
Segmentation of spectral-domain optical coherence tomography (SD-OCT) images facilitates visualization and quantification of sub-retinal layers for diagnosis of retinal pathologies. However, manual segmentation is subjective, expertise-dependent, and time-consuming, which limits the applicability of SD-OCT. Efforts are therefore being made to implement active contours, artificial intelligence, and graph search to segment retinal layers automatically, with accuracy comparable to that of manual segmentation, to ease clinical decision-making. Low optical contrast, heavy speckle noise, and pathologies nevertheless pose challenges to automated segmentation. The graph-based image segmentation approach stands out from the rest because of its ability to minimize the cost function while maximizing the flow. This study developed and implemented a shortest-path-based graph-search algorithm for automated intraretinal layer segmentation of SD-OCT images. The algorithm estimates the minimal-weight path between two graph nodes based on their gradients. Boundary position indices (BPI) are computed from the transitions between pixel intensities. The mean difference between the BPIs of two consecutive layers quantifies the individual layer thickness, and these thicknesses show statistically insignificant differences compared to a previous study [for the overall retina: p = 0.17; for individual layers: p > 0.05, except one layer: p = 0.04]. These results substantiate the accurate delineation of seven intraretinal boundaries in SD-OCT images by this algorithm, with a mean computation time of 0.93 seconds (64-bit Windows 10, Core i5, 8 GB RAM). Besides being self-reliant for denoising, the algorithm is further computationally optimized to restrict segmentation to a user-defined region of interest. Its efficiency and reliability, even in noisy image conditions, make it clinically applicable.
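The gradient-weighted shortest-path idea can be sketched as a Dijkstra search over the image grid: node costs are low where the vertical intensity gradient is high, moves go one column to the right with row steps of -1/0/+1, and the cheapest left-to-right path therefore traces a layer boundary. This is a generic illustration of the approach, not a reproduction of the paper's specific weighting scheme:

```python
import heapq
import numpy as np

def min_weight_boundary(image):
    """Row index of a gradient-following left-to-right path, per column.

    Sketch of graph-search layer segmentation: cost is low on strong
    vertical gradients, so the minimum-weight path hugs a boundary.
    """
    grad = np.abs(np.gradient(image.astype(float), axis=0))
    span = grad.max() - grad.min()
    grad = (grad - grad.min()) / span if span > 0 else grad
    cost = 1.0 - grad + 1e-5          # strictly positive node weights
    rows, cols = cost.shape
    dist = np.full((rows, cols), np.inf)
    prev = np.full((rows, cols), -1, dtype=int)
    heap = []
    for r in range(rows):             # virtual source: any starting row
        dist[r, 0] = cost[r, 0]
        heapq.heappush(heap, (dist[r, 0], r, 0))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:
            continue                  # stale heap entry
        if c == cols - 1:             # first settled node in the last column
            path = [r]                # is globally optimal (Dijkstra)
            for cc in range(cols - 1, 0, -1):
                path.append(prev[path[-1], cc])
            return path[::-1]
        for dr in (-1, 0, 1):         # allowed row steps to the next column
            nr = r + dr
            if 0 <= nr < rows and d + cost[nr, c + 1] < dist[nr, c + 1]:
                dist[nr, c + 1] = d + cost[nr, c + 1]
                prev[nr, c + 1] = r
                heapq.heappush(heap, (dist[nr, c + 1], nr, c + 1))
    return []
```

On a synthetic image with a single bright-to-dark transition, the recovered path sits on the high-gradient rows straddling that edge, which is the behaviour the boundary-position indices above rely on.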