Image Perception, Observer Performance, and Technology Assessment

Lack of agreement between radiologists: implications for image-based model observers

Author Affiliations
Juhun Lee, Robert M. Nishikawa, Margarita L. Zuley

University of Pittsburgh, Department of Radiology, Pittsburgh, Pennsylvania, United States

Ingrid Reiser

The University of Chicago, Department of Radiology, Chicago, Illinois, United States

John M. Boone

University of California Davis Medical Center, Department of Radiology, Sacramento, California, United States

J. Med. Imag. 4(2), 025502 (May 03, 2017). doi:10.1117/1.JMI.4.2.025502
History: Received November 3, 2016; Accepted April 17, 2017

Abstract. We tested whether radiologists agree when ranking different reconstructions of breast computed tomography images, both by their diagnostic (classification) performance and by their subjective image quality assessments. We used 102 pathology-proven cases (62 malignant, 40 benign) and an iterative image reconstruction (IIR) algorithm to obtain 24 reconstructions per case with different image appearances. Using image feature analysis, we selected 3 IIRs, 1 clinical reconstruction, and 50 lesions. The reconstructions spanned a range of image quality from smooth/low-noise to sharp/high-noise, with corresponding classifier performance ranging from an AUC of 0.62 to 0.96. Six experienced Mammography Quality Standards Act (MQSA) radiologists rated the likelihood of malignancy for each lesion. We conducted an additional reader study with the same radiologists and a subset of 30 lesions, in which the radiologists ranked each reconstruction according to their preference. The six radiologists disagreed on which reconstruction produced images with the highest diagnostic content, although they preferred the mid-sharpness/noise image appearance over the others. However, the reconstruction they preferred most did not match the one on which they performed best. Given these disagreements, it may be difficult to develop a single image-based model observer that is representative of a population of radiologists for this particular imaging task.
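Agreement among a set of readers' preference rankings, of the kind collected in the second reader study described above, is commonly quantified with Kendall's coefficient of concordance (W). The sketch below is illustrative only, not the analysis used in the paper; the ranking data are hypothetical, and the implementation assumes each reader produces a complete ranking with no ties.

```python
def kendalls_w(rankings):
    """Kendall's coefficient of concordance W for k raters each
    ranking the same n items (no ties). W = 1 means perfect
    agreement; W near 0 means essentially no agreement."""
    k = len(rankings)        # number of raters
    n = len(rankings[0])     # number of items ranked
    # Column sums: total rank received by each item across raters.
    totals = [sum(r[i] for r in rankings) for i in range(n)]
    mean_total = k * (n + 1) / 2.0
    s = sum((t - mean_total) ** 2 for t in totals)
    return 12.0 * s / (k ** 2 * (n ** 3 - n))

# Hypothetical preference rankings of 4 reconstructions by 3 readers:
# identical rankings give W = 1.0 (perfect concordance).
print(kendalls_w([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]))  # 1.0
# Two readers with exactly reversed preferences give W = 0.0.
print(kendalls_w([[1, 2, 3, 4], [4, 3, 2, 1]]))  # 0.0
```

A low W across six readers would be one concrete way to express the "lack of agreement" in the title as a single statistic.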

© 2017 Society of Photo-Optical Instrumentation Engineers

Citation

Juhun Lee, Robert M. Nishikawa, Ingrid Reiser, Margarita L. Zuley, and John M. Boone, "Lack of agreement between radiologists: implications for image-based model observers," J. Med. Imag. 4(2), 025502 (May 03, 2017). doi:10.1117/1.JMI.4.2.025502

