Image Perception, Observer Performance, and Technology Assessment

Content-based image retrieval in radiology: analysis of variability in human perception of similarity

Author Affiliations
Jessica Faruque

Stanford University, Department of Electrical Engineering, 350 Serra Mall, Stanford, California 94305, United States

Christopher F. Beaulieu

Stanford University Medical Center, Department of Radiology, 300 Pasteur Drive, Room S078, MC 5105, Stanford, California 94305, United States

Jarrett Rosenberg

Stanford University, Department of Radiology, Lucas MRS Imaging Center, 1201 Welch Road, Room P-280, Stanford, California 94305-5488, United States

Daniel L. Rubin

Stanford University, Departments of Radiology and Medicine (Biomedical Informatics), Richard M. Lucas Center P285, 1201 Welch Road, Stanford, California 94305-5488, United States

Dorcas Yao

Stanford University, Department of Radiology, 3801 Miranda Avenue, Palo Alto, California 94304-1290, United States

Sandy Napel

Stanford University, Department of Radiology, James H. Clark Center, 318 Campus Drive, W3.1, Stanford, California 94305-5441, United States

J. Med. Imag. 2(2), 025501 (Apr 03, 2015). doi:10.1117/1.JMI.2.2.025501
History: Received January 27, 2015; Accepted March 10, 2015

Abstract. We aim to develop a better understanding of the perception of similarity in focal computed tomography (CT) liver images in order to determine the feasibility of techniques for developing reference sets for training and validating content-based image retrieval systems. In an observer study, four radiologists and six nonradiologists assessed overall similarity and similarity in five image features in 136 pairs of focal CT liver lesions. We computed intra- and inter-reader agreement in these similarity ratings and examined the distributions of the ratings. The readers' ratings of overall similarity, and of similarity in each feature, appeared to be primarily bimodally distributed. Median Kappa scores for intra-reader agreement ranged from 0.57 to 0.86 across the five features and from 0.72 to 0.82 for overall similarity. Median Kappa scores for inter-reader agreement ranged from 0.24 to 0.58 across the five features and were 0.39 for overall similarity. There was no significant difference in agreement between radiologists and nonradiologists. Our results show that developing perceptual similarity reference standards is a complex task. Moderate to high inter-reader variability makes it difficult to divide the workload of rating perceptual similarity among many readers, while low intra-reader variability may make it possible to acquire large volumes of data by asking readers to view image pairs over many sessions.
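The agreement statistics reported above are Kappa scores. The abstract does not specify the exact variant used, so the following is only an illustrative sketch of unweighted Cohen's kappa applied to hypothetical binary similarity labels (1 = similar, 0 = dissimilar), mirroring the bimodal rating distributions the study describes; the reader names and data are invented for illustration.

```python
# Illustrative sketch (not the authors' code): chance-corrected agreement
# between two readers rating the same set of image pairs.
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Unweighted Cohen's kappa for two raters over the same items."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # Observed agreement: fraction of items the two raters label identically.
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement under independence: for each category, the product
    # of the two raters' marginal proportions, summed over categories.
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n)
                   for c in set(counts_a) | set(counts_b))
    return (observed - expected) / (1 - expected)

# Hypothetical binary similarity ratings from two readers on eight pairs.
reader1 = [1, 1, 0, 0, 1, 0, 1, 1]
reader2 = [1, 0, 0, 0, 1, 0, 1, 1]
print(cohens_kappa(reader1, reader2))  # prints 0.75
```

Kappa of 0 indicates chance-level agreement and 1 indicates perfect agreement, so the reported inter-reader medians (0.24 to 0.58) correspond to fair-to-moderate agreement, while the intra-reader medians (0.57 to 0.86) correspond to moderate-to-substantial agreement.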

© 2015 Society of Photo-Optical Instrumentation Engineers


