Image Perception, Observer Performance, and Technology Assessment

Comparative study of computational visual attention models on two-dimensional medical images

Author Affiliations
Gezheng Wen

The University of Texas at Austin, Electrical and Computer Engineering, Austin, Texas, United States

The University of Texas MD Anderson Cancer Center, Diagnostic Radiology, Houston, Texas, United States

Brenda Rodriguez-Niño, Furkan Y. Pecen

The University of Texas at Austin, Biomedical Engineering, Austin, Texas, United States

David J. Vining, Naveen Garg

The University of Texas MD Anderson Cancer Center, Diagnostic Radiology, Houston, Texas, United States

Mia K. Markey

The University of Texas at Austin, Biomedical Engineering, Austin, Texas, United States

The University of Texas MD Anderson Cancer Center, Imaging Physics, Houston, Texas, United States

J. Med. Imag. 4(2), 025503 (May 10, 2017). doi:10.1117/1.JMI.4.2.025503
History: Received February 24, 2017; Accepted April 18, 2017

Abstract. Computational modeling of visual attention is an active area of research. These models have been successfully employed in applications such as robotics. However, most computational models of visual attention are developed in the context of natural scenes, and their applicability to medical images has not been well investigated. Because radiologists must interpret a large number of clinical images in limited time, an efficient strategy for deploying their visual attention is necessary. Visual saliency maps, which highlight image regions that differ dramatically from their surroundings, are expected to be predictive of where radiologists fixate their gaze. We compared 16 state-of-the-art saliency models across three medical imaging modalities. The estimated saliency maps were evaluated against radiologists’ eye movements. The results show that the models achieved competitive accuracy on three evaluation metrics, but their rank order varied markedly across the three modalities. Moreover, the model ranks on the medical images were all considerably different from the model ranks on the benchmark MIT300 dataset of natural images. Thus, modality-specific tuning of saliency models is necessary to make them valuable for applications in fields such as medical image compression and radiology education.
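For readers unfamiliar with how a predicted saliency map is scored against recorded gaze data, the sketch below illustrates one commonly used metric, normalized scanpath saliency (NSS): the map is z-scored and then averaged over the pixels the observer fixated. The abstract does not name the three metrics used in this study, so this is an illustrative NumPy example (the function name normalized_scanpath_saliency and the random test data are hypothetical), not the authors' evaluation code.

```python
import numpy as np

def normalized_scanpath_saliency(saliency_map, fixation_mask):
    """Score a saliency map against a binary fixation map (NSS).

    saliency_map  : 2-D float array of model-predicted saliency.
    fixation_mask : 2-D boolean array of the same shape, True at pixels
                    where an observer fixated.
    Returns the mean z-scored saliency value at the fixated pixels;
    higher values indicate better agreement with the gaze data.
    """
    sal = saliency_map.astype(float)
    sal = (sal - sal.mean()) / (sal.std() + 1e-12)  # z-score the whole map
    return sal[fixation_mask.astype(bool)].mean()

# Hypothetical usage: a random saliency map scored against two fixation points.
rng = np.random.default_rng(0)
sal = rng.random((256, 256))
fix = np.zeros((256, 256), dtype=bool)
fix[100, 120] = True
fix[30, 200] = True
print(normalized_scanpath_saliency(sal, fix))
```

A chance-level map scores near zero on NSS, which is why z-scoring is applied before sampling at the fixation locations.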

© 2017 Society of Photo-Optical Instrumentation Engineers

Citation

Gezheng Wen, Brenda Rodriguez-Niño, Furkan Y. Pecen, David J. Vining, Naveen Garg, et al., "Comparative study of computational visual attention models on two-dimensional medical images," J. Med. Imag. 4(2), 025503 (May 10, 2017). http://dx.doi.org/10.1117/1.JMI.4.2.025503

