In this paper, we propose a region-of-interest-based (ROI-adaptive) fusion algorithm for infrared and visible images using the Laplacian pyramid method. First, we estimate the saliency map of the infrared image and, by normalizing this saliency map, divide the infrared image into two parts: the regions of interest (RoI) and the regions of non-interest (nRoI). The visible image is likewise divided into two parts with a Gaussian high-pass filter: the high-frequency regions (RoH) and the low-frequency regions (RoL). Second, we down-sample both the nRoI of the infrared image and the RoL of the visible image to serve as the inputs of the next pyramid level. Finally, we use the normalized saliency map of the infrared image as the weighting coefficient to obtain the base image at the top level, and select the maximum gray value between the RoI of the infrared image and the RoH of the visible image to obtain the detail image. In this way, our method preserves the target features of the infrared image and the texture details of the visible image at the same time. Experimental results show that the proposed fusion scheme outperforms other fusion algorithms in terms of both human visual perception and quantitative metrics.
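As a rough illustration of the pipeline described above, the sketch below implements the level-wise rules with OpenCV/NumPy under several assumptions: a simple frequency-tuned saliency estimate stands in for the paper's saliency model, the RoI/nRoI and RoH/RoL splits are approximated by a Gaussian low/high-pass decomposition plus a threshold on the normalized saliency map, and the pyramid depth, filter sigma, and RoI threshold are illustrative values rather than the paper's settings.

```python
import cv2
import numpy as np

def _norm_saliency(img):
    """Normalized saliency map in [0, 1]. A simple frequency-tuned estimate
    (|blurred image - global mean|) stands in for the paper's estimator."""
    blur = cv2.GaussianBlur(img, (0, 0), 2.0)
    sal = np.abs(blur - blur.mean())
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)

def fuse(ir, vis, levels=4, sigma=3.0, roi_thresh=0.5):
    """ROI-adaptive pyramid fusion of registered grayscale images (sketch)."""
    ir = ir.astype(np.float32)
    vis = vis.astype(np.float32)
    details = []
    for _ in range(levels):
        sal = _norm_saliency(ir)
        roi_mask = (sal >= roi_thresh).astype(np.float32)  # RoI vs. nRoI split
        vis_low = cv2.GaussianBlur(vis, (0, 0), sigma)     # RoL
        vis_high = vis - vis_low                           # RoH
        ir_low = cv2.GaussianBlur(ir, (0, 0), sigma)
        ir_high = ir - ir_low
        # Detail image at this level: max of the infrared detail inside the
        # RoI and the visible high-frequency component (RoH).
        details.append(np.maximum(roi_mask * ir_high, vis_high))
        # Down-sample the low-frequency parts as the next level's inputs.
        ir, vis = cv2.pyrDown(ir_low), cv2.pyrDown(vis_low)
    # Base image at the top level: normalized saliency map of the infrared
    # image as the weighting coefficient.
    w = _norm_saliency(ir)
    fused = w * ir + (1.0 - w) * vis
    # Reconstruct by up-sampling and adding back the stored detail images.
    for d in reversed(details):
        fused = cv2.pyrUp(fused, dstsize=(d.shape[1], d.shape[0])) + d
    return np.clip(fused, 0, 255).astype(np.uint8)
```

The function expects pre-registered, same-size grayscale inputs; mirroring the description above, the saliency-weighted combination is applied only once, after the final down-sampling step, while each level contributes a max-selected detail image that is added back during reconstruction.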