Capturing high-quality photos underwater is difficult: light attenuation, color distortion, and reduced contrast pose significant challenges. A factor that is often overlooked, however, is the non-uniform texture degradation in such images; the loss of fine texture hampers object detection and recognition. To address this problem, we introduce an image enhancement model, scene-adaptive color compensation and multi-weight fusion, that extracts fine textural details under diverse conditions and improves the overall quality of underwater imagery. Our method blends three input images derived from an adaptively color-compensated and color-corrected version of the degraded image. The first two inputs address the low contrast and haze of the image, respectively, while the third extracts fine texture details at multiple scales and orientations. Finally, the input images and their associated weight maps are normalized and fused through multi-weight fusion. The proposed model was tested on a diverse set of underwater images with varying levels of degradation and frequently outperformed state-of-the-art methods, yielding significant improvements in texture visibility, reduced color distortion, and better overall quality of the submerged images.
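The fusion step described above can be illustrated with a minimal sketch. This is not the paper's implementation: the compensation rule, the weight definitions, and the single-scale (non-pyramidal) blend are simplified assumptions for illustration. It shows only the core idea of combining several derived inputs via per-pixel weight maps normalized to sum to one.

```python
import numpy as np

def compensate_red(img, alpha=1.0):
    """Illustrative color compensation: boost the attenuated red channel
    toward the green channel's mean (alpha is a hypothetical strength
    parameter; the paper's scene-adaptive rule may differ).
    img is a float array in [0, 1] with shape (H, W, 3)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    r_c = np.clip(r + alpha * (g.mean() - r.mean()) * (1.0 - r) * g, 0.0, 1.0)
    return np.stack([r_c, g, b], axis=-1)

def multi_weight_fusion(inputs, weight_maps, eps=1e-8):
    """Fuse N input images using N per-pixel weight maps.
    Weights are normalized at every pixel so they sum to one,
    then each input contributes proportionally to its weight."""
    w = np.stack(weight_maps, axis=0)            # (N, H, W)
    w = w / (w.sum(axis=0, keepdims=True) + eps) # normalize per pixel
    imgs = np.stack(inputs, axis=0)              # (N, H, W, 3)
    return (w[..., None] * imgs).sum(axis=0)     # weighted per-pixel blend

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    degraded = rng.random((64, 64, 3))
    base = compensate_red(degraded)
    # Stand-ins for the three derived inputs (contrast, dehazed, texture).
    inputs = [base, base ** 0.8, base ** 1.2]
    # Stand-in weight maps, e.g. local contrast or saliency in practice.
    weights = [np.ones((64, 64)), base.std(axis=-1), base.mean(axis=-1)]
    fused = multi_weight_fusion(inputs, weights)
    print(fused.shape)
```

Because the weights are normalized per pixel, fusing identical inputs returns the input unchanged, which is a quick sanity check for any implementation of this scheme.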
Keywords: Image fusion, Color, Image enhancement, Image processing, Image quality, Tunable filters, Visualization