Dual feature extractor generative adversarial network for colorization
6 July 2020
Huan Tian, Guangqian Kong, Yun Wu
Abstract

Most colorization algorithms achieve suboptimal coloring efficiency. To address this, a dual feature extractor generative adversarial network was designed for colorization. A U-Net-like network served as the trunk network of the generator, whose encoder extracted the local features of a grayscale image. The branch extractor, a ResNeXt network augmented with squeeze-and-excitation (SE) modules, served as a high-level feature extractor that captured the global features of the grayscale image. The two feature sets were fused to predict chrominance, a strategy that prevented color leakage and detail loss during colorization. Moreover, an adversarial loss was added to reduce the tendency toward unsaturated tones. The effectiveness of the proposed method was evaluated quantitatively and qualitatively.
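The dual-path design in the abstract (SE channel gating on one path, then fusion of local feature maps with a global feature vector) can be sketched as follows. This is an illustrative NumPy sketch under assumed shapes and random weights, not the paper's implementation; function names and the reduction ratio are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def se_gate(feat, reduction=4):
    """Squeeze-and-Excitation channel gating on a (C, H, W) feature map (sketch)."""
    c = feat.shape[0]
    # Squeeze: global average pooling over the spatial dimensions -> (C,)
    z = feat.mean(axis=(1, 2))
    # Excitation: two fully connected layers (random weights stand in for learned ones)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    s = 1.0 / (1.0 + np.exp(-(w2 @ np.maximum(w1 @ z, 0.0))))  # ReLU then sigmoid
    # Rescale each channel by its gate
    return feat * s[:, None, None]

def fuse(local_feat, global_feat):
    """Tile a global feature vector over space and concatenate with local features."""
    _, h, w = local_feat.shape
    tiled = np.broadcast_to(global_feat[:, None, None], (global_feat.shape[0], h, w))
    return np.concatenate([local_feat, tiled], axis=0)

local_feat = rng.standard_normal((64, 16, 16))   # encoder (local) features, assumed shape
global_feat = rng.standard_normal(128)           # branch extractor (global) features
fused = fuse(se_gate(local_feat), global_feat)
print(fused.shape)  # -> (192, 16, 16): fused tensor passed on to predict chrominance
```

In the paper's pipeline a decoder would map the fused tensor to the two chrominance channels; here the fusion step alone is shown.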

© 2020 SPIE and IS&T 1017-9909/2020/$28.00 © 2020 SPIE and IS&T
Huan Tian, Guangqian Kong, and Yun Wu "Dual feature extractor generative adversarial network for colorization," Journal of Electronic Imaging 29(4), 043001 (6 July 2020). https://doi.org/10.1117/1.JEI.29.4.043001
Received: 10 December 2019; Accepted: 22 June 2020; Published: 6 July 2020
CITATIONS
Cited by 3 scholarly publications.
KEYWORDS
RGB color model, Gallium nitride, Data modeling, Feature extraction, Computer programming, Image fusion, Convolution