This work addresses the problem of single-image dehazing, with the particular goal of better visibility restoration. Although extensive studies have been performed, almost all of them are built heavily on the atmospheric scattering model, and they usually fail to restore the visibility of densely hazy images convincingly. Inspired by the potential of deep learning, a new end-to-end approach is presented to restore a clear image directly from a hazy one, with an emphasis on real-world weather conditions. Specifically, an Encoder-Decoder is exploited as a generator for restoring the dehazed image, in an attempt to preserve more image details. Interestingly, it is further found that the performance of the Encoder-Decoder can be largely boosted via the dual principles of discriminativeness advocated in this paper. On the one hand, the dark channel is re-explored in our framework, resulting in a discriminative prior formulated specifically for the dehazing problem. On the other hand, a critic is incorporated for adversarial training against the autoencoding-based generator, implemented via the Wasserstein GAN (generative adversarial network) regularized by the Lipschitz penalty. The proposed approach is trained on a synthetic dataset of hazy images and evaluated on both synthetic and real hazy images. Objective evaluation shows that the proposed approach performs competitively with state-of-the-art approaches, and outperforms them in terms of visibility restoration, especially in scenarios of dense haze.
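The dark channel prior mentioned above is a standard construction: for each pixel, take the minimum over the RGB channels, then take a local minimum over a square patch. A minimal numpy sketch (the function name and the default `patch_size=15` are illustrative assumptions, not details taken from the paper):

```python
import numpy as np

def dark_channel(image, patch_size=15):
    """Dark channel of an H x W x 3 image: per-pixel minimum over
    the color channels, followed by a local minimum filter of size
    patch_size x patch_size (explicit sliding window, no SciPy)."""
    # Per-pixel minimum across the RGB channels
    min_rgb = image.min(axis=2)
    pad = patch_size // 2
    # Edge-replicate padding so border patches stay fully inside
    padded = np.pad(min_rgb, pad, mode="edge")
    h, w = min_rgb.shape
    dark = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            dark[i, j] = padded[i:i + patch_size, j:j + patch_size].min()
    return dark
```

In practice the sliding-window loop would be replaced by an erosion filter (e.g. `scipy.ndimage.minimum_filter`); the loop here just keeps the sketch dependency-free.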
In traditional uniform blind deblurring methods, we have witnessed great advances from utilizing various image priors, which are expected to favor clean images over blurred ones and act by regularizing the solution space. However, these methods fail at non-uniform blind deblurring because of inaccuracy in kernel estimation. Learning-based methods can generate clear images in an end-to-end way, potentially without an intermediate step of blur-kernel estimation. To better deal with the non-uniform deblurring problem in dynamic scenes, in this paper we present a new type of image prior complementary to the deep learning-based blind estimation framework. Specifically, inspired by the interesting discovery of dark and bright channels in dehazing, opposite-channel-based discriminative priors are developed and directly integrated into the loss of our advocated deep deblurring model, so as to achieve more accurate and robust blind deblurring performance. Notably, our deep model is formulated in the framework of the Wasserstein generative adversarial network regularized by the Lipschitz penalty (WGAN-LP), and the network structures are relatively simpler yet more stable than those of other deep deblurring methods. We evaluate the proposed method on a large-scale blur dataset with complex non-uniform motions. Experimental results show that it achieves state-of-the-art non-uniform blind deblurring performance, both quantitatively and qualitatively.
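The WGAN-LP critic objective used by both works combines the Wasserstein term E[D(fake)] - E[D(real)] with a one-sided Lipschitz penalty that punishes input-gradient norms only when they exceed 1. A toy numpy sketch with a linear critic D(x) = w·x, whose input gradient is simply `w` (a stand-in for autograd over a deep critic; the penalty weight `lam=10.0` is an assumed value, not taken from the papers):

```python
import numpy as np

def wgan_lp_critic_loss(w, real, fake, lam=10.0):
    """Critic loss for a linear critic D(x) = w @ x under WGAN-LP."""
    # Wasserstein term: E[D(fake)] - E[D(real)], minimized by the critic
    wasserstein = np.mean(fake @ w) - np.mean(real @ w)
    # One-sided Lipschitz penalty: zero while the gradient norm stays <= 1
    grad_norm = np.linalg.norm(w)
    penalty = lam * max(0.0, grad_norm - 1.0) ** 2
    return wasserstein + penalty
```

The one-sided `max(0, ·)` is what distinguishes the Lipschitz penalty from the two-sided gradient penalty of WGAN-GP, which drives gradient norms toward exactly 1.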