Abstract
Single image dehazing is a challenging problem that aims to recover a clear image from a hazy one. The performance of existing image dehazing methods is limited by hand-designed features and priors. In this paper, we propose a multi-scale deep neural network for single image dehazing that learns the mapping between hazy images and their transmission maps. The proposed algorithm consists of a coarse-scale net, which predicts a holistic transmission map from the entire image, and a fine-scale net, which refines the dehazed result locally. To train the multi-scale deep network, we synthesize a dataset of hazy images and corresponding transmission maps based on the NYU Depth dataset. In addition, we propose a holistic edge-guided network to refine the edges of the estimated transmission map. Extensive experiments demonstrate that the proposed algorithm performs favorably against state-of-the-art methods on both synthetic and real-world images in terms of quality and speed.
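The transmission-map formulation described in the abstract rests on the standard atmospheric scattering model, I(x) = J(x) t(x) + A (1 - t(x)) with t(x) = exp(-β d(x)). The sketch below illustrates how hazy training pairs can be synthesized from RGB-D data such as NYU Depth, and how a clear image is recovered once a transmission map has been estimated. This is a minimal sketch: the scattering coefficient `beta`, atmospheric light `A`, and the lower bound `t_min` are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

def synthesize_hazy(clear, depth, beta=1.0, A=1.0):
    """Synthesize a hazy image from a clear image and its depth map using
    the atmospheric scattering model:
        I(x) = J(x) * t(x) + A * (1 - t(x)),  t(x) = exp(-beta * d(x)).
    `beta` and `A` are illustrative values, not the paper's settings."""
    t = np.exp(-beta * depth)                       # transmission map from depth
    hazy = clear * t[..., None] + A * (1.0 - t[..., None])
    return hazy, t

def recover_clear(hazy, t, A=1.0, t_min=0.1):
    """Invert the scattering model given an estimated transmission map,
    clamping t from below to avoid amplifying noise in dense-haze regions."""
    t = np.clip(t, t_min, 1.0)
    return (hazy - A * (1.0 - t[..., None])) / t[..., None]

# Usage with stand-in data (an NYU Depth RGB frame and its depth map):
clear = np.random.rand(480, 640, 3)
depth = np.random.rand(480, 640) * 10.0
hazy, t = synthesize_hazy(clear, depth, beta=0.5)
restored = recover_clear(hazy, t)
```

In the proposed method, the transmission map `t` fed to `recover_clear` would come from the multi-scale network (coarse prediction refined by the fine-scale and edge-guided nets) rather than from ground-truth depth.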
| Original language | English |
| --- | --- |
| Pages (from-to) | 240-259 |
| Number of pages | 20 |
| Journal | International Journal of Computer Vision |
| Volume | 128 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 2020 Jan 1 |
Bibliographical note
Funding Information: This work is supported by the National Key R&D Program of China (Grant No. 2018YFB0803701), Beijing Natural Science Foundation (No. KZ201910005007), National Natural Science Foundation of China (Nos. U1636214, U1803264, U1605252, 61802403, 61602464, 61872421, 61922043), Peng Cheng Laboratory Project of Guangdong Province PCL2018KP004, and the Natural Science Foundation of Jiangsu Province (No. BK20180471). The work of W. Ren is supported in part by the CCF-DiDi GAIA (YF20180101), CCF-Tencent Open Fund, Zhejiang Lab’s International Talent Fund for Young Professionals, and the Open Projects Program of the National Laboratory of Pattern Recognition. The work of M.-H. Yang is supported by the Directorate for Computer and Information Science and Engineering (CAREER 1149783).
Publisher Copyright:
© 2019, Springer Science+Business Media, LLC, part of Springer Nature.
All Science Journal Classification (ASJC) codes
- Software
- Computer Vision and Pattern Recognition
- Artificial Intelligence