Fog-free training for foggy scene understanding

Minyoung Lee, Kyungwoo Song, Junsuk Choe

Research output: Contribution to journal › Article › peer-review

Abstract

Semantic segmentation models must work effectively in foggy driving scenarios: fog severely degrades visibility, compromising the safety of autonomous driving systems and increasing the risk of accidents. Traditional methods train on foggy datasets, which are expensive to collect and difficult to scale. To tackle this issue, we propose ShiftMatch, a novel fog-free method that does not rely on foggy images for training. Instead, it creates virtual domain-shifted images by applying simple data augmentation and normalization techniques. During training, we enforce consistency between the segmentation results of the original and the domain-shifted images. This prevents the model from overfitting to domain-specific features, enabling it to learn domain-invariant features effectively. Despite its cost-efficiency, ShiftMatch achieves state-of-the-art performance on three real foggy scene segmentation datasets. Additionally, it demonstrates superior performance in nighttime, rain, and snow driving scenarios.
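To make the consistency idea concrete, here is a minimal sketch of one training step in PyTorch. The abstract does not specify which augmentations, normalization steps, or consistency loss ShiftMatch uses, so the choices below (color jitter plus blur as the domain shift, KL divergence as the consistency term, and the names shift_image, training_step, and lambda_consistency) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F
import torchvision.transforms as T

def shift_image(images):
    """Create a virtual domain-shifted view with simple photometric
    augmentations. Placeholder choices: the paper's actual augmentation
    and normalization pipeline is not given in the abstract."""
    jitter = T.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1)
    blur = T.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0))
    return blur(jitter(images))

def training_step(model, images, labels, lambda_consistency=1.0):
    """One hypothetical fog-free training step: supervised loss on clear
    images plus a consistency loss tying the two views together."""
    logits_orig = model(images)                  # (B, C, H, W) segmentation logits
    logits_shift = model(shift_image(images))

    # Standard supervised segmentation loss on the clear (fog-free) images.
    sup_loss = F.cross_entropy(logits_orig, labels, ignore_index=255)

    # Consistency: predictions on the shifted view should match those on
    # the original, discouraging reliance on domain-specific appearance cues.
    with torch.no_grad():
        target = F.softmax(logits_orig, dim=1)
    cons_loss = F.kl_div(
        F.log_softmax(logits_shift, dim=1), target, reduction="batchmean"
    )
    return sup_loss + lambda_consistency * cons_loss
```

Because the consistency target is computed without gradients, only the shifted branch is pulled toward the clear-image prediction; other designs (e.g., symmetric losses) are equally plausible given the abstract alone.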

Original language: English
Pages (from-to): 129-135
Number of pages: 7
Journal: Pattern Recognition Letters
Volume: 189
Publication status: Published - March 2025

Bibliographical note

Publisher Copyright:
© 2025 Elsevier B.V.

All Science Journal Classification (ASJC) codes

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence
