In this paper, we tackle unsupervised domain adaptation (UDA) for semantic segmentation, which aims to segment unlabeled real data by learning from labeled synthetic data. The main challenge of UDA for semantic segmentation lies in reducing the domain gap between real and synthetic images. To address this, we focus on separating the information in an image into content and style. Only the content carries cues for semantic segmentation, while the style creates the domain gap. Thus, precisely separating content and style in an image effectively provides supervision for the real domain even when learning with synthetic data. To make the most of this effect, we propose a zero-style loss. Even if content could be perfectly extracted for semantic segmentation in the real domain, another major challenge, the class imbalance problem, would still remain in UDA for semantic segmentation. We address this problem by transferring the content of tail classes from the synthetic to the real domain. Experimental results show that the proposed method achieves state-of-the-art semantic segmentation performance on the two major UDA settings.
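The abstract does not specify how "style" is measured or how the zero-style loss is formulated. As a rough illustration only, one common proxy for style in the literature is the channel-wise statistics (mean and standard deviation) of a feature map, as used in AdaIN-style methods; a hypothetical "zero-style" penalty could then drive those statistics toward zero so that only content information survives. The sketch below is a guess at this general idea, not the paper's actual loss:

```python
from statistics import mean, pstdev

def style_stats(feat):
    """Channel-wise (mean, std) of a feature map.

    `feat` is a list of channels, each a flat list of activations.
    These statistics are a common proxy for 'style' (assumption:
    the paper may define style differently).
    """
    return [(mean(ch), pstdev(ch)) for ch in feat]

def zero_style_loss(feat):
    """Hypothetical zero-style penalty: the average squared magnitude
    of the per-channel style statistics. Minimizing it pushes the
    style statistics toward zero, leaving only content information.
    Illustrative only; not the formulation from the paper.
    """
    stats = style_stats(feat)
    return mean(mu ** 2 + sigma ** 2 for mu, sigma in stats)

# A feature map that already has zero style statistics incurs no loss.
flat = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
print(zero_style_loss(flat))    # 0.0

# Nonzero per-channel mean or spread is penalized.
styled = [[1.0, 3.0], [2.0, 2.0]]
print(zero_style_loss(styled))  # 4.5
```

In this toy setup the second feature map is penalized because its channels have nonzero mean (and the first channel also nonzero spread), which under this proxy corresponds to residual style information.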
|Title of host publication||35th AAAI Conference on Artificial Intelligence, AAAI 2021|
|Publisher||Association for the Advancement of Artificial Intelligence|
|Number of pages||10|
|Publication status||Published - 2021|
|Event||35th AAAI Conference on Artificial Intelligence, AAAI 2021 - Virtual, Online|
Duration: 2021 Feb 2 → 2021 Feb 9
|Name||35th AAAI Conference on Artificial Intelligence, AAAI 2021|
|Conference||35th AAAI Conference on Artificial Intelligence, AAAI 2021|
|Period||21/2/2 → 21/2/9|
|Bibliographical note||Funding Information:|
This research was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2019R1A2C1007153).
Copyright © 2021, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
All Science Journal Classification (ASJC) codes
- Artificial Intelligence