Self-training guided disentangled adaptation for cross-domain remote sensing image semantic segmentation



Zhao, Qi; Lyu, Shuchang; Zhao, Hongbo; Liu, Binghao; Chen, Lijiang; Cheng, Guangliang (ORCID: 0000-0001-8686-9513)
(2024) Self-training guided disentangled adaptation for cross-domain remote sensing image semantic segmentation. International Journal of Applied Earth Observation and Geoinformation, 127, Article 103646.

ISPRS2023_IJAEOG.pdf - Author Accepted Manuscript
Available under License Creative Commons Attribution.


Abstract

Remote sensing (RS) image semantic segmentation using deep convolutional neural networks (DCNNs) has shown great success in various applications. However, the high dependence on annotated data makes it challenging for DCNNs to adapt to different RS scenes. To address this challenge, we propose a cross-domain RS image semantic segmentation task that considers ground sampling distance, remote sensing sensor variation, and different geographical landscapes as the main factors causing domain shifts between source and target images. To mitigate the negative impact of domain shift, we propose a self-training guided disentangled adaptation network (ST-DASegNet) that consists of source and target student backbones to extract source-style and target-style features. To align cross-domain single-style features, we adopt feature-level adversarial learning. We also propose a domain disentangled module (DDM) to extract universal and distinct features from single-domain cross-style features. Finally, we fuse these features and generate predictions using source and target student decoders. Moreover, we employ an exponential moving average (EMA) based cross-domain separated self-training mechanism to ease the instability and adverse effects of adversarial optimization. Our experiments on several prominent RS datasets (Potsdam, Vaihingen, and LoveDA) demonstrate that ST-DASegNet outperforms previous methods and achieves new state-of-the-art results. Visualization and analysis also confirm the interpretability of ST-DASegNet. The code is publicly available at https://github.com/cv516Buaa/ST-DASegNet.
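To illustrate the EMA-based self-training mentioned in the abstract, the sketch below shows the generic "mean teacher" parameter update commonly used in such schemes: the teacher network's weights track an exponential moving average of the student's weights. This is a minimal, assumed example for illustration only; the function name `ema_update` and the decay value `alpha` are placeholders, and the authors' actual implementation should be consulted at https://github.com/cv516Buaa/ST-DASegNet.

```python
# Minimal sketch (assumed, not the authors' released code) of an EMA teacher update
# as typically used in mean-teacher style self-training.
import torch


@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, alpha: float = 0.99) -> None:
    """Update teacher parameters as an exponential moving average of student parameters."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        # teacher <- alpha * teacher + (1 - alpha) * student
        t_param.mul_(alpha).add_(s_param, alpha=1.0 - alpha)
    # Buffers (e.g., BatchNorm running statistics) are commonly copied directly.
    for t_buf, s_buf in zip(teacher.buffers(), student.buffers()):
        t_buf.copy_(s_buf)
```

In practice, the teacher produced this way generates pseudo-labels for target-domain images, which in turn supervise the student; the slow-moving average helps damp the instability introduced by adversarial optimization.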

Item Type: Article
Divisions: Faculty of Science and Engineering > School of Electrical Engineering, Electronics and Computer Science
Depositing User: Symplectic Admin
Date Deposited: 24 Apr 2024 11:21
Last Modified: 24 Apr 2024 11:21
DOI: 10.1016/j.jag.2023.103646
URI: https://livrepository.liverpool.ac.uk/id/eprint/3180561