Learning Latent Representation for Robust Unsupervised Domain Adaptation



Gao, Zhiqiang
(2023) Learning Latent Representation for Robust Unsupervised Domain Adaptation. PhD thesis, University of Liverpool.

Text: Thesis_zhiqiang_gao_0613.pdf - Author Accepted Manuscript (17MB)
Access to this file is embargoed until 1 August 2025.

Abstract

Deep Neural Networks (DNNs) have achieved impressive performance across many applications, but they may generalize poorly to new data because of distribution shift. Distribution shift can manifest in several ways, such as sample selection bias, class distribution shift, and covariate shift. A prominent instance is domain shift, which occurs when test data are sampled from a new target domain that differs from the training data in appearance, background, or style. Since manually annotating data in a new domain is time-consuming and expensive, Unsupervised Domain Adaptation (UDA) aims to infer domain-invariant representations from labeled source-domain data and unlabeled target-domain data.

This thesis focuses on the UDA problem and explores a more challenging setting called Robust Unsupervised Domain Adaptation (RUDA), in which corrupted samples may exist in the target domain. DNNs are vulnerable to feature corruptions such as well-crafted adversarial attacks and common corruptions, so their performance must be certified not only on clean data but also on corrupted data. The goal of this thesis is to provide a new understanding of both UDA and RUDA from the perspective of latent representations and their distributions.

For vanilla UDA, we investigate the incomplete domain adaptation issue of current state-of-the-art adversarial domain adaptation methods and propose a feature gradient distribution divergence as a complementary metric. For robustness against common corruptions in UDA, we show that the key to achieving robustness is to alleviate the feature shift of corrupted samples; to this end, we develop an unsupervised adversarial regularization method that penalizes these feature shifts and enables the model to generalize to unseen types of corruption. For robustness against adversarial attacks, we investigate how to generalize well under attacks generated from future data of the target domain, and we demonstrate that reducing the feature-shift distribution divergence between the training and testing datasets of the target domain certifies better robust generalization.
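To make the second contribution concrete, the following is a minimal, hypothetical PyTorch sketch of an unsupervised feature-shift penalty, not the thesis's actual method: the encoder interface, the L-infinity perturbation budget, and the MSE feature distance are all assumptions. It only illustrates the stated idea of penalizing the feature shift of (worst-case) corrupted target samples, without needing any labels.

    import torch
    import torch.nn.functional as F

    def feature_shift_penalty(encoder, x, eps=8/255, step=2/255, iters=3):
        # Anchor features of the clean (unlabeled target) batch.
        with torch.no_grad():
            clean = encoder(x)
        # Search, within an L-inf ball of radius eps, for the perturbation
        # that maximizes the feature shift ||f(x + delta) - f(x)||^2.
        delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
        for _ in range(iters):
            shift = F.mse_loss(encoder(x + delta), clean)
            (grad,) = torch.autograd.grad(shift, delta)
            delta = (delta + step * grad.sign()).clamp(-eps, eps)
            delta = delta.detach().requires_grad_(True)
        # Penalize the worst-case feature shift found; minimizing this term
        # pushes the encoder toward features that are stable under corruption.
        return F.mse_loss(encoder(x + delta), clean)

In training, such a term would be added to the usual UDA objective with a weighting hyperparameter (hypothetical here), so that the model both aligns domains and keeps its target features stable under perturbation.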
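The third contribution relies on a divergence between feature-shift distributions on current and future target data, but the abstract does not name the divergence measure. As an illustration only, the sketch below assumes a squared RBF-kernel MMD between samples of feature-shift vectors; the variable names and the choice of MMD are assumptions.

    import torch

    def rbf_mmd2(a, b, sigma=1.0):
        # Squared MMD with an RBF kernel between two samples of
        # feature-shift vectors, shapes (n, d) and (m, d).
        def k(u, v):
            return torch.exp(-torch.cdist(u, v).pow(2) / (2 * sigma ** 2))
        return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()

    # Example: shifts_train[i] = f(x_i + delta_i) - f(x_i) on training-time
    # target data; shifts_test likewise on held-out (future) target data.
    # Under the thesis's claim, a smaller rbf_mmd2(shifts_train, shifts_test)
    # would indicate better robust generalization to future attacks.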

Item Type: Thesis (PhD)
Uncontrolled Keywords: Unsupervised domain adaptation, Transfer learning, Robustness against adversarial attacks, Robustness against common corruptions
Divisions: Faculty of Science and Engineering > School of Electrical Engineering, Electronics and Computer Science
Depositing User: Symplectic Admin
Date Deposited: 25 Aug 2023 14:48
Last Modified: 25 Aug 2023 14:48
DOI: 10.17638/03171010
Supervisors:
  • Ma, Jieming
  • Huang, Yi
  • Huang, Kaizhu
  • Liu, Dawei
URI: https://livrepository.liverpool.ac.uk/id/eprint/3171010