Task Nuisance Filtration for Unsupervised Domain Adaptation

David Uliel*, Raja Giryes

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In unsupervised domain adaptation (UDA), labeled data is available for one domain (the source domain), generated according to some distribution, and unlabeled data is available for a second domain (the target domain), generated from a possibly different distribution but sharing the same task. The goal is to learn a model that performs well on the target domain even though labels are available only for the source data. Many recent works attempt to align the source and target domains by matching their marginal distributions in a learned feature space. In this paper, we treat the domain difference as a nuisance and enable better adaptability of the domains by encouraging minimality of the target domain representation, disentanglement of the features, and a smoother feature space that clusters the target data better. To this end, we use information bottleneck theory and a classical technique from the blind source separation framework, namely independent component analysis (ICA). We show that these concepts can improve the performance of leading domain adaptation methods on various domain adaptation benchmarks.
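The information bottleneck objective mentioned above trades off the mutual information a representation keeps about the input (minimality) against the mutual information it keeps about the task label. As a minimal, self-contained illustration of the quantity involved, the sketch below estimates empirical mutual information between two discrete sequences; the function name and the toy data are illustrative only and are not from the paper.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information I(X;Y) in bits between two
    equal-length sequences of discrete values, estimated from
    joint and marginal frequency counts."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))   # joint counts
    px = Counter(xs)             # marginal counts of X
    py = Counter(ys)             # marginal counts of Y
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p_joint / (p_x * p_y) = (c/n) / ((px/n) * (py/n)) = c*n / (px*py)
        mi += p_joint * math.log2(c * n / (px[x] * py[y]))
    return mi

# Fully dependent binary variables: I(X;Y) = H(X) = 1 bit
print(mutual_information([0, 0, 1, 1], [0, 0, 1, 1]))  # 1.0

# Independent binary variables: I(X;Y) = 0 bits
print(mutual_information([0, 0, 1, 1], [0, 1, 0, 1]))  # 0.0
```

A nuisance-filtering representation, in this vocabulary, is one whose mutual information with domain-specific nuisance variables is driven toward zero while its mutual information with the task label stays high.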

Original language: English
Pages (from-to): 303-311
Number of pages: 9
Journal: IEEE Open Journal of Signal Processing
Volume: 6
State: Published - 2025

Funding

Funders:
Israel Innovation Authority
Tel Aviv University
KLA

Keywords

• Blind source separation
• domain adaptation
• information theory
• machine learning
• mutual information
