TY - JOUR
T1 - Task Nuisance Filtration for Unsupervised Domain Adaptation
AU - Uliel, David
AU - Giryes, Raja
N1 - Publisher Copyright:
© 2025 The Authors.
PY - 2025
Y1 - 2025
N2 - In unsupervised domain adaptation (UDA), labeled data is available for one domain (Source Domain), which is generated according to some distribution, and unlabeled data is available for a second domain (Target Domain), which is generated from a possibly different distribution but has the same task. The goal is to learn a model that performs well on the target domain although labels are available only for the source data. Many recent works attempt to align the source and the target domains by matching their marginal distributions in a learned feature space. In this paper, we treat the domain difference as a nuisance and enable better adaptability between the domains by encouraging minimality of the target domain representation, disentanglement of the features, and a smoother feature space that clusters the target data better. To this end, we use the information bottleneck theory and a classical technique from the blind source separation framework, namely, independent component analysis (ICA). We show that these concepts can improve the performance of leading domain adaptation methods on various domain adaptation benchmarks.
AB - In unsupervised domain adaptation (UDA), labeled data is available for one domain (Source Domain), which is generated according to some distribution, and unlabeled data is available for a second domain (Target Domain), which is generated from a possibly different distribution but has the same task. The goal is to learn a model that performs well on the target domain although labels are available only for the source data. Many recent works attempt to align the source and the target domains by matching their marginal distributions in a learned feature space. In this paper, we treat the domain difference as a nuisance and enable better adaptability between the domains by encouraging minimality of the target domain representation, disentanglement of the features, and a smoother feature space that clusters the target data better. To this end, we use the information bottleneck theory and a classical technique from the blind source separation framework, namely, independent component analysis (ICA). We show that these concepts can improve the performance of leading domain adaptation methods on various domain adaptation benchmarks.
KW - Blind source separation
KW - domain adaptation
KW - information theory
KW - machine learning
KW - mutual information
UR - http://www.scopus.com/inward/record.url?scp=85217139228&partnerID=8YFLogxK
U2 - 10.1109/OJSP.2025.3536850
DO - 10.1109/OJSP.2025.3536850
M3 - Article
AN - SCOPUS:85217139228
SN - 2644-1322
VL - 6
SP - 303
EP - 311
JO - IEEE Open Journal of Signal Processing
JF - IEEE Open Journal of Signal Processing
ER -