TY - CONF
T1 - DiDA
T2 - 11th International Conference on Information Technology in Medicine and Education, ITME 2021
AU - Cao, Jinming
AU - Katzir, Oren
AU - Jiang, Peng
AU - Lischinski, Dani
AU - Cohen-Or, Daniel
AU - Tu, Changhe
AU - Li, Yangyan
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
N2 - Unsupervised domain adaptation aims to learn a shared model for two related domains by transferring supervision from a labeled source domain to an unlabeled target domain. Many effective domain adaptation approaches rely on the ability to extract domain-invariant latent factors that are common to both domains. Extracting latent commonality is also useful for disentanglement analysis: it enables separation between the common and the domain-specific features of both domains, which can be recombined for synthesis. In this paper, we propose a strategy to iteratively boost the performance of domain adaptation and disentangled synthesis. The key idea is that by learning to separately extract both the common and the domain-specific features, one can synthesize additional target-domain data with supervision, thereby boosting domain adaptation performance. Better common-feature extraction, in turn, further improves the feature disentanglement and the subsequent disentangled synthesis. We show that iterating between domain adaptation and disentangled synthesis consistently improves both on several unsupervised domain adaptation benchmark datasets and tasks, under various domain adaptation backbone models.
KW - Disentanglement
KW - Domain Adaptation
KW - Unsupervised Learning
UR - http://www.scopus.com/inward/record.url?scp=85128873112&partnerID=8YFLogxK
DO - 10.1109/ITME53901.2021.00049
M3 - Conference contribution
AN - SCOPUS:85128873112
T3 - Proceedings - 11th International Conference on Information Technology in Medicine and Education, ITME 2021
SP - 201
EP - 208
BT - Proceedings - 11th International Conference on Information Technology in Medicine and Education, ITME 2021
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 19 November 2021 through 21 November 2021
ER -