TY - JOUR
T1 - Risk bounds for unsupervised cross-domain mapping with IPMs
AU - Galanti, Tomer
AU - Benaim, Sagie
AU - Wolf, Lior
N1 - Publisher Copyright:
© 2021 Microtome Publishing. All rights reserved.
PY - 2021
Y1 - 2021
AB - The recent empirical success of unsupervised cross-domain mapping algorithms, in mapping between two domains that share common characteristics, is not well supported by theoretical justifications. This lacuna is especially troubling, given the clear ambiguity in such mappings. We work with adversarial training methods based on integral probability metrics (IPMs) and derive a novel risk bound, which upper-bounds the risk between the learned mapping h and the target mapping y by a sum of three terms: (i) the risk between h and the most distant alternative mapping that was learned by the same cross-domain mapping algorithm, (ii) the minimal discrepancy between the target domain and the domain obtained by applying a hypothesis h∗ to the samples of the source domain, where h∗ is a hypothesis selectable by the same algorithm, and (iii) an approximation error term that decreases as the capacity of the class of discriminators increases and is empirically shown to be small. The bound is directly related to Occam's razor and encourages the selection of the minimal architecture that supports a small mapping discrepancy. The bound leads to multiple algorithmic consequences, including a method for hyperparameter selection and early stopping in cross-domain mapping.
KW - Adversarial training
KW - Cross-domain alignment
KW - Image-to-image translation
KW - Integral probability metrics
KW - Unsupervised learning
UR - http://www.scopus.com/inward/record.url?scp=85107304836&partnerID=8YFLogxK
M3 - Article
AN - SCOPUS:85107304836
SN - 1532-4435
VL - 22
JO - Journal of Machine Learning Research
JF - Journal of Machine Learning Research
ER -