Robust Bi-tempered logistic loss based on Bregman divergences

Ehsan Amid, Manfred K. Warmuth, Rohan Anil, Tomer Koren

Research output: Contribution to journal › Conference article › peer-review

70 Scopus citations

Abstract

We introduce a temperature into the exponential function and replace the softmax output layer of neural networks with a high-temperature generalization. Similarly, the logarithm in the loss used for training is replaced by a low-temperature logarithm. By tuning the two temperatures, we create loss functions that are already non-convex in the single-layer case. When the last layer of a neural network is replaced by our bi-temperature generalization of the logistic loss, training becomes more robust to noise. We visualize the effect of tuning the two temperatures in a simple setting and show the efficacy of our method on large datasets. Our methodology is based on Bregman divergences and is superior to a related two-temperature method that uses the Tsallis divergence.
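To make the two ingredients in the abstract concrete, the following is a minimal NumPy sketch of a tempered logarithm, a tempered exponential, a tempered softmax, and the resulting bi-tempered loss. The function names (log_t, exp_t, tempered_softmax, bi_tempered_loss) are our own, and the softmax normalizer is found here by a simple bisection; this is an illustrative sketch, not the authors' reference implementation.

```python
import numpy as np

def log_t(x, t):
    """Tempered logarithm; reduces to log(x) as t -> 1."""
    if t == 1.0:
        return np.log(x)
    return (x ** (1.0 - t) - 1.0) / (1.0 - t)

def exp_t(x, t):
    """Tempered exponential; reduces to exp(x) as t -> 1."""
    if t == 1.0:
        return np.exp(x)
    return np.maximum(1.0 + (1.0 - t) * x, 0.0) ** (1.0 / (1.0 - t))

def tempered_softmax(activations, t, num_iters=60):
    """Tempered softmax: p_i = exp_t(a_i - lam) with lam chosen so sum(p) = 1.

    The normalizer lam has no closed form for t != 1, so it is located by
    bisection (assumption: a simple search in place of any specific scheme
    described in the paper).
    """
    a = np.asarray(activations, dtype=np.float64)
    lo = np.max(a)                           # at lam = max(a) the sum is >= 1
    hi = lo + 1.0
    while np.sum(exp_t(a - hi, t)) > 1.0:    # expand until the sum drops below 1
        hi = lo + 2.0 * (hi - lo)
    for _ in range(num_iters):               # bisect on the normalizer lam
        mid = 0.5 * (lo + hi)
        if np.sum(exp_t(a - mid, t)) > 1.0:
            lo = mid
        else:
            hi = mid
    return exp_t(a - 0.5 * (lo + hi), t)

def bi_tempered_loss(activations, labels, t1, t2, num_iters=60):
    """Bi-tempered logistic loss for one example with one-hot labels.

    Typical robust settings use t1 < 1 (bounded tempered log) and
    t2 > 1 (heavy-tailed tempered softmax); t1 = t2 = 1 recovers
    ordinary softmax cross-entropy.
    """
    p = tempered_softmax(activations, t2, num_iters)
    y = np.asarray(labels, dtype=np.float64)
    loss = (y * (log_t(y + 1e-12, t1) - log_t(p, t1))
            - (y ** (2.0 - t1) - p ** (2.0 - t1)) / (2.0 - t1))
    return np.sum(loss)

# Example usage (illustrative values only):
logits = [2.0, 0.5, -1.0]
one_hot = [1.0, 0.0, 0.0]
print(bi_tempered_loss(logits, one_hot, t1=0.8, t2=1.2))
```

With t1 = t2 = 1 the expression collapses to the usual cross-entropy -log p_true, which is one way to sanity-check the sketch.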

Original language: English
Journal: Advances in Neural Information Processing Systems
Volume: 32
State: Published - 2019
Event: 33rd Annual Conference on Neural Information Processing Systems, NeurIPS 2019 - Vancouver, Canada
Duration: 8 Dec 2019 - 14 Dec 2019

Funding

Funder: National Science Foundation
Funder number: IIS-1546459
