TY - GEN
T1 - TAFSSL: Task-Adaptive Feature Sub-Space Learning for Few-Shot Classification
T2 - 16th European Conference on Computer Vision, ECCV 2020
AU - Lichtenstein, Moshe
AU - Sattigeri, Prasanna
AU - Feris, Rogerio
AU - Giryes, Raja
AU - Karlinsky, Leonid
N1 - Publisher Copyright:
© 2020, Springer Nature Switzerland AG.
PY - 2020
Y1 - 2020
N2 - Recently, Few-Shot Learning (FSL), or learning from very few (typically 1 or 5) examples per novel class (unseen during training), has received a lot of attention and significant performance advances. While a number of techniques have been proposed for FSL, several factors have emerged as most important for FSL performance, awarding SOTA even to the simplest of techniques. These are: the backbone architecture (bigger is better), the type of pre-training (meta-training vs. multi-class), the quantity and diversity of the base classes (the more the merrier), and the use of auxiliary self-supervised tasks (a proxy for increasing the diversity). In this paper we propose TAFSSL, a simple technique for improving few-shot performance in cases where some additional unlabeled data accompanies the few-shot task. TAFSSL is built upon the intuition of reducing the feature and sampling noise inherent to few-shot tasks composed of novel classes unseen during pre-training. Specifically, we show that on the challenging miniImageNet and tieredImageNet benchmarks, TAFSSL can improve the current state-of-the-art in both transductive and semi-supervised FSL settings by more than 5%, while increasing the benefit of using unlabeled data in FSL to above a 10% performance gain.
AB - Recently, Few-Shot Learning (FSL), or learning from very few (typically 1 or 5) examples per novel class (unseen during training), has received a lot of attention and significant performance advances. While a number of techniques have been proposed for FSL, several factors have emerged as most important for FSL performance, awarding SOTA even to the simplest of techniques. These are: the backbone architecture (bigger is better), the type of pre-training (meta-training vs. multi-class), the quantity and diversity of the base classes (the more the merrier), and the use of auxiliary self-supervised tasks (a proxy for increasing the diversity). In this paper we propose TAFSSL, a simple technique for improving few-shot performance in cases where some additional unlabeled data accompanies the few-shot task. TAFSSL is built upon the intuition of reducing the feature and sampling noise inherent to few-shot tasks composed of novel classes unseen during pre-training. Specifically, we show that on the challenging miniImageNet and tieredImageNet benchmarks, TAFSSL can improve the current state-of-the-art in both transductive and semi-supervised FSL settings by more than 5%, while increasing the benefit of using unlabeled data in FSL to above a 10% performance gain.
KW - Few-Shot Learning
KW - Semi-supervised
KW - Transductive
UR - http://www.scopus.com/inward/record.url?scp=85097427298&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-58571-6_31
DO - 10.1007/978-3-030-58571-6_31
M3 - Conference contribution
AN - SCOPUS:85097427298
SN - 9783030585709
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 522
EP - 539
BT - Computer Vision – ECCV 2020 – 16th European Conference, 2020, Proceedings
A2 - Vedaldi, Andrea
A2 - Bischof, Horst
A2 - Brox, Thomas
A2 - Frahm, Jan-Michael
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 23 August 2020 through 28 August 2020
ER -