A Self Supervised StyleGAN for Image Annotation and Classification With Extremely Limited Labels

Dana Cohen Hochberg*, Hayit Greenspan, Raja Giryes

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

8 Scopus citations

Abstract

The recent success of learning-based algorithms can be greatly attributed to the immense amount of annotated data used for training. Yet, many datasets lack annotations due to the high cost of labeling, resulting in degraded performance of deep learning methods. Self-supervised learning is frequently adopted to mitigate the reliance on massive labeled datasets, since it exploits unlabeled data to learn relevant feature representations. In this work, we propose SS-StyleGAN, a self-supervised approach for image annotation and classification suitable for extremely small annotated datasets. This novel framework adds self-supervision to the StyleGAN architecture by integrating an encoder that learns the embedding to the StyleGAN latent space, which is well known for its disentangled properties. The learned latent space enables the smart selection of representatives from the data to be labeled for improved classification performance. We show that the proposed method attains strong classification results using labeled datasets as small as 50, and even 10, samples. We demonstrate the superiority of our approach for the tasks of COVID-19 and liver tumor pathology identification.
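The abstract's core idea is that once an encoder maps images into a disentangled latent space, the few labels available should be spent on diverse, representative samples rather than on random ones. As an illustrative sketch only (the paper's actual selection algorithm is not specified here), the following shows one common way such a selection could be done: greedy farthest-point sampling over latent embeddings, starting from the point nearest the dataset mean. The function name and strategy are assumptions for illustration, not the authors' method.

```python
import numpy as np

def select_representatives(latents, n_labels):
    """Pick `n_labels` diverse samples from latent embeddings.

    Illustrative greedy farthest-point sampling (NOT the paper's
    specific algorithm): start from the point closest to the overall
    mean, then repeatedly add the point farthest from all chosen ones.
    """
    latents = np.asarray(latents, dtype=float)
    # Seed with the most "central" sample.
    chosen = [int(np.linalg.norm(latents - latents.mean(axis=0), axis=1).argmin())]
    while len(chosen) < n_labels:
        # Distance of every sample to its nearest already-chosen sample.
        dists = np.linalg.norm(latents[:, None] - latents[chosen][None], axis=-1)
        nearest = dists.min(axis=1)
        # Add the sample farthest from the current selection.
        chosen.append(int(nearest.argmax()))
    return chosen
```

On two well-separated clusters of embeddings, this picks one sample per cluster before refining within clusters, which is the intuition behind labeling representatives instead of random samples.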

Original language: English
Pages (from-to): 3509-3519
Number of pages: 11
Journal: IEEE Transactions on Medical Imaging
Volume: 41
Issue number: 12
DOIs
State: Published - 1 Dec 2022

Keywords

  • Classification
  • StyleGAN
  • pathology identification
  • representative selection
  • self-supervised learning
