Evaluation Metrics for Conditional Image Generation

Yaniv Benny*, Tomer Galanti, Sagie Benaim, Lior Wolf

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

20 Scopus citations

Abstract

We present two new metrics for evaluating generative models in the class-conditional image generation setting. These metrics are obtained by generalizing the two most popular unconditional metrics: the Inception Score (IS) and the Fréchet Inception Distance (FID). A theoretical analysis shows the motivation behind each proposed metric and links the novel metrics to their unconditional counterparts. The link takes the form of a product in the case of IS or an upper bound in the FID case. We provide an extensive empirical evaluation, comparing the metrics to their unconditional variants and to other metrics, and utilize them to analyze existing generative models, thus providing additional insights about their performance, from unlearned classes to mode collapse.
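To make the setting concrete, below is a minimal illustrative sketch of the standard (unconditional) FID between two Gaussians fitted to feature sets, together with a hypothetical per-class variant that averages FID over class-conditional distributions. This is an assumption-laden illustration of the general idea, not the paper's exact definitions; the function names and the simple class-averaging scheme are placeholders.

```python
import numpy as np
from scipy import linalg


def fid(mu1, sigma1, mu2, sigma2):
    """Frechet distance between two Gaussians N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * (sigma1 sigma2)^(1/2))."""
    diff = mu1 - mu2
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        # sqrtm can return tiny imaginary parts due to numerical error
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))


def per_class_fid(real_feats, fake_feats, real_labels, fake_labels):
    """Hypothetical conditional variant: fit one Gaussian per class and
    average the per-class FID scores (illustrative only)."""
    scores = []
    for c in np.unique(real_labels):
        r = real_feats[real_labels == c]
        f = fake_feats[fake_labels == c]
        scores.append(fid(r.mean(0), np.cov(r, rowvar=False),
                          f.mean(0), np.cov(f, rowvar=False)))
    return float(np.mean(scores))
```

When the real and generated feature distributions coincide, both scores are (numerically) zero; a per-class score that is large for one class while the pooled FID stays small is the kind of class-level failure, such as an unlearned class, that conditional metrics are designed to expose.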

Original language: English
Pages (from-to): 1712-1731
Number of pages: 20
Journal: International Journal of Computer Vision
Volume: 129
Issue number: 5
DOIs
State: Published - May 2021

Funding

Funders and funder numbers:

• Horizon 2020 Framework Programme
• European Research Council
• Horizon 2020: 725974

Keywords

• Conditional generation
• Evaluation metrics
• Fréchet Inception Distance
• Image generation
• Inception Score
