Describing Sets of Images with Textual-PCA

Oded Hupert, Idan Schwartz, Lior Wolf

Research output: Contribution to conference › Paper › peer-review

Abstract

We seek to semantically describe a set of images, capturing both the attributes of single images and the variations within the set. Our procedure is analogous to Principal Component Analysis, in which the role of projection vectors is replaced with generated phrases. First, a centroid phrase that has the largest average semantic similarity to the images in the set is generated, where both the computation of the similarity and the generation are based on pretrained vision-language models. Then, the phrase that generates the highest variation among the similarity scores is generated, using the same models. The next phrase maximizes the variance subject to being orthogonal, in the latent space, to the highest-variance phrase, and the process continues. Our experiments show that our method is able to convincingly capture the essence of image sets and describe the individual elements in a semantically meaningful way within the context of the entire set. Our code is available at: https://github.com/OdedH/textual-pca.
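The procedure described in the abstract can be sketched in code. The sketch below is illustrative only: it assumes precomputed CLIP-style image and phrase embeddings, and it *selects* phrases from a fixed candidate pool rather than generating them with a language model as the paper does. All function names are hypothetical.

```python
import numpy as np

def _normalize(x):
    """L2-normalize rows so dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def centroid_phrase(image_embs, phrase_embs):
    """Index of the candidate phrase with the highest average cosine
    similarity to the image set (the 'centroid phrase')."""
    sims = _normalize(image_embs) @ _normalize(phrase_embs).T  # (n_img, n_phr)
    return int(np.argmax(sims.mean(axis=0)))

def principal_phrases(image_embs, phrase_embs, k):
    """Greedily pick k phrases. Each pick maximizes the variance of its
    per-image similarity scores; candidate embeddings are then projected
    orthogonal (in latent space) to the chosen direction, mirroring the
    orthogonality constraint on successive principal phrases."""
    imgs = _normalize(image_embs)
    cands = _normalize(phrase_embs).copy()
    chosen = []
    for _ in range(k):
        sims = imgs @ cands.T
        var = sims.var(axis=0)
        var[chosen] = -np.inf  # never reselect an already-chosen phrase
        idx = int(np.argmax(var))
        chosen.append(idx)
        # Project every remaining candidate orthogonal to this direction.
        d = cands[idx] / np.linalg.norm(cands[idx])
        cands = cands - np.outer(cands @ d, d)
    return chosen
```

In the paper, generation replaces this candidate pool: the vision-language models both score similarity and produce each phrase, and variance maximization plays the role that eigenvector extraction plays in ordinary PCA.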

Original language: English
Pages: 3840-3850
Number of pages: 11
State: Published - 2022
Event: 2022 Findings of the Association for Computational Linguistics: EMNLP 2022 - Abu Dhabi, United Arab Emirates
Duration: 7 Dec 2022 - 11 Dec 2022


Funding

Funders: European Union's Horizon 2020 research and innovation programme; Horizon 2020 Framework Programme (ERC CoG 725974); European Research Council
