Learning Multimodal Affinities for Textual Editing in Images

Or Perel, Oron Anschel, Omri Ben-Eliezer, Shai Mazor, Hadar Averbuch-Elor

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

Nowadays, as cameras are rapidly adopted into our daily routine, images of documents are becoming increasingly abundant. Unlike natural images that capture physical objects, document images contain a significant amount of text with critical semantics and complicated layouts. In this work, we devise a generic unsupervised technique to learn multimodal affinities between textual entities in a document image, considering their visual style, the content of their underlying text, and their geometric context within the image. We then use these learned affinities to automatically cluster the textual entities in the image into distinct semantic groups. The core of our approach is a deep optimization scheme, dedicated to the image provided by the user, which detects and leverages reliable pairwise connections in the multimodal representation of the textual elements to properly learn the affinities. We show that our technique can operate on highly varying images spanning a wide range of documents, and we demonstrate its applicability to various editing operations that manipulate the content, appearance, and geometry of the image.
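To make the idea concrete, below is a minimal, illustrative Python sketch of the general pipeline the abstract describes: per-entity features from three modalities (visual style, textual content, geometry) are fused into one embedding, pairwise affinities are computed, only high-confidence ("reliable") pairs are kept, and the entities are clustered. This is not the authors' implementation; their method learns the affinities with a deep per-image optimization, which is replaced here by a fixed fusion plus off-the-shelf spectral clustering. All names (`multimodal_embedding`, `cluster_entities`, `reliable_tau`) and the threshold value are hypothetical.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.preprocessing import normalize

def multimodal_embedding(style, text, geometry):
    """Fuse per-modality feature matrices (one row per textual entity)
    into a single L2-normalized multimodal embedding."""
    fused = np.concatenate([normalize(m) for m in (style, text, geometry)], axis=1)
    return normalize(fused)

def cluster_entities(emb, n_clusters, reliable_tau=0.8):
    """Cluster entities via spectral clustering on a sparsified affinity
    matrix that keeps only high-confidence ("reliable") pairwise links."""
    affinity = emb @ emb.T                    # cosine similarities (rows are unit-norm)
    affinity = np.clip(affinity, 0.0, 1.0)    # keep affinities non-negative
    affinity[affinity < reliable_tau] = 0.0   # drop unreliable pairs
    np.fill_diagonal(affinity, 1.0)
    model = SpectralClustering(n_clusters=n_clusters,
                               affinity="precomputed", random_state=0)
    return model.fit_predict(affinity)

if __name__ == "__main__":
    # Toy data: 12 entities drawn from 3 underlying semantic groups.
    rng = np.random.default_rng(0)
    groups = np.repeat(np.arange(3), 4)

    def feats(dim):
        centers = rng.normal(size=(3, dim))
        return centers[groups] + 0.05 * rng.normal(size=(12, dim))

    emb = multimodal_embedding(feats(16), feats(32), feats(4))
    print(cluster_entities(emb, n_clusters=3))  # recovers the 3 groups
```

In this sketch the thresholding step plays the role of the paper's reliable pairwise connections: only strongly similar pairs contribute to the clustering, so weak cross-group similarities do not merge semantically distinct groups.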

Original language: English
Article number: 26
Journal: ACM Transactions on Graphics
Volume: 40
Issue number: 3
DOIs
State: Published - Jun 2021
Externally published: Yes

Keywords

  • Clustering
  • Document images
  • Image editing
  • Infographics
  • Multimodal representations
  • Vision and language
