TY - JOUR
T1 - Learning Multimodal Affinities for Textual Editing in Images
AU - Perel, Or
AU - Anschel, Oron
AU - Ben-Eliezer, Omri
AU - Mazor, Shai
AU - Averbuch-Elor, Hadar
N1 - Publisher Copyright:
© 2021 Association for Computing Machinery.
PY - 2021/6
Y1 - 2021/6
N2 - Nowadays, as cameras are rapidly adopted in our daily routine, images of documents are becoming both abundant and prevalent. Unlike natural images that capture physical objects, document-images contain a significant amount of text with critical semantics and complicated layouts. In this work, we devise a generic unsupervised technique to learn multimodal affinities between textual entities in a document-image, considering their visual style, the content of their underlying text, and their geometric context within the image. We then use these learned affinities to automatically cluster the textual entities in the image into different semantic groups. The core of our approach is a deep optimization scheme dedicated to an image provided by the user that detects and leverages reliable pairwise connections in the multimodal representation of the textual elements to properly learn the affinities. We show that our technique can operate on highly varying images spanning a wide range of documents and demonstrate its applicability for various editing operations manipulating the content, appearance, and geometry of the image.
AB - Nowadays, as cameras are rapidly adopted in our daily routine, images of documents are becoming both abundant and prevalent. Unlike natural images that capture physical objects, document-images contain a significant amount of text with critical semantics and complicated layouts. In this work, we devise a generic unsupervised technique to learn multimodal affinities between textual entities in a document-image, considering their visual style, the content of their underlying text, and their geometric context within the image. We then use these learned affinities to automatically cluster the textual entities in the image into different semantic groups. The core of our approach is a deep optimization scheme dedicated to an image provided by the user that detects and leverages reliable pairwise connections in the multimodal representation of the textual elements to properly learn the affinities. We show that our technique can operate on highly varying images spanning a wide range of documents and demonstrate its applicability for various editing operations manipulating the content, appearance, and geometry of the image.
KW - Clustering
KW - Document images
KW - Image editing
KW - Infographics
KW - Multimodal representations
KW - Vision and language
UR - http://www.scopus.com/inward/record.url?scp=85122681566&partnerID=8YFLogxK
U2 - 10.1145/3451340
DO - 10.1145/3451340
M3 - Article
AN - SCOPUS:85122681566
SN - 0730-0301
VL - 40
JO - ACM Transactions on Graphics
JF - ACM Transactions on Graphics
IS - 3
M1 - 26
ER -