In this work we address the task of adapting the model representation of a given image in the context of a second, target image model. We present the BlobEMD framework, in which images are represented as sets of blobs; optimal correspondences are found between the two representations and are used to adapt the representation of the source image to that of the target image. This context-based model adaptation yields similarity measures between images that are insensitive to the segmentation process and to the level of detail of the representation. We demonstrate applications for matching models of heavily dithered images against models of full-resolution images, and for content-based image segmentation, where the transition from regions to representative silhouettes is shown.
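As a rough illustration of the kind of blob-to-blob matching the abstract describes (not the paper's actual formulation, which uses the full Earth Mover's Distance with weighted blobs), the following sketch computes an EMD-style distance between two equal-size, uniformly weighted sets of blob centroids; in this special case the EMD reduces to an optimal one-to-one assignment, solved here by brute force. The function name and the equal-weight simplification are assumptions for illustration only.

```python
import itertools
import math


def blob_emd(blobs_a, blobs_b):
    """Illustrative EMD between two equal-size, equal-weight blob sets.

    Each blob is an (x, y) centroid. With uniform weights the Earth
    Mover's Distance reduces to the minimum-cost one-to-one assignment,
    found here by brute force over permutations (viable only for small
    sets; the paper's framework handles general weighted blobs).
    """
    assert len(blobs_a) == len(blobs_b), "equal-size sets assumed"
    n = len(blobs_a)
    best = math.inf
    for perm in itertools.permutations(range(n)):
        # Mean ground distance under this candidate correspondence.
        cost = sum(math.dist(blobs_a[i], blobs_b[j])
                   for i, j in enumerate(perm)) / n
        best = min(best, cost)
    return best
```

For example, two blob pairs offset vertically by one unit match at a distance of 1.0, since the identity correspondence is cheaper than the crossed one.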
Number of pages: 4
Journal: Proceedings - International Conference on Pattern Recognition
State: Published - 2002