CG2Real: Improving the realism of computer generated images using a large collection of photographs

Micah K. Johnson*, Kevin Dale, Shai Avidan, Hanspeter Pfister, William T. Freeman, Wojciech Matusik

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

70 Scopus citations

Abstract

Computer-generated (CG) images have achieved high levels of realism. This realism, however, comes at the cost of long and expensive manual modeling, and often humans can still distinguish between CG and real images. We introduce a new data-driven approach for rendering realistic imagery that uses a large collection of photographs gathered from online repositories. Given a CG image, we retrieve a small number of real images with similar global structure. We identify corresponding regions between the CG and real images using a mean-shift cosegmentation algorithm. The user can then automatically transfer color, tone, and texture from matching regions to the CG image. Our system only uses image processing operations and does not require a 3D model of the scene, making it fast and easy to integrate into digital content creation workflows. Results of a user study show that our hybrid images appear more realistic than the originals.
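The color and tone transfer described in the abstract can be illustrated with a simple per-channel statistics transfer, in the spirit of classic global color transfer. This is a minimal sketch, not the paper's actual method: CG2Real transfers appearance between cosegmented regions of matched image pairs, whereas the function below (a hypothetical `transfer_color_stats` helper) matches whole-image channel statistics.

```python
import numpy as np

def transfer_color_stats(cg, real):
    """Shift each channel of `cg` so its mean and standard deviation
    match those of `real` -- a simple global color/tone transfer.
    (Illustrative only; the CG2Real system operates on cosegmented
    regions, not whole images.)"""
    cg = cg.astype(np.float64)
    real = real.astype(np.float64)
    out = np.empty_like(cg)
    for c in range(cg.shape[-1]):
        mu_s, sd_s = cg[..., c].mean(), cg[..., c].std()
        mu_t, sd_t = real[..., c].mean(), real[..., c].std()
        scale = sd_t / sd_s if sd_s > 0 else 1.0
        # Recenter and rescale the source channel to the target stats.
        out[..., c] = (cg[..., c] - mu_s) * scale + mu_t
    return np.clip(out, 0.0, 255.0)
```

In practice such transfers are typically performed in a decorrelated color space (e.g. Lab) and weighted by region correspondence, but the core operation is the same statistics matching shown here.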

Original language: English
Article number: 5620893
Pages (from-to): 1273-1285
Number of pages: 13
Journal: IEEE Transactions on Visualization and Computer Graphics
Volume: 17
Issue number: 9
DOIs
State: Published - 2011

Funding

Funders: Funder number
John A. and Elizabeth S. Armstrong Fellowship at Harvard
National Science Foundation: PHY-0835713, 0739255

Keywords

• Image enhancement
• image databases
• image-based rendering
