TY - JOUR
T1 - Makeup lamps
T2 - Live augmentation of human faces via projection
AU - Bermano, Amit H.
AU - Billeter, Markus
AU - Iwai, Daisuke
AU - Grundhöfer, Anselm
N1 - Publisher Copyright:
© 2017 The Author(s).
PY - 2017
Y1 - 2017
AB - We propose the first system for live dynamic augmentation of human faces. Using projector-based illumination, we alter the appearance of human performers during novel performances. The key challenge of live augmentation is latency: an image is generated for a specific pose, but by the time it is projected, the face has moved to a different configuration. Our system therefore aims to reduce latency at every step of the process, from capture, through processing, to projection. Using infrared illumination, an optically and computationally aligned high-speed camera detects facial orientation as well as expression. The estimated expression blendshapes are mapped onto a lower-dimensional space, and the facial motion and non-rigid deformation are estimated, smoothed, and predicted through adaptive Kalman filtering. Finally, the desired appearance is generated by interpolating precomputed offset textures according to time, global position, and expression. We have evaluated our system with an optimized CPU and GPU prototype, demonstrating successful low-latency augmentation for different performers and performances with varying facial expressiveness and motion speed. In contrast to existing methods, the presented system is the first to fully support dynamic facial projection mapping without requiring physical tracking markers, and it incorporates facial expressions.
UR - http://www.scopus.com/inward/record.url?scp=85062690868&partnerID=8YFLogxK
U2 - 10.1111/cgf.13128
DO - 10.1111/cgf.13128
M3 - Article
AN - SCOPUS:85062690868
VL - 36
SP - 311
EP - 323
JO - Computer Graphics Forum
JF - Computer Graphics Forum
SN - 0167-7055
IS - 2
ER -