Abstract
We present Blendshapes GHUM, an on-device ML pipeline that predicts 52 facial blendshape coefficients from a single monocular RGB image at 30+ FPS on modern mobile phones, enabling facial motion capture applications such as virtual avatars. Our main contributions are: i) an annotation-free offline method for obtaining blendshape coefficients from real-world human scans, and ii) a lightweight real-time model that predicts blendshape coefficients from facial landmarks.
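The second contribution, a lightweight model mapping facial landmarks to blendshape coefficients, can be illustrated with a minimal sketch. The landmark count, layer sizes, and random weights below are illustrative assumptions, not the paper's actual architecture; the only grounded details are the 52 output coefficients and that blendshape coefficients conventionally lie in [0, 1].

```python
import numpy as np

NUM_LANDMARKS = 478    # assumption: a dense face-mesh landmark count
NUM_BLENDSHAPES = 52   # from the paper: 52 blendshape coefficients

# Hypothetical untrained weights for a tiny one-hidden-layer MLP.
rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.02, (NUM_LANDMARKS * 3, 256))
b1 = np.zeros(256)
W2 = rng.normal(0.0, 0.02, (256, NUM_BLENDSHAPES))
b2 = np.zeros(NUM_BLENDSHAPES)

def predict_blendshapes(landmarks: np.ndarray) -> np.ndarray:
    """Map (NUM_LANDMARKS, 3) landmark positions to 52 coefficients in [0, 1]."""
    x = landmarks.reshape(-1)                    # flatten to a feature vector
    h = np.maximum(W1.T @ x + b1, 0.0)           # ReLU hidden layer
    logits = W2.T @ h + b2
    return 1.0 / (1.0 + np.exp(-logits))         # sigmoid keeps outputs in [0, 1]

coeffs = predict_blendshapes(rng.normal(size=(NUM_LANDMARKS, 3)))
print(coeffs.shape)  # (52,)
```

A model of this shape is small enough to run per-frame on a phone after the landmark detector, which is consistent with the 30+ FPS on-device claim.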
URL
https://arxiv.org/abs/2309.05782