¹Harbin Institute of Technology  ²Beijing Normal University  ³Tsinghua University
*Corresponding author †Work done during an internship at Tsinghua University
We present GaussianAvatar, an efficient approach to creating realistic human avatars with dynamic 3D appearances from a single video. We start by introducing animatable 3D Gaussians to explicitly represent humans in various poses and clothing styles. Such an explicit and animatable representation can fuse 3D appearances more efficiently and consistently from 2D observations. Our representation is further augmented with dynamic properties to support pose-dependent appearance modeling, where a dynamic appearance network along with an optimizable feature tensor is designed to learn the motion-to-appearance mapping. Moreover, by leveraging the differentiable motion condition, our method enables joint optimization of motions and appearances during avatar modeling, which helps to tackle the long-standing issue of inaccurate motion estimation in monocular settings. The efficacy of GaussianAvatar is validated on both a public dataset and our collected dataset, demonstrating its superior performance in terms of appearance quality and rendering efficiency.
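To make the motion-to-appearance mapping concrete, below is a minimal PyTorch sketch under stated assumptions: an optimizable per-point feature tensor is concatenated with a pose encoding and decoded by a shared MLP into per-point offsets, colors, and scales. The class name DynamicAppearanceNet, the zero initialization, the MLP widths, and the 72-dimensional pose encoding are all illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class DynamicAppearanceNet(nn.Module):
    """Hypothetical sketch: maps a pose encoding plus an optimizable
    feature tensor to per-point offsets, colors, and scales of the
    canonical Gaussians (rotations and opacities stay fixed)."""

    def __init__(self, n_points: int, pose_dim: int = 72, feat_dim: int = 64):
        super().__init__()
        # Optimizable feature tensor: one latent feature per 3D Gaussian.
        self.feature_tensor = nn.Parameter(torch.zeros(n_points, feat_dim))
        # Shared MLP; input = pose encoding concatenated with a point feature.
        self.mlp = nn.Sequential(
            nn.Linear(pose_dim + feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 3 + 3 + 3),  # offset (3) + RGB color (3) + scale (3)
        )

    def forward(self, pose: torch.Tensor):
        n = self.feature_tensor.shape[0]
        # Broadcast the pose encoding to every point, then decode per point.
        x = torch.cat([pose.expand(n, -1), self.feature_tensor], dim=-1)
        offsets, colors, scales = self.mlp(x).split(3, dim=-1)
        # Colors in [0, 1]; scales kept positive via exp.
        return offsets, torch.sigmoid(colors), torch.exp(scales)

A call such as offsets, colors, scales = net(pose) would then supply the pose-dependent properties of the canonical Gaussians described in the figure below.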
Illustration of the proposed method. Our method learns a motion-to-appearance mapping through a dynamic appearance network and an optimizable feature tensor. The predicted point offsets, colors, and scales, together with fixed rotations and opacities, constitute the animatable 3D Gaussians in canonical space. These 3D Gaussians are then deformed into motion space via Linear Blend Skinning (LBS) and rendered as images.
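The LBS step in the caption can likewise be sketched, again assuming PyTorch: each canonical Gaussian center is moved by a weighted blend of per-joint rigid transforms, with the skinning weights and transforms coming from a body model such as SMPL. The function name lbs_deform and the tensor shapes are illustrative, not the authors' API.

import torch

def lbs_deform(points: torch.Tensor,      # (N, 3) canonical Gaussian centers
               weights: torch.Tensor,     # (N, J) skinning weights, rows sum to 1
               transforms: torch.Tensor,  # (J, 4, 4) per-joint rigid transforms
               ) -> torch.Tensor:
    """Blend the per-joint transforms by the skinning weights and apply
    the resulting per-point transform to each canonical center."""
    # Per-point 4x4 transform: weighted sum of joint transforms.
    T = torch.einsum('nj,jab->nab', weights, transforms)   # (N, 4, 4)
    # Homogeneous coordinates, then apply the blended transforms.
    ones = torch.ones(points.shape[0], 1, device=points.device)
    homo = torch.cat([points, ones], dim=-1)               # (N, 4)
    posed = torch.einsum('nab,nb->na', T, homo)            # (N, 4)
    return posed[:, :3]

This sketch deforms only the Gaussian centers; a fuller implementation would also rotate each Gaussian's orientation by the blended rotation before rasterizing the image.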
@inproceedings{hu2024gaussianavatar,
  title={GaussianAvatar: Towards Realistic Human Avatar Modeling from a Single Video via Animatable 3D Gaussians},
  author={Hu, Liangxiao and Zhang, Hongwen and Zhang, Yuxiang and Zhou, Boyao and Liu, Boning and Zhang, Shengping and Nie, Liqiang},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2024}
}