Generating lifelike 3D humans from a single RGB image remains a challenging task in computer vision, as it requires accurate modeling of geometry, high-quality texture, and plausible unseen parts. Existing methods typically use multi-view diffusion models for 3D generation, but these often suffer from view inconsistencies that hinder high-quality 3D human generation. To address this, we propose Human-VDM, a novel method for generating 3D humans from a single RGB image using Video Diffusion Models. Human-VDM provides temporally consistent views for 3D human generation using Gaussian Splatting. It consists of three modules: a view-consistent human video diffusion module, a video augmentation module, and a Gaussian Splatting module. First, the input image is fed into the human video diffusion module to generate a coherent human video. Next, the video augmentation module applies super-resolution and video interpolation to enhance the texture and geometric smoothness of the generated video. Finally, the 3D Human Gaussian Splatting module learns lifelike humans under the guidance of these high-resolution, view-consistent frames. Experiments demonstrate that Human-VDM generates high-quality 3D humans from a single image, outperforming state-of-the-art methods in both qualitative and quantitative evaluations. The code will be made available upon acceptance.
An input image I is first fed into the view-consistent human video diffusion module to generate a coherent human video. Next, the video augmentation module applies super-resolution and frame interpolation to enhance texture and produce high-quality interpolated frames. Finally, the 3D Human Gaussian Splatting module learns a lifelike 3D human from the augmented frames.
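For concreteness, below is a minimal sketch of the three-stage control flow. The three callables are hypothetical placeholders for the modules described above (not the released API); injecting them keeps the sketch runnable without assuming any concrete model interface.

```python
# Minimal sketch of the Human-VDM pipeline. The stage callables are
# hypothetical placeholders for the paper's modules, passed in explicitly
# so the control flow is clear without inventing a concrete model API.
from typing import Callable, List


def human_vdm(image,
              video_diffusion: Callable,  # image -> view-consistent frames
              augment: Callable,          # frames -> SR + interpolated frames
              fit_gaussians: Callable):   # frames -> 3D human Gaussians
    frames: List = video_diffusion(image)  # Stage 1: coherent human video
    frames = augment(frames)                # Stage 2: texture + smoothness
    return fit_gaussians(frames)            # Stage 3: Gaussian Splatting
```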
We propose a novel single-view 3D human generation framework that leverages a human video diffusion model to produce view-consistent human frames.
We carefully design a video augmentation module, consisting of super-resolution and frame interpolation, to enhance the quality of the generated video (see the first sketch after this list).
We introduce an effective Gaussian Splatting framework for 3D human reconstruction with offset prediction (see the second sketch after this list).
Extensive experiments demonstrate that the proposed Human-VDM generates realistic 3D humans from single-view images, outperforming state-of-the-art methods in both qualitative and quantitative evaluations.
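As a rough, runnable illustration of the augmentation step, the sketch below uses bicubic upsampling as a stand-in for the super-resolution network and linear frame blending as a stand-in for the learned interpolation model; the actual module uses dedicated networks for both steps.

```python
# Toy stand-in for the video augmentation module: bicubic upsampling
# replaces the super-resolution network and linear blending replaces the
# learned frame interpolator. Illustrative only, not the paper's models.
import numpy as np
from PIL import Image


def augment_video(frames: list[Image.Image], scale: int = 4) -> list[Image.Image]:
    # Super-resolution stand-in: bicubic 4x upsampling of every frame.
    hi = [f.resize((f.width * scale, f.height * scale), Image.BICUBIC)
          for f in frames]

    # Interpolation stand-in: insert the average of each adjacent pair,
    # densifying the views that later guide Gaussian fitting.
    out = []
    for a, b in zip(hi, hi[1:]):
        out.append(a)
        mid = (np.asarray(a, np.float32) + np.asarray(b, np.float32)) / 2
        out.append(Image.fromarray(mid.astype(np.uint8)))
    out.append(hi[-1])
    return out
```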
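The offset-prediction idea can be sketched as follows, assuming Gaussians anchored to body-template points whose positions are refined by learned per-point offsets; the parameter names and shapes are illustrative, not the paper's exact design.

```python
# Hedged sketch of Gaussian Splatting with offset prediction: Gaussians
# are anchored to template points and displaced by learnable offsets.
# Parameterization below is an assumption for illustration.
import torch
import torch.nn as nn


class OffsetGaussians(nn.Module):
    def __init__(self, template_xyz: torch.Tensor):
        super().__init__()
        n = template_xyz.shape[0]
        self.register_buffer("template_xyz", template_xyz)  # (N, 3) anchors
        self.offsets = nn.Parameter(torch.zeros(n, 3))      # learned displacement
        self.log_scales = nn.Parameter(torch.zeros(n, 3))   # per-axis scale
        self.rotations = nn.Parameter(torch.zeros(n, 4))    # quaternion residual
        self.colors = nn.Parameter(torch.rand(n, 3))        # RGB appearance
        self.opacity_logits = nn.Parameter(torch.zeros(n, 1))

    def forward(self):
        xyz = self.template_xyz + self.offsets  # offset-corrected positions
        scales = self.log_scales.exp()
        rot = torch.nn.functional.normalize(
            self.rotations + torch.tensor([1.0, 0.0, 0.0, 0.0]), dim=-1)
        opacity = torch.sigmoid(self.opacity_logits)
        return xyz, scales, rot, self.colors, opacity


# Toy usage: 6890 anchors (the SMPL vertex count) placed at random.
model = OffsetGaussians(torch.randn(6890, 3))
xyz, scales, rot, rgb, opacity = model()
# In the full method, these Gaussians would be rendered with a
# differentiable rasterizer and supervised by the augmented frames.
```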
Quantitative comparison of the proposed Human-VDM with recent state-of-the-art models on the THuman 2.0 dataset.
Qualitative results of Human-VDM compared with state-of-the-art models, including in-the-wild testing.
@misc{liu2024humanvdmlearningsingleimage3d,
title={Human-VDM: Learning Single-Image 3D Human Gaussian Splatting from Video Diffusion Models},
author={Zhibin Liu and Haoye Dong and Aviral Chharia and Hefeng Wu},
year={2024},
eprint={2409.02851},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2409.02851},
}