University of Surrey


Learning to animate volumetric video.

Regateiro, João P. C. (2020) Learning to animate volumetric video. Doctoral thesis, University of Surrey.

Regateiro_thesis.pdf - Version of Record
Available under License Creative Commons Attribution Non-commercial Share Alike.

Download (78MB)

Abstract

4D human performance capture aims to create volumetric representations of observed human subjects performing arbitrary motions, with the ability to replay and render dynamic scenes with the realism of the recorded video. This representation has the potential to enable highly realistic content production for immersive virtual and augmented reality experiences. Human models are typically rendered using detailed, explicit 3D models, which consist of meshes and textures, and animated using tailored motion models to simulate human behaviour and activity. However, designing a realistic 3D human model is still a costly and laborious process. Hence, this work investigates techniques to learn models of human body shape and appearance, aiming to facilitate the generation of highly realistic human animation, and demonstrates their potential contributions, applications, and versatility. The first contribution of this work is a skeleton-driven surface registration approach to generate temporally consistent meshes from multi-view video of human subjects. 2D pose detections from multi-view video are used to estimate 3D skeletal pose on a per-frame basis, which allows a reference frame to be matched to the pose estimate of every other frame in a sequence. This provides an initial coarse alignment, followed by a patch-based non-rigid mesh deformation to generate temporally consistent mesh sequences. The second contribution presents techniques to represent human-like shape using a compressed learnt model from 4D volumetric performance capture data. Sequences of 4D dynamic geometry representing a human are encoded with a generative network into a compact latent representation, whilst maintaining the original properties, such as surface non-rigid deformations. This compact representation enables synthesis, interpolation and generation of 3D shapes.
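The per-frame 3D skeletal pose estimation described above relies on triangulating corresponding 2D joint detections across calibrated views. A minimal two-view DLT triangulation sketch of that step (the camera matrices and joint position below are illustrative values, not data from the thesis):

```python
# Illustrative sketch: recover a 3D joint position from 2D detections in two
# calibrated views via linear (DLT) triangulation. Cameras and the test joint
# are made-up toy values, not the thesis's capture setup.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """DLT triangulation of one point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: 2D detections (u, v) of the same joint in each view.
    """
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]           # de-homogenise

# Two toy cameras: identity intrinsics, second camera shifted along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 5.0])       # a "joint" in 3D space
x1 = P1 @ np.append(X_true, 1.0)          # project into each view
x2 = P2 @ np.append(X_true, 1.0)
x1, x2 = x1[:2] / x1[2], x2[:2] / x2[2]

X_est = triangulate(P1, P2, x1, x2)
print(X_est)   # recovers approximately [0.2, -0.1, 5.0]
```

With noise-free projections the linear solution is exact up to floating-point precision; in practice, per-frame estimates from many views would be combined and temporally filtered before driving the coarse mesh alignment.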
The third contribution is Deep4D, a generative network capable of compactly representing 4D volumetric video sequences from the skeletal motion of people with two orders of magnitude compression. A variational encoder-decoder is employed to learn an encoded latent space that maps from 3D skeletal pose to 4D shape and appearance. This enables high-quality 4D volumetric video synthesis to be driven by skeletal animation. Finally, this thesis introduces the Deep4D motion graph, which implicitly combines multiple captured motions in a unified representation for character animation from volumetric video, allowing novel character movements to be generated with dynamic shape and appearance detail. Deep4D motion graphs allow character animation to be driven by skeletal motion sequences, providing a compact encoded representation capable of high-quality synthesis of 4D volumetric video with two orders of magnitude compression.
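A minimal numerical sketch of the idea behind a pose-conditioned variational encoder-decoder, as described above: a skeletal pose is encoded to a latent Gaussian, a code is sampled via the reparameterisation trick, and the decoder maps the code to per-vertex mesh geometry. All layer sizes, names, and the single-linear-layer "networks" here are illustrative assumptions, not the thesis architecture:

```python
# Sketch (assumed shapes and toy random weights, not the thesis model):
# pose -> latent Gaussian -> sampled code -> mesh vertices, with a latent
# bottleneck far smaller than the raw per-frame vertex data.
import numpy as np

rng = np.random.default_rng(0)

J, V, D = 25, 20000, 128           # joints, mesh vertices, latent size (assumed)
pose = rng.normal(size=J * 3)      # flattened 3D joint positions for one frame

# Encoder: pose -> latent Gaussian parameters (one linear map each, for brevity)
W_mu = rng.normal(size=(D, J * 3)) * 0.1
W_lv = rng.normal(size=(D, J * 3)) * 0.1
mu, log_var = W_mu @ pose, W_lv @ pose

# Reparameterisation trick: z = mu + sigma * eps, so sampling stays differentiable
z = mu + np.exp(0.5 * log_var) * rng.normal(size=D)

# Decoder: latent code -> per-vertex 3D positions of the mesh
W_dec = rng.normal(size=(V * 3, D)) * 0.01
vertices = (W_dec @ z).reshape(V, 3)

print(vertices.shape)                     # (20000, 3)
print(f"latent vs raw: ~{V * 3 // D}x")   # well over two orders of magnitude
```

Here the compression comes from storing (or transmitting) only the D-dimensional code per frame instead of V × 3 vertex coordinates; a trained decoder reconstructs the dense geometry, and the same principle extends to appearance.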

Item Type: Thesis (Doctoral)
Divisions : Theses
Authors : Regateiro, João P. C.
Date : 30 June 2020
Funders : Centre for Vision, Speech and Signal Processing (CVSSP)
DOI : 10.15126/thesis.00856979
Contributors :
Thesis supervisor : Hilton, Adrian (A.Hilton@surrey.ac.uk)
Thesis supervisor : Volino, Marco (marco.volino@surrey.ac.uk)
Depositing User : Joao Pedro Regateiro
Date Deposited : 09 Jul 2020 15:06
Last Modified : 09 Jul 2020 15:07
URI: http://epubs.surrey.ac.uk/id/eprint/856979




© The University of Surrey, Guildford, Surrey, GU2 7XH, United Kingdom.
+44 (0)1483 300800