University of Surrey


Non-rigid alignment for temporally consistent 3D video.

Budd, C. W. (2011) Non-rigid alignment for temporally consistent 3D video. Doctoral thesis, University of Surrey (United Kingdom).

Available under License Creative Commons Attribution Non-commercial Share Alike.



This thesis presents methods for the temporal alignment of 3D performance capture data. Sequential tracking is discussed, along with the introduction of a novel approach for non-rigid free-form surfaces that improves on previous approaches by combining geometric and photometric features. This reduces drift and increases the reliability of tracking for complex 3D video sequences of people. Subsequently, non-sequential tracking is achieved through the introduction of a novel shape similarity tree representation. Combined with sequential tracking, this approach enables alignment of an entire database of multiple 3D video sequences into a consistent mesh structure, and is the first approach to enable alignment across multiple sequences. Non-sequential alignment is also shown to reduce drift and improve the reliability of surface alignment, overcoming situations where sequential tracking fails.

Representing 3D video sequences in this way provides a number of advantages over previous work in the area. Firstly, with a hierarchical tree representation of a sequence, tracking is capable of recovering from even severe errors. Secondly, the tree divides the sequence into multiple tracking paths, represented by the branches of the tree itself. This allows tracking of long sequences with relatively few sequential alignment steps: the number of sequential alignment steps grows with the depth of the tree structure rather than linearly with the length of the sequence. Thirdly, automatic shape matching allows global alignment of multiple sequences of the same character, making it possible to produce databases of temporally consistent 3D video.

Temporally consistent 3D performance capture is an essential step in producing an animation framework based on reconstructed video data. Current reconstruction primarily produces a topologically different representation of each frame, so editing a sequence requires altering every frame within it. A temporally consistent representation would allow automatic propagation of edits with techniques such as space-time editing. With a fully consistent database of motions for a character, it is possible to parametrise and blend between motions using techniques such as motion graphs, allowing re-animation of the captured character.
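The abstract's tree-depth argument can be made concrete with a small sketch. The following builds a tree over frames as a minimum spanning tree of pairwise shape dissimilarities (a plausible reading of a "shape similarity tree", not necessarily the thesis's construction), so that each frame is aligned to its most similar parent; the number of pairwise alignment steps to reach any frame is then its depth in the tree rather than its index in the sequence. All names, the dissimilarity matrix, and the MST choice are illustrative assumptions.

```python
# Illustrative sketch (assumption, not the thesis implementation):
# build a shape similarity tree as a minimum spanning tree over pairwise
# frame dissimilarities using Prim's algorithm.

def build_similarity_tree(dissimilarity):
    """dissimilarity[i][j]: cost of aligning frame i to frame j.
    Returns parent[i] for each frame; the root (frame 0) has parent None."""
    n = len(dissimilarity)
    in_tree = {0}
    parent = [None] * n
    while len(in_tree) < n:
        # Greedily attach the out-of-tree frame most similar to any in-tree frame.
        i, j = min(
            ((a, b) for a in in_tree for b in range(n) if b not in in_tree),
            key=lambda edge: dissimilarity[edge[0]][edge[1]],
        )
        parent[j] = i
        in_tree.add(j)
    return parent

def depth(parent, i):
    """Number of pairwise alignment steps from the root to frame i."""
    d = 0
    while parent[i] is not None:
        i = parent[i]
        d += 1
    return d
```

For example, if one frame is similar to all others, the tree becomes a shallow star and every frame is reachable in one or two alignment steps, however long the sequence; a purely sequential tracker would instead accumulate one step (and its drift) per frame.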

Item Type: Thesis (Doctoral)
Divisions: Theses
Authors: Budd, C. W.
Date: 2011
Depositing User: EPrints Services
Date Deposited: 09 Nov 2017 12:13
Last Modified: 16 Mar 2018 13:17




