Learnt Inverse Kinematics for Animation Synthesis
Ong, E-J and Hilton, A (2006) Learnt Inverse Kinematics for Animation Synthesis. Graphical Models, 68 (5-6). pp. 472-483.
Existing work on animation synthesis can be roughly split into two approaches: those that combine segments of motion capture data, and those that perform inverse kinematics. In this paper, we present a method for performing animation synthesis of an articulated object (e.g. a human body or a dog) from a minimal set of body joint positions, following the inverse kinematics approach. We tackle this problem from a learning perspective. Firstly, we address the need for knowledge of the physical constraints of the articulated body, so as to avoid the generation of physically impossible poses. A common solution is to heuristically specify the kinematic constraints for the skeleton model. In this paper, however, the physical constraints of the articulated body are represented using a hierarchical cluster model learnt from a motion capture database. Additionally, we show that the learnt model automatically captures the correlation between different joints through the simultaneous modelling of their angles. We then show how this model can be utilised to perform inverse kinematics in a simple and efficient manner. Crucially, we describe how IK is carried out from a minimal set of end-effector positions. Following this, we show how this "learnt inverse kinematics" framework can be used to perform animation synthesis of different types of articulated structures. To this end, the results presented include the retargeting of a flat-surface walking animation to various uneven terrains, demonstrating the synthesis of full human body motion from the positions of only the hands, feet and torso. Additionally, we show how the same method can be applied to the animation synthesis of a dog using only its feet and torso positions.
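The core idea of the abstract, learning a cluster model of valid joint configurations from motion capture and then answering IK queries by searching that model for a pose whose end-effectors match the targets, can be sketched in miniature. The sketch below uses a planar two-link arm and a flat k-means clustering, not the paper's hierarchical cluster model or skeleton; all function names, link lengths, and the synthetic "motion capture" data are illustrative assumptions.

```python
import numpy as np

def fk(angles, lengths=(1.0, 1.0)):
    """Forward kinematics of a planar 2-link arm: joint angles -> end-effector (x, y)."""
    t1, t2 = angles
    x = lengths[0] * np.cos(t1) + lengths[1] * np.cos(t1 + t2)
    y = lengths[0] * np.sin(t1) + lengths[1] * np.sin(t1 + t2)
    return np.array([x, y])

def learn_pose_clusters(motion_data, k=8, iters=50, seed=0):
    """Naive k-means over joint-angle vectors, standing in for the paper's
    hierarchical cluster model of physically valid poses."""
    rng = np.random.default_rng(seed)
    centres = motion_data[rng.choice(len(motion_data), k, replace=False)]
    for _ in range(iters):
        # Assign each training pose to its nearest cluster centre.
        d = np.linalg.norm(motion_data[:, None] - centres[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = motion_data[labels == j]
            if len(members):
                centres[j] = members.mean(axis=0)
    return centres

def learnt_ik(target, centres):
    """Return the learnt cluster centre whose forward-kinematic end-effector
    position is closest to the target. Because every candidate comes from the
    learnt model, the answer is always a pose seen (on average) in training."""
    errs = [np.linalg.norm(fk(c) - target) for c in centres]
    return centres[int(np.argmin(errs))]

# Synthetic "motion capture" set: only elbow-bent poses are ever observed,
# so the learnt model implicitly encodes that joint-limit constraint.
rng = np.random.default_rng(1)
data = np.column_stack([rng.uniform(0.0, np.pi, 500),
                        rng.uniform(0.2, 2.5, 500)])
centres = learn_pose_clusters(data)
pose = learnt_ik(np.array([1.2, 0.8]), centres)
```

A real system would replace the nearest-centre lookup with a search or local optimisation within the hierarchy, and the angle vectors would span the whole skeleton so that inter-joint correlations are modelled jointly, as the abstract describes.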
|Divisions :||Faculty of Engineering and Physical Sciences > Electronic Engineering > Centre for Vision Speech and Signal Processing|
|Identification Number :||https://doi.org/10.1016/j.gmod.2006.07.004|
|Additional Information :||NOTICE: this is the author’s version of a work that was accepted for publication in Graphical Models. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Graphical Models, 68, (5-6), Sep-Nov 2006, DOI 10.1016/j.gmod.2006.07.004.|
|Depositing User :||Symplectic Elements|
|Date Deposited :||24 Sep 2014 09:05|
|Last Modified :||24 Sep 2014 13:33|