ABSTRACT

Current interactive character authoring pipelines commonly consist of two steps: modeling and rigging of the character model, which may be based on photographic reference or high-resolution 3D laser scans (Chapters 13 and 14); and construction of move-trees from a database of skeletal motion capture, together with inverse dynamics and kinematics solvers for secondary motion. Motion graphs [Kovar et al. 02] and parameterized skeletal motion spaces [Heck and Gleicher 07] enable representation and real-time interactive control of character movement from motion capture data. Authoring interactive characters currently requires a high level of manual editing to achieve acceptable realism of appearance and movement.