
Controlling Physics in Realistic Character Animation

To solve the problem of generating realistic human motion, animation software must not only produce realistic movement but also give the animator full control of the process.

For the last 10 years, computers have been used with great success to produce realistic motion of passive structures by simulating the physical laws of motion, something that would be very difficult to do by hand. Examples include the simulation of colliding rigid bodies and of cloth motion. It would seem that creating realistic character animation would not be significantly more difficult than computing the motion of cloth or other such passive objects. As with passive-object simulations, a character's motion needs to be consistent with the laws of physics for an animation to appear realistic. But consistency alone is not sufficient for generating realistic-looking animations: unlike passive objects, a character actively produces the forces that create its locomotion.

Humans can use their muscles in many different ways to walk or run, yet only a small subset of the resulting physically consistent motions looks realistic (see Figure 1). To produce natural-looking motion, it is not enough for the motion to be physically correct; the intricacies of the character's muscles and bones, and the way they shape movement, must also be taken into account. Providing the animator with the ability to control this highly involved process adds further difficulty. This dual goal of controlled realism motivated me (and my advisor Andy Witkin, now at Pixar Animation Studios) to devise the methodology described here.


Reusable Character Motion Libraries

The prevalent use of keyframing and procedural methods in computer animation stems from the fact that these methods put full control of the resulting motion in the hands and imagination of the animator. The burden of animation quality rests entirely on the animator, much as puppeteers have full control over the movement of their marionettes by pulling on specific strings. Highly skilled animators and special-effects wizards appreciate this low-level control, because it allows them to fully express their artistry.

Having so much control also makes it that much more difficult for the world’s less-talented animators to create animations. In fact, the task of appropriately positioning a character in a specific pose at the right time is arduous, even for the simplest animation. Instead of incrementally setting keyframes for various character poses, the unskilled animator would ideally like to be able to edit high-level motion constructs. For example, an animator might want to reposition footprints or simply specify that a movement should be more energetic. Alternatively, the animator might want to impose greater importance on balance while performing a movement, or change a character’s behavior by specifying the walking surface be significantly more slippery.

The animator should also be able to access a human-run library, instantiating a specific run by demanding realism and specifying the character dimensions, foot placements, even the emotional state of the runner. By limiting, say, the left knee's range of motion, the library would produce a limping run satisfying all the previous requirements. The animator should be able to specify finer-detail constraints on bounce quality, air time, and specific arm poses (see the first sidebar). Finally, it should be possible to merge a collection of instantiated motion sequences from the library, including, say, a human run, a human jump, karate kicks, a soccer-ball kick, and a tennis serve, to produce a seamless character animation. The use of such human-movement libraries would make computer animation a much more accessible storytelling medium, useful to a far more diverse population of computer-content providers.

One of the most difficult problems posed by such extremely flexible libraries is maintaining the realism of the motion despite all the possible changes to the motion specification. Although not always needed, the realism requirement would enable even unskilled animators to create the motion of synthetic humans, arguably one of the greatest challenges in computer animation. Realistic synthetic humans would be very useful in a number of areas:

  • Education. Using desktop PCs, children could learn from personal instructors that seem as real as the teachers in their classrooms.
  • Entertainment. In digital filmmaking and video-game design, creating realistic human characters is perhaps the greatest open challenge. More sophisticated models of human motion and appearance are needed.
  • Human-computer interaction. The face any computer shows its user is impersonal. Realistic full-bodied human “guides” would make computers more accessible to a wider population.
  • Teleconferencing. By combining video with more sophisticated shape and motion information, teleconferencing would obviate the need for using single-viewpoint video, allowing participants more freedom of movement and a greater sense of presence. Imagine, in the next two to five years, realistic synthetic avatars representing us in the distributed digital world, changing the notion of teleconferencing as we’ve come to know it.

Although people are generally quick to perceive the subtleties of human motion in nature, our perceptual understanding of natural movement offers little help to an animator trying to generate such motion. Moreover, synthesizing and analyzing the high-quality motion of dynamic 3D articulated characters has proven to be an extremely difficult problem. The collective knowledge of biomechanics, control theory, robot-motion planning, and computer animation indicates that the underlying processes governing motion are complex and difficult to control.

The novel approach to creating animations I describe here maintains realism throughout all such motion modifications. Instead of motion synthesis, I approach animation through "motion transformation," or the adaptation of existing motion. For example, instead of generating an animation from scratch, I transform existing human-run sequences by changing their parameters until the resulting motion meets the needs of the animation. A number of other computer-animation researchers have also adopted the motion-transformation approach, which has arguably become the most active research direction in computer animation [2, 3, 6, 12].

Any dynamically plausible motion, whether captured or physically simulated, can be used as input to my transformation algorithm. The first step in the algorithm is to construct a simplified character model and fit the motion of the simplified model to the captured motion data. This fitted motion is a physical spacetime-optimization solution including the body's mass properties, pose, footprint constraints, and muscles, as well as the motion property being optimized, called the "objective function" [11] (see the second sidebar). To edit an animation, the animator modifies the constraints and physical parameters of the model and other spacetime-optimization parameters, including limb geometry, footprint positions, objective function, and gravity. From this altered spacetime "parameterization," the algorithm computes a transformed motion sequence and maps the motion change of the simplified model back onto the original motion to produce the final animation sequence.
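In general form, a spacetime-optimization problem can be written as follows (after the spacetime-constraints formulation of Witkin and Kass [11]; the notation here is generic rather than the article's):

```latex
\min_{q(t),\,u(t)} \int_{t_0}^{t_1} E\bigl(q(t), u(t)\bigr)\,dt
\quad \text{subject to} \quad
f\bigl(q, \dot{q}, \ddot{q}, u\bigr) = 0, \qquad
C_j\bigl(q(t_j)\bigr) = 0,
```

where q(t) collects the character's DOF trajectories, u(t) the muscle forces, E the objective function (total muscle effort, for example), f the Newtonian equations of motion under muscle forcing, and the C_j the pose and footprint constraints.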

In addition to providing a methodology for ensuring the realism of such motion, the spacetime-optimization model is an intuitive tool for the high-level editing of motion sequences, including foot placement and timing; the kinematic structure of the character; the dynamic environment of the animation; and the motion property being optimized by the animation task. The algorithm's ability to preserve the dynamics of motion, together with its rich set of motion controls, enables animators to create motion libraries from a single input motion sequence. Once the original motion is fitted onto the spacetime-optimization model, the model can be presented to the animator as a tool for generating movement that meets the specifications of the animation at hand.


Transforming the Motion Sequence

As in other motion-capture editing methods, my algorithm (presented in 1999) does not synthesize motion from scratch. Instead, it transforms the input motion sequence to satisfy the needs of the animation. Although its development was motivated by the general need to enable realistic high-level control of high-quality captured-motion sequences, the same methods can be applied to realistic motion from arbitrary sources [8, 9].

At its core, the algorithm uses the spacetime-optimization formulation, which maintains the dynamic integrity of motion and provides intuitive motion control. Before these results were available, dynamic spacetime-optimization methods were used exclusively for motion synthesis, rather than for motion transformation [7, 10].
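To make the formulation concrete, the following minimal sketch poses a tiny spacetime problem in the spirit of Witkin and Kass [11]: a one-dimensional point mass, driven by a single abstract "muscle" force, must rise from height 0 to height 1 in one second while minimizing muscle effort, with Newtonian dynamics enforced as equality constraints. The discretization, solver choice, and all names are illustrative assumptions, not the article's implementation.

```python
import numpy as np
from scipy.optimize import minimize

N, T, m, g = 20, 1.0, 1.0, 9.8        # time samples, duration, mass, gravity
dt = T / (N - 1)

def unpack(x):
    return x[:N], x[N:]               # positions q[0..N-1], muscle forces u

def objective(x):
    _, u = unpack(x)
    return np.sum(u**2) * dt          # "muscle effort" objective function

def dynamics(x):                      # must be zero: m*qddot = u - m*g
    q, u = unpack(x)
    qdd = (q[2:] - 2*q[1:-1] + q[:-2]) / dt**2
    return m * qdd - (u[1:-1] - m * g)

constraints = [
    {"type": "eq", "fun": dynamics},                                 # physics
    {"type": "eq",                                                   # poses
     "fun": lambda x: unpack(x)[0][[0, -1]] - np.array([0.0, 1.0])},
]
x0 = np.concatenate([np.linspace(0, 1, N), np.full(N, m * g)])       # start
res = minimize(objective, x0, constraints=constraints, method="SLSQP")
q_opt, u_opt = unpack(res.x)
print("converged:", res.success, " effort:", round(res.fun, 3))
```

The article's setting has the same structure, but with many DOFs per character, spline-represented trajectories, and footprint constraints in place of the two endpoint poses.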

Also worth noting is that spacetime optimization is different from the robot-controller-simulation approach to character animation [4, 5]. Although both approaches generate realistic motion, robot-controller approaches do not solve directly for motion paths. Instead, they construct controllers that generate forces at a character’s joints based on the state of the character’s dynamic properties. Once the controllers are generated by a robot-control developer, the actuated character is placed in the dynamic simulation environment to produce the final motion sequence.

Spacetime optimization does not use controllers or perform simulations. An animation problem is phrased as a large variational optimization, whose solution is the input motion-capture sequence. Unfortunately, spacetime-optimization methods have not been shown to be feasible for computing human motion over long periods of time because of nonlinearities and the explosion of the parameter space. For this reason, my approach first simplifies the character model. The entire transformation process (see Figure 2) involves four main stages:

  • Character simplification. The tool developer creates an abstract character model containing the minimal number of degrees of freedom (DOFs) necessary to capture the essence of the input motion while mapping input motion onto the simplified model.
  • Spacetime motion fitting. The tool developer finds the spacetime-optimization problem whose solution closely matches the simplified character motion.
  • Spacetime edit. The animator then adjusts spacetime motion parameters, introduces new pose constraints, changes the character kinematics, defines the objective function, and more.
  • Motion reconstruction. The algorithm remaps the change in motion introduced by the spacetime edit onto the original motion to produce the final animation.

Once the spacetime model is computed, it can be reused to generate a wide range of animations. The spacetime-edit and motion-reconstruction stages take much less time to compute than the first two stages, enabling the computation of transformed motion sequences at near-interactive speeds.

Instead of solving spacetime-constraint optimizations on the full character, the tool developer first constructs a simplified character model that the algorithm then uses for all spacetime optimizations (see Figure 3). Simplified models capture the minimum amount of structure necessary for the input motion task, thus capturing the essence of the input motion. Subsequent motion transformations modify this abstract representation while preserving the specific feel and uniqueness of the original motion. The simplification process draws from ideas in biomechanics research [1]. One of them, abstractly speaking, is that highly dynamic natural motion is created by "throwing the mass around," that is, by changing the relative position of body mass. The result is that a human arm with more than 10 DOFs can be represented by a rigid object with only three shoulder DOFs without losing much of its "mass-displacement ability." Simplification of body parts also depends on the type of input motion. For example, although simplifying an arm may work well for a human-run motion, the same simplification would not be useful for representing, say, a ball-throwing motion.

In the motion libraries I've created so far, the simplification process reduces the number of kinematic DOFs, as well as muscle DOFs, by a factor of two to five (see Figure 4). Since each DOF is represented by hundreds of unknown coefficients during the optimization process, simplification reduces the size of the optimization by as many as 1,000 unknowns. More important, a character with fewer DOFs also creates constraints with significantly smaller nonlinearities. In practice, the optimization converges without difficulty for the simplified character models but fails to converge for the full character model.
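To see where those unknowns come from, note that each DOF trajectory is represented by spline control coefficients over the motion's duration. The sketch below counts them; the DOF counts, spline resolution, and two-second duration are hypothetical numbers chosen only to land in the range described above.

```python
# Each DOF trajectory is a spline, so its optimization unknowns are the
# spline's control coefficients; dropping DOFs drops hundreds of unknowns.
import numpy as np
from scipy.interpolate import BSpline

def trajectory_unknowns(duration_s, coeffs_per_second=50, degree=3):
    """Spline coefficients (optimization unknowns) for one DOF trajectory."""
    n = int(duration_s * coeffs_per_second)
    knots = np.concatenate([np.zeros(degree),                  # clamped start
                            np.linspace(0.0, duration_s, n - degree + 1),
                            np.full(degree, duration_s)])      # clamped end
    return len(BSpline(knots, np.zeros(n), degree).c)

per_dof = trajectory_unknowns(2.0)          # ~100 unknowns per DOF
full, simplified = 20 * per_dof, 8 * per_dof
print(f"full model: {full} unknowns; simplified: {simplified};"
      f" saved: {full - simplified}")       # on the order of 1,000 unknowns
```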

Character simplification is performed manually by applying three basic principles (a toy sketch follows the list):

  • DOF removal. Some body parts are fused together by removing the DOFs linking them. Elbow and wrist DOFs are usually removed for running and walking motion sequences in which they have little influence on motion.
  • Node subtree removal. In some cases of high-energy motion, the entire subtree of the character hierarchy can be replaced with a single object, usually a mass point with three translational DOFs. For example, the upper body of a human character can be reduced to a mass point for various jumping-motion sequences in which the upper body “catapults” in the direction of the jump.
  • Symmetric movement. Broad-jump motions contain inherent symmetry, as both legs move in unison. Thus, the simplification process abstracts both legs into one, turning the character into a monopode, as if it were a pogo stick.
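The three principles can be pictured as operations on a joint hierarchy. In the sketch below, the joint names, DOF lists, and choices of what to simplify are illustrative assumptions, applied as they might be for a broad jump:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Joint:
    name: str
    dofs: List[str]
    children: List["Joint"] = field(default_factory=list)

def remove_dofs(joint: Joint, names: List[str]) -> None:
    """Principle 1: fuse body parts by deleting the DOFs linking them."""
    joint.dofs = [d for d in joint.dofs if d not in names]

def collapse_to_point_mass(joint: Joint) -> None:
    """Principle 2: replace a whole subtree with a 3-DOF mass point."""
    joint.children = []
    joint.dofs = ["tx", "ty", "tz"]

def merge_symmetric_legs(root: Joint) -> None:
    """Principle 3: abstract two legs moving in unison into a monopode."""
    legs = [c for c in root.children if c.name.endswith("_leg")]
    root.children = [c for c in root.children if c not in legs]
    legs[0].name = "merged_leg"
    root.children.append(legs[0])

# Build a tiny character, then simplify it for a broad-jump motion.
arm = Joint("arm", ["shoulder_x", "shoulder_y", "shoulder_z", "elbow"])
upper = Joint("upper_body", ["spine_x", "spine_y"], [arm])
left = Joint("left_leg", ["hip", "knee", "ankle"])
right = Joint("right_leg", ["hip", "knee", "ankle"])
root = Joint("pelvis", ["tx", "ty", "tz", "rx", "ry", "rz"],
             [upper, left, right])

remove_dofs(arm, ["elbow"])      # forearm fused to upper arm
collapse_to_point_mass(upper)    # upper body "catapults" as a mass point
merge_symmetric_legs(root)       # both legs in unison: a pogo-stick leg
```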

Once the character model is simplified, the original motion can be mapped onto it. However, before the animator can edit the motion with spacetime constraints, the tool developer must produce motion of the simplified model that is not only dynamically correct but realistic, by identifying the spacetime-optimization problem whose solution comes very close to the original motion.

The motion-transformation framework uses an abstract representation of muscles to apply forces directly onto DOFs, much as robotic servo motors positioned at joints apply forces on robotic limbs. The algorithm places these muscles at each character DOF, ensuring a minimal set of muscles that still achieves the full range of character motion.

Most of the spacetime constraints fall out of the input motion. For example, in a run or walk sequence, the library creator specifies mechanical point constraints at every moment the foot is in contact with the floor. Similarly, a leg-kick animation defines a pose constraint at the time the leg strikes the target. An animator can also introduce additional constraints to add control during motion editing; for example, the animator might introduce a hurdle obstacle into the human-jump-motion environment, forcing the character to, say, clear a certain height during flight.
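One plausible way such contact constraints could be derived from the input motion is to flag frames where a foot is at floor height and nearly still, then emit one point constraint per contiguous contact interval. In this sketch, the thresholds, the 60-frames/s rate, and the toy foot trajectory are all assumptions:

```python
import numpy as np

dt = 1.0 / 60.0
t = np.arange(0.0, 2.0, dt)
foot_y = np.maximum(0.0, np.sin(2 * np.pi * t))   # flat spans = floor contact

speed = np.abs(np.gradient(foot_y, dt))
in_contact = (foot_y < 0.02) & (speed < 0.5)      # low and nearly stationary

rises = np.flatnonzero(in_contact[1:] & ~in_contact[:-1]) + 1
falls = np.flatnonzero(~in_contact[1:] & in_contact[:-1]) + 1
if in_contact[-1]:                                # contact runs to the end
    falls = np.append(falls, len(t))

constraints = [{"t_start": t[s], "t_end": t[e - 1], "foot_pos": (0.0, 0.0)}
               for s, e in zip(rises, falls)]     # pin foot during contact
print(len(constraints), "floor-contact constraints")
```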

With the spacetime-optimization problem defined appropriately, the animator can edit the intuitive “control knobs” of the spacetime-optimization formulation to produce a nearly inexhaustible number of different realistic motion sequences.


Spacetime Edits

A spacetime-optimization formulation provides powerful and intuitive control of many aspects of the dynamic animation: pose and environment constraints, explicit kinematic and dynamic properties of the character, and the objective function.

By changing existing constraints, the animator can rearrange foot placements in both space and time. For example, a human-run sequence can be changed into a zig-zag run on an uphill slope by moving the floor-contact constraints farther apart while progressively elevating them. The constraint timing can also be changed; for example, extending the floor-contact duration of one leg creates an animation in which the character appears to favor one leg. The animator can also introduce new obstacles along the running path, producing new constraints that might require legs to clear a specified height during the flight phase of the run. It would also be possible to alter the environment of the run by changing the gravity constant, producing a human-run sequence on, say, the moon's surface.
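As a concrete illustration, the sketch below takes a hypothetical straight run's footprint constraints and produces the zig-zag, uphill variant by offsetting alternate footprints sideways and progressively raising them (the data layout and offsets are invented for illustration):

```python
# Hypothetical footprints for a straight, level run: one contact every
# 0.4 s, advancing 1.2 m along x (y = lateral offset, z = elevation).
footprints = [{"t": 0.4 * i, "pos": (1.2 * i, 0.0, 0.0)} for i in range(6)]

def zigzag_uphill(prints, lateral=0.4, rise=0.15):
    edited = []
    for i, fp in enumerate(prints):
        x, y, z = fp["pos"]
        y += lateral if i % 2 else -lateral    # alternate side to side
        z += rise * i                          # progressively elevate
        edited.append({"t": fp["t"], "pos": (x, y, z)})
    return edited

new_constraints = zigzag_uphill(footprints)    # re-solve spacetime with these
```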

Changes can also be made on the character model itself. For example, the animator can change the limb dimensions or their mass distribution characteristics and observe the motion’s resulting dynamic change. The animator can remove body parts, restrict various DOFs to specific ranges, and remove DOFs altogether, effectively placing certain body parts in a cast; for example, different injured-run sequences would result from shortening a leg, making one leg heavier than the other, reducing the range of motion for the knee DOF, or removing the knee DOF altogether. Muscle properties can also affect the look of transformed motion; for example, if the force output of the muscles is limited, the character would be forced to compensate by using other muscles.

Finally, the animator can change the overall "feel" of the motion by adding appropriately weighted objective components; for example, a softer-looking run would result from an objective component minimizing floor-impact forces. Alternatively, the run can be made to look more stable by including a measure of balance in the objective.
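Such "feel" controls might compose as a weighted sum, as in this sketch (the component functions and weights are illustrative stand-ins, not the article's formulation):

```python
import numpy as np

def effort(u, dt):                  # base term: total muscle effort
    return np.sum(u**2) * dt

def floor_impact(contact_forces):   # softer look: penalize impact forces
    return np.sum(contact_forces**2)

def imbalance(com_xy, support_xy):  # stabler look: keep CoM over support
    return np.sum((com_xy - support_xy)**2)

def objective(u, dt, contact_forces, com_xy, support_xy,
              w_impact=0.0, w_balance=0.0):
    """Weighted sum: raise w_impact for a softer run, w_balance for balance."""
    return (effort(u, dt)
            + w_impact * floor_impact(contact_forces)
            + w_balance * imbalance(com_xy, support_xy))
```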

After each edit, the algorithm re-solves the spacetime-optimization problem and produces a new transformed animation. Since the optimization starting point is near the desired solution, and all dynamic constraints are satisfied at the outset, optimization converges rapidly. In practice, although the initial spacetime optimization may take more than 15 minutes to converge, spacetime optimizations during the editing process take less than two minutes.


Conclusion

This research represents the first solution to the problem of editing captured motion in a way that accounts for dynamics. The powerful high-level controls of the spacetime formulation are especially appealing because they allow the animator to apply intuitive modifications to the input motion sequence. This intuitive control makes the algorithm particularly amenable to the motion-library paradigm.

However, three important areas in realistic character animation still need to be addressed: integrating multiple realistic motion sequences into a single continuous character animation; retargeting motion to different characters while preserving realism; and developing an intuitive interface to the motion-data libraries. Eventually, research in these areas would allow the reusable-motion paradigm to find its way not only into film and video games but into every home PC. It won't be long before every PC has sufficient 3D rendering and computing resources to let anyone use computer animation as an expressive medium, much as Web pages have emerged as a ubiquitous form of expression today. Reusable motion is a crucial concept, enabling practically any animator, regardless of skill, to become an expressive storyteller.


Figures

Figure 1. Automatically generated human jump (rendered by Peter Sumanaseni, University of Washington).

Figure 2. Outline of the motion-transformation process.

Figure 3. Kinematic character simplification: (a) elbows and spine abstracted away; (b) upper body reduced to the center of mass; and (c) symmetric movement abstraction.

Figure 4. Full and simplified characters for human run and broad jump.


Sidebar figure. Frames from the crossed footsteps, limp, and wide footsteps run and from the diagonal, obstacle, unbalanced, and twist jump.

References

    1. Blickhan, R. and Full, R. Similarity in multilegged locomotion: Bouncing like a monopode. J. Comp. Physiol. 173 (1993), 509–517.

    2. Bruderlin, A. and Williams, L. Motion signal processing. In Proceedings of SIGGRAPH'95 (Los Angeles, Aug.). Addison Wesley, 1995, 97–104.

    3. Gleicher, M. Motion editing with spacetime constraints. In Proceedings of the 1997 Symposium on Interactive 3D Graphics, M. Cohen and D. Zeltzer, Eds. (Apr.). ACM SIGGRAPH, 1997, 139–148.

    4. Hodgins, J. Animating human motion. Scientif. Amer. 278, 3 (Mar. 1998), 64–69.

    5. Hodgins, J. and Pollard, N. Adapting simulated behaviors for new characters. In Proceedings of SIGGRAPH'97 (Los Angeles, Aug.). Addison Wesley, 1997, 153–162.

    6. Lee, J. and Shin, S.-Y. A hierarchical approach to interactive motion editing for human-like figures. In Proceedings of SIGGRAPH'99 (Los Angeles, Calif., Aug.). Addison Wesley Longman, 1999, 39–48.

    7. Liu, Z., Gortler, S., and Cohen, M. Hierarchical spacetime control. In Proceedings of SIGGRAPH'94 (Orlando, Fla., July). ACM Press, New York, 1994, 35–42.

    8. Popović, Z. Motion Transformation by Physically Based Spacetime Optimization. Ph.D. thesis, Carnegie Mellon University, 1999.

    9. Popović, Z. and Witkin, A. Physically based motion transformation. In Proceedings of SIGGRAPH'99 (Los Angeles, Aug.). Addison Wesley Longman, 1999, 11–20.

    10. Rose, C., Guenter, B., Bodenheimer, B., and Cohen, M. Efficient generation of motion transitions using spacetime constraints. In Proceedings of SIGGRAPH'96 (New Orleans, Aug.). Addison Wesley, 1996, 147–154.

    11. Witkin, A. and Kass, M. Spacetime constraints. In Proceedings of SIGGRAPH'88 (Aug. 1988), 159–168.

    12. Witkin, A. and Popović, Z. Motion warping. In Proceedings of SIGGRAPH'95 (Los Angeles, Aug.). Addison Wesley, 1995, 105–108.
