
3D Animation of Telecollaborative Anthropomorphic Avatars

A high-performance solution offers faster 3D representation of users by avatars without compromising real-time interactivity.

Introduction

The use of avatars in telecollaborative virtual environments has raised interest in research on networked virtual reality environments (VREs), which allow many geographically distant users to interact in a common virtual environment. Even though this topic is not new, in most existing systems avatars are merely basic, nonhuman-like representations. The motivation of this work is to provide more realism, not just in believable appearance, but also in realistic human-like movements. Each user is represented in the VRE by a 3D anthropomorphic avatar that mimics the user's movements by tracking only a few sensors attached to the body.

The goal of this work is to allow real-time telecollaborative interactions between two or more geographically dispersed people manipulating virtual parts, products, or facilities while represented by avatars that perform human movements in a common networked VRE. Potentially, this would enable specialized engineers and designers worldwide to analyze in detail the performance of complex activities such as collaborative virtual design reviews, operation of machinery and equipment, or virtual manufacturing that incorporates human factors in its simulations. Virtual repair and maintenance booklets showing interactive avatars performing manual assembly and disassembly of virtual 3D parts, useful for training and education, or even more complex and intelligent user interactions in lifelike scenarios, as shown by [1], can also benefit from this work.

Virtual humans could also be used in task feasibility studies, in which a virtual agent and environment simulate the performance of real-life tasks, such as the workspace analysis of a cockpit. In these design applications, anthropomorphic avatars are useful in determining which objects in the environment are easily reachable and manipulable by the user. For more on real-time applications of virtual humans, see [2].

For real-time interactivity between participants located in remote CAVEs to succeed, the inverse kinematics algorithm must be fast enough to find a correct set of joint angles each time a participant moves a body part. A fast analytical closed-form solution to this problem was developed in [10], using five sensors (head, elbows, and hands) to track the upper body and three more (pelvis and ankles) to track the lower body.

The challenge is to use the minimum number of sensors per participant without degrading real-time performance. However, the fewer the attached sensors, the more degrees of freedom (DOF) the inverse kinematics must solve for. Unfortunately, no analytical closed-form solution exists for kinematic structures with more than 6 DOF, since the problem becomes an underdetermined system of equations (that is, there are more unknowns than equations) [6].
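To make the counting argument concrete (the notation below is ours, not the article's): a single hand sensor supplies the full pose of one end effector, that is, three position and three orientation constraints, while the chest-to-hand chain modeled in the next section carries eight joint angles.

```latex
% Six scalar equations from one hand sensor (3 position + 3 orientation)
f(q_1, q_2, \dots, q_8) = x_{\mathrm{hand}} \in \mathbb{R}^6
% in eight unknown joint angles: the system is underdetermined, so a
% two-parameter family of postures reaches the same hand pose and no
% closed-form inverse exists.
```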

The approach developed in [11] is a successful hybrid analytical and numerical algorithm capable of solving a 7-DOF human arm (Jack [3]) using four sensors: head, back, and both hands.

Although these methods are quite efficient, they are cumbersome and costly, and they cannot be implemented with the default setup of the CAVE family of devices, where only two or three sensors are attached to the user's body (head and one or two hands). Therefore, the motivation of this work is to provide fast, affordable, and straightforward telecollaboration with avatars using a small number of sensors.


Avatar Modeling

Placing a fixed coordinate system at the chest and moving coordinate systems at the joints of the torso, shoulder, elbow, and wrist, the human upper body can be simplified to an 8-DOF structure (based on [11]). Figure 1 shows the position of the three electromagnetic sensors involved in the process and how the upper human body structure has been modeled. The set of eight angles q1 through q8 represents the unknown joint rotations.
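As a concrete illustration, the sketch below renders one possible realization of this chain in Python: a fixed chest frame followed by eight revolute joints. The particular axis assignments and link lengths are our guesses for illustration; the article does not enumerate them.

```python
# Hypothetical rendering of the 8-DOF chest-to-hand chain; the assignment
# of q1..q8 to torso, shoulder, elbow, and wrist axes is illustrative.
import numpy as np

def rot(axis, angle):
    """3x3 rotation about a principal axis ('x', 'y', or 'z') by 'angle' radians."""
    c, s = np.cos(angle), np.sin(angle)
    return {
        "x": np.array([[1, 0, 0], [0, c, -s], [0, s, c]]),
        "y": np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]]),
        "z": np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]]),
    }[axis]

# (joint axis, offset to the next joint in the rotated frame, meters);
# plausible adult proportions, purely illustrative.
CHAIN = [
    ("z", np.array([0.00,  0.00, 0.00])),  # q1 torso twist
    ("x", np.array([0.20,  0.00, 0.00])),  # q2 torso bend, then chest->shoulder
    ("x", np.array([0.00,  0.00, 0.00])),  # q3 shoulder
    ("y", np.array([0.00,  0.00, 0.00])),  # q4 shoulder
    ("z", np.array([0.00, -0.30, 0.00])),  # q5 shoulder, then upper arm
    ("x", np.array([0.00, -0.25, 0.00])),  # q6 elbow flexion, then forearm
    ("y", np.array([0.00,  0.00, 0.00])),  # q7 wrist
    ("x", np.array([0.00, -0.08, 0.00])),  # q8 wrist, then palm
]

def forward_kinematics(q):
    """Hand pose (rotation, position) in the chest frame for angles q (length 8)."""
    R, p = np.eye(3), np.zeros(3)
    for (axis, offset), angle in zip(CHAIN, q):
        R = R @ rot(axis, angle)
        p = p + R @ offset
    return R, p
```

Given the eight angles, forward_kinematics() returns the hand pose in the chest frame; this is the map the inverse kinematics described below must invert.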

Since the head sensor in all CAVEs and ImmersaDesks is attached to the left side of the stereo shutter glasses (in order to define the user's 3D perspective) [5], the position and orientation of the head are known. From this information, the sternum position is estimated, and the inverse kinematics algorithm creates the connection between the chest and the tracked hand.
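A minimal sketch of that estimate, assuming (as the article does not spell out) a fixed head-to-sternum offset expressed in the head frame; the offset values are illustrative guesses, not the authors' calibration:

```python
import numpy as np

# Assumed offset from the sensor on the left temple of the shutter glasses
# to the sternum, in the head frame (meters): right, down, slightly back.
HEAD_TO_STERNUM = np.array([0.07, -0.45, -0.05])

def estimate_sternum(R_head, p_head):
    """Sternum position in world coordinates from the head sensor pose."""
    return p_head + R_head @ HEAD_TO_STERNUM
```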

The algorithm used to solve the avatar's inverse kinematics is based on the traditional Newton-Raphson method for systems of nonlinear equations. This method was chosen because it generally exhibits quadratic convergence, so a possible solution is found within a few iterations.
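The sketch below shows the shape of such a solver, reusing the hypothetical forward_kinematics() from the previous sketch. It differs from the authors' implementation in that it uses a finite-difference Jacobian and a matrix pseudo-inverse rather than their dual-quaternion formulation (introduced next), but the Newton iteration is the same idea.

```python
import numpy as np

def pose_error(R, p, R_goal, p_goal):
    """6-vector: position error plus axis-angle of the residual rotation
    (the near-pi singularity of the log map is ignored in this sketch)."""
    R_err = R_goal @ R.T
    angle = np.arccos(np.clip((np.trace(R_err) - 1) / 2, -1.0, 1.0))
    if angle < 1e-9:
        w = np.zeros(3)
    else:
        w = angle / (2 * np.sin(angle)) * np.array([
            R_err[2, 1] - R_err[1, 2],
            R_err[0, 2] - R_err[2, 0],
            R_err[1, 0] - R_err[0, 1],
        ])
    return np.concatenate([p_goal - p, w])

def solve_ik(R_goal, p_goal, q0, tol=1e-4, max_iter=20, h=1e-6):
    """Newton-Raphson: drive the 6-vector pose error to zero."""
    q = np.asarray(q0, float).copy()
    for _ in range(max_iter):
        R, p = forward_kinematics(q)
        e = pose_error(R, p, R_goal, p_goal)
        if np.linalg.norm(e) < tol:
            break
        # Finite-difference 6x8 Jacobian de/dq of the error w.r.t. the angles.
        J = np.zeros((6, len(q)))
        for i in range(len(q)):
            dq = q.copy()
            dq[i] += h
            Ri, pi = forward_kinematics(dq)
            J[:, i] = (pose_error(Ri, pi, R_goal, p_goal) - e) / h
        # Pseudo-inverse: the 8-unknown, 6-equation system is underdetermined,
        # so this picks the minimum-norm Newton step.
        q = q - np.linalg.pinv(J) @ e
    return q
```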

Traditionally in robotics and computer graphics, homogeneous rotation matrices have been used because they are a simple and well-known representation of rotations. However, as shown in [8], this method presents several inconveniences, especially gimbal lock and the processing cost of the many matrix products that must be calculated in each iteration, which makes real-time performance difficult to achieve. A different and more sophisticated concept derived from quaternions, called "dual quaternions," combines rotations and translations in a single representation and allows the inverse kinematics problem to be modeled and solved without compromising real-time user interactivity [12].

Dual quaternions replace the 4×4 homogeneous matrices with an 8-dimensional vector that directly corresponds to the screw 3D transformation between two coordinate frames, as shown in Figure 2.
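A minimal sketch of the representation (our own minimal API, not the authors' code): the real quaternion part carries the rotation, the dual part encodes the translation, and composing two rigid transformations is plain multiplication of the 8 numbers.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    ])

class DualQuaternion:
    """Rigid transformation as r + epsilon*d, with d = 0.5 * t * r,
    where t is the translation written as a pure quaternion."""
    def __init__(self, real, dual):
        self.real = real  # unit quaternion: the rotation
        self.dual = dual  # encodes the translation

    @classmethod
    def from_rotation_translation(cls, axis, angle, t):
        axis = np.asarray(axis, float) / np.linalg.norm(axis)
        t = np.asarray(t, float)
        r = np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])
        d = 0.5 * qmul(np.concatenate([[0.0], t]), r)
        return cls(r, d)

    def __mul__(self, other):
        """Composition: (r1 + e*d1)(r2 + e*d2) = r1 r2 + e(r1 d2 + d1 r2)."""
        return DualQuaternion(
            qmul(self.real, other.real),
            qmul(self.real, other.dual) + qmul(self.dual, other.real),
        )

    def translation(self):
        """Recover t = 2 * d * conj(r)."""
        r_conj = self.real * np.array([1, -1, -1, -1])
        return 2 * qmul(self.dual, r_conj)[1:]

# Example: rotate 90 degrees about z with translation (1,0,0), composed with
# a pure translation (0,2,0); the combined translation is ~(-1, 0, 0).
A = DualQuaternion.from_rotation_translation([0, 0, 1], np.pi / 2, [1.0, 0.0, 0.0])
B = DualQuaternion.from_rotation_translation([1, 0, 0], 0.0, [0.0, 2.0, 0.0])
print((A * B).translation())
```

Composing the chain's per-joint screws this way replaces each 4×4 matrix product of the homogeneous formulation with an 8-component product, and avoids gimbal lock.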


Implementation

The resulting interactive virtual reality simulation was tested in the CAVE (see Figure 3). Although the method is an iterative algorithm, experimental results indicate its performance is highly acceptable for real-time simulations, guaranteeing maximum interactivity between participants carrying out telecollaborative activities among interconnected CAVEs.

In our experiments, repeated cyclical movements of a hand (for example, turning the steering wheel of a virtual earthmover many times in the same direction) with the previous solution used as the initial guess drove the avatar into unrealistic postures through accumulation of motion at the elbows. To solve this problem, it is more convenient to reinitialize the avatar to its original posture (resetting all joint angles to zero) each time the desired end-effector position and orientation change, before invoking the iterative inverse kinematics routine. The algorithm then finds the same solution under the same sensor conditions, and realistic-looking postures are obtained throughout.
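In code, the policy amounts to discarding the warm start; a sketch, reusing the hypothetical solve_ik() from above:

```python
import numpy as np

# Rest posture: all joint angles reset to zero before every solve, so that
# identical sensor readings always yield identical, realistic postures
# instead of accumulating elbow motion across cyclic hand movements.
REST_POSTURE = np.zeros(8)

def update_arm(R_hand_sensor, p_hand_sensor):
    return solve_ik(R_hand_sensor, p_hand_sensor, q0=REST_POSTURE)
```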

In each iteration, two parallel processes independently estimate the next possible configuration of each arm according to its hand sensor. Human torso rotation depends on the position and orientation of both hands; to reproduce this natural behavior, the average of the torso-rotation angles obtained by the two iterative processes is used in the next iteration. This simple approach lets both processes adjust the torso orientation as they approach a solution for both end effectors. Convergence of both arms is generally achieved after a few iterations, except when both arms are fully stretched out, in which case the pseudo-inverse of the Jacobian is ill-conditioned.
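A sketch of this coupling, under the simplifying assumption that each arm is solved as its own chain whose first two angles are the shared torso rotations (the article does not specify how many of the eight angles belong to the torso); one_newton_step() performs a single iteration of the hypothetical solver above:

```python
import numpy as np

N_TORSO = 2  # assumption: the first two of the eight angles are the torso's

def one_newton_step(q, R_goal, p_goal):
    # A single iteration of the hypothetical solve_ik() sketched earlier.
    return solve_ik(R_goal, p_goal, q, max_iter=1)

def solve_both_arms(q_left, q_right, target_left, target_right, iters=13):
    for _ in range(iters):
        q_left = one_newton_step(q_left, *target_left)
        q_right = one_newton_step(q_right, *target_right)
        # Both chains share the torso: average its angles and impose the
        # result on both arms before the next iteration.
        torso = 0.5 * (q_left[:N_TORSO] + q_right[:N_TORSO])
        q_left[:N_TORSO] = torso
        q_right[:N_TORSO] = torso
    return q_left, q_right
```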

We measured the response time from the user's hand movements to the corresponding avatar's hand movements by tracing through a set of 1,000 different random coordinates of positions and orientations; the statistical results show that the algorithm does not break down even when the user moves quickly.

The experiment, conducted on an SGI Onyx workstation, demonstrates that the algorithm most frequently converges in fewer than 13 iterations, taking a total of approximately 4,000 microseconds (0.004 seconds), which is highly appropriate for real-time VR applications (see Figures 4 and 5).
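A sketch of how such a measurement could be scripted against the hypothetical solver above; random target orientations are drawn via QR decomposition, positions from a box near the chest. The ranges are our guesses, and the timings will of course differ from the SGI Onyx figures.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
times = []
for _ in range(1000):
    # Random rotation: orthonormalize a Gaussian matrix, fix the determinant.
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    R_goal = Q * np.sign(np.linalg.det(Q))
    # Random position in an assumed reachable box around the chest (meters).
    p_goal = rng.uniform([-0.5, -0.7, -0.2], [0.6, 0.2, 0.6])
    t0 = time.perf_counter()
    solve_ik(R_goal, p_goal, q0=np.zeros(8))
    times.append(time.perf_counter() - t0)

print(f"median solve time: {np.median(times) * 1e6:.0f} microseconds")
```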


Conclusion

We have presented an affordable, simple, and fast alternative approach for successful telecollaboration among users represented by 3D anthropomorphic avatars.

Compared to the more traditional technique, which uses an additional sensor attached to the elbow, this approach loses some fidelity in the elbow movements: situations arise in which the position and orientation of the avatar's elbow do not correspond exactly to those of the human's elbow, even when the end effector is correct and the avatar reaches a realistic posture. However, the primary strength of our approach is that it is fast enough to achieve real-time telecollaboration among remote users in a networked VRE. Moreover, it makes the most of the default setup of current CAVE environments without introducing additional sensors.


Figures

Figure 1. The anthropomorphic avatar.

Figure 2. Diagram showing the 3D transformation between reference frames given by a dual quaternion (based on [7]).

Figure 3. User and avatar in the CAVE.

Figure 4. Histogram of iterations before convergence.

Figure 5. Histogram of overall time of the process.

References

    1. André, E., and Rist, T. Presenting through performing: On the use of multiple lifelike characters in knowledge-based presentation systems. In Proceedings of the Second International Conference on Intelligent User Interfaces (IUI 2000).

    2. Badler, N., Palmer, M., and Bindiganavale, R. Animation control for real-time virtual humans. Commun. ACM 42, 8 (Aug. 1999), 65–73.

    3. Badler, N., Phillips, C., and Webber, B. Simulating Humans: Computer Graphics, Animation, and Control. Oxford University Press, New York, NY, 1993.

    4. Brown, A.S. Role models: Virtual people take on the job of testing complex design. Mechanical Engineering (1999), 44–49.

    5. CAVE Automatic Virtual Environment; www.evl.uic.edu/EVL/VR/systems.shtml#CAVE. Univ. of Illinois at Chicago.

    6. Craig, J. Introduction to Robotics: Mechanics and Control. Addison-Wesley, Reading, MA, 1986.

    7. Dam, E.B., Koch, M., and Lillholm, M. Quaternions, Interpolation and Animation. Technical Report DIKU-TR-98/5, Univ. of Copenhagen, Denmark, 1998.

    8. Goddard, J.S. and Abidi, M.A. Pose and motion estimation using dual quaternion-based extended Kalman filtering. In Proceedings of SPIE: Three-Dimensional Image Capture and Applications. 1998.

    9. Luciano, C.J. Animation of Telecollaborative Avatars using Real-Time Inverse Kinematics Algorithm based on Dual Quaternions. Master's thesis in Industrial Engineering (2000). University of Illinois at Chicago.

    10. Semwal, S., Hightower, R., and Stansfield, S. Mapping algorithms for real-time control of an avatar using eight sensors. Presence 7, 1 (1998).

    11. Tolani, D. and Badler, N. Real-time inverse kinematics of the human arm. Presence 5, 4 (1996).

    12. Wagner, M. Advanced animation techniques in VRML 97. VRML '98 Symposium (Monterey, CA, 1998).

Footnotes

    This work was supported by the Fulbright/CONICOR scholarship program (Argentina) and the Department of Mechanical Engineering at UIC.

    CAVE is a registered trademark of the University of Illinois at Chicago.
