Creating avatar body and facial animations for VR


June 07, 2018 - Dmitry Aznaurov

There are a variety of tools that can be used to create character animations. The criteria for picking an animation tool will vary, depending on the specific requirements of the model and complexity of the animation. Each tool has different workflow speeds, versatility, cost and compatibility with other pipelines.

Understanding the software and project requirements

When creating character animations for the VirtualSpeech VR app, we considered several tools, which we narrowed down to the following:

  • iClone Character Creator
  • Adobe Fuse
  • Autodesk Character Generator

Compatibility with other software tools was a major consideration. For example, Adobe Fuse is a powerful tool, but it doesn’t support a native 3ds Max biped setup and works with its own blendshape profile for facial animation. This meant that at certain stages, it might require significant work to transfer characters, animations or certain features from another generator.

Creating an avatar with the iClone Character Creator. iClone creates highly detailed models, at the cost of a high polycount.

When creating characters, you need to consider the whole pipeline of creation - constantly thinking about compatibility, quality and tasks you may encounter.

For example, Autodesk Character Generator gives you great control over the polycount and level of detail of your character. The only problem is that even its highest quality level is not realistic enough for characters close to the user or camera. So you might choose Autodesk characters for crowds, while using Adobe Fuse avatars for anyone nearer the user.

Editing the dimensions of an avatar with the Autodesk Character Generator.

When you mix these characters, you have to come up with a set of universal solutions that can be applied to different model types, keeping in mind the time needed to perform adjustments and the price of each software package.

Animating the character body

When you fully understand the requirements and set of tools you’ll need for your project, you can proceed to the animation.

There are a variety of ways to animate your character which can be split into two main categories - facial and body animations.

Body animation can be created in a variety of ways:

  • Manually - creating the animation by hand in 3D software
  • Procedurally - assembling an animation track from a set of pre-made moves
  • Motion capture - translating an actor’s actual body movement onto a digital character

Manual animation

Manual animation is the most accessible approach, since it’s available to anybody with 3D modelling software, but it can be very time consuming. Also, although you can animate a character’s movement without any additional tools, it’s really hard to make that movement appear natural. Unless you are highly skilled, it will most likely look robotic.

Manually editing camera animations in 3ds Max for a press conference scene.

For the manual animation approach, you have to have a really good understanding of kinematics and how the human body moves, and be able to analyze specific movements and use the available tools to get closer to the desired result.
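
To make the kinematics point concrete, here is a minimal sketch of forward kinematics, the idea underlying how a rigged limb moves: each joint’s rotation accumulates down the chain to place the bones. The bone lengths and angles are illustrative values, not data from any real rig.

```python
# Minimal 2D forward-kinematics sketch: given joint angles, compute where
# each bone of a simple arm (shoulder -> elbow -> wrist) ends up.
import math

def forward_kinematics(bone_lengths, joint_angles_deg):
    """Return the 2D position of each joint in a chain.

    Each joint angle is relative to the previous bone, as in a
    typical hierarchical rig.
    """
    positions = [(0.0, 0.0)]  # chain root (e.g. the shoulder)
    total_angle = 0.0
    x, y = 0.0, 0.0
    for length, angle in zip(bone_lengths, joint_angles_deg):
        total_angle += math.radians(angle)  # angles accumulate down the chain
        x += length * math.cos(total_angle)
        y += length * math.sin(total_angle)
        positions.append((x, y))
    return positions

# Upper arm 30cm, forearm 25cm; raise the shoulder 45 degrees, bend the elbow 30.
print(forward_kinematics([0.30, 0.25], [45.0, 30.0]))
```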

Procedural animation

Procedurally animated characters work well in certain placeholder situations. For example, when you need characters in conversation, you can use procedural animation software to fill the timeline with some generic gestures and moves.

Using pre-made animations for avatar conversations. A clay-model textured avatar is shown in the discussion above.

Procedural animation works really well when you have a set of characters with a list of common idle animations attached to them, such as sitting on a chair, walking, talking to somebody, etc.

Although it’s a useful time-saving solution, it lacks any variety and you can get some surprising results, especially when trying to fit existing animations to different sized characters. This is where motion capture comes into play.
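
As a rough illustration of the procedural idea, the sketch below fills a conversation timeline by stitching pre-made gesture clips end to end until the scene length is covered. The clip names and durations are invented for the example.

```python
# Fill a conversation timeline with randomly chosen pre-made gesture clips.
import random

GESTURE_CLIPS = {  # clip name -> duration in seconds (hypothetical values)
    "idle_breathe": 4.0,
    "nod": 1.5,
    "hand_wave": 2.0,
    "shift_weight": 3.0,
}

def fill_timeline(scene_length, clips=GESTURE_CLIPS, seed=None):
    """Return a list of (start_time, clip_name) pairs covering the scene."""
    rng = random.Random(seed)
    track, t = [], 0.0
    while t < scene_length:
        name = rng.choice(list(clips))
        track.append((round(t, 2), name))
        t += clips[name]  # next clip starts when this one ends
    return track

for start, clip in fill_timeline(20.0, seed=42):
    print(f"{start:6.2f}s  {clip}")
```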

Motion capture animation

Mocap (motion capture) is the most robust way to animate a character, though it requires the sophisticated setup of a mocap studio, plus refinement of the final result to fit your character and scene.

Filming avatar animations by tracking movement in a pre-defined space. This creates more natural animations than hand-created ones.

In short, you have to set up a motion capture device, whether it be with a depth motion sensor, or a set of high-framerate cameras to cover your entire capturing area. You also need suitable animation capture conditions, including:

  • Special clothing for areas of the body that need to be very precise, such as arms
  • Clean surroundings to reduce interference and noise
  • Good lighting conditions
  • Free space to move around naturally

You need to understand the body proportions of your character, its clothing details and overall appeal, so your captured animation will fit its parameters.

Next, once you have recorded the mocap data, you have to process it and reduce the noise to get clean, precise movements. Finally, you have to edit your animation in 3D editing software to match certain props and avatars. This should result in accurate, realistic movement that would be difficult to create by hand with keyframe animation.
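
As a simplified illustration of that noise-reduction step, the sketch below smooths one joint’s captured curve with a centered moving average. Real cleanup tools use more sophisticated filters, but the principle is the same.

```python
# Smooth a raw mocap curve (one channel of one joint) with a moving average.

def smooth_curve(samples, window=5):
    """Smooth a list of floats with a centered moving average."""
    half = window // 2
    smoothed = []
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        neighbourhood = samples[lo:hi]  # window shrinks at the curve's ends
        smoothed.append(sum(neighbourhood) / len(neighbourhood))
    return smoothed

# A jittery capture of, say, a wrist's height in metres (made-up values).
raw = [1.02, 1.05, 0.98, 1.10, 1.01, 1.04, 0.97, 1.06]
print(smooth_curve(raw))
```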

Facial animations

Once you have figured out how to bring your character to life with realistic body animations, you need to add facial animations. Without these, a level of realism is taken away from your animation and scene.

Facial animations are incredibly hard to create by hand, so how do you do it? There are two ways we’ll discuss here: facial mocap and lip sync.

Facial mocap

Facial mocap helps you impart your emotions, speech and other facial expressions onto your avatar. Facial mocap can be done in a variety of ways - usually it’s an array of dots on the actor’s face that is tracked by a camera. Afterwards, the tracking data is applied to the control handlers of your model.

Facial mocap solution for the film Avatar, with green dots on the actor’s face to track facial movements. Image from fxguide.

Facial mocap can also be accomplished with depth scanners, or even just a regular webcam (though this may lack precision). However, it can be difficult to mount a depth sensor to your head in order to record simultaneous facial and body mocap sessions.
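
To illustrate how tracked dots can drive a model’s control handlers, here is a toy mapping from marker displacements to blendshape weights. The marker and blendshape names are hypothetical.

```python
# Map normalised displacement of tracked face markers to blendshape weights.

# marker name -> (blendshape it drives, displacement at full expression, in mm)
MARKER_MAP = {
    "brow_left":    ("browRaise_L", 8.0),
    "mouth_corner": ("smile",       12.0),
}

def markers_to_weights(displacements_mm, mapping=MARKER_MAP):
    """Convert tracked marker displacements into 0..1 blendshape weights."""
    weights = {}
    for marker, moved in displacements_mm.items():
        shape, full_range = mapping[marker]
        # Clamp so tracking noise never pushes a shape past its limits.
        weights[shape] = max(0.0, min(1.0, moved / full_range))
    return weights

# One tracked frame: brow raised 4mm, mouth corner pulled out 9mm.
print(markers_to_weights({"brow_left": 4.0, "mouth_corner": 9.0}))
```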

Lip sync animations

Another element of facial animation is lip syncing. This is where you import an audio file into a software package, which calculates the facial expressions required for the different sounds (i.e. the speech) in the audio file. You can then add procedural gestures and movement, such as nodding and head shaking.
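
As a hedged sketch of that idea, the snippet below maps a timed phoneme list (the kind a lip-sync tool extracts from audio) to mouth-shape keyframes. The phoneme timings and viseme names are invented for illustration.

```python
# Turn timed phonemes into viseme (mouth shape) keyframes on a frame timeline.

PHONEME_TO_VISEME = {  # hypothetical phoneme -> mouth shape table
    "HH": "open_slight", "EH": "open_mid", "L": "tongue_up",
    "OW": "round", "sil": "closed",
}

def phonemes_to_keyframes(timed_phonemes, fps=30):
    """Turn (time_sec, phoneme) pairs into (frame, viseme) keyframes."""
    keys = []
    for time_sec, phoneme in timed_phonemes:
        viseme = PHONEME_TO_VISEME.get(phoneme, "closed")  # fall back to closed
        keys.append((round(time_sec * fps), viseme))
    return keys

# "Hello" roughly segmented: HH-EH-L-OW, then silence.
speech = [(0.00, "HH"), (0.08, "EH"), (0.20, "L"), (0.30, "OW"), (0.55, "sil")]
print(phonemes_to_keyframes(speech))
```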

Facial animation and lip sync testing with FaceRig software. The software is fairly accurate; however, the export options are limited.

The rest is up to you - add some emotional color to your animation track, highlighting anger, thoughtfulness or any other emotion you need during your character’s monologue.

Creating an animation pipeline

When you have prepared everything you need for the animation, you’ll need to think about structuring your work. An animation artist’s time is precious, so you’ll need to be efficient with it. Ensure animations can be reused across different scenes.

A good way to do this is to segment your work. Idle animations can be used multiple times on different characters, whether that’s body or facial idles. It’s better to have three short idles instead of one long animation take, because afterwards you can mix those three takes randomly and produce much more diversity in your character’s movement, especially if you have a crowd to animate.
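
As a small sketch of that mixing idea, the snippet below builds a longer idle sequence from three short takes, never playing the same take twice in a row, and seeds each crowd character differently so they don’t move in sync. The take names are placeholders.

```python
# Mix three short idle takes into varied, non-repeating per-character sequences.
import random

IDLE_TAKES = ["idle_a", "idle_b", "idle_c"]

def build_idle_sequence(num_clips, takes=IDLE_TAKES, seed=None):
    """Return a list of takes with no take played twice in a row."""
    rng = random.Random(seed)
    sequence = []
    for _ in range(num_clips):
        options = [t for t in takes if not sequence or t != sequence[-1]]
        sequence.append(rng.choice(options))
    return sequence

# Give each character in a crowd its own seed so the crowd doesn't sync up.
for character_id in range(3):
    print(character_id, build_idle_sequence(6, seed=character_id))
```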

Getting this setup right will leave you with more time to work on other important animations, including your main character’s movement, which draws the majority of your user’s focus and needs to be believable enough to avoid breaking the immersion.

Conclusion

To sum it up, there’s no magic bullet for producing reliable, repeatable and realistic animations. You can either use an expensive mocap studio to get results, or try to do it yourself and have much more freedom down the line if you need to change any of the animations or produce new ones for product updates. Building your own solution will take time but will, in the long run, be significantly cheaper.