
Meta 'Avatars Grow Legs' Research Shows Realtime AI-Powered Body Pose Estimation


Meta AI researchers are getting closer to delivering realistic avatar legs without extra tracking hardware.

Meta’s current legless avatars

Out of the box, current VR systems only track the position of your head and hands. The position of your elbows, torso, and legs can be estimated using a class of algorithms called inverse kinematics (IK), but this is only accurate sometimes for elbows and rarely correct for legs. There are just too many potential solutions for each given set of head and hand positions.
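The ambiguity is easy to see even for a single arm: with a two-bone limb, one wrist target already admits more than one elbow position. A minimal 2D sketch (hypothetical limb lengths, not from any Meta paper):

```python
import math

# Hypothetical two-bone arm in 2D: shoulder fixed at the origin,
# upper arm and forearm each 0.3 m long.
UPPER, FORE = 0.3, 0.3

def elbow_solutions(wx, wy):
    """Return both elbow positions that place the wrist at (wx, wy)."""
    d = math.hypot(wx, wy)  # shoulder-to-wrist distance
    # Law of cosines: angle between the shoulder-wrist line and the upper arm.
    cos_a = (UPPER**2 + d**2 - FORE**2) / (2 * UPPER * d)
    a = math.acos(max(-1.0, min(1.0, cos_a)))
    base = math.atan2(wy, wx)
    # The elbow can sit on either side of the shoulder-wrist line.
    return [(UPPER * math.cos(base + s * a), UPPER * math.sin(base + s * a))
            for s in (1, -1)]

up, down = elbow_solutions(0.45, 0.1)
# Two distinct elbow poses reach the same wrist target. In 3D the
# solution set is a whole circle, and adding the torso and legs
# multiplies the ambiguity further.
```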


Given the limitations of IK, some VR apps today show only your hands, and many others only give you an upper body. PC headsets using SteamVR tracking support extra worn trackers such as HTC’s Vive Tracker, but buying enough of them for body tracking costs hundreds of dollars, so most games don’t support it.

In September, Meta AI researchers showed off a neural network trained with reinforcement learning called QuestSim that estimates a plausible full body pose with just the tracking data from Quest 2 and its controllers. But QuestSim's latency was 160ms – more than 11 frames at 72Hz. It would only really be suitable for seeing other people’s avatar bodies, not your own when looking down. The paper also didn't mention the system's runtime performance or what GPU it was running on.
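The frame count follows from simple arithmetic (our calculation, not the paper's):

```python
# 160 ms of pose latency expressed in 72 Hz display frames.
frame_time_ms = 1000 / 72            # one frame is about 13.9 ms
frames_behind = 160 / frame_time_ms  # about 11.5 frames of lag
```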

AGRoL in action with Quest 2, running on an NVIDIA V100

In a new paper titled Avatars Grow Legs (AGRoL), other Meta AI researchers and intern Yuming Du demonstrated a new approach that they claim "achieves state-of-the-art performance" with lower computational requirements than previous AI approaches. AGRoL is a diffusion model, like recent AI image generation systems such as Stable Diffusion and OpenAI's DALL·E 2.
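Diffusion models generate output by starting from random noise and repeatedly denoising it, one learned step at a time, which is why they are usually too slow for real-time use. A heavily simplified toy version of that inference loop (the hand-written `denoise_step` is a stand-in for AGRoL's learned network, and all the numbers are made up):

```python
import random

STEPS = 50                      # number of denoising iterations
clean_pose = [0.1, 0.4, -0.2]   # made-up target joint values

def denoise_step(x, t):
    # Stand-in for the learned denoiser: nudge the noisy sample a
    # little toward the clean pose, more aggressively at low noise levels.
    rate = (STEPS - t) / STEPS * 0.2
    return [xi + rate * (ci - xi) for xi, ci in zip(x, clean_pose)]

random.seed(0)
x = [random.gauss(0, 1) for _ in clean_pose]  # start from pure noise
for t in range(STEPS, 0, -1):                 # run the reverse process
    x = denoise_step(x, t)
# x is now close to clean_pose. In a real diffusion model each step is
# a full neural network forward pass, so dozens of steps per output
# add up fast — hence the significance of a real-time claim.
```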

Unlike most diffusion models, though, and unlike most systems described in AI research papers, the researchers say AGRoL "can run in real-time", at around 41 FPS on an NVIDIA V100. While that's a $15,000 GPU, machine learning algorithms often start out requiring that kind of hardware but, after a few years of optimization advances, end up running on smartphones. That was the case for the speech recognition and synthesis models used in Google Assistant and Siri, for example.

Still, there's no indication body pose estimation of AGRoL's quality will arrive in Meta Quest products any time soon. Meta did announce its avatars will get legs this year, but those will probably be driven by a much less technically advanced algorithm, and will only be shown on other people's avatars, not your own.
