Avatar Technology From USC Captures Your Body In 20 Minutes

by Charles Singletary • May 10th, 2017

USC’s Institute for Creative Technologies (ICT) works across graphics, learning sciences, medical VR, mixed reality, and much more, and the institute has played a major part in moving many new technologies from research labs into widespread use.

“This is where Palmer Luckey cut his teeth in our mixed-reality labs before he did his Kickstarter,” says ICT Research Assistant Professor Ari Shapiro. Luckey went on to co-found one of the breakthrough companies in the VR industry with Oculus and its Rift headset, so it is no surprise that ICT could serve as the origin of other pioneering pieces of technology.

Easy 3D Avatars

The graphics lab at ICT captures high-quality digital faces for film with special (and expensive) scanning equipment. Shapiro runs a research group there called the Character Animation Simulation Research Group, and one of its goals is to create a digital person that can behave like a real one. There are existing ways to do this, but none that are financially accessible while still producing a high-quality final product.

“Can we generate something high-quality with off-the-shelf scanners and an automatic process?” asks Shapiro. “When you do that, do you essentially democratize this type of data? What if everyone could have their own avatar?”

Shapiro’s team has run studies to determine the objective value of such a tool, finding that people are interested in running simulations with versions of themselves. The team then set out to determine which elements of a person need to be reflected in these digital creations.

“What other elements of us need to be embedded? Our personality, our style, our posture, and that sort of thing,” he adds.

The research group started testing with Microsoft’s Kinect four years ago, doing some body scanning and facial scanning. This produced ways to scan the face and body, attach hands and fingers, and so on, but the key was that everything was captured with off-the-shelf components. Not only does removing specialized equipment cut costs, it also removes the need for artists or technicians in the loop. Years later, the video above demonstrates a functional prototype featuring realistic avatars with realistic expressions to boot.


Where Are We Now?

Shapiro says the plan is to commercialize this tool as much as possible, and he sees opportunities in social VR and augmented shopping applications where users can try on different things using a replica of themselves. The software is up and running, and with a few hundred demos completed, the team is moving rapidly toward making it available — but Shapiro says there’s still more to figure out when it comes to the face.

“We’re making a choice that a lot of people don’t make when they do these facial generation systems,” he says. “Most of the time, they have a working facial rig and they try to adapt it to a scan or a photograph.”

With that approach, you end up with something that works and can “emote your speech” well, but it doesn’t resemble the person as closely as desired. You’re essentially trying to “fill in data” where you don’t have it.

“We’re doing the opposite. We’re basically saying that whatever you give us, we’re going to use that to reproduce the person. Ours look real with the limitation that, if you don’t give it particular emotional expressions, your character can’t do it,” he explains.

Attempting to stretch and pull these reproductions to exhibit emotions is how you fall into the uncanny valley — where an avatar looks almost human but is off just enough to make viewers uncomfortable. This is something Shapiro and his team hope to avoid. They also want to reach a level of facial-scanning quality that allows for teeth and tongue modeling as well.

Ultimate Goal

“If you have to use specialized equipment that can only be used in specific places, you might as well go down the traditional pipeline,” Shapiro says. He explains that well-equipped visual effects teams can produce content that looks and possibly functions better than what this rapid-scan technology puts out, but accessibility is the end goal.

“We’re trying to work it into a consumer platform,” Shapiro says. “The overall goal of this project is to create a set of technology that anybody can use to produce their avatar for any means.”

Shapiro also serves as CEO of Embody Digital, a company specializing in technology for the “digital you,” which is already working on ways to commercialize the technology and make it available to consumers. It seems only a matter of time before you’re scanning yourself into the virtual experience of your choosing.


  • Xron

    Human avatar + AI = awesome npc :D!

  • Being ourselves, with our identity, in VR is very important… so these projects are surely welcome. My doubt is about the price: the hardware is off-the-shelf, but a single Kinect costs $100 and you surely need many to obtain such a precise scan…

    • rabs

      I like better to be someone/something else in a virtual world. It’s useful for virtual shopping, meeting relatives or professional relationship maybe, but I’m very far from those needs yet. As a company it makes more sense to be ready for the future, though.

      I guess high quality hardware should be available in some specialized shops, as an evolution of professional photography. People go there, spend 15-30min and get their avatar (and 3D print it for their mom, while they’re at it). I guess it could be tweaked at home as well.

    • Ari

The facial scan was obtained using a single Intel RealSense sensor ($99). The body scan was done with a low-cost photogrammetry cage, but you could use a commercial handheld scanner (à la the Occipital Structure Sensor, or similar) and still generate a convincingly real-looking avatar.