Vive X Member ObEN Raises $7.7 Million To Bring Realistic Avatars To VR

by Jamie Feltham • November 4th, 2016

HTC announced the first class of 33 companies to join its Vive X accelerator back in July, but has since remained tight-lipped about what those startups have been up to.

This week, we’ve heard the first update from one of those companies, ObEN. The Pasadena-based group raised $7.7 million in a Series A round of funding to accelerate its work creating realistic and simple avatars for VR and AR. Investors include CrestValue Capital, Cybernaut Westlake Partners, and Leaguer Venture Investment.

ObEN is currently working on a way to construct an accurate avatar of a person, complete with voice replication, from just a single smartphone photo and a short audio sample. Using proprietary AI technology, the company aims to let you bring these avatars into a variety of VR and AR experiences. A press release makes specific mention of how the technology surpasses avatars that “depend upon cartoon-like characteristics,” perhaps taking a shot at the avatars seen in Facebook’s social VR demo shown on stage at Oculus Connect 3 last month.
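
ObEN hasn’t said how its system works, but in academic work, single-photo face reconstruction is commonly posed as fitting a 3D morphable model (3DMM) to detected facial landmarks. As a rough illustration of that general idea only, not of ObEN’s proprietary pipeline, here is a minimal Python sketch of the core fitting step, with randomly generated stand-ins for the model and the landmarks:

    import numpy as np

    rng = np.random.default_rng(0)

    N_LANDMARKS, N_COEFFS = 68, 40
    # Stand-ins for a real morphable model's mean face and identity basis.
    mean_shape = rng.normal(size=(N_LANDMARKS, 3))
    basis = rng.normal(size=(N_COEFFS, N_LANDMARKS, 3))

    def project(points_3d):
        """Orthographic projection: keep x and y, drop depth."""
        return points_3d[:, :2]

    def fit_identity(landmarks_2d, lam=1e-2):
        """Ridge-regularized least squares for the identity coefficients a,
        minimizing ||project(mean + sum_i a_i * basis_i) - landmarks||^2 + lam*||a||^2."""
        A = basis[:, :, :2].reshape(N_COEFFS, -1).T       # (2*N_LANDMARKS, N_COEFFS)
        b = (landmarks_2d - project(mean_shape)).ravel()  # residual from the mean face
        return np.linalg.solve(A.T @ A + lam * np.eye(N_COEFFS), A.T @ b)

    # Round trip: make landmarks from known coefficients, then recover them.
    true_coeffs = rng.normal(size=N_COEFFS)
    landmarks = project(mean_shape + np.tensordot(true_coeffs, basis, axes=1))
    estimated = fit_identity(landmarks)
    avatar_mesh = mean_shape + np.tensordot(estimated, basis, axes=1)
    print("coefficient error:", np.linalg.norm(estimated - true_coeffs))

In a real system the mean face and basis come from a database of 3D scans and the landmarks from a face detector, with camera pose estimated alongside the identity coefficients, but the heart of the fit remains a small linear solve like this.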

The video below shows an example of a 3D model created from such an image.

The company expects to launch its tech in Q1 2017. It will be interesting to see how it’s integrated into other experiences, especially with Oculus soon launching its own Avatars system, which is again more cartoonish than what ObEN is aiming for. Oculus Chief Scientist Michael Abrash, for his part, doesn’t think perfect virtual humans will arrive for many years to come.

Interestingly, none of the Series A investors mentioned are listed as members of HTC’s VR Venture Capital Alliance (VRVCA), the group of investors that gather once every two months to review pitches from startup VR companies, including those from Vive X. Though unconfirmed, it was thought that ObEN might have been involved in the group’s first meeting back in September. We’ve reached out to HTC to clarify if the VRVCA had any involvement in this funding round.

As part of Vive X, however, ObEN will have had access to mentorship and work space, though it is currently based at the technology incubator Idealab. The company lists a range of openings across computer vision and speech recognition on its official website.

  • NullReference

    So what do we think – is stepping into the Uncanny Valley the right approach – or is Facebook right to purposefully steer away from it?

  • wheeler

    Cartoonish avatars would probably be received better, but overall I’d prefer it if the virtual avatars that stick with you across experiences were completely customizable (just with limits on poly count, size, etc.). Give people the built-in tools to easily create their basic cartoonish or realistic avatar, but ultimately let them use just about whatever they want.

  • Who knows of other use cases for photorealistic avatars (besides virtual porn)?

  • Juan Mendiola

    I worked in AAA games, so kudos to them if they can pull this off. At minimum, for realistic humans we use retopologized head scans + photos from multiple angles, which we then manually stitch together in Photoshop. The retopology can certainly be automated to some extent (computer vision), but one photo doesn’t contain enough data to texture the whole head. Also, individuals with longer hair will be hard to deal with, because that requires more than mapping a texture to a head (i.e. hair cards). But hey, maybe that’s in the secret sauce… hence the valuation.

    • GT

      Their example doesn’t show any hair (other than facial hair), so either they haven’t solved the problem or they’re keeping it hidden for now.

      Given their machine learning focus, and their claim to work with webcam-quality source photos, they might be doing something where they compare parts of the photo to a catalog of high quality premade 3D assets (collections of eyes, noses, etc, like you might find in a game character creator) and then use AI to tweak the prefabs to more closely match the person.
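
      A toy version of that retrieve-and-tweak idea, with every input invented for illustration (the embeddings, catalog, and shape parameters are all stand-ins):

        import numpy as np

        rng = np.random.default_rng(1)

        CATALOG, DIM, PARAMS = 200, 16, 8
        part_embeds = rng.normal(size=(CATALOG, DIM))     # one feature vector per premade nose/eye/etc.
        part_params = rng.normal(size=(CATALOG, PARAMS))  # each part's default shape knobs

        def retrieve_and_blend(photo_embed, k=3):
            """Find the k closest premade parts and blend their shape
            parameters, weighted by similarity to the photo region."""
            dists = np.linalg.norm(part_embeds - photo_embed, axis=1)
            nearest = np.argsort(dists)[:k]
            weights = 1.0 / (dists[nearest] + 1e-8)
            weights /= weights.sum()
            return weights @ part_params[nearest]

        # photo_embed would come from a learned feature extractor run on a
        # crop of the user's photo; a random stand-in is used here.
        nose_params = retrieve_and_blend(rng.normal(size=DIM))

      A learned refinement step would then push the blended parameters closer to the photo; this only sketches the retrieval half.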

      The facial hair in the demo looks like a pretty close match, so they might have taken the shape and density of the facial hair and created an approximation using procedural generation to fill in the area. Also saves on bandwidth costs if you can just transmit the seed variables and adjustments instead of a full texture.
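
      The seed trick in miniature, with made-up numbers: both ends run the same deterministic generator, so only a handful of parameters ever need to cross the wire instead of a texture:

        import numpy as np

        def generate_stubble(seed, density, center, radius):
            """Deterministically scatter hair strands over a circular patch."""
            rng = np.random.default_rng(seed)
            n = int(density * radius ** 2)
            angles = rng.uniform(0.0, 2.0 * np.pi, n)
            radii = radius * np.sqrt(rng.uniform(0.0, 1.0, n))  # uniform over the disc
            xs = center[0] + radii * np.cos(angles)
            ys = center[1] + radii * np.sin(angles)
            lengths = rng.normal(2.0, 0.3, n)                   # per-strand length
            return np.column_stack([xs, ys, lengths])

        # The entire "transmission" is five numbers, not a texture: the
        # receiver regenerates bit-identical strands from the same seed.
        params = dict(seed=42, density=300, center=(0.0, -4.0), radius=3.0)
        assert np.array_equal(generate_stubble(**params), generate_stubble(**params))

      Same trick games use for procedural worlds: ship the recipe, not the result.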