Apple’s iPhone X Facial Recognition Will Revolutionize VR/AR Development

by Ian Hamilton • November 6th, 2017

Kite & Lightning co-founder Cory Strassburger received his new iPhone X on Saturday. By Sunday night, he had produced a proof of concept using the phone’s landmark real-time facial recognition system to animate a character model from the Los Angeles-based VR startup’s upcoming game, Bebylon: Battle Royale. The proof of concept isn’t perfect, but the approach appears to leave enough room for improvement that he can use it for his purposes. A real-time facial capture system embedded in your phone could dramatically lower the cost of making digital characters look more alive and expressive in a wide range of content, including VR and AR software.

For those unfamiliar, the premise of Kite & Lightning’s Bebylon is a future in which immortality has been achieved. The drawback is that newborns stop aging at the baby stage. Thus, “bebies” who live forever spend their days entertaining themselves with a vehicular combat game focused on over-the-top attitude and spectacle. These bebies have enormous personalities, and the iPhone X face capture system — the same one used to make animated emoji — could be how they are brought to life.

“I need fast and easy,” Strassburger wrote in an email. “The free bonus would be I can easily make a super cool AR promotional app where Bebylon players (or anyone for that matter) can capture some fun clips with their Beby character… and eventually, when this tech is more widely adopted, have players record expressions into their ‘character’s expression library’ using a complementary mobile app, that can then be uploaded to the game and used by them in gameplay for funny taunts and such.”

It’s still an early test and there may be further improvements he can achieve, but Strassburger wrote that “honestly its better data than most anything I’ve seen when it comes to real-time” capture and “I’m pretty confident that I’ll use this process for our game, cinematics and marketing content.”

Check out the video below where he does a pretty good impression of Christopher Walken while animating one of these bebies.

Apple’s iPhone facial recognition system is based on its 2015 acquisition of the startup Faceshift, which showed how it could animate digital characters in real time without any setup. The tech also draws on PrimeSense, another Apple acquisition, which originally built the depth-sensing system in Microsoft’s Kinect. Over the last several years Apple appears to have focused on miniaturizing the technology so that it could fit in a device as small as an iPhone, and over the next few years it is likely we’ll see it proliferate across Apple’s entire product line.
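For developers, this capture data arrives as a set of per-frame blend shape coefficients — ARKit on iPhone X exposes several dozen of them as values from 0.0 to 1.0, with names like jawOpen and mouthSmileLeft. A pipeline like the one Strassburger describes essentially remaps those coefficients onto a character rig’s morph targets every frame. Here is a minimal Python sketch of that remapping step; the coefficient names follow ARKit’s conventions, while the rig-side target names and gain values are hypothetical examples:

```python
# Sketch: remap ARKit-style blend shape coefficients (0.0-1.0) onto a
# character rig's morph target weights. The rig target names and the
# per-shape gain values below are hypothetical, for illustration only.

# ARKit blend shape name -> (rig morph target, gain)
SHAPE_MAP = {
    "jawOpen":         ("beby_mouth_open", 1.2),  # exaggerate for cartoon style
    "mouthSmileLeft":  ("beby_smile_L",    1.0),
    "mouthSmileRight": ("beby_smile_R",    1.0),
    "browInnerUp":     ("beby_brow_up",    0.8),
}

def remap_frame(coefficients):
    """Convert one frame of capture coefficients into rig weights,
    clamping results to the valid 0.0-1.0 morph target range."""
    weights = {}
    for shape, value in coefficients.items():
        target = SHAPE_MAP.get(shape)
        if target is None:
            continue  # skip shapes the rig doesn't implement
        name, gain = target
        weights[name] = max(0.0, min(1.0, value * gain))
    return weights

# One captured frame (eyeBlinkLeft has no rig target, so it is dropped)
frame = {"jawOpen": 0.5, "mouthSmileLeft": 0.25, "eyeBlinkLeft": 1.0}
print(remap_frame(frame))
# {'beby_mouth_open': 0.6, 'beby_smile_L': 0.25}
```

A production pipeline — say, streaming the coefficients from the phone into Unity or Unreal — would add per-shape smoothing and retargeting curves, but the core per-frame mapping is this simple.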

Using the iPhone X facial capture in combination with other technologies, including Eisko and an Xsens motion capture suit, Strassburger believes there is “an incredibly cheap” capture pipeline for human performance “that is super fast and easy to set up with very impressive results.”

Follow Kite & Lightning on Twitter for future updates, and we’ll keep following as well.


  • When people laughed at poop on stage, I said the iPhone X was much more than that and that they shouldn’t have laughed. These are the real kinds of applications the new phone can enable…

    • dk

      face id has nothing to do with face tracking, which can be done on any phone with thousands of apps… search for the FaceRig app… as usual people are giving Apple way too much credit for doing basically nothing, it’s hilarious

      • JMB

        Could you please explain which app currently available for Android or iOS would allow for the same quality of real-time face capture (the subject of this article) while not using the hardware that FaceID is based on? Just as an FYI, FaceRig does not offer this. Thanks!

        • dk

          youtu DOT be/MpoV1YhrPug?t=3m13s … they are using the front-facing camera like every app ever… face id is just for unlocking

          • JMB

            It’s not. The iPhone X projects 30,000 infrared dots onto the user’s face and can therefore gather considerably more data for real-time face capture than a system based on a front camera without additional depth information.

            FaceID is the name of a function based on this hardware capability. The point of the entire article is that this capability can be used for considerably more than just unlocking a phone.

          • dk

            see the link

          • Vishal Thakur

            dk, do you even get the point JMB is trying to convey?

            First of all, YES, you can do face tracking with a camera using image processing algorithms. BUT with the iPhone X you get much more precise and accurate tracking of the face (thanks to those 30,000 dots and depth sensing). If you are still not convinced, can you please tell me an app that does the same kind of tracking using just the camera?

            Cheers!

          • dk

            the point is they put it in the phone for unlocking… and all the other features you can have with a camera… you could have Animoji on the iPhone 8, or in the Android messenger app, or in ARCore tomorrow, using just the camera… there is no difference for the consumer

            … they put that software on board because Snapchat and a thousand other apps made it popular… not because all of a sudden they have the hardware to do it

            … you need a depth camera with longer range on the other side of the device, because with ARKit/ARCore you can’t see where the walls are, you can’t scan anything, and it’s not as robust in all conditions

            … putting it on the front doesn’t give you anything you can’t do without it… except for the unlocking

          • This is like saying “the Kinect can’t do anything the EyeToy wasn’t already doing.”

          • dk

            you need the depth camera for unlocking… you need a depth camera for tracking/scanning the world
            … you don’t need a depth camera for face tracking, and every single face tracking app shows that
            … and you don’t gain anything by having it, except somewhat better face unlocking than a 2D camera

          • Incorrect. Why are you so driven to maintain this hard line when you are wrong?
            Face tracking from depth is demonstrably better (faster and more accurate) on the iPhone X than it has been with Adobe Character Animator, FaceRig, or other webcam face tracking attempts.

            You are basically arguing that improved speed and accuracy are not important for face tracking.

          • dk

            no one cares… it literally gives you nothing you need… get 100 people, 10 different phones, 10 different face tracking apps… you literally get nothing more from it except better unlocking than a 2D pic
            … no one needs “improved speed and accuracy” over how good face tracking already is
            what needs to be improved is positional tracking, not face tracking, which you already have with a camera

          • dk

            youtu DOT be/9Ca8zWJOlFQ?t=11m43s ——> Animoji is just using the camera
            … because you don’t gain anything you absolutely need for face tracking from the depth camera
            … but you do need it… you actually gain something… by using it for unlocking

          • You don’t seem to know what you are talking about, but you keep talking. Or maybe you do know what you’re on about, and want to stop the rest of the world from believing there are improvements? I’m distracted by curiosity as to what drives you.

            You just wanted to chime in here to tell developers that they won’t be using any of the face mesh geometry which the depth camera delivers so easily? They won’t be using the blend shape estimations, because there were 2D solutions already serving these up?
            Here is a list of exciting things ARKit enables (many of which seem to leverage the depth camera’s 3D data):
            blogs DOT unity3d DOT com/2017/11/03/arkit-face-tracking-on-iphone-x/

          • dk

            dude, Apple is not using it for Animoji… that shows exactly what I’m talking about
            … yes, you can have a more detailed scan of the face, but you don’t gain anything you absolutely need for something an app will be doing on your phone… except for unlocking

          • Marques was wrong. He’s respected as a reviewer (but I don’t believe he is a developer). That is why my retort was a link to a developer toolset explanation, specifically showing how developers will use the depth data.
            Also, here is a quick googled explanation of how Marques was wrong in his guesswork (read the “update”): thenextweb DOT com/apple/2017/11/15/the-iphone-x-doesnt-actually-need-face-id-for-animoji-apparently/

          • dk

            “Again, I’m still not convinced Apple absolutely needs TrueDepth for Animoji to be effective. After all, the Pixel 2 and Mate 10 can create realistic depth maps without special hardware for portrait mode on their selfie cameras, and masks works well on other apps not using fancy depth sensors. I’m sure plenty of iPhone 8 users would have been just fine with Animoji made using the RGB camera.”

            what it NEEDS it for is the unlocking… if it didn’t need it for unlocking, it wouldn’t be there… there is no functionality that disappears if the sensor disappears… except for the unlocking… that was the point

  • dk

    there are thousands of apps doing face tracking with just the front cam… which is most likely what Apple is doing… it’s not an Apple thing

  • PK

    This is using Faceshift, right? They were very good at this before Apple took them off the market, and I was waiting to see what they’d do with it. The Animoji weren’t the ambitious launch I expected, but this Bebylon use case is much more impressive.

  • Kenny Thompson

    I’m impressed. Would love to see it utilized in more games.

  • Surprised they mentioned Xsens as low-cost mocap. Isn’t that $12K and up? OptiTrack will let you do full body for $8K. Curious what the evaluation factors are for basic game dev (like, you can do limited full-body mocap with MotionBuilder and Kinect 1, if you just need it cheap). And supposedly people are developing full body with 3 Vive Trackers, using IK solving tricks (is this still moving forward?).
    I’m not sure what to pine for (and why).