Microsoft’s on-stage HoloLens demonstrations haven’t always been impressive, but a new one at the company’s Inspire event (via The Verge) is nothing short of amazing. Using a combination of body and voice capture technologies, Azure AI, and HoloLens, Microsoft created a nearly photorealistic hologram of executive Julia White, then had the hologram deliver part of White’s keynote in Japanese — a language the real person can’t speak.
White wore a HoloLens headset throughout the demonstration, walking around her clone and observing it in 3D space. She began by conjuring up a “Mini Me” that could be vaguely “held” in her hand. After a brief sparkling green special-effects flourish, the doll-like copy transformed into a full-sized clone that began speaking in Japanese, using samples of White’s voice to deliver sentences that had been machine-translated from her English keynote.
It only takes a moment to grasp the tremendous potential of the technology, assuming it works in practice as seamlessly as it did in the demo. Equipped with the proper 3D depth-scanning camera hardware and AI-assisted translation tools, any presenter could quickly create believable, region-specific speeches — a keynote might be pre-recorded and shown simultaneously in 30 languages. Of course, the same technology could also be put to less benign uses, attributing words or actions to the body-scanned person that never actually happened.
For the time being, actually achieving this feat requires access to professional-caliber hardware, ranging from high-end specialized cameras to expensive HoloLens headsets. But similar body-scanning technologies are expected to make their way into next-generation smartphones over the next year or so, which could set the stage for viewing photorealistic avatars on phone screens or consumer AR headsets. Whether Microsoft brings this concept to its own Windows Mixed Reality headset initiative remains to be seen.
This post by Jeremy Horwitz originally appeared on VentureBeat.