We Went On An AR Easter Egg Hunt At VRLA Using Microsoft’s HoloLens

by Ian Hamilton • April 16th, 2017

Happy Easter from all of us at UploadVR!

Today, my kids will be exploring the yard looking for the little eggs we dyed yesterday, hidden by the Easter Bunny in the wee hours. For the rest of the day, my oldest will likely be begging me to hide them again so she can go on another hunt. And I'll be wishing we had holographic Easter eggs that would let me press a button to randomly hide them throughout the yard once more, so she and her little brother could wear lightweight glasses and go hunting for them again and again and again.

Those are the thoughts that will be running through my head today after we went on a "holographic" Easter egg hunt powered by Microsoft's HoloLens and put together by FLARB and AfterNow for the VRLA meetup at the Los Angeles Convention Center. The demo drew one of the longest lines of the entire convention, with people waiting to check out Microsoft's $3,000 self-contained headset. The creators of the experience designed a colorful set with a big tree in the middle and five Easter eggs hidden throughout the space. We had three minutes to find the eggs, which were larger than actual eggs and fairly easy to spot. Staring at an egg for a few seconds brought it to life, unleashing a beautiful glowing animal.

The experience highlighted how enticing a self-contained headset could be when it lets you see the world — and people — around you enhanced with virtual elements. We hope you’re having a good holiday, and I’ll leave you with the thoughts of NYU’s Ken Perlin, who outlined for us the power of VR and AR almost two years ago now:

My interest is not really in VR. It’s in the future of reality, or rather, what reality will become after everybody is “wearing”. That is, when all children grow up in a world where everyone either has cyber-contact lenses or cyber-lens implants.

That will be the dawn of what I think of as the coming age of computer graphics: A time when the visual reality itself that everybody sees is mediated by computer graphics, because that future visual reality (thanks to Moore’s Law) will be a superset of what you can see with the unaided eye.

But my real interest is not in all of that for its own sake, but rather in the impact that these things will have for the future evolution of natural language. When children grow up in a world in which they can literally draw in the air, and can show each other visual ideas with a wave of their hand, then those children will spontaneously evolve natural language itself to incorporate these new and greater powers of expression.

After all, natural language is really the great superpower of our species. All of our other powers are outgrowths of that one. So evolving natural language to become even more powerful is perhaps the most important thing we can do.

But such an evolution in language is not something that we can invent. As evolutionary linguists have discovered, natural language can only be evolved by children, through their use of it when communicating with other children. What we can do — and what our group at NYU is starting to do through the technology and content experiments that we are working on now — is start to create the conditions within which such an evolution can occur.
