How Mixed Reality Will Turn Life Into A Video Game

by Upload • April 30th, 2016

Above: A 3D printed “shelfie” of me and my wife, Danielle. Image courtesy of Volim Photo.

As if life weren’t incomprehensible enough already, things are only going to get more surreal thanks to the emergence of a new tech ecosystem. The shipment of the Oculus Rift, the rollout of the Microsoft HoloLens, and the development of Magic Leap’s headset are just the beginning, as a whole slew of technologies is set to completely transform the way we perceive, interact with, and even manufacture the physical world around us.

The potentially broad consumer appeal of virtual and augmented reality devices has the tech media increasingly hooked on the rapid developments around AR and VR. And, since smartwatches never quite picked up steam, it’s time to fawn over something else that may genuinely be a game-changing technology. However, the tech media, in its rush to push out content, hasn’t quite captured the full picture. As exciting as these headsets are, they are only one piece of a larger tech ecosystem known as mixed reality.

Mixed reality, a term that Microsoft and Magic Leap already use to describe their devices, refers to a spectrum with the digital world lying at one end and the physical world at the other. While the Oculus Rift sits at the digital end, immersing users in a completely virtual reality, the HoloLens and Magic Leap are somewhere in the middle, overlaying digital artifacts on the physical environment. Then, at the far end of the spectrum, is that convoluted, tangled web of nonsense physicists call “matter.”

Other technologies missing from this picture, and ones that will make the overall view even trippier, are 3D scanning, haptic devices, and 3D printing. 3D scanning is about to hit the public in a big and somewhat unexpected way, as two 3D scanning smartphones are likely to reach the consumer market this year. One smartphone from Lenovo, featuring Google’s Project Tango 3D scanning platform, will be released this summer for less than $500, and there are rumors that a version of the iPhone 7 Plus, to be unveiled in September, will have depth sensing capabilities as well. When even a handful of the two billion smartphone users on the planet get their hands on these, those Snapchat filters of you looking like a panda are going to be even more out of this world.

Above: “William – 3 days old,” a 3D scan by alban, shared on Sketchfab.

With low-cost depth sensing at your fingertips, you’ll not only be able to capture 3D scans of your newborn to share on Facebook via the 3D modeling community Sketchfab, or put your face onto your favorite video game character, but you’ll also be able to scan your apartment in order to do some interior decorating at IKEA (or at IKEA.com, depending on where you are, virtually or physically). Imagine capturing your living room before you head to the big blue Swedish home goods store in the sky so that, once there, you can see how a SÖFA looks in your virtual pad. Or, back at home, you can superimpose a virtual SÖFA on your actual apartment.
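For the technically curious, here’s a minimal sketch of how that last trick might work in a game engine like Unity, assuming you’ve already imported a scanned room mesh with a collider attached. The FurniturePlacer class and sofaPrefab field are hypothetical names for illustration, not anyone’s actual API:

```csharp
using UnityEngine;

// Sketch: click anywhere on a scanned room mesh to drop a virtual sofa
// at that spot. Assumes the scan was imported with a MeshCollider and
// that sofaPrefab is assigned in the inspector.
public class FurniturePlacer : MonoBehaviour
{
    public GameObject sofaPrefab; // stand-in for the hypothetical SÖFA model

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            // Cast a ray from the camera through the click point.
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            RaycastHit hit;

            if (Physics.Raycast(ray, out hit))
            {
                // Orient the sofa to sit flat on whatever surface was hit.
                Quaternion rot = Quaternion.FromToRotation(Vector3.up, hit.normal);
                Instantiate(sofaPrefab, hit.point, rot);
            }
        }
    }
}
```

A live AR app on a Tango phone would anchor the sofa against real-time depth data rather than a pre-imported scan, but the underlying raycast-and-place logic is much the same.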

This same ability extends to just about everything in your life. Take clothes: turn the Snapchat panda example into a new dress or pair of pants. Instead of a pointless selfie, you’ll be able to map the clothes onto yourself before purchasing, making sure they’re flattering and have the right cut and color. Or you can just Window(s 10)-shop the day away while you’re supposed to be working.

Those actually doing work, like urban environmental planners, will be able to virtually map out a new plan for the city commons that might line the edges of the L.A. River, all the way from Long Beach to Pacoima. A bridge here, a solar array there, and a bike path on either side. All of this will be made all the more tangible with haptic feedback devices, which will let users actually feel as though they’re touching their digital objects.

When you introduce 3D printing into the mix, the lines between digital and physical become even harder to discern. Not only will people be able to bring reality data (as Autodesk refers to it) into the digital realm, but they’ll be able to push digital data out into the material world. Though the technology is still in its relative infancy, evidence of this ability has already been demonstrated with 3D scans of people resulting in 3D printed “shelfies” made from dyed gypsum.
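To make that digital-to-physical push concrete, here’s a sketch of the handoff: writing a mesh, scanned or modeled, out as an ASCII STL file, the format most 3D printing pipelines accept. The StlExporter class is a hypothetical illustration, not any vendor’s actual tooling, and a production exporter would also worry about units, scale, and watertight geometry:

```csharp
using System.Globalization;
using System.IO;
using System.Text;
using UnityEngine;

// Sketch of the digital-to-physical step: dump a Unity mesh to an ASCII
// STL file that a slicer can turn into a 3D print.
public static class StlExporter
{
    public static void Export(Mesh mesh, string path)
    {
        var sb = new StringBuilder();
        sb.AppendLine("solid scan");

        Vector3[] verts = mesh.vertices;
        int[] tris = mesh.triangles;

        // STL is just a list of triangles, each with a facet normal.
        for (int i = 0; i < tris.Length; i += 3)
        {
            Vector3 a = verts[tris[i]];
            Vector3 b = verts[tris[i + 1]];
            Vector3 c = verts[tris[i + 2]];
            Vector3 n = Vector3.Cross(b - a, c - a).normalized;

            sb.AppendLine(Row("facet normal", n));
            sb.AppendLine("outer loop");
            sb.AppendLine(Row("vertex", a));
            sb.AppendLine(Row("vertex", b));
            sb.AppendLine(Row("vertex", c));
            sb.AppendLine("endloop");
            sb.AppendLine("endfacet");
        }

        sb.AppendLine("endsolid scan");
        File.WriteAllText(path, sb.ToString());
    }

    static string Row(string label, Vector3 p)
    {
        // Invariant culture keeps decimal points, not commas, in the output.
        return string.Format(CultureInfo.InvariantCulture,
            "{0} {1} {2} {3}", label, p.x, p.y, p.z);
    }
}
```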

Companies like Microsoft, HP, and Autodesk haven’t missed the links between all of these technologies. While Microsoft’s HoloLens incorporates the entire mixed reality ecosystem, HP has its own version of mixed reality called “blended reality,” featuring gesture-controlled, 3D scanning PCs designed to be linked to 3D printing service bureaus. Autodesk, meanwhile, has established its own suite of 3D scanning, 3D modeling, and 3D printing software tied into a larger “reality computing” framework. These companies are well aware of the direction things are heading, but they may not be aware of the destination mixed reality could ultimately take us to.

Given the absurd progress of technology, it’s not entirely absurd to suggest that it may, one day, be possible to 3D print with atoms and molecules, or even rearrange the basis of matter to create entirely new objects. If we can send a tiny spaceship to Alpha Centauri, we can reconstitute your brother’s face. And, if reshaping the air into new objects is possible, the distinction between digital and physical becomes irrelevant: whatever can be designed from the confines of a VR or AR headset can be materialized, and vice versa.

This brings us to the Simulation Hypothesis. First posited by the Buddha around the 5th century BCE and later popularized in Western culture by The Matrix in 1999, the Simulation Hypothesis eventually made its way into academia via Oxford philosopher Nick Bostrom, who suggested in 2003 that we may be living in a simulation running on a supercomputer created by our descendants. Martin Savage, a University of Washington physics professor, has since given scientific credence to Bostrom’s thought experiment, saying, “If you make the simulations big enough, something like our universe should emerge,” and arguing that it might be possible to detect a signature in our universe, such as a limitation in the energy of cosmic rays, that would indicate we are living in a simulated world. Though Savage and his colleagues use a lot of fancy math and physics to discuss the concept, anyone who’s seen The Truman Show or any number of sci-fi movies from the past 60 years will know the tell-tale signs of being part of a massive simulacrum: a shiny green grid at the edge of our universe, or maybe just Ed Harris shouting at us from the sky.

As everyone from Siddhartha to Jim Carrey has been discussing the idea that we live in a virtual world for the past two and a half millennia, we can now begin to understand how we might arrive at such a fate. While UW’s Institute for Nuclear Theory looks for the green grid at the edge of our simulation, the mixed reality ecosystem offers one possible picture of how that simulation was made. Perhaps someone – our descendants, our ancestors, the writers of Battlestar Galactica – 3D scanned our entire universe (at least as far as it has been explored) and 3D printed it atom by atom.

Who knows? Maybe we were even the ones to have simulacreated our own universe in our desperate attempt to find meaning. As a scientific exploration into the possibility that our universe is a hologram, we created a simulation of our universe so realistic that it caused us to exist in the first place. That would certainly lend credence to Buddhism’s concept of Samsara, the idea that we live in an endless cycle of living and dying, and to the scientific theory that the universe wasn’t created by a Big Bang but by a Big Bounce, in which everything expands and contracts… forever?

An animated .gif from my video game, “Pseussudio”.

As a sort of proof of concept to demonstrate what this might look like, I created my own video game in Unity using all of the tools at my disposal (you can play it here with the most recent version of Firefox). Just about everything involved is rough around the edges, but I was able to 3D scan my entire condo with a consumer-available 3D scanner, the $349 Structure Sensor from Occipital, and use it as the basis for my level. I even 3D scanned myself as the player character, and my wife and a couple of friends as the other characters that populate my universe.

3D printed versions of collectible items from my game. Image courtesy of Volim Photo.

The software for scanning rooms isn’t quite polished, and I’m no 3D modeler, so I couldn’t quite make our place look presentable. Just about all of the assets in the game are 3D printable, though: the condo, the people, and the floating heads that I have to collect in order to reach the game’s objective of finding meaning. Once you’ve collected all three floating heads, the level ends, you hit the spacebar, and everything repeats all over again.
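In Unity terms, that loop fits in a few lines of script. Here’s a rough sketch of the mechanic, assuming the floating heads are trigger colliders tagged “Head” and the script sits on the player object; the names are illustrative, not lifted from the actual Pseussudio project:

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch of the loop described above: collect three floating heads, then
// press the spacebar to reload the level and repeat the cycle. Attach to
// the player object (which needs a Rigidbody or CharacterController for
// trigger events to fire).
public class HeadCollector : MonoBehaviour
{
    const int headsToFind = 3;
    int headsFound;

    void OnTriggerEnter(Collider other)
    {
        // Each floating head is a trigger collider tagged "Head".
        if (other.CompareTag("Head"))
        {
            Destroy(other.gameObject);
            headsFound++;
        }
    }

    void Update()
    {
        // Level complete: wait for the spacebar, then start over.
        if (headsFound >= headsToFind && Input.GetKeyDown(KeyCode.Space))
        {
            SceneManager.LoadScene(SceneManager.GetActiveScene().name);
        }
    }
}
```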


Soon, anyone will be able to create their own similar game, as 3D scanners become widely available this summer (you can get a head start with my Instructable here). It’ll be easier to do and better looking than my own proof of concept, and you’ll probably be able to 3D print a lot of what you make, too. At least, this is what Microsoft has in mind with its HoloStudio app for the HoloLens.

When this begins to happen, you might ask yourself what comes next. Will mixed reality turn our lives into a video game?

Contributed by artist and writer Michael Molitch-Hou.

