“I wanted to put another human inside a dream I had. VR lets me do that.”
This is how Jonathan Sims responds when I ask him why he devoted so much of his time and energy to creating Dreamtime – a startlingly unique virtual reality experience and 360-degree filmmaking experiment.
The kaleidoscopic visuals that give Dreamtime its out-of-this-world atmosphere were achieved by marrying Sims’ artistic vision to Google’s powerful DeepDream technology. DeepDream was created as part of the search giant’s ongoing computer vision experiments. The actual mechanics of the system are quite complex, but Sims explains them simply:
“From a layman to a layman: it’s an emerging field. Vast artificial intelligence neural networks can be taught to recognize images, sort of like we learn to recognize images. These networks are taught what dogs, or buildings, or faces look like by being fed enormous datasets of images. Hundreds of thousands of faces.

The short answer is that DeepDream is asking the neural network what learned patterns it sees in other images and then asking it to render them. The trippy stuff happens when you start feeding the rendered images back to the computer over and over. This is sort of how our human brain hallucinates, also. It’s basically running a Google image search backwards and asking the AI to show its work.”
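For the curious, that feedback loop can be sketched in a few dozen lines of code. The following is not the pipeline Sims and Dreamscope built – it’s a minimal illustration using PyTorch and a pretrained GoogLeNet, and the layer choice, step size, and iteration count are all assumptions made for the example:

```python
# A minimal sketch of the DeepDream feedback loop described above,
# using PyTorch and a pretrained GoogLeNet. Layer, step size, and
# iteration count are illustrative, not the Dreamtime settings.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT).to(device).eval()

# Capture the activations of one intermediate layer via a forward hook.
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inp, out: activations.update(value=out)
)

def dream_step(img, step_size=0.02):
    """One gradient-ascent step: nudge the pixels so the chosen layer fires harder."""
    img = img.detach().requires_grad_(True)
    model(img)
    loss = activations["value"].norm()   # "what learned patterns do you see?"
    loss.backward()
    grad = img.grad / (img.grad.abs().mean() + 1e-8)
    return (img + step_size * grad).clamp(0, 1).detach()

def deep_dream(image_path, iterations=40):
    pil = Image.open(image_path).convert("RGB")
    img = T.Compose([T.Resize(512), T.ToTensor()])(pil).unsqueeze(0).to(device)
    # Feed the rendered image back into the network over and over;
    # this loop is where the "trippy stuff" emerges.
    for _ in range(iterations):
        img = dream_step(img)
    return T.ToPILImage()(img.squeeze(0).cpu())

if __name__ == "__main__":
    deep_dream("frame_0001.png").save("frame_0001_dreamed.png")
```

Run on a single frame, the loop amplifies whatever the network thinks it recognizes; run on every frame of a 360 video, you get something in the neighborhood of Dreamtime’s visuals.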
Sims is a working VFX artist who completed this project in just four months while wrapping production on the upcoming X-Men: Apocalypse.
Sims created Dreamtime to do two things: to allow those in the waking world to experience the ethereal feelings that dreams often give us, and to encourage other immersive filmmakers to think differently about the way they tell their stories.
“Some people might critique the film for a ‘lack of story,'” Sims said. “But I just chose to look at story in a different way. In a 360 experience, everywhere the viewer looks creates a story, and my goal was to embrace that. If one guy wants to watch the entire thing while looking ‘backwards’ and another wants to watch looking in all different directions, I think that’s perfect. That creates two unique experiences – two unique stories.”
Sims was initially inspired to create Dreamtime by his own fascination with our brains’ nocturnal adventures, and by the work of another radical filmmaker, Johan Nordberg.
Nordberg’s experience led Sims to post a proof-of-concept video on Reddit. That post caught the attention of Dreamscope, a visual augmentation company that became Sims’ partner throughout the project.
“It was a good pairing,” Sims said. “I was technical enough to understand how to ask for certain things and they were creative enough to understand what the project should feel like. Endlessly thankful for Dreamscope’s help.”
Using hardcore programming to create artwork might seem like shoving a square peg into a round hole. Sims admits he had to get creative in order to guide this sledgehammer of computing power with enough finesse to sculpt something elegant and beautiful:
“The final resolution was a problem we had to solve. The neural networks we were using were trained on very small images, 300×300 pixels I believe, so when it draws the objects it sees, it is limited to that size. For a 2.5K format we had to figure out how to fool the network into drawing bigger things. That was all Dreamscope. You need some serious programming mindbeams behind the wheel.
It isn’t like working with Photoshop or After Effects. It’s an entirely new way of generating images. The simplest things are hard to do, and the strangest things just occur organically.”
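Sims doesn’t spell out exactly how Dreamscope solved the scale problem, but one common way to coax a network trained on small inputs into “drawing bigger things” is to process a high-resolution frame in fixed-size tiles, with a random shift on each pass so the tile seams never land in the same place twice. The sketch below illustrates that general technique, reusing the hypothetical dream_step from the earlier example; the tile size and jitter range are assumptions, not Dreamtime’s actual settings:

```python
# Illustrative tiled processing: roll the full-resolution frame by a random
# offset, run the dream step on fixed-size tiles, then stitch and un-roll.
# Tile size and minimum-patch threshold are assumed values.
import torch

def tiled_dream_step(img, step_fn, tile=300):
    """img: (1, 3, H, W) tensor; step_fn: runs one dream step on a small tile."""
    _, _, h, w = img.shape
    # Random roll so tile boundaries fall somewhere different every iteration.
    sy, sx = torch.randint(0, tile, (2,)).tolist()
    rolled = torch.roll(img, shifts=(sy, sx), dims=(2, 3))
    out = rolled.clone()
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = rolled[:, :, y:y + tile, x:x + tile]
            if patch.shape[2] < 32 or patch.shape[3] < 32:
                continue  # skip thin slivers at the frame edges
            out[:, :, y:y + tile, x:x + tile] = step_fn(patch)
    # Undo the roll so frames stay aligned from one iteration to the next.
    return torch.roll(out, shifts=(-sy, -sx), dims=(2, 3))
```

The network only ever sees patches near the size it was trained on, while the random jitter keeps visible seams from accumulating across iterations, letting the hallucinated structures grow across the full 2.5K frame.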
The final result of these struggles is a truly unique virtual reality experience. Dreamtime doesn’t look or feel much like anything else in the scene right now, and for an industry that is becoming steadily more crowded, it is endlessly encouraging to see that some artists are still pushing the medium toward innovation.
Dreamtime is available now to stream on Google Cardboard or download for the Oculus Rift and HTC Vive.