Rony Abovitz has been dreaming about a new way of seeing the world since he started Magic Leap in 2011. He raised $1.9 billion in funding, but he stayed quiet about what the company was going to do. Today, he unveiled the Magic Leap One AR goggles for the first time. AR is expected to become a multibillion-dollar market, with support from Google and Apple, but Magic Leap is the company throwing the most money and people at the challenge today.
Abovitz’s Fort Lauderdale, Florida, company will launch its “creator edition” in early 2018 with a software development kit (SDK) that will enable developers to get started on applications. But in a brief interview with GamesBeat after the announcement, Abovitz said that the company hopes everyone will become a kind of creator of applications for the AR headset, as it is intended to be an interactive, not passive, form of entertainment.
The goggles are attached by a wire to a disc-shaped external computer dubbed the Lightpack, which you hang on your belt and carry around with you. The system also has a handheld controller that resembles a touch-sensitive remote control. The puck does most of the processing, while some is also done in the headset. I haven’t used it yet, but the headset aims to overlay digital animations on imagery of the real world. It is ambitious, but the challenge is cramming a lot of technology into a tiny accessory, making it one of the biggest computing challenges of our time.
The device can receive a variety of inputs via voice, gesture, head position, and eye tracking. It places persistent digital objects in the physical environment around you, blending animated overlays on the real world in a way that makes them hard to tell apart. “Place a virtual TV on the wall over your fireplace and when you return later, the TV will be right where you left it,” Magic Leap said.
Here’s an edited transcript of our interview.
GamesBeat: It looks like the puck itself does a lot of processing. Is that one of the decisions you made as far as how to architect this?
Rony Abovitz: Right. At a high level, we decided to minimize weight on your head as much as possible. We wanted to maximize compute power. We ended up with an architecture that packs the maxed-out CPU and GPU you could put in your pocket. We have a real-time computer vision processor, with some AI and machine learning, in the headset. We have a distributed computing architecture where one side does real-time processing and the other does application processing. That was a deliberate design decision, definitely.
GamesBeat: Does that need something like WiGig wireless, a particular wireless connection to the headset, or is that a wired connection?
Abovitz: Yeah, it’s a wired connection. We’re working on having wireless for our next generation. We have things running in the lab that demonstrate that. But if you think about how long it took to just make Bluetooth audio good—with the amount of data we’re moving, with very low latency and massive computation between one computer on your head and the other in your pocket—you’ll see that in our next generation.
It’s a bit like, if you look at pro audio headphones, some people still like a wire because they want that clarity of sound. This is an order of magnitude more substantial as far as the information that’s running at very low latency. We wanted to tilt at high performance, very low latency, and an incredibly good experience first, in the smallest form factor possible. Then we’ll work our way to alternate form factors.
GamesBeat: Is this going to work indoors and outdoors? Do you have to be near a desktop computer?
Abovitz: You have your own computer. There’s no desktop required. It’s the end of PCs as far as we think of them, and the beginning of a new kind of computing. But yes, we’re optimizing for indoor first. I’m sure people will do alpha outdoor. But we really want to get indoor – the home, workplaces, third places, locations. We move really fast internally when we’re developing tech, but we want developers and creators and people to begin to soak up how this becomes part of life. I think it’s safe to start at home, in indoor spaces, and then work your way outdoors. That’s our pacing.
We’re also looking at this from a computer vision perspective—we do things as sophisticated as what some of the self-driving cars need to do. They don’t all work well as winter and sleet and fog come in. Optimizing for every kind of weather condition and every outdoor environment is a different problem. A self-driving car can work in really nice conditions in the Bay Area, but then you try it in Michigan at night in a snowstorm. Outdoor, again, is something you can build up to more and more, just like with self-driving cars, except that we have to pack everything into a few grams of tech. We can’t have hundreds of pounds on a car.
I do believe we’ll get there for outdoor in our generation two time frame. But we’re giving people a lot to play with already. As I say, I think we’ll see alpha outdoor experiments as well as indoor optimization with our system.
GamesBeat: Does a consumer version follow after the Magic Leap One at some expected point in time?
Abovitz: We call Magic Leap One a “creator edition.” I think there’s a sea change happening in the idea of what is a creator and what is a consumer. With Magic Leap, you’re actively creating with the computer. It’s not so much about passivity, the idea of just being a passive consumer of stuff. I think you’re going to see a blend with most people—not just people who are developers or artists, but a lot of people will become co-creators in some way. I think it actively inspires you to be part creator and part consumer.
That’s the blend for us, for all time. We’re trying to make a device that inspires people to be creative and become part of a much wider creative community. Every person has creative potential. We’re not releasing a dev kit. We want to inspire people who are creative, whether you’re a writer, a poet, an artist, a hardcore coder. We want to move people out of being passive consumers.
That doesn’t mean people won’t consume on it. But the notion of sitting there and just passively receiving, where I don’t engage and I don’t create myself—our idea of computing almost wants you to co-create, if that makes sense.
GamesBeat: So it’s not like the Oculus DK1, which was intended exclusively for developers. This is more like your initial product for everyone.
Abovitz: The way I think of it, if you grew up with computing like I did—the first Apple, the Apple II, the first Macintoshes—you had that experience if you were a lover of computing. You wanted to get engaged with it. You wrote code, and you also played with programs. It was an activity of creating and sharing and using stuff. It’s more in that spirit, whereas some of the mobile devices today are purely about consumption. They almost limit the ability to develop or create anything on them.
So it’s not a developer kit, but it’s not a one-way feed built only for people to sell you things. It’s a true interactive co-creating medium, and I hope that’s what we are forever.
This post by Dean Takahashi originally appeared on VentureBeat.