
Demystifying Magic Leap: What Is It and How Does It Work?

by Jono MacDougall • July 28th, 2016
Taking a Closer Look at What We Know and What We Expect

Magic Leap has not been forthcoming with details about how its technology works. From what little we know, it is a truly novel system with capabilities far beyond the off-the-shelf components consumers are accustomed to. You can’t blame Magic Leap for wanting to keep it a secret; there are many companies sniffing at their heels, trying to emulate whatever has got people so excited. It sounds like exactly the technology Apple wishes it had: a potentially revolutionary new thing that shows signs of true innovation. It feels like what Microsoft was striving for with HoloLens but hasn’t quite achieved. It takes inspiration from Google Glass but has clearly gone generations beyond it.

So what is it and how does it work? I have gone through talks, patents, job postings, and the backgrounds of people working at the company to try to find out.

Osterhout Design Group (ODG) AR glasses, in partnership with BMW MINI. Magic Leap might look similar to this.

The Broad Strokes

Before we dive into too much detail, let’s get an overview of what this will be. Simply put, Magic Leap is building a device capable of manifesting objects into a person’s field of view with a level of realism beyond what we have seen so far from other similar devices. Magic Leap will come in two parts: a pair of glasses and small pocket projector/compute unit, think phone-sized rectangle without a screen. The pocket unit will be connected via cable to the glasses.

The glasses will be similar in size and design to glasses worn by people today, though they may be a bit chunkier than we are used to. The small size of the headset is the most fundamental piece of this product. It means it will be socially acceptable to wear in public and gives it the potential to have the utility and accessibility of a smartphone.

An Anker battery bank. The pocket compute unit could be similar to this in size.

The Pocket Projector and Compute Unit

The major trick Magic Leap has been able to pull off is to remove most of the required hardware from the glasses and put it in a separate unit. HoloLens, as a counterexample, does an impressive job of keeping the size down, but Microsoft can only do so much, since all the components are housed in the headgear. So what will the pocket unit contain? Likely the following:

Battery

The device will draw a similar amount of power to a modern smartphone, and potentially more depending on use. If this is to replace the smartphone, it will need a seriously beefy battery; my guess would be at least 5,000 mAh.

CPU/GPU

These will likely be latest-generation mobile chips, my guess sourced from Qualcomm. Luckily, Magic Leap can save on high-end graphics processing, since mixed reality only requires rendering some objects rather than the entire scene. This sidesteps the intense graphics workload that VR faces.

RAM

Similar requirements to a smartphone. I imagine we will get 3–4 GB.

Custom Chips

These will be needed for SLAM processing, which is definitely required for placing virtual objects in the real world. Magic Leap might be building this chip in house, or they may use Movidius chips or some equivalent.

4G, WiFi, Bluetooth Connection, SIM Card, GPS Chip

Camera

There need to be a number of cameras on the glasses, but this doesn’t mean the pocket device won’t have a camera too. The requirements for the headset cameras that do SLAM are very different from the requirements for a good consumer camera. Given the size restrictions on the glasses, they may forgo a high-powered camera on the headset and put it on the pocket device. This would have the added benefit of alleviating privacy concerns, as you won’t be able to take photographs with the glasses alone.

Laser Projector

This is the meat of their innovation. Moving the projection system off of the glasses and onto the pocket unit affords a significant size saving for the final product. The projected light is produced on the pocket device and then sent up to the headset via a fiber-optic cable. Later in the article we will go into more detail on how this works.

The infamous splashing whale from Magic Leap’s marketing videos.

The Glasses

After stuffing as much as we can into the pocket component of the product, what is left for the glasses? They will need to fit the following:

IMU

This will combine a traditional accelerometer, gyroscope, and compass.

Headphones

Perhaps they will use bone-conducting headphones, as seen with Google Glass. This fits their philosophy of working with your body instead of against it. Bone conduction has the advantage that you can hear the outside world as well as the played audio.

Microphone, Optics, and Cameras

The optics and the cameras are the most interesting components so let’s look at those more closely.

Diagram from a Magic Leap patent application.

Optics

As we can see from patent applications, the optics used by Magic Leap afford considerable size advantages compared to more traditional projection systems such as those found in HoloLens or Google Glass. The image above shows that the light source is separated from the main headset, which is how we can surmise that the pocket device will drive the light.

Secondly, it shows a lens system that is very small. The image is clearly not meant to be to scale, but it must represent the approximate sizes of the components involved. The only component we have actually seen is the photonic chip. Compare components 5, 6, 7, and 8 (above) to the width of the chip and we start to see the sizes involved.

The Magic Leap lens system, from the patent.

So what is going on here? How have they shrunk the optics so much while also claiming to achieve a light field display, high resolution, and an impressive field of view? The answer is twofold: the fiber scanning display and the photonic lightfield chip.

Fiber Scanning Display

The fiber scanning display is a completely novel display system that has not been used in a consumer product before. Much of what we know of this invention comes from a patent application dating back to 2013.

The application is from some time ago, so we can expect some of the details related to the performance of the system to have changed, but the broad idea likely still holds true. The system uses an actuating fiber-optic cable to scan out images much larger than the aperture of the fiber itself. It works in a similar way to an old-style tube TV: instead of scanning electrons, it scans the light itself.

By using a piezoelectric actuator to achieve this scanning, one can maintain scan rates on the order of 24 kHz. We don’t get a frame rate that high, though, because it takes multiple passes (the patent gives an example of 250 cycles) to produce a full frame. This changes how we think about resolution. With this technology, resolution depends on the fiber scan rate, the minimum spot size the fiber can focus to (which defines pixel pitch), the number of scans needed to generate a full frame, and the desired frame rate. Depending on how much they have optimized since this patent was filed, we can expect resolution much higher than any consumer system seen today.
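The patent’s example numbers translate directly into an effective frame rate. Here is a back-of-the-envelope sketch using the 24 kHz scan rate and the 250-cycles-per-frame example from the patent; these are illustrative figures from a 2013 filing, not confirmed specs of the shipping device:

```python
# Effective frame rate of a scanned-fiber display, using the example
# figures quoted from the 2013 patent application. These numbers are
# illustrative assumptions, not confirmed Magic Leap specs.

scan_rate_hz = 24_000     # piezoelectric actuator scan rate (patent example)
cycles_per_frame = 250    # scan passes needed to build one full frame

frame_rate = scan_rate_hz / cycles_per_frame
print(f"Effective frame rate: {frame_rate:.0f} fps")  # Effective frame rate: 96 fps
```

If those figures still hold, 96 fps would even leave some headroom to trade frame rate for extra scan passes per frame, i.e. for more resolution.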

Packing a number of fiber scanning units together to increase the size of the display. Each tube is 1mm across.

While resolution and framerate are crucial to creating realistic holograms, field of view is equally important. Related to this, check out this interesting paragraph in the background information section of the patent.

“The Field of view (FOV) of the head mounted display (HMD) may be determined by the microdisplay image size together with the viewing optics. The human visual system has a total FOV of about 200° horizontal by 130° horizontal (sic), but most HMDs provide on the order of 40° FOV. … An angular resolution of about 50-60 arc-seconds is a threshold for 20/20 visual acuity performance, and it is determined by the pixel density of the microdisplay. To best match the capabilities of the average human visual system, an HMD should provide 20/20 visual acuity over a 40° by 40° FOV, so at an angular resolution of 50 arc-seconds this equates to about 8 megapixels (Mpx). To increase this to a desired 120° by 80° FOV would require nearly 50 Mpx.”

This puts two things into place. The first is that consumer displays are almost an order of magnitude behind where they need to be to increase FOV, which makes it clear why HoloLens is struggling so much to produce a large FOV. Second, it shows Magic Leap’s aspirations: they want to produce a field of view of 120° by 80°. This would be a larger field of view than the Oculus Rift, at far greater resolution. So have they achieved this? It is hard to say, but the patent does give us some numbers to work with, bearing in mind they are over three years old and the technology has likely improved even more in the meantime.

Pixel pitch is the distance from the center of one pixel to the center of the next, and it is a limiting factor in resolution. Traditional microdisplays, like those used by HoloLens, have a pixel pitch of about 4-5 microns. This limits the resolution of these displays and thereby also limits the field of view that can be produced. The patent indicates that a scanning fiber display can produce a pixel pitch of 0.6 microns, an order-of-magnitude improvement.

So what resolution does that produce? They quote a resolution of 4375 x 2300 in one section of the patent, but I don’t think that tells the whole story: this is the example given for a naive approach, before they start discussing multi-core fibers improving it further. I believe the resolution is much higher than that. This is crucial if we want a large field of view.
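The megapixel figures in the patent paragraph quoted above are easy to verify ourselves. A quick sketch, assuming the 50 arc-second angular resolution the patent uses:

```python
def pixels_for_fov(h_deg: float, v_deg: float, arcsec_per_px: float = 50) -> float:
    """Pixel count needed for a given FOV at a given angular resolution."""
    h_px = h_deg * 3600 / arcsec_per_px   # 3600 arc-seconds per degree
    v_px = v_deg * 3600 / arcsec_per_px
    return h_px * v_px

print(f"{pixels_for_fov(40, 40) / 1e6:.1f} Mpx")    # 8.3 Mpx  (patent: about 8 Mpx)
print(f"{pixels_for_fov(120, 80) / 1e6:.1f} Mpx")   # 49.8 Mpx (patent: nearly 50 Mpx)
```

Both results line up with the patent’s “about 8 Mpx” and “nearly 50 Mpx” figures, so the arithmetic checks out.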

Finally, given the stated aspiration of a 120° field of view, this line is of particular note:

“The above described technologies facilitate an ultra-high resolution display that supports a large FOV in a head-mounted or other near-to-eye display configuration.”

I think this all but confirms we will have a FOV that is, at minimum, far greater than 40°, and I don’t think it is crazy to think it is approaching the stated goal of 120°. If I were a gambling man, I’d put my money on 90°.

Photonic Lightfield Chip

When I first heard Rony Abovitz call his lens a “photonic lightfield chip”, I groaned. Not another crazy name for something that already exists. It is called a lens, Rony! But the more I researched it, the more apparent it became that it is in fact much more than a simple lens. So how does it work, and why is it far more interesting than a simple lens? Let’s talk about diffractive optical elements.

An example of a diffractive optical element.

Diffractive Optical Elements (DOEs) can be thought of as very thin “lenses” that provide beam shaping, beam splitting, and diffusing or homogenizing. Magic Leap uses a linear diffraction grating combined with a circular lens to split the beam wavefront and produce beams with the desired focus. That is to say, it directs the light to your eyes and makes it appear to be in the correct focal plane. But this is far easier said than done, and it is by no means easy to say: the patent document that I am pulling all of this information from is verbose, to say the least.

To build a light field, Magic Leap has set up a photonic chip with two separate components. One element (6 in the diagram) takes the projected light and inserts it into the second element (1), which redirects the light into your eyes.

Both components make use of DOEs to do their job. The main drawback of DOEs is that they are highly tuned to do one specific job: they cannot operate on different wavelengths, and they cannot change their properties to allow for different focal points in real time.

The photonic chip’s switchable DOE layers.

To solve this, Magic Leap has layered a number of DOEs, tuned to different wavelengths and focal planes, into the larger lens-like component. These DOEs are extremely thin: they are on the same scale as the wavelength of the light they manipulate, so this doesn’t add much bulk to the apparatus. Here is where the chip nature of this optical system comes in. Magic Leap is able to turn the different layers of DOEs on or off. By doing this they can change the path by which the light reaches your eyes. This is how they change the focal point of the image and achieve a true light field. As the patent says:

“For example, a first DOE in the set, when switched ON, may produce an image at an optical viewing distance of 1 meter for a viewer looking into the primary or emission face of the planar waveguide. A second DOE in the set, when switched ON, may produce an image at an optical viewing distance of 1.25 meters.”

It might seem highly limiting that you would need a large number of layers to produce the full range of focal points, but this isn’t the case: different combinations of DOEs used in conjunction also produce different output states. So it isn’t one focal plane per DOE; it is one per combination of DOEs.
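This combinatorial growth is easy to illustrate. A counting sketch, assuming (hypothetically) that every non-empty combination of active layers yields a distinct optical state; the real optics will of course constrain which combinations are actually usable:

```python
from itertools import combinations

n_layers = 6  # hypothetical number of switchable DOE layers

# One focal plane per DOE would give only n_layers states; one state per
# combination of DOEs gives every non-empty subset of active layers.
states = sum(1 for r in range(1, n_layers + 1)
             for _ in combinations(range(n_layers), r))
print(states)  # 63, i.e. 2**6 - 1, versus just 6 for one-plane-per-layer
```

So even a modest stack of layers could, in principle, address dozens of focal planes.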

Changing the set of DOEs that are currently active changes the path by which the light exits the photonic lightfield chip, as shown in the GIF above. They will likely have more layers than the number depicted here, but how many is anyone’s guess.

Finally, we see how Magic Leap manages to create black with light, as they have claimed to be able to do in the past. If we take a DOE on the outer edge of the lens and one on the inner edge, we can use them to cancel out light, similar to noise-cancelling headphones. From the patent:

“Such may be used to cancel light from the planar waveguides with respect to light from the background or real world, in some respects similar to noise canceling headphones.”
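The cancellation idea in the quote above is ordinary destructive interference. A minimal numeric check, assuming two equal-amplitude monochromatic waves with one shifted by half a wavelength:

```python
import math

wavelength = 550e-9              # green light, in metres (illustrative)
k = 2 * math.pi / wavelength     # wavenumber

# Sample one full wavelength: at every point, the half-wavelength-shifted
# wave is the exact negative of the first, so their sum is (numerically) zero.
for i in range(16):
    x = i * wavelength / 16
    direct = math.sin(k * x)
    shifted = math.sin(k * (x + wavelength / 2))
    assert abs(direct + shifted) < 1e-9  # destructive interference
```

This is only the one-dimensional intuition; matching amplitude and phase per pixel against uncontrolled real-world light is far harder in practice.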

So, why is this a chip? Well, a typical electronic chip changes the flow of electrons based on certain conditions. The Magic Leap photonic lightfield chip changes the pathways of photons based on certain parameters. Sounds like a chip to me.

Where does this leave us? We have a photonic lightfield chip and we have a high-resolution projector, but how do we actually create an image? This is done via composition. The image is layered such that different components are projected at different focal lengths on a subframe basis. This means that within a single frame there are multiple passes to construct the entire image, with each focal plane laid down individually.
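As a sketch of this per-focal-plane composition (the object names and depths below are invented purely for illustration):

```python
# Group hypothetical scene content by focal plane, then emit one subframe
# (one DOE configuration plus one projector pass) per plane within a frame.

scene = [
    {"name": "whale",  "depth_m": 1.0},
    {"name": "ui",     "depth_m": 1.25},
    {"name": "dragon", "depth_m": 1.0},
]

def compose_frame(scene):
    planes = {}
    for obj in scene:
        planes.setdefault(obj["depth_m"], []).append(obj["name"])
    # One pass per focal plane, nearest first
    return [(depth, planes[depth]) for depth in sorted(planes)]

print(compose_frame(scene))
# [(1.0, ['whale', 'dragon']), (1.25, ['ui'])]
```

Everything at the same depth shares a subframe, so the number of passes per frame is bounded by the number of distinct focal planes, not the number of objects.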

Another Magic Leap marketing image.

Cameras

Magic Leap is trying to accommodate three different goals with its camera technology. The first is the most obvious: a camera to produce everyday pictures. This is the best understood of the camera technologies they will use, and they will likely use a sensor similar to the latest in the smartphone market. Whether this sensor lives on the glasses or on the pocket device is still up in the air, but the product will have a camera capable of taking decent pictures.

There are two other use cases which are far more interesting. Magic Leap has repeatedly talked about the device’s ability to understand the world around it. In one particular interview, it was mentioned that it will be able to recognise objects, such as a knife and fork. To do this they will need an array of cameras. As an example of a device that does this very well, we can look at HoloLens, which contains an array of four environment-sensing cameras working in combination with a depth-sensing camera. We get further information about Magic Leap from the patent documents.

This diagram shows two components on the left and right arms of the glasses. The top is the left arm and the bottom is the right arm.

As we can see from the above diagram, we can expect two outward-facing cameras, labeled as “world camera”. That said, the text of the patent implies there could be more than two, stating “one or more right outward facing or world view cameras [per side]”. At this point it is unknown how many cameras will be included in the system, nor how much Magic Leap has been able to shrink these components. We do know they will be on the glasses and that they are vital to SLAM processing.

The final use case for cameras can also be seen in the diagram above. At least two cameras will be pointed at your eyes. These are used to track your gaze and vergence so that the focal point and the direction of view can be obtained. An infrared LED will also illuminate each eye for these cameras. This eye tracking will also be critical to the user interface. I imagine the question of “what are you looking at?” will be fundamental to how you interact with Magic Leap. It will potentially be the main interaction mechanism, similar to a mouse.
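Why track vergence? Because the angle between the two eyes’ gaze rays reveals the depth the user is fixating on, which in turn tells the display which focal plane to drive. A trigonometric sketch; the interpupillary distance and vergence angle below are illustrative assumptions, not measured values:

```python
import math

def fixation_depth_m(ipd_m: float = 0.063, vergence_deg: float = 3.6) -> float:
    """Approximate fixation distance from the vergence angle between the eyes.

    Treats the two gaze rays as an isosceles triangle over the pupils:
    depth = (ipd / 2) / tan(vergence / 2).
    """
    return (ipd_m / 2) / math.tan(math.radians(vergence_deg) / 2)

print(f"{fixation_depth_m():.2f} m")  # 1.00 m for a 3.6-degree vergence angle
```

With an estimate like this, the system could pick the DOE combination whose focal plane best matches where the user is actually looking.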

A Magic Leap mockup image.

Clearly there is no way for me to verify this information at this time, but it all adds up to sound like the product Magic Leap is trying to produce. Regardless of whether it turns out to be a consumer success, this is the first example of real innovation the tech industry has seen in some time. I am extremely excited to see what happens next for them, and I am looking forward to the shake-up this will bring to the industry in general.

This post originally appeared on the Magic Leap-focused blog, GPU of the Brain by Jono MacDougall. It has been re-published here with permission from the author. Follow the blog on Twitter: @gpuofthebrain.


  • B7U3 C50SS

    Nice going Jono! Happy your post made it to uploadvr. It’s well written and thought out.

  • AlyssaAngel

    Very thorough. Lots of great information.

  • lovethetech

    If Magic Leap is so good, why Google is investing in another “MR” product of their own.

    Great marketing and hype !! Same like Segway?? But the Videos from and of the product are like very dark scenes. Magic leap is working only with movie studios It like Disney and Lucas. So they are going to sell them to watch videos in 3D????

    They have to accommodate the IMUs and the other sensors. It can not be small in size with today’s tech. Tango may provide them any spatial mapping and tracking capability. But the HMD and the smartphone tracking perspective are entirely different in the implementation method.

    Lots of questions…

    The boasting of Magic leap guy shows he is …..

    • user2

      Its only rumors that google wants to bring their own standalone headset to market. I think its more likely that they use it to work on technologies (tango, aura, soli) and then contribute them to magic leap’s product which is designed to replace smartphones.
      I guess that the first consumer version will be released in 2018 based on what was said at the brainstorm tech conference.

    • Rocky Dohmen

      Why would Google invest hundreds of Millions of dollars into Magic Leap if it wasn’t good?

    • GlobalCitizen2000

      Google wants their own product, they don’t want to give the lion’s share to another company. There’s plenty of room in this field for more than one player.

      • user2

        Thats obviously not their strategy if you look at the mobile market and niantic.

    • zaywolfe

      Google is widely known to invest in startups and other teams, sometimes even starting their own to fire up competition between their properties. That way the teams know that if they don’t deliver then the writing is on the wall.

  • Jonathan Jones

    Why start with glasses? Why not paper thin lightfield monitors and panels?

  • Sebastien Mathieu

    Great Article thanks!!, but the tech seems to good to be true…nonetheless interesting. but as a gamer I still prefer VR….

    • kalqlate

      I think you missed this:

      Finally, we see how Magic Leap manages to create black with light as
      they have claimed to be able to do in the past. If we take a DOE on the
      outer edge of the lens and one on the inner edge we can use them to
      cancel out light similar to noise cancelling headphones.

      Now, where’s your imagination?

      With this technique, the Magic Leap device can “black out” the real world background, as if there were a virtual cover on the device. Of course, to make it a perfectly closed device, they can supply a clip on cover that physically blocks out the real world, as with the Rift or Vive. Essentially, the Magic Leap device can selectively be for AR, VR, and MR.

  • Daemon Hunt

    Great article. The best I have read about this mysterious technology.

    • badsleepwalker86

      Here here! Best explanation I’ve seen yet! Makes much more sense on how they are producing those images. Sadly we consumers may not get this tech for 5-10 years.

  • user2

    Movidius chips aren’t used in Lenovo’s Phab 2 Pro. Maybe Qualcomm can do it without them.

  • GarynTX

    Excellent article. Very thorough. As for Magic Leap, “Shut up and take my money!”

  • André Lorenceau

    good article overall but sentences like the following kind of hurt your credibility.
    “I think this all but confirms we will have a FOV that is, at minimum, far greater than 40° and I don’t think it is crazy to think it is approaching the stated goal of 120°. If I was a gambling man, I’d put my money on 90°.”

    A patent confirms absolutely nothing. You can say whatever spec you want on a patent, I can patent a system to make a 200K screen, doesn’t mean I have built it already or can build it for any kind of reasonable price.

    • zaywolfe

      That seems more like an attempt to level expectations. If they didn’t do it hundreds of people will go out on the net talking about 120°. Even if it is a guess I’m glad the author doesn’t enable this kind of overblown hype.

      • Hannes Steffenhagen

        The 90 number is still speculation based on basically nothing.

    • Christopher Hatton

      “…hurt your credibility”. As what? Jono, the author, is clear throughout the article that this is a mixture of delving into the patent, and speculation. I for one am grateful for bringing this info to me (for free). You sound rather ungrateful.

      • André Lorenceau

        Hurts credibility = these kinds of sentences make it feel like he WANTS it to be amazing, and that would cloud his judgment.
        Lol @ ur ungrateful comment.

  • Nine2Nine

    Excellent article. I have been following this company for a long time, have read through a lot of the publicly available information, and like the author of this article came to many of the same conclusions. Most blogs and news outlets focus on the photonics chip and its ability to enable a full lightfield. Almost all of them do not talk about the fact that it uses a new type of projection system that enables the light source to be decoupled from where it is displayed. This really is a big deal, as HUDs will never replace phones until they are as small and light as glasses. One thing the article only briefly touched on was the sensor system speculated to be used. While it is a given that an accelerometer, gyroscope, and compass are going to be integrated, the camera system is a complete mystery. My personal belief is that they will use a camera array system that can simultaneously capture a depth map, motion capture, pictures, and video, all wrapped up in a very small package that is thin enough to fit in a pair of glasses. One candidate for this is Pelican Imaging. Their array systems are just a few millimeters thick and can fit into a pair of glasses. In fact, around the time that Magic Leap started to receive its huge dump trucks of cash from investors, Pelican Imaging changed its focus from phones to HMDs. Also, Pelican Imaging has said they are working with unnamed HMD developers. So interesting..

  • Peter Richards

    how does a series of “guesses” at what the tech is or could be demystify anything? Title should be “our best guess at what Magic Leap may, or may not contain.”

    • Hannes Steffenhagen

      Came here to say this. Part of the article probably comes reasonably close to what magic leap is trying to do, but most of it is incredibly optimistic speculation based on effectively nothing.

      • Dj Hastings

        Well, this would probably be the case except, everything he is saying is
        exactly how the design is laid out in magic leap’s patent. So it’s not
        a guess or speculation it’s “this is exactly what Magic Leap is telling the US patent
        office that their design is” If anything, his article doesn’t go into
        enough detail, like the fact that the patent says they are using
        switchable liquid crystal lenses, or the fact that virtual images are rendered so that pixels are shown sequentially..etc.

        • kalqlate

          You said:

          If anything, his article doesn’t go into
          enough detail, like the fact that the patent says they are using
          switchable liquid crystal lenses…

          Hmm… from the article:

          “To solve this, Magic Leap has layered a number of Diffractive Optical Elements (DOEs) together into the
          larger lens-like component that are tuned to different wavelengths and
          focal planes
          . These DOEs are extremely thin, they are on the same scales
          as the wavelength of the light they are manipulating so this doesn’t
          add much bulk to the apparatus. Here is where the chip nature of this
          optical system comes in. Magic Leap is able to turn on or off the
          different layers of DOEs.
          By doing this they can change the path in
          which the light reaches your eyes. This is how they change the focal
          point of the image and achieve a true light field.”

          You continued:

          …or the fact that virtual images are rendered so that pixels are shown sequentially

          Hmm… from the article:

          “This is done via composition. The image is layered such that the
          different components are projected at different focal lengths on a
          subframe basis.
          This means, in a single frame there are multiple passes
          to construct the entire frame, each focal plane being laid down
          individually.

          In other words, let’s imagine that a regular video is running at 30 frames per second. REGULAR video presents ALL pixels at ONE focal length for each frame. Contrastingly, in the Magic Leap device EACH pixel of a frame can be placed on a different layer (DOE) to be displayed at a different focal length. This is what allows the human eye then to see light coming from different layers and angles (fields), as if it were produced naturally.

          Are you sure you were reading THIS article???

  • shane redmond

    These could be leaked images of magic leap or just concept drawings but who knows

  • Jack H

    Overall a much more reasonable guess than most but I must disagree on several points:
    1. Separation of laser sources from the rest of the optical engine- it would be quite inefficient to do so and also the long fibre optic cables that you claim would run from the pocket to the head would snap much more easily than a data cable.
    2. The assertion that diffractive optical elements aren’t polychromatic. There has been a body of work in polychromatic DOEs and in terms of commercial ventures, Vuzix and Nokia created a trichromatic (for 3 lasers only) blazed diffractive waveguide for the Vuzix headsets and Dispelix from VTT Finland also produced a diffractive waveguide with broadband trichromatic capability and suitability for LEDs as well as laser sources.
    3. The image source being a scanned piezo fibre. Based on recent job advertisements, I think it will instead be a MEMS raster mirror like Microvision, Sercalo and Lemoptix (Intel acquired).
    4. The photonics chip could just as easily be a chip more in the tradition of its namesake instead of just an active waveguide. An integrated circuit would perform some of the beam modulation functions and there is also a method of producing both an image and per-pixel focus modulation from standard photonic circuit IP blocks.

    • Dj Hastings

      I have actually read much of the patent’s section on the waveguide design and:
      1. Everything in the patent and from ML promotional materials shows the design to be using fiberoptic cables.
      2. The patent deals with this subject in depth
      3. The only work that I’ve seen from these inventors that involves mirrors is old research papers by them from the early 2000s. The scanning fiber piezo was invented by one of the inventors of the ML technology and its patent is cited specifically as being used in ML.

      • David Gomes

        There are recent ML patents that specify mirrors. Also, time has proven Jack right re fiber.

  • Dj Hastings

    Fun fact: This tech is the result of over a decade’s worth of post-doctoral research by two professors. Rony Abovitz found them somehow and decided that their tech was worth starting a company over.

  • Merrick Dida

    Just wondering…How can this technology display Black or colors darker than the surrounding ? It’s impossible, isn’t it ? It sounds like a HUGE limitation to me 🙁
    (and it would be a great explanation for the dark surrounding in all their videos)

    • kalqlate

      From the article you just read:

      Finally, we see how Magic Leap manages to create black with light as
      they have claimed to be able to do in the past. If we take a DOE on the
      outer edge of the lens and one on the inner edge we can use them to
      cancel out light similar to noise cancelling headphones.

      They do it by placing a pixel of equal wavelength and amplitude in the same pixel position on a DOE a half wavelength away from the one you want to cancel. When both pixels travel to the eye, the crest of one wave is always the trough of the other–the two pixels continually cancel each other out as the cycle of one has the negative cycle of the other.

      • Merrick Dida

        Oh ! Thanks ! I get it, even if it seems verry difficult to make complex shadow to me. Seriously, calculing an “anti-pixel” and displaying it in the very right and precise place in order to create each dark zone ? It seems possible, right, but…incredibly difficult.
        For the dragon image, Yeah, but if you look close all videos they share there is never something darker than the (always very dark) surrounding. In fact this happen only on images which are not explicitely declared as “real” magic leap footages.. So…we will see, but thanks, you gave me hopes :p

        (ps, sorry for my bad english, not my native tongue )

        • kalqlate

          Agreed! It is unfortunate that they haven’t demoed in video what they’ve demo in stills. Admittedly, I’ve had the same concern all along, but with the kind of money and the enormous number of PhDs in various disciplines that they have working on that and various other problems, it really would be crooked and nonsensical for them to mislead in that way; particularly, they would have to be misleading their investors, which I seriously doubt. They seem to be getting closer and closer to a real demo date.

          Early on, it seems they were QUITE over-optimistic regarding their time-frame for a public demo. Back in late 2014, they announced a special show that they were to present with physicist Brian Cox at the Manchester International Festival (MIF) in June 2015, titled The Age of Starlight. As that date approached, they knew that they were nowhere close to being ready for a public demo. Instead, Abovitz, remotely via telepresence robot, participated in an on-stage discussion about the technology and its possibilities. I suspect that they are waiting until everything is perfect and ready to go, and will once again schedule The Age of Starlight at the MIF as the first public demo of their device–hopefully then, June 2017.

          For not-native, your English is great! I couldn’t tell. You write like a native. o/

          (One slight correction: “you gave me hopes” to either “you give/gave me hope”. “Hope”, like other emotions, is an uncountable thing. For them, to express quantity, you can write “you give/gave me A LITTLE / A LOT OF / SOME / MUCH / GREAT hope”. 🙂

          I used to enjoy teaching English and learning other languages on the fantastic community-driven language-learning site Livemocha for about three years. If you are interested in improving your English or learning other languages and making GREAT friends from around the world, Livemocha is the place for you! I might rejoin sometime late this year or early next year. It was one of the greatest experiences of my life–it completely changed my perspective on the world and my place in it–and I still have many friends that I met there that I communicate with weekly, and in some cases, daily. I have traveled and met some of them in the real world and have plans on doing the same with others. Great times!)

  • Al

    soo, basically an unfinished Hololens?

  • Original Prankster (Internal E

    I know someone who claims to have worked on this project or with the people who invented it. He claims that its entirely a Glass-less experience. Fully augmented reality with absolutely no peripherals.

  • sagarbhandare

    Excellent article! The most detailed analysis I have read about Magic Leap’s tech.