HTC Promises Less Than 7ms Latency For Intel’s Wireless Vive

by Jamie Feltham • May 31st, 2017

Wired VR headsets have barely noticeable motion-to-photon latency, and that’s essential to a comfortable experience; a headset must keep pace with where the user is turning their head, otherwise they might begin to feel nauseous, not to mention lose their sense of immersion. Introducing wireless add-on kits for these devices risks increasing that latency, however.

As we reported back in January, HTC is partnering with Intel to create one of these wireless solutions, built on Intel’s WiGig technology. Using a device that wirelessly communicates with your PC, WiGig promises to cut the cord that many VR users are growing frustrated with. We haven’t gone hands-on with the kit ourselves just yet, but HTC says there should be less than seven milliseconds of latency “in any environment”. The company said as much in a blog post that followed a demonstration of the solution at the Computex event in Taipei this week.

WiGig operates in the 60GHz band, which HTC promises will deliver “pristine video quality” along with minimal latency. But don’t just take the company’s word for it; a proof-of-concept version of the kit will be on display at E3 in Los Angeles in just two weeks’ time.

No word on when we’ll be able to actually take one of these devices home, nor how much it might cost, but we’ll be sure to keep you posted.

WiGig is not the only competitor in the wireless VR adapter scene. In fact, it’s not even the only one developed with HTC’s help. TPCast, a member of the company’s Vive X accelerator programme, is already shipping its own adapter for the Vive in some parts of the world, with plans to bring it elsewhere later this year. It’s a solution we’re pretty impressed with, unlike some others out there.

  • Nicholas

    TPCast has weird issues like a green line down the side of the image and no microphone or camera support, which makes it a no-sale for me (especially at $250…I’d be expecting seamless full support, not a half-baked effort).

    I’m hoping this won’t have those issues coming from HTC directly.

    • SandmaN

      Any articles you can share that show this? Could be a deal breaker for me if so.

      • Nicholas

        Some Reddit users with the initial batch of devices released in China earlier this year:

        www[dot]reddit[dot]com/r/Vive/comments/68l38y/any_chinese_tpcast_consumer_reviews_out_there_yet/
        www[dot]reddit[dot]com/r/Vive/comments/6dvrul/update_on_tpcast_and_audio_strap_performance/

        Maybe the EU/US release will fix these issues, but it doesn’t look promising from these early reports. I’m surprised none of the VR sites have picked up on this in their testing.

        • SandmaN

          Yeah, definitely hope the microphone issue is resolved for the US/EU release. Wireless headphones are one fix, but then that would negate the built-in headphones of the Deluxe Audio Strap.

          • Nicholas

            I currently use a wireless headset (which can have an optional mic attached) but I doubt it would fit over either strap with that TPCast receiver on the top. The Deluxe strap is pretty much going to be a prerequisite to use it.

  • Sean Lumly

    It seems that an update to the Vive could be substantial if the recently announced technologies find their way into a future headset.
    – Wireless
    – Foveated rendering/eye tracking (DoF simulation?)
    – High resolution panels
    – Lower latency (possibly with the help of some hardware implemented reprojection)

    These changes alone will make the experience VASTLY different from the first generation, and, together with a bump in graphics, make virtual worlds intensely believable.

    Let’s hope that some of these technologies make it to upcoming stand-alone headsets. I would love the option to use this with a PC, or in a mobile fashion!

    • I feel eye tracking and wireless have largely been integrated independently so far, and they have conflicting requirements. Eye tracking requires extremely low latency to follow eye saccades, but wireless adds some latency.

      Has anyone tested these two techs in a single prototype? It should be possible to test this soon by having a TPCast add-on on a Tobii dev kit.

      • daveinpublic

        They could do the head tracking in the HMD, put an extra chip in there, and save the camera feed from being sent wirelessly every second.

        • 1droidfan

          In order for the HMD to do re-projection it needs rendered depth at 24-bit precision (in addition to color), which will not take kindly to the compression that the wireless link will use.

          • Nicholas

            The HMD doesn’t do reprojection. It’s handled by the SteamVR compositor, i.e. before the frame is sent by the GPU over HDMI. Any wireless compression would be transparent to the GPU and HMD.

          • 1droidfan

            The comment above by Sean suggested it be done in the HMD to reduce latency; that’s who the comment was directed at, I just hit the wrong reply.

          • Nicholas

            Aaah. It would still be tricky to do at the HMD end without a GPU, and the frame rate is already known at the source, where it can (and should) be upsampled.

          • Sean Lumly

            “In order for the HMD to do re-projection it needs rendered depth at 24-bit precision (in addition to color), which will not take kindly to the compression that the wireless link will use”

            This would be short work for even a mid-spec mobile phone SoC costing a few bucks wholesale these days, even for high-resolution displays. I intuit that a mobile CPU/GPU can do re-projection and CA correction of a raster in a few ms before writing to the display, while consuming very little power in the bargain. And the memory requirements would be very low as well.

            For example, 2x 4K displays at 10-bit component colour (HDR 10-bit) is only some 62MB per frame pair, costing some 5.6GB/sec in bandwidth @ 90Hz…
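
            A quick back-of-envelope check in Python (assuming 2x 3840x2160 panels and uncompressed RGB at 10 bits per component) roughly matches those figures:

            ```python
            # Uncompressed bandwidth estimate for the figures quoted above.
            width, height, panels = 3840, 2160, 2
            bits_per_pixel = 3 * 10                                   # RGB, 10 bits per component
            frame_bytes = width * height * panels * bits_per_pixel / 8
            print(f"frame pair: {frame_bytes / 1e6:.1f} MB")          # ~62.2 MB
            print(f"@ 90 Hz:    {frame_bytes * 90 / 1e9:.1f} GB/s")   # ~5.6 GB/s
            ```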

          • 1droidfan

            Right, but the data being sent from the PC to the HMD would not only include color, which can be heavily compressed, but depth as well, which cannot, in order for the HMD to do the re-projection. Depth buffers are generally 24-bit. That’s my point: both of these technologies are pulling strings on each other, making it difficult to increase resolution and move to wireless at the same time. Trying to reduce latency by moving re-projection into the HMD, so it can have the very latest head orientation, relies on moving a great deal more data. Image compression can be 50:1 to 100:1, which helps make wireless transmission possible. You can’t compress a depth buffer like that.

          • Sean Lumly

            No, depth is not required for a simple re-projection, just a vector describing direction and the un-reprojected frame. A more complex reprojection that takes into account parallax may use such a buffer, but given a short frame time (11.1ms), I can’t imagine this would lead to a detectable advantage.

            And depth buffers can be adequately described using 16 bits per pixel (a half-precision float) or even fewer (e.g. 8 bits). And yes, you can absolutely compress a depth buffer, even though such a buffer would not be needed in this case.

          • 1droidfan

            Sorry Sean, but you are wrong: you absolutely need depth at each pixel to re-project a color buffer.

            The way it works is: per pixel, you calculate camera-space depth from the z-buffer (or store camera-space depth directly), move that pixel position into world space, then take the new camera position/orientation matrix from the HMD, calculate where that pixel would be in the new coordinate space of the HMD, project that into screen space, and sample the color at that position; that gives you the re-projected color.
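
            Roughly, in numpy pseudocode (my own illustration of the steps above, not any vendor’s actual code; K is a pinhole intrinsic matrix and the poses are 4x4 camera-to-world transforms):

            ```python
            import numpy as np

            def depth_reproject(color, depth, K, old_pose, new_pose):
                # color: HxWx3, depth: HxW camera-space depth for the frame rendered at old_pose.
                h, w = depth.shape
                xs, ys = np.meshgrid(np.arange(w), np.arange(h))
                pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
                # Unproject every pixel with its depth, then move it into world space.
                cam_pts = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)
                world = old_pose @ np.vstack([cam_pts, np.ones((1, cam_pts.shape[1]))])
                # Project the world-space points into the latest (new) camera pose.
                proj = K @ (np.linalg.inv(new_pose) @ world)[:3]
                u = (proj[0] / proj[2]).round().astype(int).clip(0, w - 1)
                v = (proj[1] / proj[2]).round().astype(int).clip(0, h - 1)
                out = np.zeros_like(color)
                out[v, u] = color[ys.ravel(), xs.ravel()]  # forward splat; disoccluded holes stay black
                return out
            ```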

          • Sean Lumly

            Or, you forgo correcting HMD displacement, and simply rotate and distort the flat, 2D current frame to match a slightly more recent HMD orientation. A depth buffer isn’t needed with this simple reprojection. Think of a 360-degree video mapped to the inside of a sphere (as a type of analogy). The current frame occupies a part of that sphere, and an intermediate head orientation can be accounted for by simply rendering a slightly different view of the sphere, corresponding to that newer orientation.
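
            As a minimal sketch of what I mean (purely illustrative, assuming a pinhole intrinsic K and R_delta, the small rotation between the rendered head orientation and the latest one; lens distortion ignored), the whole warp is just a rotation-only homography:

            ```python
            import numpy as np

            def orientation_only_warp(frame, K, R_delta):
                # frame: HxWx3 rendered image.
                # R_delta: 3x3 rotation taking direction vectors from the rendered camera's
                # frame into the latest head orientation's camera frame.
                h, w = frame.shape[:2]
                # Homography mapping latest-view pixels back into the rendered frame.
                H = K @ R_delta.T @ np.linalg.inv(K)
                xs, ys = np.meshgrid(np.arange(w), np.arange(h))
                pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1) @ H.T
                u = (pix[..., 0] / pix[..., 2]).round().astype(int).clip(0, w - 1)
                v = (pix[..., 1] / pix[..., 2]).round().astype(int).clip(0, h - 1)
                return frame[v, u]  # inverse warp (a gather): no holes, no depth buffer needed
            ```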

            If you are trying to account for a missed frame (as has generally been the case for something like Oculus’ ATW), then head translations should be taken into account (even 11ms is a lot of time to not register movement) and some form of motion should be preserved, which is a wonderful case for depth. However, if the frame is on time, and you are trying to reduce the perceived latency by a few ms, then orientation-only reprojection should be fine, as the displayed frame has been rendered very close to the last positional read, and any further reprojection will reduce latency in orientation.

            So I stand by my original point, but I also understand now why you propose depth: if you are dealing with a late frame, and want to update the next frame with a good approximation of what the scene would look like, per-pixel depth (or even a lower-resolution depth buffer) would be handy for calculating new displacement offsets for the pixels in addition to the orientation change. I, however, am assuming that the frame has been transmitted wirelessly and is on time, and that we are shaving a few ms from the displayed orientation while leaving position alone.

            As an interesting aside, for missed-frame “Positional Asynchronous Time Warp” (as Oculus refers to it), a per-pixel depth buffer would not be needed for a nice approximation! A very low-resolution depth map (e.g. 320x200px) could cheaply provide coordinates for a displaced triangle grid, to do a convincing reprojection.

            You can somewhat check this out! If you go to sketchfab[dot]com and search for 3D scans of rock faces, you can hit ‘5’ and see the geometry. Despite the geometry being very low resolution (in some cases), the 3D scan can be translated quite convincingly. Within 11ms (implying a TINY head translation, ~1cm for fast motion?) I doubt it would be humanly possible to notice the old frame mapped onto a low-poly mesh.

          • 1droidfan

            What you are describing is not re-projection though. You are re-aligning the current frame, not re-projecting it. Projection refers to the process of taking 3D points to a 2D screen. And I am not proposing using depth; that’s how Oculus does it in their runtimes.

          • Sean Lumly

            Yes, it is reprojection: a fairly loose umbrella term describing the mapping of pixels from one spatial locale to another.

            Reproject (verb): to change the projection (or coordinate system) of spatial data to another projection.

            In any case, this is simply semantics. My original proposition is sound, and I will trust that intuition until an invalidating counterexample is presented.

          • 1droidfan

            I’m not trying to win an argument; what I am describing is how the Oculus SDK re-projects pixels in its runtime. If you don’t believe me, try working with it and read the documentation. And like I said, it needs a depth buffer.

          • Sean Lumly

            Regardless of how Oculus does reprojection, it does not invalidate other forms (which certainly exist across myriad fields of study). Put another way, what I state is not invalidated by Oculus’s implemented methods.

            Ironically, Oculus has a form of reprojection (so-called orientation-only ATW) exactly as I describe, currently used for the Gear VR. Here is an official post (please translate the URL correctly after cutting and pasting) discussing the strengths and weaknesses of such an approach, as well as tangentially related concepts:

            developer3[dot]oculus[dot]com/blog/asynchronous-timewarp-examined

      • Sean Lumly

        Yeah..

        Eye tracking requires low latency indeed, but also (if I’m not mistaken) only a low resolution (e.g. 100×100), which would stay low latency even in a setup with constrained bandwidth. And it’s possible to do something like compression on the eye frames, or even extract the pupil positional data on device before sending it, reducing the transmission to a few bytes (a rough sketch of that idea follows below)!
        Hey, if Sony can include a SoC that does this kind of work in their $400 PSVR ( www[dot]marvell[dot]com/multimedia-solutions/armada-1500-pro-4k/ ; the URL must be modified for the link to work), I’m certain HTC (which manufactures mobile phones with mobile SoCs) can include something similar. And the SoC in the PSVR is pretty weak even by mobile phone standards.
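
        Something like this toy Python sketch (purely illustrative, nothing like a real eye tracker’s pipeline): threshold a small IR eye frame, take the centroid of the dark pupil blob, and ship only a few bytes per sample over the link:

        ```python
        import struct
        import numpy as np

        def pupil_packet(eye_frame, threshold=40):
            # eye_frame: 100x100 uint8 grayscale IR image; the pupil is the darkest blob.
            ys, xs = np.nonzero(eye_frame < threshold)
            if xs.size == 0:
                return struct.pack("<HHB", 0, 0, 0)        # nothing found: still only 5 bytes
            cx, cy = xs.mean(), ys.mean()                  # pupil centre, in pixels
            # Two 16-bit fixed-point coordinates plus a validity flag: 5 bytes per sample,
            # versus ~10 KB for the raw frame.
            return struct.pack("<HHB", int(cx * 256), int(cy * 256), 1)
        ```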

        While wireless will add some latency, I think that simulating DoF needn’t be nearly as responsive as positional updates, as our eyes focus somewhat slowly. I would bet that the 1 or 2ms of added latency occurring with the wireless transmission is unlikely to affect DoF simulation in a noticeable way.

        And wireless tracking latencies can be well hidden by hardware warping of the frame with a very late position query on device (with said SoC). This should gobble up most of the added transmission latency. And if you are doing this stuff on device, you also get the benefit of doing chromatic aberration correction in hardware, as well as barrel distortion, saving a few fractions of a ms on the driving computer.

        I think it’s technically possible to do both, and would love to see it next year! My fingers are crossed!

  • 1droidfan

    Is that 7ms total or 7ms in addition to what’s currently there?
    If it’s ‘in addition’, I will wait for one of the solutions with sub-1ms latency.

    • morfaine

      7ms in addition. Rift and Vive have about 20-25ms latency without wireless.

  • But how does this compare to the likes of Rift and Vive? It would be nice to have some frame of reference here.

    • ForNein

      If you’re asking how it compares latency-wise, it’s whatever the current latency is plus up to an additional 7ms. Image quality will likely remain the same.

      I’m not sure what other kind of comparison you’re wondering about since this is not a headset.

    • ChristopherRaff

      I think you are confused. This is an article about Intel’s wireless transmitter for the Vive, not the Daydream-based standalone Vive headset.

    • morfaine

      Motion-to-photon latency in the Rift is about 20-25ms. Some of that depends on the speed of your GPU rendering.

      Larger motion-to-photon latency reduces presence and increases the likelihood of motion sickness. Research indicates 20ms is a good target latency to achieve presence. The Rift and the Vive were both developed to get as close as possible to 20ms. As to whether the extra 7ms is substantial enough to be perceptible or cause significant issues, I don’t know.