NVIDIA Estimates VR Is 20 Years Away From Resolutions That Match The Human Eye

by Joe Durbin • May 15th, 2017

Virtual reality has a resolution problem, but according to one of the industry’s most active corporations, we may only be a couple decades away from 100% believable digital worlds.

Jason Paul is the general manager responsible for overall virtual reality strategy at NVIDIA — a leading manufacturer of graphics processors and other complementary technologies. NVIDIA’s GPUs power many PC-connected VR headsets like the Oculus Rift and HTC Vive. If VR is a gold rush, NVIDIA is one of the companies set to make a killing by selling shovels.

NVIDIA has a great deal to gain by making VR more enjoyable, usable, and popular for the mass market. One of the pain points the company could most readily address with its unique line of products is resolution. Right now, even with the most powerful GPU NVIDIA makes, VR still looks fuzzier and duller than anyone would prefer. A large part of that comes down to the displays themselves, which can only physically represent so many pixels. However, as the company building the engines that draw and render the images for those screens, NVIDIA has its fair share of work to do to help solve VR’s resolution problem.

UploadVR interviewed Paul at NVIDIA’s annual developer conference in San Jose last week. Among the many topics discussed, one of the most significant things Paul shared was his estimated roadmap for how VR headsets would improve their resolutions.

“I actually sat down and did the math one time,” Paul recalled. “Based off how many pixels we would need to be able to push and mapping that out alongside our upcoming new GPU releases, it would take us about 20 years to achieve resolutions that can match the human eye.”

Paul describes sight as a “highly differing sense,” capable of detecting even the smallest inconsistencies in its perceptions. For VR to reach a level of resolution capable of convincing our eyes that the simulated world is in fact real, we are going to need another 20 years of GPU development.
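Paul didn’t walk through his exact figures, but a back-of-envelope version of that kind of estimate might look like the sketch below. Every number in it — the pixels-per-degree acuity, the field of view, the current panel resolution, and especially the assumed doubling cadence — is an illustrative assumption, not a figure from NVIDIA; change any of them and the answer moves.

```python
import math

# Illustrative assumptions -- not NVIDIA's published figures.
EYE_PIXELS_PER_DEGREE = 60          # rough acuity of 20/20 foveal vision
FOV_H_DEG, FOV_V_DEG = 200, 130     # approximate binocular human field of view, in degrees
CURRENT_HMD_PIXELS = 2160 * 1200    # combined panel resolution of a 2016-era PC headset
YEARS_PER_DOUBLING = 4.0            # assumed cadence for doubling usable rendering throughput

# Pixels needed to match eye-limited acuity across the full field of view.
target_pixels = (FOV_H_DEG * EYE_PIXELS_PER_DEGREE) * (FOV_V_DEG * EYE_PIXELS_PER_DEGREE)

# Doublings separating today's headsets from that target, assuming required
# GPU work scales roughly with pixel count.
doublings = math.log2(target_pixels / CURRENT_HMD_PIXELS)

print(f"Target: {target_pixels / 1e6:.0f}M pixels vs. {CURRENT_HMD_PIXELS / 1e6:.1f}M today")
print(f"~{doublings:.1f} doublings -> roughly {doublings * YEARS_PER_DOUBLING:.0f} years")
```

Under these particular assumptions the target works out to roughly 90+ million pixels and a timeline in the neighborhood of two decades, which is consistent with the ballpark Paul describes.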

According to Paul, however, NVIDIA is also rapidly researching complementary technologies capable of cutting this timetable down significantly. Techniques like foveated rendering are actively being explored by the company and are seen as possible shortcuts to a more immersive future.
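As a rough illustration of why foveated rendering is seen as a shortcut, the sketch below estimates how much shading work might remain if only a small, eye-tracked foveal region were rendered at full density. The field-of-view, foveal radius, and peripheral density values are assumptions for illustration, not details of NVIDIA’s technique.

```python
import math

# Illustrative assumptions -- not details of NVIDIA's implementation.
FOV_H_DEG, FOV_V_DEG = 200, 130   # assumed field of view, in degrees
FOVEA_RADIUS_DEG = 15             # eye-tracked region rendered at full density
PERIPHERY_SCALE = 0.25            # assumed shading density outside the fovea

full_area = FOV_H_DEG * FOV_V_DEG             # total view area in square degrees
fovea_area = math.pi * FOVEA_RADIUS_DEG ** 2  # full-resolution region

# Fraction of brute-force shading work left after foveation.
effective = (fovea_area + (full_area - fovea_area) * PERIPHERY_SCALE) / full_area
print(f"Approximate shading work vs. rendering everything at full density: {effective:.0%}")
```

With these assumed numbers the shading cost drops to roughly a quarter of the brute-force figure, which is why eye tracking plus foveation is viewed as a way to shave years off the pure-GPU roadmap.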

  • jimrp

    A little upgrade would be good. I’m still happy with the 1st gen. Supersample helps.

  • Jules Urbach

    I was pleasantly surprised by how good the NVIDIA foveated rendering demo looked (I tried the classroom scene at GTC17 last week). I couldn’t tell the difference between the two modes most of the time.

    • Dean

      I was trying to find out information about Anjul Patney’s talk at GTC on foveated rendering but have had no success. Did you go to that? Was wondering if there was any update as to how much progress they’ve made since that classroom demo at SIGGRAPH2016 and what efficiencies they are managing to see now.

  • Graham J ⭐️

    I’m unsubscribing UVR from my RSS feed due to the whole serial harassment thing.
    Bye!

  • CURTROCK

    So… what’s all this “stuff” I’m reading about UploadVR?

  • “only be a couple decades away”… ONLY??

  • Tyler Soward

    Not sure I buy the 20-year estimate – advancements are cumulative and exponential – depending on what exactly we’re talking about, my guess is that it will take half that time. Just a year ago it was accepted that tethered headsets were going to be necessary for the next five to ten years before the tech was up to snuff to allow low enough data transmission to make wireless solutions feasible.

  • That time frame sounds about right to me.

  • Unknown

    hahaahah 20 years?
    People actually buy that BS? Are people that clueless about the innovation that has occurred in the last 20 years alone?

  • Zerofool

    Actually, I believe that by that time, we’ll already have “Matrix”-style brain-computer interfaces which will allow the transmission of “retina-resolution” visuals directly to the brain, along with all the other senses. Yes, this would sound like science fiction to most, but the rates of innovation are exponential, not linear. And we now have some of the most brilliant scientists working in this area.
    This, however, doesn’t downplay the vital need for GPUs with enough processing power to generate these visuals.
    And yet, I think that there’s room for improvement on the software side too. Rendering engines built for VR from their inception could potentially lower the required computational power quite noticeably, it’s just a matter of lean, optimized code, written for this special purpose.