
Inside Reality Labs: Meta Ventures into Next Era of Human-Computer Interaction 

According to Zuckerberg, the core technology to pass the Visual Turing Test doesn’t exist yet in any consumer headset or product


Mark Zuckerberg Holocake Research

This VR display research, aimed at passing the Visual Turing Test, is driven by Mark Zuckerberg’s vision for the future of VR. Fulfilling the promise of the metaverse that he shared last fall requires delivering visual experiences that are almost indistinguishable from reality.

Today’s VR headsets deliver incredible 3D visual experiences, but the experience still differs in many ways from what we see in the real world. 

Their resolution is lower than that of laptops, TVs, and phones; the lenses distort the wearer’s view; and they cannot comfortably be worn for extended periods of time.

To get there, we need to build an unprecedented type of VR display system — a lightweight display that is so advanced it can deliver what our eyes need to function naturally so they perceive we are looking at the real world in VR. This is known as the “Visual Turing Test” and passing it is considered the holy grail of display research.

To pass the Visual Turing Test, the Display Systems Research (DSR) team at Reality Labs Research (RL Research) is building a new tech stack that includes:

  • “Varifocal” technology that ensures the focus is correct and enables clear and comfortable vision within arm’s length for extended periods of time
  • Distortion correction to help address optical aberrations, like warping and color fringes, introduced by viewing optics
  • Resolution that approaches or exceeds 20/20 or 6/6 human vision
  • High dynamic range (HDR) technology that expands the range of color, brightness, and contrast in VR

DSR is also developing prototypes that explore the headset form factor and how to reduce the size, weight, and power of existing VR headsets as much as possible to make room for these visual elements.

Mark Zuckerberg Starburst Research Prototype

In a fireside chat introducing and thoroughly explaining the Visual Turing Test, Mark Zuckerberg, Founder and CEO of Meta, and Michael Abrash, Chief Scientist at Reality Labs, dissected the subject in detail.

Breaking the topic down into what it will take to build next-generation displays for virtual and augmented reality, Zuckerberg explained that the work is aimed at solving some fundamental challenges of human perception.

“Now making 3D displays that are as vivid and realistic as the physical world, it’s going to take solving some fundamental challenges that we’re going to get into today. Now these are some pretty interesting problems because they’re all about how we physically perceive things and how our eyes process visual signals and how our brains interpret them to construct a model of the world.”

Speaking further on why this matters, he explained that displays that match the full capacity of human vision are going to unlock some really important things. The first is a realistic sense of presence, and that’s the feeling of being with someone or in some place as if you’re physically there. “And given our focus on helping people connect, you can see why this is such a big deal.”

The second reason why these kinds of realistic displays are so important is that they’re going to unlock a whole new generation of visual experiences.

Reality Labs Eye Chart Comparison

Shedding light on the Visual Turing Test, Michael Abrash said: “So Alan Turing conceived the Turing Test in 1950 to evaluate whether a computer could pass as a human. The visual Turing test, which is a phrase that we’ve adopted, along with a number of other academic researchers, is a way to evaluate whether what’s displayed in VR can be distinguished from the real world. And it’s a completely subjective test, because what’s important here is the human perception of what they’re seeing, the human experience, rather than technical measurements. And it’s a test that no VR technology can pass today.”

While VR already creates a strong sense of presence — of being in virtual places in a genuinely convincing way — it’s not yet at the level where anyone would wonder whether what they’re looking at is real or virtual.


According to Zuckerberg, the core technology to pass the Visual Turing Test doesn’t exist yet in any consumer headset or product.


The discussion pointed out some challenges brought about by VR, including new issues that do not exist with 2D displays, with names like vergence-accommodation conflict, chromatic aberration, ocular parallax, and pupil swim.

The lenses used in current VR displays often distort the virtual image, which reduces realism unless the distortion is fully corrected in software. Doing that is very complex, because the distortion varies as the eye moves around to look in different directions, Abrash explained.
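To give a sense of what “correcting distortion in software” involves, here is a minimal sketch of inverting a simple radial (Brown-Conrady-style) lens distortion model by fixed-point iteration. The model and coefficient values are illustrative assumptions, not Reality Labs’ actual correction pipeline:

```python
import numpy as np

def undistort(points, k1, k2):
    """Invert a simple radial lens distortion model by fixed-point iteration.

    points: (N, 2) array of distorted, normalized image coordinates
            (origin at the optical center).
    k1, k2: radial distortion coefficients (illustrative, not real lens data).
    """
    estimate = points.astype(float).copy()
    for _ in range(10):  # converges quickly for mild distortion
        r2 = np.sum(estimate**2, axis=1, keepdims=True)
        factor = 1.0 + k1 * r2 + k2 * r2**2
        estimate = points / factor
    return estimate
```

In a headset, a correction like this would be baked into the rendering pass. The article’s point is that the true distortion also shifts as the eye rotates (“pupil swim”), so a static correction of this kind is not enough on its own.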

While not strictly a matter of realism, headsets can be hard to use for extended periods of time because of that distortion and because of the headset’s weight, both of which can cause temporary discomfort and fatigue. Another key challenge is the ability to focus at any distance.

Reality Labs Lens Comparison

“So the basic thing here is your eyes try to focus and you can’t because the display is projecting as if it’s at a fixed distance. As we’ve been working through this, we’ve focused our research on a couple of key areas that we think have the best possibility for making a jump forward here,” Zuckerberg said.

In line with achieving the ultimate goal, the team is focused on finding out what it would take to get to a retinal resolution headset. “That means getting up towards about 60 pixels per degree in the display, which is about a few times more than where we are today.”
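As a back-of-envelope illustration of that gap, angular resolution can be estimated by dividing a panel’s horizontal pixel count by the lens’s horizontal field of view. The pixel count and field of view below are illustrative assumptions, not specs quoted in the article:

```python
def pixels_per_degree(horizontal_pixels: int, horizontal_fov_deg: float) -> float:
    """Average angular resolution of a headset display, in pixels per degree.

    This is a simple average; real optics spread pixels unevenly across
    the field of view, so the center of the image is usually sharper.
    """
    return horizontal_pixels / horizontal_fov_deg

# Illustrative (assumed) numbers: ~1800 horizontal pixels per eye over a
# ~90-degree horizontal field of view gives roughly 20 pixels per degree.
today = pixels_per_degree(1800, 90)   # 20.0 ppd

# The ~60 ppd retinal-resolution target would need about three times as
# many pixels across the same field of view: 60 * 90 = 5400 per eye.
target_pixels = 60 * 90               # 5400
```

Under these assumed numbers, the jump from ~20 to ~60 pixels per degree matches Zuckerberg’s “a few times more than where we are today.”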

“Our Display Systems Research team had to get pretty creative to get there. So here is a look at a prototype called ‘Butterscotch,’ which has enough resolution that you can read the 20/20 vision line on an eye chart in VR — the kind that you’d use if you were going to get an eye test to see if you needed glasses. These prototypes, they’re custom and bespoke models that we built in our lab, so they’re not products that are ready to ship. But when I go and try this out, it’s a pretty amazing experience, and you can really see the image incredibly sharp,” Mark Zuckerberg said.


The objective is to bring technologies such as Half Dome Series, Distortion Simulator, Butterscotch, Starburst, Holocake 2, and others, integrating the visual elements needed to pass the Visual Turing Test into a lightweight, compact, power-efficient form factor – and Mirror Lake is one of several potential pathways to that goal.

​Joan Aimuengheuwa is a content writer who takes keen interest in the scopes of innovation among African startups. She thrives at meeting targets and expectations. Contact: [email protected]
