Optimizing VR renderers with OVR_multiview

We’ve mentioned in a recent blog post how maintaining presence is key in virtual reality systems. Rendering at high frame rates (60, 90 or 120 Hz, depending on the Head-Mounted Display’s (HMD’s) maximum refresh rate) with low motion-to-photon latency is an important part of achieving it.

In this article, I’ll explain how the OVR_multiview extension can be used to reduce the CPU and GPU overhead of rendering a VR application.

Rendering without OVR_multiview

[Figure: a single wide FBO rendered from two viewpoints, then barrel distorted]

In a standard, well-optimized VR application, the scene will be rendered to a Framebuffer Object (FBO) twice – once for the left eye, once for the right. To issue the renders, an application will do the following (sketched in code after the list):

  • Bind the FBO
  • Left eye
    • Set viewport to the left-half of the FBO
    • Draw all objects in the scene using the left eye camera projection matrix
  • Right eye
    • Set viewport to the right-half of the FBO
    • Draw all objects in the scene using the right eye camera projection matrix
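
A minimal sketch of that loop is shown below. The drawScene() helper and the per-eye view-projection matrices are hypothetical application code, standing in for whatever scene traversal the engine performs:

```c
// Sketch of the standard two-pass stereo render. drawScene() and the
// per-eye matrices are hypothetical application helpers, not GL calls.
glBindFramebuffer(GL_FRAMEBUFFER, eyeFbo);

// Left eye: render the scene into the left half of the FBO.
glViewport(0, 0, fboWidth / 2, fboHeight);
drawScene(leftViewProjectionMatrix);

// Right eye: repeat an almost identical call stream, changing only
// the viewport and the transformation matrix.
glViewport(fboWidth / 2, 0, fboWidth / 2, fboHeight);
drawScene(rightViewProjectionMatrix);
```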

Once the scene is rendered for each eye, the FBO contents are barrel distorted to correct the pincushion distortion introduced by the HMD lenses.

[Figure: lens distortion in mobile VR]

With this approach, the application has to submit two almost identical streams of GL calls, even though the only difference between the renders is the transformation matrices applied to vertices. This wastes application time submitting calls per eye. It also wastes GPU driver time validating API calls and generating a GPU command buffer per eye, when a single shared command buffer would do.

OVR_multiview

With the OVR_multiview extension (and the OVR_multiview2 and OVR_multiview_multisampled_render_to_texture extensions that build on it), an application can bind a texture array to an FBO and have each draw instanced across the array’s elements. This enables graphics drivers to prepare a single GPU command buffer and reuse it for each instanced render. When the extension is active, the gl_ViewID_OVR built-in can be accessed in vertex shaders to identify the element the draw is being rendered to.
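
As a rough illustration of the setup, the snippet below attaches a two-layer texture array to an FBO using glFramebufferTextureMultiviewOVR from the extension. The texture dimensions are placeholders, and a real application would also attach a multiview depth buffer and check for the extension at runtime:

```c
// Create a two-layer colour texture array: layer 0 = left eye,
// layer 1 = right eye. eyeWidth/eyeHeight are placeholder dimensions.
GLuint colourTex;
glGenTextures(1, &colourTex);
glBindTexture(GL_TEXTURE_2D_ARRAY, colourTex);
glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, eyeWidth, eyeHeight, 2);

// Attach layers [0, 2) of the array to the FBO as views. Draws issued
// while this FBO is bound are instanced across both views by the driver.
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
glFramebufferTextureMultiviewOVR(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                 colourTex, 0 /*level*/,
                                 0 /*baseViewIndex*/, 2 /*numViews*/);
```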

On a tile-based GPU architecture, tiling must be performed per view – once for each texture array element. Once the tiling process completes, the per-element pixel rendering tasks are kicked off.

OVR_multiview: Optimizing draw submission

[Figure: a single draw stream rendered to a two-element texture array, then barrel distorted]

A simple use case for OVR_multiview is to create a texture array consisting of two elements that represent the left and right eye images. Each frame, an application can render the elements by performing the steps below (a shader sketch follows the list):

  • Bind the FBO (texture array attached)
  • Pass an array of transformation matrices to shaders as a uniform
    • Array consists of two elements – one transformation for the left eye, one for the right
  • Draw all objects in the scene
  • During vertex shader execution, use gl_ViewID_OVR to determine which matrix should be used for transformations
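
A minimal vertex shader along those lines is sketched below. The uViewProjection and aPosition names are hypothetical, but indexing a uniform matrix array with gl_ViewID_OVR to compute gl_Position is exactly the usage the base extension permits:

```glsl
#version 300 es
#extension GL_OVR_multiview : require

// Instance this shader across two views: 0 = left eye, 1 = right eye.
layout(num_views = 2) in;

// One view-projection matrix per eye, uploaded as a uniform array.
uniform mat4 uViewProjection[2];

in vec4 aPosition;

void main()
{
    // gl_ViewID_OVR identifies the texture array element this
    // instance of the draw is rendering to, so use it to pick
    // the matching eye matrix.
    gl_Position = uViewProjection[gl_ViewID_OVR] * aPosition;
}
```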

With this simple change, an application can halve the number of OpenGL calls submitted to the driver!

OVR_multiview: Reducing fragment processing

Lenses that increase a user’s field of view are an essential part of an immersive VR system. To counter the pincushion distortion introduced by the lenses, barrel distortion must be applied before the image is displayed.

Unfortunately, modern GPUs are not designed to natively render barrel-distorted images, so VR applications must render an undistorted image in a first pass and then barrel distort it in a second pass. This wastes GPU cycles and bandwidth: in the first pass, the GPU colours texels towards the edges of the image that make a minimal contribution to the output, because in the outer regions of the barrel many rendered texels map to each displayed pixel.

[Figure: texture array sub-divided into regions matching barrel pixel density, then barrel distorted]

As shown in the diagram above, OVR_multiview can be used to subdivide the render into regions that better match the pixel density of the barrel area they occupy. A simple implementation of this method would render, per eye, a high-resolution, narrow field-of-view image for the centre of the barrel and a lower-resolution, wide field-of-view image for the outer regions. During the barrel distortion pass, a fragment shader can then mix the high-resolution and low-resolution images based on the pixel coordinate within the barrel.
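
One possible shape for that compositing shader is sketched below. Everything here is an illustrative assumption rather than a fixed recipe: the layer assignment (0 = high-resolution centre, 1 = low-resolution periphery), the varyings carrying the per-layer UVs and the lens-centre offset, and the blend radii would all be dictated by the application’s barrel distortion maths:

```glsl
#version 300 es
precision mediump float;

// Illustrative layout for one eye: layer 0 = high-res narrow FOV,
// layer 1 = low-res wide FOV.
uniform mediump sampler2DArray uEyeLayers;

in vec2 vHighResUV;   // UV into the narrow field-of-view image
in vec2 vLowResUV;    // UV into the wide field-of-view image
in vec2 vFromCentre;  // offset from the lens centre, normalised

out vec4 fragColour;

// Radii (in normalised lens coordinates) over which the high-res
// centre blends out to the low-res periphery -- placeholder values.
const float kBlendStart = 0.4;
const float kBlendEnd   = 0.5;

void main()
{
    vec4 hi = texture(uEyeLayers, vec3(vHighResUV, 0.0));
    vec4 lo = texture(uEyeLayers, vec3(vLowResUV, 1.0));

    // Use the full-resolution image near the centre of the barrel
    // and fade to the low-resolution image towards the edges.
    float r = length(vFromCentre);
    fragColour = mix(hi, lo, smoothstep(kBlendStart, kBlendEnd, r));
}
```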

Consider a render where the narrow field-of-view, full-resolution image covers 25% of the scene, and the wide field-of-view image is rendered at half resolution in each dimension (25% of the full pixel count). The GPU then only needs to colour half as many pixels as a full-resolution render: 25% for the full-resolution centre plus 25% for the low-resolution image of the whole scene – a huge reduction in fragment shader calculations and associated bandwidth. Of course, the savings made will depend on how small you can make the narrow field of view without introducing artefacts.

Conclusion

With the OVR_multiview extensions and a few simple application changes, VR applications can submit work to graphics drivers much more efficiently and reduce GPU overhead by rendering fewer pixels. If you want to know more about the work Imagination is doing to optimize VR rendering, I’d highly recommend reading Christian Pötzsch’s excellent blog post on reducing the latency of asynchronous time warping with strip rendering.
