Optimizing VR renderers with OVR_multiview

We’ve mentioned in a recent blog post how maintaining presence is key in virtual reality systems. Rendering applications at high frame rates (60, 90 or 120 Hz, depending on the Head-Mounted Display’s maximum refresh rate) with low motion-to-photon latency is an important part of achieving it.

In this article, I’ll explain how the OVR_multiview extension can be used to reduce the CPU and GPU overhead of rendering a VR application.

Rendering without OVR_multiview

[Diagram: a wide FBO rendered from two viewpoints, followed by barrel distortion]

In a standard, well-optimized VR application, the scene will be rendered to a Framebuffer Object (FBO) twice – once for the left eye, once for the right. To issue the renders, an application will do the following (a minimal sketch in GL calls follows the list):

  • Bind the FBO
  • Left eye
    • Set viewport to the left-half of the FBO
    • Draw all objects in the scene using the left eye camera projection matrix
  • Right eye
    • Set viewport to the right-half of the FBO
    • Draw all objects in the scene using the right eye camera projection matrix
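The flow above maps onto GL calls roughly as follows. This is a minimal sketch rather than code from a real engine: the FBO handle, per-eye dimensions, uniform location, matrices and the drawScene() helper are all hypothetical placeholders.

    /* Hypothetical per-frame stereo render without multiview.
     * fbo, eyeWidth/eyeHeight, u_viewProjLoc, the matrices and drawScene()
     * are illustrative placeholders. */
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    /* Left eye: left half of the wide FBO. */
    glViewport(0, 0, eyeWidth, eyeHeight);
    glUniformMatrix4fv(u_viewProjLoc, 1, GL_FALSE, leftEyeViewProj);
    drawScene();                     /* every draw call in the scene */

    /* Right eye: right half of the wide FBO, same draw calls repeated. */
    glViewport(eyeWidth, 0, eyeWidth, eyeHeight);
    glUniformMatrix4fv(u_viewProjLoc, 1, GL_FALSE, rightEyeViewProj);
    drawScene();                     /* the whole GL stream is submitted again */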

Once the scene is rendered for each eye, the FBO contents are barrel distorted to correct pincushion distortion introduced by the HMD lenses.

[Diagram: pincushion distortion introduced by the HMD lenses and the compensating barrel distortion]

In this solution the application has to submit two almost identical streams of GL calls, even though the only difference between the two renders is the matrix transformations applied to vertices. This wastes application time submitting calls per eye. It also wastes GPU driver time validating API calls and generating a GPU command buffer per eye when a single shared command buffer would do.

OVR_multiview

With the OVR_multiview extension (and the layered OVR_multiview2 and OVR_multiview_multisampled_render_to_texture extensions), an application can bind a texture array to an FBO and instance draws to each element. This enables graphics drivers to prepare a single GPU command buffer and reuse it for each instanced render. When the extension is active, the gl_ViewID_OVR built-in can be accessed in vertex shaders to identify the element the draw will be rendered to.
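Setting up the multiview FBO looks roughly like this. It is a sketch under assumptions: the GL_OVR_multiview extension must be present (on OpenGL ES the glFramebufferTextureMultiviewOVR entry point is typically fetched with eglGetProcAddress), and the texture, FBO and per-eye dimension names are illustrative.

    /* Hypothetical sketch: create 2-element texture arrays and attach both
     * elements to the FBO so draws are broadcast to the two views. */
    GLuint colourTex, depthTex, fbo;

    glGenTextures(1, &colourTex);
    glBindTexture(GL_TEXTURE_2D_ARRAY, colourTex);
    glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, eyeWidth, eyeHeight, 2);

    glGenTextures(1, &depthTex);
    glBindTexture(GL_TEXTURE_2D_ARRAY, depthTex);
    glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_DEPTH_COMPONENT24, eyeWidth, eyeHeight, 2);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
    glFramebufferTextureMultiviewOVR(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                     colourTex, 0, 0 /* baseViewIndex */, 2 /* numViews */);
    glFramebufferTextureMultiviewOVR(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                     depthTex, 0, 0, 2);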

On a tile-based GPU architecture such as PowerVR, tiling must be performed per instance (i.e. once per view). Once the tiling process completes, per-element pixel render tasks are kicked off.

OVR_multiview: Optimizing draw submission

[Diagram: the scene drawn once into a two-element texture array, then barrel distorted]

A simple use case for OVR_multiview is to create a texture array consisting of two elements that represent the left and right eye images. Each frame, an application can render the elements by performing the steps below (a sketch of the shader and per-frame draw loop follows the list):

  • Bind the FBO (texture array attached)
  • Pass an array of transformation matrices to shaders as a uniform
    • Array consists of two elements – one transformation for the left eye, one for the right
  • Draw all objects in the scene
  • During vertex shader execution, use gl_ViewID_OVR to determine which matrix should be used for transformations
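The shader side and the per-frame loop might look like this. Again a sketch under assumptions: the uniform, attribute and helper names (u_viewProj, a_position, drawScene) are hypothetical, and the shader requires GL_OVR_multiview2 so that gl_ViewID_OVR can be used to index the matrix array.

    /* Hypothetical multiview vertex shader: one matrix per view, selected
     * with the built-in gl_ViewID_OVR. */
    static const char* kMultiviewVS =
        "#version 300 es                                           \n"
        "#extension GL_OVR_multiview2 : require                    \n"
        "layout(num_views = 2) in;                                 \n"
        "uniform mat4 u_viewProj[2];  /* [0] = left, [1] = right */\n"
        "in vec4 a_position;                                       \n"
        "void main() {                                             \n"
        "    gl_Position = u_viewProj[gl_ViewID_OVR] * a_position; \n"
        "}                                                         \n";

    /* Per frame: one pass, one viewport, one stream of draw calls. */
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
    glViewport(0, 0, eyeWidth, eyeHeight);
    glUniformMatrix4fv(u_viewProjLoc, 2, GL_FALSE, &eyeViewProj[0][0]);
    drawScene();   /* the driver instances each draw once per view */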

With this simple change, an application can halve the number of OpenGL calls submitted to the driver!

OVR_multiview: Reducing fragment processing

Lenses that increase a user’s field of view are an essential part of an immersive VR system. To counter the pincushion distortion introduced by the lenses, barrel distortion must be applied before the image is displayed.

Unfortunately, modern GPUs are not designed to natively render barrel-distorted images. VR applications must render a non-distorted image in a first pass and then barrel distort it in a second pass. This wastes GPU cycles and bandwidth in the first pass colouring texels that make only a minimal contribution to the final image: in the outer regions of the barrel the texel-to-pixel density is high, so many rendered texels are compressed into few displayed pixels during the second pass.
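For reference, a typical second-pass distortion shader warps the sampling coordinate radially. The sketch below is illustrative only: real HMD SDKs supply their own distortion meshes or polynomial coefficients, and the k1/k2 values here are made up.

    /* Hypothetical barrel-distortion fragment shader for the second pass. */
    static const char* kBarrelFS =
        "#version 300 es                                             \n"
        "precision mediump float;                                    \n"
        "uniform sampler2D u_eyeImage;   /* first-pass eye render */ \n"
        "in vec2 v_uv;                   /* 0..1 across the eye */   \n"
        "out vec4 o_colour;                                          \n"
        "void main() {                                               \n"
        "    const float k1 = 0.22, k2 = 0.24;  /* illustrative */   \n"
        "    vec2  c  = v_uv * 2.0 - 1.0;       /* centre at 0,0 */  \n"
        "    float r2 = dot(c, c);                                   \n"
        "    vec2  warped = c * (1.0 + k1 * r2 + k2 * r2 * r2);      \n"
        "    o_colour = texture(u_eyeImage, warped * 0.5 + 0.5);     \n"
        "}                                                           \n";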

[Diagram: per-eye high-resolution centre and low-resolution periphery stored as texture array elements, combined during barrel distortion]

As shown in the diagram above, OVR_multiview can be used to sub-divide the render into regions that better represent the pixel density of the barrel area they occupy. A simple implementation of this method would (per-eye) render a high-resolution, narrow field-of-view image for the centre of the barrel and a lower-resolution, wide field-of-view image for the outer regions of the barrel. During the barrel distortion pass, a fragment shader can be used to mix the high-resolution and low-resolution images based on the pixel coordinate within the barrel.
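A sketch of that mixing shader follows. Everything here is an assumption for illustration: the two renders are assumed to live in a two-element texture array (element 0 is the high-resolution narrow field of view covering the central half of the view in each axis, element 1 the low-resolution wide field of view), the blend band is arbitrary, and a production implementation would also fold in the barrel warp shown earlier.

    /* Hypothetical fragment shader mixing the high- and low-resolution views. */
    static const char* kCompositeFS =
        "#version 300 es                                                       \n"
        "precision mediump float;                                              \n"
        "uniform mediump sampler2DArray u_views; /* 0: hi-res narrow, 1: lo-res wide */\n"
        "in vec2 v_uv;                                                         \n"
        "out vec4 o_colour;                                                    \n"
        "void main() {                                                         \n"
        "    /* Distance from the barrel centre: 0 at centre, ~1 at the edge. */\n"
        "    float r = length(v_uv * 2.0 - 1.0);                               \n"
        "    /* The narrow view covers the central half, so remap its UVs. */  \n"
        "    vec2 narrowUv = (v_uv - 0.25) * 2.0;                              \n"
        "    vec3 hi = texture(u_views, vec3(narrowUv, 0.0)).rgb;              \n"
        "    vec3 lo = texture(u_views, vec3(v_uv,     1.0)).rgb;              \n"
        "    /* Blend over a small band so the transition is not visible. */   \n"
        "    o_colour = vec4(mix(hi, lo, smoothstep(0.4, 0.5, r)), 1.0);       \n"
        "}                                                                     \n";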

In a render where the narrow field-of-view, full-resolution image covers 25% of the scene area and the wide field-of-view image is rendered at half resolution in each dimension (25% of the full pixel count), the GPU only needs to colour half as many pixels (25% + 25%) as a full-resolution render – a huge reduction in fragment shader calculations and associated bandwidth. Of course, the savings made will depend on how small you can make the narrow field-of-view region without introducing artefacts.

Conclusion

With the OVR_multiview extensions and a few simple application changes, VR applications can submit work to graphics drivers much more efficiently and reduce GPU overhead by rendering fewer pixels. If you want to know more about the work Imagination is doing to optimize VR rendering, I’d highly recommend reading Christian Pötzsch’s excellent blog post on reducing the latency of asynchronous time warping with strip rendering.
