Optimizing VR renderers with OVR_multiview


We’ve mentioned in a recent blog post how maintaining presence is key in virtual reality systems. Rendering applications at high frame rates (60, 90 or 120 Hz, depending on the head-mounted display’s maximum refresh rate) with low motion-to-photon latency is an important part of achieving it.

In this article, I’ll explain how the OVR_multiview extension can be used to reduce the CPU and GPU overhead of rendering a VR application.

Rendering without OVR_multiview

(Diagram: a single wide FBO rendered from two viewpoints, then barrel distorted)

In a standard, well-optimized VR application, the scene will be rendered to a Framebuffer Object (FBO) twice – once for the left eye, once for the right. To issue the renders, an application will do the following:

  • Bind the FBO
  • Left eye
    • Set viewport to the left-half of the FBO
    • Draw all objects in the scene using the left eye camera projection matrix
  • Right eye
    • Set viewport to the right-half of the FBO
    • Draw all objects in the scene using the right eye camera projection matrix
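The per-eye loop above can be sketched in OpenGL ES as follows. Note that `drawScene()`, `fbo` and the matrix variables are hypothetical application helpers, not part of any API:

```c
/* Standard stereo rendering: the whole scene is submitted twice. */
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);

/* Left eye: left half of the FBO */
glViewport(0, 0, fboWidth / 2, fboHeight);
drawScene(leftEyeViewProjMatrix);   /* every draw call submitted once... */

/* Right eye: right half of the FBO */
glViewport(fboWidth / 2, 0, fboWidth / 2, fboHeight);
drawScene(rightEyeViewProjMatrix);  /* ...and then submitted all over again */
```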

Once the scene is rendered for each eye, the FBO contents are barrel distorted to correct pincushion distortion introduced by the HMD lenses.

(Diagram: barrel distortion countering the pincushion distortion of the HMD lenses)

In this solution the application has to submit two almost identical streams of GL calls, even though the only difference between the renders is the matrix transformation applied to vertices. This wastes application CPU time submitting calls per eye. It also wastes GPU driver time validating API calls and generating a GPU command buffer per eye when a single shared command buffer would do.


With the OVR_multiview extension (and the companion OVR_multiview2 and OVR_multiview_multisampled_render_to_texture extensions), an application can bind a 2D texture array to an FBO and have each draw instanced once per array element. This enables graphics drivers to prepare a single GPU command buffer and reuse it for each instanced render. When the extension is active, the gl_ViewID_OVR built-in can be read in vertex shaders to identify the element the current instance will be rendered to.
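Assuming an OpenGL ES 3.0 context that exposes the extension, the FBO setup might look like this sketch (`eyeWidth` and `eyeHeight` are the per-eye render dimensions, assumed to be defined by the application):

```c
/* Create a two-element texture array: element 0 = left eye, 1 = right eye */
GLuint tex, fbo;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, eyeWidth, eyeHeight, 2);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
/* Attach both elements at once: baseViewIndex 0, numViews 2.
 * Subsequent draws are instanced across both views. */
glFramebufferTextureMultiviewOVR(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                 tex, 0, 0, 2);
```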

On a tile-based GPU architecture such as PowerVR, tiling must be performed once per view. Once the tiling process completes, per-element pixel rendering tasks are kicked off.

OVR_multiview: Optimizing draw submission

(Diagram: a single stream of draws instanced across a two-element texture array, then barrel distorted)

A simple use case for OVR_multiview is to create a texture array consisting of two elements that represent the left and right eye images. Each frame, an application can render the elements by performing the steps below:

  • Bind the FBO (texture array attached)
  • Pass an array of transformation matrices to shaders as a uniform
    • Array consists of two elements – one transformation for the left eye, one for the right
  • Draw all objects in the scene
  • During vertex shader execution, use gl_ViewID_OVR to determine which matrix should be used for transformations
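A minimal vertex shader following these steps might look like the sketch below (`uViewProj` and `aPosition` are hypothetical names). The base OVR_multiview extension only permits gl_ViewID_OVR to be used in expressions that contribute to gl_Position, which is the case here; OVR_multiview2 lifts that restriction:

```glsl
#version 300 es
#extension GL_OVR_multiview : require
/* Two views: element 0 = left eye, element 1 = right eye */
layout(num_views = 2) in;

in vec3 aPosition;
/* One view-projection matrix per eye, uploaded once per frame */
uniform mat4 uViewProj[2];

void main()
{
    /* gl_ViewID_OVR selects the per-eye transform for this instance */
    gl_Position = uViewProj[gl_ViewID_OVR] * vec4(aPosition, 1.0);
}
```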

With this simple change, an application can halve the number of OpenGL calls submitted to the driver!

OVR_multiview: Reducing fragment processing

Lenses that increase a user’s field of view are an essential part of an immersive VR system. To counter the pincushion distortion introduced by the lenses, barrel distortion must be applied before the image is displayed.

Unfortunately, modern GPUs are not designed to natively render barrel-distorted images. VR applications must render a non-distorted image in a first pass and then barrel distort it in a second pass. This wastes GPU cycles and bandwidth colouring texels in the first pass that make only a minimal contribution to the outer regions of the barrel, where the texel-to-pixel density is high in the second pass.

(Diagram: per-eye high-resolution and low-resolution texture array elements combined during the barrel distortion pass)

As shown in the diagram above, OVR_multiview can be used to sub-divide the render into regions that better represent the pixel density of the barrel area they occupy. A simple implementation of this method would (per-eye) render a high-resolution, narrow field-of-view image for the centre of the barrel and a lower-resolution, wide field-of-view image for the outer regions of the barrel. During the barrel distortion pass, a fragment shader can be used to mix the high-resolution and low-resolution images based on the pixel coordinate within the barrel.
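A sketch of such a distortion-pass fragment shader is below, assuming a hypothetical layout where array elements 0/1 hold the high-resolution centre images (left/right eye) and elements 2/3 hold the low-resolution wide images. The exact coordinate remapping between the two images is application specific; the remap here assumes the narrow image is centred on the lens axis:

```glsl
#version 300 es
precision mediump float;

uniform mediump sampler2DArray uEyeTextures;
uniform int   uEye;          /* 0 = left eye, 1 = right eye */
uniform float uCentreRadius; /* extent of the high-res region, in [0..0.5] */

in vec2 vBarrelCoord;        /* distorted coordinate, (0.5, 0.5) = lens centre */
out vec4 oColour;

void main()
{
    vec2  offset = vBarrelCoord - vec2(0.5);
    float r      = length(offset);

    /* Remap the shared coordinate into the narrow high-res image's own space */
    vec2 hiCoord = offset / (2.0 * uCentreRadius) + vec2(0.5);

    vec4 hi = texture(uEyeTextures, vec3(hiCoord, float(uEye)));
    vec4 lo = texture(uEyeTextures, vec3(vBarrelCoord, float(uEye + 2)));

    /* Blend from high-res to low-res across a narrow band at the seam */
    float w = smoothstep(uCentreRadius - 0.05, uCentreRadius, r);
    oColour = mix(hi, lo, w);
}
```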

In a render where the narrow field-of-view, full-resolution image accounts for 25% of the scene and the wide field-of-view render is half-resolution (so a quarter of the pixels of a full-resolution image), the GPU only needs to colour half as many pixels as a full-resolution render: 25% for the high-resolution centre plus 25% for the low-resolution surround – a huge reduction in fragment shader calculations and associated bandwidth. Of course, the savings made will depend on how small you can make the narrow field of view without introducing artefacts.


With the OVR_multiview extensions and a few simple application changes, VR applications can submit work to graphics drivers much more efficiently and reduce GPU overhead by rendering fewer pixels. If you want to know more about the work Imagination is doing to optimize VR rendering, I’d highly recommend reading Christian Pötzsch’s excellent blog post on reducing the latency of asynchronous time warping with strip rendering.

Joe Davis

Joe Davis leads the PowerVR Graphics developer support team. He and his team support a wide variety of graphics developers including those writing games, middleware, UIs, navigation systems, operating systems and web browsers. Joe regularly attends and presents at developer conferences to help graphics developers get the most out of PowerVR GPUs. You can follow him on Twitter @joedavisdev.
