In my previous article about heterogeneous architectures, I identified memory bandwidth as the main bottleneck for implementing power-efficient algorithms for computer vision.
Luckily, Imagination has created an innovative solution designed to address this common issue in mobile and embedded devices – and it comes in the form of the PowerVR Imaging Framework.
Introducing the PowerVR Imaging Framework
The PowerVR Imaging Framework for Android comprises a set of extensions to the OpenCL and EGL Application Programming Interfaces (APIs) that enable software running on PowerVR GPUs to interoperate efficiently with other components such as the CPU, the image signal processor (ISP) and the video decoder (VDE). These extensions make it possible to construct shared memory allocations and software pipelines spanning multiple hardware components with no redundant memory copies (termed zero-copy).
In addition, the extensions enable direct manipulation of the YUV images required by many computer vision algorithms. YUV images can also be read natively by the GPU and converted to RGB on the fly, whenever data is read from memory into hardware registers. This avoids the bandwidth cost of first having to create an RGB copy of the image in memory.
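To make the on-the-fly conversion concrete, here is a minimal scalar C sketch of the per-sample arithmetic involved, using the common limited-range BT.601 coefficients for illustration; the actual colour-space constants applied by the hardware depend on how the image and sampler are configured.

```c
/* Clamp a float to the 8-bit range and round to nearest integer. */
static unsigned char clamp8(float v)
{
    if (v < 0.0f)   return 0;
    if (v > 255.0f) return 255;
    return (unsigned char)(v + 0.5f);
}

/* Convert one limited-range BT.601 YUV sample to 8-bit RGB.
 * This is the kind of conversion the GPU performs per read,
 * without ever materialising an RGB copy of the frame. */
void yuv_to_rgb(unsigned char y, unsigned char u, unsigned char v,
                unsigned char *r, unsigned char *g, unsigned char *b)
{
    float c = (float)y - 16.0f;   /* luma, offset by black level   */
    float d = (float)u - 128.0f;  /* blue-difference chroma        */
    float e = (float)v - 128.0f;  /* red-difference chroma         */
    *r = clamp8(1.164f * c + 1.596f * e);
    *g = clamp8(1.164f * c - 0.391f * d - 0.813f * e);
    *b = clamp8(1.164f * c + 2.018f * d);
}
```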
The figure below shows the front end of an image processing software pipeline in Android, implementing a zero-copy flow between the ISP and the GPU. The ISP acquires image sensor data, pre-processes it and writes it to an Android Gralloc (graphics allocator) buffer in system memory. In this example the ISP generates the image data in YUV NV12 format, in which luminance and chrominance are stored in two separate planes. The GPU then reads this image data, operating on the Y and UV planes separately.
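The two-plane NV12 layout described above can be sketched in a few lines of C: the full-resolution Y plane comes first, followed by an interleaved UV plane that is subsampled 2×2 and therefore half the size. This sketch assumes no row padding; real Gralloc buffers may use a larger stride than the image width.

```c
#include <stddef.h>

/* Byte offsets and sizes of the two NV12 planes within one buffer. */
typedef struct {
    size_t y_offset, y_size;
    size_t uv_offset, uv_size;
} nv12_layout;

nv12_layout nv12_plane_layout(size_t width, size_t height)
{
    nv12_layout l;
    l.y_offset  = 0;
    l.y_size    = width * height;                  /* 8-bit luma per pixel  */
    l.uv_offset = l.y_size;                        /* UV plane follows Y    */
    l.uv_size   = (width / 2) * (height / 2) * 2;  /* interleaved U,V pairs */
    return l;
}
```

For a 1920×1080 frame this gives a 2,073,600-byte Y plane followed by a 1,036,800-byte UV plane, i.e. 1.5 bytes per pixel overall.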
The PowerVR Imaging Framework is used to configure the system in this way: first, two EGLImage handles (of type EGLImageKHR) are instantiated and mapped onto the Y and UV planes; then, to enable OpenCL processing on the GPU, two OpenCL image objects (of type image2d_t) are created from the EGLImages.
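A sketch of that call sequence might look as follows, assuming the EGL_IMG_image_plane_attribs and cl_khr_egl_image extensions are available; variable names are illustrative and error handling is omitted.

```c
/* Wrap each plane of the Gralloc buffer in an EGLImage.
   EGL_NATIVE_BUFFER_PLANE_OFFSET_IMG selects the plane (0 = Y, 1 = UV). */
EGLint y_attribs[]  = { EGL_NATIVE_BUFFER_PLANE_OFFSET_IMG, 0, EGL_NONE };
EGLint uv_attribs[] = { EGL_NATIVE_BUFFER_PLANE_OFFSET_IMG, 1, EGL_NONE };

EGLImageKHR y_img  = eglCreateImageKHR(display, EGL_NO_CONTEXT,
                                       EGL_NATIVE_BUFFER_ANDROID,
                                       (EGLClientBuffer)gralloc_buffer,
                                       y_attribs);
EGLImageKHR uv_img = eglCreateImageKHR(display, EGL_NO_CONTEXT,
                                       EGL_NATIVE_BUFFER_ANDROID,
                                       (EGLClientBuffer)gralloc_buffer,
                                       uv_attribs);

/* Import the EGLImages into OpenCL as image objects - no copy is made. */
cl_int err;
cl_mem y_plane  = clCreateFromEGLImageKHR(context, display, y_img,
                                          CL_MEM_READ_ONLY, NULL, &err);
cl_mem uv_plane = clCreateFromEGLImageKHR(context, display, uv_img,
                                          CL_MEM_READ_ONLY, NULL, &err);
```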
To benefit from the framework’s zero-copy support, the OpenCL kernel is written so that it takes two image parameters and a sampler. The PowerVR GPU performs read (or sampling) operations on variables of these types using a dedicated hardware block known as a Texture Processing Unit (TPU). Sampling the first image returns luminance (y) values, and sampling the second image returns vectors containing chrominance (u, v) pairs.
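A kernel of this shape might look like the following hypothetical sketch in OpenCL C; the kernel and parameter names are assumptions, not part of the framework itself.

```c
/* Sketch: process one NV12 frame via two image parameters and a sampler.
   The TPU performs the sampling; luma is full resolution, chroma is
   subsampled 2x2, so the UV plane is indexed at half the coordinates. */
__kernel void process_nv12(__read_only  image2d_t y_plane,
                           __read_only  image2d_t uv_plane,
                           sampler_t    smp,
                           __write_only image2d_t dst)
{
    int2 pos = (int2)(get_global_id(0), get_global_id(1));

    float  y  = read_imagef(y_plane,  smp, pos).x;      /* luminance     */
    float2 uv = read_imagef(uv_plane, smp, pos / 2).xy; /* (u, v) pair   */

    /* ... operate on y and uv here, then write a result ... */
    write_imagef(dst, pos, (float4)(y, uv.x, uv.y, 1.0f));
}
```

Because PowerVR Series6 GPUs are scalar, reading and processing the y, u and v components individually like this carries no vectorisation penalty.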
When sampling the image, the TPU can also be configured to implement features such as image interpolation and border pixel handling. PowerVR Series6 GPUs are based on a scalar architecture, which means that there is no loss of efficiency when operating on individual components of a vector.
The image below shows how the PowerVR Imaging Framework can be integrated within Android, complete with an illustration of an example zero-copy flow. The framework is integrated at the library layer of the Android software stack, enabling efficient interoperability between APIs such as OpenCL, OpenGL ES and emerging APIs such as OpenVX. Code written in these APIs can communicate and share data efficiently on the underlying hardware such as the ISP, GPU, CPU and VDE. In this example, frames of data from the ISP are placed in memory and then streamed directly into the GPU for processing, for example using the zero-copy implementation explained above. For each input frame, the GPU produces an output frame, which is mapped to an EGL_GL_TEXTURE_2D object for rendering to screen.
In Android, access to ISP hardware is provided by a Camera Hardware Abstraction Layer (HAL) and access to the VDE hardware is provided by a Video HAL. Because the framework is integrated at the library layer, designers can extend or replace the existing camera and media player applications with more customised, differentiated software solutions.
A number of the extensions from the PowerVR Imaging Framework are already integrated in mobile devices available today, including the Asus ZenFone 2 ZE551ML smartphone (Intel Atom Z3580 processor, PowerVR G6430 GPU).
Read more about the PowerVR Imaging Framework and how it is used by OEMs in the official press release.
Stay tuned to our blog: in our next post, we'll take you through a heterogeneous compute case study built around image processing.
Here is a menu to help you navigate through every article published in this heterogeneous compute series:
- A primer on mobile systems used for heterogeneous computing
- A quick guide to writing OpenCL kernels for PowerVR Rogue GPUs
- Increasing performance and power efficiency in heterogeneous software
- The PowerVR Imaging Framework for Android
- Heterogeneous compute case study: image convolution filtering
- Deep dive: Implementing computer vision with PowerVR
- The PowerVR Imaging Framework camera demo
- Supported zero-copy flows inside the PowerVR Imaging Framework
- Measuring GPU compute performance
- Imagination’s smart, efficient approach to mobile compute
- The complete glossary to heterogeneous compute on PowerVR
Please let us know if you have any feedback on the materials published on the blog and leave a comment on what you’d like to see next. Make sure you also follow us on Twitter (@ImaginationPR, @GPUCompute and @PowerVRInsider) for more news and announcements from Imagination.