Recently, we released our first AI-oriented SDK – the PowerVR CLDNN SDK. In this post we’d like to explain what it is, what it’s for, and how to use it, so read on for more.

The CLDNN API

The PowerVR CLDNN API enables fast, efficient development and deployment of convolutional neural networks on PowerVR devices. The API generates highly optimised graphs and OpenCL™ kernels based on your network architecture, so developers can focus on fine-tuning the network without needing in-depth OpenCL knowledge. The API also performs low-level, hardware-specific optimisations, enabling it to generate more efficient graphs than a custom user OpenCL implementation and so deliver higher performance (more inferences per second).
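
To put that last point in perspective, below is the sort of kernel a developer would otherwise have to write and tune by hand: a deliberately naive, single-channel 2D convolution in OpenCL C. This is purely an illustrative sketch written for this post, not code generated by or shipped with the SDK, and an optimised hand-written version would also need vectorisation, local-memory tiling and other device-specific tuning.

/* Naive hand-written OpenCL convolution: one work-item per output pixel,
   single channel, no tiling or vectorisation. Illustrative only. */
__kernel void conv2d_naive(__global const float *input,    /* width x height */
                           __global const float *weights,  /* k x k filter   */
                           __global float *output,         /* (width-k+1) x (height-k+1) */
                           const int width,
                           const int k)
{
    const int x = get_global_id(0);          /* output column */
    const int y = get_global_id(1);          /* output row    */
    const int out_width = width - k + 1;

    float acc = 0.0f;
    for (int j = 0; j < k; ++j)
        for (int i = 0; i < k; ++i)
            acc += input[(y + j) * width + (x + i)] * weights[j * k + i];

    output[y * out_width + x] = acc;
}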

CLDNN sits on top of OpenCL but does not obscure it: it uses standard OpenCL constructs, so it can run alongside other custom OpenCL code, and standard OpenCL memory, so it can also be used alongside standard OpenGL ES™ contexts.
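
As a rough sketch of what “standard OpenCL memory” means in practice, the plain OpenCL host code below (written for this post, not taken from the SDK) sets up a context, queue and buffer of the kind that both CLDNN-generated kernels and your own custom kernels can work with; how the buffer is actually handed to CLDNN is not shown here.

/* Minimal standard OpenCL host setup in C (OpenCL 1.x style). The context,
   queue and buffer are ordinary OpenCL objects; error handling is reduced to
   asserts for brevity. Illustrative only, not SDK code. */
#include <assert.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id   device;
    cl_int         err;

    err = clGetPlatformIDs(1, &platform, NULL);
    assert(err == CL_SUCCESS);
    err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
    assert(err == CL_SUCCESS);

    cl_context context = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    assert(err == CL_SUCCESS);
    cl_command_queue queue = clCreateCommandQueue(context, device, 0, &err);
    assert(err == CL_SUCCESS);

    /* An ordinary cl_mem buffer, e.g. to hold an input image for the network;
       the same buffer can also be read or written by your own custom kernels. */
    cl_mem input = clCreateBuffer(context, CL_MEM_READ_WRITE,
                                  224 * 224 * 3 * sizeof(cl_float), NULL, &err);
    assert(err == CL_SUCCESS);

    clReleaseMemObject(input);
    clReleaseCommandQueue(queue);
    clReleaseContext(context);
    return 0;
}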


The CLDNN SDK

Our PowerVR CLDNN SDK demonstrates how a neural network can be deployed to PowerVR hardware through the PowerVR CLDNN API. It includes helper functions for tasks such as file loading, dynamic library initialisation and OpenCL context management, along with documentation in the form of a PowerVR CLDNN reference manual, which explains all of the CLDNN API’s functions.
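
To give an idea of the boilerplate those helpers cover, here is a small stand-alone C sketch of dynamic library loading and basic OpenCL discovery using dlopen/dlsym on Linux. The helper names and behaviour in the SDK itself will differ; this is only an illustration of the kind of plumbing it takes care of for you.

/* Illustrative sketch of dynamic library initialisation: load libOpenCL.so at
   runtime, resolve one entry point and query the platform count. Not SDK code. */
#include <dlfcn.h>
#include <stdio.h>
#include <CL/cl.h>

typedef cl_int (*PFN_clGetPlatformIDs)(cl_uint, cl_platform_id *, cl_uint *);

int main(void)
{
    void *libcl = dlopen("libOpenCL.so", RTLD_NOW | RTLD_LOCAL);
    if (!libcl) {
        fprintf(stderr, "failed to load OpenCL: %s\n", dlerror());
        return 1;
    }

    PFN_clGetPlatformIDs pGetPlatformIDs =
        (PFN_clGetPlatformIDs)dlsym(libcl, "clGetPlatformIDs");
    if (!pGetPlatformIDs) {
        fprintf(stderr, "missing symbol: %s\n", dlerror());
        dlclose(libcl);
        return 1;
    }

    cl_uint num_platforms = 0;
    pGetPlatformIDs(0, NULL, &num_platforms);
    printf("OpenCL platforms found: %u\n", num_platforms);

    dlclose(libcl);
    return 0;
}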

We have included the source code for sample applications that show how to use the PowerVR CLDNN API. These include a simple introduction to the API, a more complex number classification example and, finally, an image classification example. Together they show how to deploy the well-known “LeNet” and “AlexNet” neural network architectures using the PowerVR CLDNN API.

We have created an image that developers can flash to an Acer Chromebook R-13, which has a PowerVR GX6250 GPU. This is the only way to make full use of the SDK at this time.

Acer Chromebook R-13: you’ll need one of these, flashed with the image we provide, to make use of the PowerVR CLDNN SDK.

Demo

We also have a demo available to run on the Acer Chromebook R-13 image. It takes a live camera feed and identifies the object the camera is pointing at. A camera frame is passed to the CNN, and a label is output on the screen along with a confidence percentage, indicating how sure the network is of its response to the input image.
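
For readers unfamiliar with how such a confidence figure is produced, the short C sketch below (written for this post, not taken from the demo) shows the usual approach: apply a softmax to the classifier’s raw output scores and report the highest probability as a percentage. The labels and scores are made-up example values.

/* Turning raw classifier outputs into a top-1 label and confidence percentage.
   Plain C, illustrative values only. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const char  *labels[] = { "cat", "dog", "keyboard", "coffee mug" };
    const float  scores[] = { 1.3f, 0.2f, 4.7f, 2.1f };  /* raw network outputs */
    const int    n = 4;

    /* Softmax, subtracting the maximum score first for numerical stability. */
    float max_score = scores[0];
    for (int i = 1; i < n; ++i)
        if (scores[i] > max_score) max_score = scores[i];

    float probs[4], sum = 0.0f;
    for (int i = 0; i < n; ++i) {
        probs[i] = expf(scores[i] - max_score);
        sum += probs[i];
    }

    int best = 0;
    for (int i = 0; i < n; ++i) {
        probs[i] /= sum;
        if (probs[i] > probs[best]) best = i;
    }

    printf("%s (%.1f%% confidence)\n", labels[best], probs[best] * 100.0f);
    return 0;
}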

The demo implements a number of well-known network models.

Each network has different characteristics, so different networks may perform better in different scenarios. The key high-level characteristics are the number of operations and the memory usage, which directly influence the speed and accuracy of the network. All of the networks used in the demo are Caffe models trained on the ImageNet dataset, and the demo also includes a benchmark function.
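
As an illustration of what such a benchmark measures, here is a minimal C timing loop that reports inferences per second. The run_one_inference() function is a placeholder stub invented for this sketch; the demo’s actual benchmark function and network execution calls are not reproduced here.

/* Minimal inferences-per-second measurement. run_one_inference() is a dummy
   stand-in for a real forward pass; only the timing pattern is the point. */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

static void run_one_inference(void)
{
    /* Placeholder workload standing in for one forward pass of the network. */
    volatile double x = 0.0;
    for (int i = 0; i < 1000000; ++i)
        x += i * 0.5;
    (void)x;
}

int main(void)
{
    const int iterations = 50;
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < iterations; ++i)
        run_one_inference();
    clock_gettime(CLOCK_MONOTONIC, &end);

    double seconds = (end.tv_sec - start.tv_sec) +
                     (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("%d inferences in %.3f s -> %.2f inferences/sec\n",
           iterations, seconds, iterations / seconds);
    return 0;
}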

What’s Next?

Developers who get to grips with our PowerVR CLDNN API now will be in a great position for the future release of our Series2NX hardware and its associated APIs, which are likely to be very similar to CLDNN.

Further Information

For more information on PowerVR CLDNN, take a look at our CLDNN SDK page.

If you have any further questions, you can join the PowerVR Insider programme and interact with our online community at www.powervrinsider.com. You can also visit our Contact page for details on how to get in touch with us.

To keep up with the latest from Imagination, follow us on social media: on Twitter @ImaginationTech and on LinkedIn, Facebook and Google+.
