PowerVR Neural Network SDK

Downloads and installers

CLDNN SDK for developing Neural Network applications on PowerVR GPUs.

Our intention is to make it as easy and efficient as possible for developers to build convolutional neural networks on our hardware. To that end, we provide both an API and an SDK, as well as an image that can be flashed onto an Acer Chromebook R-13 for hardware development.

PowerVR CLDNN API

The PowerVR CLDNN API is our first AI-oriented API. It provides functions that create network layers for constructing and running a neural network on PowerVR hardware. By using specialised OpenCL™ kernels, it enables developers to focus on creating their neural networks with less overhead. The API also performs low-level, hardware-specific optimisations, enabling the generation of more efficient graphs than a custom user OpenCL implementation.

CLDNN sits on top of OpenCL without obscuring it: it makes use of standard OpenCL constructs and memory, so it can be used alongside other custom OpenCL code and alongside standard OpenGL ES™ contexts.

Developers using the CLDNN API and SDK will get a head start when we later release IMGDNN for our custom hardware, the 2NX Neural Network Accelerator, as the APIs are likely to be very similar.

PowerVR CLDNN SDK

The PowerVR CLDNN SDK demonstrates how a neural network can be deployed to PowerVR hardware through the CLDNN API. It includes various helper functions such as file loading, dynamic library initialisation and OpenCL context management. We also provide documentation in the form of a PowerVR CLDNN reference manual.

You can also find the source code for sample applications that show how to use the PowerVR CLDNN API. These include a simple introduction to the API, a more complex number-classification example and, finally, an image-classification example. The examples show how to deploy the popular, well-known “LeNet” and “AlexNet” neural network architectures using the CLDNN API.

The beta SDK is available for download now (API subject to change).

Chromebook Image

We have created an image that developers can flash to an Acer Chromebook R-13, which has a PowerVR GX6250 GPU. For full installation instructions, please read the Chromebook Image page.

If you only wish to see the documentation, you can download the SDK with the link above, as the documentation is included there.

CLDNN Demo

We have a demo available to run on this image. It takes a live camera feed and identifies the object the camera is pointing at. Each camera frame is passed to the CNN, and a label is output on screen along with a confidence percentage indicating how sure the network is of its response to the input image.
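The demo's post-processing code is not shown here, but the step from raw network outputs to a label and confidence percentage is standard: apply softmax to the output scores and take the highest-probability class. A minimal illustrative sketch (the function and label names are our own, not part of the SDK):

```python
import math

def top_prediction(logits, labels):
    """Convert raw network outputs to a (label, confidence %) pair via softmax."""
    m = max(logits)                              # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best] * 100.0

# Hypothetical 3-class example, not real network output.
label, confidence = top_prediction([1.2, 3.4, 0.3], ["cat", "dog", "car"])
print(f"{label}: {confidence:.1f}%")
```

With the sample inputs above, "dog" wins with a confidence of roughly 87%; a real classifier would produce one score per ImageNet class.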

The demo implements well-known network models, including:

AlexNet
GoogLeNet InceptionV1
SqueezeNet
VGG-16

Each network has different characteristics, so different networks may perform better in different scenarios. The key high-level characteristics are the number of operations and the memory usage, which directly influence the speed and accuracy of the network. All of the networks used in the demo are Caffe models trained on the ImageNet data set.

While it may seem odd to be interested in models that are relatively heavy in computation or memory, it is important to consider the use cases for each network. There may be applications where accuracy is critical but new results are only needed infrequently: for instance, SqueezeNet offers AlexNet levels of accuracy but uses fifty times less memory, although it takes somewhat longer to infer. Conversely, there may be cases where rapid inference is preferred at the expense of absolute accuracy. Networks such as VGG-16 are also used as benchmarks and stress tests because of the volume of memory used and operations required.

There is a benchmark function within the demo as shown in this video:

Note: if running the demo on the device’s terminal (i.e. not over SSH), then the terminal will be lost and the device will need rebooting after quitting the demo.

Running the demo

Extract the demo package and navigate into the CNN_Demo folder.
tar -xzf /root/CLDNN_Demo.tar.gz
cd CNN_Demo

Run the demo using the following command. The camid parameter selects the /dev/video* device to read from: camid=4 corresponds to /dev/video4, which should be the built-in camera. If camid=4 does not work, the camera may be exposed as a different /dev/video* file. You may also plug in a USB webcam and pass the appropriate /dev/video* number to get the feed from the external camera instead.

./CNN -camid=4
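If camid=4 does not map to the built-in camera, listing the V4L2 device nodes present on the system shows which numbers are worth trying. A small illustrative script (the function name is our own; any way of enumerating /dev/video* works):

```python
import glob
import re

def candidate_camids():
    """Return the V4L2 device numbers present on this system (/dev/video4 -> 4)."""
    ids = []
    for path in glob.glob("/dev/video*"):
        m = re.search(r"video(\d+)$", path)
        if m:
            ids.append(int(m.group(1)))
    return sorted(ids)

# Each number printed here is a candidate for the -camid= parameter.
print(candidate_camids())
```

On a system with no cameras attached, the list will simply be empty.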

The demo may take some time to load, particularly the first time it is run. It initially loads into a standby mode; use the menu to start the appropriate mode.

Once the demo is loaded you may use the following controls:

Menu controls:

Touch screen: touch the IMG logo (bottom right)

Tap the < and > to cycle menu pages

Keyboard:

Space bar – open/close menu

Up & Down – navigate options

Left & Right – change setting on sliders

Enter – press button

[ & ] – cycle through menu pages

A, G, S, V – quickly switch between AlexNet, GoogLeNet InceptionV1, SqueezeNet and VGG-16

Esc, Q – quit the demo

“Start Live Mode” – Starts the live camera feed mode. Make sure objects are within the purple box, which can be moved/resized in the menu.

“Start Demo Mode” – Starts running the demo/benchmark mode. Processes 100 images with the current network and shows average inferences per second at the end.
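The demo mode's headline figure, average inferences per second, is a simple throughput measurement: time a fixed number of inferences and divide. How the demo computes it internally is not shown; the sketch below (our own helper name, with a stand-in callable in place of a real network) illustrates the calculation:

```python
import time

def average_inferences_per_second(infer, frames, runs=100):
    """Time `runs` inference calls and return the average rate, as in benchmark mode."""
    start = time.perf_counter()
    for i in range(runs):
        infer(frames[i % len(frames)])   # cycle through the available frames
    elapsed = time.perf_counter() - start
    return runs / elapsed

# Stand-in for a real network invocation: any callable taking one frame.
rate = average_inferences_per_second(lambda frame: sum(frame), [[0.0] * 1000], runs=100)
print(f"{rate:.1f} inferences/sec")
```

Note that averaging over 100 runs, as the demo does, smooths out one-off costs such as first-run kernel compilation.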

Further Information

For more information on the demo, read the blog post released for the Embedded Vision Summit. There is also another blog post that may be of interest, on running the demo on the X30.

If you have any further questions, you can join the PowerVR Insider programme via our developer forums.