PowerVR and neural network acceleration: way ahead of the game

If you’ve been paying attention to the neural network scene recently – and really, why wouldn’t you be? – you’ll be aware that it’s a market getting a lot of attention. In fact, machine learning and deep learning are threatening to become terms that even your mother will be familiar with. If, by the way, your mother doesn’t yet know what these are, then be sure to check out this blog post, which will explain everything in language even I can understand.

While several players have announced dedicated hardware to support the running of neural networks, with the introduction of the PowerVR Series2NX Imagination believes it has moved the game on by an order of magnitude.

Very recently, a competitor announced a brand-new SoC that includes not only a CPU and GPU but also a neural network accelerator, a combination we agree will become the standard for mobile devices in the near future.

This brand-new hardware was declared to be ‘ultra-fast’, and to prove it the competitor demonstrated it processing – that is, recognising using a neural network – 1,832 images in one minute on its ‘NPU’ (2,005 images when using the NPU, GPU and CPU together), which it claimed was five times faster than the competition.

To put this to the test, we ran a similar benchmark on a currently available device powered by a MediaTek X30 chipset, which features our Series7XT Plus GPU – the GT7400 Plus, to be precise. As you can see in the video below, it was able to recognise almost 30 images per second, which equates to around 1,800 images per minute.
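
For anyone curious how a figure like this is arrived at, the sketch below shows the general shape of such a throughput benchmark: count how many single-image inferences complete inside a fixed time window. The classify() stub is a hypothetical placeholder for a real network running on the device, not the code used in our demo.

```python
import time

def classify(image):
    # Hypothetical placeholder for a real network inference call (e.g. a
    # GoogLeNet-class model running on the GPU). It is not the code used
    # in our demo; with this no-op stub the loop simply spins flat out.
    pass

def images_per_minute(images, window_s=60.0):
    """Count how many single-image inferences finish inside a fixed window."""
    done = 0
    start = time.monotonic()
    while time.monotonic() - start < window_s:
        classify(images[done % len(images)])
        done += 1
    return done

# With a real model sustaining ~30 inferences per second, this counter
# would land close to the 1,800 images per minute quoted above.
```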

Our test, run on the GPU alone, recognised 1,800 images per minute, making it nearly as fast as their brand-new, state-of-the-art neural network accelerator. Their new SoC was just 1.8% faster than our GPU, which shipped only a few months ago.

So that’s today. Looking to the future, our Series8XT will deliver around 50% more FLOPS at the same frequency, giving a further healthy increase in performance.

OK, so we can handle 1,800 images per minute today. Now let’s bring our new Series2NX neural network accelerator into play. With a single core offering up to 2,048 MACs/cycle, our NNA can handle – wait for it – 42,000 images per minute. That’s roughly a 23x improvement, or over 2,200%, compared with our new competitor.
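
To give a feel for where a figure like that comes from, here is a back-of-the-envelope sketch of how MACs-per-cycle translates into images per minute. Only the 2,048 MACs/cycle figure comes from this post; the clock rate, utilisation and per-image MAC count below are illustrative assumptions of ours, not published Series2NX specifications.

```python
# Back-of-the-envelope: how MACs/cycle turns into images per minute.
macs_per_cycle = 2048        # single Series2NX core (from this post)
clock_hz = 800e6             # assumed clock frequency
utilisation = 0.7            # assumed fraction of peak MACs kept busy
macs_per_image = 1.5e9       # assumed cost of one GoogLeNet-class inference

images_per_second = macs_per_cycle * clock_hz * utilisation / macs_per_image
print(f"~{images_per_second * 60:,.0f} images per minute with these assumptions")
```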

The competitor claimed that their GPU was four times faster than a CPU for neural network image recognition and that their new hardware solution was 25x faster. The graph below shows how the performance of the competitor’s solution compares to a CPU for running neural networks – 4x faster for their GPU and 25x faster for their NPU. Against this, we have plotted the relative performance of our PowerVR Series7XT GPU (inside the MediaTek X30 SoC) and our new PowerVR Series2NX NNA. As you can see, our currently shipping GPU-only solution competes with their dedicated hardware, and our own dedicated hardware is an incredible 583x faster.
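
For anyone who wants to check the arithmetic behind that chart, the short sketch below normalises the throughput figures quoted in this post to the same implied CPU baseline. Treating our GPU-only result as roughly on par with the competitor’s 25x NPU claim reproduces the 583x figure; nothing beyond the numbers already quoted goes in.

```python
# Normalise the throughput figures quoted in this post to a common CPU
# baseline. The baseline is implied by the competitor's 25x claim, not
# measured here.
competitor_npu_ipm = 1832   # images/minute, competitor's NPU
powervr_gpu_ipm = 1800      # images/minute, Series7XT Plus GPU (GT7400 Plus)
series2nx_ipm = 42000       # images/minute, single-core Series2NX NNA

# Our GPU result is essentially level with their NPU, which they rate at
# 25x a CPU, so the implied baseline is roughly 1,800 / 25 = 72 images/minute.
cpu_ipm = powervr_gpu_ipm / 25

print(f"Competitor NPU vs CPU: {competitor_npu_ipm / cpu_ipm:.0f}x")  # ~25x
print(f"PowerVR GPU vs CPU:    {powervr_gpu_ipm / cpu_ipm:.0f}x")     # 25x
print(f"Series2NX vs CPU:      {series2nx_ipm / cpu_ipm:.0f}x")       # ~583x
```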

It’s not just in performance that we take the lead – the Series2NX is also ahead on power efficiency. It is a highly optimised solution for running neural networks, and through its flexible bit-depth support, down to as low as 4-bit, its power consumption will be significantly lower than that of a GPU.
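
As a rough illustration of why low bit-depths matter, the sketch below uniformly quantises a block of float32 weights down to 4-bit codes and compares the storage involved. It’s a generic uniform-quantisation example under our own assumptions, not the Series2NX’s actual weight format, but it shows where the bandwidth (and hence power) saving comes from.

```python
import numpy as np

# Generic uniform quantisation of float32 weights to 4-bit codes.
rng = np.random.default_rng(0)
weights = rng.standard_normal(1_000_000).astype(np.float32)  # stand-in layer weights

levels = 2 ** 4                                   # 16 representable values at 4-bit
scale = (weights.max() - weights.min()) / (levels - 1)
codes = np.round((weights - weights.min()) / scale).astype(np.uint8)  # values 0..15

dequantised = codes * scale + weights.min()
print("max abs error:      ", np.abs(weights - dequantised).max())
print("bytes as float32:   ", weights.nbytes)       # 4,000,000
print("bytes packed 4-bit: ", weights.size // 2)    # 500,000 -> 8x less data traffic
```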

It also wins in terms of physical size: when it comes to silicon area, the Series2NX is very space efficient. Where neural network performance is important rather than just graphics, a device manufacturer can specify a GPU that matches their requirements – such as our Series8XT, or the new Series9XE or 9XM – and pair it with the Series2NX, all in a smaller footprint than competing solutions.

Conclusion

Our customers, therefore, have a choice. They can stick with ‘just’ a GPU when building their SoCs and be sure of having enough performance at their disposal to carry out AI and vision-based tasks at a level that competes with competitors’ current dedicated solutions.

However, the neural network landscape is rapidly increasing in significance, and without adequate performance, manufacturers will not be able to offer competitive devices. As applications make use of ever more complex neural networks, the PowerVR Series2NX NNA will provide our customers with a level of performance that will enable developers to create true next-gen applications, but within the power and bandwidth budget of a mobile device.

If device manufacturers want to distinguish their products and offer a true leading-edge solution, the powerful, efficient and highly scalable PowerVR 2NX is, demonstrably, the only solution on the market that’s up to the task.
