Complete, standalone hardware IP neural network accelerator

Why Neural Networks?

Neural networks (NNs) are enabling an explosion in technological progress across industries. Neural network accelerators (NNAs) are a fundamental new class of processor, likely to become as significant as CPUs and GPUs. The new PowerVR Series2NX Neural Network Accelerator (NNA) delivers high-performance computation of neural networks at very low power consumption in minimal silicon area.


  • 2x the performance and half the bandwidth of the nearest competitor
  • First dedicated hardware solution with flexible bit depth support from 16-bit down to 4-bit
  • Lowest bandwidth Neural Network (NN) solution
  • Architected to support multiple operating systems, including Linux and Android
  • Includes hardware IP, software and tools to provide a complete neural network solution for SoCs
  • Efficiently runs all common neural network computational layers
  • Depending on the computation requirements of the inference tasks, it can be used standalone – with no additional hardware required – or in combination with other processors such as CPUs and GPUs



PowerVR 2NX is a completely new architecture designed from the ground up to provide:

  • The industry’s highest inference/mW IP cores to deliver the lowest power consumption*
  • The industry’s highest inference/mm2 IP cores to enable the most cost-effective solutions*
  • The industry’s lowest bandwidth solution* – with support for fully flexible bit depth for weights and data including low bandwidth modes down to 4-bit
  • Industry-leading performance of 2048 MACs/cycle in a single core, with the ability to scale higher in multi-core configurations

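The flexible bit-depth support above can be illustrated with a minimal sketch of symmetric linear quantization, a common scheme for reducing weight precision. The function names and values here are illustrative only, not the 2NX toolchain's actual API:

```python
def quantize(weights, bits):
    """Symmetric linear quantization of floating-point weights to
    signed integers of the given bit depth (e.g. 16 down to 4)."""
    qmax = 2 ** (bits - 1) - 1          # e.g. 7 for 4-bit signed
    scale = max(abs(w) for w in weights) / qmax
    quantized = [max(-qmax - 1, min(qmax, round(w / scale)))
                 for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Reconstruct approximate floating-point weights."""
    return [q * scale for q in quantized]

# Illustrative weight values
weights = [0.91, -0.42, 0.07, -0.88]
q4, s4 = quantize(weights, 4)      # integers fit in [-8, 7]
approx = dequantize(q4, s4)        # each within scale/2 of the original
```

Dropping from 16-bit to 4-bit weights cuts weight bandwidth by 4x at the cost of coarser reconstruction, which is exactly the compute-and-bandwidth-versus-accuracy trade-off the 2NX tuning tools are described as helping developers balance.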

Typical target applications

The PowerVR 2NX NNA is designed to power inference engines across a range of markets, with a highly scalable architecture that can extend to future solutions in many others.

Companies building SoCs for mobile, surveillance, automotive and consumer systems can integrate the 2NX NNA to run neural network workloads at very low power consumption in minimal silicon area.

Potential applications for NNAs are innumerable, but include: photography enhancement and predictive text enhancement in mobile devices; feature detection and eye tracking in AR/VR headsets; pedestrian detection and driver alertness monitoring in automotive safety systems; facial recognition and crowd behavior analysis in smart surveillance; online fraud detection, content advice, and predictive UX; speech recognition and response in virtual assistants; and collision avoidance and subject tracking in drones.

Making it easy for developers

Imagination is providing everything developers need to get their networks up and running quickly and easily, ensuring that compute and bandwidth can be balanced against accuracy. PowerVR 2NX development resources include mapping and tuning tools, sample networks, evaluation tools and documentation. The comprehensive PowerVR NX Mapping Tool enables easy porting from industry-standard machine learning frameworks such as Caffe and TensorFlow. Advanced network designers will be able to design and implement networks on the 2NX NNA that exploit all of its hardware features.

Imagination is also making available the common Imagination DNN (Deep Neural Network) API to enable easy transition between CPU, GPU and NNA. The single API works across multiple SoC configurations for easy prototyping on existing devices.

Ideal for use with PowerVR GPUs

In devices such as mobile phones, where a GPU is mandated, companies can use a PowerVR GPU to manage classic vision-processing algorithms and offload the neural network processing to the PowerVR 2NX NNA. Thanks to the performance density of the PowerVR NNA and GPUs, companies can implement this combination in the same silicon footprint as a competing standalone GPU.