Complete, standalone hardware IP neural network accelerator


PowerVR AX2185

Optimised for performance efficiency, the PowerVR AX2185 is the highest-performing neural network accelerator per mm² on the market. Featuring eight full-width compute engines, the AX2185 delivers up to 4.1 tera operations per second (TOPS). With intelligent applications in demand, the AX2185 is a perfect partner for the premium and high-end smartphone, smart surveillance and automotive markets.
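As a rough illustration of how a headline TOPS figure like this is derived: the engine count comes from the text above, but the per-engine MAC width and clock rate below are assumptions made for this sketch, not vendor-confirmed specifications.

```python
# Back-of-envelope TOPS estimate for a MAC-array accelerator.
# Assumptions (illustrative only): 256 MACs per compute engine and a
# 1.0 GHz clock; each MAC counts as 2 ops (one multiply + one add).
ENGINES = 8            # from the AX2185 description
MACS_PER_ENGINE = 256  # assumption for this sketch
OPS_PER_MAC = 2        # multiply + accumulate
CLOCK_GHZ = 1.0        # assumption for this sketch

tera_ops = ENGINES * MACS_PER_ENGINE * OPS_PER_MAC * CLOCK_GHZ / 1000
print(f"{tera_ops:.3f} TOPS")  # ~4.1 TOPS, in line with the headline figure
```

Under these assumed numbers the arithmetic lands close to the quoted 4.1 TOPS, which is why peak TOPS scales directly with engine count and clock.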

Key benefits

  • Brings new levels of intelligent applications to consumers through the addition of low-cost AI capabilities on edge devices
  • Enables SoC manufacturers to offer a class-leading GPU and neural network accelerator in the same silicon footprint as competitor GPUs
  • Supports a wide range of network models to provide developer flexibility
  • Accelerates deep learning applications on Android with Android NN API support


PowerVR AX2145

Optimised for cost-sensitive devices, the PowerVR AX2145’s streamlined architecture delivers a performance-efficient neural network inferencing engine for ultra-low-bandwidth systems, making it the ultimate choice for entry-level and mid-range markets. Thanks to highly tuned tensor-processing and convolution engines along with an optimised core memory infrastructure, the PowerVR AX2145 perfectly matches the performance and low-cost implementation requirements of entry-level and mid-range smartphone, DTV, smart camera and automotive markets.


Key benefits

  • Brings high-performing AI capabilities to entry-level and mid-range devices
  • Ultimate low implementation cost solution for embedded AI
  • Optimised for ultra-low system memory bandwidth
  • Accelerates deep learning applications on Android with Android NN API support


PowerVR 2NX NNA architecture and features

  • Read and write formats compatible with ISP, GPU and CPU
  • Built-in advanced layer fusion




Typical target applications

The PowerVR 2NX NNA is designed to power inference engines across a range of markets, with a highly scalable architecture ready to serve future solutions across many others.

Companies building SoCs for mobile, surveillance, automotive and consumer systems can integrate the new PowerVR Series2NX Neural Network Accelerator (NNA) for high-performance computation of neural networks at very low power consumption in minimal silicon area.

Potential applications for NNAs are innumerable, but include: photography enhancement and predictive text enhancement in mobile devices; feature detection and eye tracking in AR/VR headsets; pedestrian detection and driver alertness monitoring in automotive safety systems; facial recognition and crowd behaviour analysis in smart surveillance; online fraud detection, content advice, and predictive UX; speech recognition and response in virtual assistants; and collision avoidance and subject tracking in drones.

Making it easy for developers

Imagination is providing everything needed for developers to get their networks up and running quickly and easily, ensuring that compute and bandwidth can be well balanced against accuracy. PowerVR 2NX development resources include mapping and tuning tools, sample networks, evaluation tools and documentation. The comprehensive PowerVR NX Mapping Tool enables easy porting from industry-standard machine learning frameworks such as Caffe and TensorFlow. Advanced network designers will be able to design and implement networks on the 2NX NNA that exploit all of its hardware features.
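The compute/bandwidth-versus-accuracy balance mentioned above typically comes down to choosing weight bit depths. Below is a minimal, framework-agnostic sketch of symmetric 8-bit linear weight quantisation; it is illustrative only and is not the algorithm used by the PowerVR NX Mapping Tool.

```python
# Minimal linear quantisation sketch: map float weights to signed
# 8-bit integers and back, then measure the reconstruction error.
# This illustrates the bandwidth/accuracy trade-off a mapping tool
# must balance; it is not the PowerVR tooling's actual method.

def quantize(weights, bits=8):
    """Symmetric linear quantisation to `bits`-bit signed integers."""
    qmax = 2 ** (bits - 1) - 1              # e.g. 127 for 8-bit
    scale = max(abs(w) for w in weights) / qmax or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.82, -0.41, 0.05, -1.27, 0.33]  # toy float weights
q, scale = quantize(weights, bits=8)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, f"max error {max_err:.6f}")
```

Storing 8-bit integers instead of 32-bit floats cuts weight bandwidth by 4x; pushing to lower bit depths saves more bandwidth at the cost of larger reconstruction error, which is exactly the trade-off a tuning tool lets the developer steer.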

Imagination is also making available the common Imagination DNN (Deep Neural Network) API to enable easy transition between CPU, GPU and NNA. The single API works across multiple SoC configurations for easy prototyping on existing devices.
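The value of a single API across heterogeneous compute can be sketched as a backend-dispatch layer. All names below are hypothetical, invented for illustration; this is not the Imagination DNN API.

```python
# Hypothetical sketch of one inference API dispatching to CPU/GPU/NNA
# backends -- names invented for illustration, not Imagination's API.

class Backend:
    name = "base"
    def run(self, network, inputs):
        raise NotImplementedError

class CpuBackend(Backend):
    name = "cpu"
    def run(self, network, inputs):
        # Reference implementation: plain Python matrix-vector product.
        return [sum(w * x for w, x in zip(row, inputs)) for row in network]

class NnaBackend(CpuBackend):
    # A real backend would drive the accelerator; here it inherits the
    # reference maths so the sketch stays runnable.
    name = "nna"

class Runtime:
    """Single entry point; backend chosen at load time, not in app code."""
    def __init__(self, backend):
        self.backend = backend
    def infer(self, network, inputs):
        return self.backend.run(network, inputs)

network = [[0.5, -1.0], [2.0, 0.25]]   # toy 2x2 weight matrix
for backend in (CpuBackend(), NnaBackend()):
    out = Runtime(backend).infer(network, [1.0, 2.0])
    print(backend.name, out)           # identical result on either backend
```

The point of such a layer is that application code calls one `infer` path while prototypes run on the CPU or GPU of an existing device and production silicon swaps in the accelerator backend.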

Ideal for use with PowerVR GPUs

In devices such as mobile phones where a GPU is mandated, companies can use a PowerVR GPU to manage the classic vision processing algorithms and offload the neural net processing to the PowerVR 2NX NNA. Because of the performance density of the PowerVR NNA and the GPUs, companies can implement this combination in the same silicon footprint as a competing standalone GPU.
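The split described above can be sketched as a two-stage pipeline. Both stages here are plain Python stand-ins (the division of labour, not the maths, is the point), and the stage functions are invented for illustration.

```python
# Hypothetical sketch of splitting a vision pipeline between two
# engines: a "GPU" stage for classic image processing and an "NNA"
# stage for the neural-network part.

def gpu_preprocess(frame):
    """Classic vision work (here: normalise 8-bit pixels to [0, 1])."""
    return [px / 255.0 for px in frame]

def nna_infer(tensor):
    """Neural-net work offloaded to the accelerator (here: a stub
    that 'classifies' by thresholding the mean activation)."""
    mean = sum(tensor) / len(tensor)
    return "bright" if mean > 0.5 else "dark"

def pipeline(frame):
    # The GPU output feeds the NNA directly, so the CPU stays free.
    return nna_infer(gpu_preprocess(frame))

print(pipeline([200, 230, 180, 250]))  # -> bright
print(pipeline([10, 40, 20, 5]))       # -> dark
```

Because the two stages run on separate engines, preprocessing of the next frame can overlap with inference on the current one, which is the usual motivation for this partitioning.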