With Artificial Intelligence (AI) we are on the verge of a huge shift towards smarter and more autonomous computing devices – from cameras to cars, drones and robots.

Our solutions for Artificial Intelligence already cover everything from smart sensors through to fully unmanned cars, and we are developing tomorrow’s technologies for even more complex intelligences.

Whether you want smartness residing in local devices like cameras, consumer products or robots, or enabled by powerful servers in the cloud, we can help you achieve your vision.

We at Imagination exist to enable our customers to create amazing products using the compute, acceleration and connectivity technologies fundamental to applying their algorithms and domain expertise.

Our partners possess the algorithms and domain knowledge to interpret sensor inputs and apply them in end applications. Our role is to create the processors and accelerators that drive that learning and understanding, enabling real-time response to the huge volumes of data those inputs generate. In autonomous vehicles, for example, an advanced collision-avoidance system is of no use if we cannot provide a real-time response.

Our roadmap across all our technologies is designed to support and deliver Artificial Intelligence applications.


We are using our PowerVR GPU compute technology to help drive the machine learning revolution.

Our mission is to enable machine learning on low-power, cost-effective platforms that until now could not support the necessary computational load. These applications include autonomous vehicles, more intelligent security systems, and context-aware consumer goods.

PowerVR is focused on applying technologies such as neural networks. Our GPUs are well suited to this kind of highly parallel, data-driven computation, while our dedicated neural network accelerator IP will enable our customers to offer new levels of performance in this category. Models trained in the cloud on huge datasets can then be deployed on devices such as cameras, robots and drones to recognise, manipulate, avoid and engage with objects in the world.
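The train-in-the-cloud, infer-on-the-device pattern described above can be sketched minimally as follows. This is an illustrative example only, not Imagination's actual software: the weights stand in for a model downloaded from cloud training, and the matrix product is the highly parallel work a GPU or neural network accelerator would execute.

```python
import numpy as np

# Hypothetical weights, standing in for a model trained in the cloud
# and downloaded to the device.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))  # 4 sensor features -> 3 object classes
b = np.zeros(3)

def classify(x):
    """On-device inference: a single dense layer followed by argmax.

    The matrix product is the data-parallel step that GPUs and
    dedicated neural network accelerators are built to execute
    across many elements at once.
    """
    logits = x @ W + b
    return int(np.argmax(logits))

# A batch of (simulated) sensor readings classified on the device.
batch = rng.standard_normal((8, 4))
labels = [classify(x) for x in batch]
```

In a real deployment the single layer would be a full trained network and the NumPy call would be offloaded to the GPU or accelerator, but the division of labour is the same: heavy training in the cloud, lightweight real-time inference at the edge.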

In autonomous vehicles, the ability to recognise a very large number of objects, from road signs to cats, and to understand them in a range of contexts, has to happen instantaneously within the system. In those devices we need to enable comprehensive real-time in-system intelligence, a task the GPU's highly parallel architecture is ideally suited to.

This applies not just to automotive, but also to embryonic markets such as security and retail cameras; we are enabling intelligence as close to the edge as possible, at low power and low cost.


In distributed systems, sensory and data inputs are likely to be widely distributed and must be connected across networks for processing in the cloud.

For example, consider intelligent household assistants. They drive our connected lives, although today largely through their understanding of a single sense: audio. Voice is very much entering the mainstream of human/machine interaction; today we can talk to our phone, car, and home.

But those interactions are all enabled by the connection of local devices to central intelligence, typically in the cloud; whether it is our cars’ ability to understand local traffic conditions, our phones’ ability to identify our voice commands, or systems in our homes helping us to set tomorrow’s thermostat temperature by understanding the weather.

Ensigma delivers the broadest spectrum of Wi-Fi IP available and has been proven across a wide range of devices, from TV and radio to connected speakers, IoT devices and more.