For many years, the semiconductor industry has strived to integrate more and more components into a single system-on-chip (SoC). After all, it is an entirely practical solution for high-volume applications. By optimally positioning the various cores, memories and peripherals, chip manufacturers can minimise data pathways, improve power efficiency and optimise for high performance, while significantly reducing costs. The industry has very much succeeded with this approach, and the SoC is now a standard component of almost all our consumer electronics.

AI as standard

As companies begin to understand the potential of using neural networks for tasks ranging from natural language processing to image classification, the number of products introducing some element of artificial intelligence is steadily increasing. Meanwhile, processing for these tasks is migrating from cloud-based architectures into the device itself, with dedicated hardware-based neural network accelerators now embedded into the SoCs themselves.

AI is being integrated into many SoCs

From voice-activated consumer electronics products such as virtual assistants, through to advanced driver-assistance systems (ADAS), the opportunity for integrated neural network-based AI is expanding across several market segments. Undeniably, AI is anticipated to become an essential element in many solutions.

One size doesn’t fit all

However, although the number of applications for AI is increasing, this doesn’t necessarily mean that SoCs with integrated AI acceleration are the way forward for every scenario. Indeed, if AI is to reach across the majority of market segments, fragmentation will naturally occur, because products using the technology have vastly different processing requirements. Fragmented markets are challenging to serve with dedicated SoCs, so a ‘one size fits all’ approach isn’t always applicable. While some markets, such as mobile phones or ADAS, promise high-volume opportunities for SoC vendors, many markets targeting the use of AI will naturally be low-volume prospects. For example, some products may require AI for voice processing or image recognition, but not both; likewise, a smart home vendor is unlikely to use an SoC originally designed for smartphones just to embed AI capabilities into its control panel, as this would not be cost-effective.

Meet the AI companion chip

Multi-core chips are commonly found these days in desktop CPUs and mobile SoCs, as their scalable architecture enables them to deliver performance on demand. An AI ‘companion chip’ would take a similar approach: it would be designed with not just one but several GPU-compute and neural network accelerator (NNA) cores, providing sufficient performance for specific applications while keeping silicon area, and therefore chip cost, to a minimum. This processor would sit alongside the main application processor (SoC) as a companion chip, offloading the AI inference tasks that would normally be handled by an NNA core on the main application processor.
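To make the offload model concrete, here is a minimal C++ sketch of how an application processor might hand an inference task to a companion chip. The NnaHandle type and the companion_* driver calls are hypothetical illustrations with simulated stubs, not a real vendor API.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>

// Hypothetical handle to one NNA core on the companion chip.
struct NnaHandle { int core_id; };

// Illustrative driver shims -- a real companion chip would expose a
// vendor-specific interface; these stubs just simulate the calls.
NnaHandle companion_acquire_core(int id)                       { return {id}; }
void companion_load_network(NnaHandle, const uint8_t*, size_t) {}
void companion_run_inference(NnaHandle h, const float*, float* out) {
    out[0] = 0.9f;  // pretend the NNA produced a confidence score
    std::printf("inference ran on companion NNA core %d\n", h.core_id);
}
void companion_release_core(NnaHandle) {}

// The application processor only orchestrates; the compute-heavy
// inference itself is offloaded to the companion chip.
void classify_on_companion(const uint8_t* model, size_t model_len,
                           const float* frame, float* scores) {
    NnaHandle nna = companion_acquire_core(0);
    companion_load_network(nna, model, model_len);
    companion_run_inference(nna, frame, scores);
    companion_release_core(nna);
}

int main() {
    uint8_t model[16]{};
    float frame[8]{}, scores[4]{};
    classify_on_companion(model, sizeof model, frame, scores);
}
```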

The SoC vendor is now afforded the opportunity to create a conventional, generic application processor capable of cost-effectively servicing multiple markets, while turning to an AI companion chip to expand the AI capabilities for targeted or niche applications.

OEMs, for their part, now have options to scale their product solutions appropriately, depending on the AI processing load they expect to handle across their application.

An example AI processor: the number of NNAs would scale depending on the use case.

A typical companion AI SoC might include a generic control CPU for housekeeping tasks; a GPU core specifically designed for high-performance compute, as opposed to one devoted to handling graphics and 3D transform operations; plus several NNAs that can be combined as necessary to run different neural networks and inference engines simultaneously, each using a different level of precision depending on the task at hand. For example, in a dual-NNA system, one NNA could execute an image recognition task, identifying faces in a scene, before passing the results to another NNA that simultaneously decomposes those faces into individual features to identify expressions.
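The sketch below illustrates that dual-NNA pipelining pattern in C++, using std::async to model the two cores running concurrently: while one core detects faces in the current frame, the other classifies expressions from the previous frame. The detect_faces_on_nna0 and classify_expressions_on_nna1 functions are hypothetical stand-ins for dispatches to a companion chip driver.

```cpp
#include <cstdio>
#include <future>
#include <vector>

// Hypothetical stand-ins for the two networks running on separate
// NNA cores; real code would dispatch to the companion chip driver.
struct Faces { std::vector<int> boxes; };

Faces detect_faces_on_nna0(int frame_id) {
    std::printf("NNA0: detecting faces in frame %d\n", frame_id);
    return {{frame_id}};  // pretend one face was found
}

void classify_expressions_on_nna1(const Faces& f) {
    std::printf("NNA1: classifying expressions for %zu face(s)\n",
                f.boxes.size());
}

int main() {
    // Software pipelining: while NNA0 detects faces in frame N,
    // NNA1 classifies expressions from frame N-1, keeping both
    // cores busy at the same time.
    std::future<Faces> pending;
    for (int frame = 0; frame < 4; ++frame) {
        auto detect = std::async(std::launch::async,
                                 detect_faces_on_nna0, frame);
        if (pending.valid())
            classify_expressions_on_nna1(pending.get());
        pending = std::move(detect);
    }
    if (pending.valid())
        classify_expressions_on_nna1(pending.get());  // drain the pipeline
}
```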

Another example might be in automotive. A six-core AI companion chip could be partitioned so that three NNAs identify road signs (each performing a different aspect of that task), while the other three are dedicated to pedestrian detection. The number of NNAs and the distribution of tasks would depend on the requirements of the application. This concept could then be expanded into a family of dedicated AI processors, each with a different number of NNAs to address different performance points.
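As a rough illustration, a static partitioning of such a six-core chip might be expressed as a simple assignment table. The task names and the CoreAssignment structure below are purely hypothetical; a real system would derive this mapping from the application's requirements.

```cpp
#include <array>
#include <cstdio>

// Hypothetical static partitioning of a six-NNA companion chip:
// cores 0-2 cooperate on road-sign recognition, cores 3-5 on
// pedestrian detection. Task names are illustrative only.
enum class Task { SignDetect, SignClassify, SignTrack,
                  PedDetect, PedClassify, PedTrack };

struct CoreAssignment { int nna_core; Task task; };

constexpr std::array<CoreAssignment, 6> kPartition{{
    {0, Task::SignDetect}, {1, Task::SignClassify}, {2, Task::SignTrack},
    {3, Task::PedDetect},  {4, Task::PedClassify},  {5, Task::PedTrack},
}};

int main() {
    for (const auto& a : kPartition)
        std::printf("NNA core %d -> task %d\n", a.nna_core,
                    static_cast<int>(a.task));
}
```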

Cloud to ground

We’re already seeing dedicated AI chips in the cloud, such as Google’s TPU and Microsoft’s Project Brainwave, which is built on Intel Stratix FPGAs. Today, these are mainly used for training machine-learning algorithms.

A typical cloud-based AI solution – it’s massive!

However, not all devices are connected to cloud-based servers, and across a plethora of different markets the industry acknowledges that at least some of the AI processing must be done on the device itself. Those markets are complex to serve and, as we’ve discussed, one SoC doesn’t fit all. Vendors across the industry are already utilising neural networks for their particular requirements, and the move to companion AI chips promises to be an exciting new step in the evolution of AI processing solutions at the edge.

The end result is that companion AI chips might just become more ubiquitous than anyone anticipated. Imagination has more than 25 years of experience building innovative cores for the semiconductor industry, making it a reliable partner for this type of task. To learn how PowerVR’s advanced GPU and Neural Network Accelerator technologies can help you create your next AI SoC, visit our website or contact Imagination for further details.
