Opinion: the balance between edge and cloud.

Simon Forrest explains how embedded chips can meet the challenge of delivering true local AI processing.

GPUs and NNAs are rapidly becoming essential elements for AI on the edge. As companies begin to harness the potential of neural networks for tasks ranging from natural language processing through to image classification, the number of products introducing some element of artificial intelligence is steadily increasing. Meanwhile, the balance of processing for these tasks is migrating from traditional cloud-based architectures into the device itself, with dedicated hardware-based neural network accelerators now embedded into the silicon chips that enable local AI processing. From advanced driver-assistance systems (ADAS) monitoring the road ahead through to voice-activated consumer electronics products such as virtual assistants, the opportunity for integrated neural network-based AI is expanding across several market segments.

Imagination’s business is in supplying essential core building blocks for silicon chips. We’re known predominantly for our embedded graphics (GPU) and neural network accelerator (NNA) technologies, which we license to the world’s leading silicon vendors. Their chips are deployed widely across multiple products and services, giving Imagination a unique position in the market: we enable entire ecosystems to participate in AI.

Undeniably, AI is now considered vitally important in many applications. But there are many challenges. One of these is balancing the processing load between edge and cloud, and deciding where best to place the AI inferencing task itself. Edge AI is used for local speech recognition on consumer devices, for example identifying “wake words” or simple instructions, but a large proportion of voice AI processing must then happen in the cloud in order to tap into the vast knowledge stores that simply cannot be held locally on the devices themselves. The upshot is that many products today are marketed with AI capability but in fact only perform simple pattern matching and recognition locally, relying on the cloud to deliver the impression of intelligence.
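To make that split concrete, here is a minimal Python sketch of a hybrid voice pipeline. Everything in it is a hypothetical stand-in: the endpoint URL is a placeholder, and the energy-threshold “wake word” check merely substitutes for the small quantised network a real device would run on its NNA.

```python
import json
import urllib.request

CLOUD_ASR_URL = "https://example.invalid/asr"  # placeholder cloud endpoint

def detect_wake_word(frame: bytes) -> bool:
    # Stand-in for a small on-device network: a real product would run a
    # quantised keyword-spotting model on the embedded NNA here.
    return sum(frame) / max(len(frame), 1) > 128

def recognise_in_cloud(utterance: bytes) -> dict:
    # Full speech recognition draws on models and knowledge stores far too
    # large to hold locally, so the captured audio is shipped to the cloud.
    req = urllib.request.Request(
        CLOUD_ASR_URL, data=utterance,
        headers={"Content-Type": "application/octet-stream"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)

def voice_loop(frames):
    for frame in frames:
        if detect_wake_word(frame):          # cheap, always-on edge inference
            yield recognise_in_cloud(frame)  # heavyweight work in the cloud
```

The design point is simply that the always-on path must be cheap enough to run continuously on the device, while anything needing those vast knowledge stores crosses the network.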

This situation will change. As silicon processes shrink further and embedded neural network accelerators (NNAs) become almost as ubiquitous as CPUs, the opportunity grows to increase AI processing capability at the edge. For example, expect to see smart security cameras proficient in monitoring for specific events. No longer limited to simply recording video, these will use edge AI processing to identify features within the field of view, such as vehicles on a road or faces within a crowd, and trigger specific activities such as identifying the make and model of a vehicle or granting access to an authorised individual. The output may not be a recognisable video feed; instead it could simply be streams of metadata describing those activities. Embedding AI into security cameras will even save costs by reducing the number of “false positives”, because edge AI within the cameras themselves can distinguish between normal and suspicious behaviour.
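As a rough illustration of a metadata-only camera, the sketch below runs a per-frame detection step and emits JSON events rather than video. The `detect_objects` function is a hypothetical stand-in for the camera’s on-device detection model; no particular SDK is implied.

```python
import json
import time

def detect_objects(frame) -> list:
    # Placeholder for on-device inference: a real camera would run a
    # quantised object-detection network on its embedded NNA.
    return [{"label": "vehicle", "confidence": 0.97, "bbox": [120, 40, 380, 220]}]

def metadata_stream(frames):
    for frame in frames:
        detections = detect_objects(frame)
        # Only events of interest ever leave the device; raw video need not.
        events = [d for d in detections if d["confidence"] > 0.9]
        if events:
            yield json.dumps({"ts": time.time(), "events": events})

for record in metadata_stream([object()]):  # toy single-frame run
    print(record)
```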

Although the number of applications for AI is increasing, this doesn’t necessarily mean that individual SoCs with integrated neural inference engines are the way forward for all scenarios. If AI is to reach across most market segments, fragmentation will naturally occur because products using the technology have vastly different processing requirements. Fragmented markets are difficult to serve with generic application processors, such as those with integrated NNA and GPU; a ‘one size fits all’ approach isn’t always applicable.

While some markets promise high-volume opportunities for SoC vendors, such as mobile phones or automotive ADAS, many markets targeting the use of AI will naturally present as low-volume prospects. Notably, some products might require AI for voice processing or image recognition, but not both: a smart lighting vendor is unlikely to use an SoC originally designed for smartphones simply to introduce AI capabilities into its products; it cannot be cost-effective. The solution to this conundrum is to create specialised AI chips that sit alongside the main application processor as companion chips, offloading the AI inference tasks that would normally be handled by an NNA core on the application processor itself. This offers distinct advantages: SoC vendors can provide a range of edge AI chips with different performance levels, and OEMs gain several options to scale product solutions appropriately, depending on the AI processing load they expect to handle within their specific application.
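To make the scaling argument concrete, here is an illustrative sketch of how the same software stack might choose between hardware tiers. The backend names and throughput figures are invented for illustration and do not describe any real product line.

```python
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    tops: float  # rough peak throughput, tera-operations per second

# One product line, three silicon options: a discrete edge-AI companion
# chip, an NNA integrated in the main SoC, and a CPU-only fallback.
AVAILABLE = [
    Backend("companion-nna", tops=4.0),
    Backend("soc-nna", tops=1.0),
    Backend("cpu-fallback", tops=0.05),
]

def pick_backend(required_tops: float) -> Backend:
    # Choose the smallest backend that still meets the workload, so OEMs
    # only pay for the AI performance their application actually needs.
    for backend in sorted(AVAILABLE, key=lambda b: b.tops):
        if backend.tops >= required_tops:
            return backend
    return max(AVAILABLE, key=lambda b: b.tops)  # best effort otherwise

print(pick_backend(0.5).name)  # -> "soc-nna" on this hypothetical device
```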


So where is the AI market heading? In 2019 I expect interest in and demand for AI to continue; indeed, the technologies underpinning it will begin to mature. At the same time there will almost certainly be a realisation that AI isn’t the answer to everything: the hype will subside somewhat, and many companies will shift focus. They will harness the potential of AI to enhance system capabilities, but AI may not necessarily remain central to the operation of those systems.

Further out, true AI – where machines have awareness and can take decisions based upon cognitive reasoning – is still a decade or more away. This implies that cloud connectivity will remain crucial for many years, affording access not just to the massively parallel compute resource necessary – possibly via quantum machines – but also to the immense knowledge stores and databases that AI relies upon to make sense of the world around it. New communication technologies promising higher bandwidth are on the immediate horizon for 2019, notably 5G and 802.11ax, so expect cloud AI architectures and connectivity bandwidth to scale accordingly.

[Image: PowerVR Series2NX architecture]

For true AI at the edge, we will need innovative methods to improve the packing density of transistors on silicon chips, and entirely new ways of constructing SoCs that combine the capacity to acquire knowledge through learning with the reasoning skills needed to adapt.

Imagination designs essential core technologies for silicon vendors wishing to build world-leading edge AI solutions. PowerVR GPUs provide the high-performance GPU-compute capabilities necessary for processing the visual elements of AI, such as image recognition and sorting, gesture-driven interfaces, or live video analytics. PowerVR NNAs (neural network accelerators) form the heart of any leading edge AI solution, supplying the requisite hardware acceleration for advanced inferencing and edge data processing. When combined in silicon chips, our GPU and NNA technologies provide everything necessary for high-performance AI processing at the edge.
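As a loose illustration of that division of labour, the toy pipeline below splits work between a “GPU” preprocessing stage and an “NNA” inference stage. Both functions are pure-Python stand-ins; real code would go through the vendor’s GPU-compute and NNA runtimes rather than anything shown here.

```python
def preprocess_on_gpu(frame: bytes) -> list:
    # Scaling and normalisation are massively parallel, so on real
    # hardware they map naturally onto GPU compute.
    return [(p - 128) / 128.0 for p in frame]

def infer_on_nna(tensor: list) -> float:
    # The fixed-function NNA would execute the quantised network itself;
    # a trivial mean score stands in so the sketch runs end to end.
    return sum(tensor) / len(tensor)

def analyse(frame: bytes) -> float:
    return infer_on_nna(preprocess_on_gpu(frame))

print(analyse(bytes(range(256))))  # toy one-dimensional "frame"
```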

The future of AI is becoming clear… but don’t be surprised when it takes longer than anyone anticipates to reach the destination.
