AI Automotive

The dawning of the AI age

Jul 19, 2017  |  6 min read

I don’t think many would disagree that the key technology trend of the 21st century so far has been the advent of mobile. However, today I think we are on the verge of a huge shift towards another driving paradigm: smartness.

Mobile has had huge social benefits – from connecting us to help and community, to enabling access to rich information everywhere. Yet there are also downsides – over-stimulation and over-reliance perhaps being two. What will the effect of the next wave of devices be?

Already I’m sure you’ve heard of the prediction that cars intelligent enough to drive themselves will lay waste to professions that involve human drivers. But the knock-on effects will be even greater. The insurance industry, car-parking providers, car manufacturers, leasing companies… all will change. And what will drivers do when they no longer drive? The rise of in-car entertainment and lifestyle options will surely follow.

Driving is just one part of our ‘work’ that autonomous systems will affect. Science-fiction writers like Iain M. Banks talk about a post-scarcity future where lives are driven by the pursuit of purpose rather than work or money. Will that be the end result of our smarter machines? In Banks’ future, even government has been replaced by AI.

The dawn of AI

Artificial intelligence is a very broad topic. It covers everything from smart sensors and fully unmanned cars through to complex intelligences we can’t yet create – but to which we can already see a potential path.

Consider AI as a set of component technologies: machine learning, deep learning, convolutional neural networks (CNNs) and so on. But whatever you call them, they add up to smarter machines, with that smartness residing to some extent in local devices like cameras, consumer products or robots, and to some extent in the cloud, depending on the application. Whether it’s platooning, crowdsourced parking, smart things, Massive Machine Type Communications, IoT, SLAM or something else, AI is a component today and will be more so tomorrow.

Welcome to the machine

Today, we can easily talk to a machine and expect intelligent, albeit not Turing-test-passing, answers. Voice-controlled “things” are now part and parcel of the everyday, bringing added convenience and increased productivity. By combining three technologies (IoT, AI and 5G), these smart “things” can enable an experience which just a few years ago was the preserve of science fiction.

Our devices will start to experience us – understanding where we are, what we are saying, what we actually want even when we express it imperfectly…

AI is real: machines can now drive like us, play games like us and see like us. But this isn’t necessarily like the dark future we see in the TV show Westworld; this is about how to make our day-to-day lives better.

While the self-awareness experienced by the machines of Westworld is still in the future, today we all need to be talking about the intersection of smartness and devices, whether it’s across robotics, autonomous devices, smart home, or the Internet of Things.

Inside the head of AI

There are many kinds of AI, and each needs different techniques and technologies. Reactive machines can be built well today using rules-based AI: a reactive machine perceives and reacts to stimuli but has no internal concept of itself or the world. Systems we are involved in today, like autonomous cars, go beyond this to have a sense of time and memory, which they need to keep track of real-world objects and events. But they don’t really have what we would call experience: they aren’t learning to handle new situations, just improving their handling of known ones.
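To make that distinction concrete, here is a minimal Python sketch (the class names, thresholds and sensor values are hypothetical, purely for illustration): a reactive machine maps the current stimulus straight to an action, while a machine with a sense of time and memory also tracks how the world is changing.

```python
# Hypothetical sketch: a purely reactive agent versus one with memory.

class ReactiveBrake:
    """Reactive machine: maps the current stimulus straight to an action."""
    def act(self, distance_m: float) -> str:
        return "brake" if distance_m < 10.0 else "cruise"

class TrackingBrake:
    """Adds a sense of time and memory: remembers the last reading so it
    can estimate closing speed, but it still only handles known situations."""
    def __init__(self):
        self.last_distance_m = None

    def act(self, distance_m: float, dt_s: float = 0.1) -> str:
        closing_m_s = 0.0
        if self.last_distance_m is not None:
            closing_m_s = (self.last_distance_m - distance_m) / dt_s
        self.last_distance_m = distance_m
        # React to the trend over time, not just the instantaneous value.
        return "brake" if distance_m < 10.0 or closing_m_s > 5.0 else "cruise"

agent = TrackingBrake()
for d in (40.0, 37.0, 30.0, 22.0):   # an object approaching quickly
    print(agent.act(d))               # cruise, then brake on the closing trend
```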

To go beyond this, AI needs not just a system of functions but a ‘Mind’ that can adjust and learn, and even ultimately achieve consciousness: an original thought, a concept of others, and finally self-awareness. The technology for this is not impossible to conceive of today. It is basically advanced handling of memory, learning and decision-making, all areas that AI researchers currently work in, though so far without the success and sophistication needed to build fully aware AI.

At Imagination, we are using our PowerVR GPU compute technology to help drive the machine learning revolution.

Our mission is to enable machine learning in low-power, cost-effective platforms where, to date, it has not been possible to support the necessary computational load. We see two models out there: the smartphone world, where the GPU in these platforms is being used to enable smarter AI apps; and autonomous, “loosely” connected sensors, with AI at both the sensor level and in the cloud.

With applications such as autonomous vehicles, more intelligent security systems and context-aware consumer goods, this technology is emerging rapidly.

PowerVR is focused on applying technologies like CNNs because GPUs are well suited to this sort of highly parallel, data-driven computation. CNNs are very real and here to stay. Models trained in the cloud on huge datasets can then be applied on devices like cameras, robots and drones to recognise, manipulate, avoid and engage with objects in the world.
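As an illustration of why that computation suits GPUs, here is a minimal NumPy sketch of a single CNN-style convolution layer; the random weights stand in for parameters that would really be trained in the cloud. Every output element is an independent multiply-accumulate, and it is exactly that independence a GPU exploits by computing them in parallel.

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid 2D convolution (deep-learning convention): each output element
    is an independent multiply-accumulate over a small window, so all of
    them can be computed in parallel on a GPU."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow), dtype=np.float32)
    for y in range(oh):
        for x in range(ow):
            out[y, x] = np.sum(image[y:y+kh, x:x+kw] * kernel)
    return np.maximum(out, 0.0)  # ReLU non-linearity

rng = np.random.default_rng(0)
frame = rng.random((64, 64), dtype=np.float32)            # stand-in camera frame
weights = rng.standard_normal((3, 3)).astype(np.float32)  # stand-in for cloud-trained weights
features = conv2d(frame, weights)
print(features.shape)  # (62, 62) feature map, ready for the next layer
```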

In autonomous vehicles, the ability to recognise a very large number of objects, from road signs to cats, and to understand them in a range of contexts, has to happen instantaneously within the system. In those devices, we need to enable comprehensive real-time in-system intelligence, which is something a GPU’s highly parallel nature makes it ideal for.
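To give a feel for what “instantaneously” means in practice, here is a hedged sketch (the frame times and the 30 fps budget are illustrative assumptions, not figures from any real system) of a perception loop with a hard per-frame deadline:

```python
import time

FRAME_BUDGET_S = 1.0 / 30.0  # ~33 ms per frame at an assumed 30 fps

def process_frame(inference_time_s: float):
    """Stand-in for on-device inference; here it just sleeps for the
    time a (hypothetical) accelerator would take."""
    time.sleep(inference_time_s)
    return ["road_sign", "cat"]  # hypothetical detections

def perception_loop(frame_times):
    for i, t in enumerate(frame_times):
        start = time.perf_counter()
        detections = process_frame(t)
        elapsed = time.perf_counter() - start
        if elapsed > FRAME_BUDGET_S:
            # In a safety-critical system a missed deadline must be handled
            # explicitly (degrade, alert), never silently queued up.
            print(f"frame {i}: DEADLINE MISSED ({elapsed * 1000:.1f} ms)")
        else:
            print(f"frame {i}: {detections} in {elapsed * 1000:.1f} ms")

perception_loop([0.005, 0.010, 0.050])  # the last frame blows the 33 ms budget
```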

This need for real-time edge intelligence applies not just to auto, but also to embryonic markets like security and retail cameras; we are enabling the intelligence as close to the edge as possible, at low power and low cost.

Accelerated and connected minds

We humans have limited inputs: our eyes, our ears, perhaps taste and smell. We have minds that wander but are in one place. The intelligence of our future machines will be very different. For a start, we would like them to be focused on the job at hand; and yet, architecturally, embedded systems are likely to be distributed, with parallel, loosely coupled processors at nodes across the system. We don’t want them to expend energy when they don’t need to, and we will rely on them to be safe and highly reliable, especially if they are acting as our future chauffeurs.

They will have different priorities from us, their minds layered with core tasks at the base and then a hierarchy of options, following the subsumption architecture model already proven in robotics. But like us, they will think about some common concerns: are they safe? Are they healthy? How can they adjust to make their ‘lives’ simpler and more efficient? They will become increasingly self-organising.
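A minimal sketch of that subsumption idea (the behaviours, sensor fields and thresholds are invented for illustration): each layer proposes an action only when its conditions apply, higher layers subsume lower ones, and the core safety layer at the base can override everything.

```python
# Hypothetical sketch of subsumption-style arbitration: core tasks at the
# base, a hierarchy of options above, checked in one pass.

def avoid_collision(sensors):
    # Base layer: the core safety task, always produces an action.
    return "emergency_stop" if sensors["obstacle_m"] < 2.0 else "proceed"

def conserve_energy(sensors):
    # Middle layer: only speaks up when the battery is low.
    return "return_to_dock" if sensors["battery"] < 0.2 else None

def do_mission(sensors):
    # Top layer: the actual job, if nothing below objects.
    return "deliver_package" if sensors["has_package"] else None

LAYERS = [avoid_collision, conserve_energy, do_mission]  # base first

def arbitrate(sensors):
    """Higher layers subsume lower ones: the highest layer that produces
    an action wins, but the base layer's emergency action is absolute."""
    action = None
    for layer in LAYERS:
        proposal = layer(sensors)
        if proposal == "emergency_stop":
            return proposal          # safety overrides everything
        if proposal is not None:
            action = proposal        # a higher layer subsumes a lower one
    return action

print(arbitrate({"obstacle_m": 5.0, "battery": 0.9, "has_package": True}))
# -> deliver_package
print(arbitrate({"obstacle_m": 1.0, "battery": 0.9, "has_package": True}))
# -> emergency_stop
```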

Our partners possess the algorithms and domain knowledge to understand these inputs and apply them to applications. Our role is to create hardware accelerators for that learning and understanding, and to ensure a real-time response to the huge amounts of data coming in from those inputs.

For example, in autonomous vehicles, it’s of no use for our partners to have advanced collision avoidance systems if we cannot provide real-time response. This is an area in which our multi-threaded MIPS CPUs excel.

And there are other kinds of minds coming into play. In IoT, for example, we see many small minds, perhaps based on MIPS MCUs, adding up to a big mind: multiple sensors connected to the cloud, bringing data from hundreds or even thousands of sources, all of which must be consolidated, understood and acted on, with some decisions made locally and some in the cloud.
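A hedged sketch of that local/cloud split (node names, temperatures and thresholds are made up for illustration): each small ‘mind’ acts on what it can decide alone and forwards a summary, while the ‘big mind’ in the cloud looks for patterns no single node can see.

```python
# Hypothetical sketch: many small sensor "minds" deciding locally,
# with only consolidated summaries escalated to the cloud.

def local_decision(node_id: str, temperature_c: float) -> dict:
    """An MCU-class node acts immediately on what it can decide alone."""
    if temperature_c > 60.0:
        return {"node": node_id, "action": "shutdown", "temp": temperature_c}
    return {"node": node_id, "action": "ok", "temp": temperature_c}

def cloud_consolidate(reports) -> str:
    """The 'big mind': looks across many nodes for patterns no single
    node can see, e.g. a site-wide overheating trend."""
    mean_temp = sum(r["temp"] for r in reports) / len(reports)
    return "dispatch_engineer" if mean_temp > 45.0 else "all_nominal"

reports = [local_decision(f"node-{i}", t)
           for i, t in enumerate([41.0, 48.0, 63.0, 50.0])]
for r in reports:
    print(r)                       # local, immediate decisions
print(cloud_consolidate(reports))  # slower, global decision in the cloud
```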

This distribution of intelligence is extremely significant for our Ensigma connectivity IP. Our senses are all around our body, but for the minds of our future machines, those sensory (or data, if you like) inputs are likely to be widely distributed and connected across networks.

You can already see this through intelligent household assistants such as Google Home, Amazon’s Alexa and Apple’s HomePod. They drive connected lives, although today largely through their understanding of a single sense: audio.

Voice is very much entering the mainstream of man/machine interaction; today I talk to my phone, to my car, and to my home and think nothing of it.

But those interactions are all enabled by the connection of local devices to central intelligence, typically in the cloud; whether it is our cars understanding local traffic conditions, our phones telling us the colours of the flag of Bolivia, or our homes helping us to set tomorrow’s thermostat temperature by understanding the weather.

The AI age

I think we are all agreed that AI is a real thing. But we have some doubts; indeed, I know some people think that AI is over-hyped, and I think that’s understandable. Yet AI is data, math, patterns and iterations: all things that we work with today. The intelligence is in the application, and the application is a creation of humans.

The age of AI will soon dawn for many industries: customer support, banking, sales automation, shipping and logistics, agriculture, transportation, security analytics, health care, gaming – really too many to count.

And now is the time to grasp this. It’s a race for sure, and the starting flag has been waved, with some companies, and countries, already racing ahead.

Today we’re seeing the emergence of entirely new safety-critical systems designed with intelligence built in. From autonomous cars to industrial IoT to robotics and beyond, these systems need CPUs designed for both high performance and compliance with functional safety standards. If you’re looking at CPU IP for these applications, your options are few: most embedded functional safety CPUs today offer only limited performance. The new MIPS I6500-F is a highly scalable 64-bit MIPS multiprocessing solution that has been stringently assessed and validated for functional safety (FuSa) compliance with the ISO 26262 and IEC 61508 standards. This makes it ideal for handling the compute-intensive tasks in emerging safety-critical intelligent systems.

Our roadmap, across all our technologies, is being designed with supporting and delivering AI applications as a key concern.

I’d like to make a final point. I see this era not as the rise of the machine, but as the rise of the engineer. This is a period of intense change and invention. Yes, it is enabled by trends such as the amount of data, storage and compute capability available, added to the new capabilities for machine learning. But without the human element, it would be going nowhere. We are the intelligences that matter, and I’m delighted to hear from across the technology industry that this is an area most of us are engaging with. It isn’t just for a couple of huge companies.

We at Imagination exist to support that engagement, first by equipping our customers with the compute, acceleration and connectivity technologies fundamental to applying their algorithms and domain expertise.

The questions around AI are exciting and sometimes scary. Last year we saw a computer beat a Go grandmaster and we also saw an autonomous car crash into a bus. Are these the triumphs and failures of AI or of those of us who are programming and designing those systems?

Can we really know what is going on inside a deep learning machine such as AlphaGo? No, the system has rapidly become too complex to describe. What does that sound like? It sounds like us. And today all of us in the technology sector are making a world in which there will be a third level of growing and aware minds alongside us humans and other animals. I think it’s exciting, and I think posterity will look back on this moment as a huge disruption on par with the invention of the first computers. I’m excited that we are all sharing in this and thrilled to be engaged with so many of its brightest minds – both the ones made of flesh and the ones made of silicon.


About the Author

David was responsible for marketing and communications across all Imagination’s business units. He joined Imagination to lead its communications in 1998. During that time he: developed the PowerVR GPU brand (with some help from Sega and many games developers — thanks!); promoted the successful VideoLogic and Kyro PC GPU board brands; launched PURE Digital, a leading UK consumer audio brand; helped transition Imagination to an IP business model; and managed corporate and marketing communications, internal communications, digital transformation, and analyst relations. David is a fellow of The Chartered Institute of Marketing. David left Imagination in 2023.

