Imagination Technologies is a company with a vision to help our customers create innovative products that will change the world. We are always thinking ahead to see how we can best deliver that future – a future that’s bright, bold and empowering. At the core of this vision are our employees. They are key in bringing this future to fruition.
We, therefore, present a series of interviews with some of our key thinkers, where we’ll learn what makes them tick and then find out what they see coming down the road.
This time we speak to Russell James, Vice President of Vision & AI, talking to Benny Har-Even, Technical Communications Specialist at Imagination.
With AI taking up a lot of column inches in the mainstream press, we ask Russell for his thoughts on the technology. He gives us his outlook on what the future may hold and how Imagination Technologies will help us get there.
Russell, tell me a little about your background. What’s led you down the path to becoming a technologist?
I come from a family of electronic engineers. My dad’s an electronic engineer and both my brothers followed in his footsteps. I diverged slightly in that I went into the field of ASIC design in the semiconductor industry, starting at Hitachi. Two years later I moved to Imagination Technologies, where I stayed for 10 years, until 2011. Next was Altera, moving into FPGAs, where I focussed heavily on video codecs and video processing.
I returned to Imagination into what was the Video & Vision group as director of camera ISP. At the start of 2017, I moved into the VP role for the group and changed the name to Vision & AI to more accurately describe the direction in which we wanted to go – the emerging computer vision and the AI space. To begin with, this was convolutional neural networks and then more into the wider deep neural network space. Now the focus has expanded further to cover many different products in computer vision and AI.
So what has led to this new focus on AI within Imagination? What has changed?
What’s interesting is that while things have changed, the technology is not actually new. A lot of the algorithms and research used today in products branded ‘AI’ go back 30 or 40 years. I remember doing a course on fuzzy logic at university, and neural networks have been around for a very long time. However, it’s only recently that the performance levels in embedded devices have reached a point where it’s become feasible to run them. A supercomputer with a thousand-core cluster is one thing – but what if you want to do it on your mobile phone? That’s where Imagination’s technology comes into play. The combination of the hardware that we have today and the hardware that’s being built for tomorrow signifies a step change in the performance levels that are possible, especially when you have dedicated hardware for things like neural networks being put into everyday devices.
In terms of creating IP for AI-powered products, there are many challenges. How is Imagination looking to address these?
What Imagination can bring to this space is the right technology, optimised in the right way and compressed down to a minimum set of features that enable these applications to run on small, embedded devices, such as smartphones. Imagination’s expertise and skills in the area of low-power, high-performance embedded IP are going to be intrinsic to the success of our customers’ products.
Our first IP in this area is a dedicated neural network accelerator called the PowerVR Series2NX. The first configuration is the AX2180. Imagination has stamped its technical authority on this device, making it much more flexible than what’s achievable with a DSP, for example.
This is done by enabling low-precision data types at a level of granularity all the way down to four bits. However, we can also support five or seven bits without paying the padding cost that an 8-bit-only engine would require. This really enables network designers to optimise their network models for power, bandwidth and performance, as well as the accuracy of the result.
We’ve done a lot of research into low-precision neural networks and optimising them for the quality of the result. That research has gone into the NNA and the tools used to map networks onto its hardware. That’s where our benefit really lies in that we’ve used those decades of work, of person-years, in terms of the research into low-precision to make the IP as flexible and powerful as possible for execution on an embedded device.
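As an illustrative aside (this is not Imagination’s actual tooling, just a minimal sketch of the general principle Russell describes), the trade-off between bit-width and accuracy in low-precision networks can be shown with a simple symmetric weight quantiser: fewer bits mean less storage and bandwidth per weight, at the cost of larger rounding error.

```python
import random

def quantize(weights, bits):
    # Symmetric linear quantisation: map floats onto signed n-bit integers.
    qmax = 2 ** (bits - 1) - 1  # 7 for 4-bit, 127 for 8-bit
    scale = max(abs(w) for w in weights) / qmax
    q = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate float values from the integer codes.
    return [v * scale for v in q]

random.seed(0)
weights = [random.gauss(0.0, 1.0) for _ in range(1000)]

for bits in (4, 8):
    q, scale = quantize(weights, bits)
    restored = dequantize(q, scale)
    err = sum(abs(a - b) for a, b in zip(weights, restored)) / len(weights)
    print(f"{bits}-bit: mean absolute rounding error {err:.4f}")
```

Running this shows the 4-bit version using half the storage of 8-bit but with a noticeably larger mean error — exactly the kind of power/bandwidth/accuracy balance a network designer tunes per layer.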
How will that come into play in the real world? What will the benefits be of having this technology in embedded devices?
It will be of benefit in all sorts of ways. Suddenly, it’s possible that your doorbell will recognise you, security cameras will recognise threats and your smartphone will intelligently sort images, and classify objects. And crucially, all of this will be done in real-time without relying on cloud processing. That’s just the start. I’ve no doubt that the human mind will think up many different ways in which this technology can be used, some of which will no doubt surprise.
Do you see this as having a transformative effect on society? How might it affect the world in 30 years?
In terms of how I think the world would look in 30 years, it may not be as significantly different as Hollywood sci-fi films like to portray, but I think changes such as self-driving cars will have a major impact on the way the world works. For example, you can easily see automotive transport becoming a service rather than the car ownership model that we have today. Why would you need to own a car if you can just call one to come to your door? I see there being quite a change in business models around how some things are owned and used.
I also see AI becoming more prevalent in certain types of decision-making, as it can rapidly assess and sort a large amount of information without needing a lot of supervision. There are concerns there about mistakes being made, and concerns over bias, so you do still need a human element to this – someone to supervise. But sorting big data manually is a very slow process, and using AI to enable more data to be processed, sorted and understood will be transformative in terms of the rate of progress of society.
I see use-cases in medical, space exploration, astronomy, financial fraud detection, general banking, and stock market analysis. There’s an endless supply of big data applications out there where AI could be used.
Tell me more about your concerns over mistakes by AI. How would these manifest themselves?
If we take neural networks and the way they behave, they’re less deterministic than a fixed-function algorithm that you might deploy for a certain task. That doesn’t mean they’re less accurate in their results, but you may have unknown outliers in those results. They will not be 100% perfect in the answers they give.
Depending on how good your AI application is at doing a data sort and classifying information into one bucket or another, it won’t always make the right choice. But on the whole, or on average, the number of right choices it makes will be better than what a human could do, or what existing algorithms deployed in software or on discrete devices could achieve.
But there are always mistakes and errors in life – it’s a balance of good and bad, really. So while we’ll need to work towards a reduction in these mistakes, overall, it will enable a lot more to be achieved in the future than can be done today.
Let’s talk about shorter to medium term benefits. What impact will AI have on the day-to-day, within a five-year timescale?
Initially, in the short-term, I think the impact will be superficial. There’s not going to be a lot that makes a huge impact in terms of the world and how it works. The consumer-based products that we have now – the smart assistants, the voice-powered speakers – they are not AI in the classical understanding of what AI means. Today, AI really is a marketing buzzword used to sell more products. But it’s not actually self-aware or intelligent in its own right. It’s just seemingly more intelligent than we’ve seen before.
However, within five years I think AI will enable autonomous driving, which I think will be a very significant benefit to society. The reality is that most accidents are caused by human error – very few are random events, so we’ll be able to drastically reduce those.
Security and surveillance is another area. Clearly, there are privacy concerns of which we need to be mindful, but there are many benefits as well. It’s impossible to have enough people monitoring cameras and feeds, to pick up on all the potential dangerous events or crimes that might be committed and caught on those cameras. Having something attached to all of the cameras that possesses a level of smart processing that’s capable of detecting certain events and then flagging these back to a control centre will very much help to reduce crime or spot crime as it’s occurring.
We’ve seen illustrious figures such as Stephen Hawking and Elon Musk expressing grave concerns over the development of AI. What’s your take on their views?
It’s impossible to predict the future, but given where we’re at now, I don’t think we’re in any danger of developing Skynet in the near future. What we have got, though, is something that’s new and hasn’t been seen before. If you had gone back five years and said you were going to build a computer that could beat the world champion at Go, you wouldn’t have been taken seriously.
In essence, what that machine has done is played many millions of games against real opponents, using matches that have been recorded in the past, but mainly against itself or different versions of itself. It’s done that over and over again, but at the speed of a high-performance computer. It learns as it goes and it gets extremely good at what it’s doing. But it’s learning a single task – it’s not learning anything else. Therefore, it’s not a general AI, it’s very specific and I don’t think that makes it dangerous because it doesn’t have enough awareness or knowledge of anything other than this one task.
However, if you were instead to try to build something that was more of a model of the human brain and teach it life experiences, that would be a different thing. But in terms of the technology of the moment – neural networks – they are tiny in comparison to what the human brain develops over its lifetime. Our PowerVR Series2NX AX2180, super smart as it is, has approximately the compute performance of the brain of a honey bee.
Do I think that these are going to grow at an exponential rate, enough to be capable of something more dangerous? I’m not sure. I think these technologies will be focused on very specific tasks and on doing those tasks as well as they can, and that’s it. I think there is a desire to build better general artificial intelligence, but I don’t think we’re really a lot closer now than we were five years ago.
But are you still looking forward to how far this technology can take us?
Absolutely. Personally, this is the most exciting area that I’ve worked on for a long time. I think what will have a transformative impact is human nature’s drive to really expand how this technology can be used. In five years’ time, there will be use-cases that we never even considered today. I’m eagerly waiting to see what these might be.
Follow Imagination on social media on Twitter @ImaginationTech and on LinkedIn, Facebook and Google+.
Look out for our other Visionary interviews:
The Visionary interview series: Jim Nicholas on next-level tech: when human intelligence and AI combine: Part 1
The Visionary interview series: Chris Longstaff, Senior Director of Product & Technology Marketing for PowerVR
The Visionary interview series: Bryce Johnstone, Director of Ecosystems, Segment Marketing, Automotive
The Visionary interview series: Simon Forrest, Director of Segment Marketing, Consumer Electronics, Imagination