Imagination Technologies is a company with a vision to help our customers create innovative products that will change the world. We are always thinking ahead to see how we can best deliver that future – a future that’s bright, bold and empowering. At the core of this vision are our employees who are key in bringing this future to fruition and have an interesting take on the world.

We, therefore, present a series of interviews with some of our key thinkers, where we’ll learn what makes them tick and then find out what they see coming down the road.

Here, Jim Nicholas, executive vice president of the MIPS business unit, talks to Benny Har-Even, technical communications specialist at Imagination.

With a strong presence in the networking and automotive space, MIPS is the leading alternative architecture for embedded processors and delivers the best performance and lowest power consumption in a given silicon area. 

In part one of our interview with Jim Nicholas, he spoke about how he sees AI and human intelligence combining to create transformational change in the near future. In part two he explains how Imagination Technologies will play a role in making this vision of the future happen.


Jim, when we last spoke you talked about how AI and human brain power will fuse to generate the compute power needed to solve problems that we currently can’t conceive as having realistic solutions. Can you talk a bit more about that?

This is one of the most interesting things moving forward. How do human and artificial intelligences fuse?

At the moment our interaction with computing is largely by touch – typing, for example. More recently, voice has become commonplace with things like Siri and Alexa. There’s also a lot going on in vision processing, so that you’ll be conveying information through gestures and facial expressions.

The other thing that’s going to be interesting is how our thought processes will merge with artificial processing. Already today, we can use brain scans to relate behaviours in the brain to specific stimuli or particular emotions. That’s just a short step away from being able to look at the brain and understand streams of thought. It’s just a matter of technology.

The technology is based on the idea that there’s a lot of data, but what does that data mean? One of the fantastic things about artificial intelligence is that if you’ve got enough compute, you can churn through that data and, over a period of time, learn and understand what all the different patterns mean. You can potentially reconstruct the words that correspond to a person’s stream of thoughts, and you might even be able to convert it into the person’s vision – what they were visualising whilst they were talking.

These seem to be pretty far-fetched concepts. If you had to hazard a guess at a timescale, when do you think this might become a reality?

I know it sounds fantastic, but the world is accelerating faster and faster. I would say it is likely within five years, at least for the elite. I always make a distinction between what the general population sees and what institutions have access to. Large companies of the ilk of Google and Apple, who have access to massive resources, would be able to incubate that kind of capability. In terms of mass deployment, I think you could add another couple of years onto that. My estimate would be within ten years.


But what is intelligence? Do you think the voice assistants we have now could be classed as true artificial intelligence?

Well, things like Alexa or Siri are examples of services that have some basic learning and inference capability. Strictly speaking, there are two aspects to machine learning devices.

One is the actual training, where you subject a neural network to a lot of data and it extracts some common features, enabling it to recognise whether something it later experiences belongs to the kind of data it was trained on or is something new.

Then you’ve got inferencing where, because you’ve already got that experience, the device is able to say, “OK, I see these four or five dots spaced in this way, so that’s going to be a cat.” That’s inferencing; you’ve already done all the effort to work out the characteristics that define a cat – the learning. The inferencing is then being able to use that database, or what you’ve reduced it to, to basically say, “Okay, I can spot a cat, or Benny, or Jim.”

I can do that because there’s a machine learning part that has made all the effort to understand what a person reduces down to, and the inferencing device has now just got to see whether it can spot the four or five things that define them in a visual sense.
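
To make that split concrete, here’s a minimal sketch in Python – a toy nearest-centroid classifier with invented data, rather than anything resembling a production neural network. The expensive training step reduces labelled examples down to a few salient numbers; the cheap inference step uses those numbers to label something new.

```python
import numpy as np

# --- Training: reduce many labelled examples to salient features. ---
# Each "image" here is just a 4-number feature vector, and the learned
# representation is simply the mean vector (centroid) per class.
def train(examples: dict[str, np.ndarray]) -> dict[str, np.ndarray]:
    return {label: data.mean(axis=0) for label, data in examples.items()}

# --- Inference: use the reduced representation to label new input. ---
def infer(model: dict[str, np.ndarray], sample: np.ndarray) -> str:
    return min(model, key=lambda label: np.linalg.norm(sample - model[label]))

# Invented toy data: each row is a feature vector for that class.
examples = {
    "cat":   np.array([[0.9, 0.1, 0.8, 0.2], [0.8, 0.2, 0.9, 0.1]]),
    "human": np.array([[0.1, 0.9, 0.2, 0.8], [0.2, 0.8, 0.1, 0.9]]),
}

model = train(examples)  # the expensive part, done once
print(infer(model, np.array([0.85, 0.15, 0.85, 0.15])))  # cheap: prints "cat"
```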

My belief is that these current voice assistants are a crude implementation of intelligence. The flaw is that from time to time they don’t work all that reliably, but that’s more a feature of the learning part. The learning maybe didn’t reduce the characteristics of what you were saying down to the salient factors as accurately as possible. That’s more an issue of the effectiveness of the machine learning and the inferencing algorithm than of whether it was an intelligent device. I would say they are, basically, intelligent.

I think that during the next one or two years there’s a huge opportunity for IP and SoC companies to make a big push to invest in developing hardware and software that enable machine learning to be done much more reliably.

Can you tell me more about how Imagination is moving towards building this future?

At Imagination, we are focused on the building blocks. We’re addressing compute in a variety of forms, whether it’s GPU, CPU, DSP, ISP, and so on. These different classes of processors are doing the computation – the thinking, as it were. We’re also addressing connectivity. It’s these two elements with which Imagination is preoccupied – compute and connectivity.

So at Imagination, we’re making sure these processors are designed with a holistic view, which then plays into the concept of collective intelligence. Communication and collaborative processing are also supported by technologies such as heterogeneous compute. This is a way of enabling multiple different classes of processors to work together. This is absolutely key to the fusion of human and artificial intelligence.

A CPU is typically referred to as a scalar processor; a GPU is another class of processor, typically referred to as a vector processor. What you do is create a backplane that enables them both to coexist and work on the same data, and therefore collaborate, to produce an overall very effective computation platform.
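
As a rough software analogy – purely illustrative, not Imagination’s actual backplane – the scalar/vector split looks something like this: a vector-style pass applies one operation across a whole shared buffer, while a scalar-style pass makes branchy, element-at-a-time decisions on the same data.

```python
import numpy as np

frame = np.random.rand(1920 * 1080)  # one shared buffer both "processors" see

# Vector-style work (GPU-like): a single operation across the whole buffer.
brightened = np.clip(frame * 1.2, 0.0, 1.0)

# Scalar-style work (CPU-like): branchy, element-at-a-time decision logic.
def classify(pixel: float) -> str:
    if pixel > 0.8:
        return "highlight"
    elif pixel < 0.2:
        return "shadow"
    return "midtone"

# The two collaborate on the same data: the vector pass transforms it in
# bulk, then the scalar pass makes per-element decisions on the result.
print([classify(p) for p in brightened[:10]])
```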

At Imagination, we work on inter-processor collaboration and compute, and we address this through the MIPS heterogeneous compute architecture.

PowerVR addresses things like vision, video and graphics processing, and it can collaborate with the MIPS CPU through the heterogeneous compute backplane. Then, the ability of that subsystem to communicate with another subsystem on another chip can be facilitated very easily by using our Ensigma wireless IP.

The wireless standards are the ones we all know: a flavour of Wi-Fi, whether it’s 802.11ac, 802.11ad 60GHz, or 802.11ah Wi-Fi HaLow; Bluetooth Low Energy for short distances; or even LTE to address substantial ranges of several kilometres.

And this isn’t an abstract activity. We have demonstrated heterogeneous compute technology in one of many AI applications – Advanced Driver Assistance Systems (ADAS) – through our design win with Mobileye. For vision-based ADAS, they are using our heterogeneous compute platform, whereby our MIPS processor is integrated with their own proprietary accelerators that sit on our heterogeneous compute infrastructure.

ADAS provides advice or warnings that enable the driver to take action, but from an autonomous vehicle point of view it goes one step further and makes decisions based on the system’s actual observations.

The basic system of observing, analysing, making a decision about what you’re seeing and then taking an action based on what you believe you’ve seen is an application of artificial intelligence.
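
That observe-analyse-decide-act loop can be sketched in a few lines of Python. The sensor and actuator functions below are placeholders invented for illustration, not any real ADAS API:

```python
import random
import time

def observe() -> float:
    """Placeholder sensor: distance (metres) to the nearest obstacle."""
    return random.uniform(0.0, 50.0)

def analyse(distance: float) -> str:
    """Turn a raw observation into a judgement about the scene."""
    return "hazard" if distance < 10.0 else "clear"

def decide(judgement: str) -> str:
    """Choose an action based on what the system believes it saw."""
    return "brake" if judgement == "hazard" else "maintain_speed"

def act(action: str) -> None:
    """Placeholder actuator: a real system would drive hardware here."""
    print(f"action: {action}")

# ADAS surfaces the judgement as a warning for the driver to act on;
# an autonomous vehicle closes the loop and acts on it directly.
for _ in range(3):
    act(decide(analyse(observe())))
    time.sleep(0.1)
```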

And this also plays into the idea that if vehicles are operating autonomously you have the means by which those vehicles can talk to each other so you can start to do interesting things. If a car encounters a hazard, for example, a sinkhole in the road, the car can communicate that information to any other vehicle in the vicinity so that they can automatically avoid it. You have the combination of the intelligence to interpret what you see and the communications to convey the hazard.
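
A hypothetical vehicle-to-vehicle hazard broadcast might look like the sketch below. The message fields, coordinates and range check are all invented for illustration; real V2V systems are built on standards such as DSRC or C-V2X:

```python
import json
import math
from dataclasses import dataclass, asdict

@dataclass
class HazardMessage:
    hazard_type: str  # e.g. "sinkhole"
    lat: float
    lon: float

def within_range(msg: HazardMessage, car_lat: float, car_lon: float,
                 radius_deg: float = 0.01) -> bool:
    """Crude flat-earth proximity check; real systems use proper geodesics."""
    return math.hypot(msg.lat - car_lat, msg.lon - car_lon) <= radius_deg

# Car A spots a sinkhole and broadcasts it as JSON over the local radio link.
msg = HazardMessage("sinkhole", 51.6553, -0.3960)
payload = json.dumps(asdict(msg))

# Car B receives the payload, decodes it, and reroutes if it is nearby.
received = HazardMessage(**json.loads(payload))
if within_range(received, car_lat=51.6560, car_lon=-0.3950):
    print(f"avoiding {received.hazard_type} ahead")
```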

Finally, are you excited about the possibilities of helping to make this a reality?

The thing I really want to stress is that the fundamental change that’s going to transform all our lives is that the combination of human and artificial intelligence will produce more brain power than we can possibly envisage, and this will solve problems that we can’t even imagine today.

A lot of the stuff that Imagination is doing is helping to make sure that this revolution has the foundations to flourish.

The reason I’m so excited about what I do is that I know that, in one way or another, the ubiquity of the technologies we’re developing is going to play a fundamental role in transforming people’s experiences.

As an optimist about humanity, I think it’s going to substantially improve the health and well-being of people across the world.

Go here for part one of our interview with Jim Nicholas.

You can also follow Imagination on social media on Twitter @ImaginationTech, and on LinkedIn, Facebook and Google+.

Look out for our other Visionary interviews:

The Visionary interview series: Bryce Johnstone, Director of Ecosystems, Segment Marketing, Automotive

The Visionary interview series: Chris Longstaff, Senior Director of Product & Technology Marketing for PowerVR

The Visionary interview series: Simon Forrest, Director of Segment Marketing, Consumer Electronics, Imagination

