Recently, Imagination returned from the AutoSens show, held at the Michigan Science Center in downtown Detroit and aimed at engineers and other stakeholders in the automotive sensor industry. We first attended last autumn and, suitably impressed by the quality of attendees, this time we decided to both attend and exhibit, to show off Imagination’s ongoing presence in the automotive space.
The show consists of a morning keynote session followed by two parallel afternoon tracks of presentations and panels. The exhibition area was split across two levels, one outside the conference halls and the other on the mezzanine, where we were based for the three days.
Day one (or rather, afternoon one) was devoted to a set of tutorials, while days two and three formed the conference proper. Our table-top demonstration area on the mezzanine also offered an array of giveaways – the Imagination ‘ducks’ are always popular, as are the smartphone stands.
Our focus was on showing our neural network acceleration, both on our GPUs and on an FPGA implementation of our recently announced PowerVR Series2NX cores.
First, we had a real-time demo running live on a Chromebook, showing our ability to quickly detect and classify objects such as faces using a single shot detector (SSD) – impressive, as it runs on a GPU that is now several years old and still achieves excellent performance.
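For context, a single shot detector emits many overlapping candidate boxes per frame, which are then filtered by non-maximum suppression before anything is drawn on screen. Here is a minimal, generic sketch of that filtering step – this is not Imagination’s actual implementation, and the box format and threshold are purely illustrative:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(detections, iou_threshold=0.5):
    """Keep the highest-scoring boxes, dropping any that overlap a kept box.

    detections: list of (score, box) tuples.
    """
    kept = []
    for score, box in sorted(detections, reverse=True):
        if all(iou(box, kept_box) < iou_threshold for _, kept_box in kept):
            kept.append((score, box))
    return kept
```

For example, two heavily overlapping face candidates collapse to the single higher-scoring one, while a distant third detection survives untouched.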
We also showed a video of a similar demo running on our FPGA implementation of the new PowerVR Series2NX. Check out both the object detection and the object classification.
The other demo was GPU-focussed and ran on an Acer Iconia tablet. We have already written a blog post demonstrating that our low-cost and relatively small Series8XE cores are sufficient for handling good-looking 3D models for use in in-car surround-view images.
The car model is shown at various levels of complexity, starting at one million triangles and dropping first to 500,000 and then to 250,000, showing that the lower-count models give away very little in visual fidelity compared to the highest-count model, to the benefit of frame rates.
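The demo compares fixed-complexity models side by side, but in a real renderer the same idea usually appears as runtime level-of-detail selection, e.g. by camera distance. A hypothetical sketch using the demo’s triangle counts (the distance cutoffs are invented for illustration):

```python
# Hypothetical LOD table: (triangle_count, max_camera_distance_in_metres).
# The triangle counts match the demo; the distance cutoffs are made up.
LOD_LEVELS = [
    (1_000_000, 2.0),           # full detail only when the camera is close
    (500_000, 5.0),             # medium detail at mid range
    (250_000, float("inf")),    # lowest detail everywhere else
]

def select_lod(distance_m):
    """Return the triangle budget to render at a given camera distance."""
    for triangles, max_distance in LOD_LEVELS:
        if distance_m <= max_distance:
            return triangles
```

Because the 250,000-triangle model looks nearly as good as the full one, the renderer can spend most frames on the cheap level and keep frame rates high.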
The conference itself covered many topics. Day two concentrated on the latest developments in sensing technologies, while day three focused on AI and neural networks: how they work and how effective they are as part of assistive and autonomous driving.
A common theme for deep learning is the requirement for large datasets to train these networks, so that cars can gain experience of as many different scenarios as possible. According to this report, an autonomous car needs to drive billions of miles to demonstrate reliability – and what and where those miles are matters too: they can’t all be just around the block!
There is also a strong consensus that autonomous cars driving around with, effectively, servers in the boot will not be practical. You could put high-end desktop or server-grade GPUs in the car to do all the processing required for autonomous driving, but the power draw would be so high that the resulting range would be impractical. You wouldn’t have space for your shopping either. (Yes, we know your groceries will come to you, but you’ll still want space to carry stuff.)
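A quick back-of-the-envelope calculation shows why compute power draw matters so much for an electric vehicle. Every number below is an illustrative assumption, not a measured figure, and this is a first-order estimate (it ignores that the shorter range in turn shortens the compute time):

```python
# Illustrative assumptions, not measured figures.
battery_kwh = 60.0                    # mid-size EV battery
drive_efficiency_wh_per_mile = 250.0  # energy used per mile of driving
compute_watts = 1500.0                # multi-GPU, server-class compute load
avg_speed_mph = 30.0                  # mixed urban driving

range_without = battery_kwh * 1000 / drive_efficiency_wh_per_mile
hours_driving = range_without / avg_speed_mph
compute_kwh = compute_watts * hours_driving / 1000
range_with = (battery_kwh - compute_kwh) * 1000 / drive_efficiency_wh_per_mile

print(round(range_without), "miles without compute load")  # 240 miles
print(round(range_with), "miles with compute load")        # 192 miles
```

Even under these generous assumptions the server-class load eats roughly a fifth of the range, which is why low-power, high-efficiency inference hardware is so attractive in this space.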
At the show, the need for processors capable of high-performance neural network acceleration was clear, and with its high inferences/mW our PowerVR NNA is a perfect candidate to address this market.
We look forward to attending more automotive shows such as AutoSens so we can demonstrate to the industry in person why Imagination IP is a great fit for the automotive space.
Thanks also to Stephen Alderman for contributing to this report.