This week Imagination is once again on its travels as we set down in Santa Clara, California for this year’s Embedded Vision Summit (EVS). This event is dedicated to one of the hot topics of the moment – vision and AI in embedded and mobile devices – or as the EVS website describes it, “bringing visual intelligence to products.”
Our devices are getting smarter all the time, and we are gradually getting used to the idea that our smartphones, smart speakers and smart cars will be able to identify us, understand us and recognise the world around them. However, learning how to implement this well in power-constrained devices is challenging and requires leading-edge knowledge.
As with last year, our expert on all things neural network related – Paul Brasnett, Senior Research Manager for PowerVR Vision & AI – is giving a talk. Last year, the topic was how to train your neural network for efficient inferencing (inferencing being the business of identifying objects that your neural network has been trained to recognise). This year, the talk is titled ‘Traditional Vision on DNN IP’, and Paul will explain how the industry can build on its years of experience with traditional vision techniques to take advantage of new, efficient neural network acceleration hardware such as our PowerVR Series2NX NNA, which we will be demonstrating at the show for the first time.
Speaking to us before he hopped on the plane, Paul explained why Imagination always makes a point of heading to the show. “EVS is the only show dedicated to the embedded vision and AI space. Every year the EVS team puts together an excellent program of speakers, demos and tutorials covering a range of highly relevant topics. It is a great place to learn about the latest developments in this very rapidly evolving sector. We’re excited to be demonstrating our PowerVR Series2NX for the first time at the show.”
Naturally, we are presenting a range of demos, including the new Series2NX demo, and it would be ideal if you can come and see us in person (booth 404), where we can talk you through them. In addition, we have our object recognition demo, which uses a number of different network models, such as GoogLeNet and AlexNet.
We also have our number recognition demo, showing off our implementation of OpenVX 1.1.
FaceID-style face recognition is now a common feature on premium smartphones, and our demo from late last year showed it in action before it hit the high street.
To see all of these in person, plus our brand-new Series2NX demo in action, head over to booth 404.
Paul’s talk at EVS is part of the Technical Insights II track and he will be speaking on May 23rd at 2:50 pm local time, in room 203/204. We look forward to seeing you there!