Imagination at the Embedded Vision Summit 2017

The Imagination PowerVR team are busily preparing for the Embedded Vision Summit 2017 (EVS), taking place from 1 to 3 May in Santa Clara. EVS is a great industry event for everyone involved with vision and the surrounding technologies: from IP creators like Imagination, through the semiconductor companies, to algorithm developers and equipment OEMs, anyone with an interest in computer vision is likely to be in attendance.

Embedded Vision Summit

PowerVR continues to develop its vision processing capabilities, both in GPU-based vision processing and in dedicated hardware, including our ISP family. Many of our customers see a real benefit in being able to perform vision processing tasks on the GPU, avoiding the need for separate, expensive, dedicated hardware. In many applications the GPU is typically underutilised while vision tasks are running; in computational photography, for example, the GPU may only be required to draw a simple UI. In these cases, the spare GPU capacity can be put to work to provide the main vision processing capability of the SoC. This year's launch of the PowerVR Series8XE Plus GPUs gave a significant boost to this vision processing capability in the entry- to mid-range by increasing compute throughput (FLOPS), while still offering a significant area saving versus competing solutions.
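
To make the idea concrete, here is a minimal, illustrative sketch (not taken from our demos) of the kind of OpenCL C kernel that spare GPU capacity could run: a simple RGBA-to-luma conversion of the sort found at the start of many vision and computational photography pipelines. The kernel name is an assumption for illustration, and it needs the usual OpenCL host-side setup to build it and enqueue one work-item per pixel.

__kernel void rgba_to_luma(__read_only  image2d_t src,
                           __write_only image2d_t dst)
{
    /* Clamp at the edges and sample exact pixel centres. */
    const sampler_t smp = CLK_NORMALIZED_COORDS_FALSE |
                          CLK_ADDRESS_CLAMP_TO_EDGE |
                          CLK_FILTER_NEAREST;
    int2 pos = (int2)(get_global_id(0), get_global_id(1));

    /* Read the RGBA pixel as normalised floats (0.0 to 1.0). */
    float4 rgba = read_imagef(src, smp, pos);

    /* Rec. 709 luma weights. */
    float luma = dot(rgba.xyz, (float3)(0.2126f, 0.7152f, 0.0722f));

    /* Store the result; a single-channel destination image keeps only .x. */
    write_imagef(dst, pos, (float4)(luma, luma, luma, 1.0f));
}

On a GPU that is otherwise only drawing a simple UI, kernels like this can run across the whole image in parallel without adding any dedicated vision hardware to the SoC.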

Just recently, we also announced our new PowerVR Furian architecture, which brings many architectural enhancements for better vision processing: multi-threaded (scatter/gather) access to local GPU memory, DMA access to the GPU, and lower-overhead access to GPU compute that avoids kernel-mode transitions and a separate compute data path, to name but a few. Of course, some operations, such as processing the raw data from a CMOS sensor, are better performed on dedicated hardware, and this is where we rely on our ISP family.

Furian Iguana

Imagination is again an event sponsor of the Embedded Vision Summit this year, and as such we will be exhibiting in the Technology Showcase. Our engineers are creating some exciting new demonstrations of Imagination's leading vision processing technologies. As ever with technology demos, these are likely to be only fully ready days, if not hours, before the event kicks off, so I don't want to give too much away in terms of spoilers. We are also working on a third-party demo from one of our licensees, who has some exciting new silicon utilising multiple IP cores from Imagination. If we get confirmation in time, we'll be sure to update this blog post with more details.

I can say that there will be several demos based around CNNs running efficiently on PowerVR GPUs. Imagination continues to be a strong supporter of Khronos and its open standards and will be demonstrating OpenVX. Imagination offered the first OpenVX 1.1-conformant solution late last year when the PowerVR GPU passed the conformance test. We are building on this implementation with the adoption of the OpenVX CNN extensions and will be showing this in action at the Embedded Vision Summit. OpenVX is gathering momentum as the preferred framework for real-world deployment of vision applications, and the adoption of the CNN extensions will further this. There is a full training day on 3 May, during which I will give a short implementer presentation.
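
For those unfamiliar with OpenVX, here is a minimal sketch in C of how a vision pipeline is expressed as a graph, assuming a standard OpenVX 1.1 implementation; the 640x480 image size and the Sobel-plus-magnitude pipeline are purely illustrative and not taken from our demos.

#include <VX/vx.h>
#include <stdio.h>

int main(void)
{
    vx_context context = vxCreateContext();
    vx_graph graph = vxCreateGraph(context);

    /* Input and output images, plus virtual images for intermediates that
       the implementation is free to keep in on-chip memory. */
    vx_image input  = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);
    vx_image grad_x = vxCreateVirtualImage(graph, 640, 480, VX_DF_IMAGE_S16);
    vx_image grad_y = vxCreateVirtualImage(graph, 640, 480, VX_DF_IMAGE_S16);
    vx_image output = vxCreateImage(context, 640, 480, VX_DF_IMAGE_S16);

    /* A simple edge-strength pipeline: Sobel gradients, then magnitude. */
    vxSobel3x3Node(graph, input, grad_x, grad_y);
    vxMagnitudeNode(graph, grad_x, grad_y, output);

    /* Verification is where the implementation validates the graph and
       optimises it for the target hardware before execution. */
    if (vxVerifyGraph(graph) == VX_SUCCESS)
        vxProcessGraph(graph);
    else
        printf("OpenVX graph verification failed\n");

    vxReleaseImage(&input);
    vxReleaseImage(&output);
    vxReleaseGraph(&graph);
    vxReleaseContext(&context);
    return 0;
}

Because the whole pipeline is described up front as a graph, a GPU-based implementation can map and schedule it efficiently on the hardware, and the CNN extensions follow the same pattern, adding nodes for neural network layers so a trained network can be deployed as just another OpenVX graph.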

The presentations are always a highlight of the Embedded Vision Summit. This year's program looks to be very interesting, with a wide variety of topics covered. It's difficult to pick from the comprehensive list, but here are a couple that I am really looking forward to:

The first presentation I’ve highlighted is by Jeff McVeigh of Intel. He will address the important question of where vision algorithms and applications will come from: will they be off the shelf, or built in house? I'm interested to see Jeff's perspective, given the massive recent investments in vision (Altera, Movidius, Mobileye) that Intel has made and continues to make.

I’ve highlighted this second talk for a number of reasons, but primarily because it manages to combine computer vision with a traditional British pub activity: no, not beer drinking, but playing darts! This looks to be a really interesting talk, in particular for its use of vision-only cameras. It also highlights how much more accessible computer vision has become compared to only a few years ago. If my few words have not sold this talk to you, then surely the YouTube video clip will.
Paul Brasnett

And of course, I couldn’t leave the discussion of presentations without highlighting the talk due to be given by Imagination’s Paul Brasnett. Paul’s talk will look at the choices we can make when training neural networks, and how those choices impact the performance, bandwidth and power of inference.
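
As a purely hypothetical illustration of why those training-time choices matter (the parameter count below is made up and is not from Paul's talk), consider just the weight traffic at different precisions: quantising a network during or after training directly shrinks the data an inference engine must fetch, and with it the bandwidth and power cost.

#include <stdio.h>
#include <stdint.h>

/* Hypothetical example: weight-storage footprint for a network with an
   assumed 4.2 million parameters at different precisions. Less data per
   inference generally means less bandwidth and less power. */
int main(void)
{
    const uint64_t params = 4200000ULL; /* assumed parameter count */
    const double mib = 1024.0 * 1024.0;

    printf("fp32 weights: %5.1f MiB\n", params * 4 / mib);
    printf("fp16 weights: %5.1f MiB\n", params * 2 / mib);
    printf("int8 weights: %5.1f MiB\n", params * 1 / mib);
    return 0;
}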

We would love to meet you to talk about your vision processing needs, chew the fat over OpenVX, or just debate whether pool or darts is the greatest pub game in the world! You can contact us and we will be pleased to set up a meeting, or just drop by our stand in the exhibition hall and say “hi”.

 
