Walking the halls of Mobile World Congress in Barcelona, you’d think that everything is (and should be) about smartphones, tablets or wearables. However, this year marked a change for MWC – and several other big trade shows: many exhibitors were there to talk about solutions for fast-growing markets beyond mobile, and some of the more interesting announcements came from players in the networking, IoT and automotive spaces, for example.

This brings me to Mobileye, a true pioneer in designing solutions for ADAS (Advanced Driver Assistance Systems). The company has a tight grip on the market, with a client base of more than 20 car manufacturers including Audi, BMW, Fiat, Ford, General Motors, Honda, Nissan, Peugeot, Citroën, Renault, Volvo and Tesla.

Recently, an Audi A7 prototype used a MIPS-based Mobileye chip to complete a fully autonomous 550 mile drive; the car reached speeds of up to 70 mph and changed lanes without any human intervention.

The Audi prototype completed a 550 mile piloted drive from Silicon Valley to Las Vegas

After introducing the A7 prototype, Audi announced that its self-piloting zFAS system (short for zentrales Fahrerassistenzsteuergerät, the central driver assistance controller) will be developed by Delphi using technology from Mobileye.

The new Mobileye EyeQ4 chip sports multiple MIPS CPUs

Last month, Mobileye announced EyeQ4, a new chip built on a many-core architecture designed for computer vision processing in ADAS applications (e.g. collision detection and avoidance). EyeQ4 delivers the kind of performance you’d expect from a supercomputer (2.5 TFLOPS) at an amazingly low 3 watts of power – more than 800 GFLOPS per watt.

Mobileye’s new EyeQ4 vision processor targets autonomous driving

To achieve this impressive feat of engineering, Mobileye has designed a truly heterogeneous architecture that combines several on-chip processors, including multiple MIPS CPUs (a conceptual sketch of how the work might be divided follows the list):

  • A cluster of quad-core MIPS I-class CPUs clocked at 1 GHz and featuring our innovative multi-threading technology for efficient handling of control and data management tasks.
  • Multiple specialized Vector Microcode Processors (VMPs) that take care of ADAS-related image processing tasks (e.g. scaling and pre-processing, warping, tracking, lane marking detection, road geometry detection, filters and histograms)
  • A MIPS M-class Warrior CPU that sits at the heart of the Peripheral Transport Manager (PTM), taking care of on- and off-chip general data transactions.
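To make the division of labor more tangible, here is a conceptual C sketch of how a frame-processing loop might be partitioned across these engines. To be clear, everything below is hypothetical: the function names and the task split are invented for illustration and stubbed so the sketch compiles and runs; Mobileye’s actual firmware interfaces are not public.

/*
 * Conceptual sketch only: the control loop runs on the multi-threaded
 * MIPS I-class cluster, the VMPs crunch pixels, and the PTM moves data
 * on and off chip. All APIs below are invented stand-ins, NOT
 * Mobileye's real interfaces.
 */
#include <stdbool.h>
#include <stdio.h>

typedef struct { int id; } frame_t;

/* Stub for the PTM: pretend to DMA three camera frames on-chip. */
static bool ptm_dma_next_frame(frame_t *f)
{
    static int n = 0;
    if (n >= 3) return false;
    f->id = n++;
    return true;
}

/* Stub for a VMP: queue a named vision task against a frame. */
static void vmp_submit(int vmp, const char *task, const frame_t *f)
{
    printf("frame %d: VMP%d runs %s\n", f->id, vmp, task);
}

/* Stub barrier: wait until all queued VMP work is done. */
static void vmp_wait_all(void) { }

int main(void)
{
    frame_t frame;

    /* Orchestration stays on the CPU cluster; per-pixel work goes
     * to the specialized vector engines. */
    while (ptm_dma_next_frame(&frame)) {
        vmp_submit(0, "scaling + pre-processing", &frame);
        vmp_submit(1, "lane-marking detection",   &frame);
        vmp_submit(2, "road-geometry detection",  &frame);
        vmp_submit(3, "object tracking",          &frame);
        vmp_wait_all();
    }
    return 0;
}

The point is the shape of the design: a multi-threaded CPU cluster orchestrating, specialized vector engines doing the heavy lifting, and a dedicated transport manager keeping data moving.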

The diagram below offers an overview of the processor’s architecture; you can also read more about it in the press release here.

Mobileye EyeQ4 uses multiple MIPS CPUs

MIPS M5150 is a top-class microcontroller for industrial applications

This announcement from Mobileye highlights two important advantages of the MIPS architecture.

First, it is a great example of how companies can use our multi-threading capabilities to boost performance in embedded applications that require superior compute capabilities at ultra-low power; you can read more about MIPS multi-threading (MIPS MT) here and see some relevant examples here and here.

However, for me personally, this product represents a very important milestone: it is the first publicly announced product to feature a MIPS Warrior CPU. For those who missed the MIPS M5150 launch, here’s a quick recap:

  • Based on the latest MIPS Release 5 architecture, the M5150 delivers class-leading performance (1.57 DMIPS/MHz and 3.44 CoreMark/MHz).
  • Superior virtualization support enables multiple guest operating systems to run in parallel on the same MCU.
  • Improved anti-tamper security feature provides resistance to unwanted access to the processor.
  • Optional FPU delivers high-performance support of both single and double precision instructions (IEEE 754).

The architecture of the MIPS M5150 CPU

These microcontroller-class MIPS processors are ideal for handling control functions in industrial applications, including automotive.

For example, MIPS MCUs can be deployed inside a car’s engine control unit (ECU); the processor gathers data from dozens of sensors and calculates the optimal spark and fuel injector timing, which leads to lower emissions and better mileage.
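To give a flavor of the kind of control code involved, here is a heavily simplified, self-contained C sketch of that calculation. The map values, sensor readings and injector flow rate are all invented for illustration; a real ECU uses multi-dimensional calibration maps and closed-loop feedback.

/*
 * Toy ECU math: spark advance from an RPM-indexed map, injector pulse
 * width from air mass. All numbers are made up for the example.
 */
#include <stdio.h>

#define MAP_POINTS 4

/* Simplified spark-advance map: degrees before top dead center (BTDC),
 * indexed by engine speed. Real ECUs use 2D/3D maps (RPM x load x temp). */
static const int   rpm_axis[MAP_POINTS]  = { 1000, 2500, 4000, 6000 };
static const float spark_map[MAP_POINTS] = { 10.0f, 22.0f, 30.0f, 34.0f };

/* Linear interpolation over the map axis. */
static float interp(const int *xs, const float *ys, int n, int x)
{
    if (x <= xs[0]) return ys[0];
    for (int i = 1; i < n; i++) {
        if (x <= xs[i]) {
            float t = (float)(x - xs[i - 1]) / (float)(xs[i] - xs[i - 1]);
            return ys[i - 1] + t * (ys[i] - ys[i - 1]);
        }
    }
    return ys[n - 1];
}

/* Injector pulse width from per-cylinder air mass, targeting the
 * stoichiometric 14.7:1 air-fuel ratio for gasoline. */
static float injector_pulse_ms(float air_mg, float flow_mg_per_ms)
{
    float fuel_mg = air_mg / 14.7f;
    return fuel_mg / flow_mg_per_ms;
}

int main(void)
{
    int   rpm    = 3200;    /* from the crankshaft position sensor (invented) */
    float air_mg = 450.0f;  /* per-cylinder air mass from the MAF sensor (invented) */

    printf("spark advance : %.1f deg BTDC\n",
           interp(rpm_axis, spark_map, MAP_POINTS, rpm));
    printf("injector pulse: %.2f ms\n", injector_pulse_ms(air_mg, 5.0f));
    return 0;
}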

Final words

I’m very excited to see more and more partners using our technologies in innovative ways. For example, we’ve recently reported that MIPS CPUs are gaining very good traction in networking processors, including 4G modems, mmWave communications and high-end 64-bit enterprise chips (see here, here and here, respectively).

With a record number of new agreements signed for MIPS Warrior and Aptiv CPUs (including one with a Tier 1 customer), we are definitely going to see some very impressive designs coming to market over the next year, targeting a range of established and emerging markets.

For the latest MIPS-related announcements from Imagination, make sure you also follow us on Twitter (@ImaginationTech, @MIPSguru, @MIPSdev).

Comments

  • Dinka Doohickey

    Why not have MIPS MSA available as an option across the board? I mean, even very low-end ARM Cortex-A7s have NEON…

    • Hi,

      MSA is a new feature introduced with the MIPS Warrior CPUs (Release 5 and upwards). Older MIPS CPUs feature various DSP extensions and other instructions suitable for multimedia processing.

      Regards,
      Alex.

      • dinka doohickey

        why isn’t SIMD offered across the board in R5 as an option?

        • Mid-range (I-class) and high-end (P-class) Release 5 MIPS CPUs (P5600, I6400) implement MSA since it is a feature mostly requested by developers looking to accelerate multimedia processing or other compute tasks; architectural licensees also have access to MSA.
          https://www.imgtec.com/mips/architectures/simd.asp

          For M-class CPUs, we offer power- and area-optimized DSP instructions which are better suited to the type of tasks a microcontroller would usually run (e.g. voice processing). A full-blown SIMD engine in a microcontroller would incur a significant increase in power consumption and silicon area – two vital considerations for system architects designing embedded processors.

          Regards,
          Alex.

          • Dinka Doohickey

            is there any MSA adoption yet?

          • There will be an article coming soon about MSA.

  • roninja

    Mobileye stated on their website that they were not looking at a GPU compute solution as an additional SoC enabler. I think they use some other proprietary compute engine?
    Alex, can you elaborate on why a GPU would not be beneficial for ADAS?

    • When it comes to implementing an algorithm, there are usually two approaches one can take: use a dedicated (fast, low-power) piece of hardware to do the job, or rely on a (flexible) software implementation; each has its own trade-offs. This is how, for example, we’ve moved from software-only graphics running on a bulky CPU to a dedicated GPU running an optimized API.

      In the case of Mobileye, the ADAS functions described above are mainly handled by the VMPs while the main application processor (this is a separate chip in the example above) is freed up to run the operating system (UI, apps, etc.).

      However, not all vendors have the means (whether due to manufacturing cost or silicon area) to build or incorporate a dedicated ADAS processor. This is where a GPU compute approach is ideal, since most – if not all – automotive application processors today already incorporate a graphics processor.

      So if you don’t have an ADAS chip on board, you can take advantage of the hundreds of GFLOPS provided by a PowerVR Rogue GPU and implement those lane detection algorithms (and other ADAS functions) on the graphics engine. This is why we’ve created an ecosystem around GPU compute on PowerVR; here’s the latest example from CES 2015:
      http://www.luxoft.com/pr/luxoft-becomes-strategic-technology-partner-of-imagination-technologies/

      http://www.koreaittimes.com/story/43908/imagination-announces-powervr-imaging-framework-android
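      To make the GPU compute idea more concrete, here is a minimal OpenCL C kernel of the kind that could form one early stage of a lane detection pipeline: lane markings show up as strong vertical edges, so a horizontal intensity gradient is a common first step. This is a generic sketch I’m writing for illustration, not code from the frameworks linked above.

      // Horizontal Sobel gradient over a grayscale image, one work-item
      // per pixel. A building block for lane-marking detection, not a
      // complete ADAS function. Border pixels are simply skipped.
      __kernel void sobel_x(__global const uchar *src,
                            __global uchar *dst,
                            const int width,
                            const int height)
      {
          int x = get_global_id(0);
          int y = get_global_id(1);
          if (x < 1 || y < 1 || x >= width - 1 || y >= height - 1)
              return;

          int idx = y * width + x;
          // 3x3 horizontal Sobel operator
          int gx = -src[idx - width - 1] + src[idx - width + 1]
                   - 2 * src[idx - 1]    + 2 * src[idx + 1]
                   - src[idx + width - 1] + src[idx + width + 1];

          uint mag = abs(gx);  // integer abs() returns unsigned in OpenCL C
          dst[idx] = (uchar)min(mag, (uint)255);
      }

      A real pipeline would follow this with thresholding and a line fit (e.g. a Hough transform), but the kernel shows why the workload maps so naturally onto the GPU’s parallel ALUs.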

      At the end of the day, we provide a diverse portfolio of IP for many markets, including automotive. Our customers can then license that IP and work with us to decide what is the best choice for them.

      Hope this helps, please let me know if you have further questions.

      Regards,
      Alex.

    • AndrewW

      Great question! Why not let Mobileye Co-Founder, Chairman & CTO Prof. Amnon Shashua explain it? See here, https://www.youtube.com/watch?v=kp3ik5f3-2c .

      Basically, GPUs are not really optimized for embedded computer vision. In this particular application they are plagued by relatively high power consumption and poor CPU utilization, especially when you’re running deep learning convolutional neural networks.

  • Younggi Song

    Hi,

    Based on EyeQ3, is there any possibility of sensor fusion with camera and radar, for example?
