Over the years, our close partnership with MediaTek has resulted in the release of some very innovative platforms that have set important benchmarks for high-end gaming, smooth UIs and advanced browser-based, graphics-rich applications in smartphones, tablets and other mobile devices. Two recent examples are the MT8125 and the new MT8135.

MediaTek has been steadily establishing itself as an important global player for consumer products like smartphones, tablets and smart TVs, with a strong foothold in Latin America and Asia, and a rapidly growing presence in Europe and North America. Earlier this year, MediaTek introduced MT8125, one of their most successful tablet chipsets for high-end multimedia capabilities.

The MT8125 application processor from MediaTek has a PowerVR Series5XT SGX544MP GPU

While MT8125 has been extremely popular with OEMs including Asus, Acer and Lenovo, MT8135 has the potential to consolidate MediaTek’s existing customer base and open up exciting new opportunities thanks to the advanced feature set provided by Imagination’s PowerVR ‘Rogue’ architecture.

MT8135 is a quad-core SoC that aims for the middle- to high-end tier of the tablet OEM market. It supports a 4-in-1 connectivity package that includes Wi-Fi, Bluetooth 4.0, GPS and FM radio, all developed in-house by MediaTek. Miracast is another important addition to the multimedia package, enabling devices using MT8135 to stream high-resolution content more easily to compatible displays over wireless networks.

The MT8135 application processor from MediaTek has a PowerVR Series6 G6200 GPU

MT8135 incorporates a PowerVR G6200 GPU from Imagination that enables advanced mobile graphics and compute applications for the mainstream consumer market, including fast gaming, 3D navigation and location-based services, camera vision, image processing, augmented reality applications, and smooth, high-resolution user interfaces.

A block diagram of the PowerVR G6200 GPU core

As MT8135-powered mobile devices start appearing in the market, developers will have access to new technologies and features introduced by our PowerVR Series6 family such as:

  • our latest-generation tile-based deferred rendering (TBDR) architecture, implemented on universal scalable clusters (USC)
  • high-efficiency compression technologies that reduce memory bandwidth requirements, including lossless geometry compression and PVRTC/PVRTC2 texture compression (see the sketch after this list)
  • scalar processing to ensure the highest ALU utilization and easier programming
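
To make the texture compression point concrete, here is a minimal sketch of how an application might upload a PVRTC-compressed texture under OpenGL ES 2.0. This is an editor’s generic example against the standard Khronos headers, not MediaTek or Imagination sample code; the helper name and the texture data are placeholders, and since PVRTC is exposed through the GL_IMG_texture_compression_pvrtc extension, the code probes for it first.

    /* Hypothetical helper: upload a 4bpp PVRTC1 texture on OpenGL ES 2.0.
     * PVRTC is an optional extension, so probe the extension string first. */
    #include <string.h>
    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>

    GLuint upload_pvrtc_4bpp(const void *data, GLsizei w, GLsizei h)
    {
        const char *ext = (const char *)glGetString(GL_EXTENSIONS);
        if (ext == NULL || strstr(ext, "GL_IMG_texture_compression_pvrtc") == NULL)
            return 0; /* no PVRTC support on this GPU/driver */

        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);

        /* For power-of-two dimensions >= 8, a 4bpp PVRTC1 image is w*h/2 bytes. */
        GLsizei size = w * h / 2;
        glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                               GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG,
                               w, h, 0, size, data);
        return tex;
    }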

Thanks to the PowerVR G6200 GPU inside the MT8135 application processor, MediaTek brings high-quality, low-power graphics to unprecedented levels by delivering up to four times more ALU horsepower compared to MT8125, its PowerVR Series5XT-based predecessor. PowerVR G6200 fully supports a wide range of graphics APIs including OpenGL ES 1.1, 2.0 and 3.0, OpenGL 3.x, and DirectX 10_0, along with compute programming interfaces such as OpenCL 1.x, Renderscript and Filterscript. Future PowerVR GPUs will be able to scale up to OpenGL 4.x and DirectX 11_2.
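
On the compute side, a minimal OpenCL 1.x host program written against the standard Khronos headers can confirm which profile and version a device’s driver reports; on embedded GPUs this would typically be an embedded profile. This is a generic sketch with error handling elided, not vendor code:

    /* Query the first GPU device's OpenCL profile and version strings. */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platform;
        cl_device_id device;
        char profile[64], version[64];

        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
        clGetDeviceInfo(device, CL_DEVICE_PROFILE, sizeof profile, profile, NULL);
        clGetDeviceInfo(device, CL_DEVICE_VERSION, sizeof version, version, NULL);

        /* An OpenCL 1.x EP driver would report EMBEDDED_PROFILE here. */
        printf("Profile: %s\nVersion: %s\n", profile, version);
        return 0;
    }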

A comparison of peak graphics and GPU compute performance for MediaTek MT8377, MT8125 and MT8135 processors (relative to a normalized frequency)
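
To make the “normalized frequency” idea concrete: peak ALU throughput is FLOPs-per-clock multiplied by clock rate, so comparing GPUs at the same nominal clock reduces to comparing their FLOPs-per-clock. The figures below are hypothetical placeholders chosen only to show the arithmetic, not Imagination’s numbers:

    /* Frequency-normalized comparison: at a shared nominal clock, the ratio
     * of peak GFLOPS equals the ratio of FLOPs issued per clock. */
    #include <stdio.h>

    static double peak_gflops(double flops_per_clock, double clock_ghz)
    {
        return flops_per_clock * clock_ghz;
    }

    int main(void)
    {
        const double nominal_ghz = 0.2;           /* same clock for both GPUs  */
        const double gpu_a = 32.0, gpu_b = 128.0; /* hypothetical FLOPs/clock  */

        printf("GPU A: %.1f GFLOPS, GPU B: %.1f GFLOPS (%.0fx)\n",
               peak_gflops(gpu_a, nominal_ghz),
               peak_gflops(gpu_b, nominal_ghz),
               gpu_b / gpu_a);
        return 0;
    }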

By partnering with Imagination, MediaTek has access to our industry-leading PowerVR graphics, worldwide technical support, and a strong ecosystem of Android developers capable of making the most of our technology. We look forward to seeing our brand-new PowerVR Series6 GPUs in the hands of millions of consumers shortly, and we see MediaTek as one of our strategic partners for our latest-generation GPUs moving forward.

For more news and announcements from our ecosystem and silicon partners, follow MediaTek (@MediaTek) and Imagination (@ImaginationPR and @PowerVRInsider) on Twitter and keep coming back to our blog.

Comments

  • ltcommander_data

    PowerVR G6200 fully supports a wide range of graphics APIs including OpenGL ES 1.1, 2.0 and 3.0, OpenGL 3.x, 4.x and DirectX 10_1, along with compute programming interfaces such as OpenCL 1.x, Renderscript and Filterscript.

    In the article, as quoted above, you state that the G6200 supports OpenGL 4.x and DirectX 10.1. Other Imagination documentation such as your official fact sheet linked below states that the G6200 supports OpenGL 3.2 and DirectX 10.0. I’m wondering which is correct?

    https://www.imgtec.com/factsheets/powervr/PowerVR_Series6_IP_Core_Family_-1.0-.pdf

    • Hi,

      Thanks for catching that. I was thinking of the whole PowerVR ‘Rogue’ family when I put together that list. I’ve now updated it with what PowerVR G6200 is designed to support today and what future PowerVR GPUs should be able to scale up to.

      Regards,
      Alex.

  • Roninja

    How does Rogue perform compared with upcoming rival GPU architectures such as Mali T6xx or Adreno 330? I’d love to hear more in the perf/mm/mW analysis.

    • Hi,

      See the answer above. We have more on the topic in a series of upcoming articles focused on PowerVR Series6 and GPU compute.

      Best regards,
      Alex.

  • TWatcher

    Alex, can you confirm whether the piece in the EETimes article is accurate, which cites the G6200 implementation in the MediaTek chip providing 80 GFLOPS of GPU compute performance.

    • Hi,

      I would not want to comment on the accuracy of that chart; instead, I’ll make a few points to briefly explain why thinking just about GFLOPS is a dangerous road to take. Let’s start by asking a few questions:

      – Is that raw (sometimes purely theoretical) performance achievable in real-world applications or benchmarks?

      – How efficient is the architecture (i.e. can it actually feed enough data to keep the ALUs busy, or is silicon left unused)?

      – What about power consumption and thermal issues?

      Here are some answers that point out why Imagination’s GFLOPS are far more efficient than the competition’s, thanks to:

      – A high-performance, low-power architecture designed for mobile: we support GPU compute across all PowerVR SGX and ‘Rogue’ GPUs, versus the competitors’ approach of no or limited GPU compute on certain cores, and brute-force, power-hungry desktop APIs for others. For example, on our hardware, OpenCL 1.x EP has a series of optimized extensions which lead to reduced power consumption in graphics and compute applications.

      – A more balanced ratio between ALUs and other units (e.g. TMUs) to prevent bottlenecks.

      – TBDR with very fast depth rates, probably faster than the competition even without overdraw (e.g. less time for the shader core to idle during shadow map generation).

      – A mature compiler/driver: continued optimizations to reach and maintain peak performance.

      The big jump in graphics and compute performance from PowerVR Series5, through Series5XT, to Series6 can be seen in the chart above (which is 100% accurate): for roughly the same frequency, a dual-cluster ‘Rogue’ GPU is able to deliver ~16x the performance of a single-core SGX GPU.

      Best regards,
      Alex.

      • TWatcher

        Thanks for the reply. The 80 GFLOPS figure in the EETimes table is tagged as being sourced from MediaTek, but I respect that you cannot comment.

        SGX has had class-leading fill rates and triangle rates for quite some time, but it is clear that these alone are not enough to keep it ahead of some of the more recent competition. Adreno 330, for example, is substantially inferior in implementation in both these areas, and yet in overall benchmarks it pulls way ahead of the iPad 4 graphics. That’s a smartphone SoC against a tablet SoC, i.e. the tablet should have a higher allowable power profile. It would appear that some of the competition have better overall balanced graphics performance. I’m assuming this is shader/texture performance related, and as this capability becomes more dominant in applications, it must start to be an issue.

        Can you provide any thoughts on how shader performance has been enhanced in Rogue vis-à-vis SGX to address this apparent imbalance?

        Regards

        • I cannot comment in great detail on our architectural implementation, but tuning only for maximum benchmark performance can be a very misleading practice.

          There are some GPU architectures out there that follow this M.O.:
          – Run fast and hot (in overclock/turbo mode)
          – Check for thermal thresholds
          – Use DVFS and power gating to cool the SoC down

          The positive result is indeed better benchmark performance, but the negatives greatly outweigh the benefits.

          You might get all or some of the following:
          – variable performance (oscillations from high to low) and a bad user experience (smooth to jerky)
          – limited device lifetime due to repeated overheating
          – higher BOM cost for the device (due to the expensive extra cooling employed in some recently announced mobile devices)
          – poor battery life due to high power consumption

          PowerVR GPUs are instead designed to provide the best system performance based on real workloads. We can constantly monitor GPU and system workload (MicroKernel), use DVFS and power gating linked to workload (PowerGearing) and still manage to check for extreme thermal thresholds.

          All this, combined with the advantages mentioned above, offers PowerVR ‘Rogue’ GPUs a head start in the mobile space.

          Best regards,
          Alex.

          • Twatcher

            Thanks for the reply.

            I understand that certain competitive products cannot sustain benchmark performance in real life.

            However, the Adreno-powered Qualcomm family is pretty much omnipresent in just about every segment, sector and price point in the mobile arena. They are clearly a highly competitive product, not just in benchmarks but in reality, and have not generally been remarked on for having either a bad real-life user experience or requiring extra cooling. I feel their benchmark performance is a good reflection of their capabilities, and IMG’s SGX shader performance does appear to be significantly behind them in like-for-like products, hence my question regarding Rogue shader improvements.

  • faisal

    Alex, according to AnandTech (http://www.anandtech.com/show/7160/asus-memo-pad-hd7-review/3) the MediaTek MT8125 GPU (SGX544MP1) is clocked at 286MHz. GPU compute performance is 286/200 × 6.4 = 9.152 GFLOPS. With up to four times the ALU horsepower in the chart above, the G6200 in the MediaTek MT8135 is ~35 GFLOPS (maybe at a frequency around 200MHz). I’m eagerly awaiting the implementation of the G6430 or G6630 at a 600MHz clock.

    • Hi,

      The charts on AnandTech take into account the frequencies at which those chipsets operate. My chart is frequency-agnostic and relative in performance; this means that I’ve normalized to a nominal MHz value and given you a comparison of peak ALU performance for the PowerVR GPUs inside the three MediaTek chipsets.

      I’ve done this because each application processor includes a power management IC which adapts operating frequencies and voltages to workloads. Our PowerVR Series6 GPUs are able to scale up and down in frequency according to current workloads thanks to technologies like our MicroKernel or PowerGearing.

      We can make recommendations, and we’re happy to provide tools and resources to optimize our IP for performance or power/area

      http://withimagination.imgtec.com/index.php/news/accelerate-design-closure-for-ip-cores-from-imagination-dok-design-flows

      but ultimately it’s up to our customers to decide the optimal frequency at which to clock their processors.

      Best regards,
      Alex.

  • dr.no

    http://www.anandtech.com/show/7191/android-43-update-for-nexus-10-and-4-removes-unofficial-opencl-drivers

    What is the story with OpenCL support? Is any OEM supporting OpenCL, or is it just a marketing checkpoint at this time, since you are not providing drivers?

    • Hi,

      I would avoid relying on third-party sources for information regarding what we support.

      We were one of the first GPU IP suppliers supporting OpenCL back when PowerVR Series5 was launched. Over the past few years, we have been working closely with OEMs and select developers to integrate some of this functionality in certain devices.

      Just this week, we have publicly released OpenCL drivers for the Hardkernel ODROID-XU platform:

      http://withimagination.imgtec.com/index.php/powervr/hardkernel-releases-odroid-xu-development-board-with-a-powervr-sgx544mp3-gpu

      GPU compute (be it through OpenCL or Renderscript/Filterscript) is by no means a checklist item to us but instead a way forward for heterogeneous processing and a concept we put a lot of engineering effort into.

      Best regards,
      Alex.

      • dr.no

        If you read the comments on that article, it links to an Apple employee who is a manager of the iOS GPU drivers group, and he said in his Twitter feed: “No OpenCL driver. SGX does not natively support OpenCL.”

        • Hi,

          We’ve shown multiple OpenCL demos running on processors integrating PowerVR Series5/Series5XT and Series6 GPU cores.

          http://www.youtube.com/watch?v=dk6Nvqg6aCs

          OpenCL is fully supported in hardware on all PowerVR SGX and ‘Rogue’ cores. It is up to our customers whether they wish to take advantage of that functionality or not.

          Regards,
          Alex.

      • Alexey

        Alex, is it possible to obtain OpenCL driver for Android for MT8135 at all? Does it exist or not?

        • There is an OpenCL driver for PowerVR Rogue GPUs. We are working with a few select partners to figure out the best platform to release it on.

          Stay tuned – there will be more info soon!

          Regards,
          Alex.

  • dr.no

    The 8125 is a 544MP1, while the 8135 has 2 cores.

    So the only conclusion that seems reasonable is that there is a 2x increase between the generations, not the 4x you are trying to insinuate.

    • Hi,

      First of all, PowerVR Series6 ‘Rogue’ GPUs do not implement the concept of ‘cores’. Instead, we’ve moved to a cluster-based architecture. One of the advantages of implementing clusters is better performance at reduced area and power consumption. This is because when you scale up the number of clusters (i.e. scale raw GFLOPS performance), you only replicate shader resources, without the need for the extra logic to maintain coherency between traditional cores.

      Therefore, the architectural improvements and performance benefits of our PowerVR ‘Rogue’ architecture allow the MT8135 to achieve up to 4x the GFLOPS performance of MT8125, when clocked at a similar frequency.

      Best regards,
      Alex.

  • dr.no

    Can you please give us the peak GFLOPS? Every other vendor has no problem giving that number. For example, Intel gave this in an OpenCL BOF PDF:

    Peak GFlops: #EUs x ( 2 x 4-wide ALUs ) x ( MUL + ADD ) x Clock Rate
    For Intel Iris Pro 5200: 40 x 8 x 2 x 1.3 = 832 GFlops!

    With the CPU it is over 1 TFLOPS, using 47 watts.

    You get 23 GFLOPS/watt.

    I would like you to provide the GFLOPS/watt for Series5 and Series6.

    Thanks.

    • Hi,

      I appreciate your desire to find out more about our GPU cores. But, just like any other IP vendor, we release competitive data (and revealing how we compute GFLOPS falls under this category) only under NDA.

      This is because of several legal and ethical reasons; mainly, it’s down to patents and ensuring confidentiality with our silicon partners and ecosystem.

      Furthermore, focusing simply on raw GFLOPS numbers can be very misleading for mobile, where power efficiency is the main focus. I can therefore assure you that PowerVR ‘Rogue’-based cores will see visible performance efficiency improvements compared to competing architectures and previous PowerVR families, thanks to features and characteristics that we will reveal in a number of upcoming articles.

      So stay tuned for more info, there’s a lot of interesting stuff happening behind the scenes.

      Best regards,
      Alex.

      • Roninja

        Hi Alex,

        I appreciate the NDA, etc. What is interesting is that the competition across Qualcomm, Mali and Nvidia are now starting to quote benchmark test results which are proving ultra-competitive. Will you, in the near future, publish something which characterises the differences between benchmarks, efficiency and true Power(VR) in real-world use cases?

        Secondly, on the IMGtec website itself you will find marketing announcements which talk about 100 GFLOPS to 1 TFLOPS performance in the Rogue family. How should we read this marketing, as it was written a long time ago?

        • Hi,

          We do not rely just on benchmarks to tell our story because some of them have atypical workloads for usual applications (i.e. certain scenes stress the front end of the GPU instead of the shader core, which is not how most games work).

          Instead, we optimize our hardware and software for the user experience first and foremost, because that is what is going to generate attach rate and make consumers come back for more.

          We have a dedicated team focused on efficiency and performance optimization of our driver and compiler technology, which collaborates with our silicon and ecosystem partners to guarantee these updates are reflected in their platforms. Work by this team over the last couple of years has led to significant improvements in the potential performance of all 3D graphics applications through general optimizations in the compiler. If you follow GFXBench closely, you will have noticed one such performance increase over the past few days.

          The shader compiler for PowerVR Series5XT GPUs in particular is now capable of using the available compute and register resources much more efficiently, as performance optimizations continue.

          At the end of the day, PowerVR GPUs fare better in real-world situations than any competing multi-core solutions when measuring performance on a clock-for-clock basis.

          The 100 GFLOPS to 1 TFLOPS statement refers to the whole PowerVR ‘Rogue’ architecture, of which we’ve released six configurations so far, with more to come. As we continue to scale up in clusters and improve ALU performance, you will see GPUs that reach the upper limit of that interval in certain high-end processors.

          Best regards,
          Alex.

  • sikulas

    1. Do you know whether there will be a G6x00 in MediaTek’s MT6592 (octa-core) and MT6588 chips?

    2. And what is the frequency of the PowerVR G6400/G6200/G6100 GPUs now (for example, the G6200 in MT8135)?

    • Hi,

      1. You will have to confirm that with MediaTek.

      2. The peak frequency of any processor inside an application processor is usually something that the silicon vendor controls via DVFS and power management. Moreover, there might be different parts that have the same architecture but are clocked at different frequencies to fit certain power budgets related to either form factors (smartphone, tablet, etc.) or application area.

      We can definitely help our partners tune their design for certain PPA parameters. One example is the recently released DOK (Design Optimization Kit) for PowerVR Series6 GPUs:

      http://withimagination.imgtec.com/index.php/news/accelerate-design-closure-for-ip-cores-from-imagination-dok-design-flows

      Regards,
      Alex.

  • Jesse

    Now where are the Linux drivers for the PowerVR 6 series?

    • MT8135 has been designed to support Android. You can contact MediaTek to see if they have any plans for future platforms shipping with other operating systems.

      • Jesse

        The main problem with running Linux on SoCs is usually the GPU. Most ARM and MIPS CPUs already work on Linux; it’s the GPUs that usually don’t, as they typically lack 3D hardware acceleration.

  • Sergey Fedorov

    I finally got my first MT8135; it is a Teclast P98 3G tablet. I have found out that it is supposed to support OpenCL (http://store.imgtec.com/product/teclast-p98-3g-tablet/). I got the latest firmware from Teclast, v1.10, but… there are no OpenCL libraries on board (libOpenCL.so, libPVROCL.so, etc.). Where can I get the OpenCL libraries?

    • Hi,

      If you want a device shipping right now with OpenCL drivers for PowerVR Series6 GPUs, then the new Dell Venue tablets are your best bet (Intel Atom Z3460/Z3480 SoCs).

      https://www.imgtec.com/blog/powervr/my-week-with-dell-venue-8-the-first-android-tablet-to-use-a-powervr-rogue-gpu

      We will make an announcement shortly regarding other devices. For now, there are no other platforms shipping with OpenCL drivers that I can think of.

      Best regards,
      Alex.
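
      As an aside for readers hitting the missing-library issue above: a quick, generic way to check whether an OpenCL runtime is present on an Android image is to probe the usual library names with the dynamic loader. This is only an illustrative sketch (the library names come from the comment above), not official guidance:

        /* Probe for an OpenCL runtime by trying to dlopen() common names. */
        #include <stdio.h>
        #include <dlfcn.h>

        int main(void)
        {
            const char *names[] = { "libOpenCL.so", "libPVROCL.so" };
            for (unsigned i = 0; i < sizeof names / sizeof names[0]; i++) {
                void *h = dlopen(names[i], RTLD_NOW);
                printf("%s: %s\n", names[i], h ? "present" : "missing");
                if (h)
                    dlclose(h);
            }
            return 0;
        }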

  • fez70

    Which one is better, this or the Adreno 330?
