If you follow Imagination and what we do, you may have heard about PowerVR Ray Tracing, hybrid rendering, and the potential these technologies have to revolutionize user experiences, from mobile and console gaming to virtual and augmented reality.

For the better part of the last eight years, we have been busy developing unique hardware and software technologies to radically lower the cost and dramatically increase the efficiency and performance of ray tracing.

This work culminates at GDC 2014 with the official launch of the PowerVR Wizard GPU family, a range of GPU IP cores that offer high-performance ray tracing, graphics and compute in a power envelope suitable for mobile and embedded use cases. This brings highly photorealistic, computer-generated imagery to a host of real-time applications and markets where it was not previously possible.

PowerVR GR6500: ray tracing, graphics and compute coming together in one revolutionary GPU

The first member of the Wizard family is PowerVR GR6500. This ray tracing GPU is based on a latest-generation, quad-cluster PowerVR Rogue design and comes with all the bells and whistles of our recently launched PowerVR Series6XT GPUs, including:

  • Full-blown graphics and compute performance: four Unified Shading Clusters (USCs) with 128 ALU cores, delivering more than 150 GFLOPS (FP32) or 300 GFLOPS (FP16) at 600 MHz (a quick arithmetic check follows this list)
  • Unmatched real-world ray tracing performance: up to 300 MRPS (million rays per second), 24 billion node tests per second and 100 million dynamic triangles per second at 600 MHz
  • PowerGearing G6XT for advanced power management and dynamic resource allocation
  • PVR3C triple compression technologies (PVRTC and ASTC for texture compression, PVRIC for frame buffer compression, PVRGC for geometry compression)
  • Deep Color support for very high image quality at Ultra HD resolutions and beyond
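
For readers who want to check the headline figures, the short sketch below reproduces the peak GFLOPS numbers from the core count and clock. It assumes one fused multiply-add (two FLOPs) per ALU core per clock for FP32 and double-rate FP16, which is how peak figures like these are normally derived; the assumption is ours, not a statement about the Rogue ALU microarchitecture.

```python
# Rough sanity check of the quoted peak figures. Assumption (ours, not a
# statement about the Rogue ALU microarchitecture): one fused multiply-add,
# i.e. 2 FLOPs, per ALU core per clock for FP32, and double-rate FP16.
alu_cores = 128              # 4 USCs x 32 ALU cores each
clock_hz = 600e6             # 600 MHz
flops_per_core_fp32 = 2      # one FMA per clock (assumed)

fp32_gflops = alu_cores * clock_hz * flops_per_core_fp32 / 1e9
fp16_gflops = fp32_gflops * 2  # assumed double-rate FP16

print(f"FP32 peak ~{fp32_gflops:.1f} GFLOPS")  # ~153.6, i.e. "more than 150"
print(f"FP16 peak ~{fp16_gflops:.1f} GFLOPS")  # ~307.2, i.e. "~300"
```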

PowerVR GR6500 is designed to provide leading support for a range of APIs, including OpenGL ES 3.1/2.0/1.1, OpenGL 3.x, Direct3D 11 at feature level 10_0, OpenCL 1.2 and OpenRL 1.x.

PowerVR GR6500 is the first graphics processor in the PowerVR Wizard GPU family

On top of the features listed above, PowerVR GR6500 includes additional ray tracing-specific hardware that provides full hardware acceleration of the entire ray tracing flow, including model building and traversal:

  • a dedicated ray tracing data master that feeds ray intersection results to the main scheduler, setting up the shaders that evaluate each ray's ultimate contribution to the frame.
  • a specialized Ray Tracing Unit (RTU) that uses fixed-function math to perform ray intersection queries and gathers rays into coherent groups to reduce power and bandwidth consumption (a software sketch of this kind of query follows this list).
  • a scene hierarchy generator that speeds up dynamic object updates.
  • a frame accumulator cache that provides write-combining for scattered accesses to the frame buffer.
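
To make the RTU's role concrete, here is a minimal software sketch of the kind of "node test" a bounding volume hierarchy traversal performs: a ray against an axis-aligned bounding box. This is purely illustrative Python showing the class of query the RTU resolves in fixed-function hardware; it says nothing about the actual hardware math.

```python
# A minimal software sketch of a BVH "node test": ray vs axis-aligned bounding
# box (slab method). Illustrative only; not a model of the RTU's internals.

def ray_aabb_hit(origin, inv_dir, box_min, box_max, t_max=float("inf")):
    """Return True if the ray hits the box within [0, t_max)."""
    t_near, t_far = 0.0, t_max
    for axis in range(3):
        t0 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t1 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        if t0 > t1:
            t0, t1 = t1, t0
        t_near = max(t_near, t0)
        t_far = min(t_far, t1)
        if t_near > t_far:
            return False
    return True

# Example: a ray along +x from the origin vs a unit box centred at (5, 0, 0)
origin = (0.0, 0.0, 0.0)
direction = (1.0, 0.0, 0.0)
inv_dir = tuple(1.0 / d if d != 0.0 else float("inf") for d in direction)
print(ray_aabb_hit(origin, inv_dir, (4.5, -0.5, -0.5), (5.5, 0.5, 0.5)))  # True
```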

All of these ray tracing-specific elements are designed to integrate efficiently with the rest of the GPU architecture, producing real-time, interactive ray traced graphics that scale comfortably from mobile and embedded platforms to gaming consoles and mainstream gaming PCs.

This night scene uses hybrid rendering running on PowerVR Ray Tracing hardware

More importantly, these blocks do not affect the shading performance of the traditional graphics pipeline, because they do not consume shader GFLOPS to perform ray tracing-related tasks. As a result, PowerVR GR6500 can perform ray tracing tasks 100x more efficiently than GPU compute or other software-only approaches on traditional graphics architectures.

What is ray tracing and where can I use it?

Ray tracing is a technique for rendering 3D graphics with complex and more natural lighting models to achieve cinema-quality images and a level of near photographic realism that is impractical with traditional graphics techniques.

To date, ray tracing has mainly been used to create lifelike, photorealistic imagery in specialised applications such as film special effects, computer-animated movies, industrial design, and mechanical and architectural modelling.

This is a model created by Mads Drøschler using our PowerVR Ray Tracing technology

For example, last month’s 3D World Magazine featured a cover story on Gavin Greenwalt of Straightface Studios using our Visualizer for Maya 1.3 software, powered by PowerVR Ray Tracing. You can view the new TV commercial for Jeweler’s Mutual Insurance Company, titled Crab – Perfect Circle by clicking right here.

Ray tracing is also used today by the tools that create the pre-baked lighting maps delivered with most games. The latest-generation Unity 5 game engine, announced at GDC 2014, embeds our PowerVR Ray Tracing software technology for interactive, real-time lightmap previews. Ray tracing lets artists accurately simulate the behaviour of light and instantly visualise the subtle lighting effects of any tweak to their game assets.

The Unity 5 lightmap editor uses PowerVR Ray Tracing technology to improve pre-baked lighting

The ray tracing approach can provide substantial benefits when used for real-time rendering in game engines too. PowerVR Wizard GPUs allow traditional, polygon-based rasterized objects (e.g. objects rendered using OpenGL ES) and lifelike ray-traced elements to coexist in the same scene.
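
At a very high level, the hybrid flow looks like the sketch below: rasterize the scene as usual, then spend the ray budget only on the effects rasterization handles poorly. Every function here is a hypothetical placeholder used to illustrate the structure; a real engine would issue the ray queries through the GPU's ray tracing API rather than in Python.

```python
# A minimal sketch of the hybrid rendering flow: rasterize first, then use ray
# queries for shadows and reflections. All functions are placeholders.

def rasterize(scene, width, height):
    # Stand-in G-buffer: per-pixel surface position, normal and base colour.
    return [{"pos": (x, y, 0.0), "normal": (0, 0, 1), "albedo": (0.5, 0.5, 0.5)}
            for y in range(height) for x in range(width)]

def trace_shadow_ray(pos, light_pos, scene):
    # Placeholder visibility query: 1.0 = fully lit, 0.0 = in shadow.
    return 1.0

def trace_reflection_ray(pos, normal, view_dir, scene):
    # Placeholder reflection query: colour seen along the mirror direction.
    return (0.1, 0.1, 0.1)

def render_hybrid(scene, light_pos, width=4, height=4):
    frame = []
    for pixel in rasterize(scene, width, height):                # raster pass
        lit = trace_shadow_ray(pixel["pos"], light_pos, scene)   # ray-traced shadow
        refl = trace_reflection_ray(pixel["pos"], pixel["normal"],
                                    (0, 0, -1), scene)           # ray-traced reflection
        colour = tuple(a * lit + r for a, r in zip(pixel["albedo"], refl))
        frame.append(colour)
    return frame

print(render_hybrid(scene=None, light_pos=(0, 10, 0))[0])
```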

Traditional, rasterized-only rendering cannot model light transport accurately

PowerVR Ray Tracing GPUs offer ultra-realistic shadows, reflections and transparency at no extra cost

This hybrid rendering technique primarily provides high-quality, dynamic lighting and shadow effects, but it can improve other elements of the game engine as well (more about hybrid rendering in a separate article). One example is game AI: characters in a first-person shooter can start to see and understand the 3D environment around them, because in-game agents can make decisions based on direct line-of-sight queries that model what they can actually see, opening up a new world of realistic behaviour.
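
As a toy illustration of that line-of-sight idea, the sketch below decides whether an agent can see a target by casting a single ray against a set of spherical occluders. The spheres and the ray-sphere test are our simplification to keep the example self-contained; on a ray tracing GPU the same question would be answered by a hardware ray cast against the real scene geometry.

```python
# Line-of-sight check for game AI: the agent reacts only to a target it can
# actually "see". Occluders are spheres purely to keep this self-contained.
import math

def ray_hits_sphere(origin, direction, centre, radius, max_t):
    # Quadratic ray-sphere test with a unit-length direction vector.
    oc = [o - c for o, c in zip(origin, centre)]
    b = sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return False
    t = -b - math.sqrt(disc)
    return 0.0 < t < max_t

def can_see(agent_pos, target_pos, occluders):
    to_target = [t - a for t, a in zip(target_pos, agent_pos)]
    dist = math.sqrt(sum(v * v for v in to_target))
    direction = [v / dist for v in to_target]
    return not any(ray_hits_sphere(agent_pos, direction, c, r, dist)
                   for c, r in occluders)

# A wall (big sphere) sits between the agent and the player: no reaction.
print(can_see((0, 0, 0), (10, 0, 0), occluders=[((5, 0, 0), 2.0)]))  # False
```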

By combining the best of both worlds (traditional graphics and ray tracing-based rendering), the new PowerVR GR6500 ray tracing GPU supports all existing game engines and tools while allowing developers to bring a new level of quality and enhanced realism to apps and games using PowerVR Ray Tracing.

Setting the standard for real-world ray tracing performance

Claims of millions of rays per second from GPUs are not new. For example, a current-generation, desktop-class ray tracer using GPU compute might deliver millions of rays per second when rendering a very simple scene that fits in a tiny cache, under a very simple shading scenario.

However, in 99% of real world cases, those aren’t useful rays; additionally, you would need a power budget upwards of 300W to achieve anything remotely close to the performance that PowerVR Ray Tracing-based hardware achieves.

PowerVR Wizard ray tracing GPUs provide the performance required to produce photorealistic effects in games

This is because GPUs are wide parallel processing engines in which groups of threads execute the same instruction in lockstep. That works great for vertex and pixel processing workloads, but high coherency between threads is essential for efficiency: non-coherent execution paths (divergent branches) result in masked-out operations and lost GFLOPS.

Realistic ray tracing typically results in non-coherent processing because of the way light behaves in the real world: light scatters everywhere, so it is very difficult to maintain coherency between the threads performing ray tracing. This is why ray tracing is not a good match for traditional GPU architectures.
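
A toy model makes the cost of divergence easy to see: when the lanes of a SIMD group disagree on a branch, both paths execute with lanes masked off and utilisation drops. The numbers below are illustrative only, assume both branch paths cost the same, and do not model any real GPU.

```python
# Toy illustration of SIMD divergence: a group executes both sides of a
# divergent branch with lanes masked off, so utilisation falls. Assumes both
# paths cost the same; purely illustrative, not a model of any real GPU.

def simd_utilisation(lane_takes_branch):
    """Fraction of useful lane-cycles when a group may diverge into two paths."""
    lanes = len(lane_takes_branch)
    taken = sum(lane_takes_branch)
    if taken in (0, lanes):   # fully coherent: one path, all lanes busy
        return 1.0
    return 0.5                # both paths execute; half the lane-cycles are masked out

coherent = [True] * 32                          # e.g. neighbouring pixels, same material
incoherent = [i % 2 == 0 for i in range(32)]    # e.g. scattered secondary rays
print(simd_utilisation(coherent), simd_utilisation(incoherent))  # 1.0 0.5
```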

This is where PowerVR GR6500 steps in to save the day: the PowerVR Wizard ray tracing GPU provides a budget of 300 million rays per second in a mobile power envelope and is designed to comfortably sustain real-world applications with highly incoherent rays, enormous scenes, and shaders that execute hundreds of instructions to resolve each ray (not just technical demos).

The hybrid rendering concept shown at GDC 2014 is the perfect example of how PowerVR Ray Tracing technology can deliver power-efficient, amazing visuals for mobile and embedded applications. We invite you to come and see it for yourself at our booth (#402) at the GDC Expo and future events.

Make sure you follow us on Twitter (@ImaginationPR, @PowerVRInsider) and keep coming back to the PowerVR blog for more articles on graphics and ray tracing.

 

Comments

  • MikeGraf

    “life like reflections” .. lets face it, no one has a car that clean / waxed 😛 Very nice post.

    • NoOne

      Did anyone call me? 😛

      • AnyOne

        Nope

        • Nope

          What?

          • MikeGraf

            confused

          • sds

            Look at the usernames…

          • MikeGraf

            haha thats awesome, i get it..

            Did this all spawn as an instantaneous joke, or did I mess up the grammar, or something else ?

          • What

            What what?

      • MikeGraf

        Yeah! They’re copying your car to advertise for their 3d magiks.

  • MaqueGenio

    To be honest aside from that boat scene, which looks like it was done with a much more powerfull hardware, rasterized only looks better without those over the top reflections(that car….oh boy). Might be the art, RT treatment on top of ps2 quality assets looks awful.

    • The boat scene was generated using the dedicated Caustic R2500 ray tracing PC cards, which provide a much smaller ray budget than the ray tracing GPU we’ve announced today.

      https://www.imgtec.com/blog/caustic/caustic-previews-r2500-and-r2100-openrl-ray-tracing-pc-boards-at-ces-2013

      The screenshots above are from an in-house tech demo which is not production-quality but instead aims to show the potential of this technology. I’m sure game developers will come up with better and innovative techniques.

      Have a look at

      http://www.gamasutra.com/blogs/AlexandruVoica/20140318/213148/Practical_techniques_for_ray_tracing_in_games.php

      for a comparison between rasterized-only vs. hybrid rendering.

      Best regards,
      Alex.

      • tangey

        Alex,
        Can you give any sense at all as to extra power budget required by including the RT hybrid ? The caustic cards are relatively power mean compared to doing the same thing on standard GPU, but still probably a magnitude+ where you need to be for soc inclusion. To get down to that level plus also have extra Ray tracing performance, seems far fetched. besides the memory, is there a lot of circuitry on the caustic cards that is redundant when it comes to soc inclusion, and is that where a big power saving is seen ?

        • The Caustic cards were built on a 90nm TSMC process node and needed about 30W of power per core. So even on an older process, and without being integrated or designed for mobile, they were orders of magnitude more efficient than GPU compute-based solutions from the desktop guys.

          Obviously, the dedicated ray tracing hardware has now been integrated directly into the Rogue graphics pipeline. We’ve lowered the power consumption dramatically so it can easily fit inside an SoC designed for a tablet or ultrabook if built on a 28nm or lower node.

      • AndrewJ

        Just looking at the comparison of rasterized vs hybrid, other that the reflective bonnet on the car there is no ultimate perceptual difference (in the context of a game). I would also like to understand why in the rasterized example shadows and reflections have not been added ? I mean you could very easily achieve the same effect of the hybrid example. It would be good if the comparison was fair one!

        • Hi Andrew,

          The main idea of that comparison is to show how you can drastically simplify the creation flow. The effects above (shadows, transparency and/or reflections) can be obtained with ray tracing by default with minimal costs for developers. Even better, these effects are achieved in a version of the Unity game engine, which means developers are able to reuse their previous code.

          Meanwhile, to achieve similar effects in a traditional rasterized GPU, developers have to invest a lot of time and effort in pre-baked lighting and complicated deferred rendering techniques which increase memory bandwidth traffic and power consumption.

          Regards,
          Alex.

          • AndrewJ

            Taking Unity as your example, The baking of lighting and shadows can very eaily be done in most workflows. Unity has a one click to bake . Infact, in the screenshot above you talk about using the Unity integration to visualise the pre-baking. I understand this technology from a content creation point of view.
            Yet to be convinced of it’s performance for real content and the perceptual benefit it actually brings… i.e. How many transparent glass balls do you really need in a scene ?

          • If you try to model multiple shadow casting lights (e.g. lights on a Christmas tree), traditional methods on mobile can be very complicated and introduce significant overhead.

            Ray tracing works wonderfully well in this example.

            Regards,
            Alex.

  • LaTrinius Washington

    Meh, I’ve seen better…

  • Sean Lumly

    This is a BIG deal. I can’t stress this enough, and Imagination deserves huge kudos for making this happen. I only hope that OpenRL is adopted by the larger industry as a standard, and ray tracing sees mass adoption.

    I think power consumption is a large imperative for RT in hardware. Rasterizers flat-out suck at doing reflections and shadows, where the result is inaccurate, expensive computationally, and heavy on memory bandwidth. However reflection rays and shadow rays are basic operations for a ray-tracer.

    I think that Imagination’s choice to highlight hybrid rendering (rasterization and ray-tracing) is very wise. The benefits that ray-tracing would have to a typical raster engine is potentially huge and could give mobile devices a clear win, even when compared against much more powerful (and power-hungry) console and PCs.

    But I’m most excited to see clever uses for ray-tracing outside of the traditional reflection/shadow uses. This may include much faster and more robust collision detection, pre-scene light baking, visibility query for triangle level culling, non-screen space sub-surface-scattering, or perhaps extreme instancing (think blades of grass in a field). These are techniques that have been shoehorned into the modern graphics pipeline, that can now be done in a very straight forward way very efficiently, and without hacks or compromises.

    • Agreed, this is a huge deal. Combining what Imagination Technologies does very well with tile-deferred shading, and now hybrid ray-tracing, this levels out render performance and ups the quality a lot!

      This gets really interesting in the VR space with it heating up. Ray tracing lets you do physically accurate liquids and implicit surfaces fairly cheap. With HMDs doing ultra-fast low-latency onboard rendering we’ll be able to have some pretty exciting experiences soon.

      • Sean Lumly

        Your point is especially true with VR, where discrepancies in reflections will be very visible when viewing in stereo. Where cube-mapped reflections may work on a 2D screen, in VR it likely will look ‘strange’.

        The same can be said about ‘billboarding’ which is used to draw dense foliage like grass. In VR it is very apparent that there are flat billboards with a grass-texture, and it is less convincing as actual blades. Tracing actual grass-blades would be something that a ray-tracer could do quite convincingly.

  • Luca

    Hi Alex, I think this new gpu represents something great and indeed innovative. I know you guys have been working a lot to develop and improve this technology on small chips. I recon the success of this chip will depend on how developers will implement this technology forward with particular focus on games.
    And in this you will have a lot of challenge to convince the market to implement this raytracing. Since most software houses develop games for the existing consoles it won’t be easy to see same games with your features. But you have, as you know, a lot of potentiality in other markets!!
    Innovative technology is always welcome in this market, so I wish all the best of luck for your product!
    Regards
    L

    • MaqueGenio

      It is sad to think that all of these amazing achievements in mobile tech will go to waste because the market for bleeding edge graphics on mobile is almost non existent. Unfortunately for IMG the audience that goes crazy over graphics related stuff and so forth gravitate towards pcs/consoles.

      • Actually I tend to disagree. Right now, innovation is happening in mobile graphics. The amount of performance (100s of GFLOPS) we are delivering at sub-5W power consumption means we are really pushing the envelope in performance per mW and per mm2, while the desktop guys are more or less doing what they’ve been doing for years.

        Adding ray tracing into the mix is yet another example of how we continue to drive this industry forward. And, besides, scaling PowerVR GR6500 to multi-core configurations that would fit inside a very slick and small game console is not a problem; we have the technology to scale up very easily.

        Regards,
        Alex.

        • MaqueGenio

          You are actually agreeing with me (sort of) Maybe my point wasn’t clear enough because of my rough english. My concerns were not towards your end(as a gpu company,). The problem is the technologies you develop are not being deployed into software for consumers. Without content that takes advantage of you powerfull gpus your work goes to waste. To this day theres not a single game that comes remotely close to maxout the 5s g6430. The problem here is the “YAY look all the cool stuff our hardware can do” line of thinking. There are amazing things being done on the hardware side but ends up getting lost in the halfway because there is no sofware that can use your tech.

  • Tarpor

    I’m guessing those quoted number are for coherent batched rays as opposed to incoherent rays? If so, they’re not that impressive.

    Also, I wish you’d use the term “pathtracing” instead of raytracing when talking about light integration and how accurate the lighting is – raytracing (other than in theory allowing more accurate shadows than deep shadow maps) in itself has no advantages over rasterization. It’s the physically-based shading and multiple-bounce (leading to global illumination) which gives the accurate lighting. And physically-based direct illumination can be done with rasterization.

    • Bram Stolk

      I don’t think they use path tracing in the sample images. If they did, the images would have a lot of noise. Path tracing is a stochastic process where rays fire in random directions after scattering. Here, the reflections are perfect. Ray tracing in itself is a big improvement, as it does hard shadows and reflections, which on rasterizers have to be kludged in and tend to look crappy/fake. That said, the ray tracing hardware can probably be used for path tracing. And better yet, a photon mapper could probably use the same hardware, although the photon map would be residing in CPU memory.

      • Jesper Mortensen

        Sharp reflections and hard shadows are easy to do with ray tracing it would be really hard to get noise there. Glossy reflections and soft shadows a little harder, but doable in realtime with 300 mrays. Realtime GI will be noisy still you need more rays for that.

    • gavingreenwalt

      Path tracing is a subset of raytracing techniques you could pathtrace with this chip… or not.

      As to “incoherent rays” the OpenRL model isn’t really affected by incoherence due to its non-blocking shader approach. The largest bottleneck I’ve encountered with the IMG Raytracing units is with shading. However tying this directly to a GPU for GPU shading should remove that bottleneck.

  • Bram Stolk

    I see easy parallelism in ray tracing that can be exploited by traditional GPU. You don’t want to do rays in parallel. Instead do tests for a single ray in parallel. You turn it around, and with SIMD you test a single ray against let’s say 64 triangles in one go. Even with incoherent rays, you still have plenty of useful work for the GPU.

  • GrangerFX

    The real power of this GPU is its ability to batch shaders and rays together. It starts by creating all the rays for the current anti-aliasing level. It casts them in a batch and collects the shaders they hit. These shaders are clustered together and executed in batches for coherency. The shaders generate rays which are again batched together and so on. This is the magic that turns non-coherent rays into coherent shading.

  • keng123

    Hi Alex, despite some of the posts here this is very impressive indeed, especially getting this working for long periods (one presumes) without performance degradation in a mobile (28 nm and below process) SoC format.
    I was expecting there to be an upgrade to the Caustic add in cards around now, is there one due ?

    • Thanks, we are considering several options at the moment. An update will come soon.

      Best regards,
      Alex.

  • dr.doom

    “Unmatched real-world ray tracing performance: Up to 300 MRPS (million rays per second), 24 billion node tests per second and 100 million dynamic triangles per second at 600 MHz”

    can you give comparison #s like above for Caustic R2500, R2100, Software and dedicated Renderman boxes.
    preferably in a table with TDP.

    • It is not an apples-to-apples comparison. The ray tracing PC boards from Caustic had their own RAM and used a PCI interface to communicate with the motherboard. The ray tracing GPU we're announcing today uses unified system memory and is integrated on-chip via a high-speed bus.

      Therefore, even if I were to compare the two in terms of theoretical peak performance, it wouldn't make much sense, since they were designed for very different markets and have different specifications.

      But, to give you an idea of the kind of performance/watt, the Caustic boards consumed about 30W per RTU + 4GB RAM in a 90nm process node and reached about 10-20% of the MRPS performance of PowerVR GR6500.

      The Visualizer could run on the CPU too but took a 3-5x performance hit, depending on the scene rendered in Autodesk Maya.

      Regards,
      Alex.

  • Augure

    Great, RayTracing will greatly improve games and 3D modeling…

    …but now it’s matter of emulating dirt, dust, rust, water etc…on these surfaces so the result of reflection are accurate and not just over-glorified transparency/mirror effect everywhere.

    • Ray tracing is simply a means to an end. It can be used by developers to accurately simulate any type of effect that needs photorealistic quality.

      Regards,
      Alex.

      • Augure

        Ok, thanks. I figured since it’s call “ray tracing” and very cleverly based on what makes everything that is vision, light, it was mainly a matter of lights and reflections.

        But demos mainly showcase examples with too clear and clean reflective surfaces, which is not really realistic to me, but I guess it’s just a matter of adding the different textures/effects.

  • mr_3

    Interesting…dedicating more hardware architecture specifically to raytracing could mean greater potential. At a minimum, bringing back the question of what’s the best approach for each aspect of visualization we care about is a great thing in my book, and starting with the question in hardware makes the most sense to me, even if the answer in a lot of cases eventually winds up being “little” in terms of direct architecture support for raytracing and the results. I’ll keep an eye out and keep rooting for you guys, it’s not a given to me that this kind of approach will yield better results overall, but I really like the common-sense and anti-“me too” approach.

    • Thanks, we believe the combination of dedicated ray tracing hardware in the GPU and the hybrid rendering methodology we’ve been pioneering with lead game engine partners will change the way people create and enjoy these applications.

      Regards,
      Alex.

  • tangey

    Alex,

    Leaving gaming aside, does RT bring any efficiencies or other benefits to UI graphics. With the increasing use of transparencies, shadows, blur etc on user interfaces, in particular on IOS, can RT be used to do these in a more efficient manner than rasterisation and hence improve general battery life ?

    On a separate but related question, If RT is redundant on UI, can the RT hardware be fundamentally powered down for UI use ?

    • Hi,

      Sorry for the late reply, I was busy with GDC. Ray tracing is a tool, it can be used for anything that developers want to use it for. In this article

      http://www.gamasutra.com/blogs/AlexandruVoica/20140318/213148/Practical_techniques_for_ray_tracing_in_games.php

      I highlight the most popular use cases but we’re sure that developers will find many more useful applications.

      Using ray tracing where it makes sense in hybrid rendering can improve battery life too since it uses the dedicated, hardware-optimized GPU vs. a complicated, computationally intensive software implementation.

      Best regards,
      Alex.

      • Tangey

        Well, I assume at all times developers can use any IP for whatever they want to ! My question is, there anything to be gained in using RT in UIs that implement transparency etc to enhance the user experience. Is it a power efficient way of doing it, or if you are already generating these things on the fly anyway, is it best to keep it within the standard graphics IP.

        Also if RT is not being used (whether in the UI or in a game), it would be crucial that the RT IP does not take power whilst not being used, so is the RT IP completely power gateable within the overall graphics IP.

        • We are currently focusing our efforts on improving effects in game engines.

          There are certain cases where using only the traditional graphics IP pipeline can be enough for simple effects. You can definitely use the ray tracing hardware to improve certain UI elements (reflections would make more sense) but the most power efficient way in your example would be to use rasterized blending.

          To answer your second question: since PowerVR Wizard GPUs are based on the Series6XT family, they include our PowerGearing tech which enables power gating, amongst other things.

          Regards,
          Alex.

          • Tangey

            thanks for the answers

    • Robert Kegel

      Why single out ios? Why not say on any mobile interface? Plus apple wouldn’t license this, they would either copy it, patent it then call it their own and sue PowerVR or they would buy PowerVR and not license out the technology.

      Lets hope PowerVR licenses this out to all companies that want to license it (probably companies that make Android handsets and tablets and Microsoft and maybe Nintendo and Sony for their handhelds). Maybe even Nvidia and/or ATI to put in their video cards.

  • Robert Kegel

    This is a nice step but revolutionary no. Evolutionary yes. People need to look up revolutionary in a dictionary and stop using it so much. If PowerVR invented the holodeck that would be revolutionary. Ray tracing processor is really nice and I commend them…I’m not trying to belittle what they came up with, but revolutionary it is not.

  • GErorge

    only 300 Gflops card can compute 300 000 000 rays per secong? Imagine if nvidia or ati implemented similar technologies in their 7 – 10 teraflop gpus.

    • Jesper Mortensen

      GPUs aren’t really that well suited to traversing an acceleration structure, let alone building one, so you can’t directly map flops to rays/sec.
