ExtraSS: Intel uses extrapolation to improve graphics performance

Written by Guillaume
Publication date: December 29, 2023

Intel is looking to take a different route from AMD and NVIDIA to boost the performance of our graphics cards.

Generation after generation, graphics processors have seen their computing power grow in formidable proportions. Yet these gains remain insufficient, as rendering techniques keep piling on ever heavier workloads. Ray tracing, for example, has become very popular with developers for the quality and precision of its rendering, but it brings even the most powerful GeForce RTX 4090 to its knees. NVIDIA has therefore devised new techniques to ease the load on the GPU, starting with DLSS, its super-sampling technology, which has since been extended to make greater use of artificial intelligence. Instead of simply "enlarging" an image computed by the GPU, the so-called frame generation technique asks the AI to create an entire frame. To keep the result as coherent as possible, the AI relies on a whole host of information, starting with the previous frame and the next frame, both of which have been rendered by the GPU.
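To make the idea more concrete, here is a deliberately simplified Python sketch of interpolation-based frame generation. It is not NVIDIA's or AMD's actual pipeline, which relies on motion vectors, optical flow and a neural network rather than a naive blend; the point is simply that the generated frame can only be produced once the next frame has already been rendered.

```python
import numpy as np

def interpolate_frame(frame_prev, frame_next, t=0.5):
    """Blend two rendered frames to fabricate an in-between image.

    Crucially, this can only run once frame_next exists: the generated
    frame is therefore shown with a delay, which is where the extra
    latency of interpolation-based frame generation comes from.
    """
    a = frame_prev.astype(np.float32)
    b = frame_next.astype(np.float32)
    return ((1.0 - t) * a + t * b).astype(np.uint8)

# Toy usage: two solid-colour 1080p RGB frames.
frame_prev = np.full((1080, 1920, 3), 40, dtype=np.uint8)
frame_next = np.full((1080, 1920, 3), 80, dtype=np.uint8)
print(interpolate_frame(frame_prev, frame_next)[0, 0])  # -> [60 60 60]
```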

When Intel compares different rendering techniques © Intel

This process really does work wonders: graphics performance can be doubled or even tripled, which prompted competitor AMD to release a similar technology a few months later. The problem is that, in both cases, latency increases noticeably. For slower-paced games this is hardly an issue, but for the most competitive multiplayer titles it becomes a real handicap. Intel has therefore set its sights on a different direction. At SIGGRAPH Asia 2023 in Sydney, the company presented its so-called ExtraSS technology and the whole process behind it. Rather than using the previous frame and the next frame as a basis for the AI to "imagine" an in-between image, ExtraSS has the AI work solely from the previous frame. As a result, it does not suffer from the increase in latency, since there is no longer any need to wait for the next frame to be rendered.
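By way of illustration, here is an equally simplified sketch of the extrapolation idea, assuming the renderer exposes per-pixel motion vectors for the last rendered frame (a common engine output). It is not Intel's published ExtraSS algorithm; the key difference from the previous sketch is simply that nothing here waits for a future frame.

```python
import numpy as np

def extrapolate_frame(frame, motion):
    """Forward-warp the last rendered frame along its motion vectors.

    frame  : (H, W, 3) uint8 colour buffer of the previous frame.
    motion : (H, W, 2) float32 per-pixel motion in pixels (dy, dx).
    No future frame is needed, so no extra latency is introduced;
    pixels that nothing maps onto simply keep their old colour here.
    """
    h, w, _ = frame.shape
    out = frame.copy()                       # crude fill for disocclusion holes
    ys, xs = np.mgrid[0:h, 0:w]
    ny = np.clip(np.rint(ys + motion[..., 0]), 0, h - 1).astype(np.int64)
    nx = np.clip(np.rint(xs + motion[..., 1]), 0, w - 1).astype(np.int64)
    out[ny, nx] = frame[ys, xs]              # scatter each pixel forward
    return out

# Toy usage: the whole scene drifts two pixels to the right.
frame = np.zeros((4, 8, 3), dtype=np.uint8)
frame[:, 2] = 255                            # a white vertical line at x = 2
motion = np.zeros((4, 8, 2), dtype=np.float32)
motion[..., 1] = 2.0
print(extrapolate_frame(frame, motion)[0, :, 0])  # the line now sits at x = 4
```

The price to pay is visible in the holes and smearing left behind when objects move or reveal new geometry, which is precisely the loss of precision discussed below.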

The geometric buffer supports XeSS © Intel

On the other hand, and Intel readily admits this, extrapolation loses some precision. To remedy this, Intel pairs it with XeSS which, as WCCFTech points out, uses a warping technique backed by a geometry buffer to preserve as much quality as possible. On paper, Intel may well strike an interesting compromise. In any case, it is interesting to see the company take a different path from AMD and NVIDIA. For us users, the most important thing will be to judge the results for ourselves, and may the best technology win!
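As a final, purely illustrative aside, here is a sketch of how depth taken from a geometry buffer can steady such a warp. This is only an example of depth-tested forward warping under assumed inputs, not Intel's published method: when several pixels land on the same target, the closest surface wins instead of whichever happens to be written last.

```python
import numpy as np

def warp_with_depth(frame, motion, depth):
    """Forward-warp `frame`, using G-buffer depth to settle collisions.

    frame  : (H, W, 3) uint8 colour buffer of the previous frame.
    motion : (H, W, 2) float32 per-pixel motion in pixels (dy, dx).
    depth  : (H, W)    float32 per-pixel depth from the geometry buffer.
    """
    h, w, _ = frame.shape
    out = frame.copy()
    best = np.full((h, w), np.inf, dtype=np.float32)  # closest depth seen per target
    ys, xs = np.mgrid[0:h, 0:w]
    ny = np.clip(np.rint(ys + motion[..., 0]), 0, h - 1).astype(np.int64)
    nx = np.clip(np.rint(xs + motion[..., 1]), 0, w - 1).astype(np.int64)
    # Plain per-pixel loop for readability; a real implementation lives on the GPU.
    for y in range(h):
        for x in range(w):
            ty, tx = ny[y, x], nx[y, x]
            if depth[y, x] < best[ty, tx]:
                best[ty, tx] = depth[y, x]
                out[ty, tx] = frame[y, x]
    return out
```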