Hmm, seems interesting. What's the reason for doing the raytracing on the CPU?
I can see a few reasons:
batch renders on a server (e.g. for stills or cutscenes)
comparisons across GPU architectures - a CPU render could essentially serve as the "standard" for how a scene should be rendered (see the sketch after this list)
And of course, maybe some CPU manufacturer will build in an accelerator so that lower-end GPUs (say, APUs) could do reasonable raytracing in otherwise GPU-limited games (I don't know enough about modern game pipelines to say whether that's feasible).
Or the final reason, which may be the most important of all: why not?
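As a rough illustration of the "reference render" idea above: here is a minimal sketch, in Python, of scoring a GPU render against a CPU render of the same scene with per-pixel RMSE. The file names cpu_reference.png and gpu_output.png are hypothetical, the images are assumed to be the same resolution, and the sRGB-to-linear conversion is only approximate.

```python
# Sketch: treat a CPU render as the reference image and measure how far a
# GPU render of the same scene deviates from it (hypothetical file names).
import numpy as np
from PIL import Image

def load_linear(path: str) -> np.ndarray:
    """Load an 8-bit image and convert sRGB values to approximate linear floats."""
    srgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64) / 255.0
    return srgb ** 2.2  # rough sRGB -> linear gamma approximation

def rmse(reference: np.ndarray, test: np.ndarray) -> float:
    """Root-mean-square error between two images of identical shape."""
    return float(np.sqrt(np.mean((reference - test) ** 2)))

if __name__ == "__main__":
    ref = load_linear("cpu_reference.png")  # hypothetical CPU "ground truth"
    gpu = load_linear("gpu_output.png")     # hypothetical GPU render to check
    print(f"RMSE vs CPU reference: {rmse(ref, gpu):.6f}")
```

In practice a perceptual metric would probably be a better fit than plain RMSE, but this is enough to show what comparing GPU outputs against a CPU reference would look like.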
I’ll add one to this - optimization. A lot of clever optimization techniques tend to come out of projects like this - necessity is the mother of invention.
If CPUs get strong enough, they could run older raytracing games at some point, especially on hardware platforms that don't have raytracing-capable GPUs available for them.