How far from optimal do you think modern game engines are at utilizing the GPU, generally speaking? Do you think this gap between potential and utilized performance grows or lessens with the current API & platform explosion and hardware development?

I think GPU utilization is more up to the individual game, and less on the engine itself. For example, something like Unity lets a game developer write their own shaders and compute shaders, build their own rendering effects, and so on -- which means the game developer has full power to utilize whatever the GPU offers (as long as Unity exposes access to that functionality).
The new graphics APIs (Vulkan, DX12, Metal) are first and foremost targeted at increasing CPU efficiency (via multithreading, lower driver overhead, etc.). They do enable some additional GPU features and a bit more GPU efficiency too, but that's not their primary goal.
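To make the multithreading point concrete, here's roughly what per-thread command recording looks like in Vulkan: each worker thread owns its own command pool, so recording needs no locking, which is exactly the kind of CPU-side win these APIs are after. This is only an illustrative sketch (the `ThreadCommands` struct and `recordInParallel` function are made up for this example, it assumes a `VkDevice` and queue family index already exist, and error checking is omitted); it is not how Unity or any particular engine actually does it.

```cpp
// Minimal sketch: per-thread command pools in Vulkan (error checking omitted).
// Assumes a VkDevice and a graphics queue family index were created elsewhere.
#include <vulkan/vulkan.h>
#include <cstdint>
#include <thread>
#include <vector>

struct ThreadCommands {
    VkCommandPool   pool = VK_NULL_HANDLE;   // one pool per thread: no locking needed
    VkCommandBuffer cmd  = VK_NULL_HANDLE;   // commands recorded by that thread
};

// Record command buffers on several threads at once. Older APIs serialized this
// work inside the driver; with Vulkan the application owns the threading.
std::vector<ThreadCommands> recordInParallel(VkDevice device,
                                             uint32_t queueFamilyIndex,
                                             uint32_t threadCount)
{
    std::vector<ThreadCommands> result(threadCount);
    std::vector<std::thread> workers;

    for (uint32_t i = 0; i < threadCount; ++i) {
        workers.emplace_back([&, i] {
            // Each thread creates and uses its own command pool, so no
            // synchronization with other threads is required while recording.
            VkCommandPoolCreateInfo poolInfo = {};
            poolInfo.sType = VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO;
            poolInfo.queueFamilyIndex = queueFamilyIndex;
            vkCreateCommandPool(device, &poolInfo, nullptr, &result[i].pool);

            VkCommandBufferAllocateInfo allocInfo = {};
            allocInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO;
            allocInfo.commandPool = result[i].pool;
            allocInfo.level = VK_COMMAND_BUFFER_LEVEL_PRIMARY;
            allocInfo.commandBufferCount = 1;
            vkAllocateCommandBuffers(device, &allocInfo, &result[i].cmd);

            VkCommandBufferBeginInfo beginInfo = {};
            beginInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
            beginInfo.flags = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT;
            vkBeginCommandBuffer(result[i].cmd, &beginInfo);

            // ... record this thread's slice of the frame here
            // (bind pipelines, issue draws, etc.) ...

            vkEndCommandBuffer(result[i].cmd);
        });
    }
    for (auto& t : workers)
        t.join();

    // The caller then submits all buffers to the queue (queue submission itself
    // must be externally synchronized) and destroys the pools afterwards.
    return result;
}
```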
That said, there are very interesting "non-standard" ways to use the GPU these days (e.g. see Media Molecule's Dreams or Q-Games' The Tomorrow Children). But I think that's more along the "interesting use" axis, and not necessarily the "use the GPU 100%" axis.
Using the GPU "to the max" is quite hard on the PC due to market realities -- you have to make sure your game works on GPUs that easily span a 10x performance range between low-end and high-end, and in some cases even more. And the fastest ones are typically a small part of the market share, which makes it hard to justify spending significant time developing something specifically for them. On mobile the differences between high-end and low-end are even more extreme. Even on consoles, we are no longer in a "fixed hardware for 7 years" cycle -- see PS4 and PS4 Pro; that platform already has two quite different hardware configurations.

About Aras Pranckevičius:

Graphics programmer and code plumber at Unity

Kaunas, Lithuania

#graphics #programming