Wednesday, December 21st 2016

Futuremark Readies New Vulkan and DirectX 12 Benchmarks

Futuremark is working on new game tests for its 3DMark benchmark suite. One of these is a game test that takes advantage of DirectX 12, but isn't as taxing on the hardware as "Time Spy." Its target hardware is notebook graphics and entry-level to mainstream graphics cards. It will be to "Time Spy" what "Sky Diver" is to "Fire Strike."

The next, more interesting move by Futuremark is a benchmark that takes advantage of the Vulkan 3D graphics API. The company will release this Vulkan-based benchmark for both Windows and Android platforms. Lastly, we've learned that development of the company's VR benchmarks is coming along nicely, and the company hopes to release new VR benchmarks for PC and mobile platforms soon. Futuremark is expected to reveal these new game tests and benchmarks at its 2017 International CES booth in early January.

29 Comments on Futuremark Readies New Vulkan and DirectX 12 Benchmarks

#1
looniam
serious question:

didn't time spy get criticized for its application of Async compute?

if so, i hope the vulkan bench gets better treatment.
#2
cdawall
where the hell are my stars
I'm super curious how Vulkan performs on Android. That is interesting.
#3
evernessince
looniam: serious question:

didn't time spy get criticized for its application of Async compute?

if so, i hope the vulkan bench gets better treatment.
It did create an issue because ultimately the benchmark allowed the video card vendor to decide whether or not to use Async compute.

Read the summary from Futuremark themselves

www.futuremark.com/pressreleases/a-closer-look-at-asynchronous-compute-in-3dmark-time-spy

In other words, Nvidia video cards only had to use the Async feature if it increased performance, otherwise they can completely ignore it. It's very bad marketing by Futuremark and isn't a proper test of Async Compute performance.

Right now Doom is really the only game out designed from the ground up for Async and you can see the performance benefits from that.
#4
the54thvoid
Intoxicated Moderator
evernessince: It did create an issue because ultimately the benchmark allowed the video card vendor to decide whether or not to use Async compute.

Read the summary from Futuremark themselves

www.futuremark.com/pressreleases/a-closer-look-at-asynchronous-compute-in-3dmark-time-spy

In other words, Nvidia video cards only had to use the Async feature if it increased performance, otherwise they can completely ignore it. It's very bad marketing by Futuremark and isn't a proper test of Async Compute performance.

Right now Doom is really the only game out designed from the ground up for Async and you can see the performance benefits from that.
I thought Doom simply had a far greater use of AMD extensions compared to very few from Nvidia.
Besides, it's wrong to say a game must use tonnes of Async compute simply because it's used by AMD's ACE units. Devs need to code for the market and putting in some Async is fine. Any Async helps GCN.
However, when it all comes down to it, it's a case of people constantly whining about one API over another, where it's somehow wrong if a suite doesn't try its hardest to fully utilise AMD hardware at the expense of Nvidia, and vice versa.

You can't expect DX12 or Vulkan applications to simply use all of AMD's hardware when it disadvantages Nvidia. The software Devs have to code with ALL vendors in mind.

Edit: found this.
Various general purpose AMD Vulkan extensions were quickly finished to enable specific optimizations for DOOM. This effort at AMD involved first working with id Software to understand their need, writing extension specs, getting prototype glslangValidator.exe support for those extensions for GLSL to SPIR-V translation (later sending a pull request to incorporate into the public tool), implementation from the shader compiler and driver teams, and finally testing efforts from the driver QA team.
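To put the quote above in context: from the application's side, using one of those vendor extensions comes down to checking that the driver exposes it and enabling it when the device is created. Below is a minimal, hypothetical C++ sketch against the standard Vulkan API; VK_AMD_shader_ballot is used purely as an example extension name, and error handling is omitted.

// Hypothetical sketch: enabling an AMD device extension only when the
// driver actually exposes it, so the same code still runs elsewhere.
#include <vulkan/vulkan.h>
#include <cstring>
#include <vector>

static bool HasDeviceExtension(VkPhysicalDevice gpu, const char* name)
{
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> props(count);
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, props.data());
    for (const auto& p : props)
        if (std::strcmp(p.extensionName, name) == 0)
            return true;
    return false;
}

VkDevice CreateDeviceWithOptionalAmdExtension(VkPhysicalDevice gpu,
                                              const VkDeviceQueueCreateInfo& queueInfo)
{
    std::vector<const char*> extensions;
    if (HasDeviceExtension(gpu, "VK_AMD_shader_ballot"))
        extensions.push_back("VK_AMD_shader_ballot");   // example AMD extension

    VkDeviceCreateInfo info = {};
    info.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
    info.queueCreateInfoCount = 1;
    info.pQueueCreateInfos = &queueInfo;
    info.enabledExtensionCount = static_cast<uint32_t>(extensions.size());
    info.ppEnabledExtensionNames = extensions.data();

    VkDevice device = VK_NULL_HANDLE;
    vkCreateDevice(gpu, &info, nullptr, &device);
    return device;
}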
#5
evernessince
the54thvoid: I thought Doom simply had a far greater use of AMD extensions compared to very few from Nvidia.
Besides, it's wrong to say a game must use tonnes of Async compute simply because it's used by AMD's ACE units. Devs need to code for the market and putting in some Async is fine. Any Async helps GCN.
However, when it all comes down to it, it's a case of people constantly whining about one API over another, where it's somehow wrong if a suite doesn't try its hardest to fully utilise AMD hardware at the expense of Nvidia, and vice versa.

You can't expect DX12 or Vulkan applications to simply use all of AMD's hardware when it disadvantages Nvidia. The software Devs have to code with ALL vendors in mind.
No one's saying a game must use a ton of Async compute, what we're saying is that a benchmark that advertises Async as something it tests is wrong when one card isn't really doing Async at all. Why does the Nvidia card get a pass just because it can't do Async well but on other benchmarks that are heavy on tessellation (Nvidia's strength) AMD cards don't get a pass? So it's fair for a benchmark to essentially give one card an alternate rendering path to avoid giving everyone an accurate rating of its Async compute ability?

FYI, Async compute does not disadvantage Nvidia hardware. It doesn't give it any performance loss or gain really so that whole argument that it hurts Nvidia's performance is out the window. Nvidia have had 2 generations of cards where they should have implemented Async but still have not. AMD have had Async in their cards since the 7000 series. At this point it's like having a processor without Hyper-threading, it's a huge feature of DX 12 and Vulkan.
#6
the54thvoid
Intoxicated Moderator
Async can be slightly detrimental to Nvidia. Enough benches show that. But while we're at it, Async isn't a game. It's a thing that can be utilised by AMD and Nvidia. AMD get a better performance from it. But still, it doesn't need to be heavily coded in. Also, the Vulkan quote I posted in my edit shows that AMD were very (rightly) keen to get Vulkan Doom working excellently with their hardware. They worked hard to develop the extensions to make it run as fast as it did. If AMD don't do the same work in other Vulkan titles, they won't have such a performance uplift.
It's all down to the same old story, if you put the work in (which requires resources) you benefit.

Edit: I'm at work so can't keep this discussion going :laugh:
#7
uuuaaaaaa
the54thvoid: Async can be slightly detrimental to Nvidia. Enough benches show that. But while we're at it, Async isn't a game. It's a thing that can be utilised by AMD and Nvidia. AMD get a better performance from it. But still, it doesn't need to be heavily coded in. Also, the Vulkan quote I posted in my edit shows that AMD were very (rightly) keen to get Vulkan Doom working excellently with their hardware. They worked hard to develop the extensions to make it run as fast as it did. If AMD don't do the same work in other Vulkan titles, they won't have such a performance uplift.
It's all down to the same old story, if you put the work in (which requires resources) you benefit.

Edit: I'm at work so can't keep this discussion going :laugh:
idSoftware wanted to get the most out of the consoles, so it made sense to optimize it for GCN... Anyway, the game runs great on both AMD and Nvidia. I reckon async benefits AMD's uarch more than Nvidia's due to the massively parallel nature of GCN.
#8
bug
evernessince: It did create an issue because ultimately the benchmark allowed the video card vendor to decide whether or not to use Async compute.

Read the summary from Futuremark themselves

www.futuremark.com/pressreleases/a-closer-look-at-asynchronous-compute-in-3dmark-time-spy

In other words, Nvidia video cards only had to use the Async feature if it increased performance, otherwise they can completely ignore it. It's very bad marketing by Futuremark and isn't a proper test of Async Compute performance.
You'd think that's how developers are supposed to use async (or any other feature) anyway. Forcing async on/off only makes sense if you're trying to benchmark async specifically.
#9
the54thvoid
Intoxicated Moderator
bug: You'd think that's how developers are supposed to use async (or any other feature) anyway. Forcing async on/off only makes sense if you're trying to benchmark async specifically.
Very good point. The relevant thing is, does the API run and how fast, irrespective of what features the vendor is or is not using.

Edit: As long as visual impact is not unduly lowered by omission of API features.
#10
FM_Jarnis
evernessince: It did create an issue because ultimately the benchmark allowed the video card vendor to decide whether or not to use Async compute.
This is so grossly incorrect that I must hop in.

As the article you linked clearly states, the application cannot control if the card uses async compute in DirectX 12. Period.

There is no way in DirectX 12 to force a card to use async compute. All the application can do is to submit the work in multiple queues, with Compute work labeled as such, which in practice means "this work here, this is compute stuff, it is safe to run it in parallel with graphics. You are free to do so. Do your best!". The rest is up to the drivers.

With DirectX 12, the video card driver always makes the decision as to how to process multiple DX12 command queues. The benchmark developer cannot force that, short of re-writing the driver - which is obviously somewhat beyond the capabilities of an application developer...

It is possible to force a system not to use Async Compute by submitting all work, even compute work, in a single DIRECT queue, essentially claiming that all this work can only be run sequentially, but 3DMark Time Spy does this only if you specifically turn on a setting in a Custom Run that is there so you can compare between the two. All Default Runs use Async Compute.
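For those wondering what "submitting the work in multiple queues" looks like from the application side, here is a minimal, hypothetical C++ sketch (standard Direct3D 12 API, names simplified, error handling omitted). Submitting on a separate compute queue only gives the driver permission to overlap that work with graphics; it does not force it to.

// Hypothetical sketch: exposing compute work on its own D3D12 queue.
// Whether the GPU actually runs it concurrently with graphics work is
// entirely up to the driver and hardware.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // Graphics (DIRECT) queue: accepts draw, copy and compute commands.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

    // Compute queue: work submitted here is labeled as compute-only,
    // telling the driver it is safe to run alongside graphics.
    D3D12_COMMAND_QUEUE_DESC compDesc = {};
    compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&computeQueue));
}

// Per frame (command lists assumed recorded elsewhere):
//   ID3D12CommandList* gfxLists[]     = { gfxList };
//   ID3D12CommandList* computeLists[] = { computeList };
//   graphicsQueue->ExecuteCommandLists(1, gfxLists);
//   computeQueue->ExecuteCommandLists(1, computeLists);
// Forcing the "no async" path simply means submitting computeLists on
// graphicsQueue as well, which serializes the work.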
#11
efikkan
looniam: didn't time spy get criticized for its application of Async compute?

if so, i hope the vulkan bench gets better treatment.
It was criticized because it didn't "show similar gains as other (AMD optimized) games".
evernessince: No one's saying a game must use a ton of Async compute, what we're saying is that a benchmark that advertises Async as something it tests is wrong when one card isn't really doing Async at all. Why does the Nvidia card get a pass just because it can't do Async well but on other benchmarks that are heavy on tessellation (Nvidia's strength) AMD cards don't get a pass? So it's fair for a benchmark to essentially give one card an alternate rendering path to avoid giving everyone an accurate rating of its Async compute ability?
Stop this BS now. The API traces clearly show async in action.
evernessince: FYI, Async compute does not disadvantage Nvidia hardware. It doesn't give it any performance loss or gain really so that whole argument that it hurts Nvidia's performance is out the window. Nvidia have had 2 generations of cards where they should have implemented Async but still have not. AMD have had Async in their cards since the 7000 series. At this point it's like having a processor without Hyper-threading, it's a huge feature of DX 12 and Vulkan.
As I've told you guys dozens of times: Async works by utilizing idle resources. Nvidia has better scheduling and fewer bottlenecks, keeping the GPU busier to begin with. That's why there is little gain in many games. If a GPU has 2-3% idle resources, then the overhead of multiple queues and synchronization is going to be greater than the benefits.

Many games are using async shaders for the wrong purpose to begin with. Async shaders were intended to utilize different hardware resources for different tasks, while many games (like AofS) use them for compute shaders, which mostly use the same resources as rendering. So basically games are optimizing for inferior hardware. As AMD progresses with Vega, Navi and so on, they'll have to create better schedulers, and then there will be less and less gain from doing this, so there is no point in writing games or benchmarks targeting bad hardware.
#12
RejZoR
Well, isn't the point of benchmarks to be as taxing as possible? This means heavy use of ASYNC compute, too. If your card can't deal with it, tough F luck. Fix your lame hardware and try again. It's how benchmarks and hardware vendors have always worked. Now they're gonna pander to one or the other. Just stop it.
#13
birdie
evernessince: In other words, Nvidia video cards only had to use the Async feature if it increased performance, otherwise they can completely ignore it. It's very bad marketing by Futuremark and isn't a proper test of Async Compute performance.
Why would a hardware vendor use a feature which makes its products perform slower than without it? Can we stop with the AMD/Asynchronous Compute fanboyism? D3D12 is not about Async Compute - it's just one of its features, and not the most crucial one. In fact, D3D applications may run just fine when Async Compute requests are executed synchronously. I vividly remember how everyone hated NVIDIA for using GameWorks, which still used standard D3D11 features yet made better use of NVIDIA hardware. Now we have the same situation with D3D12/AMD and everyone has suddenly forgotten this recent vendor-specific "debacle" and praises AMD for basically becoming the NVIDIA of the past. Ew!

I'm really glad 3DMark "noticed" Vulkan. If I were them I'd even make it a primary benchmark, but then I understand they don't want to be enemies with Microsoft.
evernessince: No one's saying a game must use a ton of Async compute, what we're saying is that a benchmark that advertises Async as something it tests is wrong when one card isn't really doing Async at all.
Pascal does support Async Compute. End of story.
#14
bug
RejZoR: Well, isn't the point of benchmarks to be as taxing as possible? This means heavy use of ASYNC compute, too. If your card can't deal with it, tough F luck. Fix your lame hardware and try again. It's how benchmarks and hardware vendors have always worked. Now they're gonna pander to one or the other. Just stop it.
Hm, you seem to be under the impression async is taxing by itself. It's not. It's just a different way of scheduling the actual work to be done.
FM_Jarnis posted a pretty clear explanation two posts above yours, but somehow you managed to miss it.
#15
RejZoR
I didn't miss anything. Benchmarks are benchmarks for a reason. ASYNC is the hottest stuff these days, so it's kinda expected of benchmarks to utilize it heavily in order to show hardware flaws. Which in this case means scheduling and branching capability. It's not a performance hit thing as much as it is a performance benefit if done right.
#16
bug
RejZoR: I didn't miss anything. Benchmarks are benchmarks for a reason. ASYNC is the hottest stuff these days, so it's kinda expected of benchmarks to utilize it heavily in order to show hardware flaws. Which in this case means scheduling and branching capability. It's not a performance hit thing as much as it is a performance benefit if done right.
You've certainly missed this:
There is no way in DirectX 12 to force a card to use async compute. All the application can do is to submit the work in multiple queues, with Compute work labeled as such, which in practice means "this work here, this is compute stuff, it is safe to run it in parallel with graphics. You are free to do so. Do your best!".
Besides, async is not the hottest thing right now. Outside of AMD, async is a non-issue.
Async will be beneficial in the future, but today, off the top of my head, we have Nvidia, Intel and consoles all doing just fine without async.
#17
RejZoR
Well, if async is not hot stuff, then DX12 is pretty much irrelevant. DX12 brings two important things: async and a closer-to-the-metal API. Why on earth would you sack 50% of its most important features?
#18
bug
RejZoR: Well, if async is not hot stuff, then DX12 is pretty much irrelevant. DX12 brings two important things: async and a closer-to-the-metal API. Why on earth would you sack 50% of its most important features?
Because it's not 50%?
On one hand you have a whole new API, on the other you have async which (from an API point of view) is the overload of a function to accept a queue as an argument.
It's only 50% if you count those as words in the dictionary.
#19
birdie
RejZoR: Well, if async is not hot stuff, then DX12 is pretty much irrelevant. DX12 brings two important things: async and a closer-to-the-metal API. Why on earth would you sack 50% of its most important features?
You're an idiot. Sometimes it helps to at least Google for a minute or two to stop being arrogantly illiterate.

D3D12's biggest feature is a completely new, very close-to-the-metal API which allows you to extract more performance from your GPU and always get expected results by running your 3D/shader/compute code directly on your hardware, vs. D3D11 and earlier, which employ very complex OS drivers that translate all your API calls into hardware instructions.
#20
ADHDGAMING
uuuaaaaaa: idSoftware wanted to get the most out of the consoles, so it made sense to optimize it for GCN... Anyway, the game runs great on both AMD and Nvidia. I reckon async benefits AMD's uarch more than Nvidia's due to the massively parallel nature of GCN.
It benefits both manufacturers, it's just that AMD did it better. Let's call it what it is, because if it were the other way around AMD would never hear the end of it and the sky would be falling.
#21
efikkan
RejZoR: Well, isn't the point of benchmarks to be as taxing as possible? This means heavy use of ASYNC compute, too. If your card can't deal with it, tough F luck. Fix your lame hardware and try again. It's how benchmarks and hardware vendors have always worked. Now they're gonna pander to one or the other. Just stop it.
No, the point of benchmarks like the ones from Futuremark is to represent typical loads, not obscure extremes users will never run into. But still, sometimes Futuremark weighs certain new features too heavily compared to actual games.

Pure bottleneck benchmarks serve no purpose other than curiosity, like measuring "API overhead", GPU memory bandwidth, etc. Just a few years ago many reviews included benchmarks at 1024x768, just to see CPU bottlenecks. But those kinds of benchmarks are worthless if no one cares about a high-end GPU at that screen resolution. As always, the only thing that matters is real-world performance. It doesn't matter if AMD's comparable products have more Gflop/s, more memory bandwidth, or more "gain" from certain features; at the end of the day actual performance is the only measurement.
#22
RejZoR
That's why Futuremark benchmarks from the past ran at 10fps on higher end cards, right?
#23
Xzibit
birdie: Pascal does support Async Compute. End of story.
The 1060 isn't showing any gains. Does The Division even make use of async compute in the DX12 patch?
#24
renz496
evernessince: No one's saying a game must use a ton of Async compute, what we're saying is that a benchmark that advertises Async as something it tests is wrong when one card isn't really doing Async at all. Why does the Nvidia card get a pass just because it can't do Async well but on other benchmarks that are heavy on tessellation (Nvidia's strength) AMD cards don't get a pass? So it's fair for a benchmark to essentially give one card an alternate rendering path to avoid giving everyone an accurate rating of its Async compute ability?

FYI, Async compute does not disadvantage Nvidia hardware. It doesn't give it any performance loss or gain really so that whole argument that it hurts Nvidia's performance is out the window. Nvidia have had 2 generations of cards where they should have implemented Async but still have not. AMD have had Async in their cards since the 7000 series. At this point it's like having a processor without Hyper-threading, it's a huge feature of DX 12 and Vulkan.
Because async is mostly AMD's problem, not Nvidia's. Why do you think the first GCN cards had the hardware even though the API (DX11) did not support its usage? Why was async compute only incorporated into DirectX with DX12? Because AMD had been pushing for async compute to be part of the API since the very first iteration of GCN, but it did not become a reality until DX12. So for Nvidia there was no such thing as implementing the proper hardware before it was part of the API. Also, MS did not dictate exactly how async compute must be designed, which actually led to different GPU makers implementing async compute in their GPUs in different ways.
#25
evernessince
efikkan: It was criticized because it didn't "show similar gains as other (AMD optimized) games".


Stop this BS now. The API traces clearly show async in action.


As I've told you guys dozens of times: Async works by utilizing idle resources. Nvidia has better scheduling and fewer bottlenecks, keeping the GPU busier to begin with. That's why there is little gain in many games. If a GPU has 2-3% idle resources, then the overhead of multiple queues and synchronization is going to be greater than the benefits.

Many games are using async shaders for the wrong purpose to begin with. Async shaders were intended to utilize different hardware resources for different tasks, while many games (like AofS) use them for compute shaders, which mostly use the same resources as rendering. So basically games are optimizing for inferior hardware. As AMD progresses with Vega, Navi and so on, they'll have to create better schedulers, and then there will be less and less gain from doing this, so there is no point in writing games or benchmarks targeting bad hardware.
You need to read before you jump into a conversation and call BS on other people's comments. I never said Nvidia doesn't use Async.