Heads up to folks thinking of buying an X3D CPU. Things to think about, and beware of the hype.

A little background: I do performance tuning for video game developers professionally, on the side, and have been for a few years now. The work usually involves running games on multiple different hardware profiles at varying in-game settings. There has been quite a bit of buzz around the X3D CPUs that have come out, and I am here to warn folks that, as with any heavily hyped product, you should be very wary of what the claims are and who is making them.

TL;DR: if you have a "modern" GPU like a 2070 or better, an X3D chip is likely WORSE to have than not. Buyer beware.

First, a primer on X3D cache technology. Essentially, X3D cache supplements a CPU's ability to process certain instruction sets by providing an additional layer of cache to allow for compression and decompression of data. It is incredibly difficult to explain how it works without getting into deep mathematics, but ultimately it provides a dedicated cache for handling computations your CPU is usually bad at. It does NOTHING for the vast majority of instruction sets a processor handles.
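For a rough feel of why an extra slab of cache can matter a lot for some workloads and barely at all for others, here is a minimal sketch. This is my own toy demo in C++, not anything from AMD, and the buffer sizes and pass counts are arbitrary: the same amount of summing runs far faster when the working set fits in cache than when it has to stream from main memory.

```cpp
// Illustrative only: a toy demo that cache capacity matters for some workloads.
// This is NOT a description of how X3D/V-Cache works internally; it only shows
// that a working set which fits in cache is processed much faster than one
// that spills to main memory. Sizes are arbitrary.
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <numeric>
#include <vector>

static double sum_repeatedly(const std::vector<std::uint64_t>& data, int passes) {
    using clock = std::chrono::steady_clock;
    volatile std::uint64_t sink = 0;             // keep the loop from being optimized away
    auto start = clock::now();
    for (int p = 0; p < passes; ++p)
        sink = sink + std::accumulate(data.begin(), data.end(), std::uint64_t{0});
    auto stop = clock::now();
    return std::chrono::duration<double, std::milli>(stop - start).count();
}

int main() {
    // ~4 MiB fits in most L3 caches; ~256 MiB does not.
    std::vector<std::uint64_t> small((4u << 20) / sizeof(std::uint64_t), 1);
    std::vector<std::uint64_t> large((256u << 20) / sizeof(std::uint64_t), 1);

    // Equal total work: the small buffer is walked 64x more often.
    std::printf("small (cache-resident): %.1f ms\n", sum_repeatedly(small, 640));
    std::printf("large (memory-bound):   %.1f ms\n", sum_repeatedly(large, 10));
}
```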

The sales pitch behind X3D: AMD has done a great job hyping X3D. They show multiple benchmarks claiming increases in performance compared to a "same tier" CPU. In fact, the first CPU that AMD showed off with X3D technology is a CPU we never got. AMD built a prototype 3950X3D and pitted it against a standard 3950X. Their benchmark results are pretty much in line with the 5800X vs 5800X3D numbers: lower core voltage and CPU clocks, but better "FPS" in certain situations. But what are those "certain situations"? Well, you'll notice that all official AMD benchmarks are "same game with same settings" and an "unspecified but identical GPU". Those of you who are fans of the scientific method will already see some huge red flags there. In a test comparing performance, they are leaving a very, very large number of variables on the table. And this is also the case with most sponsored reviews of the 5800X3D.

There is something very interesting to note in the results: some games benefit more in these unspecified benchmarks, and pretty much across the board it breaks down by engine. On top of that, you'll notice something VERY interesting: the 5800X3D often loses any edge it may have, and even performs worse, in benchmarks where two things are true. 1: graphics are set to high, very high, or ultra, and 2: the GPU used is a more modern GPU, usually something like a 2070 or better.

Why is this, you may ask? Well, it largely comes down to how the game engines work. Every engine is slightly different, but let's focus on what EFT uses: Unity. In Unity there are a few graphics options that are the real culprits at play: object LOD, view distance, and shadow detail. An interesting behavior in many (but not all) game engines is that when you turn two of those settings down (LOD and shadows), the developers MAKE THE ASSUMPTION that you have a lower-powered GPU, and the engine shifts certain work off the GPU to be handled by the CPU. These are the instructions that are handled within the new X3D cache layer on X3D chips. HOWEVER, no matter how good your CPU is at handling these instructions, a 2070 can do it far, far better. In fact, it takes a 2070 about 1/3 the time to process them. So by turning up shadow detail and object LOD, you shift away from using the CPU to using your GPU. In this case, an X3D CPU is actually WORSE than a non-X3D one.
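For a rough picture of what CPU-side LOD work looks like, here is a simplified sketch of distance-based LOD selection. The names and thresholds are hypothetical, not Unity's actual API: every frame the CPU walks the objects, measures the distance to the camera, and picks which mesh version to hand to the GPU.

```cpp
// Simplified, hypothetical sketch of per-frame LOD selection on the CPU.
// Not Unity's API; names and thresholds are made up for illustration.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

struct SceneObject {
    Vec3 position;
    int  lod_count;        // number of mesh detail levels available
};

static float distance(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Pick a mesh detail level from the camera distance: nearer objects get
// higher-detail meshes. The LOD "setting" biases the cutoff distances.
static int pick_lod(float dist, int lod_count, float lod_bias) {
    float cutoff = 25.0f * lod_bias;            // first cutoff in metres (arbitrary)
    for (int level = 0; level < lod_count - 1; ++level) {
        if (dist < cutoff) return level;
        cutoff *= 2.0f;                         // each level covers twice the range
    }
    return lod_count - 1;                       // coarsest mesh for far objects
}

int main() {
    Vec3 camera{0, 1.7f, 0};
    std::vector<SceneObject> scene = {
        {{10, 0, 5},   4},
        {{80, 0, -30}, 4},
        {{400, 0, 60}, 4},
    };

    // One frame's worth of CPU-side LOD decisions.
    for (const auto& obj : scene) {
        float d   = distance(camera, obj.position);
        int   lod = pick_lod(d, obj.lod_count, /*lod_bias=*/1.0f);
        std::printf("object at %.0f m -> submit LOD %d to the GPU\n", d, lod);
    }
}
```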

Well, what about view distance? Funny thing about view distance and object culling in Unity: your GPU only renders what is within "view distance" and is actually viewable, but your CPU knows where EVERYTHING on a map is, all the time. Generally you turn down view distance because your GPU can't keep up with rendering objects, but with EFT that is rarely the case. This game is not graphically intense: the textures are fairly low resolution and the objects are fairly low poly, so again, if you have a good GPU, there is no reason to set this low.

Additionally, every time an object enters your "view distance" sphere, the CPU has to send a signal to your GPU telling it to render that object. So when you set your view distance lower, you are likely to have more objects entering and exiting your area, meaning your CPU needs to do more work. Guess what… these instructions are also handled by the X3D cache.
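To make the view-distance part concrete, here is another hedged toy sketch, again hypothetical code rather than any engine's real internals: the CPU tracks every object, decides each frame which ones sit inside the view-distance sphere, and flags the ones that just crossed into it as needing a "start rendering this" notification. A smaller radius means more of those boundary crossings as you move around, so more per-frame bookkeeping.

```cpp
// Hypothetical sketch of CPU-side view-distance culling. Not real engine code.
// The CPU tracks every object; each frame it checks which ones sit inside the
// view-distance sphere and counts the ones that just became visible.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

static float dist(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

struct WorldObject {
    Vec3 position;
    bool was_visible = false;   // visibility from the previous frame
};

// Returns how many objects newly entered the view sphere this frame,
// i.e. how many "render this now" notifications the CPU had to issue.
static int cull_and_notify(std::vector<WorldObject>& world,
                           const Vec3& player, float view_distance) {
    int newly_visible = 0;
    for (auto& obj : world) {
        bool visible = dist(obj.position, player) <= view_distance;
        if (visible && !obj.was_visible) ++newly_visible;   // crossed into view
        obj.was_visible = visible;
    }
    return newly_visible;
}

int main() {
    // A crude line of objects every 20 m along the player's path.
    std::vector<WorldObject> world;
    for (int i = 0; i < 100; ++i) world.push_back({{i * 20.0f, 0, 0}});

    // Pace the player back and forth and compare a short vs long view distance.
    for (float view_distance : {200.0f, 2000.0f}) {
        for (auto& obj : world) obj.was_visible = false;
        int notifications = 0;
        for (int frame = 0; frame < 2000; ++frame) {
            // Player walks between x = 0 and x = 1000 and back, 5 m per frame.
            int phase = frame % 400;
            float x = (phase < 200 ? phase : 400 - phase) * 5.0f;
            notifications += cull_and_notify(world, {x, 0, 0}, view_distance);
        }
        std::printf("view distance %4.0f m -> %d enter-view notifications\n",
                    view_distance, notifications);
    }
}
```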

So what does all this mean? Well, a view distance of 2000 likely means your CPU almost NEVER has to actually send a signal to your GPU, so your CPU ends up doing less work. In this case, X3D is again WORSE than non-X3D.

Why is X3D worse than non-X3D in these cases? Well… the extra layer of cache takes voltage to run. That voltage comes out of your CPU's core voltage budget, which means your CPU runs at a lower frequency, which means your CPU can do less work.
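For a concrete number on the clock side, AMD's spec sheets list the 5800X at up to 4.7 GHz boost and the 5800X3D at up to 4.5 GHz, so at peak boost the X3D part gives up roughly 0.2 / 4.7 ≈ 4% of its clock before any benefit from the extra cache is counted.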

X3D is good in two situations:

1: You have an underpowered GPU for the game you are playing. In EFT, this means you are probably running a 1080 or worse at 1080p, or a 2070 Super or worse at 1440p.

2: You have a good GPU, but your graphics settings for LOD, shadows, and view distance are set poorly.

Is AMD lying to people? No. Their technology is designed to help people who have graphical bottlenecks: people running budget PCs with old GPUs. Are they intentionally being disingenuous with their benchmarking? Possibly. They are a business, after all, and if they can market X3D to a larger audience and make sales because their customers have things configured poorly, that is a huge win for them as well.

Source: https://www.reddit.com/r/EscapefromTarkov/comments/xg4it4/heads_up_to_folks_thinking_of_buying_an_x3d_cpu/
