The worst-kept secret that is the Nvidia GeForce RTX 4070 is just around the corner. So, here comes AMD trying to get its punches in first by trash talking the new GPU's VRAM allocation. Not necessarily overtly, but by implication. And graphs.
AMD has a new blog post demonstrating how it sees the impact of 12GB of VRAM versus 16GB when gaming at 4K. Needless to say, AMD's published numbers for 12GB look ugly (via Tom's Hardware).
Among the GPUs AMD uses to make the comparison are its own RX 6800 XT and Nvidia's RTX 3070 Ti. The former runs 16GB, the latter just 8GB. But with the RTX 4070 supposedly days away from launch and every leak under the sun expecting it to offer 12GB of graphics memory, the timing of this critical salvo from AMD is unlikely to be a coincidence.
Intriguingly, and as is becoming increasingly clear, AMD's figures show that enabling ray tracing often causes a big jump in VRAM usage. The ironic consequence? Switching on what is traditionally a competitive advantage for Nvidia can push VRAM usage beyond 12GB and turn the tables, allowing AMD GPUs to outperform Nvidia GPUs with ray tracing enabled.
Just to absolutely hammer home the point, AMD also shows advantageous numbers for its new 20GB RX 7900 XT versus the 12GB RTX 4070 Ti and a few other GPU combos where the AMD board delivers a lot more VRAM for the money.
The joker in all of this is, of course, upscaling. Rendering at a lower resolution and upscaling reduces VRAM usage, so if Nvidia's DLSS upscaling tech wasn't already very important, it could become absolutely critical in the near future, and not just for the RTX 4070.
Current rumours indicate the upcoming RTX 4060 and RTX 4060 Ti will offer just 8GB of VRAM. That could be a limitation for native resolution gaming in some titles, even at 1080p. In which case, the competitiveness of those GPUs could hinge on using DLSS by default.
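As a rough illustration of why upscaling eases VRAM pressure, here's a minimal sketch of the internal render resolutions behind a 4K output. The per-axis scale factors below are the commonly cited ones for each DLSS mode, not figures from AMD's post, and actual VRAM savings depend on more than render-target size:

```python
# Rough sketch: internal render resolution for a given DLSS mode and output resolution.
# Scale factors are the commonly cited per-axis values; real-world VRAM savings also
# depend on textures and other assets, not just render targets.
DLSS_MODES = {
    "Quality": 2 / 3,
    "Balanced": 0.58,
    "Performance": 0.5,
    "Ultra Performance": 1 / 3,
}

def render_resolution(output_w: int, output_h: int, mode: str) -> tuple[int, int]:
    scale = DLSS_MODES[mode]
    return round(output_w * scale), round(output_h * scale)

for mode in DLSS_MODES:
    print(mode, render_resolution(3840, 2160, mode))
# Quality -> (2560, 1440), Performance -> (1920, 1080): far fewer pixels rendered per frame.
```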
If limited VRAM does prove a problem for Nvidia, the sticking point will be memory bus width. Nvidia has been particularly aggressive in trimming bus widths across its RTX 40-series GPUs, with that rumoured step down to 128-bit for the xx60-series cards.
The problem with that is how bus width dictates memory capacity, given the specs of existing memory chips: current GDDR6 and GDDR6X chips connect over 32-bit interfaces and top out at 2GB apiece. The RTX 4070 Ti's 192-bit bus means six chips, which effectively limits Nvidia to a choice of 12GB, or 24GB by doubling up chips in clamshell mode. It can't do 16GB with a 192-bit bus.
Equally, the rumoured 128-bit bus of the RTX 4060 results in a choice between 8GB and 16GB. The latter is tricky for Nvidia, both in terms of cost and the potential for awkward comparisons within its own GPU lineup. Can it really offer more VRAM on the RTX 4060 than the RTX 4070?
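For the curious, the sums behind those options look something like the following: a minimal sketch assuming 32-bit memory chips in 1GB or 2GB densities, with clamshell mode doubling the chip count per channel:

```python
# Back-of-the-envelope VRAM options implied by a memory bus width.
# Assumes 32-bit GDDR6/GDDR6X chips in 1GB or 2GB densities, with optional
# clamshell mode placing two chips on each 32-bit channel to double capacity.
def vram_options(bus_width_bits: int) -> list[int]:
    channels = bus_width_bits // 32              # one chip per 32-bit channel
    options = set()
    for density_gb in (1, 2):
        options.add(channels * density_gb)       # standard configuration
        options.add(channels * density_gb * 2)   # clamshell configuration
    return sorted(options)

print(vram_options(192))  # [6, 12, 24] -> realistically 12GB or 24GB for this tier
print(vram_options(128))  # [4, 8, 16]  -> realistically 8GB or 16GB
```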
It remains to be seen exactly what specs the desktop RTX 4070 and RTX 4060 will have, and indeed how impactful VRAM sizes will prove for this generation of GPUs.
But at this point it does seem quite likely that, more than in any other recent generation of GPUs from Nvidia and AMD, how much VRAM you get with your GPU could be a major battleground. Watch this space.