By Jarred Walton, published June 9, 2022
But how will they fit together and perform?
In its Financial Analyst Day briefing today, AMD shared its GPU roadmap along with additional details on its upcoming RDNA 3 architecture. While the company unsurprisingly didn't go into great detail, what it shared provides plenty of food for thought. AMD says its RDNA 3 chips will debut before the end of the year, at which point at least some of the new cards will likely find a spot on our best graphics cards list and help restructure the top of our GPU benchmarks hierarchy. Let's dig into the information AMD provided.
AMD projects RDNA 3 will deliver more than a 50% uplift in performance per watt. As usual, that claim deserves a healthy dose of skepticism: performance per watt varies along a GPU's voltage and frequency curve, so a single figure can describe many different operating points. AMD made the same 50% perf-per-watt claim for its RDNA 2 architecture, and Nvidia claimed its Ampere architecture delivered 50% more performance per watt than the preceding Turing architecture. Both claims were certainly true in some cases, but many comparisons showed far slimmer gains.

For example, based on our performance and power testing, the RX 6900 XT is up to 112% faster than the RX 5700 XT while using 44% more power. That represents roughly a 48% increase in performance per watt (perf/watt), close enough to 50% that we won't argue the point. But other comparisons yield different results. The RX 6700 XT is about 32% faster than the RX 5700 XT while using basically the same amount of power, and if you want a really bad example, the RX 6500 XT is 22% slower than the RX 5500 XT 8GB while using 29% less power. That's a net perf/watt improvement of just 10%.

The story is the same for Nvidia. The RTX 3080 is about 35% faster than the RTX 2080 Ti (at 4K) but uses 28% more power, so raw perf/watt improved by only 5%. Alternatively, the RTX 3070 is about 23% faster than the RTX 2080 Super while using 11% less power, nearly a 40% improvement in perf/watt. As one final example, the RTX 3060 is 33% faster than the RTX 2060 while using 7% more power, a 24% increase in perf/watt.

The point is that AMD's claim of 50% higher performance per watt is a best-case scenario. It could mean RDNA 3 is 50% faster at the same power as RDNA 2, or it could mean the same performance at 33% less power. Actual products will generally land somewhere between those two extremes, and it's a safe bet that not every comparison will show a 50% improvement in performance per watt.
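To make the arithmetic explicit: the perf/watt change is simply (1 + performance change) divided by (1 + power change), minus one. Here's a quick Python sketch that reproduces the comparisons above from the rounded figures we quoted; small discrepancies (47% vs. 48% for the RX 6900 XT, for instance) come from feeding in those rounded inputs.

```python
def perf_per_watt_gain(perf_delta: float, power_delta: float) -> float:
    """Relative perf/watt change, given relative performance and power changes.

    Both inputs are fractional changes versus the older card, e.g.
    +1.12 means 112% faster, -0.29 means 29% less power.
    """
    return (1 + perf_delta) / (1 + power_delta) - 1

# Approximate figures quoted above, from our performance and power testing:
comparisons = {
    "RX 6900 XT vs RX 5700 XT": (1.12, 0.44),
    "RX 6500 XT vs RX 5500 XT 8GB": (-0.22, -0.29),
    "RTX 3080 vs RTX 2080 Ti": (0.35, 0.28),
    "RTX 3070 vs RTX 2080 Super": (0.23, -0.11),
    "RTX 3060 vs RTX 2060": (0.33, 0.07),
}

for name, (perf, power) in comparisons.items():
    print(f"{name}: {perf_per_watt_gain(perf, power):+.0%} perf/watt")
```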
Other RDNA 3 details confirm things we had already assumed: It will use a 5nm process technology (almost certainly TSMC N5 or N5P), and it will support "advanced multimedia capabilities," including AV1 encode and decode. AMD also said RDNA 3 includes DisplayPort 2.0 connectivity, which had already been rumored.

The architecture features a reworked compute unit (CU), the main building block of AMD's RDNA GPUs, and we've heard quite a few rumors about this. The most likely scenario is that AMD will do something similar to what Nvidia did with Turing and Ampere by adding extra compute pipelines. Turing has separate FP32 and INT32 pipelines, and Ampere added FP32 support to the INT32 pipeline, potentially doubling the FP32 compute per streaming multiprocessor (SM), Nvidia's equivalent of the CU. If the rumors are correct, RDNA 3 should see a big jump in theoretical compute performance relative to RDNA 2; how that translates into actual performance remains to be seen. Not surprisingly, AMD also promises a next-generation Infinity Cache, which proved quite effective at boosting performance across the various RDNA 2 GPUs. That could simply mean a larger cache, or the cache could be reorganized in other ways. Given the other expected architectural updates, we expect AMD to move its top-tier RDNA 3 GPU to a wider memory interface, probably 384-bit, but the company hasn't commented on that yet.
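To show why a dual-issue FP32 pipeline matters for the paper specs, here's a back-of-the-envelope Python sketch. The RDNA 2 baseline (RX 6900 XT: 5,120 shaders at roughly a 2.25 GHz boost clock) is real; the RDNA 3 figure is purely hypothetical, since AMD hasn't confirmed shader counts, clocks, or a dual-issue design.

```python
def fp32_tflops(shaders: int, clock_ghz: float, flops_per_clock: int = 2) -> float:
    """Theoretical FP32 throughput in TFLOPS: shaders * FLOPs/clock * clock.

    flops_per_clock = 2 counts a fused multiply-add as two FLOPs per shader;
    a dual-issue FP32 pipeline (as rumored for RDNA 3, and as Ampere does
    per SM) would double that to 4.
    """
    # shaders * FLOPs/clock * GHz yields GFLOPS; divide by 1000 for TFLOPS.
    return shaders * flops_per_clock * clock_ghz / 1000

# RX 6900 XT (RDNA 2): 5,120 shaders at ~2.25 GHz boost
print(f"RDNA 2 baseline:  {fp32_tflops(5120, 2.25):.1f} TFLOPS")  # ~23.0

# Hypothetical RDNA 3 part: same shader count and clock, but dual-issue FP32
print(f"Dual-issue guess: {fp32_tflops(5120, 2.25, flops_per_clock=4):.1f} TFLOPS")  # ~46.1
```

On paper, that's double the throughput without adding a single shader, which is exactly why theoretical TFLOPS jumped so dramatically from Turing to Ampere without a proportional jump in real-world gaming performance.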
Last and certainly not least is perhaps the most interesting aspect of RDNA 3: AMD says it will use advanced packaging technologies combined with a chiplet architecture. We've seen a somewhat similar move with the X670/X670E chipsets, and AMD has proven it can do very interesting things with its CPU chiplet designs. What this means for GPUs, however, isn't entirely clear.

One option would be a main hub chip containing the memory interface, display and media hardware, and other core logic, with GPU chiplets focused on raw compute linking to that hub via Infinity Fabric. There could be anywhere from one to perhaps four GPU chiplets, offering a wide range of scalability. However, with this approach AMD would likely need several different hub chips for the various product tiers, which would increase cost and complexity.

What's more likely is that AMD will build a base GPU chiplet targeting the midrange performance segment, with the ability to link two, and perhaps as many as four, of these chiplets together via a high-speed interconnect. It would be somewhat like SLI and CrossFire, but without resorting to alternate frame rendering: the OS would see a single GPU even when multiple chiplets are present, and the architecture would handle data sharing between the chiplets. AMD has already done something like this with its Aldebaran MI250X data center GPUs, and Intel seems to be taking a similar approach with its Xe-HPC designs. AMD also revealed details on its data center CDNA 3 GPUs suggesting those will have up to four GPU chiplets. However, those are compute-focused architectures rather than real-time graphics designs, so different hurdles will need to be overcome. Regardless, AMD has plenty of chiplet experience: it has been putting multiple chips in a package since the first EPYC CPUs back in 2017, and shipping true chiplet designs since Zen 2 in 2019. It should have ample knowledge of how to make such a design work for GPUs as well.
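As a toy illustration of why chiplet scaling is attractive (and why it's rarely perfectly linear), consider the Python sketch below. Everything here is assumed for illustration: the chiplet counts, the 90% per-chiplet scaling efficiency, and the idea that performance composes this simply are our own simplifications, not AMD's design.

```python
from dataclasses import dataclass

@dataclass
class ChipletGPU:
    """Toy model of a hypothetical multi-chiplet GPU (not AMD's actual design).

    The OS sees one device; internally, work is split across chiplets and the
    interconnect taxes scaling, so n chiplets deliver less than n times the
    performance of a single chiplet.
    """
    base_perf: float      # relative performance of one chiplet (1.0 = some baseline card)
    chiplets: int         # one to four chiplets, per the speculation above
    scaling: float = 0.9  # assumed per-additional-chiplet scaling efficiency

    def effective_perf(self) -> float:
        # First chiplet runs at full speed; each extra one adds scaling * base_perf.
        return self.base_perf * (1 + self.scaling * (self.chiplets - 1))

for n in (1, 2, 4):
    gpu = ChipletGPU(base_perf=1.0, chiplets=n)
    print(f"{n} chiplet(s): {gpu.effective_perf():.2f}x")
```

With those made-up numbers, two chiplets land at about 1.9x and four at about 3.7x a single chiplet, which is the kind of scaling the "double or potentially quadruple" scenario later in this article assumes.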
Finally, AMD shared the above GPU roadmap, which officially shows RDNA 4 with a target release sometime before the end of 2024. As you can imagine, AMD didn't provide any information on RDNA 4 beyond its existence on the roadmap and the "Navi 4x" naming scheme. It will likely use a 3nm process technology, or some equivalent under a different name, and based on previous launches, late 2024 is a safer bet than early 2024. AMD also reiterated its plans for a 2022 launch of RDNA 3, which is good to see, though even a very limited launch would technically qualify. Hopefully AMD pushes out multiple graphics card tiers before the end of the year, but we'll have to wait and see.

There's plenty we still don't know about RDNA 3, like core counts, clock speeds, and other architectural enhancements. AMD has another GPU family in its CDNA architecture, which we've covered separately, but so far there hasn't been much overlap between RDNA and CDNA: the GPU cores might be similar, but RDNA has ray accelerators while CDNA includes matrix cores. RDNA 3 might follow the examples of Intel Arc and Nvidia RTX and include some form of tensor/matrix core, but AMD hasn't suggested any such change so far. We still have plenty of questions, chief among them the actual launch date and performance of RDNA 3. We'd love to see AMD deliver a performance boost similar to what RDNA 2 brought (up to double the performance of RDNA), even if that means higher power consumption. Imagine if the base GPU chiplet could match the performance of an RX 6900 XT, and then AMD links two or more chiplets together to double or potentially quadruple performance. That would be awesome! But dreams are free, so until AMD details the exact architectural changes, we'll try to wait patiently.
Jarred Walton is a senior editor at Tom's Hardware focusing on everything GPU. He has been working as a tech journalist since 2004, writing for AnandTech, Maximum PC, and PC Gamer. From the first S3 Virge '3D decelerators' to today's GPUs, Jarred keeps up with all the latest graphics trends and is the one to ask about game performance.