RTX 5080 'Real World' Tests: AM4/DDR4 (5900X) & AM5/DDR5 (7600)
I recently had my NVIDIA RTX 5080 Founders Edition delivered, and I wanted to see how my two computers compare in gaming to the big tech reviewers' results – they all use AM5/DDR5 systems with the fastest gaming CPU (the 9800X3D).
Can my 12-core AM4 system (Ryzen 5900X) and 6-core AM5 system (Ryzen 7600) deliver decent gaming results at 4K? And do I get similar gaming performance with AM4/DDR4 as with my AM5/DDR5 PC? Let’s find out!
I played the following games during my testing:
- Hogwarts Legacy
- Metro Exodus
- Battlefield 2042
- Assassin’s Creed Mirage
- Cyberpunk Native 4K
- Cyberpunk 4K DLSS
- Guardians Of The Galaxy
- Red Dead Redemption 2
I also ran through 3DMark’s Steel Nomad tests. The specs of my two PCs are:
AM4 Specs
- CPU: Ryzen 5900X (12C/24T)
- Mobo: MSI MAG Tomahawk X570
- RAM: 64GB DDR4 (2x32GB 3200MHz)
AM5 Specs
- CPU: Ryzen 7600 (6C/12T)
- Mobo: ASRock B650M Pro RS
- RAM: 32GB DDR5 (2x16GB 6000MHz CL30)
If you prefer text over video, please read on for the guide/transcript version of this video.
Video Transcript And Guide
Hey everyone, I recently upgraded my graphics card to the Nvidia RTX 5080, which is a brilliant graphics card in my testing so far. But the first thing I actually wanted to do was test out on my two normal computers—whatever “normal” is, of course.
And that’s because a lot of the tech reviewers use the most powerful hardware possible: a 9800X3D CPU, the fastest RAM possible, and a PCIe Gen 5 motherboard. That makes sense, of course, because it eliminates bottlenecks. But many of us don’t have hardware quite that powerful.
For example, my main workstation computer, which I mainly use for video editing, has a 5900X, which is stuck on AM4. And yes, it’s got 64 gigs of RAM, but again, that’s DDR4 running at 3,200 MHz, so it’s not as powerful, or anywhere near as powerful actually, as what the tech reviewers are using.
And my other computer, which is AM5, is more of a budget build. It has a B650 motherboard, a Ryzen 7600 CPU, which is just six cores, and 32 gigs of RAM. And yes, it’s 6,000 MHz and CL30, but that’s still slower than the RAM the tech reviewers use.
So I was interested to test my new RTX 5080 on both of those systems and see which one actually performs best.
3DMark Steel Nomad – RTX 5080 AM4 and AM5
So, to start out with, 3DMark’s Steel Nomad test. In general, I’m doing the gaming tests and putting them side by side so you don’t need to go back and forth. But here, you can see my AM4-based system gets a score of 7,875, which is less than the 8,153 from my AM5-based system.
So even though the AM5 system has half the cores and half the RAM, it’s actually getting better results here. That’s probably the benefit of AM5 and DDR5.
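For anyone curious how big that gap actually is, here is a quick sketch of the arithmetic in Python (the scores are the ones from my runs above; yours will differ run to run):

```python
# Quick sketch: how much faster my AM5 system scored in Steel Nomad.
# Scores are from my own runs above; benchmark scores vary run to run.
am4_score = 7875
am5_score = 8153

uplift_pct = (am5_score - am4_score) / am4_score * 100
print(f"AM5 uplift over AM4: {uplift_pct:.1f}%")  # roughly 3.5%
```

So the DDR5 build’s lead in this synthetic test is only a few percent, which lines up with how close the two systems are in the gaming tests.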
Hogwarts Legacy
But now, let’s jump into gaming tests with Hogwarts Legacy.
I did this at 4K, rendering internally at 1440p via DLSS Quality. Everything else is completely maxed out, so completely Ultra, as you can see there, and ray tracing is turned on.
And in-game itself, there’s no real difference whatsoever. Obviously, the FPS rates are going to jump around. You can see the average FPS rates are pretty similar on both systems. And I’ll always have my DDR5-based system on the left, my DDR4 one on the right, as you can see there.
Unfortunately, this doesn’t show minimum FPS rates because I was having loads of issues getting them to display. But my later tests do show minimum and maximum FPS rates, so don’t worry about that.
But in Hogwarts Legacy, no real difference.
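As an aside, the “rendering internally at 1440p” above falls out of DLSS Quality’s render scale. Here is a rough sketch, assuming the commonly cited per-axis scale factors (actual values can vary slightly by title and DLSS version):

```python
# Approximate per-axis render-scale factors for DLSS modes (assumed values;
# individual games and DLSS versions may differ slightly).
DLSS_SCALES = {
    "Quality": 2 / 3,            # 4K output -> ~1440p internal render
    "Balanced": 0.58,
    "Performance": 0.5,
    "Ultra Performance": 1 / 3,
}

def internal_resolution(width: int, height: int, mode: str) -> tuple:
    """Internal render resolution for a given output resolution and mode."""
    scale = DLSS_SCALES[mode]
    return round(width * scale), round(height * scale)

print(internal_resolution(3840, 2160, "Quality"))  # -> (2560, 1440)
```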
Metro Exodus
Next, we come to Metro Exodus. And if I go into the settings, the graphical settings, I went for 4K Extreme quality, no VSync or anything like that.
And on both systems, it was again fairly similar. I think the 7600 sometimes edged it slightly by one or two FPS, but really, they were both fairly similar. The GPU was, you know, maxed out or close to being maxed out—above 95%—and the CPU was not. So we wouldn’t be CPU bottlenecked or anything like that.
But in general, they both performed fairly well. As you can see there, I had no major issues with either. But I do think the average FPS rate was slightly higher on the 7600, by around four or five frames on average. But, you know, there was nothing significantly different either. I did repeat all these tests multiple times, but yeah, this is what I saw—it was fairly similar, with the DDR5 system being slightly ahead.
Battlefield 2042
Now, we come to Battlefield. And for this, I went with 4K resolution and everything completely maxed out at Ultra. DLSS was off, so we looked at native rendering. I did have ray tracing turned off, but other than that, everything was jacked up to the max.
And when we actually look at the game, unfortunately, I ended up on a different level, and I also couldn’t get the overlay working on my DDR5-based system, which was a complete pain due to all the DRM in Battlefield and things like that.
But in general, the average FPS rates were lower on my DDR5-based system, and that’s actually down to a CPU bottleneck. A game like Battlefield is very demanding on the CPU: as you can see, even on my 12-core system, CPU usage is often above 50%, so the six-core Ryzen 7600 build was probably being held back. But in general, still good FPS rates, considering this is native 4K.
Assassin’s Creed Mirage
And now, we come to Assassin’s Creed Mirage. Quite a good-looking game. I went with 4K rendering and maxed out all the graphics there. There’s no DLSS on or anything like that, just TAA for anti-aliasing.
And you can see it’s pretty much the same. Obviously, you’re always going to get very slight variation. But in general, yeah, things are looking pretty good here—the same FPS rate really on both of them. And the GPU is the thing that’s being pushed the most.
And while the CPU usage is sometimes a little high, around 60 or 70%, even the six-core Ryzen 7600 performs the same as the 12-core 5900X here. You can see there that things are GPU bottlenecked, not CPU bottlenecked.
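For what it’s worth, the way I read these overlays can be sketched as a simple rule of thumb (my own heuristic, not an official metric from any benchmarking tool):

```python
# My own rough heuristic for reading an overlay, not an official metric:
# sustained GPU usage near 100% with CPU headroom suggests a GPU bottleneck.
def likely_bottleneck(gpu_util_pct: float, cpu_util_pct: float,
                      gpu_threshold: float = 95.0) -> str:
    if gpu_util_pct >= gpu_threshold:
        return "GPU"       # GPU is pegged: a faster CPU likely won't help
    if cpu_util_pct > gpu_util_pct:
        return "CPU"       # CPU busier than GPU: likely CPU-limited
    return "unclear"       # neither component is clearly saturated

print(likely_bottleneck(98, 65))  # Mirage-style numbers -> GPU
```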
If we skip forward a little bit, you can actually see some jittering and jumping there. And actually, I did notice that a little bit more with my DDR4-based system. I mentioned that at the start as well, but it’s something I’ll come back to throughout this video.
In general, if we look at the frame graph there, it’s not as smooth with my 5900X and my DDR4-based system. I get more peaks and troughs, lower 1% lows, and lower 0.1% lows as well. So in the end, while the average FPS rate is fairly similar, I was getting a slightly smoother and better experience with my Ryzen 7600 DDR5-based system.
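If you’re wondering where those 1% and 0.1% lows come from, one common approach is to average the slowest 1% (or 0.1%) of frame times and convert that back to FPS. A minimal sketch, with made-up frame times for illustration:

```python
# One common way overlays derive 1% / 0.1% lows: average the slowest
# fraction of frame times, then convert that frame time back to FPS.
def percentile_low_fps(frame_times_ms: list, pct: float) -> float:
    """FPS corresponding to the slowest `pct` fraction of frames."""
    slow = sorted(frame_times_ms, reverse=True)   # worst frames first
    n = max(1, int(len(slow) * pct))              # pct=0.01 for 1% lows
    worst_avg_ms = sum(slow[:n]) / n              # mean of the slowest frames
    return 1000.0 / worst_avg_ms

# Hypothetical frame-time log (ms): mostly ~10 ms with a few stutters.
frames = [10.0] * 97 + [25.0, 30.0, 40.0]
print(f"avg FPS: {1000 / (sum(frames) / len(frames)):.0f}")
print(f"1% low:  {percentile_low_fps(frames, 0.01):.0f} FPS")
```

This is why two systems can share the same average FPS while one feels smoother: a handful of slow frames barely moves the average but drags the 1% lows down hard.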
Cyberpunk Native 4K
Next up, I wanted to try out Cyberpunk 2077 at 4K native resolution. So if I go over to the video tab, it’s 4K there, and I’ve got a high preset. And then I’ve turned off DLSS and frame generation, so everything is just high.
And what we’re going to see in a minute is actually the native rendering on 4K. And actually, they’re both pretty similar. The GPU usage is 95% or above, so that’s kind of the thing that’s being maxed out. The CPU usage isn’t holding us back, as you can see there. And often, the FPS rates are around the same.
I got the feeling that the 7600 system, my DDR5-based one, was maybe slightly better and slightly smoother. But there wasn’t much in it at all.
Cyberpunk 4K DLSS
If we now look at Cyberpunk with everything maxed out: what I did here was stay at 4K but switch to the ‘Ray Tracing: Overdrive’ preset, so the maximum possible settings, with DLSS and frame generation turned on. It actually looked pretty amazing, and it performed really well on both systems.
So yes, this is being upscaled from 1440p, but it looked absolutely brilliant. And again, there’s not much in it in terms of the actual FPS rate. I did again get the feeling that maybe my Ryzen 7600 system was slightly better, but there wasn’t much in it.
But obviously, my CPU usage was higher with my six-core system. So there might be some times in Cyberpunk where that could become an issue when you start getting worse FPS rates and worse 1% lows. But certainly, in this bit of the game, things were fairly similar.
Guardians Of The Galaxy
Next up, Guardians of the Galaxy.
So with this one, I also went with 4K, whacked everything up to the max, and turned on ray tracing.
And you can see here, both systems are performing fairly well. You know, the GPU is the main bottleneck here, and the FPS rates as a whole are pretty similar.
So again, no major issues, although I think probably, again, my AM5-based system is doing slightly better, but nothing really to write home about either.
If we skip ahead a little bit now, you can see again that the GPU is the main thing that’s maxed out. And the FPS rates are still better on my AM5-based system, and that’s borne out in the actual benchmarks as well.
As you can see there, the average FPS rates are better. There was a little spike at the start, and hence the minimum FPS rates are looking a lot lower. But I usually get that on all of my systems, to be honest.
In general, though, the DDR5-based system did better.
Red Dead Redemption 2
Now, let’s finally look at Red Dead Redemption 2.
And for this, I went with 4K and DLSS off, with everything maxed out to Ultra other than particle quality and tessellation quality, which I forgot to raise.
And we can see here, they both perform fairly well. But again, it was a case that my AM5-based system just seemed to be a little bit smoother.
And we’ll see that in a minute when we actually look at the final benchmark results. But the average FPS rate here does look like it’s pretty much the same. And again, the GPU usage is above 95%, so the GPU is the limiting factor here, really.
But the game, again, performed really well, and this is native 4K. So a very good result.
Let’s skip forward now, though, because I don’t like the next bit.
But this is the actual end of the benchmark results, and you can see that although the average FPS rates were very similar, the AM5-based system did perform better when it came to 1% lows and also the maximum FPS rate.
Final Thoughts
So there’s no major difference, really. Things were very slightly smoother with my Ryzen 7600 system, the DDR5 one, which is kind of interesting because that build has half of everything: half the CPU cores and half the RAM. I also have two NVMe drives in my AM4-based build but only one in the budget AM5 build.
But actually, the AM5 build did really well. And you could see often, while the average FPS rate was the same, things were often a bit smoother. So it had better minimum FPS and better maximum FPS rates as well.
So that’s actually pretty interesting, at least to me.
But other than that, there was no real difference.
So I hope you found this video useful! If you did, please click the thumbs-up button and please subscribe to see more videos like this.
Thanks for watching!