GDDR6 or GDDR6X For Gaming (Plus Is Less GDDR6X Worth It?!)

There’s a new standard when it comes to the memory in your graphics card: GDDR6 (well, technically GDDR7 is due out ‘soon’ – but with ongoing chip shortages, ‘soon’ might mean in a few years!). This means people are asking whether GDDR6 or GDDR6X is better.

At the end of the day, both GDDR6 and GDDR6X are going to be fully capable of handling everything from 4K gaming to demanding video editing work. GDDR6X is more powerful on paper, but only the most demanding video game setups and graphics editing workflows are going to make use of this extra strength.

Let’s get your PC supercharged with the best possible graphics technology—or maybe there’s another option to consider.

What’s the Difference Between GDDR6 and GDDR6X?

Every single computer, from the most powerful servers all the way down to the small computer inside your microwave, uses memory to carry out every single task. This memory is temporary, and when a task is complete it can be cleared out and made ready for the next operation.

This isn’t the same memory as the storage that we’re used to. Hard drives and SSD storage use a slower, but more long-lasting type of memory. DDR and GDDR memory are fast-acting types of memory designed to help a computer facilitate a specific task.

GDDR is the type of memory used by graphics cards. This is a bit of an oversimplification, but you can think about this as RAM for your graphics card.

Different iterations of GDDR are known by their numbering and lettering systems. The higher the number at the end, the more recent the generation. Any letters after that number indicate iterations on that generation. To date, the most advanced type of GDDR on the market is GDDR6X – with GDDR7 announced, but not yet ready to hit the mass market.

Let’s look at the big differences between the two most advanced types of graphics card memory out there.

GDDR6

An AMD RX 6700 XT GPU with 12GB of GDDR6 memory

You might have already guessed this, but GDDR6 is an acronym. It stands for Graphics Double Data Rate 6 Synchronous Dynamic Random-Access Memory – but we’re going to stick with GDDR6 to keep things simple!

GDDR6 is the latest standard for graphics memory. Your graphics card does some of the hardest work in your PC, and that’s why it needs its own dedicated memory. This memory isn’t something you can swap out yourself; it’s built right into your graphics card.

GDDR6 was finalized in 2017 and updated things from GDDR5. It has a maximum per-pin data rate of 16 Gbps and improved power efficiency over the previous generation.

GDDR6 is everywhere right now: it’s in everything from high-end PCs to the PS5 and the latest Xbox consoles. However, GDDR6 has recently been outclassed by GDDR6X.

GDDR6X

GDDR6X is an update to the GDDR6 standard. GDDR6X is roughly 15% more power efficient per bit transferred, but it can actually draw more power overall, since it also transfers more bits per second.

Essentially, GDDR6X does more work per watt, but it’s also so much faster that its total power draw ends up higher than GDDR6’s.
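To make that concrete, here’s a rough back-of-envelope sketch in Python. The energy-per-bit figures below are illustrative assumptions (not vendor specs), but the shape of the maths is the point:

# Rough illustration of "more efficient per bit, yet more power overall".
# The pJ/bit figures are illustrative assumptions, not published specs.
GDDR6_RATE_GBPS = 16                          # max per-pin data rate
GDDR6X_RATE_GBPS = 21                         # max per-pin data rate
GDDR6_PJ_PER_BIT = 7.5                        # assumed energy per bit moved
GDDR6X_PJ_PER_BIT = GDDR6_PJ_PER_BIT * 0.85   # ~15% more efficient per bit

def pin_power_mw(rate_gbps, pj_per_bit):
    """Power drawn by one data pin: (bits per second) x (joules per bit)."""
    return rate_gbps * 1e9 * pj_per_bit * 1e-12 * 1e3  # in milliwatts

gddr6 = pin_power_mw(GDDR6_RATE_GBPS, GDDR6_PJ_PER_BIT)
gddr6x = pin_power_mw(GDDR6X_RATE_GBPS, GDDR6X_PJ_PER_BIT)
print(f"GDDR6:  {gddr6:.0f} mW per pin")
print(f"GDDR6X: {gddr6x:.0f} mW per pin ({gddr6x / gddr6 - 1:.0%} more)")

With these assumed numbers, GDDR6X draws around 12% more power per pin despite being 15% more efficient per bit – the extra speed outruns the efficiency gain.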

GDDR6X also has a better data transfer rate, coming in at up to 21 Gbps per pin. That’s roughly 30% more than GDDR6’s 16 Gbps maximum – and around 50% more than the 14 Gbps GDDR6 actually used in many cards.
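Note that per-pin rates only become card-level bandwidth once you multiply by the width of the memory bus. A minimal sketch, using the published specs of two cards mentioned in this article:

# Card-level memory bandwidth = per-pin rate (Gbps) * bus width (bits) / 8.
def bandwidth_gb_s(per_pin_gbps, bus_width_bits):
    return per_pin_gbps * bus_width_bits / 8

print(bandwidth_gb_s(16, 192))  # RX 6700 XT (GDDR6):  384.0 GB/s
print(bandwidth_gb_s(19, 320))  # RTX 3080 (GDDR6X):   760.0 GB/s

This is why the bus width matters as much as the memory generation when comparing two cards.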

GDDR6X also runs much hotter than GDDR6. In general, the more recent and more powerful a graphics memory update is, the hotter it’s going to run.

One thing to keep in mind about GDDR6X is that it is not an open JEDEC standard like GDDR6 – it was developed by Micron in partnership with Nvidia, and Micron is currently its only manufacturer. Since GDDR6X is fairly new and proprietary, it may be a while yet before a universal spec emerges for this iteration of graphics memory.

Why Do Some High-end GPUs Have Less GDDR6X?

The Gigabyte GeForce RTX 3080 Ti card has 12GB of GDDR6X memory

When GPU shopping, there’s a weird quirk you might come across: you can easily get 12GB of GDDR6 memory in a graphics card, but the more expensive graphics cards above this level then drop down to offering 8GB of GDDR6X memory.

For example, the Nvidia RTX 3060 offers 12 GB of GDDR6 memory (as does the AMD RX 6700 XT), but the more expensive Nvidia RTX 3070 Ti moves up to GDDR6X yet drops down to 8 GB of memory. In other words, the pricier GDDR6X card gives you 4 GB less memory (side note: that’s also why I purchased the RX 6700 XT, not the 8GB RTX 3070, when I bought a graphics card earlier this year). Why is this?

There are a few reasons why high-end graphics processing units haven’t fully adopted the new memory tech yet. This has to do with how emerging technology rolls out and what software can actually make use of right now.

Firstly, GDDR6X is simply so fast that there isn’t much software that can make full use of what it’s capable of. You can comfortably run 4K gaming and video editing on a GDDR6 card, or even a GDDR5 one. A GDDR5 graphics card is still good enough to see you through everything but the highest-end gaming and most demanding graphics software out there.

This leaves GDDR6X in an odd place. There aren’t many good ways to use this power on a consumer-level machine.

Think about it this way: If you had a lawnmower that could mow 1,000 square feet a minute, would that be worth it? Sure, you could mow your lawn in a matter of seconds, but unless you’re mowing every lawn in the city, it might be too much power—and cost—for your needs.

Secondly, unless you’ve been living under a rock, you will have heard that there’s a chip shortage – affecting everything from cars to computers. Whilst GDDR6 was being mass produced before this chip shortage (meaning that supplies are relatively plentiful), GDDR6X was released later – meaning that it’s in shorter supply.

This naturally drives up the per-GB price of GDDR6X memory, meaning that higher-end GDDR6X graphics cards often don’t offer much more than 8-10GB of it – the already-expensive cards would otherwise be even more expensive.

This is why GDDR6X isn’t very common in graphics cards yet. It’s an emerging technology that, even though it’s the strongest, doesn’t yet have enough practical uses to warrant ‘too much’ GDDR6X being soldered onto high-end graphics cards.

Is GDDR6 or GDDR6X Better for 1080p and 1440p Gaming?


It’s a little difficult to decide whether GDDR6 or GDDR6X is better for 1080p and 1440p gaming. There’s a technical answer, and then there’s the practical, real-world answer.

Since technically correct is always best, we’re going to start with the technically correct answer: GDDR6X is better in virtually every condition for 1080p or 1440p gaming. It simply has more speed and bandwidth, which means it can easily handle gaming at these resolutions.

However, there are considerations beyond tech sheet specifics that we’re going to have to look at. The first is that GDDR6X graphics cards are incredibly expensive since they’re the highest of the high right now. This might make them too expensive for your average, and even some high-end, PC builds.

Then there’s the fact that even GDDR5 can handle 1080p or 1440p gaming without too much trouble.

Is GDDR6X better? Yes, definitely. Do you really need the power that GDDR6X has to offer? Not yet – especially when money is still a factor.

4K Gaming: GDDR6 or GDDR6X?

My new 4K HDR computer monitor from electriQ

When we talk about 4K gaming, we have to look at the same considerations that we just looked at for 1080p and 1440p gaming.

The same general wisdom applies. There are GDDR5 graphics cards that are more than capable of taking 4K gaming in stride. GDDR6 is ideal for 4K gaming and can even handle it on some complicated monitor setups.

However, we’re starting to get to those high-end corner cases that make GDDR6X even more impressive. If you’re 4K gaming on multiple monitors or extended monitors, GDDR6X might be the right decision for you.
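To put a rough number on the multi-monitor point, here’s a quick sketch of how just the frame buffers scale with resolution and monitor count. Frame buffers are only a small slice of total VRAM use (textures and geometry dominate), but they’re the part that scales directly with your monitor setup:

# Memory for one uncompressed frame; real games use far more VRAM overall.
def frame_mb(width, height, bytes_per_pixel=4):  # 4 bytes = 8-bit RGBA
    return width * height * bytes_per_pixel / 1024**2

per_4k_frame = frame_mb(3840, 2160)      # ~31.6 MB per 4K frame
print(f"{3 * 3 * per_4k_frame:.0f} MB")  # ~285 MB just for triple-buffered
                                         # frames on three 4K monitors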

The one thing to always keep in mind when deciding to adopt the latest in any tech standard is what your use cases actually are. While it would always be fun to have the flashiest graphics card updates, the money spent on making those upgrades might be better spent in upgrading other aspects of your PC like your cooling or RAM.

GDDR6 vs GDDR6X for Video Editing?

CPU, memory and GPU usage when rendering a video

Here’s where the argument for GDDR6X becomes the strongest of all.

Gaming demands a lot from our graphics cards, but nothing demands as much from a graphics card as video editing. This gets even more true the more complicated your video editing becomes.

Gaming also has a hard ceiling for how much a better graphics card can help out. If you’re playing a game from 4 or 5 years ago, the latest graphics card isn’t going to give you that big of an advantage compared to what was available at the time.

However, video editing simply gets faster the more graphics processing power you have. It doesn’t matter whether you’re editing a video you shot yesterday or two decades ago. The more graphics power you have, the faster your video editing is going to go.

So, if you’re a professional video editor and you want that roughly 30-50% boost in memory bandwidth, GDDR6X is exactly what you need. You’ll see a marked improvement in how quickly you can process video.

Just be sure to double-check your expected graphical memory usage before buying a lower-VRAM GDDR6X option. The last thing you want is to purchase a fancy new GDDR6X graphics card, only to find that you’re constantly capped out by a lack of graphical memory.
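If you’re on an Nvidia card, one easy way to do that is to poll VRAM usage with the nvidia-smi tool while your game or render is running. A minimal sketch in Python, assuming nvidia-smi is on your PATH (which it is with standard Nvidia driver installs):

# Query current VRAM usage on an Nvidia card via nvidia-smi.
import subprocess

out = subprocess.check_output(
    ["nvidia-smi", "--query-gpu=memory.used,memory.total",
     "--format=csv,noheader"],
    text=True,
)
print(out.strip())  # e.g. "6244 MiB, 8192 MiB" (illustrative output)

Run that at the most demanding point of your workload, and you’ll know whether an 8GB card would leave you any headroom.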

An Exception: When You Get A LOT More GDDR6 For Your Money

If you had to choose between (say) a 12GB GDDR6X graphics card and a 16GB GDDR6 graphics card, the performance boost of the GDDR6X version would probably warrant having a few gigabytes less of graphical memory – especially since only a few applications currently use more than 12GB of VRAM (video RAM/memory).

But what if your options are 8GB GDDR6X or 16GB GDDR6 – for the same money? I’ve had this question a few times, and it’s a hard decision to make. In this case, I think that 8GB just isn’t enough for future-proofing. Some 4K games are already requiring close to (or more than) 8GB of VRAM, and video editing often uses more than this when rendering.

So even though GDDR6X is more efficient than GDDR6, I would personally always choose 16GB of GDDR6 over 8GB of GDDR6X – assuming the rest of the graphics card (its clock speeds, cooling setup etc.) is the same.

In short, I wouldn’t purchase a GDDR6X card with 50% less memory than a GDDR6 one – the performance boost just doesn’t make it worth it for me. But 25% less memory for 30-40% more performance? Yes, that might well be worth it.
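To see why, here’s a quick sketch of the trade-off using two hypothetical cards – the bus widths and data rates below are illustrative assumptions, not real products:

# Two hypothetical cards illustrating the capacity-vs-bandwidth trade-off.
cards = {
    "8GB GDDR6X (256-bit @ 19 Gbps)": {"vram_gb": 8, "bw": 19 * 256 / 8},
    "16GB GDDR6 (256-bit @ 14 Gbps)": {"vram_gb": 16, "bw": 14 * 256 / 8},
}
for name, spec in cards.items():
    print(f"{name}: {spec['bw']:.0f} GB/s, {spec['vram_gb']} GB of VRAM")

# The GDDR6X card moves data ~36% faster (608 vs 448 GB/s), but once a game
# needs more than 8 GB it has to shuffle data over the far slower PCIe bus.

The bandwidth gap is real, but running out of VRAM entirely is the harsher penalty – which is why I lean towards the 16GB option in this scenario.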

Which is Better Overall: GDDR6 or GDDR6X?

Deciding which is better overall between GDDR6 or GDDR6X comes down to what your needs are and how much money you have to throw into your PC build.

If you have the extra cash and you’re looking to squeeze out maximum performance in either the highest end gaming or video editing, then you definitely want to pick up a graphics card with GDDR6X. It’s simply the best on the market right now and if the best is what you want then this is what you need.

However, GDDR6 is more than capable of handling just about anything you throw its way. Outside of the most extreme circumstances or the strongest desire for the utmost power and efficiency, GDDR6 is more than capable of handling the job and saving you a few dollars on the way.

About Tristan Perry

Tristan has been interested in computer hardware and software since he was 10 years old. He has built loads of computers over the years, along with installing, modifying and writing software (he's a backend software developer 'by trade').

Tristan also has an academic background in technology (in Math and Computer Science), so he enjoys drilling into the deeper aspects of technology.

Tristan is also an avid PC gamer, with FFX and Rocket League being his favorite games.

If you have any questions, feedback or suggestions about this article, please leave a comment below. Please note that all comments go into a moderation queue (to prevent blog spam). Your comment will be manually reviewed and approved by Tristan in less than a week. Thanks!

10 thoughts on “GDDR6 or GDDR6X For Gaming (Plus Is Less GDDR6X Worth It?!)”

  1. Good morning my friend, your article is nice and quite detailed, but I have a question: if you could buy 16GB GDDR6 or 8GB GDDR6X for the same money, which would you choose?

    • Thanks Alex, and that’s a great question. It’s not an easy choice but I would say the 16GB GDDR6 option. 8GB (even of GDDR6X) just isn’t enough for me, for future-proofing purposes. Of course, if you mainly game in 1440p and don’t plan on going up to 4K gaming (or video editing), you’re unlikely to need more than 8GB of video RAM – in which case, the 8GB GDDR6X option will probably be fine. I just think that it might limit you in 3-4 years’ time.

      • Great article man, I have a question. I bought the 3070 Ti and I was disappointed that it only has 8 GB after seeing some tests against the 3070, which is an almost identical card. I saw a bottleneck at 4K on the 3070: when the game used the full 8 GB on both cards, the 3070 was struggling and stuttering, while the 3070 Ti was still capable of running the game. Is there any explanation for this?

        • Thanks Andres. That’s an interesting question and observation. It might be because GDDR6X memory is extra-efficient in that particular game (plus generally!). After all, we know that GDDR6X is better than GDDR6. So if a game maxes out 8GB GDDR6 and 8GB GDDR6X, the latter (the 8GB GDDR6X) will be providing more performance.

          It’s similar to comparing two CPUs – let’s say the Ryzen 5600X and the 5500. Both are 6 cores, but the 5600X will provide better performance than the 5500 – even if both have 100% usage.

  2. Hello, dear author, I recently bought a Colorful iGame GeForce RTX 3070 W OC LHR from a friend. In your opinion, is it capable of 2K (2560×1440)? I mainly play COD/BF/Horizon.

    • Hi Alex, thanks for the comment. An RTX 3070 should be fine for 2K gaming, especially with the games you mention. You might need to dial down the graphical settings a tiny bit, but overall it should be fine – yes 🙂

  3. Great article man, very detailed. Since the new RTX 3060 Ti with GDDR6X came out I was considering it. It has 8GB 256-bit. Reading your article I was thinking that maybe it is too powerful of a GPU (I don’t have any GPU at this moment, just an integrated one)? With this my CPU will become the bottleneck (i5 4590), so would it be a better option to get the RTX 3060 with GDDR6 12GB 192-bit or the RTX 3060 Ti with GDDR6 8GB 256-bit? I am mostly using this for playing games like Lost Ark, Phantasy Star Online and Monster Hunter.

    • Thanks, glad it helped you out. That’s a good question: yes, it’s unfortunately likely that your i5 4590 would become the bottleneck first. To be honest, for the games you mention, any of the GPUs will work fine – I would probably go with the best card for your budget. Naturally the 12GB GDDR6 and 8GB GDDR6X options give you a bit more future-proofing, but with the games you mention, the 8GB GDDR6 would work out absolutely fine.

      Sorry that’s a bit vague. I guess I would recommend the 12GB GDDR6 or 8GB GDDR6X if you’re planning on upgrading your CPU sometime in the next year or two. But go with the 8GB GDDR6 version if you’re not planning any further upgrades.

  4. Hi, I experienced something similar to what another commenter described when trying to run The Last of Us Part 1 (PC) with my RTX 3080 10GB. On ultra, with everything maxed, the textures push the VRAM usage to its limit at native 1440p without upscaling. The game even warns me about running out of VRAM. Yet the game runs fine at about 70-80 fps. I guess the VRAM is fast enough to make up for the game settings exceeding certain VRAM amounts.

