"Bottleneck" is a common term in computing, usually defined as one component limiting the performance of another. As one can imagine, it's a situation you generally want to avoid, as it means wasting your hard-earned money on parts that cannot reach their full potential. It's also why computer builds are balanced to a degree, and you generally won't see a $1000 video card bundled with a $100 processor.

In this article I will go over several video games to showcase the different ways bottlenecking can manifest, and try to quantify them. Hopefully this data will prove useful to you, as it will help you understand what kind of system your specific needs require and how to spend your money best.

Testing platform

  • Ryzen 7 2700 cooled by SilentiumPC Fortis 3 HE1425
  • Asus Prime X470-Pro
  • 16GB RAM (3000 MHz, CL16, 2x 8GB)
  • Nvidia GeForce RTX 2080 Founders Edition
  • ADATA XPG SX8200 PRO 512GB
  • EVGA G2 750W 80Plus Gold

This choice of parts gives me a fair amount of flexibility, especially when it comes to the CPU. It comes with 8 physical cores (and 16 threads) that are somewhat susceptible to overclocking, and I can also turn off any individual core, which essentially lets me emulate most of AMD's consumer-class processors.

The RTX 2080, on the other hand, is the second-fastest consumer-class GPU available right now (disregarding the Titan family). It is capable of handling the latest productions at 4K resolution while pushing 100+ fps at 1080p in most titles.

The CPU and GPU are the primary components under test, so the remaining parts were chosen mostly to avoid influencing these two. That's why the power supply is significantly more powerful than necessary, and why I opted for a fast NVMe drive that should minimize waiting times and any FPS fluctuations caused by in-game loading. 16GB of 3000 MHz RAM is the standard nowadays - it would be possible to squeeze out a bit of extra performance with 3200 MHz CL14 memory, but RAM speed vs. CPU performance is a topic for a completely different article.

Witcher 3

The Witcher 3 is an RPG released in 2015 by CD Projekt Red. It's not a particularly demanding game when it comes to CPU load, but even 4 years after its release it can push most video cards to their limits.

Tested area: Skellige

Resolution   min fps   avg fps
3440x1440       64        68
2560x1440       77        83
1920x1080      102       110
1280x720       133       143
1024x768       146       157

This first test was done at stock CPU clocks. Details are all set to maximum; the only thing that changes is the resolution, and FPS rises accordingly as the resolution decreases. Consider this my baseline, so to speak.

So let's now try to emulate a dual-core, Athlon-class CPU. For that I will shut down all but the first 2 cores:

[Image: disabling 6 cores]
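If you'd like to approximate this on your own machine without rebooting into the BIOS, pinning the game's process to a subset of logical CPUs is a rough stand-in (it doesn't actually disable the cores, so cache and boost behavior will differ, but the effect on the game is similar). A minimal sketch using the psutil library; the process name is hypothetical:

```python
import psutil

# Hypothetical executable name - replace with your game's process.
TARGET = "witcher3.exe"

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == TARGET:
        # Restrict the process to logical CPUs 0-3, i.e. 2 physical
        # cores with SMT - a rough emulation of a dual-core chip.
        proc.cpu_affinity([0, 1, 2, 3])
        print(f"Pinned {TARGET} (pid {proc.pid}) to logical CPUs 0-3")
```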

And this is how the change affected the FPS results:

Resolution   min fps   avg fps
3440x1440       64        68
2560x1440       60        78
1920x1080       75        95
1280x720        75       101
1024x768        80       102

As stated before, The Witcher 3 is not a particularly CPU-heavy game - doubly so when it's tested in forest areas far away from human settlements. This is why you see no difference at 3440x1440: the video card is the bottleneck at this resolution. However, as I move down, the differences become more and more visible, as fps stops improving despite the lower visual quality. The primary reason is that the amount of work the CPU puts into preparing each frame is roughly the same regardless of resolution, while the video card has to work harder and harder as the resolution increases. Ultimately, moving from 8 cores to 2 can cause up to a 33% performance deficit when bundled with a high-end card, but it's only visible at very low resolutions - ones you probably wouldn't want to play at in the first place.
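A handy mental model: each frame effectively takes as long as the slower of the two components needs. The sketch below illustrates the idea with made-up frame times that only roughly mimic the averages measured above - it is not derived from actual measurements:

```python
# Toy model: each frame takes as long as the slower component needs.
# All timings below are invented for illustration purposes only.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    """Frame rate when the slower of CPU and GPU paces the pipeline."""
    return 1000 / max(cpu_ms, gpu_ms)

cpu_ms_8_cores = 7.0    # CPU frame time: roughly independent of resolution
cpu_ms_2_cores = 10.0   # slower frame preparation with only 2 cores
gpu_ms = {"3440x1440": 14.7, "1920x1080": 9.1, "1024x768": 6.4}

for res, g in gpu_ms.items():
    print(f"{res}: 8 cores -> {fps(cpu_ms_8_cores, g):5.1f} fps, "
          f"2 cores -> {fps(cpu_ms_2_cores, g):5.1f} fps")
```

At 3440x1440 the GPU time dominates, so both CPU configurations land on the same fps; at 1024x768 the GPU is fast enough that the slower CPU becomes the cap.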

That's why The Witcher 3 is a good place to start this article, as it shows the primary point about bottlenecks: how badly one affects you depends on many factors. Frankly speaking, you could happily play this particular game on just about any modern processor.

Total War: Warhammer II

Total War games are well known among fans of strategy games. The series offers combat on a truly epic scale, with thousands of units on each side.

[Image: the scale of battle]

It's also a very CPU-intensive game. In a 4v4 battle with close to 10,000 units fighting at once, you will notice two interesting things.

First, your GPU activity starts high (especially as you look away from your army to gaze at the scenery) but gets lower and lower once the fight truly begins. In fact, this is roughly what happened on my computer:

[Chart: GPU activity over time]

Second, CPU load remained consistent the whole time, hovering around 35%.

Yet the FPS count only went down and down as the fight went on.

Battle phase              min fps   avg fps
Beginning of the battle      91       103
Start of the battle          39        42
Total war                    19        23

What gives? This is an example of a bottleneck caused by an underperforming processor - or, more specifically, by a single underperforming thread. Total War: Warhammer II is not particularly multithreaded: it doesn't care whether you have 16, 12 or 8 threads and performs almost the same in each scenario (I did check it on 4 cores; we are talking 10-15% drops). Well, with one exception - going below 8 threads can quickly cause the framerate to drop to single digits, rendering the game completely unplayable.

And on that note, one more interesting result. What happens if an RTX 2060 shows up in place of the RTX 2080? Well... this:

Battle phase              min fps   avg fps
Beginning of the battle      60        69
Start of the battle          37        40
Total war                    18        23

The only visible difference is at the beginning of the fight, when the CPU does not have much work yet. Later on, your experience is identical - effectively meaning that if you bought the more expensive card for this game, you have put your money into a trash bag and set it ablaze.

This title also showcases yet another point: the CPU activity charts you find in your task manager lie. Not even a single core was maxed out during the game, yet fps suffered. That's because the aggregate percentage averages all threads together, and the scheduler keeps migrating the one overloaded thread between cores, so no single core's graph ever pins at 100%.
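To put numbers on it: one fully saturated thread on a 16-thread CPU adds only 100/16 ≈ 6.25% to the overall figure. If you want to watch this on your own machine, here is a minimal sketch using the psutil library (the sampling interval and loop count are arbitrary):

```python
import psutil

# Sample per-logical-CPU utilization once a second. Neither the total
# nor any single core graph has to hit 100% for the CPU to be the
# bottleneck: the hot thread migrates between cores during the sample.
for _ in range(10):
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    total = sum(per_core) / len(per_core)
    busiest = max(per_core)
    print(f"total {total:5.1f}%  busiest core {busiest:5.1f}%")
```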

So what should you do if you are an avid strategist? Go for a processor with the highest single-threaded performance there is that also has at least 8 threads. In this case a perfect choice would be a Core i7-9700K or i9-9900K bundled with the fastest RAM possible. Unfortunately I do not have either of these units available for testing right now, but they do perform up to 35% better than the Ryzen 7 2700 in such scenarios - and 23 fps × 1.35 ≈ 31 fps, which should be enough to break the 30 fps barrier.

Civilization VI

The Civ series makes for an interesting case study for several reasons:

  • first, the franchise is very popular
  • second, it is CPU-heavy
  • third, it supports multithreading

For this particular game I have gone with medium details at 3440x1440, DX12 (it gives a roughly 15% performance boost over DX11) and 3 variants of the Ryzen 7 2700:

  1. 8x 3.9 GHz
  2. 4x 3.9 GHz
  3. 4x 3.2 GHz

This led me to the following results:

                 8x 3.9 GHz   4x 3.9 GHz   4x 3.2 GHz
Average FPS          141          114          101
Time per turn       7.92 s       8.23 s       8.57 s

As you can see, this game definitely scales past 4 cores: at the same clocks, choosing 8 over 4 yields a 23.7% improvement. Frequency also affects the framerate by a non-negligible margin, although less than you might think - dropping the clocks by 18% (3.9 GHz to 3.2 GHz) costs only about 11% of the framerate.

Not nearly as much changes in terms of time per turn, however: despite more than halving the available compute power, turn times grow by a mere 8.2%. This particular aspect of the game seems to be primarily single-threaded.
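For the curious, the percentages above fall straight out of the table; a quick sanity check in Python:

```python
# Checking the scaling numbers against the table above.
fps = {"8x3.9": 141, "4x3.9": 114, "4x3.2": 101}
turn_s = {"8x3.9": 7.92, "4x3.9": 8.23, "4x3.2": 8.57}

# Core scaling at equal clocks: 4 -> 8 cores.
print(f"fps gain from doubling cores: {fps['8x3.9'] / fps['4x3.9'] - 1:.1%}")  # ~23.7%

# Clock scaling at equal core count: 3.2 -> 3.9 GHz.
print(f"fps gain from higher clocks:  {fps['4x3.9'] / fps['4x3.2'] - 1:.1%}")  # ~12.9%

# Raw compute (cores x GHz) vs. turn time.
print(f"compute left vs. baseline: {(4 * 3.2) / (8 * 3.9):.0%}")               # ~41%
print(f"turn time increase: {turn_s['4x3.2'] / turn_s['8x3.9'] - 1:.1%}")      # ~8.2%
```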

Results analysis

So I have tested these 3 very different video games, and every single one showcased a different level of CPU and GPU requirements. This leads to a simple yet important conclusion: no matter how you build your computer, it won't be perfectly balanced and it WILL have bottlenecks. Sometimes in the CPU department (in particular when playing strategy games), sometimes in the GPU department (The Witcher 3 is a tested example, but this will also apply to games like Metro Exodus, Doom or Tomb Raider).

Hence the point is not to worry about whether a bottleneck exists, but merely to reduce it. As you have seen from these tests, when it comes to a CPU, clock speed is still the #1 concern; the number of cores, not so much. You do want at least 4 cores (and 8 threads), and you might want to go for 6 to allow for some future-proofing. But 8 cores are currently underutilized by most video games, and they significantly raise the price of a CPU. Even when bundled with an $800 RTX 2080, there was not that much of a difference after all.