You’d have to be living under a rock to be in the gaming space and not have heard about Sony and Microsoft announcing the specifications of their new consoles, the PlayStation 5 and Xbox Series X, respectively. There has been plenty of discussion and speculation in the lead-up to these announcements, with the gaming community devouring any and all information spread by the platform holders – going back as far as February 2019, when a patent surfaced that could have meant the PS5 would be backwards compatible. These specs include some pretty impressive tech, from current hardware like Zen 2 processors to unreleased tech like RDNA 2 GPUs. While this all seems pretty good on paper, what it means in practice could be entirely different, so let’s take a look at what these powerful specs might mean in terms of gameplay and performance improvements.
Note that none of this is hard fact about what the consoles’ performance will be like, but rather speculation based on tech that is currently available.
To kick things off we’ll start with the CPU – the brain of any system. Both next-gen systems feature a custom eight-core processor based on the Zen 2 microarchitecture. The PS5’s CPU runs at a variable clock speed but has been specified to run at up to 3.5GHz. The Xbox Series X’s custom eight-core Zen 2 processor goes up to 3.8GHz and features simultaneous multithreading (SMT), which drops the clock speed slightly to 3.66GHz when enabled. Pretty impressive stuff given that this current generation launched with significantly weaker CPUs at much lower clock speeds. The PS4 and PS4 Pro had CPU clock speeds of 1.6GHz and 2.1GHz on their eight-core Jaguar processors. In comparison, the Xbox One (which was notably more lacking in the specs department at launch) shipped with a 1.75GHz clock speed on its eight-core Jaguar processor, with the Xbox One X coming in at 2.3GHz. One of the biggest things to take away from all of this is the Zen 2 architecture itself. Zen 2 is AMD’s most recent CPU architecture and arguably one of the most impressive CPU launches the company has had. The original Ryzen CPUs were divisive and left the market in a very weird spot: if you wanted a computer solely for gaming, you’d opt for something like an Intel i7-7700K, an overclockable four-core/eight-thread (4C/8T) processor. If you wanted to do literally anything else, you’d be better off with a Ryzen 5 1600, a six-core/twelve-thread (6C/12T) processor which was also overclockable. The general rule remained that Intel CPUs were better for gaming (and security flaws, lol), while AMD CPUs were better for productivity, but I digress.
In the Zen 2 (also known as 3rd-gen Ryzen) product stack there is an 8C/16T processor, the Ryzen 7 3700X. It has a base clock speed of 3.6GHz, with the potential to boost all the way up to 4.4GHz. Looking at launch-period reviews and benchmarks, the gains from the move from the 12nm to the 7nm lithography (roughly, the size of the CPU’s transistors and the spacing between them – the smaller, the better) are quite impressive, and Zen 2 managed to basically match, if not outperform, Intel’s competing processor, the i9-9900K (8C/16T). The 9900K costs an amusing A$899 (people bought this?) while the 3700X sits at A$479, a stark price difference for similar performance.
| Model | Architecture | Core/Thread Count | Clock Speed | Lithography |
|---|---|---|---|---|
| Ryzen 7 3700X | Zen 2 | 8C/16T | 3.6GHz (4.4GHz boost) | 7nm |
| PlayStation 5 | Zen 2 | 8C/16T (SMT not confirmed) | 3.5GHz (variable frequency) | 7nm |
| Xbox Series X | Zen 2 | 8C/16T | 3.8GHz (3.66GHz w/ SMT) | 7nm |
We’ll look at Hardware Unboxed’s review of the 3700X to get a gauge of how both these consoles might perform on the CPU side of things. There is a lot in this review, but there is one major point we are going to briefly look at.
File Compression and Decompression
This is a bit of a strange thing for most people to measure but it is actually an incredibly vital part of running a PC, let alone a console. I’m sure a majority of the people reading this have at some point in their life had to open a zip file in some form. Pretty standard stuff, and it’s something that console users benefit from without even realising it. One of Steam’s biggest advantages compared to a lot of other PC platforms (and consoles) is its advanced file compression system that allows downloads to be impressively small before expanding to install the game itself. Anecdotally speaking, I remember when Cuphead came out and WellPlayed Managing Editor Zach Jackson was telling me about his 10GB download when my Steam download of the game was only 1.6GB. The download was almost a tenth of the size, yet the final product was the same. As consoles move to a more digital approach to game distribution, with a large number of games only shipping digitally, it makes sense that platform holders would want to minimise download sizes. While the onus is usually on developers themselves to use these compression methods, whoever holds the key to the best compression method in the console space can win huge favour amongst the gaming community, especially in countries with less-than-desirable internet speeds (read: Australia). With good compression, however, must also come decompression, and this is one strength that Hardware Unboxed was able to test for the 3700X, the most comparable CPU to the next-generation consoles’ offerings.
Sourced from Hardware Unboxed
Ignoring the 3900X results, as that is a 12C/24T CPU and quite distant from what we are trying to gauge, the 3700X does an impressive job of file decompression. While it’s more than likely that the Xbox Series X and PS5 will use different decompression methods, this can give us a rough idea of what to expect out of the CPU itself. PS4 users will be aware that the OS was recently changed so that updates copy files instead of installing straight over them, to reduce the risk of software corruption. Imagine if that was all replaced with a decompression-based approach instead, so the download was smaller and the install was faster.
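The basic idea is easy to sketch with Python’s standard `zlib` module (a generic illustration only – neither platform holder has detailed their actual compression scheme, and real game assets compress far less dramatically than this repetitive stand-in data):

```python
import zlib

# Stand-in for highly repetitive game data; real assets compress
# less dramatically, but the principle is the same.
payload = b"grass_texture_tile " * 50_000

blob = zlib.compress(payload, level=9)   # what the user downloads
restored = zlib.decompress(blob)         # the CPU-bound "install" step

assert restored == payload               # bit-identical after the round trip
print(f"original: {len(payload)} bytes, downloaded: {len(blob)} bytes")
```

The faster the CPU chews through that decompression step, the faster the install – which is exactly the kind of workload this benchmark measures.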
Using Hardware Unboxed’s review of the 3700X is going to be a bit unfair as, for the purposes of benchmarking, it was paired with an RTX 2080 Ti, Nvidia’s flagship GPU. While the 5700XT can give the RTX 2080 a run for its money, it often struggles to really compete with the 2080 Ti (unless it’s Forza Horizon 4, where it trades blows with it). However, if we want to see improvements based on the CPU alone, the review still offers plenty of relevant information. Amongst Hardware Unboxed’s lineup of CPUs for these benchmarks is the Ryzen 7 1800X – a notably more powerful CPU than any of the Jaguar chips the current consoles use, but the weakest 8C/16T CPU in the lineup.
In the Assassin’s Creed: Odyssey benchmark, the 3700X was able to achieve an average framerate of 102 frames per second (frametime average of 9.8ms) while running at Very High graphics quality. This is a 21.4% increase over the 1800X, whose average framerate is 84 frames per second (frametime average of 11.9ms). While the 1800X’s performance is still remarkably better than the Jaguar CPUs’ would be, both in terms of clock speeds and IPC (instructions per clock), this increase from a CPU upgrade alone should whet the appetite of fans of either console. These results were all at 1080p, however, and we all know that both Xbox and PlayStation are targeting consistent 4K gaming. At 1440p, a nice middle ground between 1080p and 4K, the results are still very impressive: 82 FPS and 73 FPS for the 3700X and 1800X, respectively. These performance numbers are all well and good, but the real kicker is in the 1% lows.
For the uninitiated, 1% and 0.1% lows are performance measurements that take the slowest 1% and 0.1% of frames rendered, based on their frame times, and tell the tester about the overall stability of a system and/or game. The higher these values are, the better. At 1080p, the 3700X had a 1% low of 77 FPS whereas the 1800X had a 1% low of 62 FPS. So at its perceivable worst, Assassin’s Creed: Odyssey would render at 77 FPS at 1080p. 1440p is a little slower, with 1% lows of 59 FPS and 52 FPS, respectively. The lowest points of performance are often more important than the average or highest points, because the lowest points are where you will notice the most difference.
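As a rough sketch of how these numbers come about (a generic method, not necessarily Hardware Unboxed’s exact tooling), both the average and the 1% low can be computed from a capture of per-frame render times:

```python
def fps_metrics(frame_times_ms):
    """Return (average FPS, 1% low FPS) from per-frame times in milliseconds."""
    n = len(frame_times_ms)
    avg_fps = 1000.0 / (sum(frame_times_ms) / n)
    # Take the slowest 1% of frames (the largest frame times)...
    worst = sorted(frame_times_ms, reverse=True)[:max(1, n // 100)]
    # ...and convert their average frame time back into a framerate.
    low_1_fps = 1000.0 / (sum(worst) / len(worst))
    return avg_fps, low_1_fps

# 198 smooth ~10ms frames (100 FPS) plus two 25ms stutters:
avg, low = fps_metrics([10.0] * 198 + [25.0, 25.0])
```

In this made-up capture the average barely moves (about 98.5 FPS), but the 1% low drops to 40 FPS – which is exactly what you’d feel as a stutter, and why the metric matters.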
Now for The Division 2. I’ve specifically chosen this over the other gaming benchmarks as shared-world games like it are increasingly popular, so looking at how such a game could potentially be handled by the next-gen hardware might be a little more informative and relevant. At 1080p, ultra quality, using DirectX 12, the 3700X achieved an average framerate of 158 FPS and a 1% low of 107 FPS (really good results). This is an increase of 21.2% and 32.1%, respectively, when compared to the 1800X. At 1440p, the 3700X achieved an average framerate of 126 FPS and a 1% low of 101 FPS, an increase of around 0.08% and 32.9%. The reason the difference in averages is so small is that at 1440p The Division 2 becomes a little more GPU-bound, which is where the next-gen consoles’ GPUs will really come into play. The main takeaway from this result is the almost 33% increase in frame stability, which is where Zen 2’s strength lies.
Now, there are a few things to keep in mind here. As I’ve said before, the current consoles do NOT use the 1800X as their CPU; they use an eight-core processor from the Jaguar line, a line which was used for AMD’s APUs – a component which blends CPU and GPU into one package. The next-gen consoles will not be using a 3700X but rather a custom eight-core Zen 2 CPU; what improvements or compromises have been made is yet to be seen. Boost clocks are also something to factor in here. Both the 1800X and the 3700X feature base clock speeds and boost clock speeds, which means that if their power/thermal budget allows for it, these CPUs will clock higher than their base clock to achieve higher performance numbers. The console CPUs don’t appear to be doing any of that. With all that said, if we hypothesise that the Xbox Series X and PS5 use CPUs akin to a modified 3700X, the gains compared to this current generation would be significant, as there are improvements to both clock speeds and IPC, with Microsoft even going as far as confirming support for simultaneous multithreading, a whole different beast in and of itself.
Now we’ll step away from the CPU side of things and look at the GPU, which for the uninitiated is the part that renders the game’s graphics. This one is much harder to get a baseline for, purely because both the Xbox Series X and the PS5 use RDNA 2 GPUs – a line of GPUs which, currently, is not commercially available. The closest thing we can go off is the Radeon RX 5700XT.
| Model | Architecture | Compute Units (CUs) | Clock Speed | Memory | Memory Bandwidth | Peak Single Precision Compute Performance | Peak Pixel Fill-Rate | Peak Texture Fill-Rate |
|---|---|---|---|---|---|---|---|---|
| Radeon RX 5700XT | RDNA 1.0 | 40 | 1605MHz (1905MHz boost) | 8GB GDDR6 | 448 GB/s | 9.75 TFLOPs | Up to 121.9 GP/s | Up to 304.8 GT/s |
| PlayStation 5 | RDNA 2 | 36 | 2230MHz (variable frequency) | 16GB GDDR6 | 448 GB/s | 10.28 TFLOPs | N/A | N/A |
| Xbox Series X | RDNA 2 | 52 | 1825MHz | 16GB GDDR6 | 10GB at 560 GB/s, 6GB at 336 GB/s | 12 TFLOPs | N/A | N/A |
One of the first things you’ll probably notice is that the Xbox Series X has a whopping 52 CUs packed inside it, compared to the PS5 whose CU count matches the RX 5700 (non-XT). This is not to say that the PS5 has a 5700, as it’s probably more efficient than AMD’s current second-highest offering; it’s just something interesting to note. The other thing you’ll notice is that both consoles have double the VRAM (memory) of the 5700XT. While it has been confirmed that the Xbox Series X will use more than 8GB of VRAM, a large portion of this allotted memory actually goes to the CPU for system operations. So while 16GB of GDDR6 may be correct on paper, in practice there isn’t quite that much memory for the GPU to use, so try not to focus on that.
We’ll once again be looking to Hardware Unboxed, as their review of the 5700XT will give us a very rough idea of performance increases compared to this current gen. Their test system this time is an Intel i9-9900K paired with the Radeon RX 5700XT. So what GPU are we able to compare with? Well, the PS4 Pro and the Xbox One X have GPUs comparable to the RX 570 and RX 580, respectively. These performance charts don’t include either of those cards, but they do include the RX 590, which is a minor step above the RX 580, so we’ll be comparing to that. Please also note that these scores are based on pre-release drivers, which were notably buggy. The drivers are still not perfect today but they have substantially improved since then.
We’ll begin with Battlefield V, where the 5700XT managed an average framerate of 112 FPS and a 1% low of 86 FPS while running at ultra quality at 1440p. Once again, what’s important to note is the 1% low for performance stability – the higher and closer these two metrics are, the better. Compare this to the RX 590’s average framerate of 62 FPS and its 1% low of 52 FPS – very tight performance, but it still doesn’t hold a candle to the 5700XT, which sees an increase of 80.6% for its average framerate and 65.4% for its 1% lows.
The 4K results, which were posted on Hardware Unboxed’s Patreon, told a similar story in terms of performance differential. The 5700XT was able to achieve an admirable 4K result of 64 FPS and a 1% low of 47 FPS – an improvement of 82.8% for the average framerate and 51.6% for the 1% lows. This is all while rendering at ultra quality, mind you, which the consoles probably won’t opt for, as they’ll sit in a nice middle ground in terms of graphical settings, if not a little higher.
Now, this next one is going to be a pure treat for Xbox fans specifically, as the Radeon 5700XT absolutely creamed the competition in Forza Horizon 4. The 5700XT, surprisingly, achieved an average framerate of 131 FPS and a 1% low of 113 FPS while running at 1440p ultra. Those are extremely tight timings and if every game treated the architecture as kindly as FH4 then the gaming space would probably be rife with incredibly well-optimised games. Even looking at the RX 590, the results are admirable for such a little card, coming in at a 73 FPS average and a 1% low of 60 FPS. Still, the 5700XT saw an average FPS increase of 79.4% and a 1% low increase of 88.3%. The 5700XT didn’t exactly keep up this trend as its 4K results, while still impressive, weren’t as crazy. An average of 85 FPS and a 1% low of 72 FPS, which is still an increase of 77.1% and 63.1%, respectively, when compared to the RX 590.
If you factor in that the Xbox One X, currently the most powerful console on the market, uses a GPU comparable to the RX 580 (not the 590), you can see from the raw numbers alone that we can expect significant performance increases in the GPU space with these next-generation consoles. This is without even discussing the fact that the Xbox Series X dedicates 10GB of its GDDR6 memory to the GPU while clocking said memory at 560GB/s, something which will become vital for 4K rendering: higher-resolution textures require more memory, and higher bandwidth makes that process more efficient as games will be able to load and render assets faster. It’s an incredibly smart move on Microsoft’s part. Sony has also addressed the main point of concern about the PS5 for a lot of fans: the 16 fewer CUs in its GPU compared to the Series X’s. Sony’s chief system architect, Mark Cerny, claimed that a smaller, nimbler GPU can still pull surprising levels of performance, with the TFLOP count being an almost deceptive representation of the GPU’s compute power.
Everyone was expecting both the Xbox Series X and the PS5 to come with some form of SSD. What we weren’t expecting was for that SSD to be an NVMe drive. These incredibly small and nimble SSDs are some of the easiest storage devices to work with on the consumer front, only requiring the user to slot the drive into its socket and secure it with the provided screw – no power cable and no data cable. These drives are by no means cheap, however, which is why the consoles’ use of them is so astonishing. We’ll take a quick look, as the difference between a standard HDD and these NVMe drives is so massive that the numbers almost don’t do it justice.
| Model | Form Factor | Storage | I/O Throughput (Raw) | I/O Throughput (Compressed) |
|---|---|---|---|---|
| PlayStation 5 | NVMe SSD (custom) | 825GB | 5.5 GB/s | 8-9 GB/s |
| Xbox Series X | NVMe SSD (custom) | 1TB | 2.4 GB/s | 4.8 GB/s (with a custom decompression block) |
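Some back-of-envelope arithmetic puts those throughput figures in perspective. Note the 40GB game size and the ~0.1 GB/s HDD speed below are illustrative assumptions, not official numbers:

```python
def seconds_to_read(size_gb, throughput_gb_s):
    """Best-case time to stream size_gb of data at a given raw throughput."""
    return size_gb / throughput_gb_s

game_gb = 40  # hypothetical game install
print(f"5400rpm HDD (~0.1 GB/s): {seconds_to_read(game_gb, 0.1):.0f}s")
print(f"Series X raw (2.4 GB/s): {seconds_to_read(game_gb, 2.4):.0f}s")
print(f"PS5 raw (5.5 GB/s):      {seconds_to_read(game_gb, 5.5):.0f}s")
```

That is roughly the gap between minutes of loading and a handful of seconds, before the compressed-throughput figures even enter the picture.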
While the Xbox Series X does beat out the PS5 in terms of storage capacity, that’s about the only storage category it wins. It’s not even a fair competition when you look at the throughput. It’s like comparing a stock Audi R8 to a MiG-25 fighter jet – they’re just not in the same class. I am thoroughly impressed with what Cerny and his team have managed to achieve with such a standardised form factor. The Xbox Series X also has the problem of using proprietary expandable storage (because that’s always gone down well – remember Sony’s Memory Stick tech?), whereas the PS5 allows users to add their own NVMe drive for expandable storage.
It will be really difficult to tell what the differences will mean, as the Series X seemingly packs more power in the other areas of the console compared to the PS5 so that brute force could potentially be enough to break even. In saying that, if Cerny and his team were able to pull this off with the NVMe, who knows what they have done with the rest of the system and its efficiency/speed?
With all this, let’s look at the console specs side-by-side:
| Model | CPU | GPU | GPU Compute Units | GPU TFLOPs | Memory | Memory Bandwidth | Storage | Expandable | Optical Drive | Performance Target |
|---|---|---|---|---|---|---|---|---|---|---|
| PlayStation 5 | Custom Zen 2 8C processor at 3.5GHz (variable) | Custom RDNA 2 at 2230MHz (variable) | 36 | 10.28 | 16GB GDDR6/256-bit | 448 GB/s | Custom 825GB NVMe | NVMe SSD slot | 4K UHD Blu-ray drive | 4K 60FPS |
| Xbox Series X | Custom Zen 2 8C/16T processor at 3.8GHz (3.66GHz multithreaded) | Custom RDNA 2 at 1825MHz | 52 | 12 | 16GB GDDR6 | 10GB at 560GB/s, 6GB at 336 GB/s | 1TB NVMe | Proprietary NVMe SSD | 4K UHD Blu-ray drive | 4K 60FPS |
There are, of course, other facets to look at, like the Xbox Series X’s Variable Rate Shading (VRS) – a technology where the system analyses what is being rendered, shades the important parts of the scene in full detail, and shades less important areas at a lower rate to conserve performance. You can already see a similar trade-off at work in Warframe’s rendering tech. The technique is in its infancy and far from perfect, but the potential for even more stable performance and visuals is incredible.
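The decision logic behind VRS can be sketched in a few lines. This is a toy model only – the real hardware feature operates on screen tiles inside the GPU pipeline and is driven by the game engine, and the thresholds here are made up for illustration:

```python
def shading_rate(motion, contrast):
    """Pick how many pixels share one shading result for a screen tile.

    motion and contrast are normalised 0..1 scores for the tile;
    the threshold values are invented for this sketch.
    """
    if motion > 0.5 or contrast < 0.1:
        return "4x4"  # shade once per 16 pixels – the eye won't notice
    if motion > 0.2 or contrast < 0.3:
        return "2x2"  # shade once per 4 pixels
    return "1x1"      # full-rate shading for detailed, static areas
```

A fast-moving or flat, low-contrast tile gets coarse shading and frees up GPU time; a detailed, static tile keeps full quality – that asymmetry is where the performance headroom comes from.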
Additionally, both consoles appear to support hardware-accelerated ray tracing. It is almost impossible to judge how that will go, given that Nvidia’s RTX line of cards is the only taste of ray tracing we have gotten thus far and they are…well…not even remotely worth it, if I’m being blunt. There are also other technologies which could eventually make their way to the consoles, like AMD’s Radeon Image Sharpening, which applies an extra layer of contrast-adaptive sharpening as post-processing, making games appear sharper with minimal performance cost, especially when compared to Nvidia’s DLSS.
While all this speculation isn’t hard fact, when you look at the current numbers of standard consumer products which may be comparable to the consoles’ components, it’s very exciting to see the possible improvements from this generation to the next generation, especially when looking at the GPU improvements based on what is available (which RDNA 2 GPUs are not). I still maintain that whoever holds the method for efficient file compression/decompression holds the key to dominating the digital market, as I don’t think anyone will see smaller downloads and faster installs as a bad thing.