There isn't a discussion thread on the PS4 from the perspective of a PC gamer/hardware enthusiast, so I thought I'd start one. From what I can see, lots of boys and girls are jumping up and down like a Jack Russell terrier on pharmaceutical-grade cocaine. I love it when people see this stuff and it's like a door has been opened to Narnia, like Sony have pulled this technology straight out of Star Wars, as if it were something revolutionary. It isn't, really - the PS4 tech demo [from 20/02/2013] was run on a 'similarly powered PC'.

There are two things I'd like to clarify here:

1. A sigh of relief that the PS4 will be using PC components and architecture, which means we won't just be getting 'console ports' anymore. Games will be designed from the ground up with x86 and GCN architecture in mind. No more ridiculous PowerPC (Performance Optimization With Enhanced RISC - Performance Computing) or CELL architecture being poorly converted over to the x86_32 or x86_64 platform.

2. Contrary to popular belief, the PS4 isn't that powerful - even if it is 'Supercharged'! It will be closer to a mid-to-high-level laptop: the CPU is what you'd find in an entry-level laptop, the GPU in a high-end gaming laptop.

Still, that bloke (Mark) who was wearing the kitchen-curtain shirt will never live down calling the PS4 hardware 'Supercharged'. He didn't feel the need to give rhyme, reason or even the slightest bit of quantification; the fact that it's entirely magic is good enough for him. I hope he takes care when he rides his unicorn to work - they're somewhat of a rarity these days.

I'm not starting a flame-war here, so don't get all Sony fan-boy on me. Yes, I am a PC gamer, but I want this to be a constructive thread about what is under the hood of the PS4 and what kind of performance and benefits we can reap - both for the benefit of console gamers and for the boon of DirectX 11 graphics, which will make the PC gaming community infinitely happy.

Side note - did you know that DX11 has been around since 2009, yet we've only seen a handful of titles utilise it properly, at least on the PC, because consoles have been stuck in the year 2002 with DX9? Yes, the technology that powers the 360/PS3 is from 2002, whilst the hardware is high-end 2005. If they'd held out a little longer, they could have dropped DX10 architecture in there.

The new PS4 will be roughly as powerful as a very high-end PC from 2008, or a high-end PC from 2009. Actually, the best comparison would be a high-end gaming laptop with an entry-level CPU that has been doubled in size - making it a middle-high-ish CPU? I don't even know where to begin on the CPU, as there are very few details regarding the Shaguar.

We ('PC gamers') have been awaiting this moment ('PS4 announcement') for at least 3-4 years... and it couldn't have come soon enough. Our DX11 monsters don't have any DX11 friends, apart from Battlefield 3 and a couple of notable others. By the way, whilst watching that dreadful PS4 announcement - I don't know about you, but I couldn't give a flying toss about the Facebook/social integration features.

DAS HARDWARE

- CPU: x86-64 AMD Jaguar, 8 cores
- GPU: 1.84 TFLOPS, AMD next-generation Radeon based graphics engine
- Memory: 8GB of GDDR5

CPU

Source: HERE

The good news here is that the Jaguar pairs one floating-point unit with each integer unit - for those of you who don't understand why that matters, I have one word for you to Google: Bulldozer.
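If the Bulldozer reference doesn't ring a bell: Bulldozer shares one floating-point unit between the two integer cores in each module, while Jaguar gives every core its own. Here's a rough, purely illustrative back-of-the-envelope sketch - the clock speeds and the 8 FLOPs/cycle figure are my assumptions for the sake of the comparison, not confirmed specs for either chip:

```python
# Illustrative peak single-precision FLOPS: FPUs x clock x FLOPs per cycle.
# All numbers below are assumptions for the sake of the comparison, not official specs.

def peak_gflops(fpu_count, clock_ghz, flops_per_cycle=8):
    return fpu_count * clock_ghz * flops_per_cycle

# Jaguar: one FPU per core, so 8 cores = 8 FPUs (assumed ~1.6 GHz).
jaguar = peak_gflops(fpu_count=8, clock_ghz=1.6)

# Bulldozer: one FPU shared per two-core module, so an "8-core" part has 4 FPUs
# (assumed ~3.6 GHz) - FP throughput scales with modules, not integer cores.
bulldozer = peak_gflops(fpu_count=4, clock_ghz=3.6)

print(f"Jaguar, 8 cores / 8 FPUs:     ~{jaguar:.0f} GFLOPS")
print(f"Bulldozer, 8 cores / 4 FPUs:  ~{bulldozer:.0f} GFLOPS")
```

The takeaway: on a Bulldozer-style design, doubling the integer core count doesn't double the floating-point grunt, whereas on Jaguar it does.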
Unfortunately, it looks like Sony and Microsoft are using the same CPU in their consoles. Gotta love a good bit of market competition, there's nothing like it! I hope one, or both, of them swaps that CPU out so one has an edge over the other. Otherwise, you may as well just buy one, depending on which has the best networking [both social and gaming] and multimedia capability - AKA buy the one your friends are going to buy, or else you're going to be alone. Forever.

The original AMD Shaguar has 4 cores, so this is definitely a custom chip. It was originally designed for mobile use [laptops, tablets, phones] in direct competition with the Intel Atom, so it bamboozles me as to why Sony/MS have gone with this chip - unless they're planning to build an unbelievably small-form, or even portable, platform. Possibly, instead of having a 4-core chip at 3.2GHz per core with each core processing 2 threads at once, they've gone for 8 physical cores at half the speed. I'd prefer more cores to more hyper-threading; will the chip be capable of AMD's HT equivalent? If so, that's good news too.

For those of you who don't know how Intel's HyperThreading works, it essentially goes like this. Imagine a supermarket with 4 queues and 4 cashiers who can multi-task. All 4 queues have old ladies standing with baskets full of shopping, and once they get to the till they start counting mountains of change. So there you are, standing in a queue behind an old lady who is taking her time counting all of her change. But you only have a loaf of bread, and you have the exact change. The cashier looks at you, notices that you've got the correct change and says, "I'll process you just now, and let you pass" - the cashier asks the old lady to stand aside while she rockets you through the till. Now, imagine the same supermarket, but there are lots of old ladies with lots of change, all standing one behind another... Urgh... And unfortunately, you're waaay at the back. Now the cashiers can't multi-task like they did with you; they just have to keep churning through the old ladies until you're in a position to get through. So, at that exact moment, the supermarket decides to upgrade: they go SimCity on you and plop down 4 more cashier desks, staffed with cashiers who can't multi-task. They can now process all of the old ladies at the same time! Now they can spread out right across the store and give you a better chance of getting served. Essentially, HT comes in handy when you're short on real cores, but it only helps in certain conditions; more physical cores are always better.

I'm thinking that with more cores at a lower clock, they found a great trade-off between performance, heat and cost. That would definitely be my guess. Fewer transistors = less power draw = less heat = lower cost. The original has a TDP of 25W, so I'm guessing double that for 8 cores - which is hee-haw. This will allow for a very-small-form PS4.

GPU

Sony have quoted, "1.84 TFLOPS, AMD next-generation Radeon based graphics engine." The way I see it, AMD are able to make these chips by just slamming CUs together - almost in a modular fashion - and keep a (near) linear scaling of performance. It's a good style of hardware design to go for with consoles, because at any point before the release of the PS4, Sony can turn around and say, "Right boys, let's get it right around Microsoft, we want another 4 CUs dropped in for the release-day console."
That way, they don't need to reinvent the wheel in order to squeeze more juice out of their box. Pretty much every tech-geek website on the Interweb has quoted that the GPU will host 18 Compute Units, leading me to believe that the performance is going to be similar to a watered-down HD7970M - the desktop equivalent lies somewhere between an HD7850 and an HD7870. Let's throw out some base stats and whip up a comparison.

HD7870:
- 1000MHz Core Clock
- 2.56 TFLOPS Single Precision compute power
- GCN Architecture
- 20 Compute Units (1280 Stream Processors)
- 80 Texture Units
- 128 Z/Stencil ROP Units
- 32 Color ROP Units

HD7850:
- 860MHz Engine Clock
- 153.6GB/s memory bandwidth (maximum)
- 1.76 TFLOPS Single Precision compute power
- GCN Architecture
- 16 Compute Units (1024 Stream Processors)
- 64 Texture Units
- 128 Z/Stencil ROP Units
- 32 Color ROP Units
- Dual Geometry Engines
- Dual Asynchronous Compute Engines (ACE)

HD7970M:
- 850MHz Engine Clock
- 2.17 TFLOPS Single Precision compute power
- Graphics Core Next (GCN) Architecture
- 20 Compute Units (1280 Stream Processors)
- 80 Texture Units
- 128 Z/Stencil ROP Units
- 32 Color ROP Units

Depending on what type of architecture Sony are running: take a blend of the above stats, shave off a little cost and power requirement, reduce the heat, and you get what's inside the PS4. If you take the HD7970M and remove two CUs, you get a GPU with 1152 stream processors - you land on a juiced-up HD7850. Although the PS4 still claims a slightly better TFLOPS count! I'm guessing that's attributed to a little witchery with the GCN architecture - either that, or they're using GCN2, or a tweaked version of GCN.

The GCN cards are built for compute, there's no doubt about that; they pack a big bang for GPGPU work. But raw compute power isn't the same as gaming performance. The GTX680 has 3.1 TFLOPS of shader arithmetic, the HD7970 has 3.8 TFLOPS of shader arithmetic - yet, amazingly, they perform very similarly. They each pull ahead in their own titles, depending on who has the best/most optimised drivers and who was a 'partner' during the development cycle. Usually, games that have sided with AMD run best on AMD cards - the same applies to nVidia. It made sense for Sony to choose AMD, because they're planning on using the GPU as a GPGPU and are using it for physics computation(s).

Personally, I would have liked to have seen the PS4 break the 2 TFLOP barrier with the GPU, since the PS3 seemed to do it so easily - according to Sony's press release, many moons ago! But again, I digress; this is a laptop-esque console we're talking about here. You can't cram in all of that power with nowhere for the heat to go. Sony may increase the clock speed before release day, or change it in a firmware update, like they/you can do with the portable PS devices. It's more than possible to squeeze more juice out of these cards - as a PC gamer, I should know. The AMD cards have MASSIVE overclocking headroom, and that's without playing with the voltages; that's when you drastically increase temperatures.

Maybe having the PS4 and 720 use AMD for gaming is a good step for us PC gamers. A lot of games will be optimised for the GCN & x86 architecture - here's hoping that AMD don't abandon it a little way down the line. The boys with HD7970s may not need to upgrade for quite some time!
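Before moving on to memory, here's the arithmetic behind those TFLOPS figures: single-precision peak is just stream processors x 2 FLOPs (one fused multiply-add) x engine clock. A quick sketch - note that the ~800MHz PS4 clock in it is my own assumption, back-solved from Sony's 1.84 TFLOPS figure, not an announced spec:

```python
# Sanity-check of the single-precision TFLOPS figures quoted above.
# Peak SP throughput = stream processors x 2 FLOPs per clock (FMA) x clock.

def sp_tflops(stream_processors, clock_mhz):
    return stream_processors * 2 * clock_mhz / 1_000_000

cards = {
    "HD7870":        (1280, 1000),  # 20 CUs x 64 SPs
    "HD7850":        (1024,  860),  # 16 CUs x 64 SPs
    "HD7970M":       (1280,  850),  # 20 CUs x 64 SPs
    "PS4 (assumed)": (1152,  800),  # 18 CUs x 64 SPs, ~800 MHz is a guess
}

for name, (sps, mhz) in cards.items():
    print(f"{name:14s} {sps:4d} SPs @ {mhz:4d} MHz -> {sp_tflops(sps, mhz):.2f} TFLOPS")
```

Which is why 18 CUs at a fairly conservative clock lands almost exactly on the quoted 1.84 TFLOPS.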
RAAAAAM

I'm so, so ****ing glad they've gone with 8GB of unified RAM; that is way more than expected. The 720 is slated to use 4GB of DDR3, coupled with a little bit of cache to make up for their own negligence - that's a fail. A massive, massive fail. The fact that Sony is going with all GDDR5 is a big sigh of relief too: that's the stuff you normally see on most higher-end GPUs, and it is FAST.

I personally would have wanted to see a little more RAM in there, considering a lot of games these days can use in excess of 2-3GB of RAM, and a lot of them are maxing out GPUs that have 2GB of VRAM. Skyrim on the PC can use 3GB of VRAM when coupled with the HD texture mods - that's a total of 4GB of RAM and 3GB of VRAM being chewed up by one game. Not to mention, the PS4 will have the Sony OS running in the background, with the multimedia functions and the Sony 'home' environment running too. They even have video constantly being recorded on a separate chip [the AMD APU?] - that kind of stuff requires a good bit of frame-buffer and is I/O heavy too, especially considering your friends can tune in. I'm assuming they will use the Gaikai system for that and keep your PS4 constantly uploading - but again, I digress. We can chat about Gaikai later.

Above, I didn't mention anything about the RAM on the GPUs, because this console uses a unified RAM system. Although, I will say ONE thing about existing technology: the HD7850 listed above tops out at 153.6GB/s of memory bandwidth, and the Sony boys' custom unified design has allowed them to squeeze out roughly 23GB/s more - that's a lot more pixels.
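For the curious, here's roughly where those bandwidth numbers come from. GDDR5 bandwidth is just bus width x effective data rate; the 256-bit bus and 5.5Gbps rate I've used for the PS4 below are my assumptions to make the arithmetic land on ~176GB/s, not figures Sony have confirmed:

```python
# GDDR5 memory bandwidth = (bus width in bits / 8) x effective data rate in Gbps.
# The PS4 bus width and data rate below are assumptions for illustration only.

def gddr5_bandwidth_gbs(bus_width_bits, effective_gbps):
    return bus_width_bits / 8 * effective_gbps

hd7850 = gddr5_bandwidth_gbs(256, 4.8)  # 153.6 GB/s, as listed above
ps4    = gddr5_bandwidth_gbs(256, 5.5)  # 176.0 GB/s, roughly 23 GB/s more

print(f"HD7850:         {hd7850:.1f} GB/s")
print(f"PS4 (assumed):  {ps4:.1f} GB/s")
```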