Got anything fresh? Some bedtime reading))
No idea what you're after, but here, for example, is Albert Penello's account of chip development for consoles. Not quite on topic, but interesting.
If I understand the question posed, the idea is if it's cheaper to run an existing chip at a higher spec (therefore having less usable parts) to hit a performance target (say 10 TFLOPS) vs. building a new chip from scratch that has overhead to disable CU's to also get to 10 TFLOPS?
Is that the gist?
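To make the trade-off concrete, here's a rough sketch of the two paths to 10 TFLOPS using the standard GCN/RDNA-style formula (CUs × 64 shaders × 2 FLOPs per clock × clock speed). The CU counts are hypothetical, purely for illustration:

```python
# Rough GPU compute model for GCN/RDNA-style parts:
#   TFLOPS = CUs * 64 shaders * 2 FLOPs/clock * clock (GHz)
# CU counts below are hypothetical, just to illustrate the trade-off.

def tflops(cus: int, ghz: float) -> float:
    return cus * 64 * 2 * ghz / 1000.0

# Option A: existing 36-CU chip cranked to a higher clock to hit 10 TFLOPS
clock_a = 10_000 / (36 * 64 * 2)  # GHz required

# Option B: new, wider chip with spare CUs disabled for yield (40 active), lower clock
clock_b = 10_000 / (40 * 64 * 2)

print(f"36 active CUs need {clock_a:.2f} GHz")  # ~2.17 GHz
print(f"40 active CUs need {clock_b:.2f} GHz")  # ~1.95 GHz
```

Option A avoids a new silicon program but pushes every chip harder (worse yields, more heat); Option B eats the up-front design cost in exchange for running each part well within its comfort zone.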
This is a super interesting question I'll have to think about it. There are a lot of variables at play, not the least of which is - when is this decision being made?
In general, the answer would doubtless be to build the right chip. Getting good yields and having overhead is going to be the most cost effective in the long run. That said, the sunk costs for a whole new chip development are, like, A LOT. So you don't want to do that twice.
But really it's sort of unanswerable without a lot more information. I think it really comes down to how much the yields are impacted and how the longevity of the good chips are affected. I've said before that +/- 10% is a good rule of thumb where you could consider the performance increases vs. the yield implications to be tolerable.
At the risk of beating a dead horse (since I've said this often), I also need to point out the case and cooling design limitations. You can crank the silicon all the way up until you only have 1 good chip off the wafer, but if you can't cool it once it's in the console, it's no good. So that's a limiting factor in this as well.
(QUICK EDIT: Remember that the cost of throwing away bad chips goes up at a faster rate than the number of bad chips. So if you were expecting 90% yields, removing 20% more chips is ~30% more expensive per chip. Removing 30% more chips makes each chip 50% more expensive, and so on)
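The arithmetic in that edit follows from cost per good chip scaling with the inverse of yield. A quick sketch (wafer cost is an arbitrary placeholder; "removing 20% more chips" means yield drops from 90% to 70%):

```python
# Cost per good chip = wafer cost / fraction of usable chips.
# Wafer cost of 100.0 is an arbitrary placeholder; only the ratios matter.

def cost_per_good_chip(wafer_cost: float, yield_rate: float) -> float:
    return wafer_cost / yield_rate

base = cost_per_good_chip(100.0, 0.90)      # expected 90% yields
tighter = cost_per_good_chip(100.0, 0.70)   # discard 20% more chips
tightest = cost_per_good_chip(100.0, 0.60)  # discard 30% more chips

print(f"{tighter / base - 1:.0%}")   # ~29% more expensive per chip
print(f"{tightest / base - 1:.0%}")  # 50% more expensive per chip
```

So the penalty is convex: each additional point of yield you give up costs more than the last one did.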
The SOC's that AMD are building for Microsoft and Sony are *highly* customized specifically for the vendors based on specs, cost, and performance requirements that each platform defines for AMD. There is no off-the-shelf equivalent of anything that goes inside an Xbox or PlayStation, and the idea they are "pre-made" really misunderstands what's happening and sort of diminishes the role of Cerny or the architects at Microsoft in terms of how much custom work goes into the chip.
AMD has IP and Processes. Sony and Microsoft work with AMD's list of available IP and Processes, add in some of their own requirements, and have an entirely new SOC created specifically for them. This process takes years. Everyone is aware that the PS4 Pro GPU had unique HW instructions for temporal reconstruction, as well as support for FP16, which did not exist together in any other part of the AMD portfolio. The ability to decode Xbox 360 textures and much of the native DX12 instructions in the Xbox SOC's only appear in the versions that ship on Xbox (the DX stuff I believe ended up in later AMD GPU's). This is the 'secret sauce' that is so often talked about, and it absolutely can work for (or against) a specific console. I don't think Sony got much mileage out of FP16, and Xbox One's implementation of ESRAM didn't help them as much as expected either. Meanwhile, the Checkerboard techniques helped PS4 Pro a lot, and the ability to decode X360 textures is why you have such a good back compat story on Xbox.
Here's the best way I can think of to describe it. Think of AMD as a Caterer. They have a list of ingredients (e.g. Zen, RDNA, HW RT, etc. etc.) and they also have a set menu based on their ingredients. (RX5000 series, RX Vega series, RX 500 series). Most people think that Xbox and Sony choose from the Menu. But what actually happens is they choose from the ingredients, add in some of their own specific ingredients (DX instructions, Back Compat Code, Checkerboard, etc.) and have an entirely new dish created just for them.
My point is that even though AMD's R&D efforts provide the groundwork, the specific SOC's found in PlayStation and Xbox consoles are completely bespoke designs, created in collaboration between AMD and the platform makers, and are designed years in advance. This is why anyone speculating that wholesale performance changes can happen at any time doesn't seem to realize how far in advance these decisions are made.
Yes small changes can happen throughout the process and plans evolve from the initial spec to the final production silicon. But you can't grow the performance of the chips this late in the game without restarting some pretty critical, long-pole parts of the process.
I'm only speaking from one side of this, but I'm pretty sure it's similar on the PlayStation side.
EDIT: Upon a re-read, I'm not giving enough credit to AMD's engineers either. It really is a collaboration between the companies. But the performance, cost and business aspects are defined by the consoles. The real silicon engineering is a collaboration.
It is entirely possible that Sony has been developing, in parallel, two completely different SOC's. I've said before it would be costly to do, but you seem very convinced about it, and I will concede that while I've never heard of such a thing happening since the Dreamcast days, it's not impossible, and therefore it becomes simply a difference of opinion.
I will give you two reasons why I reject it, with which of course you may disagree.
First, if there are meaningful differences in performance between the two chips, there will be meaningful differences in cooling and other support components. At a minimum this means an entirely different motherboard layout and cooling design. Therefore Sony would have to engineer two different *consoles*, not just two different chips, to support both scenarios. Otherwise you would have to believe that Sony is over-designing a bigger box (which comes with cost) to have the headroom to support two different SOC designs. So they would be willing to eat not only the upfront costs of two silicon programs but also the risk of over-building a form factor to support these different outcomes? The elegant HW designs Sony has produced so far don't suggest to me they do anything other than build a holistic system that's highly integrated between the silicon and system design. It's just far too inelegant and expensive.
The second reason (which I'll admit is more subjective) is that this idea also implies Sony isn't confident in their own strategy. That somehow they feel compelled to pay for option value just in case Microsoft does something that scares them. I just find it hard to believe they don't know exactly what they want to build, and what they want it to cost, and have already considered what the competition might do. I just don't believe Sony is so worried about Xbox that they are going to be the ones holding option value and taking this kind of risk and cost.
I think the most likely scenario here is that Sony planned a Jaguar-based console for 2019, be that a PS5 or a PS4 Pro Plus, rejected that idea, and moved to a Zen-based console for 2020, which is the PS5. That change seems to have happened in 2017. If that's true (and I believe it is), then it would be even harder to believe they went and spun up two different 2020 SOC's, both going through a full validation phase, only to wait and see what Xbox does before deciding which one to put into production.
And even if you believed that - I can say with a high degree of certainty that they would have needed to abandon one of them at least 6 months ago to launch this holiday.