2 Pingbacks/Trackbacks

  • carol argo

    Intel's main issue at 14nm is cooling! Only graphene (so far) could help Intel. Sadly, supplying what would be required for graphene at 14nm is elitist-jerks territory. Yes, there is going to be a lot of theorycrafting before this happens. Cooling at 22nm is at the fringe of unacceptable; Intel would only need to add graphene to a 22nm CPU to gain a huge boost in everything. So 14nm isn't a rush job.

    • flashmozzg

      Cooling at 22nm is OK. At least at 4GHz it's easy to cool my Haswell quad core with a good mid-range cooler.
      And they'll just have to stop putting cheap thermal paste on their dies, as they started doing two generations ago. If you swap it for metal you can easily get a 10 °C drop.

      • Topsu

        The cores are too small and would not tolerate soldering.

        • flashmozzg

          They tolerated it before and still tolerate it in the server/LGA2011 versions (at least the engineering samples). Thermal paste is just cheaper and good enough for general use, and it makes overclocking harder, which is what Intel wants.

          • Topsu

            But in the server/LGA2011 parts there are more cores, making the area to solder the heatspreader onto larger.

          • Black Thorne

            That has absolutely no bearing on the argument; if anything, the reverse is true.
            Either way, indium or gallium foil is a great option for a paste-less, high-performance interface under a heatspreader. That's not the problem. The problem with cooling is that we need to move away from air cooling: it's noisy, inefficient, and doesn't keep up with advances in the technology.
            As mobile chips get more powerful and generate more heat, a solid-state cooling solution is needed.

  • Ole fra trondheim

    Good read. I enjoyed it.

    I am however a bit sceptical about your conclusion.
    GloFo states "no high-power module at 20nm".
    TSMC is already producing 20nm low power. As TSMC said, they saw no differences between the LP and HP processes.

    What this means is that both foundries could be producing low-power 20nm wafers for AMD's and Nvidia's GPUs, because it is good enough for high power. In the video you can also see that the jump from 28nm HPM to 20nm LPM is +16%.

    Combine that with the fact that Nvidia and AMD -need- 20nm to fit twice as many transistors as they use in their 28nm GPUs.

    I think that both GloFo and TSMC have already been producing 20nm LP wafers for AMD and Nvidia and that they both will use that in upcoming GPUs.
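
    The "-need- 20nm" density point is just geometry; here is a quick back-of-the-envelope sketch (idealized square-law scaling with a hypothetical helper function — real-world density gains from a half-node shrink are usually somewhat lower):

    ```python
    # Ideal transistor-density scaling between process nodes: the area of a
    # transistor scales roughly with the square of the feature size, so the
    # achievable density scales with the inverse square.
    def density_gain(old_nm: float, new_nm: float) -> float:
        """Ideal density multiplier when shrinking from old_nm to new_nm."""
        return (old_nm / new_nm) ** 2

    print(round(density_gain(28, 20), 2))  # 1.96 -- i.e. close to 2x
    ```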

    • http://techsoda.com/ TechSoda

      Cheers Ole!

      They are probably measuring the difference between HPM and LPM using an ARM SoC, so there is a good chance a bigger chip like a GPU would lose another 10%. That’s getting it close to scratch…

      There’s also a very interesting point in the Anandtech 750 Ti review that a lot of people missed – http://www.anandtech.com/show/7764/the-nvidia-geforce-gtx-750-ti-and-gtx-750-review-maxwell/3

      Down the bottom of page 3 –

      “Given that TSMC 28nm is now a very mature process with well understood
      abilities and quirks, NVIDIA should be able to design and build their
      circuits to a tighter tolerance now than they would have been able to
      when working on GK107 over 2 years ago.”

      So basically the level of maturity on 28nm now allows for more optimized designs, which should allow it to exceed the abilities of the current 20nm. Using all of this together I concluded that TSMC’s current ultra-mature 28nm is a better process in terms of performance than their current 20nm.

      Now the last point to consider is that both of them released very fast 28nm cards recently – full die chips in the 780 Ti and 290X. So, they are both at their absolute limits of 28nm – making it even less likely that they would be able to surpass it at 20nm, at least for a very, very long time.

      So if we assume the only real gain is in transistor density (which to be fair is the main reason why both AMD and Nvidia would go there) – but that financial benefit is wiped out by much higher fixed costs (wafers), then the reasons for going to 20nm just keep on diminishing.

      I believe that both of them decided a long time ago that 28nm would last them until FinFETs arrive. Easier and cheaper to stay and optimize on the known quantity that is 28nm.

      You could be right about the low-power part – I had considered that both of them might just throw out ultimate performance and go all-in on mobile. Basically giving up on beating their current high-end cards.

      Interesting times for sure, I can’t wait to find out if I was right or completely wrong on this.

  • http://techsoda.com/forums/memberlist.php?mode=viewprofile&u=120 NaroonGTX

    Indeed, I’m not sure why everyone is assuming that the next wave of GPUs will be 20nm. Too much evidence pointing against it for it to be remotely feasible. TSMC’s 28nm is much more mature than it was two years ago, so it’ll be interesting to see big Maxwell and Pirate Islands later this year.

  • Pingback: Yes, Intel is in trouble. And it's worse than you think. | TechSoda

  • Pingback: Nvidia GeForce GTX Titan Z - yours for only $2999 | TechSoda

  • chris0101

    Question though for Jim:

    I’m interested in buying a GPU this summer. Does it make sense to buy a 290X or 780Ti or to wait it out?

    Ok, so based on this article:
    – 20nm GPUs are not coming out any time soon and even if they did, the performance gains are paltry, but the expenses to make them are very high. We’re probably only going to see lower power chips from the 20nm node for now.

    – We could see a Maxwell later this year – probably like the 680? How much more powerful would a “big” (550 mm2) Maxwell be than the current 780Ti?

    – Pirate Islands may come out later this year? How much faster would it be? 10-20% maybe?

    So I guess the question is: wait for big Maxwell/Pirate Islands in, say, Q4, or buy something now? I usually buy the custom PCB cards, so add another 3-6 months on top of that, I guess.

    • http://techsoda.com/ TechSoda

      A couple of other sites are suddenly jumping on the 28nm bandwagon.

      The chances of seeing a big (20nm) Maxwell this year are, for me, non-existent. There could be a chance of a big 28nm Maxwell though. If there is a big 28nm Maxwell, you can expect a 20% minimum and possibly 30% over the 780 Ti.

      So little is known about Pirate Islands that it’s basically a total guess. We think it will have HBM, most people think 20nm but I’m sticking to 28nm for that. AMD has already said that all they have this year is 28nm and side-stepped the issue of 20nm graphics altogether when asked.

      They are all being very cagey, like they are hiding something.

      As for buying something now or waiting, it’s the age-old question. We are halfway between series, making it even harder to decide what to do right now. I can’t answer that question for you though – I don’t know what I’d do myself right now. Maxwell is impressive and *if* Nvidia has an 880 coming out soon-ish it could give them a big lead. Nobody knows what AMD is doing, but Pirate Islands appears to be at least 6 months out still, like you said.

      • chris0101

        How much faster memory performance are we likely to see in the real world with HBM on AMD’s Pirate Islands, though? Intuitively, I’m expecting a leap comparable to going from the 5870 to the 6970, which on paper should be competitive with Maxwell.

        On that note, how well do you think a “big” Maxwell will scale? I hear Nvidia Maxwell is coming out in Q4 2014, which seems reasonable all things considered.

        I’m going to wait a month or two and make a decision.

        • http://techsoda.com/ TechSoda

          Just too hard to say. Bandwidth is king at the extreme high resolutions but less important everywhere else. I think we’ll see AMD pushing Eyefinity and 4K pretty hard because of that.

          As for Maxwell scaling, it’s a complete unknown, but Nvidia normally scaled downwards worse; that’s why they were losing so badly to AMD at the low end and midrange for years before Kepler.

          I think waiting a couple of months won’t hurt – at least we should have an idea of the mid-range Maxwell by then. If that shows similar gains as the 750 Ti then it’s looking good for Nvidia.

          • chris0101

            Yeah I think you’re right. I will probably wait it out for a couple of months and see.

            Thanks for the help though.

          • NoldorElf

            Update: Bought a used MSI R9 290 Gaming (non-mining luckily). Price was $380 CAD, which is about 350 USD and 205 GBP. It wasn’t the best deal out there, but it was non-mining and from a source I trusted more. Not sure where you are, so I gave all currencies.

            I suspect that 4K is a while off. My goal is a 60 Hz 4K IPS/PLS monitor with no MST. That seems a while off.

            I may go Crossfire/SLI with a top tier card next generation though, depending on their relative performance and I’d like to get MSI’s Lightning series then.

            But as far as gaming goes, this seems good enough for now.

  • Dion Piggott

    Can you please explain why performance and capacity are better when the die is shrunk? Logically I can’t seem to completely grasp the concept. Especially since the transistors have to be made considerably smaller to fit on the smaller surface area, couldn’t the smaller transistors be used on the larger surface area instead, supporting even more of them and negating die shrinks altogether? For mobile devices I totally understand the benefits, but for desktops it seems totally unnecessary…

    • http://techsoda.com/ TechSoda

      Smaller transistors use less power when switching, and power is a major factor in determining the maximum performance of a chip. Think about how the power draw of a chip increases as the transistor speed increases. Smaller transistors can lead to faster clock speeds at the same power, basically.

      As far as increasing performance goes, you tend to find that AMD and Nvidia stick somewhere around their classic chip sizes regardless of transistor (node) size. Nvidia’s high-end chips are 500mm2+ behemoths while AMD’s were generally ~350mm2. Obviously that means Nvidia can cram a lot more transistors, and therefore performance, into a single chip.
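
      The clock-speed point above can be sketched with the classic dynamic-power relation, P ≈ C·V²·f, a simplification that ignores leakage (which actually gets worse at small nodes): shrinking transistors lowers the switched capacitance C and usually the voltage V, leaving headroom to raise the frequency f at the same power budget. The 30%/10% figures below are illustrative assumptions, not real node data:

      ```python
      # Rough dynamic-power model for a CMOS chip: P ~ C * V^2 * f.
      # (Ignores static/leakage power, which grows at smaller nodes.)
      def dynamic_power(capacitance: float, voltage: float, freq: float) -> float:
          return capacitance * voltage ** 2 * freq

      # Normalized baseline chip:
      base = dynamic_power(1.0, 1.0, 1.0)

      # Suppose a shrink cuts switched capacitance by 30% and voltage by 10%;
      # at the same power budget the clock could then rise by roughly 76%:
      new_freq = base / (0.7 * 0.9 ** 2)
      print(round(new_freq, 2))  # 1.76
      ```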

  • Mario

    You assume a lot of things in this article.
    GF most likely skipped 20nm HP because they made a deal with Samsung for 14nm FinFET, so I’m sure they are trying to bring out a much better node in the same time frame.

    Also, the performance you get out of the 20nm node depends on your chip architecture. Maybe AMD developed their next-gen cards from the ground up for 20nm and found ways to improve performance and efficiency above that 10% level. Nvidia, as they always do, perhaps tried to port their architecture to 20nm and found it brought them few gains. It’s clear that Maxwell was developed for the 28nm process.

    The big deal will be 14nm FinFET in 2016. Also, for phone SoCs I estimate that 20nm will bring about 25% more performance compared to 28nm. That is decent.

    • http://techsoda.com/ TechSoda

      Well I said Maxwell would remain 28nm and I was right. I said Pirate Islands would stay 28nm and it appears I am also right about that.

      Check what the rest of the press was saying 4 months ago, let alone 8 months ago when this article was first written. You’ll see who was making assumptions.

    • Frank Trottier

      Indeed. AMD clearly stated that they were going for 20nm and then 14nm here: http://www.eteknix.com/amd-tape-14nm-20nm-process-chips-next-2-quarters/
      And it fits with AMD’s timeline, what they need, and the specs leaked recently. AMD could wait, but Nvidia had to release something to make a splash, so they were not happy about the state of 20nm. The leaked specs talk about 4096 SPs at 1000 MHz: lots of transistors from the die shrink, no increase in speed (1000 MHz), and power draw stabilized but greatly improved by their architecture (for example, Tonga). Fits like a glove.