Billions of dollars are spent every year on shrinking the size of transistors, for good reason.
Smaller transistors have superior performance characteristics, but the main reason for the shrink is that the smaller the transistors are, the more you can squeeze into a chip. That means you can get better performance from smaller chips, allowing you to fit more chips on the same wafer – and the more chips on a wafer, the more money you make per wafer.
Take this example of a 40nm wafer and a (more advanced) 28nm wafer:
The left wafer (40nm transistors) has chips of 150mm^2, 12.5mm on each side. The right wafer (28nm transistors) has chips of 100mm^2, 10mm on each side. The left wafer contains 376 chips and the right wafer contains 600 chips.
Even though the chips on the right wafer are smaller, each one still performs better than a chip from the left wafer – because the smaller 28nm transistors mean you can squeeze more of them into a smaller area.
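As a rough sanity check on those chip counts, the standard first-order dies-per-wafer approximation (wafer area divided by die area, minus an edge-loss term) lands in the same ballpark. The 300mm wafer diameter is an assumption on my part, and real counts also depend on scribe lines and edge exclusion, which is why the article's figures come out a little lower:

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Classic first-order estimate: gross wafer area over die area,
    minus a correction for partial dies lost around the wafer edge."""
    r = wafer_diameter_mm / 2
    gross = math.pi * r * r / die_area_mm2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(gross - edge_loss)

# 300mm wafer assumed; the article's counts (376 and 600) are a bit
# lower because this estimate ignores scribe lines and edge exclusion.
print(dies_per_wafer(300, 150))  # ~416 for the 150mm^2 die
print(dies_per_wafer(300, 100))  # ~640 for the 100mm^2 die
```

Either way, the relative gain matches: shrinking the die from 150mm^2 to 100mm^2 yields roughly 1.5–1.6x as many chips per wafer.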
So node shrinks bring more money and smaller, faster chips – all while using less power than before… it’s just a win all round. Simple, right?
Well, it used to be. As the limits of physics are pushed ever further, each new, smaller “node” becomes more and more complex to achieve – so costs continue to rise. Intel currently leads the way with its 22nm (nanometre) process node, with TSMC (Taiwan Semiconductor Manufacturing Co.) not too far behind on 28nm. Note that comparing nodes is a bit more complex than simply comparing the numbers, but for the purposes of this article it will do.
TSMC (the company that makes graphics chips for AMD and Nvidia) has in fact very recently moved on to 20nm, with two fabs producing chips (said to be Apple’s A8) on its new 20-SoC process. Production isn’t at full tilt yet – that will take another six months or so. Intel will soon leap-frog TSMC again by moving to full 14nm production, although it is having some difficulties getting there. Intel does have another advantage, however – FinFETs – more on which later.
OK so, 20nm graphics?
This article is about graphics though and the two big names in graphics are of course AMD Radeon and Nvidia GeForce. The following table shows the relevant information regarding graphics chip node progression.
| AMD Series (Radeon) | Node | Year | Nvidia Series (GeForce) | Node | Year |
| --- | --- | --- | --- | --- | --- |
There has been a major slow-down in the move to smaller nodes ever since 40nm first appeared in 2009. The table doesn’t tell the entire story, however, as before 40nm there were half-steps (called half-nodes) between each smaller node – that is to say, instead of a full shrink (say from 80nm to 55nm), there would be a “half shrink” to 65nm first. There should have been a 32nm half-node at TSMC in 2010, but it was cancelled. These half-nodes for graphics effectively ended at 40nm, and that’s a big part of why it now feels like forever between each successive node.
So we’re supposed to be getting 20nm graphics cards next, but the evidence against that happening continues to mount.
20nm doesn’t play fair.
The first indication that something was different this time appeared in this March 2012 article over at ExtremeTech – Nvidia deeply unhappy with TSMC, claims 20nm essentially worthless.
The important slides:
As you can see by the dashed blue line, the price per wafer increases considerably at 20nm compared to 28nm. The #2 pure-play foundry, GlobalFoundries, has also stated that 20nm carries a cost penalty – though they claim it is a “one-time blip” which will fix itself at 10nm.
Not only does the price of each wafer increase – the cost benefits are also barely realized over time. That is not normal for a new node: initially, costs are always higher than on the previous node because of the new technology, but as the node matures, the smaller chips (meaning more chips per wafer) should recoup the initial costs and more.
Nvidia’s problem was that 20nm costs had risen to the point where that recovery would barely be seen: the cost per transistor at 20nm only just drops below the cost at 28nm. 20nm is not worth it for Nvidia – and if it’s not worth it for Nvidia, it’s highly unlikely to be worth it for AMD.
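To see why a higher wafer price can cancel out a density gain, here’s a back-of-the-envelope sketch. The wafer prices below are purely illustrative numbers, not TSMC’s actual figures – the point is the ratio: if a wafer costs ~1.8x more while holding ~1.9x the transistors, the cost per transistor only barely drops, which is exactly Nvidia’s complaint:

```python
# Illustrative, made-up wafer prices -- only the ratio matters here.
WAFER_COST_28NM = 5000.0   # hypothetical $ per 28nm wafer
WAFER_COST_20NM = 9000.0   # hypothetical $ per 20nm wafer (much pricier)
DENSITY_GAIN = 1.9         # TSMC's quoted 20nm density gain vs 28nm

def relative_cost_per_transistor():
    """Cost per transistor at 20nm relative to 28nm (1.0 = no change)."""
    # A 20nm wafer holds DENSITY_GAIN times more transistors,
    # but also costs more -- the two effects nearly cancel.
    return (WAFER_COST_20NM / DENSITY_GAIN) / WAFER_COST_28NM

print(round(relative_cost_per_transistor(), 3))  # ~0.947: barely below 1.0
```

Under these assumed numbers the per-transistor cost falls by only about 5%, instead of the near-halving a mature full-node shrink used to deliver.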
Pricing is no good, but what about performance?
One month after the ExtremeTech article was released, in April 2012, an EE Times article reported that TSMC would only be offering one process at 20-nm. This was well down from their four process offerings at 28nm – three low-power and one high-performance.
TSMC initially planned to offer two 20-nm processes, presumably a high performance process and a low-power process.
But, after some development, TSMC determined that there was not a noticeable performance difference between the two 20-nm processes.
Each node can be further “tweaked” for high performance or low power, so you can have 28nm HP (High Performance – for things like graphics cards) or 28nm LP (Low Power – for things like phones) and so on. So which one did TSMC decide on? Well, that was never really in doubt once the name leaked – 20-SoC. SoC stands for “system on chip”, i.e. the kind of chip you find in smartphones.
20-SoC is not optimized for high-performance graphics chips – and although AMD and Nvidia are both big customers of TSMC’s, the industry has rapidly moved on to ultra-mobile products, where low power matters much more than ultimate performance. More importantly, Apple is an awful lot bigger and has a lot more money than AMD and Nvidia combined.
Let’s see what TSMC says about their 20nm performance:
TSMC’s 20nm process technology can provide 30 percent higher speed, 1.9 times the density, or 25 percent less power than its 28nm technology. TSMC 20nm technology is the manufacturing process behind a wide array of applications that run the gamut from tablets and smartphones to desktops and servers.
I’m sure you noticed the lack of any mention of graphics – but the important part is the performance. TSMC claims 30% higher speed, but that’s compared to its low-power 28nm process! Compared to its high-performance process, it would barely be 10% faster, if that. I believe TSMC’s 28nm HP would likely be faster for graphics cards than its 20nm-SoC.
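Incidentally, the 1.9x density figure lines up almost exactly with ideal geometric scaling: shrinking linear feature sizes from 28nm to 20nm would ideally give (28/20)^2 ≈ 1.96x the density, so TSMC’s quoted 1.9x is close to the theoretical best case – it’s the speed claim, not the density claim, that needs the asterisk:

```python
# Ideal area scaling from a linear shrink: density scales with the
# square of the ratio of the feature sizes.
ideal_density_gain = (28 / 20) ** 2
print(round(ideal_density_gain, 2))  # 1.96, close to TSMC's quoted 1.9x
```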
So TSMC will have only one 20nm process, called 20-SoC, and it will be tweaked specifically for ultra-mobile products. It took a while for this to sink in, and many simply didn’t believe it. Every new node brings new graphics chips, simple as that – so surely something would be done to make 20nm viable for graphics?
Most tech analysts continue to believe that we will see 20nm graphics cards. The evidence against it keeps mounting, however.
GlobalFoundries says no 20nm HP.
A recent article at The Register on AMD’s Kaveri – which for a long time was also assumed to be a 20nm or 22nm chip – unearthed the following gems:
There were reasons to go with 28nm rather than 22nm, Macri told us, that were discovered during the design process.
“What we found was, with the CPU with planar transistors, when we went from 28 to 22, we actually started to slow down.”
“So what we saw was the frequency just fall off the cliff,” he said. “This is why it’s so important to get to FinFET.“
Check out the first few seconds of this video:
You can see that they have no high-performance 20nm planned!
- High performance products likely to skip 20nm.
It’s important to point out one major difference: AMD’s “Kaveri” chip is made at GlobalFoundries in Germany, not at TSMC in Taiwan, where AMD’s discrete graphics chips are made. However, we’re still talking about physics here, and physics should remain constant whether in Taiwan or in Germany – if not, we have a lot more to worry about than 20nm graphics cards.
The point is, it is very likely that TSMC and GlobalFoundries ran into the same high-performance problem at 20nm.
Remember, Intel is at 22nm already – and it used FinFETs to get there. Both TSMC and GlobalFoundries decided to go to 20nm without FinFETs. A year or so back, TSMC and GlobalFoundries started talking about the 16nm-with-FinFETs technology that would appear soon after 20nm. Neither of these is a “true” 16nm – that is to say, the transistors haven’t been shrunk by anywhere near a full node – 16nm is being used more as a marketing label (in fact, GlobalFoundries is even naming theirs 14nm).
It seems very likely that all of them ran into the same high-performance wall around 22nm, and Intel got around it by using FinFETs.
Remember back in April 2012, TSMC said:
TSMC determined that there was not a noticeable performance difference between the two 20-nm processes.
What they likely meant was “we couldn’t get high enough performance out of our high-performance node”. Now, almost 18 months later, we hear from GlobalFoundries that it also couldn’t hit its high-performance targets and needed FinFETs to get there!
That is what I call a hill of evidence. It by no means ends there, as you’ll see over the page.