Can AMD keep the upper hand over Nvidia in the graphics chip war?

The graphics chip war always has a moving front. Advanced Micro Devices has the upper hand now over Nvidia, but it may not be able to exploit that advantage to the fullest.

AMD fired a great volley in September as it launched its Radeon HD 5000 series of graphics chips, well ahead of Nvidia, which suffered considerable delays in launching its new generation. But it turns out that AMD didn’t have the greatest help from its supplier, Taiwan Semiconductor Manufacturing Co., which has had trouble making 40-nanometer chips for the past year.

A 40-nm chip has features, such as transistor gates, that measure roughly 40 nanometers, and it can be made more cheaply than the same design in an older 65-nanometer manufacturing process. A 40-nm chip also consumes less power and can pack more transistors into the same size chip than an older-generation chip. Both Intel and IBM have already managed this kind of next-generation manufacturing in their own factories.
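
To put rough numbers on that, here’s a back-of-the-envelope sketch. It assumes an ideal linear shrink from 65 nm to 40 nm, which real processes only approximate, so treat the figures as illustrative:

```python
# Back-of-the-envelope scaling for a 65-nm to 40-nm process shrink.
# Assumes an ideal linear shrink; real processes only approximate this.

old_node_nm = 65
new_node_nm = 40

# Feature dimensions shrink linearly, so area shrinks with the square.
area_ratio = (new_node_nm / old_node_nm) ** 2

print(f"Same design at 40 nm occupies about {area_ratio:.2f}x the area")  # ~0.38x
print(f"Same die area fits about {1 / area_ratio:.1f}x the transistors")  # ~2.6x
```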

Rick Bergman, senior vice president and general manager of AMD’s products group, said at today’s analyst meetings that yields at TSMC on 40-nanometer parts have been lower than expected. (Yield is the number of good chips on a silicon wafer compared to the total number of chips on the wafer; a 90-percent yield means that 90 out of every 100 chips on a wafer are usable. The wafer is later sliced into individual chips.) He said supplies of 40-nm chips have been getting better week by week. But it was frustrating that AMD couldn’t supply enough Radeon HD 5000 series graphics chips to satisfy customer demand during the current quarter, Bergman said.
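
As a minimal sketch of that arithmetic (the dies-per-wafer count below is hypothetical, purely for illustration):

```python
# Minimal sketch of the yield arithmetic described above.
# The dies-per-wafer figure is hypothetical, for illustration only.

chips_per_wafer = 400   # hypothetical total dies on one wafer
yield_rate = 0.90       # a 90-percent yield

good_chips = round(chips_per_wafer * yield_rate)
print(f"Usable chips per wafer: {good_chips} of {chips_per_wafer}")

# When yields come in lower than planned, supply falls off directly:
for y in (0.90, 0.60, 0.40):
    print(f"At {y:.0%} yield: {round(chips_per_wafer * y)} good chips per wafer")
```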

Part of the problem is that demand for consumer notebook computers is far above what was anticipated earlier this year, when TSMC’s manufacturing capacity plans were put in place. But the yield problem has exacerbated a shortage of graphics chips this season. The high demand for PCs was also a rising tide that lifted all boats, helping Nvidia report strong earnings even though it didn’t have the fastest graphics chips on the market.

Nvidia showed off a 40-nm prototype of its code-named Fermi graphics chip in October, but it hasn’t said when it will ship that chip. Analysts don’t expect it to show up until sometime in the first quarter. That’s far behind AMD, but the consequences of being far behind aren’t as dire as they might be, given the combination of AMD’s 40-nm shortage and the recovering PC market.

Bergman said AMD will keep the pressure on Nvidia, launching new versions of its 40-nm graphics chips to cover the whole market spectrum, from low-end to high-end PCs. At some point, if Nvidia’s Fermi isn’t ready, that will start to be painful for Nvidia.

Fermi likely isn’t ready because Nvidia loaded it up with features that allow it to do non-graphics (GPU Compute) work such as scientific computing. Hence, Nvidia’s new graphics chip is bigger than AMD’s new one. And in chip making, bigger is worse. Bigger chips take more material, fewer of them fit on a wafer, and because any given manufacturing defect is more likely to land inside a larger die, yields are harder to achieve. Thus, Nvidia’s Fermi chips will be more expensive and harder to make than AMD’s new chips. Nvidia also uses TSMC to fabricate its chips, so Nvidia has to hope that TSMC becomes a lot more competent at making 40-nm chips in the coming months.
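
One way to see the size penalty is the classic Poisson yield model, in which the fraction of good dies falls exponentially with die area. This is a minimal sketch with made-up numbers; the defect density and die sizes are hypothetical stand-ins, not actual TSMC or product figures:

```python
import math

# Poisson yield model: yield = exp(-defect_density * die_area).
# Defect density and die areas are hypothetical, chosen only to show
# the direction of the effect, not to match any real chip or process.

defects_per_cm2 = 0.5                            # hypothetical defect density

dies = {"smaller die": 3.3, "larger die": 5.5}   # hypothetical areas in cm^2

for label, area_cm2 in dies.items():
    estimated_yield = math.exp(-defects_per_cm2 * area_cm2)
    print(f"{label}: {area_cm2} cm^2 -> about {estimated_yield:.0%} yield")
```

At these made-up figures, the larger die yields roughly a third as many good chips per attempt, on top of fewer large dies fitting on each wafer in the first place.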

As you can see, Nvidia is making a huge bet. It is making its own graphics chips less cost-effective than AMD’s because it believes that new non-graphics applications will emerge to exploit the so-called GPU Compute functions. Nvidia has to hope that the GPU Compute functions are not a boat anchor that sinks its graphics business. Jen-Hsun Huang, chief executive of Nvidia, has repeatedly said that GPU Compute is going to cause a revolution, shifting much of the processing load from the microprocessor to the graphics chip and enabling lots of new applications that weren’t possible before.

AMD has made an equally big bet. In 2006, it bought graphics chip maker ATI Technologies for $5.4 billion. It has been working on hybrid chips, with the product family name Fusion, that combine graphics and microprocessors on a single chip. AMD plans to launch those chips in 2011. That’s AMD’s answer to Nvidia’s GPU Compute, and it’s also AMD’s gamble that it can leapfrog the standalone microprocessors from Intel and the standalone graphics chips from Nvidia.

Bergman said that AMD will continue to make standalone graphics chips for years to come. But the Fusion chips are expected to be targeted at midrange and low-end notebook computers as well as netbooks, which are smaller than laptops and aimed at web surfing. Because they combine a microprocessor and graphics on one piece of silicon, those chips are going to be bigger than Nvidia’s standalone graphics chips and so will likely be harder to produce. So the current competitive situation could be flip-flopped, with Nvidia winding up with smaller chips than AMD. AMD is waiting until its manufacturing partners can produce 32-nm chips before it introduces any hybrid chips.

In the meantime, Intel is launching its Larrabee graphics chip next year. Larrabee is itself a kind of hybrid of microprocessor and graphics technology, but very much unlike what AMD is planning: it packs many small x86 microprocessor cores onto a single chip, each capable of doing graphics calculations. Intel’s own code-named Sandy Bridge platform will likely go head to head with AMD’s Fusion chips.

If Fusion is wildly successful and ushers in the era of “heterogeneous” computing (hybrid GPU-CPU chips) as AMD hopes, then Nvidia and Intel will feel the heat. But that’s probably as audacious a gamble as the one Nvidia is making. Nobody is going to completely eliminate the competition, given that customers don’t want to lock themselves into sole suppliers. Chances are, Intel, AMD and Nvidia will be slugging it out for years to come.

Dean Takahashi
