Friday, August 25, 2006

Adaptec, QLogic, Mellanox…beware



CPU + I/O = ???

Sun Microsystems detailed its Niagara-2 chip for the first time at Hot Chips on Tuesday. See www.hotchips.org. I missed the conference but talked with Sun chief architect Rick Hetherington this week. Look for my full report at www.eet.com on Monday morning. (Tip: if you don’t see it, search for Hetherington and it should pop up. Free registration may be required.)

Anyway, Hetherington described the chip, with its two 10Gbit Ethernet ports and a native 8x PCI Express interface, as a 65nm server-on-a-chip. The company has working silicon, but it will have to re-spin the part before it is ready to go into Sun’s systems. For more on Niagara, which Sun calls T1, see www.sun.com/processors/throughput/

Interconnect guru Mike Krause of Hewlett-Packard called this the first step toward the “universal PHY.” He foresees chips with on-board programmable serdes that can talk Ethernet, Express, SATA, Fibre Channel--whatever--as needed. That could fundamentally change computer design and the industry, rolling up many third-party board makers in a whole new phase of consolidation, Krause thinks.
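
A rough sketch may help make the idea concrete. The C fragment below imagines boot firmware retargeting the lanes of such a universal PHY to different protocols. Everything in it -- the structures, the protocol names and rates, the serdes_configure_lane() call -- is invented for illustration; no shipping part exposes this exact interface.

    /* Hypothetical sketch of a "universal PHY": firmware retargets each
     * serdes lane to a different wire protocol at boot. All names and
     * register-level behavior are invented for illustration only. */
    #include <stdint.h>
    #include <stdio.h>

    enum serdes_protocol {                 /* protocols a lane might signal */
        SERDES_ETHERNET_10G,
        SERDES_PCIE_GEN1,
        SERDES_SATA_3G,
        SERDES_FIBRE_CHANNEL_4G
    };

    struct serdes_lane_cfg {               /* per-lane configuration */
        enum serdes_protocol proto;        /* protocol to signal */
        uint32_t rate_mbaud;               /* line rate in Mbaud */
        int lane;                          /* physical lane index */
    };

    /* Stand-in for the firmware call that would program the lane's PLL,
     * equalization and framing logic for the chosen protocol. */
    static int serdes_configure_lane(const struct serdes_lane_cfg *cfg)
    {
        printf("lane %d -> protocol %d at %u Mbaud\n",
               cfg->lane, (int)cfg->proto, (unsigned)cfg->rate_mbaud);
        return 0;                          /* real code would poke registers */
    }

    int main(void)
    {
        /* The same physical lanes, assigned to different fabrics at boot. */
        struct serdes_lane_cfg lanes[] = {
            { SERDES_ETHERNET_10G,    10313, 0 },
            { SERDES_PCIE_GEN1,        2500, 1 },
            { SERDES_SATA_3G,          3000, 2 },
            { SERDES_FIBRE_CHANNEL_4G, 4250, 3 },
        };
        for (unsigned i = 0; i < sizeof lanes / sizeof lanes[0]; i++)
            serdes_configure_lane(&lanes[i]);
        return 0;
    }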

AMD chief architect Chuck Moore told me he doesn’t think rolling I/O into the CPU is such a good idea. But I’m not sure he really understood Krause’s vision. The reality is that AMD has already rolled the north bridge into its CPUs, and its latest HyperTransport announcements indicate it is well on the way to sucking in much of the south bridge I/O and a whole lot more. See www.hypertransport.org

Linley Gwennap of market watcher The Linley Group -- see www.linleygroup.com/ -- reminded me the embedded world is already a step ahead of computerdom. (So what’s new?) Dan Dobberpuhl’s PA Semi has programmable serdes on its chips today. See www.pasemi.org

In the long term, I think Krause is right. Someday the computer industry may not have an Adaptec, QLogic or Mellanox. Those companies’ products will be features on 32-core Intel and AMD CPUs.--rbm

2 comments:

Larry Boucher said...

It is very likely that the hardware presently supplied by Adaptec, QLogic, etc. will be integrated into the south bridge at some point in the future. However, since a major part of the value added by these companies resides in the software required to provide a functional interface, it will be interesting to see how it all unfolds. Intel has been working on graphics for quite a while, but nVidia seems to be holding its own. Chandler has been trying to get into the storage business for a long time without a lot of success, and Hillsboro's recent iSCSI attempts seem to have come to an end. I/O subsystem software is a field far enough removed from the chip companies' center of competence that I suspect the I/O companies will be around for a while.

Mike said...

Try another angle on the future:

- The processor provider delivers the low-level firmware / software interfaces that let anyone program its ports to signal any standard protocol. The world is not as dependent upon the hardware provider as some might think, which leaves the provider free to spend its cycles on core competency and performance / functional value add. (A rough sketch of what such a port-programming interface might look like appears after this list.)

- The OS provider -- Linux or Windows, or perhaps a Cisco for networking or an HP / IBM / EMC for storage -- supplies the software that provides the optimal set of services to communicate with their fabrics / targets. After all, all of these technologies use standard wire protocols already in use today, and the IHV is primarily delivering an abstraction between the class driver interface provided by the OS and its hardware, plus, at times, some value add to differentiate a particular vendor's target.

- To add to the mix, one can see co-processor or acceleration technology becoming more integrated into the platform, and likely onto the coherency domain. There are already such examples with HT technology. Hardware vendors will need to spend more of their resources either on this new class of peer cooperation or on subsuming the functionality into the processor.

- Adding more to the mix, just examine virtualization technology and its impact. Any time state is shared between non-coherent components, virtualization and availability become significantly harder to execute. Doing I/O via cores is far simpler in the end and reduces the cost of developing, validating, and delivering the customer-visible functionality.

- Add in the basic reality that the number of I/O types required to deliver a client or a server or a storage device is being consolidated, and that the number of viable vendors per type is shrinking day by day, and most will agree that the status quo for I/O solutions cannot remain the same. If this new take on the future comes true (and it won't happen overnight -- say 2010-2012, when the process technology is more than sufficient to implement this on a volume basis as opposed to a niche server offering), then the fundamentals of how solutions are developed, and the associated control points, become a major concern right now for many architects and business leads within the industry. The question is whether people can separate the strategic from the tactical enough to understand that these changes are going to happen; the extent and rate of adoption is really what the debate boils down to in the end. History shows that when something becomes volume, it gets integrated into the processor. Calculating where that intercept becomes a reality is the tricky part.
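
Since the first two bullets describe a concrete split of responsibility, here is a small C sketch of the layering Mike outlines: a processor vendor's port-binding call underneath, and an OS-supplied class driver on top. All names, structures and calls here are hypothetical, chosen only to show the shape of the interfaces, not any real vendor's API.

    /* Hypothetical two-layer sketch: (1) a processor vendor's call that binds
     * an on-chip port to a standard protocol, (2) an OS class driver built on
     * top of it. Every name is invented for illustration only. */
    #include <stdint.h>
    #include <stdio.h>

    /* --- Layer 1: interface a processor vendor might publish --- */
    enum port_protocol { PORT_ETHERNET, PORT_SAS, PORT_FC };

    static int cpu_port_bind(int port, enum port_protocol proto)
    {
        /* A real implementation would load protocol firmware into the port. */
        printf("port %d bound to protocol %d\n", port, (int)proto);
        return 0;
    }

    /* --- Layer 2: OS-side storage class driver using that interface --- */
    struct block_device {
        int port;                          /* on-CPU port carrying the traffic */
        int (*submit_io)(struct block_device *dev, uint64_t lba, void *buf);
    };

    static int generic_submit_io(struct block_device *dev, uint64_t lba,
                                 void *buf)
    {
        (void)buf;                         /* payload unused in this sketch */
        printf("I/O to LBA %llu via port %d\n",
               (unsigned long long)lba, dev->port);
        return 0;
    }

    int main(void)
    {
        /* The OS, not a third-party adapter vendor, wires a storage class
         * driver to an on-CPU port bound to a standard wire protocol. */
        cpu_port_bind(2, PORT_SAS);
        struct block_device disk = { .port = 2, .submit_io = generic_submit_io };
        char buf[512] = { 0 };
        disk.submit_io(&disk, 0, buf);
        return 0;
    }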

 
Labels: interconnects