Thursday, November 30, 2006

Super interconnects, Part 2

Although it lost its bid to build a new supercomputer under the DARPA HPCS program, Sun Microsystems will continue developing the novel Proximity interconnect that was one part of its proposal. Jim Mitchell, who oversees the Sun project, said systems using Proximity may appear before competitors get their HPCS systems out in 2010.

Proximity uses capacitive coupling to link chips arranged in a checkerboard-like module, with chip-to-chip latencies as low as 2 nanoseconds. The big hurdle right now is finding an accurate yet low-cost way to align the capacitive pads on the chips so the interconnect can hit its promised speed, Mitchell said.
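To see why alignment matters so much, here is a toy parallel-plate estimate (my own illustrative numbers, not Sun's) of how lateral misalignment eats into the coupling capacitance between facing pads:

# Toy model (illustrative numbers, not Sun's): parallel-plate estimate of how
# pad misalignment cuts the coupling capacitance in a Proximity-style link.
EPS_0 = 8.854e-12  # permittivity of free space, F/m

def pad_capacitance(width_um, gap_um, offset_um, eps_r=3.9):
    """Capacitance of two square pads of side width_um, separated by gap_um,
    with a lateral offset of offset_um (simple overlap-area model)."""
    overlap_um2 = max(width_um - offset_um, 0.0) * width_um  # overlapping area
    return EPS_0 * eps_r * (overlap_um2 * 1e-12) / (gap_um * 1e-6)  # farads

aligned = pad_capacitance(20, 1, 0)   # perfectly aligned 20-micron pads
skewed = pad_capacitance(20, 1, 5)    # same pads, 5 microns of misalignment
print(f"aligned: {aligned * 1e15:.1f} fF   skewed: {skewed * 1e15:.1f} fF")

Even a few microns of offset trims the overlap area, and with it the signal the receiving pad has to work with.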

Tuesday, November 28, 2006

Super interconnects

IBM told me yesterday that the Power7 processor to be used in its 2010 supercomputer, commissioned under the DARPA HPCS program, will integrate the interconnect on the CPU, but the company won't yet say which interconnect.

Separately, Jack Dongarra of the University of Tennessee tells me the new benchmarks for supercomputer productivity can be used to isolate and measure the impact of good interconnects. He looked at big systems that used the same x86 CPUs but different interconnects and found Cray's SeaStar delivered far more system bandwidth than systems using Gbit Ethernet as the interconnect.
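As a rough sketch of how that kind of comparison might be scripted (the CSV file and column names here are my inventions; the real numbers live on the HPC Challenge results site, which includes interconnect-sensitive metrics such as RandomRing bandwidth):

# Hypothetical sketch: group published benchmark results by CPU type and
# compare what each interconnect delivers. File and field names are assumed.
import csv
from collections import defaultdict

by_cpu = defaultdict(list)
with open("hpcc_results.csv") as f:          # assumed export of the results table
    for row in csv.DictReader(f):
        by_cpu[row["cpu"]].append(row)

# For each CPU family, list what each interconnect delivers on RandomRing.
for cpu, systems in sorted(by_cpu.items()):
    print(cpu)
    for s in systems:
        print(f'  {s["interconnect"]:>14}: {float(s["randomring_gbps"]):.3f} GB/s')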

No one has measured the relative pluses and minuses of Infiniband, Myrinet and Quadrics yet. But there are benchmarks for some 137 systems online now, so the analysis is waiting to be done. Please do it, and come back to me here with a posting about the results! –rbm

Monday, November 27, 2006

Parallel I/O powers hard drive concept

In a sort of back-to-the-future proposal, engineers at ECC Technologies are shopping a new concept for hard disk drives based on parallel I/O that could be a substitute for RAID arrays.

So-called parallel-transfer hard disk drives (pHDDs) would contain multiple single-platter head-disk assemblies with one or two read-write heads per platter. The assemblies would operate in parallel, speeding up I/O and eliminating the need for separate RAID arrays. The Web site TechWorld has posted a summary of a white paper on the concept. The full paper is available to those who go through a free registration process.
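To make the appeal concrete, here is a back-of-the-envelope sketch (my numbers, not ECC's) of how sustained throughput could scale if a pHDD stripes one transfer across several assemblies at once:

# Back-of-the-envelope sketch (illustrative numbers, not ECC's): sustained
# throughput if a pHDD stripes one transfer across several head-disk assemblies.
def phdd_rate_mb_s(assemblies, heads_per_assembly=1, mb_s_per_head=60):
    return assemblies * heads_per_assembly * mb_s_per_head

for n in (1, 2, 4):
    print(f"{n} assembly(ies): ~{phdd_rate_mb_s(n)} MB/s sustained")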

ECC Technologies' agenda here is that it owns U.S. Patent Number 5,754,563, which it claims is key to making such drives. It even makes an open solicitation for drive giant Seagate to buy the company to get a lock on the concept. I'd love to hear any comment from HDD engineers on this product idea, so please post away. –rbm

Wednesday, November 22, 2006

A peek into peta projects


Cray Inc. and IBM Corp. will split nearly half a billion dollars as part of a Defense Advanced Research Projects Agency contract announced Tuesday afternoon (Nov. 21) to fund development of petaflop-class supercomputers before the end of 2010. Sun Microsystems was dropped from the High Productivity Computing Systems (HPCS) program, which aims to foster work on computers that are more powerful and easier to program than any in current operation.

It's unclear what impact the loss may have on Sun, which has struggled since 2000 to be profitable and competed aggressively for the contract. Specifically, I wonder what will happen to the novel capacitive-coupling chip-to-chip interconnect called Proximity, as well as the high-end parallel programming language called Fortress, that Sun proposed for HPCS.

I am also hoping to get fresh details about IBM's proposal, which it has kept hush-hush to date. A DARPA release said only that it is based on a Power7 microprocessor, IBM's AIX operating system and its General Parallel File System.

Cray has been the most candid of the three about the Cascade system it proposed to DARPA planners. Cascade is essentially a cluster-in-a-box that will deliver a mix of scalar, FPGA and hybrid vector/massively multithreaded processor boards based on future versions of Cray's current separate product lines, all integrated into a single hybrid system. --rbm

Thursday, November 16, 2006

White paper: HT beats PCIe in latency

The HyperTransport Consortium recently posted a white paper documenting the latency advantage of HT over PCI Express for communications systems. The paper "presents a couple of usage scenarios involving read accesses and derives the latency performance of each," according to author Brian Holden who chairs the technical working group of the HT Consortium.


The paper measures latency for short packet reads on HT at 147-168 nanoseconds, compared with 240-273 ns for PCIe. For long packet reads, HT weighs in at 576-630 ns compared with 885-1,008 ns for PCIe.
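Running the arithmetic on those figures, HT's edge works out to roughly 35 to 40 percent lower latency in both cases:

# Quick arithmetic on the white paper's figures (ranges in nanoseconds).
cases = {
    "short packet reads": ((147, 168), (240, 273)),   # (HT range, PCIe range)
    "long packet reads": ((576, 630), (885, 1008)),
}
for name, (ht, pcie) in cases.items():
    low = 1 - ht[0] / pcie[0]
    high = 1 - ht[1] / pcie[1]
    print(f"{name}: HT latency is {min(low, high):.0%}-{max(low, high):.0%} lower than PCIe")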

The paper may be timed to exploit the weakness of PCIe in comms now that the related Advanced Switching Interconnect has gone belly up (see my posting of Aug. 23-24). ASI was supposed to bring comms-friendly features to PCIe, such as support for multiple host CPUs.

Despite HT's technical edge in latency, my sense is that big comms OEMs like Cisco are moving away from parallel HT and toward PCIe in their designs because PCIe is more broadly supported. (Intel's PC volumes far outstrip those of HT backer AMD.) PCIe performance may not be the very best, but it is good enough.

I'd love to hear any other opinions from the comms world, so post away! –rbm

Wednesday, November 15, 2006

Do-it-yourself home automation


Toss another protocol stack on the smoldering pyre of home networks. Startup Threshold Corp. (Petaluma, Calif.) announced this week that it is making its homegrown One-Net wireless home automation software available for free licensing.

The small company decided it needed more range, battery life and security than Insteon, Zigbee and others offered, and it didn't want to pay the few thousand dollars some of those technologies require for a developer's kit or alliance membership. So it did its own thing, which can ride on top of any of seven different 868- or 915-MHz-band transceivers from Analog Devices, Texas Instruments, Semtech, RF Monolithics, Micrel and Integration Associates.

The resulting control net can deliver 38.4 to 230.4 Kbits/second over 100 meters indoors for as little as $2-3 a node, says Threshold CEO James Martin. The startup will ship a Wi-Fi access point and about half a dozen control peripherals for One-Net before June.

My guess is there will be very few people who choose to ride this option given its low profile and small, one-company backing…but you never know. A lot of people love to roll their own! –rbm

Tuesday, November 14, 2006

Infiniband flying high in supers…

The latest ranking of the Top 500 supercomputers, out yesterday, shows Infiniband is gaining ground on Gigabit Ethernet as a clustering interconnect in the world's most powerful systems.

Gbit Ethernet is still the most widely used clustering interconnect, appearing in 211 systems on the list, but that's down from 256 supercomputers just six months ago. Meanwhile, InfiniBand was used in 78 systems, more than double the 36 systems that used the link six months ago. The proprietary Myrinet interconnect from Myricom came in a narrow second with 79 systems, though that was down from 87 systems six months ago.
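A quick bit of share math on those counts shows how fast the picture is shifting:

# Share math on the counts above (previous list vs. the new one, out of 500).
counts = {
    "Gbit Ethernet": (256, 211),
    "Myrinet": (87, 79),
    "InfiniBand": (36, 78),
}
for link, (six_months_ago, now) in counts.items():
    print(f"{link:>14}: {six_months_ago / 5:.0f}% -> {now / 5:.0f}% of the list")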

Erich Strohmaier, a researcher and one of the authors of the list, said he expects IB to continue to make gains because Gbit Ethernet cannot keep up with the performance needs of the latest systems and processors. Click here for more details.

…but just another bird in the flock

The outlook for Infiniband is more mixed beyond the rarified world of supercomputers that demand top performance. On Monday, Mellanox Technologies, the last remaining IB merchant chip supplier, rolled out a hybrid architecture supporting IB and Ethernet. ConnectX even has limited support in software for Fibre Channel. For the full story click here.

The message is that the mainstream data center will use multiple networks. Convergence will come in silicon embracing those nets.

Myricom has already started embracing Ethernet. Following a similar trend, Fibre Channel leader QLogic acquired Infiniband switch maker SilverStorm Technologies in October, having bought Infiniband card maker PathScale earlier this year.

In this mixed world, Infiniband has a future as the highest performance interconnect at the high end and one of the many nets with a role to play in the mainstream.

A couple of interesting data points:

--The Infiniband chip business became profitable for Mellanox last year for the first time since the company was founded in 1999, with profits of $3 million on sales of $42 million in 2005, according to S-1 documents filed with the SEC in September.

--To keep its edge, Mellanox will demonstrate 40 Gbit/s Infiniband links with one-microsecond latency over the ConnectX chips early next year. It is also developing 10 Gbit/s serdes blocks designed to enable Infiniband links at up to 120 Gbits/s in the more distant future. --rbm
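The 120 Gbit/s target is straightforward lane arithmetic, since InfiniBand's widest defined link is 12x:

# Lane arithmetic (12x is InfiniBand's widest defined link width).
lanes, serdes_gbit_s = 12, 10
print(f"{lanes} lanes x {serdes_gbit_s} Gbit/s = {lanes * serdes_gbit_s} Gbit/s raw signaling")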

Sunday, November 12, 2006

Startup demos 10Gbit/s chip-to-chip innovation


Startup Banpil Photonics Inc. said today it has demonstrated a way to send 10 Gbit/second signals 1.5 meters across a standard PC board without using power-hogging pre-emphasis or equalization to amplify the signal. Its secret sauce appears to be a technique for digging a passage through the board and sending the signals as radio waves.

A February patent application by Banpil chief executive Achyut Dutta describes a method for creating an open trench or slot as part of a single- or multi-layer dielectric system to reduce microwave signal loss.

A "large portion of the signal (electromagnetic wave) is allowed to pass through the air or dielectric material having the dielectric loss less than the base dielectric material itself," the application states. "The trench or back-slot of the dielectric system can be filled with air or kept in vacuum. Alternatively the trench…can be filled with the liquid crystal material, which can tune the dielectric constant and loss," said the patent application.

Banpil has also cobbled together some proprietary tools: a unique RF simulation, layout and modeling tool chain based on two tools from unnamed EDA vendors and a third designed in-house.

"To let others do these designs, we will need to get a Cadence or Mentor to make these tools," said Dutta, who has started talks with two unnamed companies about creating such tools. --rbm

Thursday, November 09, 2006

Voices from the data center, Part 1

Thanks to Mike Krause, an I/O specialist in Hewlett-Packard's x86 server group, for his thoughts about data center convergence. Mike was one of the early proponents of iWarp, a version of Ethernet accelerated with Infiniband-like constructs such as remote direct memory access. Here are some highlights of what he shared with me this week:

On being agnostic: "Convergence is the buzz word for many people, but buzz can take years to translate into real production environments, and customers are a skittish crowd in the enterprise. Even Cisco has taken on the message of being interconnect independent, a message I've pushed for HP for many years now--build what the customer wants and needs rather than what the vendor wants it to be."

Infiniband as the converged fabric: "The IB community [is] in a bit of flux without a credible major company betting that IB is the converged fabric of choice for the enterprise. Certainly HP and Sun have positioned IB as primarily a High Performance Computing technology for the past few years. Will IBM enter in earnest the IB fray? With companies such as Sun and Intel being anti protocol off-load and long-term integrating Ethernet into processors, the IB vendors have to be worried that lock out can occur at any time."

Infiniband in storage: "There are no announced IB native storage [products] for companies such as EMC, IBM, HP, etc. This represents a significant hole in the [IB] convergence argument."

IP over Infiniband: "Micro-benchmarks yield only 2-3 Gbits/s out of the less than 8Gbits/s max possible. Skip past the marketing hype that IB is a 10 Gbits/s link since that is not the equivalent of a 10 GbE where one refers to raw signaling and application payload."
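Krause's numbers line up with simple link arithmetic: a 4x InfiniBand SDR link signals at 10 Gbit/s raw, 8b/10b coding caps the payload at 8 Gbit/s, and the micro-benchmarks he cites see only a fraction of that for IP traffic:

# Link arithmetic behind the quote: 4x SDR InfiniBand signals at 10 Gbit/s,
# 8b/10b coding caps the data rate at 8 Gbit/s, and IP-over-IB sees 2-3 Gbit/s.
lanes, gbit_s_per_lane = 4, 2.5
raw = lanes * gbit_s_per_lane       # 10 Gbit/s raw signaling
payload_max = raw * 8 / 10          # 8 Gbit/s after 8b/10b overhead
for observed in (2, 3):
    print(f"{observed} Gbit/s observed = {observed / payload_max:.0%} of the {payload_max:.0f} Gbit/s ceiling")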

Fibre Channel "is in a bit of a long-term vision quagmire… it is not clear that the industry will invest to go beyond 8 Gbits/s to the 16 Gbits/s that is on many road maps. The optics vendors may balk at a straight 16 Gbits/s solution. Instead, they may want to de-rate a 10 Gbit/s WDM solution to 8 Gbits/s and then just scale up in multiples of 8."

iSCSI: "…not everything in iSCSI has been fully implemented."

Note: I'd love to hear other leading voices from the OEM community speak out. You can contact me at rbmerrit@cmp.com

Tuesday, November 07, 2006

Home nets play the numbers game


What this industry needs is more unification of home networks and their quality of service techniques. What we are getting is more and more of the numbers game as each camp tries to out-position the others.

Today the phoneline camp announced it has leapfrogged its coax and powerline competitors. The Home Phoneline Networking Alliance upgraded its spec to support data rates up to a combined 320 Mbits/second over two simultaneous channels. That tops coax at about 135 Mbits/s and HomePlug AV at 180 Mbits/s. CopperGate is sampling chips based on HomePNA version 3.1.

All interesting numbers, but the number this sector really needs to see is one—one way to let traffic flow across various home nets. We will have to watch plenty of market brawls before we get to that point. –rbm

Monday, November 06, 2006

Data centers converge in silicon

The real convergence in data center networking will not be on Infiniband or even Ethernet. It will be on silicon.

Thanks to Moore's Law, the many different interconnects that have been vying to be the sole conduit of the data center will someday all live together in hybrid silicon. Voltaire shows us the way today with its announcement that its Infiniband switches will now support 10 Gbit/second Ethernet too, thanks to a new hybrid Infiniband/10GE ASIC. http://www.voltaire.com/

Not long ago, Myricom made a similar move, embracing Ethernet on its proprietary clustering products. And, though it's not at the silicon level, QLogic has been reaching beyond its mainstay Fibre Channel business to embrace Infiniband with its acquisitions of switch maker SilverStorm Technologies in October and card maker PathScale in February.

Remember when Cisco's Andiamo unit came out with its storage switches? Turned out they were not so much based around Ethernet as Fibre Channel, the rooster of the SAN roost. And guess who is the largest customer for the Infiniband chips from Mellanox Technologies? Cisco, thanks to its acquisition of Topspin in 2005.

Seems like the big players in the data center are realizing you have to have it all—Fibre Channel, Infiniband and Ethernet. And ideally, you want to get it all in a chip. –rbm

Thursday, November 02, 2006

Cisco flows along with Interlaken


Not only does Cisco Systems continue to be one of the few big OEMs still developing a lot of ASICs, it also continues to aggressively push use of its own chip-to-chip interconnect on most of them.

Known internally as "Spaui" because it is a mix of SPI-4 and Xaui, the interface will appear on at least 12 of the roughly 15 ASICs in the works at Cisco's storage networking group. Cisco co-developed the interface with Cortina Systems and announced it under the name Interlaken in April 2006. Named after the Swiss town that sits between two lakes, the interface defines links running at up to 20 Gbits/second.
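For a sense of how such an interface scales, here is an illustration only (the per-lane rate is my assumption, not Cisco's or Cortina's spec): an Interlaken-style link stripes data across serdes lanes, with 64b/67b framing taking a small slice off the raw rate.

# Illustration only: the 6.25 Gbaud lane rate is an assumption for the example.
import math

def lanes_needed(payload_gbit_s, gbaud_per_lane):
    # 64b/67b framing means each lane carries 64/67 of its raw baud rate as data
    per_lane = gbaud_per_lane * 64 / 67
    return math.ceil(payload_gbit_s / per_lane)

print(f"20 Gbit/s of payload needs {lanes_needed(20, 6.25)} lanes at 6.25 Gbaud")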

The "Spaui" link is serving Cisco well, according to Tom Edsall, a senior vice president of Cisco's Datacenter Business Unit and chief architect of Cisco's MDS 9000 storage switch.

When last contacted, the Network Processing Forum (now part of the Optical Internetworking Forum) was working on its own SPI upgrade called the Scalable-SPI spec which would be an alternative to Interlaken. "I would have to see a significant advantage to make a change," said Edsall.

When you are cranking out 15 ASICs at a time for your own proprietary systems, you can afford to go your own way. –rbm

 