Untangling the Interconnect

Today’s computing solutions rely on multiple processors working collaboratively on a problem rather than the monolithic processor of just a few years ago. These architectures used to be the realm of supercomputers and high-performance computing (HPC), but they are now becoming mainstream: even laptops have started to ship with four processor cores in a single CPU. The biggest challenge with multiple processors working together is having them communicate effectively with each other. The processor-to-processor communication infrastructure is known as the interconnect.

The most direct approach to having multiple processors talk to each other is to string a cable between every pair of them. This can be very effective with a small number of processors but quickly becomes unwieldy: fully connecting n processors requires n(n − 1)/2 cables. For example, 32 processors would require 496 cables to communicate directly with each other, and increasing the count to 128 processors drives the cable count up to 8,128. The direct-connection solution is obviously not scalable to today’s computing needs.

direct-cable-animation.gif

Directly wiring processors together requires a huge number of cables
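The quadratic growth described above is easy to verify. A minimal sketch (the function name is my own, not from the article):

```python
def direct_cables(n):
    """Cables for a full mesh: one per unordered pair of processors, n*(n-1)/2."""
    return n * (n - 1) // 2

print(direct_cables(32))   # 496
print(direct_cables(128))  # 8128
```

Doubling the processor count roughly quadruples the cabling, which is why direct wiring stops being practical so quickly.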

One method of decreasing the number of cables is to have each processor talk only to a few of its nearest neighbors and let a transmission hop through intermediate processors until it reaches its destination. Maintaining the routing algorithms, and adjusting them in real time in response to failures or overloading, can be tricky to manage. The extra hops can also add unacceptable delays to mission-critical communications.

hypercube.gif

A hypercube architecture can reduce the number of cables by 75%
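To make the hop-based idea concrete, here is a small sketch of hypercube routing, where each of n processors links only to the log₂(n) neighbors whose binary addresses differ in a single bit (the function names are illustrative, not from the article, and the exact cable savings depend on the processor count):

```python
import math

def hypercube_cables(n):
    """Total links in an n-node hypercube (n a power of two): n/2 * log2(n)."""
    d = int(math.log2(n))
    return n * d // 2

def route(src, dst):
    """One shortest path between node addresses: flip differing bits low-to-high."""
    path = [src]
    while src != dst:
        src ^= (src ^ dst) & -(src ^ dst)  # flip the lowest differing address bit
        path.append(src)
    return path

print(hypercube_cables(32))     # 80 links, versus 496 for direct wiring
print(route(0b00000, 0b10110))  # [0, 2, 6, 22] -- three intermediate hops
```

The savings come at the cost of latency: a worst-case transmission crosses log₂(n) links instead of one.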

Using a connection device known as a switch reduces cabling to the bare minimum: one cable per processor. Each processor sends its data to the switch with the destination processor identified in the data, and the switch routes the data to the intended recipient. Switches can struggle to provide high-speed, low-latency communication under heavy loads. A switch is also a single point of failure that can bring down the entire computing solution if it fails.

single-switch.gif

A switch reduces the number of cables, but can be a single point of failure

Adding a redundant switch can help with concerns about overloading and a single point of failure, but a second switch doubles the interconnect hardware, complexity, and cost. Even then, a switch can only approximate a direct connection, and dozens of switch and interconnect technology vendors on the market today are attempting to reach this lofty goal. In their quest to emulate a direct-connect interconnect, switch vendors have thrown ever more complicated technology at the problem. As a result, some current interconnect solutions cost more than the servers they support.

redundant-switch.gif

Doubling the interconnect hardware can improve reliability of switch solutions
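The cabling trade-offs described so far can be compared side by side. A rough sketch (the hypercube count assumes the processor count is a power of two; this comparison is mine, not the article’s):

```python
import math

def direct(n):
    """Full mesh: every pair of processors wired directly."""
    return n * (n - 1) // 2

def hypercube(n):
    """Each node links to its log2(n) single-bit neighbors."""
    return n * int(math.log2(n)) // 2

def switched(n, switches=1):
    """One uplink per processor, per switch."""
    return n * switches

for n in (32, 128):
    print(n, direct(n), hypercube(n), switched(n), switched(n, switches=2))
```

Switched fabrics win decisively on cable count, which is why the industry converged on them despite the latency, overload, and failure concerns above.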

Lightfleet’s Direct Broadcast Optical Interconnect (DBOI) technology is a fresh approach to the interconnect problem. By using light instead of cables or wires, it achieves direct processor-to-processor communication without the complexity imposed by wired solutions. DBOI can scale from a small number of communicating processors to a very large number, because light is non-blocking: optical signals pass seamlessly through one another.

Corowave broadcast

DBOI fundamentally changes the way multiple processors talk to each other. It removes needless interconnect complexities and finally makes the direct-connect paradigm achievable without a massive number of cables or other point-to-point methods. DBOI technology unleashes the promise of parallel processing.