InfiniBand

InfiniBand is a high-speed, low-latency data interconnect technology designed for high-performance computing and data centers, enabling fast data transfer and communication between server nodes and storage devices.

What does InfiniBand mean?

InfiniBand is a high-performance network interconnect technology used in data centers and high-performance computing (HPC) environments, primarily for connecting servers and storage devices. It is a lossless, switched fabric designed specifically for high-bandwidth, low-latency, and reliable data transfer.

InfiniBand link speeds range from 10 Gb/s in early generations to 400 Gb/s per 4X port today, with markedly lower latency than traditional Ethernet. Its architecture is a switched, point-to-point fabric: every link is a dedicated serial connection between two endpoints (host adapters, switches, or storage targets) rather than a shared bus, and traffic between end nodes is forwarded by InfiniBand switches within the fabric.

The InfiniBand fabric is typically deployed as a dedicated, private network, separate from the public internet and other enterprise networks. This isolation, together with credit-based link-level flow control that prevents packets from being dropped under congestion, ensures predictable performance and reduces the likelihood of data loss or corruption.
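
As a concrete illustration of working with an InfiniBand adapter from software, the sketch below uses the Linux libibverbs API (part of rdma-core) to enumerate RDMA devices and report the state, LID, and active width/speed of each device's first port. It assumes a host with an InfiniBand adapter and the verbs user-space libraries installed; it is a minimal example, not a complete diagnostic tool.

```c
/* list_ib_ports.c: enumerate RDMA devices and show basic port attributes.
 * Build (assuming rdma-core headers and libraries are installed):
 *     gcc list_ib_ports.c -o list_ib_ports -libverbs
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devices[i]);
        if (!ctx)
            continue;

        struct ibv_port_attr port;
        /* Port numbers are 1-based in the verbs API; query the first port. */
        if (ibv_query_port(ctx, 1, &port) == 0) {
            printf("%s: state=%s lid=%u active_width=%u active_speed=%u\n",
                   ibv_get_device_name(devices[i]),
                   ibv_port_state_str(port.state),
                   (unsigned)port.lid,
                   (unsigned)port.active_width,
                   (unsigned)port.active_speed);
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devices);
    return 0;
}
```

The active_width and active_speed fields are encoded values defined in the verbs headers (for example, a 4X link width and an EDR-class lane speed); the same information is exposed by command-line tools such as ibstat.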

Applications

InfiniBand’s high performance and reliability make it critical for various applications, including:

  • High-performance computing (HPC): InfiniBand interconnects the compute nodes in HPC clusters to facilitate high-speed communication and data sharing among processors. Its low latency and bandwidth scalability enhance the performance of simulations, data analytics, and machine learning applications (see the MPI sketch after this list).
  • Data centers: InfiniBand serves as a high-performance backbone for data centers, connecting servers to storage arrays and networking equipment. Its low-latency capabilities minimize data transfer delays, improving application response times and overall system efficiency.
  • Cloud computing: InfiniBand is deployed in cloud environments to provide fast and reliable connectivity between virtual machines (VMs) and cloud services. Its high bandwidth enables rapid data transfer between VMs, supporting demanding applications such as cloud-based databases and streaming services.
  • Networking: InfiniBand switches and routers interconnect to form larger fabrics, and gateway devices can bridge InfiniBand to Ethernet, providing low-latency, high-throughput paths for network traffic and improving overall network performance and bandwidth utilization.
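
The HPC use case referenced above typically involves MPI, whose InfiniBand-aware transports (verbs or UCX in Open MPI, MVAPICH2, and similar libraries) carry messages over the fabric without any InfiniBand-specific code in the application. The following ping-pong sketch is a minimal example of that pattern; the message size, iteration count, and host names in the run command are illustrative assumptions.

```c
/* ping_pong.c: minimal MPI ping-pong between ranks 0 and 1.
 * On an InfiniBand cluster, an MPI library built with verbs/UCX
 * support carries these messages over the fabric transparently. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0)
            fprintf(stderr, "Run with at least 2 ranks.\n");
        MPI_Finalize();
        return 1;
    }

    const int iters = 1000;
    char buf[4096] = {0};           /* 4 KiB message, chosen arbitrarily */
    double start = MPI_Wtime();

    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    if (rank == 0) {
        double usec = (MPI_Wtime() - start) * 1e6 / iters;
        printf("Average round-trip time for 4 KiB messages: %.1f us\n", usec);
    }

    MPI_Finalize();
    return 0;
}
```

A typical run on two cluster nodes might look like "mpicc ping_pong.c -o ping_pong" followed by "mpirun -np 2 --host node1,node2 ./ping_pong", where node1 and node2 are placeholder host names.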

History

The InfiniBand Architecture (IBA) was developed in the late 1990s by the InfiniBand Trade Association, a consortium formed in 1999 from the merger of two competing efforts: Future I/O (backed by Compaq, IBM, and Hewlett-Packard) and Next Generation I/O (backed by Intel, Microsoft, and Sun Microsystems). The first InfiniBand Architecture specification was released in 2000, and the technology has since undergone several updates and revisions.

The original single data rate (SDR) links ran at 2.5 Gb/s per lane; a standard 4X port aggregates four lanes for a raw rate of 10 Gb/s in each direction (links are full-duplex). Subsequent generations, DDR (20 Gb/s per 4X port), QDR (40 Gb/s), FDR (56 Gb/s), and EDR (100 Gb/s), significantly increased the available bandwidth and data transfer rates.
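
The quoted per-generation figures mix signaling rates and usable data rates, so it can help to see the arithmetic. The sketch below computes the effective data rate of a 4X link from the per-lane signaling rate and the line-encoding efficiency (8b/10b for SDR through QDR, 64b/66b for FDR and EDR); the numbers are the commonly cited nominal values and are included here only for illustration.

```c
/* ib_link_rates.c: effective 4X link data rate for early InfiniBand generations.
 * data rate = per-lane signaling rate x encoding efficiency x number of lanes.
 * Values are the commonly cited nominal figures.
 */
#include <stdio.h>

struct generation {
    const char *name;
    double lane_signaling_gbps;  /* raw signaling rate per lane, Gb/s */
    double encoding_efficiency;  /* 8b/10b = 0.8, 64b/66b is roughly 0.97 */
};

int main(void)
{
    const struct generation gens[] = {
        { "SDR",  2.5,      8.0 / 10.0 },
        { "DDR",  5.0,      8.0 / 10.0 },
        { "QDR", 10.0,      8.0 / 10.0 },
        { "FDR", 14.0625,  64.0 / 66.0 },
        { "EDR", 25.78125, 64.0 / 66.0 },
    };
    const int lanes = 4;  /* a standard 4X port aggregates four lanes */

    for (size_t i = 0; i < sizeof gens / sizeof gens[0]; i++) {
        double data_gbps = gens[i].lane_signaling_gbps
                         * gens[i].encoding_efficiency
                         * lanes;
        printf("%s: %8.4f Gb/s per lane -> about %6.1f Gb/s of data per 4X link\n",
               gens[i].name, gens[i].lane_signaling_gbps, data_gbps);
    }
    return 0;
}
```

Note that the marketed figures are not entirely consistent: the SDR "10 Gb/s" and FDR "56 Gb/s" numbers refer to the aggregate signaling rate, while the EDR "100 Gb/s" number refers to the usable data rate.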

Later generations continued doubling the per-lane rate. HDR (50 Gb/s of data per lane, 200 Gb/s per 4X port) entered production around 2019 and has since been followed by NDR (100 Gb/s per lane, 400 Gb/s per 4X port). These generations are designed for next-generation data centers and HPC environments that require even higher performance and bandwidth.