Data center bridging

Data center bridging (DCB) refers to a set of enhancements to Ethernet local area networks for use in data center environments. Specifically, DCB aims, for selected traffic, to eliminate loss due to queue overflow and to allocate bandwidth on links. In essence, DCB allows different priorities to be treated, to some extent, as if they were separate pipes. The primary motivation was the sensitivity of Fibre Channel over Ethernet to frame loss. The higher-level goal is to use a single set of Ethernet physical devices or adapters for computers to communicate with a storage area network, local area network and InfiniBand fabric.[1]

Traditional Ethernet is the primary network protocol in data centers for computer to computer communications. However, Ethernet is designed to be a best-effort network that may drop packets when the network or devices are busy. In Internet Protocol networks, transport reliability has traditionally been the responsibility of the transport protocols, such as the Transmission Control Protocol (TCP), with the trade-off being higher complexity, greater processing overhead and the resulting impact on performance and throughput.

One area of evolution for Ethernet is to add extensions to the existing protocol suite to provide reliability without incurring the penalties of TCP. With the move to 10 Gbit/s and faster transmission rates, there is also a desire for higher granularity in control of bandwidth allocation and to ensure it is used more effectively. Beyond the benefits to traditional application traffic, these enhancements would make Ethernet a more viable transport for storage and server cluster traffic.
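The kind of finer-grained bandwidth control meant here can be illustrated with a minimal weighted (deficit round robin) scheduler sketch in Python. Everything below — the function name, the percentage-based weights, the single-interval model — is an illustrative assumption, not part of any standard API:

```python
from collections import deque

def ets_transmit(queues, weights, link_bytes):
    """Sketch of weighted per-class bandwidth sharing (deficit round robin).

    queues     : traffic class -> deque of queued frame sizes in bytes
    weights    : traffic class -> bandwidth share in percent (sums to 100)
    link_bytes : total bytes the link can carry in this interval
    Returns the bytes actually transmitted per traffic class.
    """
    credits = {tc: 0 for tc in queues}
    sent = {tc: 0 for tc in queues}
    # Each class earns credit proportional to its configured share.
    quantum = {tc: link_bytes * weights[tc] // 100 for tc in queues}
    for tc, q in queues.items():
        credits[tc] += quantum[tc]
        # Transmit head-of-line frames while the class has enough credit.
        while q and q[0] <= credits[tc]:
            size = q.popleft()
            credits[tc] -= size
            sent[tc] += size
    return sent
```

For example, with two classes weighted 60/40 on a 15,000-byte interval and 1,500-byte frames queued in each, the sketch transmits 9,000 bytes from the first class and 6,000 from the second.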

To meet these goals, new standards are being developed that either extend the existing set of Ethernet protocols or emulate the connectivity offered by Ethernet protocols. The former are being developed by the Data Center Bridging Task Group of the IEEE 802.1 Working Group of the Institute of Electrical and Electronics Engineers (IEEE), the latter by the Internet Engineering Task Force (IETF).

Different terms have been used to market products based on the underlying Data Center Bridging standards:

  • Data Center Ethernet (DCE) was a term originally coined and trademarked by Cisco Systems. DCE referred to Ethernet enhancements for the Data Center Bridging standards, and also included a Layer 2 multipathing implementation based on the IETF's Transparent Interconnection of Lots of Links (TRILL) standard.[2]

IEEE Task Group

The following have been adopted as IEEE standards:

  • Priority-based Flow Control (PFC): IEEE 802.1Qbb provides a link level flow control mechanism that can be controlled independently for each frame priority. The goal of this mechanism is to ensure zero loss under congestion in DCB networks.
  • Enhanced Transmission Selection (ETS): IEEE 802.1Qaz provides a common management framework for the assignment of bandwidth to frame priorities.
  • Congestion Notification: IEEE 802.1Qau provides end-to-end congestion management for protocols that are capable of transmission rate limiting to avoid frame loss. Even protocols such as TCP that have native congestion management are expected to benefit, because link-level notification reacts to congestion in a more timely manner.
  • Data Center Bridging Capabilities Exchange Protocol (DCBX): a discovery and capability exchange protocol used for conveying the capabilities and configuration of the above features between neighbors, to ensure consistent configuration across the network. It leverages functionality provided by IEEE 802.1AB (LLDP) and is specified as part of the IEEE 802.1Qaz standard.
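PFC is signaled on the wire with a per-priority pause frame: a MAC Control frame (EtherType 0x8808, PFC opcode 0x0101) carrying a priority-enable vector and one 16-bit pause quantum per priority, where one quantum equals 512 bit times on the link. A minimal Python sketch of building such a frame follows; the helper name and its arguments are assumptions for illustration, not a real library API:

```python
import struct

MAC_CONTROL_ETHERTYPE = 0x8808  # EtherType for MAC Control frames
PFC_OPCODE = 0x0101             # opcode for per-priority pause

def build_pfc_frame(src_mac: bytes, pause_priorities: dict) -> bytes:
    """Build a per-priority (PFC) pause frame.

    pause_priorities maps a priority (0-7) to a pause quantum;
    a quantum of 0xFFFF requests the longest pause.
    """
    # Reserved multicast address to which MAC Control frames are sent.
    dst_mac = bytes.fromhex("0180c2000001")
    enable_vector = 0
    quanta = [0] * 8
    for prio, quantum in pause_priorities.items():
        enable_vector |= 1 << prio   # set the class-enable bit
        quanta[prio] = quantum       # pause time for that priority
    payload = struct.pack("!HH8H", PFC_OPCODE, enable_vector, *quanta)
    frame = dst_mac + src_mac + struct.pack("!H", MAC_CONTROL_ETHERTYPE) + payload
    return frame.ljust(60, b"\x00")  # pad to the minimum Ethernet frame size
```

For instance, `build_pfc_frame(src, {3: 0xFFFF})` asks the link partner to pause only priority 3 traffic while letting the other seven priorities continue to flow — the key difference from the all-or-nothing 802.3x pause.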

Other groups

  • The IETF TRILL (Transparent Interconnection of Lots of Links) standard provides least-cost pair-wise data forwarding without configuration in multi-hop networks with arbitrary topology, safe forwarding even during periods of temporary loops, and support for multipathing of both unicast and multicast traffic. TRILL accomplishes this by using IS-IS (Intermediate System to Intermediate System) link-state routing and by encapsulating traffic using a header that includes a hop count. TRILL supports VLANs and frame priorities. Devices that implement TRILL are called RBridges, which can incrementally replace IEEE 802.1 customer bridges.
  • IEEE 802.1aq Shortest Path Bridging (SPB) specifies shortest-path bridging of unicast and multicast Ethernet frames, calculating multiple active topologies (virtual LANs) that can share learnt station location information. Two modes of operation are defined, depending on whether the source bridge uses 802.1ad (Q-in-Q) encapsulation, known as SPBV, or 802.1ah (MAC-in-MAC) encapsulation, known as SPBM. SPBV supports a VLAN by using one VLAN Identifier (VID) per node to identify the shortest path tree (SPT) associated with that node. SPBM supports a VLAN by using one or more Backbone MAC addresses to identify each node and its associated SPT, and it can support multiple forwarding topologies for load sharing across equal-cost trees, using a single B-VID per forwarding topology. Both SPBV and SPBM use link-state routing technology; SPBM, by virtue of its MAC-in-MAC encapsulation, is more suitable for a large data center than SPBV. 802.1aq defines 16 tunable multipath options as part of the base protocol, with an extensible multipathing mechanism to allow many more variations in the future. It supports the dynamic creation of virtual LANs that interconnect all members with symmetric shortest-path routes; these virtual LANs can be deterministically assigned to the different multipaths, providing a degree of traffic engineering in addition to multipathing, and can grow or shrink with simple membership changes. 802.1aq is fully backward compatible with all 802.1 protocols and is expected to become an IEEE standard in 2012.
  • Fibre Channel over Ethernet (FCoE): this ANSI T11 project utilizes existing Fibre Channel protocols running over Ethernet to give servers access to Fibre Channel storage via Ethernet. As noted above, one of the drivers behind enhancing Ethernet is to support storage traffic. While iSCSI was available, it depends on TCP/IP, and there was a desire to support storage traffic at layer 2. This gave rise to the FCoE protocol, which needed a reliable Ethernet transport. The standard was finalized in June 2009 by the ANSI T11 committee.
  • IEEE 802.1p/Q provides 8 traffic classes for priority based forwarding.
  • IEEE 802.3bd provides the MAC Control frame mechanism for link-level, per-priority pause flow control.
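The eight traffic classes mentioned above come from the 3-bit Priority Code Point (PCP) field in the 802.1Q tag, whose 16-bit Tag Control Information is laid out as PCP(3) | DEI(1) | VID(12). A short Python sketch of extracting these fields from a tagged frame (the function name is an illustrative assumption):

```python
import struct

TPID_8021Q = 0x8100  # Tag Protocol Identifier for an 802.1Q tag

def parse_vlan_priority(frame: bytes):
    """Extract (PCP, DEI, VID) from an 802.1Q-tagged Ethernet frame.

    The PCP field carries the eight 802.1p priorities that PFC and
    ETS act on; the VID identifies the VLAN.
    """
    # The TPID sits right after the two 6-byte MAC addresses.
    (tpid,) = struct.unpack_from("!H", frame, 12)
    if tpid != TPID_8021Q:
        raise ValueError("frame is not 802.1Q tagged")
    (tci,) = struct.unpack_from("!H", frame, 14)
    pcp = tci >> 13          # top 3 bits: priority
    dei = (tci >> 12) & 0x1  # 1 bit: drop eligible indicator
    vid = tci & 0x0FFF       # low 12 bits: VLAN identifier
    return pcp, dei, vid
```

A switch implementing PFC or ETS classifies each arriving frame by this PCP value before queuing it into the corresponding per-priority queue.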

These new protocols will require new hardware and software in both the network and the server interconnect. Such products are being developed by companies such as Avaya, Brocade, Cisco, Dell, EMC, Emulex, HP, Huawei, IBM, and QLogic.[citation needed]


  1. ^ Silvano Gai, Data Center Networks and Fibre Channel over Ethernet (FCoE) (Nuova Systems, 2008)
  2. ^ Radia Perlman et al. (July 2011). "Routing Bridges (RBridges): Base Protocol Specification". RFC 6325. IETF. 
  3. ^ "cee-authors". Yahoo Groups archive. January 2008 – January 2009. Retrieved October 6, 2011.