Data Center Ethernet
Data Center Ethernet (also known as Converged Enhanced Ethernet) describes an enhanced Ethernet that will enable convergence of various applications in data centers (LAN, SAN, and HPC) onto a single interconnect technology.
Today, data centers deploy different networks based on distinct interconnect technologies to transport traffic from different applications; for example, storage traffic is transported over a Fibre Channel-based SAN or InfiniBand, client-server application traffic is handled by an Ethernet-based LAN, while server-to-server IPC may be supported over one of various interconnects such as InfiniBand or Myrinet. A typical server in a high-performance data center has multiple interfaces (Ethernet, Fibre Channel, InfiniBand) to allow it to be connected to these disparate networks. With data centers becoming bigger and more complex, managing a different interconnect technology for each application's traffic is becoming cost- and resource-intensive. With recent advances in Ethernet speeds (10 Gbit/s is already standard, and 40 Gbit/s and 100 Gbit/s are in development), Ethernet has become an attractive choice as the technology of convergence in the data center.
Another motivating factor for convergence is the consolidation of servers brought about by the advent of blade servers. Today, blade server backplanes must be designed to support multiple interconnect technologies. Using a single interconnect technology such as Ethernet can simplify backplane designs, thereby reducing overall cost and power consumption.
However, current standards-based Ethernet networks cannot provide the service required by traffic from storage and high-performance computing applications. To understand the current limitations and enhancements required, one needs to know about the current state of Ethernet, the high-level requirements of a converged data center network, enhancements needed to current Ethernet, and relevant standards.
State of Ethernet Circa 2007
* Ethernet networks do not provide a lossless transport. Technically 802.3x PAUSE can be used for this, but because it affects all traffic (including control traffic and traffic that can tolerate loss), it is usually turned off.
* Ethernet switches based on the IEEE 802.1Q standard use static priority for scheduling traffic. This works well for current LAN environments (control > voice > data) but can potentially cause starvation of lower priorities. Moreover, the scheduler provides neither minimum bandwidth guarantees nor maximum bandwidth limits, so it does not allow control over the sharing of bandwidth across different traffic classes.
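The starvation risk of static (strict) priority can be sketched in a few lines. This is a hypothetical model for illustration, not an implementation of 802.1Q scheduling: as long as a higher-priority queue is backlogged, lower-priority queues are never served.

```python
from collections import deque

def strict_priority_pick(queues):
    """Pick the next packet under strict priority: the highest-priority
    non-empty queue always wins, so lower-priority queues can starve."""
    for q in queues:  # queues[0] = highest priority
        if q:
            return q.popleft()
    return None

# Two classes: a saturated high-priority queue and one low-priority packet.
high = deque(f"hi-{i}" for i in range(5))
low = deque(["lo-0"])
served = [strict_priority_pick([high, low]) for _ in range(5)]
print(served)     # all five picks come from the high-priority queue
print(list(low))  # the low-priority packet is still waiting
```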
* Ethernet bridged LANs typically employ one of the variants of the spanning tree protocol (STP, RSTP, or MSTP). As a result, the path from a source to a destination is not always the shortest path. Further, equal-cost multipath (ECMP) forwarding is not supported.

Converged Data Center Network Requirements
* The applications developed for transport over existing storage networks demand a low-latency, lossless network.
* High-performance computing nodes need a very high throughput, low-latency, lossless network for server-to-server communication.
* Client-server applications need a scalable TCP/IP-friendly network.
* Storage and HPC networks make extensive use of ECMP to maximize use of network resources.

Enhancements to Ethernet Required
* Data Center Ethernet needs a more flexible scheduling algorithm that will allow sharing of bandwidth between lossy and lossless traffic classes while still achieving traffic differentiation.
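Such a scheduler could behave along the lines of a weighted round robin, serving backlogged classes in proportion to configured weights so that no class starves. This is a simplified sketch under assumed names (`weighted_round_robin` is illustrative, not a standardized algorithm):

```python
from collections import deque

def weighted_round_robin(queues, weights, n_picks):
    """Serve each backlogged queue in proportion to its weight, giving
    controllable bandwidth sharing without starving any class."""
    served = []
    while len(served) < n_picks and any(queues):
        for q, w in zip(queues, weights):
            for _ in range(w):  # up to `w` packets per round for this class
                if q and len(served) < n_picks:
                    served.append(q.popleft())
    return served

lan = deque(f"lan-{i}" for i in range(20))
san = deque(f"san-{i}" for i in range(20))
out = weighted_round_robin([lan, san], weights=[3, 1], n_picks=8)
print(out)  # a 3:1 mix: LAN traffic gets 6 of 8 slots, SAN gets 2
```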
* A combination of link-level flow control and end-to-end congestion management is required to achieve lossless behavior. In the absence of end-to-end flow control, link-level flow control can lead to congestion spreading and deadlock. Link-level flow control needs to be enhanced to operate per priority.
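A per-priority pause decision can be sketched as follows (a hypothetical model; per-priority link-level flow control was later standardized as IEEE 802.1Qbb Priority-based Flow Control). Only the congested priority is paused, while other classes continue to flow:

```python
def pause_frame(buffers, threshold):
    """Per-priority pause decision: return the set of priorities the
    receiver asks the sender to pause, leaving other classes unaffected
    (unlike 802.3x PAUSE, which stops the whole link)."""
    return {prio for prio, occupancy in buffers.items() if occupancy >= threshold}

# Priority 3 (e.g. storage traffic) is backing up; priorities 0 and 1 are fine.
buffers = {0: 10, 1: 25, 3: 95}  # per-priority buffer occupancy (percent)
paused = pause_frame(buffers, threshold=80)
print(paused)  # {3}: only the congested priority is paused
```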
* The Ethernet switch control plane needs to adopt protocols and algorithms to achieve shortest-path forwarding and ECMP.

Relevant Ethernet standards for supporting Data Center Ethernet
* IEEE 802.1p/Q – Eight traffic classes for priority-based forwarding.
* [http://www.ieee802.org/1/pages/802.1au.html IEEE 802.1Qau] – End-to-end congestion management.
* IEEE 802.3x – A PAUSE mechanism providing on/off link-level flow control.
* [http://www.ieee802.org/1/pages/802.1aq.html IEEE 802.1aq] – Shortest path bridging.
* IETF [http://www.ietf.org/html.charters/trill-charter.html TRILL] – Transparent Interconnection of Lots of Links.
* T11 FCoE – Fibre Channel over Ethernet. This effort runs the existing Fibre Channel protocols over Ethernet, enabling servers to access Fibre Channel storage via Ethernet.
* IEEE New – IEEE 802.1 is currently investigating enhanced transmission selection, which would provide more sophisticated controls for bandwidth sharing between traffic classes, as well as per-priority link-level flow control.

Companies Actively Contributing to the Development of Data Center Ethernet
The following companies are developing products and actively participate in standards related to Data Center Ethernet: Brocade, Cisco, EMC, Emulex, Force10 Networks, Fujitsu, IBM, Intel, Mellanox, Myricom [http://www.myricom.com], Nuova Systems, Sun Microsystems [http://www.sun.com], Teak Technologies, and Woven Systems.
Wikimedia Foundation. 2010.