Remote Direct Memory Access
Remote Direct Memory Access (RDMA) allows data to move directly from the memory of one computer into that of another without involving either one's operating system. This permits high-throughput, low-latency networking, which is especially useful in massively parallel computer clusters. RDMA is a specific way of applying direct memory access (DMA) across a network. RDMA supports
zero-copy networking by enabling the network adapter to transfer data directly to or from application memory, eliminating the need to copy data between application memory and the data buffers in the operating system. Such transfers require no work from CPUs, caches, or context switches, and they proceed in parallel with other system operations. When an application performs an RDMA Read or Write request, the application data is delivered directly to the network, reducing latency and enabling fast message transfer.

This strategy presents a problem: the target node is not notified of the completion of the request (one-sided communication). The common way to notify it is to change a memory byte when the data has been delivered, but this requires the target to poll on that byte. Not only does this polling consume CPU cycles, but the memory footprint and the latency increase linearly with the number of possible peer nodes, which limits the use of RDMA in High-Performance Computing (HPC) in favor of MPI.
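The pattern is easy to see in code. The sketch below is a minimal host-memory simulation, not a real RDMA program: a second thread stands in for the remote initiator, and the buffer layout (payload followed by a trailing flag byte) is an illustrative assumption. A real implementation would post the write through a verbs-style API against a connected queue pair.

    /* Host-memory simulation of the one-sided "write payload, then set a
     * flag byte" pattern.  A second thread plays the remote writer so the
     * example compiles and runs anywhere with POSIX threads.
     * Build: cc -O2 -pthread rdma_poll.c -o rdma_poll */
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define PAYLOAD_SIZE 64

    /* Assumed layout: payload first, completion flag last, so the flag is
     * only set after the payload bytes have landed. */
    static char buffer[PAYLOAD_SIZE + 1];
    static volatile char *flag = &buffer[PAYLOAD_SIZE];

    static void *remote_writer(void *arg)
    {
        (void)arg;
        usleep(1000);                       /* pretend network latency */
        memcpy(buffer, "hello from the initiator", 25); /* the "RDMA Write" */
        __sync_synchronize();               /* order payload before flag */
        *flag = 1;                          /* completion notification */
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, remote_writer, NULL);

        /* Target side: burn CPU cycles polling the flag byte. */
        while (*flag == 0)
            ;                               /* spin */

        printf("received: %s\n", buffer);
        pthread_join(t, NULL);
        return 0;
    }

Note how the target has no other way to learn that data has arrived; with many possible peers it would need one such flag (and one polling loop, or a scan) per peer, which is the scaling problem described above.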
The Send/Recv model used by other zero-copy HPC interconnects, such as Myrinet or Quadrics, does not have any of these problems and offers comparable performance, since their native programming interfaces are very similar to MPI.
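For contrast, a minimal two-sided exchange is sketched below in generic MPI (not specific to any one interconnect). The receiver is notified of completion when its matching receive returns; there is no flag byte to poll.

    /* Minimal two-sided Send/Recv exchange in MPI: completion of the
     * transfer is signaled by MPI_Recv returning.
     * Build and run: mpicc msg.c -o msg && mpirun -n 2 ./msg */
    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        int rank;
        char buf[64];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            strcpy(buf, "hello from rank 0");
            MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received: %s\n", buf); /* completion is implicit */
        }

        MPI_Finalize();
        return 0;
    }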
RDMA reduces the need for protocol overhead, which can squeeze out the capacity to move data across a network, reducing performance, limiting how fast an application can get the data it needs, and restricting the size and scalability of a cluster. However, RDMA may introduce some overhead of its own, owing to the need for memory registration.
Zero-copy protocols usually require that the memory area involved in the communication be kept in main memory, at least for the duration of the transfer; for instance, it must not be swapped out. Otherwise, the DMA engine might use out-of-date data, raising the risk of memory corruption. The usual approach is to pin the memory down so that it stays in main memory, but this creates an often unexpected overhead: memory registration is very expensive, and it increases the latency linearly with the size of the data (a sketch illustrating this cost follows the list below). Several approaches have been adopted to address this issue:
* deferring memory registration out of the critical path, thus partly hiding the latency increase;
* using caching techniques to keep data pinned as long as possible, so that the overhead is reduced for applications performing communications in the same memory area several times;
* pipelining memory registration and data transfer, as done on InfiniBand or Myrinet, for instance;
* getting rid of the need for registration altogether, as Quadrics high-speed networks do.
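The registration cost itself is easy to observe. RDMA stacks register memory through their own APIs (libibverbs' ibv_reg_mr(), for example), but the underlying pinning can be approximated with POSIX mlock(). The sketch below is a rough illustration rather than a benchmark: it times mlock() on buffers of increasing size to show the roughly linear growth described above. Real registration also programs the NIC's address translation tables, so it is more expensive still.

    /* Rough illustration of pinning cost: time POSIX mlock() on buffers
     * of increasing size and watch the cost grow with the buffer.
     * Build: cc -O2 pin_cost.c -o pin_cost   (may need `ulimit -l` raised) */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <time.h>

    static double now_sec(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void)
    {
        for (size_t size = 1 << 20; size <= (size_t)1 << 26; size <<= 1) {
            void *buf = malloc(size);
            if (!buf)
                return 1;

            double t0 = now_sec();
            if (mlock(buf, size) != 0) {   /* pin pages in RAM */
                perror("mlock");
                free(buf);
                return 1;
            }
            double t1 = now_sec();

            printf("%8zu KiB pinned in %.3f ms\n", size / 1024,
                   (t1 - t0) * 1e3);
            munlock(buf, size);
            free(buf);
        }
        return 0;
    }

A registration cache, the second strategy in the list above, amortizes exactly this cost by remembering which buffers are already pinned and skipping mlock()/ibv_reg_mr() on repeat use.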
RDMA's acceptance is also limited by the need to install a different networking infrastructure. New standards enable Ethernet RDMA implementation at the physical layer, with TCP/IP as the transport, combining the performance and latency advantages of RDMA with a low-cost, standards-based solution. The RDMA Consortium and the DAT Collaborative [ [http://www.datcollaborative.org/ DAT Collaborative website.] ] have played key roles in the development of RDMA protocols and APIs for consideration by standards groups such as the Internet Engineering Task Force and the Interconnect Software Consortium. [ [http://www.opengroup.org/icsc/ The Interconnect Software Consortium website.] ] Software vendors such as Oracle Corporation support these APIs in their latest products, and network adapters that implement RDMA over Ethernet are being developed. Common RDMA implementations include the
Virtual Interface Architecture, InfiniBand, and iWARP.
External links
* [http://www.rdmaconsortium.org/home RDMA Consortium]
* [http://www.hpcwire.com/hpc/815242.html A Critique of RDMA for High-Performance Computing]