Abstract
Among interconnect technologies, InfiniBand has gained wide acceptance in High Performance Computing (HPC), owing to its high bandwidth and, in particular, its low latency. Since InfiniBand is very flexible, supporting several kinds of messages, it is suitable, in principle, not only for HPC but also for the data acquisition systems of High Energy Physics (HEP) experiments. To assess InfiniBand's capabilities in the on-line systems of HEP experiments, we performed measurements of point-to-point UDP data transfers over a 4-lane Double Data Rate InfiniBand connection, by means of the IPoIB (IP over InfiniBand) protocol stack. The Host Channel Adapter cards were mounted on the 8-lane PCI-Express bus of commodity PCs acting both as transmitters and as receivers, so that the tests probed not only the capacity of the link itself but also the load placed on the host CPUs, buses, and operating systems. Using either the "Unreliable Datagram" or the "Reliable Connected" InfiniBand transfer mode, we measured the maximum achievable UDP data transfer throughput, the frame rate, and the CPU loads of the sender/receiver processes and of the interrupt handlers as a function of the datagram size. The InfiniBand UDP point-to-point results are then compared with those of analogous tests performed between the same PCs over a 10-Gigabit Ethernet link.
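To make the kind of measurement described above concrete, the following is a minimal sketch of a UDP throughput sender such as could be run over an IPoIB (or 10-Gigabit Ethernet) interface. It is illustrative only, not the authors' actual benchmark: the destination address, port, datagram size, and datagram count are hypothetical command-line knobs, and a real test would pair it with a receiver that counts arriving frames.

```c
/*
 * Minimal sketch of a UDP throughput sender (not the paper's benchmark).
 * It sends <count> datagrams of <datagram-bytes> each to <dest-ip>:<port>
 * and reports the achieved bit rate and frame rate.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 5) {
        fprintf(stderr, "usage: %s <dest-ip> <port> <datagram-bytes> <count>\n",
                argv[0]);
        return 1;
    }
    const char *ip = argv[1];
    int port = atoi(argv[2]);
    size_t size = (size_t)atol(argv[3]);  /* datagram payload size under test */
    long count = atol(argv[4]);           /* number of datagrams to send */

    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port = htons((uint16_t)port);
    if (inet_pton(AF_INET, ip, &dst.sin_addr) != 1) {
        fprintf(stderr, "bad address: %s\n", ip);
        return 1;
    }

    char *buf = calloc(1, size);  /* zero-filled dummy payload */
    if (!buf) { perror("calloc"); return 1; }

    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    for (long i = 0; i < count; i++) {
        if (sendto(fd, buf, size, 0,
                   (struct sockaddr *)&dst, sizeof dst) < 0) {
            perror("sendto");
            break;
        }
    }
    gettimeofday(&t1, NULL);

    double secs = (double)(t1.tv_sec - t0.tv_sec)
                + (double)(t1.tv_usec - t0.tv_usec) / 1e6;
    double mbits = (double)count * (double)size * 8.0 / 1e6;
    printf("%ld datagrams of %zu B in %.3f s: %.1f Mbit/s, %.0f frames/s\n",
           count, size, secs, mbits / secs, (double)count / secs);

    free(buf);
    close(fd);
    return 0;
}
```

Sweeping the datagram-size argument while sampling sender- and receiver-side CPU usage (e.g. from /proc/stat) yields curves of the same shape as those the abstract describes: throughput, frame rate, and CPU load as a function of datagram size.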