Evaluating Batching for TCP Offload

Ever-increasing demand for performance and scalability in server networking has generated significant interest in offloading TCP processing to network adapters. The benefits of TCP offload stem from its potential to reduce performance-limiting operations such as interrupts, cache misses, and I/O bus crossings. Exploiting this potential, however, is not easy. Design choices that improve performance for a given hardware configuration, workload, or set of network characteristics can reduce performance under different conditions. We have evaluated a TCP offload prototype's ability to reduce I/O bus crossings, focusing on the impact of batching interactions between the host and adapter. Our analysis reveals that latency and the number of bus crossings are sensitive to several key batching parameters. We demonstrate the importance of these parameters to the viability of any TCP offload design.
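The trade-off the abstract describes, fewer bus crossings per packet at the cost of added latency, can be sketched with a toy model. All function names and parameters below are illustrative assumptions for exposition; they do not come from the report:

```python
import math

def bus_crossings(n_packets: int, batch_size: int) -> int:
    """Toy model: if the host posts descriptors to the adapter in
    batches of `batch_size`, each batch costs one bus crossing."""
    return math.ceil(n_packets / batch_size)

def worst_case_added_latency_us(batch_size: int, inter_arrival_us: float) -> float:
    """Toy model: the first packet in a batch may wait for
    batch_size - 1 later arrivals before the batch is posted."""
    return (batch_size - 1) * inter_arrival_us

# Larger batches amortize bus crossings but delay early packets.
for b in (1, 4, 16):
    print(b, bus_crossings(1000, b), worst_case_added_latency_us(b, 10.0))
```

Under this simplistic model, growing the batch from 1 to 16 cuts crossings for 1000 packets from 1000 to 63 while the worst-case queuing delay rises from 0 to 150 µs, which is the sensitivity to batching parameters the abstract highlights.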

By: Doug Freimuth, Elbert Hu, Jason LaVoie, Ronald Mraz, Erich Nahum, John Tracey

Published as: IBM Research Report RC23894, 2006


This report has been submitted for publication outside of IBM and will probably be copyrighted if accepted for publication. It has been issued as a Research Report for early dissemination of its contents. In view of the transfer of copyright to the outside publisher, its distribution outside of IBM prior to publication should be limited to peer communications and specific requests. After outside publication, requests should be filled only by reprints or legally obtained copies of the article (e.g., payment of royalties).
