Saturday 13 February 2010

linux udp overhead

An interesting experiment is to see the real cost of the linux UDP stack against our metal UDP stack, e.g. measuring the round trip of a 128B UDP packet, metal(A) -> metal(B) -> metal(A) vs metal(A) -> linux(B) -> metal(A).
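To make that concrete, here is a minimal sketch of the machine-A side of such a round-trip timer over the normal linux stack. It is illustrative only, not the actual test rig; the echo peer address and port are placeholders:

    /* rtt.c - rough 128B UDP round-trip timer (the linux side).
     * Assumes the far end echoes each packet straight back.
     */
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in peer = {0};
        peer.sin_family = AF_INET;
        peer.sin_port   = htons(9999);                      /* placeholder port */
        inet_pton(AF_INET, "192.168.0.2", &peer.sin_addr);  /* placeholder ip   */
        connect(fd, (struct sockaddr *)&peer, sizeof peer);

        char buf[128] = {0};                                /* 128B payload     */
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        send(fd, buf, sizeof buf, 0);
        recv(fd, buf, sizeof buf, 0);                       /* wait for echo    */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        long ns = (t1.tv_sec - t0.tv_sec) * 1000000000L
                + (t1.tv_nsec - t0.tv_nsec);
        printf("round trip: %ld ns\n", ns);
        close(fd);
        return 0;
    }

Run something like this in a loop and histogram the results to get profiles like the ones below.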

[figure: 128B UDP roundtrip, all metal]

[figure: 128B UDP roundtrip, metal -> linux -> metal]
As you can see above, the linux overhead is around 4,000-5,000ns. Keep in mind this is using polling UDP loops on linux, so your garden variety blocking socket will be significantly slower. One small note: the metal stack only offloads the ethernet checksum, with the IP/UDP checksums all done in software.
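For reference, a "polling UDP loop" here means something along these lines - a non-blocking socket spun in a tight loop so the receive path never sleeps in the kernel. Again an illustrative sketch, not the actual harness; the port is made up:

    /* poll-echo.c - busy-polling UDP echo loop. */
    #include <fcntl.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        fcntl(fd, F_SETFL, O_NONBLOCK);               /* recv never sleeps    */

        struct sockaddr_in me = {0};
        me.sin_family      = AF_INET;
        me.sin_port        = htons(9999);             /* placeholder port     */
        me.sin_addr.s_addr = htonl(INADDR_ANY);
        bind(fd, (struct sockaddr *)&me, sizeof me);

        char buf[256];
        struct sockaddr_in src;
        socklen_t len;

        for (;;) {                                    /* spin, burning a core */
            len = sizeof src;
            ssize_t n = recvfrom(fd, buf, sizeof buf, 0,
                                 (struct sockaddr *)&src, &len);
            if (n < 0)
                continue;                             /* EAGAIN: poll again   */
            sendto(fd, buf, n, 0, (struct sockaddr *)&src, len);
        }
    }

A blocking socket pays a scheduler wakeup on every packet on top of this, hence "significantly slower". And "IP/UDP checksums in software" means computing the classic RFC 1071 one's-complement sum in code, roughly:

    /* inet_csum - RFC 1071 internet checksum over a header/payload. */
    #include <stddef.h>
    #include <stdint.h>

    uint16_t inet_csum(const void *data, size_t len)
    {
        const uint16_t *p = data;
        uint32_t sum = 0;

        while (len > 1) {                  /* sum 16-bit words    */
            sum += *p++;
            len -= 2;
        }
        if (len)                           /* trailing odd byte   */
            sum += *(const uint8_t *)p;

        while (sum >> 16)                  /* fold carries back   */
            sum = (sum & 0xffff) + (sum >> 16);

        return (uint16_t)~sum;             /* one's complement    */
    }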

On a random note, what's really unusual is how the UDP payload size quite dramatically changes the profiles.


[figure: 64B UDP metal]

[figure: 128B UDP metal]

[figure: 192B UDP metal]

[figure: 256B UDP metal]
Not sure what's going on here. Certainly hope it's not PCIe/SB congestion, e.g. the MAC's DMA (of the Rx buffer) into DDR competing with register read bandwidth? Surely not. One thing is quite clear though: the spacing of the spikes is most likely the latency of a single read on machine B... question is, wtf are there so many peaks in the larger transfers? bizarre.
