OK. Now the full story :)
We actually had no goal of testing the link with a big TCP MSS; our
idea was to compare the performance of TCP over IPsec on the satellite
link with standard TCP/IP. But as soon as we realized that TCP over
IPsec (FreeSWAN on Linux 2.2.18) performed much better than TCP over
regular IP (how come!), we found out that by default TCP uses a 16K MSS
if the session has to be built via the encrypted ipsec interface. The
link was not idle during the test, but the average load for that period
did not exceed 600-800 kbps, so it was not a problem to reach a speed
of 730 KByte/sec with 16K MSS TCP over IPsec on the 8 Mbps link.
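(For reference, that works out to the ~73% link efficiency mentioned
further down; a quick sketch in Python, assuming 1 KByte = 1000 bytes:)

# Back-of-envelope: observed goodput vs. raw link rate
link_rate_kbps = 8000               # 8 Mbps satellite link
goodput_kbyte_s = 730               # observed with 16K MSS over IPsec
goodput_kbps = goodput_kbyte_s * 8  # bytes/s -> bits/s
print(f"{goodput_kbps / link_rate_kbps:.0%}")  # -> 73%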
Now, since your reaction is pessimistic: should we reduce the default
TCP MSS on our server before deploying this IPsec setup on our proxy
server, in order to avoid performance degradation later (by then the
link would carry an average load of ~30%)? One way we could do that is
sketched below.
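(A minimal sketch of what we have in mind, capping the MSS
per-application on the proxy's listening socket rather than
system-wide; the 1460-byte value and the port number are placeholders,
not measured choices:)

import socket

# Cap the MSS the proxy advertises; the option must be set before the
# connection is established, and accepted sockets inherit it on Linux.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG, 1460)  # placeholder MSS
srv.bind(("0.0.0.0", 3128))  # placeholder proxy port
srv.listen(5)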
Comments?
With best regards,
---
Sergey Raber
GMD.NET Project
Satellite Communication

----- Original Message -----
From: "Charlie Younghusband" <[email protected]>
To: <[email protected]>
Cc: <[email protected]>
Sent: Tuesday, August 28, 2001 9:10 AM
Subject: Re: TCP question
> Argh, I just realized that some of my email is bogus. I was thinking
> that the MSS was much larger than what you are using (or even your 64K
> trick), not simply roughly 10x. At that point it becomes more an issue
> of using a larger but still reasonable granularity of chunk. It would
> still scare people running it onto a hybrid network such as the greater
> Internet or even a busy LAN (which has a surprisingly terrible effect
> on your sat bandwidth, from my experience) on one end, for classical
> reasons as already mentioned. But for a private network satellite
> hookup, it's more viable as a tuning option given the bandwidth you're
> playing with. Interesting...
>
> Charlie
>
> Charlie Younghusband wrote:
> > It does not surprise me that you received performance gains. Looking
> > at the simple single connection over an uncongested direct satellite
> > link as part of another project I worked on, we did a simple
> > mathematical calculation where we asked how low the BER has to go
> > before the largest (RFC-compliant) TCP MSS size available was no
> > longer optimal, and factoring in header overhead the answer was
> > ridiculously low (like 10e-3). By extension, it was clear that a
> > larger MSS was much more efficient on the higher-bandwidth links seen
> > today. I'm not sure where you got the 16208 MSS size, but I suspect
> > that it is link-bandwidth specific, and if for some reason your
> > bandwidth were suddenly seriously reduced (such as by a competing
> > data stream) you'd suddenly have virtually no bandwidth available to
> > either data stream, as neither would complete a TCP segment very
> > often. Still, it is an interesting tuning idea for a single TCP
> > connection over a satellite link with no other competition, when the
> > bandwidth is known and fixed. Other than that case, things fall apart
> > quickly, as there is poor scaling and little adaptation. As Gorry
> > Fairhurst pointed out, run it with anything else and then see. There
> > are other, better options. (I'll also point out that you're still
> > only at best about 73% link efficiency with your modifications, so
> > even if this specific case always applies to your usage, wasting over
> > 2 Mbps of satellite bandwidth probably won't please whoever pays for
> > it :))
> >
> > Cheers,
> > Charlie
> >
> > ---
> > Charlie Younghusband
> > Network Software Engineering
> > Xiphos Technologies http://www.xiphos.ca/
> > 514-848-9640
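(For concreteness, the trade-off Charlie describes above can be
sketched numerically. A toy model only: the 100-byte per-segment
IP+TCP+ESP overhead is an assumption, and retransmission dynamics are
ignored:)

# Per-segment header overhead favors a large MSS; bit errors favor a
# small one, since one flipped bit loses the whole segment.
OVERHEAD = 100  # assumed IP + TCP + ESP bytes per segment

def efficiency(mss, ber):
    """Goodput fraction: header efficiency times the probability that
    the whole segment arrives without a bit error."""
    bits = 8 * (mss + OVERHEAD)
    return mss / (mss + OVERHEAD) * (1 - ber) ** bits

for ber in (1e-9, 1e-7, 1e-5, 1e-3):
    best = max((1460, 16208), key=lambda mss: efficiency(mss, ber))
    print(f"BER {ber:.0e}: prefer MSS {best}")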