Re: [ns-users] TCP Reno (bug?)
The cwnd in ns always keeps growing in the absence of loss; the send
window, however, does not. The send window is governed by the cwnd as
well as by the maximum receiver-advertised window. Although there is no
flow control in the TCP implementations in ns, there is a maximum
window size for transmission. This window is set in wnd_, I think.
Moncef Elaoud
[email protected] wrote:
> I'm doing some experiments with various flavours of TCP and have come
> across an oddity (ns 2.1b5). I wonder if anyone could enlighten me or
> comment?
>
> With a simple network of one sender, one receiver, and three links of
> 2Mb, 1Mb, and 2Mb with 5ms delays between them, I set up an
> FTP-over-Reno source from Tx to Rx.
>
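For reference, I assume the script looks roughly like the sketch below
(untested, my own node names; the FTP attach API has moved around
between ns versions, so adjust for 2.1b5 if needed):

  set ns [new Simulator]
  set tx [$ns node]
  set r1 [$ns node]
  set r2 [$ns node]
  set rx [$ns node]
  $ns duplex-link $tx $r1 2Mb 5ms DropTail
  $ns duplex-link $r1 $r2 1Mb 5ms DropTail   ;# the 1Mb bottleneck
  $ns duplex-link $r2 $rx 2Mb 5ms DropTail

  set tcp  [new Agent/TCP/Reno]
  set sink [new Agent/TCPSink]
  $ns attach-agent $tx $tcp
  $ns attach-agent $rx $sink
  $ns connect $tcp $sink

  set ftp [new Application/FTP]
  $ftp attach-agent $tcp
  $ns at 0.1 "$ftp start"

  # sample seqno/cwnd every 8ms, roughly matching the dump below
  proc dump {} {
      global ns tcp
      set now [$ns now]
      puts [format "t: %f seqno: %d cwnd: %f" $now [$tcp set t_seqno_] [$tcp set cwnd_]]
      $ns at [expr {$now + 0.008}] "dump"
  }
  $ns at 0.1 "dump"
  $ns at 10.0 "exit 0"
  $ns run
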
> Monitoring the 'cwnd' of Reno, I see it go through slow start, into
> linear increase... and carry on going. Dumped data follows:
>
> -------------------------------------------------
> t: 0.100000 seqno: 0 cwnd: 1.000000
> t: 0.146640 seqno: 1 cwnd: 2.000000
> t: 0.146640 seqno: 2 cwnd: 2.000000
> t: 0.193280 seqno: 3 cwnd: 3.000000
> t: 0.193280 seqno: 4 cwnd: 3.000000
> t: 0.201280 seqno: 5 cwnd: 4.000000
> t: 0.201280 seqno: 6 cwnd: 4.000000
> t: 0.239920 seqno: 7 cwnd: 5.000000
> t: 0.239920 seqno: 8 cwnd: 5.000000
> t: 0.247920 seqno: 9 cwnd: 6.000000
> t: 0.247920 seqno: 10 cwnd: 6.000000
> t: 0.255920 seqno: 11 cwnd: 7.000000
> t: 0.255920 seqno: 12 cwnd: 7.000000
> t: 0.263920 seqno: 13 cwnd: 8.000000
> t: 0.263920 seqno: 14 cwnd: 8.000000
> t: 0.286560 seqno: 15 cwnd: 9.000000
> t: 0.286560 seqno: 16 cwnd: 9.000000
> t: 0.294560 seqno: 17 cwnd: 10.000000
> t: 0.294560 seqno: 18 cwnd: 10.000000
> t: 0.302560 seqno: 19 cwnd: 11.000000
> t: 0.302560 seqno: 20 cwnd: 11.000000
> t: 0.310560 seqno: 21 cwnd: 12.000000
> t: 0.310560 seqno: 22 cwnd: 12.000000
> t: 0.318560 seqno: 23 cwnd: 13.000000
> t: 0.318560 seqno: 24 cwnd: 13.000000
> t: 0.326560 seqno: 25 cwnd: 14.000000
> t: 0.326560 seqno: 26 cwnd: 14.000000
> t: 0.334560 seqno: 27 cwnd: 15.000000
> t: 0.334560 seqno: 28 cwnd: 15.000000
> t: 0.342560 seqno: 29 cwnd: 16.000000
> t: 0.342560 seqno: 30 cwnd: 16.000000
> t: 0.350560 seqno: 31 cwnd: 17.000000
> t: 0.350560 seqno: 32 cwnd: 17.000000
> [...]
> t: 24.334560 seqno: 3033 cwnd: 79.946146
> t: 24.342560 seqno: 3034 cwnd: 79.958654
> t: 24.350560 seqno: 3035 cwnd: 79.971161
> t: 24.358560 seqno: 3036 cwnd: 79.983665
> t: 24.366560 seqno: 3037 cwnd: 79.996168
> t: 24.374560 seqno: 3038 cwnd: 80.008668
> t: 24.382560 seqno: 3039 cwnd: 80.021167
> t: 24.390560 seqno: 3040 cwnd: 80.033664
> t: 24.398560 seqno: 3041 cwnd: 80.046158
> [...]
> t: 904.934560 seqno: 113108 cwnd: 475.965514
> t: 904.942560 seqno: 113109 cwnd: 475.967615
> t: 904.950560 seqno: 113110 cwnd: 475.969716
> t: 904.958560 seqno: 113111 cwnd: 475.971817
> t: 904.966560 seqno: 113112 cwnd: 475.973918
> -------------------------------------------------
>
> Having run quite lengthy simulations of this type, I find that the
> 'cwnd' variable just keeps increasing, as there are no losses or
> timeouts to affect it. Is anyone able to explain why this is, and why
> it does not seem to recognise the "hard limit" of available bandwidth
> and exhibit a "sawtooth" profile? (where cwnd increases to the point
> of loss or timeout and drops back to 1/2*cwnd)
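
This is really the same point as above: with window_ left at its
default of 20, the queue at the 1Mb link never overflows, so Reno never
sees the loss it needs for a sawtooth. Raising the cap and keeping the
bottleneck queue modest should bring it back (untested, node names as
in the sketch above):

  $tcp set window_ 100        ;# let cwnd actually reach the point of loss
  $ns queue-limit $r1 $r2 10  ;# small DropTail queue at the bottleneck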
>
> Running the same experiment with TCP Vegas shows predictable results.
> In this case, cwnd fluctuates around a value of 7, which seems correct
> for a link of this capacity.
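
That value looks about right to me. A rough pipe-size check (my
numbers, untested, assuming the default 1000-byte packets and only the
~30ms of propagation RTT):

  # 1Mb/s bottleneck * 30ms RTT / 8000 bits per packet
  set bdp [expr {1e6 * 0.030 / 8000.0}]
  puts "propagation BDP ~ $bdp packets"   ;# about 3.75

Add the per-hop transmission delay plus the one to three extra packets
Vegas tries to keep queued (v_alpha_/v_beta_, if I remember the names
and defaults right) and a window hovering around 7 is what you would
expect.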
>
> Any information greatly appreciated.
>
> Rik Wade
> --
> ATM-MM Group
> School of Computer Studies
> University of Leeds
> UK