
Re: [ns] delay jitter



At Thursday 04:35 AM 5/17/01 -0400, Sudhindra Suresh Bengeri wrote:
>Hi,
>
>Could someone of u pls. tell me if the following definition of Delay
>Jitter is correct:
>
>Delay Jitter = max. delay - min. delay
>also, do mean delay jitter (and variance of delay jitter) make any sense?
>That is, does it make sense to keep track of delay jitter over time and
>then calculate its mean?

You have to be precise when you say min and max, i.e. about what exactly we 
mean by min and max.
I will give you a detailed answer, starting from scratch and ending up with 
what you want to learn, if I did not misunderstand your question.
Here we go:
First, this is closely related to statistics and probability. I can describe 
the delay bound as a deterministic bound (bound referring to min or max).

Second, I would understand max delay as the "worst case" delay.
But remember, this max delay, or worst-case delay, also includes the 
propagation delay in its calculation.

Don't forget: Total Delay = Propagation delay + Delay_jitter.

So, the worst-case delay is the largest *total* delay you can observe over 
a connection.
One might still wonder: what is it, statistically?
The answer is that to get the worst-case delay over a connection, we add up 
the longest time, or worst-case behavior, of every routing point on the 
pathway. To be clearer, we need to define the word scheduler (to be more 
scientific): a scheduler is the module that decides the queuing and 
forwarding of data at a connecting node on the pathway from source to 
destination. Therefore, we can now say that to calculate the worst-case 
delay, we just work out the calculation under the assumption that each and 
every scheduler is exhibiting its worst-case behavior. This raises the 
question of what worst-case behavior means for a scheduler. The answer is 
that we have to identify it, over our connection, using knowledge of the 
schedulers involved. A good point to mention is that many schedulers 
nowadays use WFQ (Weighted Fair Queuing) techniques, mostly round-robin 
ones. We can discuss this separately if you like, but it is just a tip for 
those who want to learn more.
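The sum-over-schedulers idea above can be sketched in a few lines of Python. 
The per-hop numbers and the function name are my own illustration, not 
something standard; I am just assuming we somehow know each scheduler's 
worst-case queuing delay along the path:

```python
# Sketch: worst-case end-to-end delay, assuming we already know the
# worst-case behavior of every scheduler (routing point) on the pathway.
# All numbers below are made up for illustration.

def worst_case_delay(propagation_ms, per_hop_worst_ms):
    """Worst case = propagation delay + sum of each scheduler's
    worst-case queuing delay along the path."""
    return propagation_ms + sum(per_hop_worst_ms)

# Three routing points, each with an assumed worst-case queuing delay (ms)
hops = [12.0, 30.0, 8.5]
print(worst_case_delay(5.0, hops))  # 55.5 ms
```

The hard part in practice is of course filling in those per-hop numbers, 
which depends on what each scheduler (WFQ, round-robin, ...) guarantees.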
Another important parameter is the average delay over the same connection, 
which in principle should be calculated over each and every traffic arrival 
pattern of every other connection in the system; each individual packet 
delay matters here. Thus the true average delay, which is an "ideal" 
metric, if we can call it that, cannot be calculated and is impossible to 
measure.
*** Here I would like to mention that this is a general problem in traffic 
analysis and QoS analysis: in many experiments you cannot have perfect 
control over everything.
So, how do we solve this problem?
The answer is that we can measure the mean or average delay over the 
packets sent on a connection, and that is what people working in traffic 
analysis call the average delay, i.e. it is the *measured* mean delay.
As for delay jitter, it is simply the difference between the worst delay 
and the smallest delay. So the per-packet delay is a variable that varies 
between two bounds; think of it like:
a1 < delay < a2
where a1 and a2 are the smallest and worst delay bounds respectively, and 
jitter = a2 - a1.
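To make the measured quantities concrete, here is a minimal Python sketch 
using made-up per-packet delay samples (the values are illustrative, not 
from any real trace):

```python
# Sketch: measured mean delay and delay jitter from per-packet delay
# samples. The delays below are made-up values in milliseconds.

delays_ms = [21.3, 24.8, 20.1, 35.6, 22.4, 28.9]

a1 = min(delays_ms)                           # smallest observed delay
a2 = max(delays_ms)                           # worst observed delay
mean_delay = sum(delays_ms) / len(delays_ms)  # the *measured* mean delay
jitter = a2 - a1                              # jitter = max - min

print(f"mean={mean_delay:.2f} ms  jitter={jitter:.2f} ms")
```

Every sample satisfies a1 <= delay <= a2, and the jitter is just the width 
of that interval.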

So, where is jitter useful?
Ans. It is mainly useful for playback applications.
For an audio or video stream in such a playback application (voice calls, 
video conferencing, etc.), the receiver chooses playback instants so that 
it can play one packet while the next is being received. Ah, well, that is 
not simple! However, if the delay jitter is bounded (as we just saw), then 
the receiver can simply delay the playing of the first packet on a 
connection by the upper jitter bound (a2), using a buffer, or what traffic 
analyzers call the elastic buffer.
So, what does this tell us about applications and demands?
Ans. It simply says that if we want good QoS for our application, we should 
know that the larger the jitter upper bound, the larger the elastic buffer 
we need.
However, things get complicated when the two parameters, buffer size and 
packet scheduling, are studied at the same time. It is good to know that 
queuing algorithms should keep track of the delay-jitter upper bound for 
all packets being forwarded, not only the first few.
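A toy simulation of the elastic-buffer idea, under assumed numbers (send 
interval, delay bounds, and arrival delays are all invented for 
illustration): if the receiver delays the first playback instant by the 
upper bound a2, no later packet misses its playback slot.

```python
# Sketch: playback receiver with an elastic buffer. Packets are sent
# every 20 ms; each one arrives after a delay somewhere in [a1, a2]
# (assumed bounds). Delaying the first playback by a2 guarantees every
# packet has arrived by its playback instant. All numbers illustrative.

send_interval = 20.0
a1, a2 = 10.0, 45.0                               # assumed delay bounds (ms)
arrival_delays = [10.0, 44.0, 17.0, 38.0, 25.0]   # samples within [a1, a2]

send_times = [i * send_interval for i in range(len(arrival_delays))]
arrivals = [s + d for s, d in zip(send_times, arrival_delays)]

# Playback instants: first packet played at send_times[0] + a2, then
# evenly spaced at the original sending interval.
playback = [send_times[0] + a2 + i * send_interval
            for i in range(len(arrival_delays))]

# An underrun would mean a packet arrived after its playback instant.
underruns = sum(arr > play for arr, play in zip(arrivals, playback))
print("underruns:", underruns)  # 0
```

If the initial offset were smaller than a2, a packet that happens to 
experience near-worst-case delay could arrive after its slot, which is 
exactly the underrun the elastic buffer is sized to prevent.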

Iyad

>TIA.
>
>Regards,
>Sudhin.
>
>****************** Sudhindra Suresh Bengeri ********************
>Graduate Research Assistant   | Home:                          |
>to Dr. George N. Rouskas      | 2110, Gorman St.,              |
>EGRC 454H, Computer Sci. dept.| Raleigh, NC - 27606            |
>NCSU, Raleigh, NC 27695       | Ph. (919) 852-1961             |
>(919)515-3655(P) / 515-7925(F)
>My web projection: http://www4.ncsu.edu/~ssbenger
>
>* Always glad to share my ignorance - I've got plenty. *