[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [ns] ns scalability



>
>Hi NS users,
>
>I have searched through the NS mailing-list archive and have
>found a similar question posted a year ago, but I failed to
>find an answer for that question. So I re-post this question
>again, hopefully someone can give me a hint.
>
>http://www.isi.edu/nsnam/archive/ns-users/webarch/2000/msg03604.html
>
>http://www.isi.edu/nsnam/archive/ns-users/webarch/2000/msg03609.html
>
>As I understand, because NS implements a packet-level
>simulation, the time it takes to run the simulation, and the
>computer power required is proportional to the number of
>packets generated. If I want to simulate a big network (many
>links with large capacity), does it pose a scalability
>problem for NS ?

You're correct that ns is packet-level, and your analysis is along the
right lines.

(Some caveats: ns also includes optional abstraction approaches that
can speed things up, and we're working on an analytic-based
pre-filtering tool that is much faster but doesn't guarantee
accuracy.)

BUT I'd be careful extrapolating from a short (in both run time and
virtual time) scenario, since you may be multiplying startup overhead
that should really be treated as a fixed cost.

I know that we've run some very long (virtual time) simulations in a
reasonable amount of real time.  I've asked several people for
specific data points.

The other thing to be VERY careful about in large simulations is
memory.  In every case I'm aware of with large simulations, we've run
out of memory before we've run out of CPU speed.  (And I've heard
similar things from people using other simulators, including
parallelized ones.)

(Your best bet though is to try your experiment and see [and let us
know!].  It should be reasonable to do a simple version of your
scenario.)
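As a starting point for such an experiment, something like the
following minimal ns OTcl script is the usual shape of a simple
scenario: two nodes, one link, and one TCP flow.  (The node count,
link bandwidth/delay, and run length here are placeholder values --
scale them toward your real topology and watch how run time and
memory grow.)

```tcl
# Minimal ns-2 scenario sketch (illustrative parameters only).
set ns [new Simulator]

# Trace file, useful for sanity-checking packet counts afterward.
set f [open out.tr w]
$ns trace-all $f

# Two nodes joined by a 10 Mb, 10 ms drop-tail link.
set n0 [$ns node]
set n1 [$ns node]
$ns duplex-link $n0 $n1 10Mb 10ms DropTail

# One TCP flow with an FTP source driving it.
set tcp [new Agent/TCP]
set sink [new Agent/TCPSink]
$ns attach-agent $n0 $tcp
$ns attach-agent $n1 $sink
$ns connect $tcp $sink

set ftp [new Application/FTP]
$ftp attach-agent $tcp

$ns at 0.0 "$ftp start"
$ns at 100.0 "exit 0"
$ns run
```

Timing runs of this script at increasing virtual durations (and node
counts) should show whether your projected scaling is dominated by
per-packet cost or by the fixed startup overhead mentioned above.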

   -John Heidemann