From: Herbert Poetzl (herbert_at_13thfloor.at)
Date: Thu 01 May 2003 - 22:04:32 BST
On Thu, May 01, 2003 at 09:54:03AM -0400, Michael H. Warfield wrote:
> On Fri, May 02, 2003 at 01:29:42AM +1200, Sam Vilain wrote:
> > On Tue, 29 Apr 2003 04:35, Michael H. Warfield wrote:
> > > Actually, I also found it in chbind.c as well as in the kernel
> > > patch. That's not really good, having a dependency where two
> > > numbers have to be maintained in sync like that. It might be better if
> > > chbind could determine the limit from the kernel if they aren't
> > > both dynamic. Including the value from the kernel header file is ugly,
> > > but extracting it dynamically via /proc (or an ioctl) seems like a bit
> > > of overkill. chbind could be dynamic and run until it gets an
> > > error back from the kernel indicating too many addresses...
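Something along these lines, perhaps -- add_ipv4root_addr() is only a
stand-in for whatever syscall or ioctl wrapper chbind really uses, so the
interface here is assumed, not real:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <netinet/in.h>

/* hypothetical wrapper around the kernel interface chbind uses */
extern int add_ipv4root_addr(struct in_addr addr);

static int bind_addrs(const struct in_addr *addrs, int count)
{
    int i;

    for (i = 0; i < count; i++) {
        if (add_ipv4root_addr(addrs[i]) < 0) {
            /* The kernel refused; we just found its compiled-in limit. */
            fprintf(stderr, "kernel accepted only %d of %d addresses (%s)\n",
                    i, count, strerror(errno));
            return i;
        }
    }
    return count;
}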
>
> > Really, maintaining a list of consecutive IP addresses is a misuse of the
> > algorithm. It means that, for instance, when you try to bind to a port the
> > kernel has to scan up to 16 KB (4096 addresses * 4 bytes each), which isn't
> > terribly good. This is why the default limit is so low; it keeps the
> > overhead of this O(N) loop small.
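To make the cost concrete, here is a simplified model of that per-bind scan
(this is not the actual vserver code, just an illustration):

#include <stdint.h>

#define NB_IPV4ROOT 16              /* illustrative limit */

struct ipv4root {
    uint32_t addr[NB_IPV4ROOT];     /* network-order IPv4 addresses */
    int      count;                 /* slots actually filled */
};

static int addr_allowed(const struct ipv4root *root, uint32_t addr)
{
    int i;

    for (i = 0; i < root->count; i++)   /* O(N) scan on every inet bind */
        if (root->addr[i] == addr)
            return 1;
    return 0;
}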
>
> Whoa! Who said anything about them being consecutive???
> In fact, I think in my original message I mentioned assigning 64 or
> more addresses to each vserver, chosen pseudo-randomly out of a pool
> of 4094 (/20 - 2) addresses.
>
> > Perhaps what you want could be achieved more efficiently by adding a
> > netmask to the s_context structure, which defaults to 255.255.255.255, and
> > applies only to the first IP address. It would be set via the set_ipv4root
> > system call.
>
> Wouldn't help me. Even on my production colo virtual hosting
> servers, that wouldn't help (much), since it then requires assignments on
> netmask boundaries. OK, it does reduce the linear problem to a log base 2
> problem, so 16 slots would allow for 16 masked ranges, but the ranges would
> require more complex testing. I'm not sure where the cycle-count break-even
> point would be. It would be a net loss in the "sparsely populated field"
> example I'm dealing with.
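For what it's worth, a minimal sketch of what a handful of masked ranges
could look like -- the names are made up, not the real s_context fields:

#include <stdint.h>

#define NB_MASKED_RANGES 16

struct masked_range {
    uint32_t addr;   /* network-order base address */
    uint32_t mask;   /* 0xffffffff degenerates to a single address */
};

struct ipv4root_ranges {
    struct masked_range range[NB_MASKED_RANGES];
    int count;
};

static int addr_allowed_masked(const struct ipv4root_ranges *root,
                               uint32_t addr)
{
    int i;

    /* each slot covers a whole range in one AND + compare */
    for (i = 0; i < root->count; i++)
        if ((addr & root->range[i].mask) ==
            (root->range[i].addr & root->range[i].mask))
            return 1;
    return 0;
}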
>
> > Of course it all depends on whether you care about losing a few thousand
> > clock cycles every time you create an inet socket.
>
> Point.
>
> The time loss should only be a problem when that many
> addresses are actually assigned. If you allocate space for 64 but only
> assign 4 addresses, the kernel shouldn't search all 64 slots, so
> the computational issues are only a factor for those applications
> taking advantage of the facility to that extent. OTOH... By having
> a static structure like this, we are allocating kernel-space memory
> (the additional space) which is only rarely used, for certain specific
> applications. Perhaps it should be "run time definable", much along
> the lines of MAX_FILES or such. Set a sysctl or proc variable to
> set the size.
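A sketch of that sysctl idea, assuming a hypothetical vserver_max_ipv4root
knob; the ctl_name numbering and the register_sysctl_table() plumbing are
left out here, since that part is version-specific:

#include <linux/sysctl.h>

int vserver_max_ipv4root = 16;      /* today's compile-time default */
static int ipv4root_min = 1;
static int ipv4root_max = 4096;

static struct ctl_table vserver_table[] = {
    {
        .procname     = "max_ipv4root",
        .data         = &vserver_max_ipv4root,
        .maxlen       = sizeof(int),
        .mode         = 0644,
        .proc_handler = &proc_dointvec_minmax,
        .extra1       = &ipv4root_min,
        .extra2       = &ipv4root_max,
    },
    {0}     /* table terminator */
};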
In my opinion, the best solution would be to use
a (simple) dynamic structure (like a bucket hash)
or a self-adjusting (growing) hash to keep those
addresses as well as the associated interfaces and
restrictions ...
Done efficiently, this would eliminate not only the
requirement to define a hard maximum, but also the
waste of (unused) kernel memory ... and it would
be much faster if you use many addresses ...
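Just to illustrate, a minimal userspace sketch of such a bucket hash;
none of this comes from an existing patch:

#include <stdint.h>
#include <stdlib.h>

#define ADDR_HASH_BUCKETS 64

struct addr_node {
    uint32_t          addr;     /* network-order IPv4 address */
    struct addr_node *next;
};

struct addr_set {
    struct addr_node *bucket[ADDR_HASH_BUCKETS];
};

static unsigned int addr_hash(uint32_t addr)
{
    /* cheap mix; good enough for a sketch */
    return (addr ^ (addr >> 16)) % ADDR_HASH_BUCKETS;
}

static int addr_set_add(struct addr_set *set, uint32_t addr)
{
    unsigned int h = addr_hash(addr);
    struct addr_node *n = malloc(sizeof(*n));

    if (!n)
        return -1;
    /* memory is only allocated for addresses actually assigned */
    n->addr = addr;
    n->next = set->bucket[h];
    set->bucket[h] = n;
    return 0;
}

static int addr_set_contains(const struct addr_set *set, uint32_t addr)
{
    const struct addr_node *n;

    /* O(1) on average, regardless of how many addresses a context owns */
    for (n = set->bucket[addr_hash(addr)]; n; n = n->next)
        if (n->addr == addr)
            return 1;
    return 0;
}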
If I'm not entirely wrong, Alexey has done something
similar (at least dynamic), but I'm really not sure
about that ...
best,
Herbert
>
> > --
> > Sam Vilain, sam_at_vilain.net
>
> > Real computer scientists don't write code. They occasionally tinker
> > with 'programming systems', but those are so high level that they
> > hardly count (and rarely count accurately; precision is for
> > applications.)
>
> Mike
> --
> Michael H. Warfield | (770) 985-6132 | mhw_at_WittsEnd.com
> /\/\|=mhw=|\/\/ | (678) 463-0932 | http://www.wittsend.com/mhw/
> NIC whois: MHW9 | An optimist believes we live in the best of all
> PGP Key: 0xDF1DD471 | possible worlds. A pessimist is sure of it!