From: Sam Vilain (sam_at_vilain.net)
Date: Thu 01 May 2003 - 14:29:42 BST
On Tue, 29 Apr 2003 04:35, Michael H. Warfield wrote:
> Actually, I also found it in chbind.c as well as in the kernel
> patch. That's not real good, having a dependency where two
> numbers have to be maintained in sync like that. Might be better if
> chbind could determine what the limit is from the kernel if they aren't
> both dynamic. Including the value from the kernel header file is ugly
> but extracting it dynamically via /proc (or an ioctl) seems like a bit
> of overkill. Could have chbind be dynamic and run until it gets an
> error back from the kernel indicating too many addresses...
Really, maintaining a list of consecutive IP addresses is a misuse of the
algorithm. It means that, for instance, when you try to bind to a port, the
kernel has to scan up to 16 kB (4096 addresses * 4 bytes each), which isn't
terribly good. This is why the default limit is so low; it keeps the overhead
of this O(N) loop small.
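To make the cost concrete, here is a rough userspace sketch (not the actual
kernel patch code; the structure and names are made up for illustration) of
the kind of per-address scan that bind() ends up doing when the context holds
a long address list:

#include <stdint.h>

/* Hypothetical per-context address list; the 4096 figure mirrors the
 * limit discussed above rather than the real NB_IPV4ROOT value. */
struct ipv4_list {
        unsigned int count;      /* number of addresses in use */
        uint32_t addr[4096];     /* 4096 * 4 bytes = 16 kB worst case */
};

/* Returns 1 if 'want' is one of the context's addresses, else 0.
 * In the worst case every entry is compared -- the O(N) loop. */
static int addr_in_list(const struct ipv4_list *ctx, uint32_t want)
{
        unsigned int i;

        for (i = 0; i < ctx->count; i++)
                if (ctx->addr[i] == want)
                        return 1;
        return 0;
}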
Perhaps what you want could be achieved more efficiently by adding a
netmask field to the s_context structure, defaulting to 255.255.255.255 and
applying only to the first IP address. It would be set via the set_ipv4root
system call.
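As a hedged sketch of what that would buy: with a single address plus mask
on the context (field and function names below are illustrative, not the
real s_context layout or set_ipv4root interface), the membership test
collapses to one masked comparison instead of a per-address scan:

#include <stdint.h>

/* Mask defaults to 0xffffffff (255.255.255.255), so a context with a
 * single plain address behaves exactly as it does today. */
struct ctx_ipv4root {
        uint32_t addr;   /* first IPv4 address of the context */
        uint32_t mask;   /* netmask applied to that address */
};

/* One O(1) comparison replaces the O(N) loop above. */
static int addr_in_range(const struct ctx_ipv4root *root, uint32_t want)
{
        return (want & root->mask) == (root->addr & root->mask);
}

A chbind-style tool would then pass the mask down with set_ipv4root instead
of enumerating thousands of consecutive addresses.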
Of course it all depends on whether you care about losing a few thousand
clock cycles every time you create an inet socket.
-- Sam Vilain, sam_at_vilain.net

Real computer scientists don't write code. They occasionally tinker with
'programming systems', but those are so high level that they hardly count
(and rarely count accurately; precision is for applications.)