From: Christian Andreetta (satya_at_gentoo.org)
Date: Thu 03 Feb 2005 - 13:45:06 GMT
From: Martin List-Petersen <martin_at_list-petersen.net>
> So unionfs fits only in certain vserver installations, exactly as it is
> with vunify. Often too much of a hassle compared to what storage costs
> today.
I agree: unionfs/vunify is not for hosting. It is a good method to ease
the upgrade and management of nearly identical, security-isolated servers.
If 10 of these servers differ only in a few config files, it can be better
(administrator's choice) to have a common template (lower layer, r/o)
overridden by a few files (top layer, r/w).
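Something like this, for example (the paths are hypothetical; the 'dirs='
branch syntax is the one from the unionfs docs):

    # shared template (ro) plus a small per-vserver overlay (rw);
    # the leftmost branch has the highest precedence
    mount -t unionfs \
        -o dirs=/vservers/vs1-rw=rw:/vservers/template=ro \
        none /vservers/vs1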
What is different from vunify (immutable-unlinkable)?
IMHO, the ease of management and control: with unionfs, knowing which files
have been removed and/or changed is more immediate than a deep search for
(non)existent hard links. You pay for this with a little VFS overhead and
slightly lower performance, of course.
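Roughly (hypothetical paths again):

    # unionfs: everything the vserver changed sits in its rw branch
    find /vservers/vs1-rw -type f

    # vunify: a rough heuristic; a locally modified file is one whose
    # hard link to the template was broken, so its link count is back to 1
    find /vservers/vs1 -type f -links 1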
From: Herbert Poetzl <herbert_at_13thfloor.at>
> - filesystem benchmarks normal vs. overlay
> (probably overlay is a lot slower)
According to the unionfs docs, read-only branches are slowed by 2-3% (I
honestly didn't notice it).
Multiple levels of mixed read/write and read-only branches make access a
lot slower (from 20% to 250%; this should improve, since this is a first
release). As said above, this is not for hosting: it is for managing
multiple servers that differ in very few details, on top of a uniform base
of binaries and libraries.
Each vserver can always have network/bind/external mounted dirs that are
specific to its tasks (home, mirrors, ...).
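For instance (again a hypothetical path), a plain bind mount that stays
outside the union:

    # give vs1 its own /home, untouched by the union layers
    mount --bind /srv/vs1/home /vservers/vs1/home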
> - filesystem cache benchmarking
> (probably the overlay uses twice as much cache)
There's a good paper on this on the lufs/unionfs site. VFS cache usage is
very good. What isn't so good is the management of mixed r/w and r/o
_overlapping_ levels (e.g. ro, rw, rw, ro, ro, rw, ro, rw, ...: but that
is self-inflicted torture :-) ).
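That pathological case would look something like this (made-up paths):

    # many interleaved rw/ro branches: legal, but hard on the VFS cache
    mount -t unionfs \
        -o dirs=/b1=ro:/b2=rw:/b3=rw:/b4=ro:/b5=ro:/b6=rw \
        none /mnt/torture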
> - consistency and migration
> (what if vs1 holds a reference to file xy and
> you want to update it in the template?)
You simply edit the template file (in context 0) and, if needed, refresh
the caches with a 'mount /[vserver_id_root] -o remount'.
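For example (the paths are made up; the remount is the command above):

    # in context 0: update the shared template...
    vi /vservers/template/etc/hosts
    # ...then refresh the union seen by the vserver
    mount /vservers/vs1 -o remount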
The read-write branch will save any modified inodes and/or data blocks, if
local changes were made (i.e. in the vserver_id context), masking all
lower-precedence layers.
Most importantly, if there is only one rw layer, all modifications end up
in a single external subtree: very useful for live activity inspection.
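E.g., to watch what a vserver has touched recently (hypothetical path):

    # files modified in the last 10 minutes, straight from the rw branch
    find /vservers/vs1-rw -type f -mmin -10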
> so IMHO all this stuff needs a lot of testing, which
> I would appreciate btw ;)
Right now I'm using a host with 3 vservers: a Samba file server with a
couple hundred clients (the Samba dirs are re-exported after an NFS
import) and two DB servers (low CPU consumption, with careful NFS mounts
of the DB datafiles).
I haven't done precise benchmarks yet, but clients haven't noticed any
slowdown.
HTH,
Christian
_______________________________________________
Vserver mailing list
Vserver_at_list.linux-vserver.org
http://list.linux-vserver.org/mailman/listinfo/vserver