On Thu, Apr 14, 2011 at 04:01:43PM -0700, Martin Fick wrote:
> --- On Thu, 4/14/11, Gordan Bobic <gordan@bobich.net> wrote:
>> DRBD can do active-active, but you'll need a cluster FS to
>> achieve that.
> Yes, I could use a clusterfs (or more precisely a
> shared-disk cluster fs) with DRBD, but those are
> fairly scary currently and do not work reliably
> (from what I have read) with DRBD. They were not
> designed with DRBD in mind.
> I could use a distributed clusterfs without DRBD
> too, if there were one that was mature, open
> source, and prevented split-brain properly, but
> I have yet to find one (fingers crossed for Ceph
> someday).
>> However
>> - what use-case do you have where one guest will fail
>> unrecoverably on one machine but resume working on
>> another machine with the exact same FS? In what case
>> would a single guest fail without all of them failing?
> Think load balancing. Say 10 vservers, split them
> so that 5 run on each host normally. If either host
> goes down, the other one picks up the slack.
> Everything runs slower, but at least it still runs.
>> How will it work safely without the inode being marked
>> CoW?
> Because the whole filesystem is effectively CoW;
> that is what unionfs and aufs do. They allow you
> to modify the fs view without modifying the
> bottom read-only layer. The top layer simply
> stores the deltas. They are nothing but an
> FS-level (instead of file-level) CoW mechanism.
except for the fact that all unionfs-like solutions I
encountered so far do not preserve the device:inode, so
the 'new' filesystem ends up using new inode caches and
new mappings, which kind of defeats the purpose ...
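
(for reference, a minimal sketch of the kind of union setup
being discussed, assuming aufs is available and with purely
illustrative paths: a shared read-only template as the bottom
branch, a per-guest writable delta directory as the top branch)

    # top branch:    per-guest writable layer holding the deltas
    # bottom branch: shared read-only template
    mount -t aufs -o br=/vservers/guest1-rw=rw:/vservers/template=ro \
          none /vservers/guest1
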
in any setup, check with 'stat file' on two identical
files (e.g. /bin/bash): if you get exactly the same
device and inode entry, you're fine; otherwise you lose
the benefit of sharing at the filesystem layer ...
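
for example (the guest paths here are only illustrative):

    # compare device and inode numbers of the 'same' file in two guests
    stat -c 'dev=%d ino=%i  %n' /vservers/guest1/bin/bash
    stat -c 'dev=%d ino=%i  %n' /vservers/guest2/bin/bash
    # identical dev/ino pairs: the file is really shared
    # different inode numbers: the sharing got lost in the union layer
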
HTC,
Herbert
> -Martin