--- On Thu, 4/14/11, Gordan Bobic <gordan@bobich.net> wrote:
> > Actually, I can think of a new approach to unification that
> > might have some benefits which the current approach does not
> > have. Currently, I use vservers with drbd, and each vserver
> > has its own partition (so that each vserver can fail over
> > independently). The separate partitions mean that I cannot
> > use unification with my vservers.
>
> Why do you need separate partitions? Why not have a single
> partition, mirrored between the hosts? All guests are in a
> separate /vserver subdirectory anyway. What are you gaining
> from having a separate partition per guest?
I think you missed the "independently" part. :) With
drbd, a single partition cannot be mounted on both
hosts at the same time, so with one shared partition
all of the guests would have to fail over together
rather than independently.
> > Ideally, it would be nice to have a COW-like hard link
> > mechanism that is able to hardlink to files in other
> > partitions/fses. That would also help in the case of the
> > cross-snapshotting idea mentioned in the btrfs threads
> > you linked to. So, how could cross filesystem COW
> > hardlinks be implemented?
>
> I don't think that's implementable with the current file
> system model. It also wouldn't help you since you would end
> up with a hard-link pointing to a partition that isn't
> necessarily the one you have associated with the guest that
> is failing over, so you'd end up failing over a guest
> without all its files being available.
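Just to illustrate that point, here is a rough sketch
(the paths are made up, purely for illustration) of the
EXDEV error you get if you try a plain hard link across
partitions:

  import errno
  import os

  # hypothetical paths on two separate partitions; adjust to taste
  src = "/vservers-a/guest1/usr/bin/perl"
  dst = "/vservers-b/guest2/usr/bin/perl"

  try:
      os.link(src, dst)  # hard links only work within one filesystem
  except OSError as e:
      if e.errno == errno.EXDEV:
          print("EXDEV: cannot hard-link across filesystems")
      else:
          raise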
...
> Same inode on a different file system wouldn't mmap to the
> same place in memory. But I'm still not sure what you are
> gaining by splitting up the file systems.
I misdescribed it above; my solution does not actually
use a real cross-filesystem hard link. It uses
stacking to simulate one. The shared files are on the
same partition, so they should be the same inode.
Another benefit of this approach over the current
vserver solution is that it should work with any FS.
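For what it's worth, a quick sketch (again with made-up
paths, assuming a shared tree plus a per-guest tree that
links into it) of how one can check that two of the
shared copies really are the same inode on the same
device:

  import os

  def same_inode(path_a, path_b):
      # True if both paths refer to the same inode on the same device
      a, b = os.stat(path_a), os.stat(path_b)
      return (a.st_dev, a.st_ino) == (b.st_dev, b.st_ino)

  # hypothetical layout, adjust to match your setup
  print(same_inode("/vservers/shared/usr/bin/perl",
                   "/vservers/guest1/usr/bin/perl"))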
-Martin