Re: [Vserver] Mounting /vservers/vs, prepre-start script and namespace

From: Oliver Heinz <oliver.heinz_at_schunk.net>
Date: Tue 21 Nov 2006 - 20:23:13 GMT
Message-Id: <200611212123.13456.oliver.heinz@schunk.net>

On Tuesday, 21 November 2006 at 18:27, Xavier Montagutelli wrote:
> On Tuesday 21 November 2006 10:29, Oliver Heinz wrote:
> > On Tuesday, 21 November 2006 at 09:48, Xavier Montagutelli wrote:
> > > Hello list,
[...]
> > >
> > > My goal is to have many physical servers accessing the same VG, so they
> > > can mount the vserver directories (ext3 FS on different LVs) on one
> > > host or another (but not at the same time :-). The same /etc/vservers/
> > > directory will be mounted on all hosts via the OCFS2 filesystem.
> >
> > Doesn't heartbeat do what you need here? It has multiple locking/STONITH
> > mechanisms. And from version 2.0 on it has multi-node support, AFAIK.
> > [1]
> >
> > You can use it like this: if one node fails, the filesystem is mounted on
> > the other node via heartbeat, and a modified vservers-default script that
> > looks for the corresponding /apps/init/mark is started to get the
> > vservers running (and to stop them again when the other node comes back
> > online).
>
> We already have a service running with heartbeat v2 on two nodes (in my
> experience, delicate to administer, still subject to small changes,
> difficult to diagnose when problems occur - but a great product).
> Perhaps we could ultimately use heartbeat (or Red Hat Cluster?) for
> monitoring the vservers. But this solution would only be valuable to
> protect us against a failure of the physical host (not against problems in
> the vserver: FS full, admin error, ...).

You're right, heartbeat is good for hardware failover, but that's it. We use
monit [2] plus a couple of our own scripts to monitor the services,
filesystems, etc. inside the guests.
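
In case it is useful to anybody, the mark-based start logic mentioned above
boils down to something like the following. This is just a rough sketch in
Python for readability - our real script is a modified shell initscript, the
MARK value is a placeholder, and error handling is left out; only the
apps/init/mark path is the standard util-vserver location:

#!/usr/bin/env python
# Illustrative sketch only: start every guest whose
# /etc/vservers/<name>/apps/init/mark matches the mark this node
# is responsible for.  MARK is a placeholder value.

import os
import subprocess

CONFDIR = "/etc/vservers"
MARK = "default"   # assumption: the mark this node should start

def wanted_guests(mark):
    # yield guest names whose apps/init/mark file matches `mark`
    for name in sorted(os.listdir(CONFDIR)):
        markfile = os.path.join(CONFDIR, name, "apps", "init", "mark")
        if os.path.isfile(markfile):
            if open(markfile).read().strip() == mark:
                yield name

for name in wanted_guests(MARK):
    subprocess.call(["vserver", name, "start"])   # util-vserver CLI

Stopping works the same way with "vserver <name> stop" when the other node
comes back online.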

>
> > We use it here for a pure two-node failover which operates on shared
> > storage with a non-cluster FS. I'd be interested to know why you want it
> > mounted on only one node, as you use a cluster FS which is capable of
> > concurrent multi-node access.
>
> In my configuration, the vserver chroot directory is ext3, not a
> cluster FS. OCFS2 is only used to share /etc/vservers between the physical
> hosts. I could also use OCFS2 for the chroot directory, but I don't want to
> move to a shared filesystem for this critical part: there's no *real* need
> for a shared FS here, and I prefer to stay conservative for performance and
> administration reasons.

So that's the same as we did (ext3 for the vdir) - I was just wondering what
the OCFS2 was good for, then. As the vserver configurations are pretty
static, we use rsync to keep them synchronised (OCFS2 wasn't an option for
me when the cluster was built more than a year ago).
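
For completeness, the rsync part is nothing fancy; roughly the following,
run from the active node (where "node2" is a placeholder for the peer host,
our real setup differs slightly):

rsync -a --delete /etc/vservers/ node2:/etc/vservers/

triggered from cron or by hand after configuration changes.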

> Moreover, OCFS2 lacks some things, and GFS2 / Lustre are not included in
> Linus' kernel (I don't want to add another patch on top of vserver; one
> day perhaps containers and co. will be upstream? :-)
>
> Cluster-LVM only allows sharing PVs, VGs and LVs between hosts without
> compromising LVM metadata integrity. But it doesn't guarantee that an LV
> is mounted on one single node.

Thanks for posting those details.
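
As an aside, I believe clvmd can activate an LV exclusively on one node
(something like "lvchange -aey <vg>/<lv>"), which should at least refuse
concurrent activation on the other nodes - but I haven't tried that myself.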

Bye,
Oliver

[2] http://www.tildeslash.com/monit/

>
> Perhaps some folks have another opinion / approach / experience ?

Sorry, no - only one cluster running here ;-)

>
> > Cheers,
> > Oliver
_______________________________________________
Vserver mailing list
Vserver@list.linux-vserver.org
http://list.linux-vserver.org/mailman/listinfo/vserver
