Re: [Vserver] /vservers as an nfs mount?

From: Chuck <chuck_at_sbbsnet.net>
Date: Wed 26 Apr 2006 - 17:41:48 BST
Message-Id: <200604261241.48858.chuck@sbbsnet.net>

On Wednesday 26 April 2006 11:51 am, Herbert Poetzl wrote:
> On Wed, Apr 26, 2006 at 06:44:04AM -0400, Chuck wrote:
> > On Wednesday 26 April 2006 05:44 am, Herbert Poetzl wrote:
> > > On Wed, Apr 26, 2006 at 07:00:21PM +1200, Sam Vilain wrote:
> > > > Chuck wrote:
> > > >
> > > >> we are completely restructuring our entire physical network around
> > > >> the vserver concept.
> > >
> > > >> it has proven itself in stability and performance in production to
> > > >> the point we no longer see the need for dedicated servers except in
> > > >> the most demanding instances (mostly our email server which cannot
> > > >> be run as a guest until there is no slowdown using > 130 ip
> > > >> addresses).
> > > could you describe what scheme is behind those 130 ips
> > > in your case? I'm trying to get an idea what addresses
> > > such large-ip systems typically use ...
> >
> > it is commercial, licensed software, so running multiple instances of
> > it would be cost-prohibitive. every domain we host for email is offered
> > a unique ip address if they wish, so the server takes on the identity
> > of that domain. this gives the effect that the domain owns its own
> > email server to the outside world. in this case we have a single /23
> > block of (512) ips dedicated strictly to email, of which we are using
> > roughly 1/4 at this moment. when we fill it we will assign another block.
>
> okay, so basically a permission scheme which would grant
> the entire block would be sufficient for your purpose, right?
> (or at most 3 blocks or so :)
>

yes, as long as it doesn't slow down any more than a dedicated server would...
whenever we use large numbers of ip addresses they are always within given
network blocks. same with our web servers: all of the addresses are within the
same block, generally a /24 or a /23, and we have never used anything larger
than a /23 as a single block. currently all email is xxx.xxx.34.xxx attached
to the 35 net in a /23, and our web is the 39 net for one server and the 40
net for another, each of those being a /24 but easily joined into a /23 in a
guest situation.

we never mix networks within any given server unless it is simply a utility
server, which may then have 4 or 5 ip addresses total and only serves our
local lan. this ip organization is required by our vlan system of dedicated
switch ports for certain net blocks. putting all of this on a vserver host was
only made possible by iproute2 tables and rule sets, since our host had to
have 5 nics installed to cover it all.
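
for reference, the rough shape of that iproute2 setup is below. this is only a
minimal sketch; the interface names, table names and example addresses are
made up for illustration, not our real config.

  # /etc/iproute2/rt_tables gets one named table per uplink, e.g.
  #   100  mail23    # the /23 used for email
  #   101  web24a    # one of the /24 web blocks

  # each table gets its own default route out the matching nic
  ip route add default via 192.0.2.1 dev eth1 table mail23
  ip route add default via 198.51.100.1 dev eth2 table web24a

  # rule sets: traffic sourced from a block uses that block's table
  ip rule add from 192.0.2.0/23 table mail23
  ip rule add from 198.51.100.0/24 table web24a
  ip route flush cache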

> > we also mix additional domains that do not care about appearing to
> > own their own email server into 'namespace' using the single primary
> > server ip address.
> >
> > > >> in our network restructuring, we wish to use our large nfs storage
> > > >> system and place all the vserver guests on it, sharing those
> > > >> directories to be mounted on the proper dual-opteron front-end
> > > >> machine as /vservers.
> > >
> > > >> i am seriously thinking of also making /etc/vservers an nfs mount so
> > > >> that each host configuration and guests live in a particular area on
> > > >> the nfs to make switching machines a breeze if so needed.
> > >
> > > >> does anyone see a problem with this idea? we will be using dual GB
> > > >> nics into this nfs system in a pvtnet from each machine to facilitate
> > > >> large amounts of data flow. public ip space will still use 100mb
> > > >> nics.
> > >
> > > that is basically what lycos is doing, and together with
> > > them we implemented the xid tagging over nfs (requires
> > > a patched filer though), which seems to work reasonably
> > > fine ...
> >
> > i'm sure their system load is far more than what we experience, so if
> > it is fine for them it will be fine for us too. i assume you have the
> > patch available, along with instructions on what it gets applied to?
>
> it was included back then, so it is now part of the kernel
> patch (i.e. recent devel and stable kernels already have
> everything required, for server and client)
>

excellent :)
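
for the archives, the fstab side of what we are planning looks roughly like
the lines below. the server name and export paths are made up, and whether an
extra mount option is needed for the xid tagging presumably depends on the
kernel/patch version, so check the patch documentation rather than taking this
as gospel.

  # /etc/fstab on a front-end host (hypothetical server/export names)
  # guest filesystems and per-host config both come off the filer
  filer-pvt:/export/vservers      /vservers      nfs  rw,hard,intr,proto=tcp  0 0
  filer-pvt:/export/etc-vservers  /etc/vservers  nfs  rw,hard,intr,proto=tcp  0 0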

> > > >> if this can work efficiently (most of our guests are not disk i/o
> > > >> bound.. those with ultra heavy disk i/o will live on each front end
> > > >> machine), we can consolidate more than 100 machines into 2 front end
> > > >> machines and one SAN system. This would free enough rack space that
> > > >> if we don't need any dedicated machines in the future we could
> > > >> easily add more than 1500 servers in
> > > >> host/guest config in the same space 100 took up. it would also hugely
> > > >> simplify backups and drop our electric bill in half or more.
> > >
> > > yes, just requires really sensitive tuning, otherwise
> > > the nfs will be the bottleneck
> >
> > that's the only other thing i have to figure out: the nfs tuning for
> > this. i am assuming it will require large buffers rather than the tiny
> > ones it uses by default.
>
> tcp and large read/write buffers are a good start :)

cool.. thanks!
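
in case it helps anyone else, here is roughly what i take "tcp and large
read/write buffers" to mean in practice. the numbers are only starting guesses
to be tuned, not recommendations.

  # client side: tcp plus bigger rsize/wsize (tune for your workload)
  mount -o rw,hard,intr,proto=tcp,rsize=32768,wsize=32768 \
      filer-pvt:/export/vservers /vservers

  # both sides: raise the socket buffer ceilings the nfs tcp
  # connections are allowed to grow into
  sysctl -w net.core.rmem_max=262144
  sysctl -w net.core.wmem_max=262144

  # server side: more nfsd threads than the default 8
  echo 64 > /proc/fs/nfsd/threads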

>
> best,
> Herbert
>
> > > > Nice idea, certainly NFS is right for /etc/vservers, but consider using
> > > > a network block device, like iSCSI or ATA over Ethernet for the
> > > > filesystems used by vservers themselves. You'll save yourself a lot of
> > > > headaches and the thing will probably run a *lot* faster.
> > >
> > > this is a viable alternative too; at least iSCSI and AoE
> > > have already been tested with Linux-VServer, so it should work
> >
> > the only problem is that, after quickly scanning some documents on aoe
> > and vblade, i don't know whether it would live well in the same machine
> > as the nfs side of the server. this storage array we have is huge and
> > is already set up as an nfs server, serving about 70 companies..
> > however, the total array is not completely partitioned, so we can add
> > via lvm where necessary. i suppose i could just partition an additional
> > section for vblades or aoe once i understand it.. the total array is
> > 50TB, of which we have only assigned 15 at the moment; the rest is
> > unpartitioned, begging to be used. the other catch is that the
> > aoe/vblade setups must work well on the amd64 arch..
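
if we do carve out a section for aoe, my rough understanding of the vblade
side is something like the sketch below. the volume group, sizes and interface
names are made up, and i still need to verify it all behaves on amd64.

  # on the storage box: carve a volume out of the unused space and
  # export it as aoe shelf 0, slot 1 over the private gig-e nic
  lvcreate -L 100G -n guest1 vg_array
  vbladed 0 1 eth2 /dev/vg_array/guest1

  # on the front-end host (aoe module + aoetools)
  modprobe aoe
  aoe-discover
  aoe-stat                      # should now list e0.1
  mkfs.ext3 /dev/etherd/e0.1
  mount /dev/etherd/e0.1 /vservers/guest1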
> >
> > > > Unification would be impractical on top of all of this, but this is
> > > > probably not a huge problem.
> > >
> > > why would that be so? if it is the same block device, the
> > > filesystem on-top can as well use unification, not across
> > > different filesystems though ...
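
that same-filesystem point is easy to check on a running host, since unified
files are just hard links sharing an inode. the guest names below are
hypothetical.

  # two guests on the same filesystem: a unified file shows the same
  # inode number and a link count > 1
  stat -c '%i %h %n' /vservers/guest1/bin/bash /vservers/guest2/bin/bash

  # across different filesystems this cannot work, since hard links
  # cannot cross filesystem boundaries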
> >
> >
> > >
> > > HTH,
> > > Herbert
> > >
> > > > Sam.
> > >
> >

-- 
Chuck
"...and the hordes of M$*ft users descended upon me in their anger,
and asked 'Why do you not get the viruses or the BlueScreensOfDeath
or insecure system troubles and slowness or pay through the nose 
for an OS as *we* do?!!', and I answered...'I use Linux'. "
The Book of John, chapter 1, page 1, and end of book
_______________________________________________
Vserver mailing list
Vserver@list.linux-vserver.org
http://list.linux-vserver.org/mailman/listinfo/vserver
Received on Wed Apr 26 17:42:22 2006