On Wednesday 26 April 2006 05:44 am, Herbert Poetzl wrote:
> On Wed, Apr 26, 2006 at 07:00:21PM +1200, Sam Vilain wrote:
> > Chuck wrote:
> >
> >> We are restructuring our entire physical network around the
> >> vserver concept.
>
> >> It has proven itself in stability and performance in production, to
> >> the point that we no longer see the need for dedicated servers except
> >> in the most demanding cases (mostly our email server, which cannot be
> >> run as a guest until there is no slowdown using > 130 IP addresses).
>
> could you describe what scheme is behind those 130 IPs
> in your case? I'm trying to get an idea of what addresses
> such large-IP systems typically use ...
>
It is commercial, licensed software, so running multiple copies of it is
prohibitive. Every domain we host email for is offered a unique IP address
if they wish, so the server takes on the identity of that domain. To the
outside world, this gives the effect that the domain owns its own email
server. In this case we have a single /23 block (512 IPs) dedicated
strictly to email, of which we are using roughly a quarter at the moment.
When we fill it we will assign another block.
We also mix the additional domains that do not care about appearing to own
their own email server into 'namespace', using the single primary server
IP address.
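If and when the mail server does move into a guest, those per-domain
addresses would map onto the guest's config roughly like this; a minimal
sketch assuming the util-vserver 0.30.x config layout, a guest named
'mail', and placeholder addresses from the documentation range:

  # one numbered directory per address under the guest's config
  mkdir -p /etc/vservers/mail/interfaces/0
  echo eth0       > /etc/vservers/mail/interfaces/0/dev
  echo 192.0.2.10 > /etc/vservers/mail/interfaces/0/ip
  echo 23         > /etc/vservers/mail/interfaces/0/prefix

  mkdir -p /etc/vservers/mail/interfaces/1
  echo eth0       > /etc/vservers/mail/interfaces/1/dev
  echo 192.0.2.11 > /etc/vservers/mail/interfaces/1/ip
  echo 23         > /etc/vservers/mail/interfaces/1/prefix
  # ...and so on for each per-domain address
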
> >> In our network restructuring, we wish to use our large NFS storage
> >> system and place all the vserver guests on it, sharing those
> >> directories to be mounted on the proper dual-Opteron front-end
> >> machine as /vservers.
>
> >> I am seriously thinking of also making /etc/vservers an NFS mount, so
> >> that each host's configuration and guests live in a particular area on
> >> the NFS server, making switching machines a breeze if needed.
>
> >> Does anyone see a problem with this idea? We will be using dual GbE
> >> NICs from each machine into this NFS system on a private network to
> >> handle large amounts of data flow. Public IP space will still use
> >> 100Mb NICs.
>
> that is basically what Lycos is doing, and together with
> them we implemented the xid tagging over NFS (requires
> a patched filer though), which seems to work reasonably
> well ...
>
I'm sure their system load is far higher than what we experience, so if it
is fine for them it will be fine for us too. I assume you have the patch
available, along with instructions on what it gets applied to?
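For the record, the mount layout we have in mind would look roughly like
this in /etc/fstab on each front end; a sketch only, assuming the filer is
reachable as 'filer-priv' on the private GigE network and exports
'/export/vservers' and '/export/vsconfig' (all names are placeholders):

  # guest filesystems and per-host configuration, both served by the filer
  filer-priv:/export/vservers   /vservers      nfs  rw,hard,intr,tcp  0 0
  filer-priv:/export/vsconfig   /etc/vservers  nfs  rw,hard,intr,tcp  0 0
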
> >> If this can work efficiently (most of our guests are not disk-I/O
> >> bound; those with ultra-heavy disk I/O will live on each front-end
> >> machine), we can consolidate more than 100 machines into 2 front-end
> >> machines and one SAN system. This would free enough rack space that,
> >> if we don't need any dedicated machines in the future, we could easily
> >> add more than 1500 servers in a host/guest config in the same space the
> >> 100 took up. It would also hugely simplify backups and drop our
> >> electric bill by half or more.
>
> yes, it just requires really careful tuning, otherwise
> NFS will be the bottleneck
>
The NFS tuning is the only other thing I have to figure out for this. I am
assuming it will need large read/write buffers rather than the tiny ones it
uses by default.
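As a starting point for that tuning, and building on the fstab sketch
above, something like the following is the usual first step; the exact
values are assumptions to be benchmarked against the filer, not taken as
given:

  # larger transfer sizes over the private GigE link; tune by measurement
  filer-priv:/export/vservers  /vservers  nfs  rw,hard,intr,tcp,rsize=32768,wsize=32768  0 0

  # on the NFS server side, raising the nfsd thread count often helps too
  echo 64 > /proc/fs/nfsd/threads
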
> > Nice idea, certainly NFS is right for /etc/vservers, but consider using
> > a network block device, like iSCSI or ATA over Ethernet for the
> > filesystems used by vservers themselves. You'll save yourself a lot of
> > headaches and the thing will probably run a *lot* faster.
>
> this is a viable alternative too; at least iSCSI and AoE
> have already been tested with Linux-VServer, so it should work
>
The only problem is that, after quickly scanning some documents on AoE and
vblade, I don't know if it would live well on the same machine as the NFS
side of the server. The storage array we have is huge and is already set up
as an NFS server, serving about 70 companies. However, the total array is
not completely partitioned, so we can add volumes via LVM where necessary.
I suppose I could just carve out an additional section for vblade/AoE
exports once I understand it. The total array is 50TB, of which we have
only assigned 15TB at the moment; the rest is unpartitioned, begging to be
used. The other catch is that the AoE/vblade setup must work well on the
amd64 arch.
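If we do carve out a slice for AoE, my understanding of the basic flow is
roughly this; a sketch only, assuming an existing volume group 'vg0', a
spare GigE port eth2 on the array, and the stock vblade/aoe tools (the
shelf and slot numbers are arbitrary):

  # on the storage array: create a volume and export it as AoE shelf 0, slot 1
  lvcreate -L 200G -n vsguest1 vg0
  vblade 0 1 eth2 /dev/vg0/vsguest1 &

  # on the front end: load the initiator; the device appears as e0.1
  modprobe aoe
  mkfs.ext3 /dev/etherd/e0.1
  mount /dev/etherd/e0.1 /vservers/guest1
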
> > Unification would be impractical on top of all of this, but this is
> > probably not a huge problem.
>
> why would that be so? if it is the same block device, the
> filesystem on top can use unification as well, though not
> across different filesystems ...
>
> HTH,
> Herbert
>
> > Sam.
> > _______________________________________________
> > Vserver mailing list
> > Vserver@list.linux-vserver.org
> > http://list.linux-vserver.org/mailman/listinfo/vserver
>
--
Chuck

"...and the hordes of M$*ft users descended upon me in their anger, and
asked 'Why do you not get the viruses or the BlueScreensOfDeath or insecure
system troubles and slowness or pay through the nose for an OS as *we*
do?!!', and I answered...'I use Linux'."
The Book of John, chapter 1, page 1, and end of book