
From: Matthew Nuzum (matt_at_followers.net)
Date: Mon 10 Nov 2003 - 01:33:10 GMT


> > a) I/O bound applications that only slow down when migrated to a
> > different node on the cluster, or
> > b) Applications that are incompatible with Mosix for various reasons
> > (threaded, shmem, device dependent etc.) such as Database software,
> > Apache and other server processes.
>
> a)
> true, I/O-bound processes are not good candidates for migration, but not
> everyone has that kind of process. And even if they do... Gbit networking
> is cheap, and is certainly faster than most hard drives. So, fill a
> server with several hard drives, run RAID, and export it over the
> network, and you will get better throughput than you would from a
> single drive, and probably even from a two-disk mirror/stripe.
>

I don't think this answer would be satisfactory to the original poster.
Luís has already mentioned that he is uncomfortable with the cost of a SCSI
disk subsystem. I don't think I've seen an IDE server system (even RAID)
that can saturate a gigabit connection, and even if it could, the cost of
that server and a GbE switch would certainly make him long for a simple
SCSI drive.
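
Some rough back-of-envelope numbers (circa-2003 hardware, so treat these as
estimates rather than measurements): a gigabit link carries at most about
125 MB/s, while a single 7200 RPM IDE drive sustains somewhere around
40-50 MB/s. You would need three or four drives striped together just to
approach wire speed, and on a standard 32-bit/33 MHz PCI bus (about
133 MB/s, shared between the NIC and the disk controller) the bus becomes
the bottleneck before the network does.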

> b)
> one day those processes might be migratable.
>

I look forward to that day, as it will greatly enhance the usefulness of the
OpenMosix system. I have to say, I'm thoroughly excited by the power and
ease of use the Mosix system offers. We got a 5-node cluster set up in under
75 minutes. That's with configuring DNS, DHCP and networking from scratch.

The problem is, we had to contrive ways to test the system as all of the
useful applications we could come up with were non-migratable or performed
horribly because of the increased I/O latency.

The Linux Virtual Server project seems to me to be a much better system for
handling traffic spikes against the kinds of applications people run inside
vservers. I'll add more on that below your ThinkGeek example.

>
> > I think the audience of people who can legitimately benefit from
> > a combined vserver/mosix installation is rather small.
>
> I'm not so sure; I think that once the technology is there, a lot
> of people will want it. They just don't know it yet. Besides, even
> if it is a small number of people, why not allow them to have this
> functionality?
>

Resources are finite. There are several to-do items that would benefit the
linux-vserver community at large, even those who want OpenMosix support.
As a web programmer, I can't actively help with coding on the
linux-vserver project or tell the core developers what to do with their
time, but as a user of the linux-vserver code, I long for tighter
integration with stock Red Hat kernels, easier installation and management,
and progress towards the 2.6 kernels.

I think that focusing development on the needs of mainstream users will
make this project more useful and therefore attract more programming
talent, which can then be put towards adding support for OpenMosix and such.

>
> > For those that have heavily loaded servers, why not just put fewer
> > vservers on a server? If 6 is too many, just do 4 or 5. If you
> > have extra boxes with spare cpu cycles, put the vserver there.
>
> if you can NOT tolerate downtime, then it is currently hard to
> migrate the vservers.
> I imagine a cluster of machines which only power on when needed.
> This saves electricity and heat. The actual data is located on a
> server, because disks are slower than the network. Thus, the machines
> boot fast, and then you can move the "vservers" to that new physical
> machine. I can imagine this is useful for hosting providers, because
> they can automatically power up that extra 8-way Opteron with 16G RAM
> the hosting provider has for backup purposes. They have 2. One runs the
> vservers for customers, and the other is a spare, used if the first
> fails, or is too small. The spare is normally shut down to save
> electricity and heat. (it's located in California ;-P)
>
> One of their customers is ThinkGeek. Then suddenly ThinkGeek is
> slashdotted, and normally their vserver would crawl to a halt,
> because they ordered a vserver with these limits: 10% CPU, 1G memory.
>
> However, the hosting provider has a special option for their customers:
> they can choose a disaster package that allows the customer's vservers
> to suddenly use A LOT MORE COMPUTER RESOURCES. So, the hosting provider
> powers on the extra spare 8-way Opteron, migrates the ThinkGeek vserver
> to that machine, running ONLY that vserver, and gives it all the CPU
> and memory it wants.
>
> The result is that ThinkGeek captures all those extra 324723489273434
> orders, rather than dropping them because the vserver is not powerful
> enough.

This scenario cannot benefit from OpenMosix. A: no large-scale database is
supported on OpenMosix; B: no large-scale web server is supported by
OpenMosix.

All of these tasks are accomplished through the judicious use of load
balancers, layer 3/4/5 switches, SSL accelerators and clever application
design.

If someone needs a set-up to handle a load like this, they use LVS or
similar.
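
To make that concrete, here is a minimal sketch of an LVS director set up
with ipvsadm (the addresses, port and scheduler are placeholders, not a
recommendation):

  # define a virtual HTTP service and two real servers behind it
  ipvsadm -A -t 192.0.2.10:80 -s wlc
  ipvsadm -a -t 192.0.2.10:80 -r 10.0.0.11:80 -g
  ipvsadm -a -t 192.0.2.10:80 -r 10.0.0.12:80 -g

Absorbing a traffic spike is then just another "ipvsadm -a" line pointing
at a freshly booted real server; no process migration is involved at all.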

>
> All this happens without anyone noticing anything, and while the machine
> is running. No downtime.
>
>
> > Honestly, if your mailserver setup is too slow, I'm certain you can
> > get better performance by switching from IDE disks to SCSI. In my
> > recent tests, I/O tasks on IDE drives kept the CPU at about 53%
> > utilization. Same tests on SCSI disks used only 14% utilization
> > with the I/O processes taking significantly less time to complete
> > on the SCSI. That was a system with a single IDE drive compared
> > to the same system with a single SCSI drive.
>
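
(For what it's worth, a crude way to repeat that kind of comparison is to
sample CPU usage while pushing a large sequential write at the disk; the
mount point and sizes below are only placeholders:)

  vmstat 5 > cpu-samples.txt &
  time dd if=/dev/zero of=/mnt/testdisk/bigfile bs=1024k count=2048
  sync
  kill %1

The CPU columns from vmstat during the run show what the I/O costs in
processor time, and the elapsed time gives the throughput.
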
> Just because this situation might not use vserver/OpenMosix does
> not mean other situations cannot use it.
>
>

I am not against OpenMosix support; I just want to point out that few will
benefit from such an addition.

Matthew Nuzum | ISPs: Make $200 - $5,000 per referral by
www.followers.net | recommending Elite CMS to your customers!
matt_at_followers.net | http://www.followers.net/isp

_______________________________________________
Vserver mailing list
Vserver_at_list.linux-vserver.org
http://list.linux-vserver.org/mailman/listinfo/vserver

