Re: [vserver] hard scheduler, was Re: [vserver] Disk limits problem?

From: Corey Wright <undefined_at_pobox.com>
Date: Fri 11 Sep 2009 - 18:56:29 BST
Message-Id: <20090911125629.4ce1f870.undefined@pobox.com>

On Fri, 11 Sep 2009 18:45:26 +0200
ADNET Ghislain <gadnet@aqueos.com> wrote:

>
> > i ask because i'm using 2.6.27 with cgroup cpu scheduling [1] and have
> > found it comparable to the description of hard limit + idle time [2]
> > (as i never got it to work).
> >
> > [1] http://linux-vserver.org/util-vserver:Cgroups
> > [2] http://linux-vserver.org/CPU_Scheduler
> >
> the hard scheduler lets you put hard limits on a guest. with cgroups it
> seems to me you can only balance the load. IE if one month 9 out of 10
> guests do nothing, the 10th will have 100% of the cpu, and as the 9 other
> guests start to do things its cpu share will drop to 1/10. If all the
> guests are yours then all is good but..

correct, cgroups cpu scheduling guarantees a lower bound (e.g. this guest
will always be guaranteed a minimum of 10% of the cpu), much like hard
limit + idle time, whereas a hard limit alone creates an upper bound (you
are only ever allowed a maximum of 10% of the cpu).
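(to make the contrast concrete, a minimal sketch in python; the numbers,
10 guests with equal cpu.shares and a 1-in-10 token-bucket setting, are
hypothetical and not taken from any real configuration:

  guests = 10
  shares = [1024] * guests             # cgroup cpu.shares per guest, all equal

  # cgroup cpu shares: a lower bound; each guest is guaranteed at least its
  # proportional slice when everyone is busy, but may use more when others
  # are idle.
  lower_bound = shares[0] / sum(shares)    # 0.10 -> at least 10% of the cpu

  # hard limit alone: an upper bound; e.g. a token bucket refilled with 1
  # token every 10 ticks caps a guest at 10% no matter how idle the rest of
  # the machine is.
  fill_rate, interval = 1, 10              # hypothetical token-bucket settings
  upper_bound = fill_rate / interval       # at most 10% of the cpu

  print("cgroups:    guaranteed >= %.0f%%, may burst to 100%%" % (lower_bound * 100))
  print("hard limit: capped at  <= %.0f%%, even when the host is idle" % (upper_bound * 100))

so with cgroups the 10% is a floor, and with a hard limit it's a ceiling.)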

my problem isn't in understanding the scheduling algorithms, but in knowing
the real-world requirements for them, as i use vservers in a very limited
way and with limited real-world experience.

> This could bring issues when you resell part of this cpu time, as
> customers will have wildly different experiences between 10% and 100% of
> the cpu without knowing why. With a hard limit you could give them a
> hard limit of 15% cpu so that they gain a little but cannot use 100% even
> if the whole server is idle. Limiting the "but yesterday it worked fine
> and i did not change anything and we have the same amount of visitors
> !!!" effect on the hotline.

ah, i guess as a detail-oriented engineer, one who reads the fine print, i
always try to know what my minimum quality of service is (or at least
should be), and i'm thankful when it's above that and not disappointed when
it's at that.

but i can now imagine provisioning problems where a customer has two guests
on two different vservers with the same QoS agreement, and one gets more
time than the other because the other guests on vserver1 are often idle
while the other guests on vserver2 are always busy. of course i would just
be content that i'm getting more than i paid for and find some way to load
balance between the two (so if there are any dependencies between the
vservers, the slower vserver doesn't cause the faster vserver to sit idle
waiting on it).

my set-up is non-commercial/personal, so i've never encountered those
"customer" problems.

> the other possibility, which is neither cpu nor planet friendly, is to
> run one guest with cpuburn on it so that you always consume the idle time
> of the cgroups, forcing the limit as a side effect. Of course the hard
> scheduler would be the preferred way ;)

yes, a hack to implement hard limits with cgroups is to run cpuburn in
every guest so that there is no idleness to be shared (but technically
cpuburn is a little overkill, because it's not just meant to busy the cpu,
but to utilize it in the most heat-producing way, exercising functions of
the cpu known to generate the most heat; like the difference between
someone reading a book and someone running up and down a flight of stairs:
both consume their time, but one generates a lot more heat).
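(a plain spin loop is all the hack actually needs; a minimal sketch in
python, assuming you only want to soak up a guest's idle share and don't
care about heat output the way cpuburn does:

  import multiprocessing

  def spin():
      # consume cpu time doing nothing useful
      while True:
          pass

  if __name__ == "__main__":
      # one spinner per cpu so no core in the guest is ever idle
      procs = [multiprocessing.Process(target=spin)
               for _ in range(multiprocessing.cpu_count())]
      for p in procs:
          p.start()
      for p in procs:
          p.join()    # never returns; kill the script to stop burning

running that inside each guest keeps its cgroup permanently busy, so the
proportional shares effectively become hard limits.)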

> Most of the time vserver stuff is like ethernet: beautifully simple
> and efficient. cgroups looks like token ring, you know, not that strange
> "just send a packet and pray for no collision" thing. In the end
> simplicity wins the day, but the kernel is such a moving target i think
> we will always have trouble getting the funding to allow Herbert and
> Daniel to spend more time on it. I think until a good-sized sponsor
> comes up it will be hard to have the full feature set following the
> incredible change rate the kernel has in this particular area.

i might be misinterpreting your analogy, but i see vserver's token bucket
scheduler more like token ring and the kernel scheduler more like ethernet
(with the cgroup scheduler somewhere in between the two in guaranteed
behavior), and the kernel scheduler has won out due to being "good enough"
for the average case (just like ethernet degraded horribly under high
loads, but had faster speeds & higher throughput than token ring under
"normal" use, so it won out).

thanks for the real-world hard-limit examples.

corey

-- 
undefined@pobox.com
Received on Fri Sep 11 18:56:50 2009