
From: Sam Vilain (sam_at_vilain.net)
Date: Fri 22 Oct 2004 - 01:19:28 BST

Gregory (Grisha) Trubetskoy wrote:
> vsched takes the following arguments:
> --fill-rate
> The number of tokens that will be placed in the bucket.
> --interval
> How often (the above specified) number of tokens will be placed.
> This is in jiffies. Through some googling I've found references
> that a jiffy is about 10ms, but it seems to me it's less than
> that. Not sure if the CPU speed has bearing on it. (Anyone know?)

The important factor is the ratio:

     fill-rate
     --------- * 100 = % CPU allocation
     interval

Note that this is the proportion of a *single* CPU in the system.
So, if you have four CPUs and you want one context to get an average of
one whole CPU to itself, then you'd set fill-rate to 1 and interval to 4.
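
For example, the 30% allocation that comes up below works out as:

     --fill-rate=3 --interval=10  =>  3/10 * 100 = 30% of the CPU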

It is advantageous to the smooth operation of the algorithm to make
the interval as small as possible (or at least much smaller than the
bucket size).
You can in most cases simplify the fraction, such as changing
--fill-rate=30 and --interval=100 to --fill-rate=3 and --interval=10.
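
To make the mechanics concrete, here is a minimal user-space sketch
of the token bucket as described here: one token is consumed per
jiffy of CPU used, fill-rate tokens are added every interval jiffies,
and the bucket is capped at tokens_max. The names and structure are
illustrative only, not the in-kernel implementation:

     /* bucket.c - illustrative token bucket, not the kernel code */
     #include <stdio.h>

     struct bucket {
         int tokens;      /* current contents */
         int fill_rate;   /* tokens added per refill */
         int interval;    /* jiffies between refills */
         int tokens_max;  /* bucket size, i.e. the burst allowance */
     };

     static void tick(struct bucket *b, unsigned long jiffy, int hogging)
     {
         if (jiffy % b->interval == 0) {
             b->tokens += b->fill_rate;
             if (b->tokens > b->tokens_max)
                 b->tokens = b->tokens_max;
         }
         if (hogging && b->tokens > 0)
             b->tokens--;    /* one token per jiffy of CPU consumed */
     }

     int main(void)
     {
         /* 30% of one CPU, with the default bucket size of 500 */
         struct bucket b = { 500, 3, 10, 500 };
         unsigned long j;

         for (j = 1; j <= 2000; j++) {
             tick(&b, j, 1);             /* a permanent CPU hog */
             if (j % 250 == 0)
                 printf("jiffy %4lu: %3d tokens\n", j, b.tokens);
         }
         return 0;
     }

Under a constant hog this drains at a net 0.7 tokens per jiffy; with
--fill-rate=30 and --interval=100 the average is identical, but the
tokens arrive in coarser lumps, which is why the smaller interval
gives smoother scheduling.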

For simple cases, like evenly distributing cpu time between vservers,
you probably just want to set the ratio to somewhere between 1/N (where
N is the number of servers) and 1/P (where P is the maximum expected
peak load per CPU), and not bother with hard scheduling. Process count
ulimits will put an upper bound on possible abuse by a context.
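
For instance (the numbers here are made up, purely for illustration):
with N=20 vservers on the box, but never more than P=5 of them busy
at once, you would pick a ratio somewhere between 1/20 and 1/5, say:

     --fill-rate=1 --interval=10  =>  1/10 * 100 = 10% of the CPU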

> When trying to come up with a good setting in my environment (basically
> hosting), I was looking for values that would not cripple the snappiness
> of the server, but prevent people from being stupid (e.g. cat /dev/zero
> | bzip2 | bzip2 | bzip2 > /dev/null).

To achieve this, it is important that contexts that are being CPU hogs
are penalised fairly quickly...

As the tokens in the bucket deplete, the "nice" value of the contexts is
adjusted - they lose their vavavoom. As this happens, the processes get
shorter and shorter timeslices. Other, more deserving processes will
get longer timeslices and hence more CPU time.
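
As a rough illustration of the idea (my assumption for the sake of
example, not the actual kernel code), you can picture the adjustment
as a mapping from bucket fullness to a nice-style offset:

     /* illustrative only: fuller bucket => bonus, emptier => penalty */
     int context_prio_offset(int tokens, int tokens_max)
     {
         /* map bucket fullness onto roughly the -5..+5 nice range */
         if (tokens_max <= 0)
             tokens_max = 1;
         return 5 - (10 * tokens) / tokens_max;
     }

so a context with a full bucket runs at a bonus, and one that has
burnt through its tokens runs at a penalty with the short timeslices
described above.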

Additionally, bear in mind that individual processes also get a minor
nice boost or penalty, depending on whether those processes have been
CPU hogs recently or not. This effect is diminished in vserver
kernels compared to standard kernels, but should still be strong
enough to counter extreme conditions.

> The fill interval should be short enough to not be noticeable, so
> something like 100 jiffies. The fill rate should be relatively small,
> something like 30 tokens. Tokens_min seems like it should simply be
> equal to the fill rate. The tokens_max should be generous so that people
> can do short cpu-intensive things when they need them, so something like
> 10000 tokens.

From the experimentation I did, I'd say 10,000 tokens is quite large -
10 seconds of real CPU time. Compare this with the default value of
500. If you've given a context 30% of the CPU as described above, then
that actually means about 10-15 wall clock seconds of CPU hogging before
the context gets appreciably penalised. For the algorithm to work best,
I think you would want to reduce this to about 1-2 seconds' worth of
tokens.
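
Working that through, assuming a 1ms jiffy (HZ=1000, which matches
the "10,000 tokens = 10 seconds" figure above):

     tokens_max = burst_seconds * HZ,  e.g.  2 * 1000 = 2000 tokens

The "10-15 wall clock seconds" follows from the net drain rate: a hog
burns 1 token per jiffy while 0.3 are refilled, so 10000 / 0.7 is
roughly 14,300 jiffies, or about 14 seconds.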

You are right in saying that tokens_max is the "burst" CPU rate, so
setting it to a large value like 10000, while setting the interval to a
large value like 100, would indicate that you are optimising your system
for batch scheduling (long time slices, higher overall throughput), not
interactive use (short time slices, reduced throughput). My guess is
that tokens_min (not in my original implementation) is a batch
optimisation as well, but perhaps small values (~10) are useful to avoid
excessive context switching.

But then, I didn't really experiment with the hard scheduling side of
things, so maybe if you are hard scheduling it is more important to make
sure that the buckets don't normally run out.

Of course, the fact that I wrote the original algorithm does not by
any means lend much extra weight to my opinion on how to use it, and I
invite others to respond with their experience.

> While playing with this stuff I've run into situations where a context
> has no tokens left, at which point you cannot even kill the processes in
> it. Don't panic - you can always reenter the context and call vsched
> with new parameters.

Heh. I don't know if this is current behaviour or not, but I think the
signals should really queue and the context will close as soon as the
processes wake up and receive enough cycles to process them and exit.
Sending -KILL signals would clean it up pretty quickly (as soon as
enough tokens are allocated for the processes to run), as chances are
they won't consume any tokens to receive a KILL signal. It would be
nice, though, if they didn't need tokens allocated to be stopped via KILL.

Sam Vilain, sam /\T vilain |><>T net, PGP key ID: 0x05B52F13
(include my PGP key ID in personal replies to avoid spam filtering)
