
From: Herbert Poetzl (herbert_at_13thfloor.at)
Date: Mon 13 Oct 2003 - 17:00:19 BST


On Mon, Oct 13, 2003 at 02:27:09PM +0100, Sam Vilain wrote:
> On Sun, 12 Oct 2003 21:56, Herbert Poetzl wrote:
>
> > I ripped the O(1) scheduler out of the -aa series
> > for 2.4.23-pre7
>
> Yikes! That's keen :-).
>
> Funny, I've just been working on a port of ctx-18pre1 to 2.4.22-ac4.
> I've added `knobs' to the bucket tunables via a /proc interface,
> though it's less than ideal - only the values for new contexts can
> be set. I did it this way because I had an example to work from -
>
> http://www.kernel.org/pub/linux/kernel/people/rml/sched/sched-tunables/
>
> It would be better if the functionality was incorporated into the
> new_s_context syscall, so that it could be used to set the values on
> an existing context.

guess we'll do something like this after c17f
is released as a stable version, and the syscall switch
is in place ...
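
For illustration, one possible shape such a syscall-based interface could
take from userspace is sketched below. The struct layout, the wrapper and
the values are assumptions made up for this sketch; the real new_s_context
/ syscall-switch interface in c17f may end up looking quite different.

/*
 * Hypothetical sketch only: set the bucket tunables on an already
 * running context through a syscall-style interface instead of /proc.
 * None of these names exist in the ctx patches; they are placeholders.
 */
#include <stdio.h>

struct ctx_sched_set {
    int fill_rate;      /* tokens issued ...                       */
    int fill_interval;  /* ... every fill_interval ticks           */
    int tokens_min;     /* assumed knob: lower bound of the bucket */
    int tokens_max;     /* assumed knob: bucket size / upper bound */
};

/* hypothetical wrapper; a real version would go through the syscall
 * switch rather than this stub */
static int ctx_set_sched(int ctx, const struct ctx_sched_set *set)
{
    (void)ctx;
    (void)set;
    return -1;          /* not implemented -- sketch only */
}

int main(void)
{
    struct ctx_sched_set set = { 1, 4, 15, 500 };   /* made-up values */

    if (ctx_set_sched(42, &set) < 0)
        fprintf(stderr, "ctx_set_sched: sketch only, nothing applied\n");
    return 0;
}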

> A quick diff of my untested working version (all I'll say about it is
> it compiles) is at:
>
> http://vilain.net/linux/ctx/patch-2.4.22-ac4-ctx18pre1.UNTESTED
>
> > http://vserver.13thfloor.at/Experimental/split-2.4.23-pre7-O1/
> > then I rediffed c17f against this 'new' kernel and
> > basically replaced the scheduler changes with your
> > patches for ctx17 ...
> > the result is at:
> > http://vserver.13thfloor.at/Experimental/split-2.4.23-pre7-O1-c17f/
>
> Nice ... I'll try applying that last lot to Alan's tree too.
>
> > well, it compiles and doesn't explode immediately
> > when I start the kernel, but I guess you know more
> > about the scheduler stuff than I do; maybe you could
> > have a look at it and tune it somehow?
>
> This is all I do to test it:
>
> 1. start 2 new sched-locked contexts, and run "perl -e '1 while 1'" in
> both of them. Their `bucket level' (the first %d of the ctxsched line
> in /proc/X/status):
>
> ctxsched: %d/%d = (FR: %d per %d; ago %ld)
>
> should drop to 0 gradually (in ~12.5s with two greedy contexts and
> the default settings). The processes should each get ~50% of the CPU.
>
> 2. start 1 new sched-locked context, and run another CPU hog in it.
> In `vtop', you should see that this process gets more CPU than the
> other two contexts; as its bucket level gradually drops to 0, it
> will even out and all three processes will be at ~33% each.
>
> 3. start 1 new sched-locked context, and run another CPU hog in it.
> It should start off with more CPU time, and gradually move towards
> all processes getting 25% of the CPU each.
>
> 4. start another 4 CPU hogs in one of the contexts. They should drop
> to roughly 5% each fairly quickly, while the other 3 contexts still
> get 25% CPU each.
>
> Obviously this only works because there are four contexts and each is
> getting 1/4 of the issued CPU tokens.
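
For anyone who wants to check the arithmetic before testing, a rough
userspace model of the bucket behaviour in the HowTo above follows. HZ,
the fill parameters and the starting level are guesses picked so that two
greedy contexts drain the bucket in the quoted ~12.5s; they are not the
actual ctx17 defaults.

/*
 * Rough model of the per-context token bucket described above.
 * All numbers are assumptions, not the ctx17 defaults.
 */
#include <stdio.h>

#define HZ        100   /* assumed timer tick rate                   */
#define FILL      1     /* assumed: tokens added ...                 */
#define INTERVAL  4     /* ... every INTERVAL ticks ("FR: 1 per 4")  */
#define START     312   /* assumed initial bucket level              */

int main(void)
{
    int ncontexts = 2;          /* two greedy, sched-locked contexts */
    double level = START;
    long tick;

    for (tick = 1; level > 0 && tick < 600 * HZ; tick++) {
        /* each context runs ~1/N of all ticks and pays one token
         * for every tick it actually gets to run */
        level -= 1.0 / ncontexts;
        /* refill: FILL tokens every INTERVAL ticks */
        if (tick % INTERVAL == 0)
            level += FILL;
    }
    printf("bucket empty after ~%.1f seconds with %d greedy contexts\n",
           (double)tick / HZ, ncontexts);
    return 0;
}

With these assumed numbers, the net drain for two greedy contexts is 0.25
tokens per tick, which empties a 312-token bucket in about 12.5 seconds;
with four greedy contexts the consumption exactly matches the issue rate,
which fits the remark above that the test only balances out because each
context gets 1/4 of the issued tokens (the safety cap in the loop just
stops the model from spinning forever in that case).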

thanks for this HowTo; maybe someone could test it?

best,
Herbert

> As far as `tuning' it goes, this is why we need some way of setting
> these per-s_context values with an ioctl or suchlike. If too few or
> too many tokens are being issued, the scheduling can become quite
> biased. Perhaps also some way for the kernel to notify when a context
> has been `running on empty' for too long, as a hint that not enough
> tokens are being dished out, or that there is a context running
> perl -e "fork while 1"
> --
> Sam Vilain, sam_at_vilain.net
>
> Real Programmers never "write" memos. They "send" memos via the
> network.
>
>
>
> _______________________________________________
> Vserver mailing list
> Vserver_at_lists.tuxbox.dk
> http://lists.tuxbox.dk/mailman/listinfo/vserver
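
As a stopgap for the `running on empty' notification suggested in the
quoted mail, something along these lines could be done entirely from
userspace by polling the ctxsched line. The parsing follows the format
string quoted earlier in the thread; the meaning of the first field and
the 10-second threshold are my assumptions, not anything the current
patch provides.

/*
 * Userspace approximation of a "running on empty" warning: poll
 * /proc/<pid>/status and complain when the bucket level (assumed to
 * be the first ctxsched field) stays at 0 for too long.
 */
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    char path[64], line[256];
    int empty_seconds = 0;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    snprintf(path, sizeof(path), "/proc/%s/status", argv[1]);

    for (;;) {
        FILE *f = fopen(path, "r");
        int level = -1, size, fill, interval;
        long ago;

        if (!f) {
            perror(path);
            return 1;
        }
        while (fgets(line, sizeof(line), f))
            if (sscanf(line, "ctxsched: %d/%d = (FR: %d per %d; ago %ld)",
                       &level, &size, &fill, &interval, &ago) == 5)
                break;
        fclose(f);

        empty_seconds = (level == 0) ? empty_seconds + 1 : 0;
        if (empty_seconds >= 10)        /* arbitrary threshold */
            fprintf(stderr, "pid %s: bucket empty for %ds -- too few "
                    "tokens, or a fork bomb in that context?\n",
                    argv[1], empty_seconds);
        sleep(1);
    }
}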

