btw, CONFIG_VSERVER_HARDCPU is enabled already. Did that when I was
compiling my kernel...
-jf
On Sat, May 9, 2009 at 1:17 PM, Jeffrey 'jf' Lim <jfs.world@gmail.com> wrote:
> On Sat, May 9, 2009 at 5:23 AM, Herbert Poetzl <herbert@13thfloor.at> wrote:
>> On Fri, May 08, 2009 at 02:21:15PM +0800, Jeffrey 'jf' Lim wrote:
>>>
>>> On Wed, May 6, 2009 at 6:14 PM, Jeffrey 'jf' Lim <jfs.world@gmail.com> wrote:
>>> > hey guys, I'm looking at http://linux-vserver.org/CPU_Scheduler, and
>>> > specifically at the "Fair Share" section
>>> > (http://linux-vserver.org/CPU_Scheduler#Fair_Share), and I'm a bit
>>> > confused.
>>
>>> > The way the calculation works, it seems like "1/2" and "1/4" aren't
>>> > exactly right as the fractions of wasted CPU time? It looks more like
>>> > "1/2 over (1/2 + 1/4)" vs. "1/4 over (1/2 + 1/4)" of the wasted CPU
>>> > time. Is this intentional? This is a different concept from "standard"
>>> > CPU scheduling, which uses a pure fraction of 1 (a "hard limit").
>>
>> no idea what 'wasted cpu time' is ...
>>
>
> Wasted CPU time, or idle time.
>
> <quote>
> Consider a configuration with 5 contexts each limited to 1/5 of CPU
> time, where two of these contexts run CPU intensive processes and the
> rest is idle. Given that each context may only allocate 1/5 of CPU
> time, 3/5 of CPU time are wasted since 3 contexts are idle.
> </quote>
>
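> To make the arithmetic concrete, here's how I'm reading it (a little
> Python sketch of my assumed model, not actual kernel code): each busy
> context keeps its own share, and the idle time is split in proportion
> to the busy contexts' shares.
>
>     # proportional split of idle time (assumed model, numbers made up)
>     busy = {'ctx1': 1/2, 'ctx2': 1/4}    # shares of the busy contexts
>     idle = 1 - sum(busy.values())        # 1/4 of the CPU is unclaimed
>     for name, share in busy.items():
>         bonus = idle * share / sum(busy.values())
>         print(name, share + bonus)       # ctx1 -> 2/3, ctx2 -> 1/3
>
> (With the wiki's 5-context example, i.e. two busy contexts at 1/5
> each, the same split gives each busy context 1/2 of the CPU.)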
>
>>> > A few other questions:
>>
>>> > - the most basic one: how do I define guaranteed + fair share
>>> > scheduling for a context? E.g. a guarantee of 1/5 for a context, plus
>>> > 1/2 for fair scheduling. I'm looking at the flower page, and while
>>
>>> > I know what file to edit for guaranteed CPU, I don't know its format.
>>
>> interesting, as there is no explicit 'guarantee' only limits
>>
>
> well, guarantees are mentioned in
> http://linux-vserver.org/CPU_Scheduler#Guarantees.
>
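> For reference, my current guess at the format (assuming the usual
> util-vserver layout under /etc/vservers/<guest>/sched/, one value per
> file; the names are from my reading of the flower page, so
> double-check them):
>
>     fill-rate    1     # tokens added to the bucket per interval
>     interval     5     # refill interval in ticks => ~1/5 of the CPU
>     tokens       100   # tokens in the bucket at start
>     tokens-min   50    # tokens needed before a held guest runs again
>     tokens-max   200   # bucket size, i.e. the burst reserve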
>
>>> > Is it simply '1/5'? How about for fair scheduling? Where do I put
>>> > this?
>>
>>> > - is the fair scheduling ratio "dynamic"? Let's say I have 4 contexts.
>>> > All of them have Rk/Tk = 1/4. And let's suppose that right now, 3
>>> > contexts are idle and only 1 context is busy. So will the wasted CPU
>>> > time all go to this one busy context (i.e. '1/4 over 1/4')? Or is it
>>> > more like '1/4 over (1/4 + 1/4 + 1/4 + 1/4)'?
>>
>> as long as a context is busy, the idle time (fair scheduling
>> part of the old scheduler extensions) will not kick in
>>
>
> So in that case, what does the fair scheduler schedule? It sounds like
> it would schedule only non-busy contexts, but that can't be right
> (non-busy contexts have no work to do).
>
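> (And to answer my own 4-context example under that proportional
> reading: the lone busy context holds all of the active share, so it
> would get 1/4 + 3/4 = the whole CPU.)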
>
>>> > - how does this whole token bucket thing work? I.e., is it a
>>> > "sub-scheduler" within the standard kernel scheduler (the kernel
>>> > schedules the vserver process, and the vserver process then schedules
>>> > the context)? Or is it an entire takeover/replacement of the standard
>>> > kernel scheduler?
>>
>> neither .. it is an extension on top of the scheduler,
>> i.e. as long as tokens are available, normal scheduling is
>> not changed or affected ... once a context is out of
>> tokens, the TB extension kicks in ...
>>
>
> ok. Is each context treated as a separate process by the normal
> scheduler, or does the normal scheduler schedule each context's
> processes as well?
>
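> To check my understanding of the "kicks in" part, here's a toy
> simulation (Python; the model, names and numbers are all mine, not
> kernel code):
>
>     fill_rate, interval = 1, 5        # ~1/5 of the CPU guaranteed
>     tokens_min, tokens_max = 10, 50
>     tokens, held, ran = 50, False, 0
>     for tick in range(1000):          # one permanently busy context
>         if tick % interval == 0:      # periodic refill
>             tokens = min(tokens + fill_rate, tokens_max)
>         if held and tokens >= tokens_min:
>             held = False              # back above tokens-min: released
>         if not held:
>             tokens -= 1               # running burns one token per tick
>             ran += 1
>             if tokens <= 0:
>                 held = True           # bucket empty: the TB extension
>                                       # puts the context on hold
>     print(ran / 1000)                 # -> fill_rate/interval (~1/5),
>                                       #    plus the initial 50-token burst
>
> If that model is right, normal scheduling runs untouched until the
> bucket empties, and only then does the context get parked.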
>
>>> > - any recommended number for "amount of tokens on start"? Let's say I
>>> > don't want any penalization (and therefore minimum tokens = 0). And I
>>
>> the minimum token value is more to control the hysteresis,
>> i.e. to make scheduling more batch-suited
>>
>>> > want scheduling to be as smooth as possible. Then the recommended
>>> > amount would be either 0, or the fill rate? I guess this also means
>>> > that I am asking a question about the scheduling algorithm. Does it
>>> > mean that if a context has, let's say, 1000 tokens, the scheduler
>>> > will let it use up all its tokens (if it's that busy!) before moving
>>> > on to another context?
>>
>> no, it just means that the TB extension will not interfere
>> with normal scheduling for that context :)
>>
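> Putting numbers on the hysteresis (my arithmetic, values made up):
> with fill-rate 1, interval 5 and tokens-min 50, a guest that runs dry
> is held until it collects 50 tokens again, i.e. 50 / (1/5) = 250
> ticks off, and then runs for about 50 / (1 - 1/5) ~= 62 ticks
> straight. So a larger tokens-min gives longer, batchier on/off
> cycles, and tokens-min near 0 is what you'd want for smoothness.
>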
>>> > - any recommended number for maximum number of tokens? again, if i
>>> > want smooth scheduling, it looks like putting the fill interval value
>>> > here would be right.
>>
>> the maximum value controls how many tokens a context can
>> accumulate while idle (and thus for how long it will
>> be able to 'burst' when getting busy again :)
>>
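> Working out the burst on the same assumptions: an idle guest
> accumulates up to tokens-max. With fill-rate 1 / interval 5 and
> tokens-max 500, filling up from empty takes 500 / (1/5) = 2500 idle
> ticks, and once busy again the guest can run flat out for roughly
> 500 / (1 - 1/5) = 625 ticks before the limit bites.
>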
>> best,
>> Herbert
>>
>>> > thanks,
>>> > -jf
>>
>