[vserver] Fixed -> Re: [vserver] Hard time with IO bottleneck

From: Markus Fischer <markus_at_fischer.name>
Date: Fri 08 May 2009 - 07:00:30 BST
Message-ID: <4A03CA7E.2000806@fischer.name>

Hi,

it's been *quite* a while, but just for the record: I was able to
fix this issue.

The RAID controller, an "LSI Logic / Symbios Logic SAS1068 PCI-X
Fusion-MPT SAS", had "write caching" disabled by default.

Once enabled, the throughput jumped from 5 MB/sec to 25 MB/sec.
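
For anyone hitting the same thing: on a plain SAS/SCSI disk the write
cache enable (WCE) bit can be checked and set with sdparm. The device
name below is just an example, and with a Fusion-MPT controller the
setting may instead live in the controller firmware/BIOS setup:

    # show whether the write cache enable (WCE) bit is set
    sdparm --get=WCE /dev/sda

    # set it (add --save to keep it across resets)
    sdparm --set=WCE --save /dev/sda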

Here's the complete story of my tragedy:
http://serverfault.com/questions/1276/severe-write-performance-problem

cheers,
- Markus

Herbert Poetzl wrote:
> On Tue, Jun 24, 2008 at 11:24:59AM +0200, Markus Fischer wrote:
>> I guess it doesn't make much sense without some numbers/versions:
>>
>> Kernel 2.6.22.19 with vserver 2.2.0.7, using two 250GB discs in a
>> mirroring RAID with LVM on top.
>
> | I'm having a hard time with my system when it comes to IO performance.
>
> | As an example, I've three directories to delete, with about 50000
> | files in each. I'm already deleting on the "host" system, using
> | "ionice -c3" (only when idle), and the CFQ scheduler is active.
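
(For the record, the delete job was run roughly like this; the path is
made up. The idle class only has an effect under CFQ, which was active
here:)

    # only gets disk time when nothing else wants it (CFQ idle class)
    ionice -c3 rm -rf /path/to/dir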
>
> the bottleneck with disk I/O is mostly caused
> by seeking (which is very slow compared to
> throughput), so maybe your raid/filesystem/disk
> combination is very unfortunate, and any kind
> of extensive seeking causes the I/O subsystem
> to congest
>
> | While this is going on, I have about 11 vservers running.
>
> what do they do? disk seeks too? idling?
>
> | As long as I have the delete job *not* running, the system has a
> | load < 1.0. As soon as I start/resume the job, within a short time
> | the load goes over e.g. 12!
>
> a load of 12 is nothing bad per se; it just
> means that 12 processes are ready to run but
> have to wait on some I/O or CPU
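
(A quick way to see whether such a load is I/O rather than CPU: list
the processes in uninterruptible "D" sleep, since those are what push
the load average up during heavy disk activity:)

    # list processes stuck in uninterruptible (disk) sleep
    ps -eo state,pid,comm | awk '$1 == "D"'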
>
> | I'm currently completely puzzled as to what is going on here. Using
> | vtop I can't spot any processes at the top that would explain the
> | load. The list usually looks like
>
> first, check with iostat and vmstat to get
> an idea about the ongoing I/O
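
(For example, with arbitrary 5-second intervals:)

    # per-device utilisation and average wait times
    iostat -x 5

    # 'wa' = CPU time spent waiting on I/O, 'b' = processes blocked on I/O
    vmstat 5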
>
> the next step would be to recreate the guest
> I/O in some way, so that you can run it on the
> host (without hurting anything). this can then
> be used to check without the Linux-VServer patch
> (to verify if that is some kind of mainline
> issue/regression)
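
(A crude way to do that: recreate a comparable file set on the host and
time its removal, on both the patched and an unpatched kernel; the
numbers and path are only guesses at the original workload:)

    # create ~50000 small files, then time deleting them
    mkdir /tmp/iotest && cd /tmp/iotest
    for i in $(seq 1 50000); do echo x > f$i; done
    cd / && time rm -rf /tmp/iotest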
>
> HTH,
> Herbert
>
>
>> thanks,
>> - Markus
>