Re: [vserver] SSD/HD hybrids

From: Eugen Leitl <eugen_at_leitl.org>
Date: Thu 01 Apr 2010 - 22:09:41 BST
Message-ID: <20100401210941.GP1964@leitl.org>

On Thu, Apr 01, 2010 at 05:38:32PM +0100, Ed W wrote:

> I have been hanging around the gluster mailing list recently and one of
> the things you see with network filesystems is people not noticing how
> much the latency is going to kill you. eg if your latency was 1ms to

I'm extremely aware of latency, which is why I'm talking SSDs and µs, not
7200 rpm HDs and ms.

> the iSCSI server then under some circumstances you will max out at 1,000
> IOs per second. Now depending how that translates to app performance

A single-spindle 1 TByte SATA drive does about 75 IOPS, while a consumer MLC
SSD is rated at up to 9 kIOPS for random 4k reads and up to 25 kIOPS for
random 4k writes. Even if that is mostly marketing, it's still a two to
three orders of magnitude difference.
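
As a quick sanity check, the back-of-envelope version of the latency
argument (all latency figures below are assumptions, not measurements):

# Back-of-envelope: per-request latency puts a hard ceiling on IOPS at
# queue depth 1. All latency figures below are assumptions.
latencies = {
    "7200 rpm SATA (seek + rotation)": 13e-3,    # ~13 ms -> the ~75 IOPS above
    "consumer MLC SSD, random 4k read": 100e-6,  # ~100 us
    "1 ms network round trip (iSCSI)": 1e-3,     # Ed's example
}

for name, seconds in latencies.items():
    print(f"{name:34s} ~{1.0 / seconds:7.0f} IOPS at queue depth 1")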

> this can be a huge performance hit and presumably why Adrian commented
> that even 10gbit FC with its massively lower latency can still limit
> bandwidth to each client to a fraction of the total server bandwidth.

Again, I'd rather use a 200 EUR SSD inside the guest host than an external
iSCSI target. The catch is that, lacking native hybrid zfs support, I would
have to do it by hand. It's probably just a stupid idea, and I should stick
two 80 GByte SSDs in RAID 0 (which doubles the write endurance at only a
slightly increased background failure rate), leave it at that, and mount
everything else via NFS. Expensive, but effective.
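
Roughly the arithmetic behind the RAID 0 idea (per-drive endurance and
failure figures below are pure assumptions, not datasheet values):

# Sketch: two SSDs in RAID 0 spread writes across twice the flash, so total
# write endurance roughly doubles, while the stripe now dies if either
# drive dies. All per-drive figures are assumptions.
capacity_gb = 80
pe_cycles   = 5_000   # assumed MLC program/erase cycles
p_fail_year = 0.02    # assumed per-drive annual failure probability

endurance_tb_single = capacity_gb * pe_cycles / 1024  # ignoring write amplification
endurance_tb_raid0  = 2 * endurance_tb_single

p_fail_raid0 = 1 - (1 - p_fail_year) ** 2              # ~2x for small p

print(f"single SSD : ~{endurance_tb_single:.0f} TB writable, {p_fail_year:.1%}/yr failure")
print(f"RAID 0 pair: ~{endurance_tb_raid0:.0f} TB writable, {p_fail_raid0:.1%}/yr failure")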
 
> I think your question is so academic as to be much better to simply
> benchmark some tests and literally see what will work best for you? In

I asked because I don't have the hardware yet, and assumed somebody
else had benchmarked this or was running it in production. For that matter,
I don't have a good workload model either, since I don't have hundreds
of loaded guests in production, just idling ones. As things stand, a simple
nmap is enough to make the firewall nearly crap out.

> general an SSD will help most if you have io wait problems and it's less
> beneficial for streaming reads/writes?

I don't worry about streams. Arguably, SSDs (especially RAID 0 SATA SSDs)
are very good at streaming anyway. With low-end Atoms, I'll most likely be CPU-bound.
 
> A datapoint though, but I recently upgraded my Macbook Pro to 6GB from
> 4GB ram and also bought a 256GB SSD to replace the spinning disk it had
> before. Now the problem I was trying to solve was that I was short of
> free memory while running some Windows 7 instances under parallels and
> the machine was literally freezing completely for seconds at a time
> while switching apps. Now the SSD arrived first and simply changing
> this *completely* transformed the machine, literally night and day,

I've been using an 80 GByte SSD in an Atom netbook, as well as RAID 10
stripes over 8x 160 GB consumer Intel MLC SSDs for a read-mostly Oracle setup.
No official benchmarks, but I'm completely sold. Not a single failure
yet, which is a lot more than I can say about SATA HDs (Seagate <cough>
7200.11 <cough>).

> whole machine is lightning fast and zero slowdown booting some win7
> instances even with low ram. I then added the extra ram a day later and
> noticed no appreciable difference in performance. Now this is
> COMPLETELY the opposite way around to what I expected, but interesting -
> usually I would add ram first and faster disks second.

What I worry about is flash wear with small MLC SSDs. If the swap sees
a lot of use, it's toast pretty soon, and the same goes for write-intensive
databases.
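
The kind of arithmetic behind that worry (figures again assumed, and the
drive's internal wear levelling glossed over):

# Sketch: how long heavy swap / database traffic takes to burn through the
# rated endurance of a small MLC SSD. All figures are assumptions.
capacity_gb = 80
pe_cycles   = 5_000   # assumed MLC program/erase cycles
write_amp   = 10.0    # assumed write amplification for small random writes
host_mb_s   = 10.0    # assumed sustained host write rate (busy swap / redo)

host_writable_gb = capacity_gb * pe_cycles / write_amp
seconds = host_writable_gb * 1024 / host_mb_s
print(f"~{seconds / 86400:.0f} days until the rated endurance is used up")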
 
> However, this is a desktop and here the speed of opening an application
> (which only happens a dozen times a *day*) is the benchmark by which I
> judge the speed of the computer! In contrast on a server we want to
> optimise for steady state IOs happening throughout the day and it's less
> clear how the SSD will benefit things here...

I'm expecting a dramatic improvement, but I was wondering about the
experience of people actually doing it.
 
> So I think you need to benchmark your setup and perhaps look instead at
> something like a controller with writeback cache rather than an
> SSD/iscsi? I see you can get the Dell PERC things quite cheap on ebay
> for example?

Again, I'm working with 300 EUR server kits and 90 EUR memory, which draw
less than 20 W with no spinning bits.
 
> Just a thought...
>
> (P.S. Anyone with a laptop running slowly, the SSD blows the doors off
> with perceived performance - seriously impressive...)

-- 
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
Received on Thu Apr 1 22:09:57 2010