Re: [vserver] SSD/HD hybrids

From: Eugen Leitl <eugen_at_leitl.org>
Date: Thu 01 Apr 2010 - 14:29:17 BST
Message-ID: <20100401132917.GN1964@leitl.org>

On Thu, Apr 01, 2010 at 02:45:47PM +0200, Adrian Reyer wrote:
> On Thu, Apr 01, 2010 at 11:32:16AM +0200, Eugen Leitl wrote:
> > Any additional ideas?
>
> I'd save the money for the SSD and spend it on more RAM.

I'm maxed out on RAM (4 GBytes for the Atom D510 in
http://www.supermicro.com/products/system/1U/5015/SYS-5015A-PHF.cfm).

> RAM = Cache. No SSD will be as fast as your RAM. Additionally, the base
> install of a typical VServer has only a few 100 MB; by unifying them you
> end up with the base infrastructure of all VServers in RAM within a
> short time.

I presume I can run 100-200 idling instances on the above system, and
probably 50 instances that are under constant but low load.
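
As a rough sanity check on that number, a back-of-the-envelope sketch in
Python (the reserve and per-guest figures are my assumptions, not
measurements):

  # Rough RAM budget per guest, assuming unified base installs share the page cache.
  ram_total_mb = 4096        # maxed-out RAM on the D510 board
  guests = 150               # middle of the 100-200 estimate
  reserved_mb = 1024         # assumed host, kernel and shared page cache
  per_guest_mb = (ram_total_mb - reserved_mb) / float(guests)
  print("RAM left per idling guest: %.0f MB" % per_guest_mb)   # ~20 MB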

> The SSD part becomes interesting only if you have much more data than RAM
> that you need to access in random order, rendering the cache useless.
> Typical scenarios would be databases or huge IMAP servers.

The reason I'm looking at SSDs is that hundreds of guests would potentially
be fighting for access to one spindle. I'm putting an nginx, a Postfix
and an IMAP instance in each guest, along with PostgreSQL and perhaps
other goodies.
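
The contention is easy to put in numbers; a sketch with assumed figures
(the per-guest I/O rate is a guess):

  # One 7200 rpm SATA spindle delivers on the order of 100 random IOPS,
  # an SSD delivers thousands to tens of thousands.
  spindle_iops = 100       # assumed
  ssd_iops = 10000         # conservative assumption for an Intel MLC SSD
  active_guests = 50
  iops_per_guest = 5       # assumed: light mail/web/DB activity
  demand = active_guests * iops_per_guest
  print("demand %d IOPS vs. spindle %d vs. SSD %d" % (demand, spindle_iops, ssd_iops))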

> I don't know what you actually run on the VServers, if it is network
> servers like samba, apache, mail I'd not waste time on SSDs unless you
> have way more than 1GBit/s NICs.

I have a 100 MBit/s dedicated router port, which I can double or
upgrade to a GBit/s. I don't expect to service more than 1000 customers,
of which only a few % would actually be active at any given time. Bandwidth
is not the problem as far as I can see; I/O contention is.
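
Rough arithmetic behind that claim (activity ratio assumed):

  port_mbit = 100
  customers = 1000
  active_fraction = 0.05   # "a few %" assumed as 5%
  print("%.1f MBit/s per simultaneously active customer"
        % (port_mbit / (customers * active_fraction)))   # 2.0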

> Personally I use SATA disks almost everywhere with the possible

So do I.

> exception of database servers, where SAS is an option.

I use SATA drives (Velociraptor and Intel SSD) in DB applications, too.

> iSCSI is no real benefit, it brings in extra latency, exactly what you
> try and avoid with the SSDs. I built a 10GBit/s iSCSI infrastructure

I don't think the Ethernet stack would add more than a few tens of
microseconds of latency; the problem is IOPS when there are tens or
hundreds of simultaneous accesses.
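
To put the two effects side by side, a sketch with assumed latencies
(queue depth 1, so IOPS = 1/latency):

  def iops(latency_s, queue_depth=1):
      return queue_depth / latency_s

  spindle_seek = 8e-3      # assumed average seek + rotation on a SATA spindle
  ssd_read     = 100e-6    # assumed SSD read service time
  ethernet_rtt = 50e-6     # assumed added iSCSI/Ethernet round trip
  print("spindle:        %.0f IOPS" % iops(spindle_seek))              # ~125
  print("SSD, local:     %.0f IOPS" % iops(ssd_read))                  # ~10000
  print("SSD over iSCSI: %.0f IOPS" % iops(ssd_read + ethernet_rtt))   # ~6700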

> recently and due to latencies I had to be comfortable with 60-350MB/s
> depending on file sizes. Not exactly what was expected as a simple
> 1GBit/s iSCSI does 60-108MB/s in a similar setup.

My idea of an iSCSI hybrid target is 2x 1-2 TByte SATA drives with
an 80-160 GByte SSD (SLC for ZIL, MLC for L2ARC) and 8 GBytes of RAM. That's
probably around 1.2 kEUR or so, and on top of that it needs one or several
machines running vserver guests for it to serve. Out of budget for the time being.
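
Roughly how I arrive at that figure; the component prices below are
guesses from memory, only the order of magnitude matters:

  parts_eur = {
      "2x 1-2 TByte SATA":            2 * 100,
      "80-160 GByte SSD (ZIL/L2ARC)": 450,
      "8 GBytes RAM":                 200,
      "board/chassis/PSU/NIC":        350,
  }
  print("total: ~%d EUR" % sum(parts_eur.values()))   # ~1200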

> Generally I'd use all kinds of SAN-setup only if I have many separate
> hardware boxes that require minimal disk space to get rid of the local
> disks. With VServers you can just build real disks with fast controllers
> right into the box.

I'm using a 300 EUR server kit, with maybe 90 EUR of memory and a single 100 EUR
SATA disk, with space for another 2.5" SATA disk which I want to be an SSD.
The advantage is that it's low-power and can be mounted back-to-back, so you
can have some 80 servers with some 320 GBytes of RAM and some 50-100 TBytes of disk
in one 19" rack within 2.4 kW of power, for a total cost of less than 40 kEUR (adding
SSDs will make that quite a lot more expensive, admittedly).
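
The rack-level numbers work out roughly like this (per-node figures
assumed from the kit above):

  nodes = 80
  ram_gb   = 4                 # per node
  disk_tb  = 1.0               # assumed ~1 TByte per node
  watts    = 30                # assumed per-node draw for the Atom kit
  cost_eur = 300 + 90 + 100    # kit + RAM + disk
  print("%d GByte RAM, %.0f TByte disk, %.1f kW, %.1f kEUR" % (
      nodes * ram_gb, nodes * disk_tb, nodes * watts / 1000.0,
      nodes * cost_eur / 1000.0))   # 320 GByte, 80 TByte, 2.4 kW, 39.2 kEUR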

So, should I bother?

-- 
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE