Re: [vserver] btrfs/hashify/cow....

From: Gordan Bobic <gordan_at_bobich.net>
Date: Sun 09 Sep 2012 - 10:21:39 BST
Message-ID: <504C5FA3.1070907@bobich.net>

On 09/08/2012 09:38 PM, John A. Sullivan III wrote:
> On Sat, 2012-09-08 at 17:16 +0100, Gordan Bobic wrote:
>> On 09/08/2012 11:10 AM, Tor Rune Skoglund wrote:
>>>> 2012/9/7 Gordan Bobic <gordan@bobich.net>:
>>>> On 09/07/2012 04:11 PM, Tor Rune Skoglund wrote:
>> <snip>
>>>>> - Presumably, all hashifed files must reside on the same partition?
>>>>
>>>> Indeed, they must all be on the same file system.
>>>
>>> An additional question: the host runs on a partition of its own, and
>>> the guests on another partition. So the host is left out of the
>>> hashify process, but the guests can still be hashified against each
>>> other?
>>
>> All that is required is that your /vservers/ directory is on a single
>> file system. The host file system is unrelated. Only the guests get
>> mutually hashified, not the host.
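
For reference, a minimal hashify setup along those lines looks roughly
like this (a sketch from memory; the hash store location and the link
name under hash/ are arbitrary, as long as the hash store is on the
same file system as the guests):

    # hash store on the same file system as the guests
    mkdir /vservers/.hash
    mkdir -p /etc/vservers/.defaults/apps/vunify/hash
    ln -s /vservers/.hash /etc/vservers/.defaults/apps/vunify/hash/root

    # enable unification for a guest, then hashify it
    mkdir -p /etc/vservers/<guest>/apps/vunify
    vserver <guest> hashify
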
>
> Does this mean your earlier comment about disabling prelink does not
> apply to the host, i.e., we must disable it in the guests but can keep
> it in the host?

Indeed, it doesn't apply to the host in this context; it is only
relevant to the guests.
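
In case it helps, disabling prelink inside a guest is roughly this (the
config path is Red Hat-style; other distros keep it elsewhere):

    # inside each guest (RHEL/CentOS-style layout assumed)
    sed -i 's/^PRELINKING=.*/PRELINKING=no/' /etc/sysconfig/prelink
    # undo existing prelinking so identical packages end up with
    # identical binaries across guests
    prelink -ua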

However, disabling prelink on the host also matters if your backup
solution does deduplication. My backup solution involves rsync-ing to
ZFS with dedupe and compression, and having prelink disabled ensures
that each OS file is saved only once rather than once per machine
being backed up. Prelink only helps at application startup anyway,
i.e. it is of no use on servers.
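
As a rough sketch of what that backup setup looks like (the pool and
dataset names are made up, the rsync flags are the standard ones):

    # one-off: a deduplicated, compressed dataset for the backups
    zfs create tank/backups
    zfs set dedup=on tank/backups
    zfs set compression=on tank/backups

    # per machine: -H preserves hard links, --numeric-ids keeps
    # ownership stable across hosts
    rsync -aH --numeric-ids --delete guest1:/ /tank/backups/guest1/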

>> I never tried it, so I cannot comment either way. After the BTRFS devs
>> didn't manage to understand why CoW hard-links would be useful as a FS
>> feature (without vserver), and after some of the comments they made
>> regarding deduplication features and how (and whether) they plan to
>> implement it in BTRFS, I made a firm decision I'm not going to touch it
>> with a barge-pole. Ever. If these are the people designing and
>> developing the FS, I'm not prepared to entrust my data to it. Where my
>> requirements are feature-rich, I have switched to ZFS (ZFS-on-Linux
>> kernel driver, not the fuse implementation) and never looked back. I
>> still think it was the right decision.
>
> We've been using ZFS on OpenSolaris as I've heard the BSD implementation
> is poor and the FUSE implementation does not perform as well. I was not
> aware there was a ZFS-on-Linux kernel driver. I was under the
> assumption the licenses were incompatible and hence FUSE was the only
> Linux option.

I haven't tried it, but from everything I've heard the BSD ZFS
implementation is _awesome_. It even has some features that the
OpenSolaris implementation lacks (e.g. TRIM/discard). The FUSE
implementation is handy as a fallback option if you have a problem
(make sure you create the pool with a version that can be accessed by
all the implementations you might want to try it on).
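
Something along these lines (the version number is only illustrative;
check what each implementation actually supports first):

    # list the pool versions this implementation supports
    zpool upgrade -v

    # create the pool pinned at the lowest common version
    zpool create -o version=23 tank mirror sda sdb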

As for licensing: yes, the ZFS licence means it cannot be shipped with
the mainline kernel, but there is nothing at all stopping it from being
shipped as an external module.

And the BSD people have a much healthier, more pragmatic view of this
sort of licensing nit-picking, as is evident from the fact that they
have had ZFS support in their kernel for years.

> Is this real ZFS on Linux?

Yes. (As opposed to what? Fake ZFS on Linux?)

> Does it compare in features and performance to OpenSolaris?

Yes.

> If so, I think it would be even better than OpenSolaris
> as it appears the network stack latency is lower in Linux than
> OpenSolaris from what we've seen. Thanks - John

ZFS doesn't go anywhere near the network stack, so I don't know what
exactly the connection might be here.

Gordan
Received on Sun Sep 9 10:21:48 2012
