On 08/26/2010 11:01 AM, Gordan Bobic wrote:
> Is there an efficient way to copy hashified data across hosts? e.g.
> using tar or rsync?
>
> I want to have a backup host that I can periodically rsync data to, but
> would like to avoid having to re-hashify the files at the destination
> since they will already be hashified at source.
>
> Is there a way to achieve this?
Sorry, I felt I should clarify further what I'm asking - I know that
rsync can preserve hard-links. My concern is that they will become
normal hard-links at the destination, and not be COW.
Is the hashify feature smart enough to simply flag these hard-links as COW,
without doing any other expensive disk I/O?
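For reference, the kind of copy I have in mind would be something like
this (a sketch - the flags and paths are assumptions about my setup):

    # -H preserves hard-links so unified files arrive as links rather
    # than separate copies; -a keeps ownership/permissions/times, and
    # --numeric-ids avoids uid/gid remapping on the backup host.
    rsync -aH --numeric-ids --delete /vservers/ backup:/vservers/

If hashify can't re-flag the links cheaply, I suppose a fallback would
be to restore the iunlink attribute by hand on the destination with
util-vserver's setattr (assuming I'm reading the tools right - treat
this as hypothetical):

    # Mark every multiply-linked guest file immutable-but-unlinkable,
    # which is what makes a unified hard-link behave as COW.
    find /vservers -type f -links +1 -exec setattr --iunlink {} +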
One obvious way of doing this would be to use GFS2/OCFS2 with DRBD for
active-active vserver guest trees, or even just DRBD with ext2 and
fail-over. However, GFS2 and OCFS2 are journalled file systems, and
journalling can cause write amplification of 5-50%. DRBD also keeps
extra metadata, which again means write overhead, and I need to
restrict writes as much as possible.
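The metadata in question is DRBD's activity log and dirty bitmap: with
internal metadata they sit on the same device as the data, so writes
that open a new activity-log extent cost extra metadata updates. A
minimal resource definition (hostnames, devices and addresses here are
made up) would look roughly like:

    resource r0 {
        protocol C;
        meta-disk internal;   # AL + bitmap share the CF with the data
        on nodeA {
            device  /dev/drbd0;
            disk    /dev/hda2;
            address 192.168.0.1:7788;
        }
        on nodeB {
            device  /dev/drbd0;
            disk    /dev/hda2;
            address 192.168.0.2:7788;
        }
    }

External metadata (pointing meta-disk at another device) would move
those writes off the CF, but on a single-disk box there is nowhere
else to put them.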
The reason for all this is that I'm planning to use some very minimalist
hardware with CF as a disk (I want to keep the whole machine under the
12W PoE budget), and cheap CF/SD media can be painfully slow on writes
(~ 10 IOPS, not to mention that I would prefer to avoid hammering the
cheap flash and shortening its life). That means I really need to
minimize writes.
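The obvious first step against this - suppressing metadata writes with
mount options - only goes so far, but for completeness (the device and
mount point here are assumptions):

    # Remount with noatime so plain reads stop generating inode
    # writes; ext2 itself has no journal, so there are no journal
    # commits on top of that to worry about.
    mount -o remount,noatime /dev/hda1 /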
To this end, I have merged the ext2 compression patches with vserver
patches for 2.6.31 (the 2.6.32 and later kernels have deprecated the
generic_osync_inode() calls from what I can tell, and I haven't yet
figured out how the replacement sync path works well enough to port
the patch to newer kernels). In theory, this should further minimize
writes and thus mitigate the write slowness.
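For anyone wanting to reproduce the merge, the rough procedure was as
follows (the patch file names are hypothetical):

    cd linux-2.6.31
    patch -p1 < patch-2.6.31-vs2.3.x.diff   # vserver patch first
    patch -p1 < e2compr-2.6.31.diff         # then ext2 compression
    # sanity check that the symbol e2compr relies on is still there:
    grep -rl generic_osync_inode fs/ include/

plus resolving whatever rejects come up by hand.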
Gordan