Re: [vserver] vhashify not working? - 2.6.27.14-vs2.3.0.36.4

From: Roderick A. Anderson <raanders_at_cyber-office.net>
Date: Tue 31 Mar 2009 - 15:01:26 BST
Message-ID: <49D22236.5090608@cyber-office.net>

John A. Sullivan III wrote:
> On Tue, 2009-03-31 at 08:40 +0000, Christoph Lukas wrote:
>> Hi John,
>>
>>> On Tuesday, 2009-03-31 at 04:08 -0400, John A. Sullivan III wrote:
>>> On Tue, 2009-03-31 at 06:51 +0000, Christoph Lukas wrote:
>>>> Hi John,
>>>>
>>>>> Hello, all. In our earlier deployments on kernel 2.6.22, we were very
>>>>> happy with the results of vhashify. For some reason, our use in
>>>>> 2.6.27.14 using vserver 2.3.0.36.4 does not seem to be working as well.
>>>>> This is important to us because we are planning roughly 400 nearly
>>>>> identical guests on this one host.
>>>>>
>>>>> We currently have 10 Ubuntu 8.04 guests running on our CentOS 5.2 based
>>>>> host. Each one is roughly 2GB in size, and all were cloned from the same
>>>>> template. Total storage on the vserver partition is roughly 21GB. The
>>>>> only unusual bit about this installation is that there is a /vservers/vetc
>>>>> directory which is bind-mounted to /etc/vservers. This was originally
>>>>> done because there was a single encrypted partition mounted via iSCSI
>>>>> holding all the vserver information.
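>>>>> (For reference, that bind mount amounts to something like:
>>>>>
>>>>> mount --bind /vservers/vetc /etc/vservers
>>>>>
>>>>> or an equivalent fstab entry.)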
>>>>>
>>>>> We've done:
>>>>> mkdir /etc/vservers/.defaults/apps/vunify/hash /vservers/.hash
>>>>> ln -s /vservers/.hash /etc/vservers/.defaults/apps/vunify/hash/root
>>>>>
>>>>> We noticed there is another
>>>>> link, /etc/vservers/.defaults/apps/vunify/hash/00,
>>>>> and hashify complained of a duplicate "root" directory, so we deleted the
>>>>> root symlink. With or without it, we see the same results.
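>>>>> A quick sanity check of that setup could be something like (using the
>>>>> paths above):
>>>>>
>>>>> ls -ld /vservers/.hash
>>>>> ls -l /etc/vservers/.defaults/apps/vunify/hash/
>>>>> readlink -f /etc/vservers/.defaults/apps/vunify/hash/*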
>>>> Are there any hardlinks visible inside the /vservers/.hash directory? If
>>>> not, hashify did not work.
>>>>
>>>> You can check which files inside the guests are not unified by running:
>>>>
>>>> find /vservers/<guest> -type f -links 1
>>>>
>>>> and the unified files by running:
>>>>
>>>> find /vservers/<guest> -type f -links +1
>>>>
>>>> This should give you a hint as to whether vhashify worked correctly.
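>>>> For example, a rough per-guest summary could be something like this
>>>> (assuming the guests all live directly under /vservers):
>>>>
>>>> for g in /vservers/*/; do
>>>>     u=$(find "$g" -xdev -type f -links +1 | wc -l)
>>>>     n=$(find "$g" -xdev -type f -links 1 | wc -l)
>>>>     echo "$g: $u unified, $n not unified"
>>>> done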
>>>>
>>>>> For each vserver guest, we did:
>>>>> mkdir /etc/vservers/<name>/apps/vunify
>>>>>
>>>>> Has something changed with 2.6.27.14?
>>>> AFAIK unification is done entirely in userspace, so it should not depend
>>>> on the kernel version. Only the copy-on-write link breakage is done in
>>>> the kernel.
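>>>> You can watch that link breakage happen with something like this (the
>>>> path is just an example):
>>>>
>>>> stat -c '%i %h' /vservers/<guest>/bin/ls   # inode and hardlink count
>>>> # ... modify the file from inside the running guest ...
>>>> stat -c '%i %h' /vservers/<guest>/bin/ls   # new inode, link count 1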
>>>>
>>>>> Are these realistic numbers?
>>>> Does not seem realistic to me.
>>>>
>>>>> I
>>>>> would think ten identical systems should yield just slightly more than
>>>>> the space of one system after running hashify.
>>>> I would guess something with your .hash directories is not set up
>>>> correctly and therefore hashify did not work as expected. You can
>>>> try to fix this and then just run:
>>>>
>>>> vserver <guest> hashify
>>>>
>>>> on a running guest. I have set up unification successfully here and used
>>>> these two wiki pages as a howto:
>>>>
>>>> http://linux-vserver.org/Frequently_Asked_Questions#Unification
>>>>
>>>> http://linux-vserver.org/util-vserver:Vhashify
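>>>> Once the setup looks sane, re-running it for every guest could be as
>>>> simple as something like this (assuming guest names match their
>>>> directories under /vservers):
>>>>
>>>> for g in /vservers/*/; do
>>>>     name=$(basename "$g")
>>>>     # skip non-guest directories (lost+found, vetc, ...) as needed
>>>>     case "$name" in lost+found|vetc) continue ;; esac
>>>>     vserver "$name" hashify
>>>> done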
>>>>
>>>> Hope this helps,
>>>> Christoph
>>>>
>>> Thank you, Christoph. The commands show plenty of both types of files
>>> with the hard links clearly outweighing the regular files. Strange that
>>> I still show 20GB for 10 servers. Take care - John
>> Are you sure that your measurements of disk usage are correct?
>> Does the 20 GB of disk usage correspond to the df output for the /vservers
>> partition?
>>
>> du -sch /vservers/*
>>
>> should notice multiple hardlinks and count them only once.
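>> For example, the difference shows up if you compare a single combined run
>> against separate per-guest runs (the latter count each hardlinked file
>> once per guest):
>>
>> du -sch /vservers/*                           # hardlinks counted once overall
>> for g in /vservers/*/; do du -sh "$g"; done   # each guest in isolation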

I have this in my script.

if [ -n "$(find /vservers/.hash -type f -links 1)" ]
then
    # remove hash entries no longer referenced by any guest
    find /vservers/.hash -type f -links 1 -print0 | xargs -0 /bin/rm
else
    echo "No dangling links."
fi
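
(With GNU find the same cleanup can be done in one step, assuming your find
supports -delete:

find /vservers/.hash -type f -links 1 -delete
)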

Would this make a difference?

Rod

>>
>> Regards,
>> Christoph
>>
>>
> Yes, the numbers match and they also match the size of the space
> consumed on the thinly provisioned ZFS zvol on which they reside (28.1
> GB):
> [root@vd01 ~]# du -sch /vservers/*
> 4.4G    /vservers/cle
> 2.2G    /vservers/gssstation
> 2.2G    /vservers/jasstation
> 1.4G    /vservers/jintra
> 16K     /vservers/lost+found
> 1.4G    /vservers/mintra
> 1.4G    /vservers/mlapo
> 1.4G    /vservers/simple
> 1.4G    /vservers/smcc
> 1.4G    /vservers/tkee
> 1.4G    /vservers/tvan
> 1.4G    /vservers/vdb
> 1.9G    /vservers/vdb2
> 1.4G    /vservers/vde
> 1.1M    /vservers/vetc
> 23G     total
> 
> I am guessing this may also affect my memory usage. Does VServer use
> the hashify information to determine whether a binary being loaded into
> memory already exists? Thanks - John