On 06 Jan 2016 07:06, Gordan Bobic wrote:
> One other thing - did you make sure prelink is undone and uninstalled in all your guests before you hashified the guest contents?
Yes, there is no prelink installed. The test guests are "pure" stage3
Gentoo images from around Xmas, and I cannot find any traces of
prelink in them.
> On 6 Jan 2016 2:31 am, Herbert Poetzl <herbert@13thfloor.at> wrote:
>>
>> On Mon, Jan 04, 2016 at 01:51:17AM +0100, Tor Rune Skoglund wrote:
>>> On 01 Jan 2016 22:25, Herbert Poetzl wrote:
>>>> On Fri, Jan 01, 2016 at 09:37:52PM +0100, Tor Rune Skoglund wrote:
>>>>> Having been a happy Linux-VServer user for more than 10 years
>>>>> now, I figured it was about time to test the hashify feature.
>>>>> The disk savings are obvious and easily measured, but measuring
>>>>> any possible run-time memory savings has proven a lot harder.
>>
>>>>> For the testing, I created a simple template LAMP guest, and
>>>>> a lot of hashified guests cloned from that one. I am unable
>>>>> to measure noticeably less memory usage when running multiple
>>>>> hashified guests compared to non-hashified ones, using free and
>>>>> /proc/meminfo's MemAvailable entry.
>>>>> However, this could very well be due to shortcomings in my own
>>>>> understanding of how this should work or what to look for.
>>>>> What should I look for regarding possible memory savings?
>>>>> Anyone with any pointers?
>>
>>>> You won't see any memory savings with dynamic memory allocations,
>>>> and you won't get any benefit from read-write mappings either,
>>>> but you should be able to see a reduction for read-only mappings,
>>>> such as those created for static binaries, read-only mapped
>>>> shared libraries, and read-only memory-mapped data files.
>>
>>> Thank you, Herbert.
>>
>>> Although I have been reading a lot lately and trying my best
>>> to get a grip on how this works, I am still a newbie in this
>>> area.
>>
>>> So please excuse me for continuing to ask possibly stupid
>>> questions.... ;)
>>
>> No problem.
>>
>>> As far as I can tell, all code and libraries are built as PIC
>>> by default on my setup. (Is this a requirement?)
>>
>> PIC (position independent code) is nice because it doesn't
>> require patching the code after it is loaded, but PIC is
>> not a requirement.
>>
>>> Does your comment above then mean that all read only mappings
>>> can be shared across guests no matter their setting of the
>>> execute flag and the MAP_SHARED/MAP_PRIVATE flag?
>>
>> Files do not get mapped in Unix; inodes get mapped into
>> memory. Unification works by using hard links to share
>> identical files between guests in a "safe" way.
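>>
>> A minimal sketch of what unification looks like on disk (the
>> guest paths here are just hypothetical examples): two hard
>> links resolve to the same inode, so the kernel sees a single
>> file no matter which guest opens it:
>>
>>   #include <stdio.h>
>>   #include <sys/stat.h>
>>
>>   int main(void)
>>   {
>>       struct stat a, b;
>>       /* hypothetical unified files in two guest roots */
>>       if (stat("/vservers/guest1/lib64/libc.so.6", &a) ||
>>           stat("/vservers/guest2/lib64/libc.so.6", &b)) {
>>           perror("stat");
>>           return 1;
>>       }
>>       printf("guest1 inode %lu, guest2 inode %lu\n",
>>              (unsigned long)a.st_ino, (unsigned long)b.st_ino);
>>       if (a.st_ino == b.st_ino && a.st_dev == b.st_dev)
>>           printf("unified: same inode, mappings are shared\n");
>>       else
>>           printf("not unified: separate inodes\n");
>>       return 0;
>>   }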
>>
>> A mapping which results from simply reading a page from
>> an inode into memory, without further modification, creates
>> a page cache entry (inode -> mapping) which is reused when
>> that page of the very same inode is mapped again (at least
>> that is how it worked in 2.4, and I doubt that has changed
>> since :)
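>>
>> To illustrate (a sketch, using /bin/ls only as a stand-in for
>> any regular file): two read-only private mappings of the same
>> inode are both served from the same page cache pages, and
>> mincore() shows the page is already resident after the first
>> access through either mapping:
>>
>>   #include <fcntl.h>
>>   #include <stdio.h>
>>   #include <sys/mman.h>
>>   #include <unistd.h>
>>
>>   int main(void)
>>   {
>>       int fd = open("/bin/ls", O_RDONLY);
>>       long pg = sysconf(_SC_PAGESIZE);
>>       if (fd < 0) { perror("open"); return 1; }
>>
>>       /* two independent r--p mappings of the same inode */
>>       char *m1 = mmap(NULL, pg, PROT_READ, MAP_PRIVATE, fd, 0);
>>       char *m2 = mmap(NULL, pg, PROT_READ, MAP_PRIVATE, fd, 0);
>>       if (m1 == MAP_FAILED || m2 == MAP_FAILED)
>>           { perror("mmap"); return 1; }
>>
>>       /* faulting in one mapping populates the cache page that
>>        * backs both of them */
>>       volatile char c = m1[0]; (void)c;
>>
>>       unsigned char vec = 0;
>>       if (mincore(m2, pg, &vec) == 0)
>>           printf("page resident in cache: %s\n",
>>                  (vec & 1) ? "yes" : "no");
>>       return 0;
>>   }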
>>
>>> (In my test setup, based on grepping /proc/*/maps for "r--p"
>>> and "r--s", there are very few shared read-only mapped files
>>> ("r--s") compared to read-only private ones ("r--p").
>>
>> Shared read-only vs. private read-only is only relevant
>> for pages which somebody can write to. Typically (as you
>> figured out already), code and data from binaries and
>> libraries are not written to, so a read-only mapping
>> basically means "get a reference to that page".
>>
>>> It seems like almost every binary or .so has a considerable
>>> read-only private section, which would then be part of the
>>> assumed memory savings.)
>>
>>> If not, what should I look for, e.g. using /proc/<pid>/maps,
>>> pmap, or some other way?
>>
>>> How does KSM ( https://en.wikipedia.org/wiki/Kernel_same-page_merging )
>>> play with linux-vserver? If at all?
>>
>> As far as I know, same-page merging is only active on
>> pages which are explicitly registered for this service,
>> e.g. by kvm when allocating memory for virtual machines.
>>
>> So for Linux-VServer there is no real benefit as the
>> KSM part won't be active for guests.
>>
>> Nevertheless, you can run applications with a preload
>> library which advises their pages as candidates for
>> merging.
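>>
>> A minimal sketch of what such a preload/wrapper boils down
>> to: mark a region with MADV_MERGEABLE so ksmd may deduplicate
>> identical pages (requires a kernel with CONFIG_KSM and
>> /sys/kernel/mm/ksm/run set to 1):
>>
>>   #define _GNU_SOURCE
>>   #include <stdio.h>
>>   #include <string.h>
>>   #include <sys/mman.h>
>>   #include <unistd.h>
>>
>>   int main(void)
>>   {
>>       size_t len = 64 * 4096;
>>       char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
>>                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
>>       if (buf == MAP_FAILED) { perror("mmap"); return 1; }
>>
>>       memset(buf, 0x42, len);   /* many identical pages */
>>
>>       /* register the region as a KSM merge candidate */
>>       if (madvise(buf, len, MADV_MERGEABLE))
>>           perror("madvise(MADV_MERGEABLE)");
>>       else
>>           puts("registered; ksmd may merge duplicates");
>>       pause();   /* stay alive so ksmd gets a chance to scan */
>>       return 0;
>>   }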
>>
>>> Lastly, I am sorry if I am jumping to wrong conclusions
>>> somewhere here... Please feel free to brutally educate me. :)
>>
>> The main problem is finding good test scenarios and
>> using proper instrumentation to demonstrate a real
>> benefit.
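>>
>> One way to instrument it (a sketch; assumes a kernel that
>> exposes Pss in smaps, which your 3.18 does): sum the Pss
>> lines of /proc/<pid>/smaps. Shared pages are divided among
>> all their users, so the total Pss over all guest processes
>> should drop if unification really shares pages:
>>
>>   #include <stdio.h>
>>   #include <stdlib.h>
>>   #include <string.h>
>>
>>   int main(int argc, char **argv)
>>   {
>>       if (argc != 2) {
>>           fprintf(stderr, "usage: %s <pid>\n", argv[0]);
>>           return 1;
>>       }
>>       char path[64], line[256];
>>       snprintf(path, sizeof path, "/proc/%s/smaps", argv[1]);
>>       FILE *f = fopen(path, "r");
>>       if (!f) { perror(path); return 1; }
>>
>>       long pss_kb = 0;
>>       while (fgets(line, sizeof line, f))
>>           if (!strncmp(line, "Pss:", 4))
>>               pss_kb += strtol(line + 4, NULL, 10);
>>       fclose(f);
>>       printf("pid %s: %ld kB Pss\n", argv[1], pss_kb);
>>       return 0;
>>   }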
>>
>> Honestly, I do not consider the savings worthwhile for
>> typical guest setups (aka virtual server hosting),
>> because both memory and disk space have become very
>> cheap, and the main consumers (like, for example, Java :)
>> won't benefit from shared pages at all.
>>
>> That said, in scenarios where you run a rather complex
>> binary with a lot of read-only data several thousand
>> times in parallel, the memory savings might easily be
>> worthwhile.
>>
>> Best,
>> Herbert
>>
>>> BR,
>>> Tor Rune Skoglund, trs@swi.no
>>
>>>> If I were to devise a test to show the advantages, I would take a
>>>> binary which doesn't do many dynamic allocations but uses a lot
>>>> of code and/or libraries, and run it as the only process in each
>>>> guest with a few thousand guests in parallel, once with and once
>>>> without unification in place.
>>>> Best,
>>>> Herbert
>>
>>>>> This is Gentoo, util-vserver 0.30.216_pre3120, kernel Linux amd64
>>>>> 3.18.7-vs2.3.7.4.
>>>>> BR, Tor Rune Skoglund
>>>>> trs@swi.no
>>