On Sun, Oct 17, 2010 at 03:25:04PM +0100, Gordan Bobic wrote:
> On 17/10/2010 14:54, Michael S. Zick wrote:
> >On Sat October 16 2010, Gordan Bobic wrote:
> >>Hi,
> >>Can anybody hazard a guess as to what happened here? I'm prepared to
> >>consider any theory at the moment, no matter how far-fetched.
> >>I'm running 2.6.30.10-vs2.3.0.36.14-pre8. The file system is ext4
> >>without journal and in data=writeback mode.
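[Aside: a quick way to confirm a setup like that (journal-less ext4 mounted
data=writeback) is sketched below. It is only illustrative, needs root for
tune2fs, and assumes the root device shows up as the "/" entry in
/proc/mounts (on some systems that is /dev/root and needs adjusting).]

#!/usr/bin/env python3
# Illustrative sketch: report journal and data-mode settings of the root fs.
import subprocess

def root_entry():
    # /proc/mounts fields: device mountpoint fstype options dump pass
    for line in open("/proc/mounts"):
        dev, mnt, fstype, opts = line.split()[:4]
        if mnt == "/":
            return dev, fstype, opts.split(",")
    raise RuntimeError("no / entry in /proc/mounts")

dev, fstype, opts = root_entry()
data_mode = [o for o in opts if o.startswith("data=")] or ["data=<default>"]
print("root device:", dev, "(%s)" % fstype)
print("data mode  :", data_mode[0])

# tune2fs -l prints a "Filesystem features:" line; if "has_journal" is
# missing there, the journal really is absent.
out = subprocess.run(["tune2fs", "-l", dev],
                     capture_output=True, text=True).stdout
print("has_journal:", "has_journal" in out)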
> >Let's go with your first guess, file corruption, and speculate a bit...
> >We know that ext4 gets its speed from the high degree of meta-data and
> >data caching that it uses.
> >We know that if ext4 is not cleanly shut down, your file system is
> >burnt toast.
> >On any type of system.
> That is, in my experience, superstition. I have a number of laptops
> with SSDs, where I don't want the write overhead of journalling,
> running the exact same setup, and none of them have ever had any file
> corruption issues. Sure, sometimes after yanking the battery the files
> that were open for writing get broken and fsck puts their fragments in
> lost+found, but that's no worse than ext2 was before it.
putting superstition aside, can you recreate the issue?
i.e. is there a script or procedure which reliably
produces the 'corruption'?
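[Aside: the kind of harness being asked for might look like the sketch
below: seed a directory inside the guest with files of known checksums
before the shutdown, then verify them afterwards. The path, file count and
sizes are placeholders.]

#!/usr/bin/env python3
# Sketch: seed files with known checksums, then verify them after a reboot.
import hashlib, json, os, sys

BASE = "/vservers/guest1/root/corruption-check"    # placeholder path
MANIFEST = os.path.join(BASE, "manifest.json")

def seed(count=100, size=1 << 20):
    os.makedirs(BASE, exist_ok=True)
    manifest = {}
    for i in range(count):
        data = os.urandom(size)
        name = "file-%04d" % i
        with open(os.path.join(BASE, name), "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())       # make sure it actually hit the disk
        manifest[name] = hashlib.sha256(data).hexdigest()
    with open(MANIFEST, "w") as f:
        json.dump(manifest, f)

def verify():
    manifest = json.load(open(MANIFEST))
    bad = []
    for name, digest in manifest.items():
        with open(os.path.join(BASE, name), "rb") as f:
            if hashlib.sha256(f.read()).hexdigest() != digest:
                bad.append(name)
    print("%d of %d files corrupted" % (len(bad), len(manifest)))

if __name__ == "__main__":
    seed() if sys.argv[1:] == ["seed"] else verify()

Run it with "seed" before the unclean shutdown and with no argument
afterwards; if the problem is reproducible, verify() should flag files that
were never reopened for writing.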
> >Now, can we relate those behaviors to a single file system name space?
> >Or, first, was it limited to a single file system name space?
> Yes - there is only one partition, only one file system (the root one).
it is not a good idea to put Linux-VServer guests on
the same filesystem as the host (system). having at
least one separate partition (shared between all the
guests) is strongly advised.
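[Aside: a quick way to check whether the guests really do share the host's
filesystem, with placeholder guest paths: compare the st_dev of each guest
root against "/".]

#!/usr/bin/env python3
# Sketch: do the guest roots live on the same filesystem as the host root?
import os

GUEST_ROOTS = ["/vservers/guest1", "/vservers/guest2"]   # placeholder paths

host_dev = os.stat("/").st_dev
for root in GUEST_ROOTS:
    same = os.stat(root).st_dev == host_dev
    print(root, "-", "same filesystem as host" if same else "separate filesystem")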
> >Was the guest you were running and changing file content on the __only__
> >one that may have had changed files?
> Both guests are toast in exactly the same way. The host's binaries are
> fine and the host boots OK. The guests were running fine for days, with
> many guest reboots in the meantime. Things appear to have gone wrong
> when the host was shut down. That _might_ imply that things were running
> fine off caches pre-filled some time before, but it seems really
> strange that ALL binaries would be hosed, even the ones that were never
> touched. The only thing that would have touched them all that I can
> think of is hashify.
what do those 'corrupted' binaries contain?
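[Aside: one way to answer that, sketched below: check whether the suspect
binaries still begin with the ELF magic and how much of their content is
zero bytes, then compare against a known-good copy from the host. The file
names on the command line are up to you.]

#!/usr/bin/env python3
# Sketch: what do the 'corrupted' binaries contain? ELF magic? zero-fill?
import sys

def inspect(path):
    with open(path, "rb") as f:
        data = f.read()
    zeros = data.count(0)
    print("%s: %d bytes, first bytes %r, %.0f%% zero bytes"
          % (path, len(data), data[:4], 100.0 * zeros / max(len(data), 1)))
    if data[:4] != b"\x7fELF":
        print("  -> no ELF magic; the file header itself is gone")

for path in sys.argv[1:]:          # e.g. /vservers/guest1/bin/ls
    inspect(path)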
> >That one is a slim chance: the host context is writing to /var/log/* if
> >nothing else - did any of those get corrupted?
> My /var/log is on tmpfs in both the host and the guests (I'm on an SSD
> and don't need the logs, so I don't want them wasting my write cycles).
> >Were there other running guests on the system, with changed /
> >changing files that did not get corrupted?
> There are only two guests on the system, and they were both running.
> >Did you shut down just this one guest or the entire machine?
> First the guests individually, then the host machine. Clean shutdowns.
> >Are you using tagging on this file system?
> Tagging? What do you mean?
tagging as in the 'tag' mount option (which is
intentionally really hard to set on a single root
partition :)
best,
Herbert
> >Sorry for only having questions rather than answers.
> Questions are good, too. Right now I'm out of ideas so anything that
> comes up with possibilities is good.
> Gordan