Hi - I have a system in front of me with Ubuntu 9.04 on it; it was
installed with ext3. The part that is escaping my understanding is
recovering from a file corruption of some sort. Duplicate or bad
blocks is the error message. The automatic run of fsck - in recovery
mode - dies with exit status 4.
A manual run of fsck shows a list of what it calls multiply-claimed
blocks in several inodes, and there's a prompt asking yes or no about
cloning them.
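For anyone decoding that exit status: fsck's return code is a bitmask, documented in the fsck(8)/e2fsck(8) man pages, and 4 means errors were left uncorrected. A minimal sketch of the decoding - `decode_fsck_status` is just an illustrative name, not a real command:

```shell
# Decode fsck's exit status bitmask (bit meanings per the fsck(8) man page).
# decode_fsck_status is a hypothetical helper, not part of e2fsprogs.
decode_fsck_status() {
    status=$1
    [ $((status & 1)) -ne 0 ] && echo "filesystem errors corrected"
    [ $((status & 2)) -ne 0 ] && echo "system should be rebooted"
    [ $((status & 4)) -ne 0 ] && echo "filesystem errors left uncorrected"
    [ $((status & 8)) -ne 0 ] && echo "operational error"
    [ $((status & 16)) -ne 0 ] && echo "usage or syntax error"
    [ "$status" -eq 0 ] && echo "no errors"
    return 0
}

# The status the recovery-mode run died with:
decode_fsck_status 4
```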
Needless to say, I am quite aware that my lack of understanding risks
avoidable data loss. Any suggestions, both on what to do and on the best
training document(s) so I can understand why/how this issue happened in
the first place, will be appreciated. This is one of those gaps in my
understanding of Linux admin skills. Either this is so drop-dead
simple that I will feel more of an idiot for not having "known" what
is going on, or it's going to be non-trivial to recover from.
My first suggested fix was using Puppy etc. and an external drive to
simply grab any unique personal files that are intact, then DBAN-wipe
that drive before starting over. Which is what I call a "Last Resort
First" tactic. I hate having to use them, but LRF-type methods have
become a major mental hygiene tool for me :)
Of course, I am deeply curious whether my oft-preached model for data
protection - keeping the OS and user data on physically separate
drives - would have helped here, or not.
--
Oren Beck
816.729.3645
I forgot to mention a few more points.
1) fsck -y isn't designed, at least in single-pass mode, to ensure the filesystem is completely fixed (see next point). It is designed to make sure that further writes to the fs don't make it worse, among other things.
2) After you run fsck and it reports "FILE SYSTEM MODIFIED", you must run fsck again to make sure you don't have further errors. fsck must be run repeatedly until you no longer get that message. Only then are all the problems with the fs fixed.
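A sketch of that re-run-until-clean loop, assuming e2fsck's documented exit bits (1 = errors corrected, 4 = errors left uncorrected). `run_until_clean` is a made-up helper name; it takes the checker command as arguments so the loop logic can be tried against a stand-in, and on a real system you would pass it something like `e2fsck -f -y /dev/sdXN` against an unmounted filesystem:

```shell
# Re-run the checker until it stops reporting corrections.
# run_until_clean is a hypothetical helper, not a real tool.
run_until_clean() {
    passes=0
    while :; do
        passes=$((passes + 1))
        "$@"
        status=$?
        # Bit 1 set (errors corrected) with bit 4 clear: the fs was
        # modified, so check again. Anything else: stop and report.
        if [ $((status & 1)) -ne 0 ] && [ $((status & 4)) -eq 0 ]; then
            continue
        fi
        echo "done after $passes pass(es), final status $status"
        return "$status"
    done
}
```

A final status of 0 means the last pass found nothing left to fix; a nonzero final status means manual intervention is still needed.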
3) If the list of messages includes a note about the hard link count being wrong, and the count is too few, then using the -y fix *will* cause the deletion of the file - a file that would have been totally recoverable had you not chosen the -y option.
4) In the particular case at hand, where there are multiply-linked files, choosing not to clone may also leave a good file overwritten with information that is not good, hence causing you to lose a file. The opposite is also true.
Lastly, my source for my notes comes from the linux-ext4 mailing list, but what do those guys know anyway?
Jack
Hello, all.
Attached is a press release about a new FLOSS conference coming up in May 2011. If you are interested in volunteering, speaking or sponsoring the event, please let us know as soon as possible. Thank you for your time!
Regards,
Russ
MAGNet Con
http://magnetcon.info