I've been having some trouble using fatback's automatic
recovery on a partition that Gnome recently trashed
(for some reason the wastebasket is now linked to my
second Windows partition, so when I emptied the trash
it went off and deleted half the files on there before
I realised and stopped it :-()
Although it's only an 8 Gig partition and I've got 24
Gig spare in /home, I run out of space when running
'fatback --output=win_d_recovery/ --auto /dev/hda5'.
It seems there's a sector being misinterpreted as part
of a directory entry, and when it tries to recover its
contents (a bunch of files with random names,
occasionally including a non-printable control
character!) it ends up creating a number of files of
random data up to a Gig or more. I didn't expect this
to be a problem as I figured I could delete them again
from another shell as fast as they're created.
However, when I delete these files it's not freeing up
any space on the disk, and eventually the /home
partition fills up completely. lsof reveals that the
deleted files are in fact still open, which is why the
space remains unavailable.
I believe the cause of this is in recovery.c, in the
extract_file function. In almost every place before
the function returns, it does
close(file);
free(buffer);
free(fname);
except for one - there's a bit which reads
    if (chainlen < reqd_clusts) {
        display(VERBOSE, log_carve, fname);
        carve_file(clusts, cluster, size,
                   bytes_per_clust, file);
        return 0;
    }
So far as I can see, carve_file doesn't do any cleaning
up either, so in this case the file doesn't get closed.
In fact, I've been seeing a number of messages in the
log about carving files, so it seems likely to me that
this is what's stopping me deleting the files until
fatback exits (or is killed).
I guess the fix is to replace the above with
    if (chainlen < reqd_clusts) {
        display(VERBOSE, log_carve, fname);
        carve_file(clusts, cluster, size,
                   bytes_per_clust, file);
        close(file);
        free(fname);
        return 0;
    }
(buffer won't need freeing, as it hasn't been
emalloc'ed by this point.) As I'm mainly a Java
programmer, though, and my C is a bit rusty, I'd
appreciate it if someone else could take a look at it
and confirm this won't cause any unpleasant
side-effects elsewhere.
Thanks,
Andrew.
Something else that looks a bit dodgy in recovery.c - near
the top it has
    /* Find a filename for output */
    if (!(fname = unused_fname(filename))) {
        display(VERBOSE, error_naming, fname);
        free(fname);
        return -1;
    } else if (strcmp(fname, filename) != 0)
        display(VERBOSE, log_nametaken, filename, cluster,
                fname);
While it's good that it checks the return value from
unused_fname, if it did ever fail then this calls
free(NULL) - which is actually a harmless no-op per the
C standard - but it also passes the NULL fname on to
display(), and a printf-style "%s" on a null pointer is
undefined behaviour and quite likely to segfault...