tracing
Run just one test for now
Use fuse2fs in path
Thanks. The easiest thing is just to build with --disable-sqlite. I'm not sure how useful this SQL indexer stuff is; there's absolutely nothing to use what gets written to the db! I've just pushed my wip branch so you can get my fix here: https://sourceforge.net/p/dump/code/ci/6e1ec6d65c3dc7d57640fa55b4d541d262ae6aca/ My branch needs some minor cleanups, and removing some of the commits that drag in some large binary images for my testing, but it addresses a number of open bugs and feature requests. My...
Better filename completion, handle more than 32 tapes read, some new tests
Temporary add images
Save compile_commands.json
Add --use-blockdev-for-dumpdates, support incremental dumps of a subdirectory, support dumping from a bind mount
Fix some whitespace
Cleanups of includes so that our local files are included with ""
Exclude quota and orphan inodes from dump (we should handle quotas specially, I guess!)
Require blkid support on linux unless explicitly disabled
Remove __STDC__
Update indexer_test
Fix sqlite_indexer compilation (--enable-sqlite)
Basic noroot stuff
Support fuse.ext4
fix memory leak
Improved detection of bad EA attributes
Build as C++
Test improvements
Fix make maintainer-clean
OpenAI ChatGPT offered, and gave me permission, to post its analysis: Yep - with just the link and the compiler output, that's enough to fix it. Your file is using old UFS-style di_ member names on an ext2_inode_large. In e2fsprogs' headers the members are i_ (and one of them changes name entirely: links -> i_links_count). Here's the authoritative struct so you can see the field names you actually have: i_atime, i_mtime, i_ctime, i_mode, i_uid, i_gid, i_links_count, i_size, i_file_acl, i_extra_isize,...
sqlite_indexer.c:384:17: error: 'struct ext2_inode_large' has no member named
Support for the musl C library
support ext4 user/group/project quota inodes.
Hi, I have since updated my workflow. I use a static mountpoint which seems to work OK.
dump 0.4-b52 does not find dumps in /etc/dumpdates, creates level 0
Oh bother, this is because of a change that was supposed to make it easier (actually possible at all) to cope where device nodes change. TBH, it hadn't occurred to me that someone would do this. If you're creating a snapshot, why do you mount it in order to dump it? You can use the device node directly (whether or not it's mounted):

```
nice -n+10 /sbin/dump -m -h 0 -11 -z8 -u -f dumpkukana@backupserver:/BACKUP/kukana//kukana_2025-06-11_rootfs_l11.dmp /dev/mapper/fedora_vncinstall-snap_backup_dump_rootfs -L kukana_rootfs...
```
Before dump/restore fixed the ftruncate and core dump errors, I checked out borg as a suitable replacement. Since I have to back up a bunch of qemu-kvm virtual drives, which are sparse files, I picked out three to back up with borg. Originals:

```
$ dd count=0 bs=1M seek=100 of=sparseFile
$ du --bytes KVM-W10.raw KVM-W11.raw sparseFile
64_424_509_440 KVM-W10.raw
133_143_986_176 KVM-W11.raw
104_857_600 sparseFile
$ du --block-size=1 KVM-W10.raw KVM-W11.raw sparseFile
48_891_629_568 KVM-W10.raw
51_850_604_544...
```
It doesn't seem to be overwritten by dnf upgrade. I just verified it myself. I also verified you have to remove the old one to upgrade to the one you posted.
I've emailed the fedora maintainer to ask if he can do another release with an updated spec file that will fix this issue.
It doesn't seem to be overwritten by dnf upgrade (although I don't know how to do the equivalent of apt-cache policy dump in rpm world). But no, I won't be uploading to the Fedora repos; I know almost nothing about Fedora, and I assume if I could upload it then so could you. I have absolutely no idea whether what I've done to the spec file is appropriate, nor how to upload even if I can.
Thank you!!! Will you be uploading the good ones to the Fedora repos? dnf update will overwrite the good ones otherwise.
Try these (they might conflict with the version numbers already installed, so you might have to uninstall the previous version first).
core dump - some code-paths do not return a malloced string in get_device_name when built with --disable-blkid
The code is the same. All that is required is that the libblkid-devel package is installed before it is built. The most likely "fix" will be to stop it building if this package isn't installed. The diff I posted above just adds BuildRequires: libblkid-devel to the spec file, which forces this to be installed before running configure.
I am confused. How do I get my hands on the good code?
core dump
:-( There is a bug, sadly, triggered by my valgrind fixes, but there's an easy fix for this that doesn't require any code changes:

```
root@dirac:/mnt/nobackup/rinse/fedora-40/root/rpmbuild# diff -u dump.spec.orig dump.spec
--- dump.spec.orig	2025-05-04 01:00:00.000000000 +0100
+++ dump.spec	2025-05-08 08:25:49.843062966 +0100
@@ -14,6 +14,7 @@
 BuildRequires: device-mapper-devel, libselinux-devel
 BuildRequires: lzo-minilzo
 BuildRequires: lzo-devel, libtool
+BuildRequires: libblkid-devel
 # This Requires...
```
The new rpm core dumps. See https://sourceforge.net/p/dump/bugs/185/ Bummer.
That's good to hear. The spec file I used can be extracted from the attached src.rpm:

```
rpm2cpio dump-0.4-0.59.b52.fc40.src.rpm | cpio -idmv
```

It's the spec file for the existing version with all the patches removed.
downstream just announced that he will be posting b52 in the repos shortly
Is this warning just noise or is there something wrong?
Not having any luck building my own. I do believe the holdup is the spec file. To create an rpm from scratch using the b52 tarball, what would your spec file look like?
Many C++isms in the source
I managed to build this in a fedora 40 chroot on a debian host fairly trivially, so I think it should be straightforward to do it for a newer version. (For reasons I don't understand, trying to build a package on fedora 42 using the method below to create a chroot on debian failed with a missing /sbin/ldconfig; I'm not prepared to spend time debugging it, but I assume when run on an RPM system it would "just work".) I think the following will allow you to build an up-to-date rpm: dnf install rpmdevtools...
RPM?
This is a harmless warning unless it is actually failing to ftruncate a file you are restoring (which will be obvious because the restored file will have the wrong length)
Does Red Hat build with COMPARE_ONTHEFLY unset? (That would require actually changing the code, as this is not settable via configure.) Because I don't see any other way that restore -C can possibly reach the only ftruncate call. (I will fix this in a future release - possibly by just removing that other code completely.) There was a further fix for another manifestation of this same warning when extracting a file, which went into 0.4b49, but that is not the same issue as this bug.
I am still seeing this on Fedora 41 and dump-0.4-0.57.b47.fc41.x86_64. Would you please re-open this bug report.
No problem, happy to help. Thanks for investing the time to make dump more portable!
0.4b51 has been pushed. Thank you so much for all your help, I might never have found this without all of the testing you did. I included those two missing headers you reported in this release.
Suppress spurious error message from faketape-regression.sh
make 0.4b51 release
Add missing headers
Fix memory errors
Update the test-image README and checksum
I'm preparing 0.4b51 with this, some other minor valgrind fixes and the missing headers. I run the full suite of tests on amd64 and i386 which takes a while but it should be up late tonight or early next week.
That was it!

```
$ sudo ./restore/restore -C -D /mnt/debian -f ../dump-0.4b50/debian.img
Dump   date: Fri Mar 21 12:47:12 2025
Dumped from: the epoch
Level 0 dump of /mnt/debian on mertle:/dev/mmcblk1p2
Label: none
filesys = /mnt/debian
$ sudo ./restore/restore -C -D /boot -f ../dump-0.4b50/boot.img
Dump   date: Fri Mar 21 12:22:26 2025
Dumped from: the epoch
Level 0 dump of /boot on mertle:/dev/nvme0n1p2
Label: _/boot
filesys = /boot
```
I've finally managed to get valgrind to complain, and it is that malloc above:

```
==31077== Syscall param read(buf) points to unaddressable byte(s)
==31077==    at 0x4AA521D: read (read.c:26)
==31077==    by 0x11AA2D: read_some_data (readtape.c:215)
==31077==    by 0x11AA2D: read_some_data (readtape.c:193)
==31077==    by 0x11AF86: readblock_tape (readtape.c:398)
==31077==    by 0x1160CD: readtape (tape.c:2023)
==31077==    by 0x116A40: getfile (tape.c:1286)
==31077==    by 0x117DF8: setup (tape.c:418)
==31077==    by 0x10C618:...
```
Something else to try is restore -b 2048 ... which should read the entire TS_CLRI bitmap in one go. If it then finds the TS_BITS bitmap and fails later then it does seem to indicate there's an issue actually reading data from the tape. Nothing I can do seems to be able to reproduce your issue which is very frustrating.
Hmm, well that all looks good - and yes, I expected that size==0 there, but it's not wrong to go into that case when it's zero. The time we'd need to enter there when size is 0 is when it's a file (rather than a bitmap) whose size is a multiple of 1K but not a multiple of the disk block size. Dump writes whole disk blocks, so we have to skip the unused parts. (So the patch is definitely wrong and the <= was correct.) Here we read the last block of that bitmap: mjo: readtape 1, i=1843, b=0, blksread=1843...
I applied that one-line patch, and then also added in a printf statement for the case where size - TP_BSIZE == 0. It looks like I am hitting it:

```
Begin compare restore
mjo: readtape 1, i=0, b=0, blksread=0
mjo: readtape 1, i=1, b=0, blksread=1
mjo: readtape 1, i=2, b=0, blksread=2
mjo: readtape 1, i=3, b=0, blksread=3
...
mjo: readtape 1, i=1842, b=0, blksread=1842
mjo: readtape 1, i=1843, b=0, blksread=1843
mjo: size - TP_BSIZE is zero
mjo: converthead: FAIL due to swabi(buf->c_magic) != NFS_MAGIC...
```
This is even more weird because, looking at dump-info, I do check the magic and the checksum there too. At line 385 of restore/tape.c we must have found the CLRI header, and there's no trace that we've skipped blocks.
Please also include tpblksread in the trace - that's counted from the start of the tape, while blksread is from the start of the file/bitmap.
I'm pretty sure you are reaching line 1305, but I cannot see how that can happen. You get to 1299 with size==0, b==0 and i==1843, having read the last block of the bitmap at line 1286. Then that do { } while loop sets enclen to 1, so readtape(junk); should never be reached. You could try this fix, but I'm suspicious it's wrong where we've read a file of, say, 1K on a system with blocks of 4K. Interestingly, all the "non slow tests" have passed; I'm running the historical-regression tests while...
"Incorrect block for <file removal list> at 1845 blocks" has read one too many blocks: because it starts at 0, it's skipped over the TS_BITS and then fails when it gets to the TS_INODE.
Hmmm, now I've looked at the code I don't know why it's gone wrong. converthead sets blksread to 0, which means "Incorrect block for <file removal list> at 1845 blocks" is reading what should be the TS_BITS header, which means that something went wrong with gethead at line 1316 of restore/tape.c. It's clearly read 1844 blocks at line 1286. Ohhh, it's gone wrong at line 1305.
This looks like an off-by-one bug in dump / restore. Hopefully easily fixed in restore, otherwise...

```
TS_CLRI at blockno 2 inode=15097856 mode=---------- UNKNOWN size=1888256 flags=
1844 data blocks with CLR-inode bitmap
```

That's 15M inodes stored in a bitmap of 1844 blocks, but:

```
scale=10
15097856/8/1024
1843.0000000000
```

It fits exactly in 1843 blocks.

```
Label: none
Incorrect block for <file removal list> at 1845 blocks
```

I suspect that's counting from 0, and it looks like it's gone wrong on the last...
Nevermind about catch, I was able to get it working with v3.7.1:

```
$ ./faketape_test
Randomness seeded to: 462966853
===============================================================================
All tests passed (79946 assertions in 6 test cases)
```
This does fail
What version of the catch library are you using? I think the two that are available on Gentoo are too old / too new, but I could build one myself. Ignoring that for the moment... To get the rest to build, I had to add #include <sys/stat.h> to faketape/bswap_header.cpp and faketape/dump-info.cpp. Afterwards, file-to-tape does work. This is what dump-info has to say:

```
dump-info: Starting ./dump-info
Tape has ntrec of 10
TS_TAPE at blockno 1 inode=0 mode=---------- UNKNOWN size=0 flags=
TS_CLRI at blockno...
```
I'm trying to guess what is most likely to be the cause. Struct padding would be my first thought, but u_spcl_size_assert is supposed to do a compile-time check on this. You might want to change, say, line 129 of compat/include/protocols/dumprestore.h to something like: case sizeof(us) == 1025 ? 1 : 0: and ensure that it fails to build, in case the musl compiler is doing something clever and not compiling an unused function, which would then hide a struct padding issue. In file included from dumprestore.c:44:...
Actually, now I remind myself about how this works, file-to-tape should work, it's dump-info that will hopefully give some clues as to what's going wrong.
```
diff --git a/faketape/file-to-tape.cpp b/faketape/file-to-tape.cpp
index 03237b91..07c88ac6 100644
--- a/faketape/file-to-tape.cpp
+++ b/faketape/file-to-tape.cpp
@@ -172,7 +172,7 @@ int main(int argc, char* argv[]) {
 		}
 	}
 	close(dumpfd);
-	warnx("Dump %s written successfully", argv[0]);
+	warnx("Dump %s read successfully", argv[0]);
 	argv++;
 }
 tape.close();
```
Ha! "file-to-tape: Dump ../x.img written successfully" - that's a bug. It's not writing x.img at all, only reading it.