Commit graph

9 commits

Yorhel
ca46c7241f Fix off-by-one in binfmt reader 2024-08-18 08:38:57 +02:00
Yorhel
26229d7a63 binfmt: Remove "rawlen" field, require use of ZSTD_getFrameContentSize()
The zstd frame format already provides this information, and I don't
see a benefit in not making use of it.
2024-08-11 15:56:14 +02:00
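
As a reader-side sketch of the change above, assuming zstd is linked via
the C ABI (the helper name frameContentSize is made up; the sentinel
constants mirror their zstd.h definitions of (0ULL - 1) and (0ULL - 2)):

    const std = @import("std");

    // zstd.h: unsigned long long ZSTD_getFrameContentSize(const void *src, size_t srcSize);
    extern fn ZSTD_getFrameContentSize(src: ?*const anyopaque, srcSize: usize) c_ulonglong;

    const ZSTD_CONTENTSIZE_UNKNOWN: c_ulonglong = std.math.maxInt(c_ulonglong); // (0ULL - 1)
    const ZSTD_CONTENTSIZE_ERROR: c_ulonglong = std.math.maxInt(c_ulonglong) - 1; // (0ULL - 2)

    // Read the uncompressed size from the zstd frame header itself,
    // which is what makes a separate "rawlen" field redundant.
    fn frameContentSize(frame: []const u8) !u64 {
        const size = ZSTD_getFrameContentSize(frame.ptr, frame.len);
        if (size == ZSTD_CONTENTSIZE_ERROR) return error.InvalidFrame;
        if (size == ZSTD_CONTENTSIZE_UNKNOWN) return error.SizeNotStored;
        return size;
    }

One caveat this relies on: ZSTD_getFrameContentSize() can legitimately
return ZSTD_CONTENTSIZE_UNKNOWN for streamed frames, so the writer has
to produce frames that record the size in the header (the single-shot
ZSTD_compress() does).
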
Yorhel
c30699f93b Track which extended mode fields we have + bugfixes
This prevents displaying invalid zero values or writing such values out
in JSON/bin exports. Very old issue, actually, but with the new binfmt
experiments it's finally started annoying me.
2024-08-09 18:32:47 +02:00
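
The message doesn't spell out the mechanism, but a plausible shape
(field set and names are illustrative, not ncdu's actual code) is a
presence bit per extended field:

    // A zero stays distinguishable from "field was never read".
    const ExtPresent = packed struct {
        uid: bool = false,
        gid: bool = false,
        mode: bool = false,
        mtime: bool = false,
    };

    const Ext = struct {
        present: ExtPresent = .{},
        uid: u32 = 0,
        gid: u32 = 0,
        mode: u16 = 0,
        mtime: u64 = 0,

        // Exporters can then skip absent fields instead of emitting zeros.
        fn mtimeOrNull(e: Ext) ?u64 {
            return if (e.present.mtime) e.mtime else null;
        }
    };
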
Yorhel
6b7983b2f5 binfmt: Support larger (non-data) block sizes
I realized that the 16 MiB limitation implied that the index block could
only hold ((2^24)-16)/8 ≈ 2 million data block pointers. At the default
64k data block size that means an export can only reference up to
~128 GiB of uncompressed data. That's pretty limiting.

This change increases the maximum size of the index block to 256 MiB,
supporting ~33 million data block pointers and ~2 TiB of uncompressed data
with the default data block size.
2024-08-09 09:40:29 +02:00
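
The arithmetic checks out; here it is in test form (the 16-byte overhead
and 8-byte pointer width are inferred from the ((2^24)-16)/8 formula in
the message above):

    const std = @import("std");

    test "index block capacity" {
        const ptr_size = 8; // bytes per data block pointer
        const data_block = 64 * 1024; // default data block size

        const old_ptrs = ((1 << 24) - 16) / ptr_size; // 2_097_150 pointers
        const new_ptrs = ((1 << 28) - 16) / ptr_size; // 33_554_430 pointers

        // Just short of 128 GiB referenced before, just short of 2 TiB after.
        try std.testing.expect(old_ptrs * data_block <= 128 << 30);
        try std.testing.expect(new_ptrs * data_block >= (2 << 40) - 2 * data_block);
    }
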
Yorhel
252f7fc253 Use u64 for item counts in binary export
They're still clamped to a u32 upon reading, but at least the file will
have correct counts and can be read even when a scan exceeds 4.2 billion
items.
2024-08-08 11:37:55 +02:00
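
A sketch of that read-side clamp (the helper name is made up): saturate
at the u32 maximum rather than truncating or refusing the file.

    const std = @import("std");

    fn clampItemCount(raw: u64) u32 {
        // Saturating, not wrapping: a 5-billion-item count reads as 2^32 - 1.
        return @intCast(@min(raw, std.math.maxInt(u32)));
    }

    test "clampItemCount" {
        try std.testing.expectEqual(@as(u32, 42), clampItemCount(42));
        try std.testing.expectEqual(@as(u32, std.math.maxInt(u32)), clampItemCount(5_000_000_000));
    }
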
Yorhel
30d6ddf149 Support direct browsing of a binary export
The code is hackier than I'd prefer, but this approach does work and
isn't even as involved as I had anticipated.

Still a few known bugs and limitations left to resolve.
2024-08-06 09:50:10 +02:00
Yorhel
8ad61e87c1 Stick with zstd-4 + 64k block, add --compress-level, fix 32bit build
Also do dynamic buffer allocation for bin_export, removing the 128k of
.rodata that I accidentally introduced earlier and reducing memory use
for parallel scans.

Static binaries now also include the minimal version of zstd, current
sizes for x86_64 are:

  582k ncdu-2.5
  601k ncdu-new-nocompress
  765k ncdu-new-zstd

That's not great, but also not awful. Even zlib or LZ4 would've resulted
in a 700k binary.
2024-08-03 13:16:44 +02:00
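
The dynamic allocation presumably amounts to sizing the output buffer
per export rather than as a fixed global array; a sketch under that
assumption (allocCompressBuf is a made-up name, ZSTD_compressBound() is
the real worst-case bound from zstd.h):

    const std = @import("std");

    // zstd.h: size_t ZSTD_compressBound(size_t srcSize);
    extern fn ZSTD_compressBound(srcSize: usize) usize;

    // Allocated only when an export is actually running, so plain scans
    // (and each thread of a parallel scan) don't carry the buffer around.
    fn allocCompressBuf(alloc: std.mem.Allocator, block_size: usize) ![]u8 {
        return alloc.alloc(u8, ZSTD_compressBound(block_size));
    }
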
Yorhel
85e12beb1c Improve performance of bin format import by 30%
By calling die() instead of propagating error unions. It's not
surprising that error propagation has a performance impact, but I had
hoped it wouldn't be this bad.

Import performance was already quite good, but now it's even better!
With the one test case I have it's faster than JSON import, but I expect
that some dir structures will be much slower.
2024-08-02 14:09:46 +02:00
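
A self-contained sketch of the contrast (ncdu's real die() lives in its
UI code; this version is simplified): with an error union, every call
site on the hot import path carries `try` plumbing, while die() never
returns and keeps the success path a plain u8.

    const std = @import("std");

    fn die(comptime fmt: []const u8, args: anytype) noreturn {
        std.debug.print(fmt ++ "\n", args);
        std.process.exit(1);
    }

    const Reader = struct {
        buf: []const u8,
        off: usize = 0,

        // Error-union style: the error branch is threaded through
        // every level of the import loop.
        fn nextOrError(r: *Reader) !u8 {
            if (r.off >= r.buf.len) return error.UnexpectedEof;
            defer r.off += 1;
            return r.buf[r.off];
        }

        // die() style: truncated input is fatal anyway, so report and exit.
        fn next(r: *Reader) u8 {
            if (r.off >= r.buf.len) die("bin import: unexpected end of file", .{});
            defer r.off += 1;
            return r.buf[r.off];
        }
    };
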
Yorhel
025e5ee99e Add import function for the new binary format
This isn't yet the low-memory browsing experience I was hoping to
implement, but it serves as a good way to test the new format, and such
a sink-based import is useful to have anyway.

Performance is much better than I had expected, and I haven't even
profiled anything yet.
2024-08-02 14:03:30 +02:00
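
One way to picture a sink-based import (all names illustrative, not
ncdu's actual interface): the parser pushes enter/add/leave events into
a vtable-style sink instead of building a tree itself, so the same
parser can feed an in-memory tree, a re-export, or later a low-memory
browser.

    const Sink = struct {
        ctx: *anyopaque,
        enterDir: *const fn (ctx: *anyopaque, name: []const u8) void,
        addFile: *const fn (ctx: *anyopaque, name: []const u8, size: u64) void,
        leaveDir: *const fn (ctx: *anyopaque) void,
    };
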