While this simplifies the code a bit, it's a regression in the sense
that it increases memory use.
This commit is yak shaving for another hard link counting approach I'd
like to try out, which should be a *LOT* less memory hungry than the
current approach, even though it does add the extra cost of these
parent node pointers.
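For context, a rough sketch of what that extra cost amounts to - these
are illustrative structs, not the actual ncdu code - it's one optional
pointer per entry:

    // Illustrative only: the cost is one optional pointer
    // (@sizeOf(usize) bytes) per entry in the tree.
    const Entry = struct {
        name: []const u8,
        size: u64 = 0,
        parent: ?*Entry = null, // the newly added back-reference
        next: ?*Entry = null, // sibling list
        sub: ?*Entry = null, // first child, for directories
    };

    // What the parent pointer buys: walking from any entry up to the
    // root becomes trivial, which a cheaper hard link counting pass
    // could build on.
    fn depth(e: *const Entry) usize {
        var d: usize = 0;
        var p = e.parent;
        while (p) |q| : (p = q.parent) d += 1;
        return d;
    }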
This complicated the scan code more than I had anticipated, and it has
a few inherent bugs with respect to calculating shared hardlink sizes.
Still, the merge approach avoids creating a full copy of the subtree,
so that's another memory-usage win compared to the C version. On the
other hand, it does leak memory if nodes can't be reused.
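Roughly, the merge folds freshly scanned items into the existing nodes
instead of rebuilding the subtree; the leak comes from nodes that can't
be matched up this way and get dropped without being freed. A
simplified sketch of the reuse step (hypothetical names, not the real
scan code):

    const std = @import("std");

    const Entry = struct {
        name: []const u8,
        size: u64 = 0,
        next: ?*Entry = null,
        sub: ?*Entry = null,
    };

    // Reuse an existing child with the same name if there is one,
    // otherwise allocate a fresh node.
    fn mergeChild(alloc: std.mem.Allocator, dir: *Entry, name: []const u8) !*Entry {
        var it = dir.sub;
        while (it) |e| : (it = e.next) {
            if (std.mem.eql(u8, e.name, name)) return e; // keep its subtree
        }
        const e = try alloc.create(Entry);
        e.* = .{ .name = try alloc.dupe(u8, name), .next = dir.sub };
        dir.sub = e;
        return e;
    }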
Not quite as well tested as it should be, so I'm sure there are bugs.
Two differences compared to the C version:
- You can now select individual paths in the listing; pressing enter
will open the selected path in the browser window.
- Creating this listing is much slower and requires, in the worst case,
a full traversal through the in-memory tree (see the sketch after this
list). I've tested this without the same-dev and shared-parent
optimizations (i.e. worst case) on an import with 30M files and
performance was still quite acceptable - the listing completed in a
second - so I didn't bother adding a loading indicator. On slower
systems and even larger trees this may be a little annoying, though.
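A minimal sketch of that worst-case traversal - it only counts matches
here, while the real listing also records the path of each match, and
the same-dev / shared-parent checks can prune most of the tree; names
and fields below are made up:

    const Entry = struct {
        name: []const u8,
        dev: u64 = 0,
        ino: u64 = 0,
        next: ?*Entry = null,
        sub: ?*Entry = null,
    };

    // Count the entries that share (dev, ino) with `target`; in the
    // worst case this visits every node in the tree exactly once.
    fn countLinks(dir: *const Entry, target: *const Entry) usize {
        var n: usize = 0;
        var it = dir.sub;
        while (it) |e| : (it = e.next) {
            if (e.dev == target.dev and e.ino == target.ino) n += 1;
            if (e.sub != null) n += countLinks(e, target);
        }
        return n;
    }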
(also, calling nonl() apparently breaks detection of the return key:
neither \n nor KEY_ENTER is emitted, for some reason)
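For reference, my understanding is that nonl() stops ncurses from
translating the return key into a newline on input, so getch() then
reports a bare carriage return (13) instead, while KEY_ENTER is only
sent for the keypad enter key on some terminals. Accepting all three
codes sidesteps the issue; a tiny sketch (openSelectedPath() is a
made-up placeholder):

    const c = @cImport(@cInclude("curses.h"));

    fn openSelectedPath() void {
        // placeholder for "switch the browser window to this path"
    }

    fn handleKey(ch: c_int) void {
        switch (ch) {
            10, 13, c.KEY_ENTER => openSelectedPath(), // \n, \r, keypad enter
            else => {},
        }
    }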
In a similar way to the C version of ncdu: by wrapping malloc(). It's
simpler to handle allocation failures at the source to allow for easy
retries; pushing the retries up the stack would complicate the code
somewhat more. Likewise, this is a best-effort approach to handling
OOM: allocation failures in ncurses aren't handled, and display
glitches may occur when we get an OOM inside a drawing function.
This is a somewhat un-Zig-like way of handling errors and adds
scary-looking 'catch unreachable's all over the code, but that's okay.
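A minimal sketch of the wrapper idea - not the actual ncdu code, and
oomPrompt() is a placeholder for the real UI interaction: allocations
go through one helper that can retry on the spot, and whatever still
talks to a fallible API directly gets one of those 'catch
unreachable's.

    const std = @import("std");

    // Placeholder: the real thing would show an "out of memory" window
    // and let the user abort or free up memory elsewhere and retry.
    fn oomPrompt() void {
        std.debug.print("out of memory\n", .{});
    }

    // All allocations go through a helper like this, so an OOM can be
    // retried right where it happens instead of bubbling an error enum
    // up the call stack.
    fn allocate(alloc: std.mem.Allocator, comptime T: type, n: usize) []T {
        while (true) {
            if (alloc.alloc(T, n)) |buf| {
                return buf;
            } else |_| {
                oomPrompt();
            }
        }
    }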
Performance is looking great, but the code is rather ugly and
potentially buggy. It also doesn't handle hard links without an "nlink"
field yet.
Error handling of the import code is different from what I've been
doing until now. That's intentional; I'll change the error handling of
other pieces to call ui.die() directly rather than propagating error
enums. The approach is less testable but conceptually simpler, and
that's perfectly fine for a tiny application like ncdu.
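A hypothetical sketch of the two styles side by side; the ui.die()
signature below is assumed, not the real one:

    const std = @import("std");

    // Assumed shape of ui.die(): format a message, clean up the
    // terminal and exit. The real signature may well differ.
    const ui = struct {
        fn die(comptime fmt: []const u8, args: anytype) noreturn {
            std.debug.print(fmt, args);
            std.process.exit(1);
        }
    };

    // Propagating style: the caller has to deal with the error enum...
    fn openFilePropagating(path: []const u8) !std.fs.File {
        return std.fs.cwd().openFile(path, .{});
    }

    // ...versus the import-code style: bail out on the spot.
    fn openFileOrDie(path: []const u8) std.fs.File {
        return std.fs.cwd().openFile(path, .{}) catch |e|
            ui.die("error opening {s}: {s}\n", .{ path, @errorName(e) });
    }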
I plan to add more display options, but ran out of keys to bind.
Probably going for a quick-select menu thingy so that we can keep the
old key bindings for people accustomed to them.
The graph width algorithm is slightly different, but I think this one's
a minor improvement.
Now we're getting somewhere. This works surprisingly well, too. Existing
ncdu behavior is to remember which entry was previously selected but not
which entry was displayed at the top, so the view would be slightly
different when switching directories. This new approach remembers both
the entry and the offset.
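A sketch of what that per-directory state could look like -
hypothetical names, with a bit of clamping so the values stay sane if
the directory shrank in the meantime:

    // Previously only the equivalent of `selected` was remembered;
    // keeping `top` as well restores the exact view.
    const View = struct {
        selected: usize = 0, // index of the selected entry
        top: usize = 0, // index of the entry shown on the first row
    };

    fn restore(saved: View, num_items: usize, rows: usize) View {
        var v = saved;
        if (num_items == 0) return .{};
        if (v.selected >= num_items) v.selected = num_items - 1;
        if (v.top > v.selected) v.top = v.selected;
        if (v.selected - v.top >= rows) v.top = v.selected + 1 - rows;
        return v;
    }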
I initially wanted to keep a directory's own block count and size as
separate fields so that exporting an in-memory tree to a JSON dump
would be easier, but that doesn't seem like a common operation to
optimize for. We'll probably need the algorithms to subtract sub-items
from directory counts anyway, so such an export can still be
implemented, albeit slower.
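For illustration, this is the kind of per-directory subtraction such an
exporter would have to do (illustrative structs, with cumulative counts
as stored in the tree):

    const Entry = struct {
        name: []const u8,
        blocks: u64 = 0, // cumulative: includes everything below
        size: u64 = 0, // cumulative as well
        next: ?*Entry = null,
        sub: ?*Entry = null,
    };

    // Recover a directory's own size by subtracting the cumulative
    // totals of its direct children from its own cumulative total.
    fn ownSize(dir: *const Entry) u64 {
        var s = dir.size;
        var it = dir.sub;
        while (it) |e| : (it = e.next) s -= e.size;
        return s;
    }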
libc locale-dependent APIs are pure madness, but I can't avoid them as
long as I use ncurses. libtickit seems like a much saner alternative (at
first glance), but no popular application seems to use it. :(
I tried playing with zbox (a pure Zig termbox-like lib) for a bit, but
I don't think I want to have to deal with the terminal support issues
that will inevitably come with it. I already stumbled upon one myself:
it doesn't properly restore the terminal to a sensible state after
cleanup in tmux. As much as I dislike ncurses, it /is/ ubiquitous and
tends to kind of work.