And it's not looking good; this implementation seems to be 3x slower in the hot-cache scenario with -J8, which is a major regression. There's way too much lock contention and context switching. I haven't tested with actual disk I/O yet, nor have I measured how much parallelism this approach actually gets us in practice, or whether its disk access patterns make much sense. Maybe this low-memory approach won't work out and I'll end up rewriting this to scan disjoint subtrees after all.
TODO:
- Validate how much parallelism we can actually get with this algorithm
- Lots of benchmarking and tuning (and most likely some re-architecting)
- Re-implement exclude pattern matching
- Document the -J option
- Make OOM handling thread-safe
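As a rough way to reproduce the hot-cache comparison above, one could time an export-only scan with and without the parallel scanner. This is only a sketch: the -J8 value comes from the notes, -o and -0 are ncdu's usual export and quiet flags, and the scanned path is a placeholder:
time ncdu -0 -o /dev/null -J8 /path/to/large/tree
time ncdu -0 -o /dev/null /path/to/large/tree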
ncdu-zig
Description
Ncdu is a disk usage analyzer with an ncurses interface. It is designed to find space hogs on a remote server where you don't have an entire graphical setup available, but it is a useful tool even on regular desktop systems. Ncdu aims to be fast, simple and easy to use, and should be able to run in any minimal POSIX-like environment with ncurses installed.
See the ncdu 2 release announcement for information about the differences between this Zig implementation (2.x) and the C version (1.x).
Requirements
- Zig 0.9.0
- Some sort of POSIX-like OS
- ncurses libraries and header files
Install
You can use the Zig build system if you're familiar with that.
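For example, assuming build.zig exposes Zig's standard release options and installs to the default zig-out prefix, a release build and manual install could look like:
zig build -Drelease-fast
sudo cp zig-out/bin/ncdu /usr/local/bin/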
There's also a handy Makefile that supports the typical targets, e.g.:
make
sudo make install PREFIX=/usr