Compare commits


264 commits
v1.14.2 ... zig

Author SHA1 Message Date
Yorhel
1b3d0a670e Version 2.9.2 2025-10-24 10:00:44 +02:00
Yorhel
f452244576 Fix infinite loop when reading config file on Zig 0.15.2
Works around Zig issue https://github.com/ziglang/zig/issues/25664

Fixes #266
2025-10-23 11:27:01 +02:00
Yorhel
14bb8d0dd1 Version 2.9.1 2025-08-21 09:11:18 +02:00
Yorhel
19cfdcf543 Fix bug with drawing scan progress before calling ui.init()
This triggered an invalid integer cast that wasn't caught with Zig's
LLVM backend, but it did trigger on the native x86_64 backend.
2025-08-19 14:17:53 +02:00
Yorhel
5129de737e Zig 0.15: Fix support for new IO interface
I've managed to get a single codebase to build with both 0.14 and 0.15
now, but compiling with Zig's native x86_64 backend seems buggy. Need to
investigate what's going on there.

This is a lazy "just get it to work" migration and avoids the use of
0.15-exclusive APIs. We can probably clean up some code when dropping
support for 0.14.
2025-08-19 14:02:41 +02:00
Yorhel
68671a1af1 Zig 0.15: Migrate all ArrayLists to the Unmanaged API
Managed ArrayLists are deprecated in 0.15. "ArrayList" in 0.15 is the
same as "ArrayListUnmanaged" in 0.14. The latter alias is still
available in 0.15, so let's stick with that for now. When dropping
support for 0.14, we can do s/ArrayListUnmanaged/ArrayList/.
2025-08-19 13:05:20 +02:00
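
A minimal sketch of the unmanaged usage pattern this refers to, assuming Zig 0.14 names: the list stores no allocator, so every allocating call takes one explicitly.

```zig
const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const alloc = gpa.allocator();

    // No allocator stored in the list itself; every mutating call takes one.
    var list: std.ArrayListUnmanaged(u32) = .{};
    defer list.deinit(alloc);

    try list.append(alloc, 42);
    try list.appendSlice(alloc, &.{ 1, 2, 3 });
}
```
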
Yorhel
74c91768a0 Version 2.9 2025-08-16 11:16:53 +02:00
Yorhel
ac4d689e22 Accessibility: move cursor to selected option on delete confirmation
This makes the selection usable for screen readers. Ncdu 1.x already
correctly did this.
2025-08-13 09:37:18 +02:00
Yorhel
66b875eb00 Add --delete-command option
Fixes #215.

delete.zig's item replacement/refresh code is pretty awful and may be
buggy in some edge cases. Existing refresh infrastructure wasn't
designed to update an individual file.
2025-07-15 18:27:10 +02:00
Yorhel
67f34090fb Avoid statx() when reading binary export
Should fix #261.
2025-06-23 13:47:47 +02:00
Yorhel
5b96a48f53 Version 2.8.2 2025-05-01 15:00:06 +02:00
Yorhel
58e6458130 Use Stat.mtime() instead of .mtim
Aha, so that's why the mtime() method exists: the field has a different
name on some other systems.

Fixes #258
2025-05-01 14:51:11 +02:00
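
A hedged sketch of the accessor in question (the helper name is made up): going through mtime() instead of the raw field keeps the code portable, since the field is .mtim on Linux but named differently on some other systems.

```zig
const std = @import("std");

// Hypothetical helper, not ncdu code: read the modification time through
// the accessor rather than the platform-specific field name.
pub fn modTimeSec(st: std.c.Stat) i64 {
    return st.mtime().sec;
}
```
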
Yorhel
653c3bfe70 Version 2.8.1 2025-04-28 13:22:03 +02:00
Yorhel
beac59fb12 Use std.c.fstatat() instead of @cImport()ed version
Because translate-c can't handle struct stat as defined by musl.
(Should have done this in the first place, but wasn't aware fstatat()
had been properly wrapped in std.c)
2025-04-28 13:22:03 +02:00
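
Roughly what calling the std.c wrapper looks like (helper name and error are illustrative): no @cImport of the C headers is needed, so translate-c never has to see musl's struct stat.

```zig
const std = @import("std");

// Illustrative helper, not ncdu's scan code.
pub fn statNoFollow(dir_fd: std.c.fd_t, name: [*:0]const u8) !std.c.Stat {
    var st: std.c.Stat = undefined;
    if (std.c.fstatat(dir_fd, name, &st, std.c.AT.SYMLINK_NOFOLLOW) != 0)
        return error.StatFailed;
    return st;
}
```
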
Yorhel
d97a7f73dd Fix integer overflow in binary export
And allocate an appropriately-sized initial block when
--export-block-size is set.

Fixes #257.
2025-04-28 12:43:04 +02:00
Yorhel
35a9faadb2 Work around panic in Zig fstatat wrapper
https://github.com/ziglang/zig/issues/23463
2025-04-06 10:36:36 +02:00
Eric Joldasov
e43d22ba3f
build.zig: change -Dpie option default to be target-dependent
`null` here indicates to Zig that it should decide for itself whether
to enable PIE or not. On some targets (like macOS or OpenBSD), PIE is
required, so the current default would cause an unnecessary build error.

Behavior for `-Dpie=false` and `-Dpie=true` is not changed.

Signed-off-by: Eric Joldasov <bratishkaerik@landless-city.net>
2025-03-06 01:23:28 +05:00
Eric Joldasov
f4e4694612
build.zig: link zstd instead of libzstd
Seems like it was used here because `pkg-config` does not work with
`zstd` passed but works with `libzstd`.

In 0.14.0 there was a change which allows pkg-config to be used here:
it now tries to use `lib+libname.pc` if `libname.pc` was not found:
https://github.com/ziglang/zig/pull/22069

This change also lets Zig use plain file search if pkg-config is not
available (it now searches for `libzstd.so` instead of `liblibzstd.so`).

Signed-off-by: Eric Joldasov <bratishkaerik@landless-city.net>
2025-03-06 01:14:14 +05:00
Eric Joldasov
c9f3d39d3e
build.zig: update to new API for 0.14.0
Consolidates some options into a single module now, which is helpful
when you want to edit linked libraries etc.

Useful for making future changes here, or patching by distros (1 place
to change instead of 2).

Signed-off-by: Eric Joldasov <bratishkaerik@landless-city.net>
2025-03-06 01:10:55 +05:00
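
A rough sketch of the consolidated shape of a 0.14-style build script (paths and names are placeholders, not ncdu's actual build.zig): target, optimize mode, libc and linked libraries all hang off one Module.

```zig
const std = @import("std");

pub fn build(b: *std.Build) void {
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});

    // One module carries target/optimize/libc settings...
    const mod = b.createModule(.{
        .root_source_file = b.path("src/main.zig"),
        .target = target,
        .optimize = optimize,
        .link_libc = true,
    });
    // ...and the libraries to link, so distros only patch one place.
    mod.linkSystemLibrary("zstd", .{});

    const exe = b.addExecutable(.{ .name = "demo", .root_module = mod });
    b.installArtifact(exe);
}
```
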
Yorhel
2b4c1ca03e Version 2.8 2025-03-05 11:10:31 +01:00
Yorhel
af7163acf6 Revise @branchHint probabilities
Arguably, my previous use of @setCold() was incorrect in some cases, but
it happened to work out better than not providing any hints.  Let's use
the more granular hints now that Zig 0.14 supports them.
2025-03-05 10:56:46 +01:00
Yorhel
5438312440 Fix tests
Broken in 5d5182ede3
2025-03-05 10:44:14 +01:00
Eric Joldasov
0918096301
fix std.Target.isDarwin function replaced with direct version
See https://github.com/ziglang/zig/pull/22589 .

Signed-off-by: Eric Joldasov <bratishkaerik@landless-city.net>
2025-03-03 22:46:51 +05:00
Eric Joldasov
ee1d80da6a
std.fmt.digits now accepts u8 instead of usize
See https://github.com/ziglang/zig/pull/22864 .

Signed-off-by: Eric Joldasov <bratishkaerik@landless-city.net>
2025-03-03 22:46:51 +05:00
Eric Joldasov
93a81a3898
ArrayList.pop now returns an optional, like the removed popOrNull
See https://github.com/ziglang/zig/pull/22720 .

Signed-off-by: Eric Joldasov <bratishkaerik@landless-city.net>
2025-03-03 22:46:51 +05:00
Eric Joldasov
cf3a8f3043
update to new allocator interface
See https://github.com/ziglang/zig/pull/20511 .

Signed-off-by: Eric Joldasov <bratishkaerik@landless-city.net>
2025-03-03 22:46:51 +05:00
Eric Joldasov
f7fe61194b
update to new panic interface
See https://github.com/ziglang/zig/pull/22594 .

Signed-off-by: Eric Joldasov <bratishkaerik@landless-city.net>
2025-03-03 22:46:50 +05:00
Eric Joldasov
456cde16df
remove anonymous struct types
See https://github.com/ziglang/zig/pull/21817 .

Signed-off-by: Eric Joldasov <bratishkaerik@landless-city.net>
2025-03-03 22:46:50 +05:00
Eric Joldasov
3c77dc458a
replace deprecated std.Atomic(T).fence with load
See https://github.com/ziglang/zig/pull/21585 .
I went with the second option listed in the `Conditional Barriers` section.

Signed-off-by: Eric Joldasov <bratishkaerik@landless-city.net>
2025-03-03 22:46:50 +05:00
Eric Joldasov
ce9921846c
update to new names in std.builtin.Type
See https://github.com/ziglang/zig/pull/21225 .

Signed-off-by: Eric Joldasov <bratishkaerik@landless-city.net>
2025-03-03 22:46:49 +05:00
Eric Joldasov
e0ab5d40c7
change deprecated @setCold(true) to @branchHint(.cold)
The new `@branchHint` builtin is more expressive than `@setCold`, so the
latter was removed.
See https://github.com/ziglang/zig/pull/21214 .

Signed-off-by: Eric Joldasov <bratishkaerik@landless-city.net>
2025-03-03 22:46:49 +05:00
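
A small illustrative example of the replacement pattern (the function is made up, not ncdu code): instead of marking a whole function cold with @setCold(true), the rarely-taken branch itself gets the hint.

```zig
fn parseDigit(c: u8) !u4 {
    if (c < '0' or c > '9') {
        // Previously a function-level @setCold(true); now the specific
        // branch that is expected to be rare carries the hint.
        @branchHint(.cold);
        return error.InvalidDigit;
    }
    return @intCast(c - '0');
}
```
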
Eric Joldasov
607b07a30e
change deprecated timespec.tv_sec to timespec.sec
Part of the reorganization of `std.c` namespace.
See https://github.com/ziglang/zig/pull/20679 .

Signed-off-by: Eric Joldasov <bratishkaerik@landless-city.net>
2025-03-03 22:46:49 +05:00
Eric Joldasov
b4dc9f1d4d
change deprecated std.fs.MAX_PATH_BYTES to max_path_bytes
It was deprecated before and has now become a hard compile error.
See https://github.com/ziglang/zig/pull/19847 .

Signed-off-by: Eric Joldasov <bratishkaerik@landless-city.net>
2025-03-03 22:46:45 +05:00
Yorhel
2e5c767d4c List all options in --help
Fixes #251. Not as bloated as I had expected, this seems workable.
2025-03-03 13:31:51 +01:00
Yorhel
5d5182ede3 Add support for @-prefix to ignore errors in config file
Forward-ported from the C version: ff830ac2bf
2025-03-01 13:33:29 +01:00
Henrik Bengtsson
db96bc698c SI units: The unit is kB, not KB 2024-12-22 11:21:36 -08:00
Yorhel
4873a7c765 Version 2.7 2024-11-19 14:41:50 +01:00
Yorhel
49d43f89a1 Fix build on 32bit systems + static build adjustments
I was trying to remove the need for that strip command with some
-fno-exceptions -fno-unwind-tables -fno-asynchronous-unwind-tables,
but passing even one of those to the ncurses build would result in an
ncdu binary that's twice as large and fails to run. I don't get it.
2024-11-17 11:51:29 +01:00
Yorhel
e5a6a1c5ea build: Expose -Dstrip flag
Compiling with -fstrip generates *much* smaller binaries, even compared
to running 'strip' after a regular build.
2024-11-17 09:30:42 +01:00
Yorhel
5593fa2233 Expand ~ and ~user in config file
Fixes #243
2024-11-16 11:38:12 +01:00
Yorhel
9d51df02c1 Add --export-block-size option + minor man page adjustments 2024-11-15 11:08:26 +01:00
Yorhel
7ed209a8e5 Properly fix zstd streaming decompression
I had a feeling my last workaround wasn't correct, turns out my basic
assumption about ZSTD_decompressStream() was wrong: rather than
guaranteeing some output when there's enough input, it always guarantees
to consume input when there's space in the output.

Fixed the code and adjusted the buffers again.
2024-11-14 10:46:24 +01:00
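
A hedged sketch of the corrected loop shape, assuming libzstd is linked and pulled in via @cImport (the wrapper itself is invented for illustration): the guarantee to rely on is "input is consumed while the output buffer has free space", not "output is produced whenever input is available".

```zig
const c = @cImport(@cInclude("zstd.h"));

// Illustrative wrapper, not ncdu's reader code.
fn decompressChunk(dctx: *c.ZSTD_DCtx, input: []const u8, output: []u8) !usize {
    var in = c.ZSTD_inBuffer{ .src = @ptrCast(input.ptr), .size = input.len, .pos = 0 };
    var out = c.ZSTD_outBuffer{ .dst = @ptrCast(output.ptr), .size = output.len, .pos = 0 };
    // Loop while there is both unconsumed input and free output space;
    // zstd only promises progress on the input side under that condition.
    while (in.pos < in.size and out.pos < out.size) {
        const ret = c.ZSTD_decompressStream(dctx, &out, &in);
        if (c.ZSTD_isError(ret) != 0) return error.Decompress;
    }
    return out.pos; // number of decompressed bytes written to `output`
}
```
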
Yorhel
4bd6e3daba Fix handling of short reads when uncompressing json
Turns out that zstd can consume compressed data without returning any
decompressed data when the input buffer isn't full enough. I just
increased the input buffer as a workaround.
2024-11-13 15:28:10 +01:00
Yorhel
2fcd7f370c Drop ncdubinexp.pl
Was a useful testing tool during development, but it has now been
replaced with a more robust 'ncdutils validate' in https://code.blicky.net/yorhel/ncdutils
2024-11-03 13:13:55 +01:00
Yorhel
232a4f8741 JSON import: support reading escaped UTF-16 surrogate pairs
Fixes #245

json/scanner.zig in std notes inconsistencies in the standard as to
whether unpaired surrogate halves are allowed. That implementation
disallows them and so does this commit.
2024-11-03 10:40:57 +01:00
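
For reference, a sketch of the pairing rule an importer has to apply (the helper is invented for illustration); an unpaired half is rejected, matching std's json/scanner.zig behavior.

```zig
const std = @import("std");

// Illustrative helper (not the import code): combine an escaped UTF-16
// surrogate pair, e.g. "\uD83D\uDE00", into a single code point.
fn combineSurrogates(high: u16, low: u16) u21 {
    std.debug.assert(high >= 0xD800 and high <= 0xDBFF); // high surrogate
    std.debug.assert(low >= 0xDC00 and low <= 0xDFFF); // low surrogate
    return 0x10000 + (@as(u21, high - 0xD800) << 10) + (low - 0xDC00);
}
```
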
Yorhel
bdc730f1e5 Bin export: fix incorrectly setting prev=0 on the root node 2024-10-29 14:45:10 +01:00
Yorhel
df5845baad Support writing zstd-compressed json, add --compress option 2024-10-26 19:30:16 +02:00
Yorhel
0e6967498f Support reading zstd-compressed json
Oddly enough this approach is slightly slower than just doing
`zstdcat x | ncdu -f-`; need to investigate.
2024-10-26 15:49:42 +02:00
Yorhel
bd442673d2 Consolidate @cImports into a single c.zig
Which is, AFAIK, a recommended practice. Reduces the number of times
translate-c is being run and (most likely) simplifies a possible future
transition if/when @cImport is thrown out of the language.

Also uses zstd.h instead of my own definitions, mainly because I plan to
use the streaming API as well and those need more definitions.
2024-10-26 14:35:05 +02:00
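
A minimal sketch of what such a consolidated c.zig looks like (the header list is abbreviated for illustration):

```zig
// src/c.zig -- the single @cImport for the whole project, so translate-c
// runs once and every importer shares the same C namespace.
pub const c = @cImport({
    @cInclude("zstd.h");
});

// Elsewhere: const c = @import("c.zig").c;
```
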
Yorhel
28d9eaecab Version 2.6 2024-09-27 10:49:22 +02:00
Yorhel
61d7fc8473 man: Mention new flags in the synopsis 2024-09-27 10:40:13 +02:00
Yorhel
e142d012f0 Man page updates
Haven't mentioned the new -O flag in the examples section yet. Let's
first keep it as a slightly lower-profile feature while the format gains
wider testing and adoption.
2024-09-26 15:07:09 +02:00
Yorhel
39517c01a8 Remove kernfs dev id cache
Kernfs checking was previously done for every directory scanned, but the
new parallel scanning code only performs the check when the dev id is
different from the parent's, which isn't nearly as common.
(In fact, in typical scenarios this only ever happens once per dev id,
rendering the cache completely useless. But even people with 10k bind
mounts are unlikely to notice a performance impact)
2024-08-25 09:29:41 +02:00
Yorhel
cc26ead5f8 Fix integer overflow and off-by-one in binfmt itemref parsing 2024-08-23 09:53:33 +02:00
Yorhel
ca46c7241f Fix off-by-one in binfmt reader 2024-08-18 08:38:57 +02:00
Yorhel
e324804cdd Strip stack unwinding info from static binaries
Saves another 70k on the x86_64 binary, more on x86.

None of the included C or Zig code will unwind the stack at any point,
so these sections seem pretty useless.
2024-08-11 16:26:40 +02:00
Yorhel
26229d7a63 binfmt: Remove "rawlen" field, require use of ZSTD_getFrameContentSize()
The zstd frame format already supports this functionality and I don't
really see a benefit in not making use of that.
2024-08-11 15:56:14 +02:00
Yorhel
4ef9c3e817 bin_export: Adaptively adjust block size 2024-08-11 10:56:43 +02:00
Yorhel
c30699f93b Track which extended mode fields we have + bugfixes
This prevents displaying invalid zero values or writing such values out
in JSON/bin exports. Very old issue, actually, but with the new binfmt
experiments it's finally started annoying me.
2024-08-09 18:32:47 +02:00
Yorhel
6b7983b2f5 binfmt: Support larger (non-data) block sizes
I realized that the 16 MiB limitation implied that the index block could
only hold ((2^24)-16)/8 =~ 2 mil data block pointers. At the default
64k data block size that means an export can only reference up to
~128 GiB of uncompressed data. That's pretty limiting.

This change increases the maximum size of the index block to 256 MiB,
supporting ~33 mil data block pointers and ~2 TiB of uncompressed data
with the default data block size.
2024-08-09 09:40:29 +02:00
Yorhel
9418079da3 binfmt: Remove CBOR-null-based padding hack
Seems like unnecessary complexity.
2024-08-09 09:19:27 +02:00
Yorhel
18f322c532 Throw an error when running out of DevId numbers
I find it hard to imagine that this will happen on a real filesystem,
but it can be triggered by a malicious export file. Better protect
against that than invoke undefined behavior.
2024-08-08 15:38:56 +02:00
Yorhel
252f7fc253 Use u64 for item counts in binary export
They're still clamped to a u32 upon reading, but at least the file will
have correct counts and can be read even when it exceeds 4.2 billion
items.
2024-08-08 11:37:55 +02:00
Yorhel
49ef7cc34e Add progress indicator to loadDir()
Ugh browser.zig is becoming such a complex mess.
2024-08-07 10:39:29 +02:00
Yorhel
17e384b485 Disable refresh, delete and link list when reading from file
TODO: Add an option to re-enable these features by importing the file
into RAM?
2024-08-07 09:44:21 +02:00
Yorhel
ad166de925 Fix hardlink counting off-by-one in binary export 2024-08-06 14:51:42 +02:00
Yorhel
22dca22450 Add custom panic handler
Sadly, it doesn't seem to be called on segfaults, which means those will
still output garbage. I could install a custom segfault handler, but I'm
not sure that's worth the effort.
2024-08-06 14:43:46 +02:00
Yorhel
30d6ddf149 Support direct browsing of a binary export
Code is more hacky than I prefer, but this approach does work and isn't
even as involved as I had anticipated.

Still a few known bugs and limitations left to resolve.
2024-08-06 09:50:10 +02:00
Yorhel
8fb2290d5e Fix division by zero in percent calculation
Broken in previous commit.
2024-08-05 07:07:46 +02:00
Yorhel
90b43755b8 Use integer formatting instead of floating points
This avoids embedding Zig's floating point formatting tables and
ancillary code, shaving 17k off the final static binary for x86_64.

Also adjusted the cut-off points for the units to be more precise.
2024-08-03 15:37:54 +02:00
Yorhel
8ad61e87c1 Stick with zstd-4 + 64k block, add --compress-level, fix 32bit build
And do dynamic buffer allocation for bin_export, removing 128k of
.rodata that I accidentally introduced earlier and reducing memory use
for parallel scans.

Static binaries now also include the minimal version of zstd, current
sizes for x86_64 are:

  582k ncdu-2.5
  601k ncdu-new-nocompress
  765k ncdu-new-zstd

That's not great, but also not awful. Even zlib or LZ4 would've resulted
in a 700k binary.
2024-08-03 13:16:44 +02:00
Yorhel
85e12beb1c Improve performance of bin format import by 30%
By calling die() instead of propagating error unions. Not surprising
that error propagation has a performance impact, but I was hoping it
wasn't this bad.

Import performance was already quite good, but now it's even better!
With the one test case I have it's faster than JSON import, but I expect
that some dir structures will be much slower.
2024-08-02 14:09:46 +02:00
Yorhel
025e5ee99e Add import function for the new binary format
This isn't the low-memory browsing experience I was hoping to implement,
yet, but it serves as a good way to test the new format and such a
sink-based import is useful to have anyway.

Performance is much better than I had expected, and I haven't even
profiled anything yet.
2024-08-02 14:03:30 +02:00
Yorhel
cd00ae50d1 refactor: Merge sink.Special and bin_export.ItemType into model.EType
Simplifies code a little bit and saves one whole byte off of file
entries.
2024-08-01 14:24:56 +02:00
Yorhel
5a0c8c6175 Add hardlink counting support for the new export format
This ended up a little different than I had originally planned.

The bad part is that my idea for the 'prevlnk' references wasn't going
to work out. For one, the reader has no efficient way to determine the
head reference of this list, and implementing a lookup table would be
pretty costly and complex; and second, even with those references
working, they'd be pretty useless because there's no way to go from an
itemref to a full path. I don't see an easy way to
solve these problems, so I'm afraid the efficient hardlink list feature
will have to be disabled when reading from this new format. :(

The good news is that removing these references simplifies the hardlink
counting implementation and removes the requirement for a global inode
map and associated mutex. \o/

Performance is looking really good so far, too.
2024-08-01 07:32:38 +02:00
Yorhel
ebaa9b6a89 Add (temporary) compression support for the new export format
This is mainly for testing and benchmarking, I plan to choose a single
block size and compression library before release, to avoid bloating the
ncdu binary too much.

Currently this links against the system-provided zstd, zlib and lz4.
ncdubinexp.pl doesn't support compressed files yet.

Early benchmarks of `ncdu -f firefox-128.0.json` (407k files) with
different block sizes and compression options:

            bin8k        bin16k       bin32k       bin64k       bin128k      bin256k      bin512k      json
  algo      size  time   size  time   size  time   size  time   size  time   size  time   size  time   size  time

  none      16800  128   16760  126   16739  125   16728  124   16724  125   16722  124   16721  124   24835  127
  lz4        7844  143    7379  141    7033  140    6779  140    6689  138    6626  139    6597  139    5850  179

  zlib-1     6017  377    5681  310    5471  273    5345  262    5289  259    5257  256    5242  255    4415  164
  zlib-2     5843  386    5496  319    5273  284    5136  276    5072  271    5037  270    5020  268    4164  168
  zlib-3     5718  396    5361  339    5130  316    4977  321    4903  318    4862  324    4842  319    3976  196
  zlib-4     5536  424    5153  372    4903  341    4743  339    4665  338    4625  340    4606  336    3798  212
  zlib-5     5393  464    4993  419    4731  406    4561  414    4478  422    4434  426    4414  420    3583  261
  zlib-6     5322  516    4902  495    4628  507    4450  535    4364  558    4318  566    4297  564    3484  352
  zlib-7     5311  552    4881  559    4599  601    4417  656    4329  679    4282  696    4260  685    3393  473
  zlib-8     5305  588    4864  704    4568 1000    4374 1310    4280 1470    4230 1530    4206 1550    3315 1060
  zlib-9     5305  589    4864  704    4568 1030    4374 1360    4280 1510    4230 1600    4206 1620    3312 1230

  zstd-1     5845  177    5426  169    5215  165    5030  160    4921  156    4774  157    4788  153    3856  126
  zstd-2     5830  178    5424  170    5152  164    4963  161    4837  160    4595  162    4614  158    3820  134
  zstd-3     5683  187    5252  177    5017  172    4814  168    4674  169    4522  169    4446  170    3664  145
  zstd-4     5492  235    5056  230    4966  173    4765  170    4628  169    4368  222    4437  170    3656  145
  zstd-5     5430  270    4988  266    4815  234    4616  229    4485  224    4288  241    4258  223    3366  189
  zstd-6     5375  323    4928  322    4694  282    4481  279    4334  276    4231  275    4125  271    3234  235
  zstd-7     5322  400    4866  420    4678  319    4464  314    4315  312    4155  300    4078  295    3173  269
  zstd-8     5314  454    4848  689    4636  344    4420  346    4270  345    4137  350    4060  342    3082  330
  zstd-9     5320  567    4854  615    4596  392    4379  398    4228  401    4095  408    4060  345    3057  385
  zstd-10    5319  588    4852  662    4568  458    4350  466    4198  478    4066  491    4024  395    3005  489
  zstd-11    5310  975    4857 1040    4543  643    4318  688    4164  743    4030  803    3999  476    2967  627
  zstd-12    5171 1300    4692 1390    4539  699    4313  765    4154  854    4018  939    3999  478    2967  655
  zstd-13    5128 1760    4652 1880    4556 1070    4341 1130    4184 1230    3945 1490    3980  705    2932 1090
  zstd-14    5118 2040    4641 2180    4366 1540    4141 1620    3977 1780    3854 1810    3961  805    2893 1330

  mzstd-1    5845  206    5426  195    5215  188    5030  180    4921  176    4774  175    4788  172
  mzstd-2    5830  207    5424  196    5152  186    4963  183    4837  181    4765  178    4614  176
  mzstd-3    5830  207    5424  196    5150  187    4960  183    4831  180    4796  181    4626  180
  mzstd-4    5830  206    5427  196    5161  188    4987  185    4879  182    4714  180    4622  179
  mzstd-5    5430  347    4988  338    5161  189    4987  185    4879  181    4711  180    4620  180
  mzstd-6    5384  366    4939  359    4694  390    4481  391    4334  383    4231  399    4125  394
  mzstd-7    5328  413    4873  421    4694  390    4481  390    4334  385    4155  442    4078  435
  mzstd-8    5319  447    4854  577    4649  417    4434  421    4286  419    4155  440    4078  436
  mzstd-9    5349  386    4900  385    4606  469    4390  478    4241  478    4110  506    4078  436
  mzstd-10   5319  448    4853  597    4576  539    4360  560    4210  563    4079  597    4039  502
  mzstd-11   5430  349    4988  339    4606  468    4390  478    4241  478    4110  506    4013  590
  mzstd-12   5384  366    4939  361    4576  540    4360  556    4210  559    4079  597    4013  589
  mzstd-13   5349  387    4900  388    4694  390    4481  392    4334  386    4155  439    4078  436
  mzstd-14   5328  414    4873  420    4649  417    4434  424    4286  420    4155  444    4039  500

I'll need to do benchmarks on other directories, with hardlink support
and in extended mode as well to get more varied samples.

Another consideration in choosing a compression library is the size of
its implementation:

  zlib: 100k
  lz4:  106k
  zstd: 732k (regular), 165k (ZSTD_LIB_MINIFY, "mzstd" above)
2024-07-31 12:55:43 +02:00
Yorhel
f25bc5cbf4 Experimental new export format
The goals of this format being:
- Streaming parallel export with minimal mandatory buffering.
- Exported data includes cumulative directory stats, so reader doesn't
  have to go through the entire tree to calculate these.
- Fast-ish directory listings without reading the entire file.
- Built-in compression.

Current implementation is missing compression, hardlink counting and
actually reading the file. Also need to tune and measure stuff.
2024-07-30 14:27:41 +02:00
Yorhel
87d336baeb Add progress indicator to hardlink counting + fix import/mem UI updating 2024-07-28 10:54:58 +02:00
Yorhel
0a6bcee32b Fix counts when new link to existing inode is found on refresh
And when the inode was not already present in that directory before the
refresh. Seems like a fairly obscure bug, but a bug nonetheless.
2024-07-28 10:35:59 +02:00
Yorhel
3c055810d0 Split mem import and json export out of sink.zig
Mainly to make room for another export format, though that'll take a lot
more experimenting before it'll get anywhere.
2024-07-27 11:58:08 +02:00
Yorhel
f6bffa40c7 Version 2.5 2024-07-24 14:07:17 +02:00
Yorhel
08d373881c Fix JSON export of "otherfs" excluded type
The exporter would write "othfs" while the import code was expecting
"otherfs". This bug also exists in the 1.x branch and is probably as old
as the JSON import/export feature. D'oh.

Normalized the export to use "otherfs" now (which is what all versions can
read correctly) and fixed the importer to also accept "othfs" (which
is what all previous versions exported).
2024-07-24 10:30:30 +02:00
Yorhel
dc42c91619 Fix JSON export of special entries 2024-07-24 07:34:12 +02:00
Yorhel
2b2b4473e5 mem_src: Fix setting nlink for non-hardlinks
That field is still used in sink.zig for total size estimation.
2024-07-23 10:51:21 +02:00
Yorhel
9cbe1bc91f Use slower and smaller heap sort for hardlink list
Saves 20 KiB off of the ReleaseSafe + stripped binary. That feature is
(1) rarely used and (2) rarely deals with large lists, so no point
spending that much space on an efficient sort implementation.
2024-07-18 21:43:20 +02:00
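
A sketch of the trade-off using the standard library's heap sort (whether ncdu uses std.sort.heap or a hand-rolled variant isn't stated here): the same O(n log n) bound, but far less generated code than the default comparison sorts.

```zig
const std = @import("std");

// Illustrative: sort the hardlink sizes in place with heap sort.
pub fn sortSizes(sizes: []u64) void {
    std.sort.heap(u64, sizes, {}, std.sort.asc(u64));
}
```
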
Yorhel
f28f69d831 README: Mention zig 0.13 as well 2024-07-18 10:53:55 +02:00
Yorhel
a5e57ee5ad Fix use of u64 atomic integers on 32-bit platforms 2024-07-18 10:53:27 +02:00
Yorhel
b0d4fbe94f Rename threading flag to -t,--threads + update man page 2024-07-18 07:49:41 +02:00
Yorhel
99f92934c6 Improve JSON export performance
When you improve performance in one part of the code, another part
becomes the new bottleneck. The slow JSON writer was very noticeable
with the parallel export option.

This provides a 20% improvement on total run-time when scanning a hot
directory with 8 threads.
2024-07-18 07:11:32 +02:00
Yorhel
9b517f27b1 Add support for multithreaded scanning to JSON export
by scanning into memory first.
2024-07-17 16:40:02 +02:00
Yorhel
705bd8907d Move nlink count from inode map into Link node
This adds another +4 bytes* to Link nodes, but allows for the in-memory
tree to be properly exported to JSON, which we'll need for multithreaded
export. It's also slightly nicer conceptually, as we can now detect
inconsistencies without throwing away the actual data, so we have a
better chance of recovering on a partial refresh. Still unlikely,
anyway, but whatever.

(* but saves 4+ bytes per unique inode in the inode map, so the memory
increase is only noticeable when links are repeated in the scanned tree.
Admittedly, that may be the common case)
2024-07-17 14:15:53 +02:00
Yorhel
e5508ba9b4 Fix OOM handling to be thread-safe 2024-07-17 11:48:58 +02:00
Yorhel
6bb31a4653 More consistent handling of directory read errors
These are now always added as a separate dir followed by setReadError().
JSON export can catch these cases when the error happens before any
entries are read, which is the common error scenario.
2024-07-17 09:09:04 +02:00
Yorhel
7558fd7f8e Re-add single-threaded JSON export
That was the easy part, next up is fixing multi-threaded JSON export.
2024-07-17 07:05:18 +02:00
Yorhel
1e56c8604e Improve JSON import performance by another 10%
Profiling showed that string parsing was a bottleneck. We rarely need
the full power of JSON strings, though, so we can optimize for the
common case of plain strings without escape codes. Keeping the slower
string parser as fallback, of course.
2024-07-16 17:36:39 +02:00
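
A hedged sketch of such a fast path (names invented): a string without escapes or control bytes can be sliced out directly, and only otherwise does the full JSON string parser run.

```zig
/// Length of a plain (escape-free) string starting just after the opening
/// quote, or null when the slower, fully-featured parser is required.
fn plainStringLen(buf: []const u8) ?usize {
    for (buf, 0..) |ch, i| {
        if (ch == '"') return i; // closing quote reached without escapes
        if (ch == '\\' or ch < 0x20) return null; // escape/control: slow path
    }
    return null; // closing quote not in this buffer
}
```
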
Yorhel
d2e8dd8a90 Reimplement JSON import + minor fixes
Previous import code did not correctly handle a non-empty directory with
the "read_error" flag set. I have no clue if that can ever happen in
practice, but at least ncdu 1.x can theoretically emit such JSON so we
handle it now.

Also fixes mtime display of "special" files. i.e. don't display the
mtime of the parent directory - that's confusing.

Split a generic-ish JSON parser out of the import code for easier
reasoning and implemented a few more performance improvements as well.
New code is ~30% faster in both ReleaseSafe and ReleaseFast.
2024-07-16 14:20:30 +02:00
Yorhel
ddbed8b07f Some fixes in mtime propagation and hardlink refresh counting 2024-07-15 11:00:14 +02:00
Yorhel
db51987446 Re-add hard link counting + parent suberror & stats propagation
Ended up turning the Links into a doubly-linked list, because the
current approach of refreshing a subdirectory makes it more likely to
run into problems with the O(n) removal behavior of singly-linked lists.

Also found a bug that was present in the old scanning code as well;
fixed here and in c41467f240.
2024-07-14 20:17:34 +02:00
Yorhel
cc12c90dbc Re-add scan progress UI + directory refreshing 2024-07-14 20:17:19 +02:00
Yorhel
f2541d42ba Rewrite scan/import code, experiment with multithreaded scanning (again)
Benchmarks are looking very promising this time. This commit breaks a
lot, though:
- Hard link counting
- Refreshing
- JSON import
- JSON export
- Progress UI
- OOM handling is not thread-safe

All of which needs to be reimplemented and fixed again. Also haven't
really tested this code very well yet so there's likely to be bugs.

There's also a behavioral change: --exclude-kernfs is not checked on the
given root directory anymore, meaning that the filesystem the user asked
to scan is being scanned even if that's a 'kernfs'. I suspect that's
more sensible behavior.

The old scan.zig was quite messy and hard for me to reason about and
extend, this new sink API is looking to be less confusing. I hope it
stays that way as more features are added.
2024-07-14 20:17:18 +02:00
Yorhel
c41467f240 Fix entries getting removed when their type changes on refresh
Somewhat surprised nobody reported this one yet, it is rather weird and
obviously buggy behavior. A second refresh would fix it again, but still.
2024-07-14 20:01:19 +02:00
Yorhel
2f97601736 Don't complain about stdin with --quit-after-scan
That flag is for benchmarking; we're not expecting to have user input.
2024-07-13 09:05:47 +02:00
Yorhel
574a4348a3 Fix --one-file-system to exclude other-fs-symlink targets with --follow-symlinks 2024-07-12 12:36:17 +02:00
Yorhel
0215f3569d Fix fd leak with --exclude-caches checking 2024-07-12 12:33:45 +02:00
Yorhel
f4f4af4ee5 gitignore: Also ignore the newer .zig-cache/ 2024-07-12 09:26:37 +02:00
Yorhel
6db150cc98 Fix crash on invalid utf8 when scanning in -1 UI mode 2024-05-26 11:16:22 +02:00
Yorhel
a4484f27f3 Build: remove preferred_optimize_mode
Fixes #238
2024-04-25 14:15:46 +02:00
Yorhel
d0d064aaf9 Version 2.4 2024-04-21 10:58:35 +02:00
Yorhel
0e54ca775c Add "test" target for some linting; reorder man page sections 2024-04-20 15:56:12 +02:00
Yorhel
d60bcb2113 Copyright: remove year & use alias
Tired of bumping files every year and slowly moving stuff to my alias.
2024-04-20 15:49:51 +02:00
Yorhel
e1818430b7 Set default --color to "off" 2024-04-20 15:45:37 +02:00
Yorhel
29bbab64b3 Update Zig requirement in README + set preferred build mode
+ minor irrelevant build system changes.
2024-04-20 15:40:53 +02:00
Eric Joldasov
5944b738d0
build.zig: update to Zig 0.12.0-dev.3643+10ff81c26
* LazyPath now stores its `Build` owner inside, see
 https://github.com/ziglang/zig/pull/19623 and
 https://github.com/ziglang/zig/pull/19597 .

Signed-off-by: Eric Joldasov <bratishkaerik@landless-city.net>
2024-04-12 12:52:22 +05:00
Eric Joldasov
946d2a0316
src: update to standard library changes in Zig 0.12.0-dev.3385+3a836b480
* rearrangement of entries in `std.os` and `std.c`; `std.posix` was
 finally extracted in https://github.com/ziglang/zig/pull/19354 .

Signed-off-by: Eric Joldasov <bratishkaerik@landless-city.net>
2024-03-20 23:06:20 +05:00
Eric Joldasov
8ce5bae872
src/ui.zig: update to language changes in Zig 0.12.0-dev.2150+63de8a598
* `name` field of std.builtin.Type struct changed type from `[]const u8` to `[:0]const u8`:
 https://github.com/ziglang/zig/pull/18470 .

 * New `'comptime var' is redundant in comptime scope` error
 introduced in https://github.com/ziglang/zig/pull/18242 .

Signed-off-by: Eric Joldasov <bratishkaerik@landless-city.net>
2024-03-20 23:06:14 +05:00
Eric Joldasov
c41e3f5828
build.zig: update to Zig 0.12.0-dev.2018+9a56228c2
* ZBS was reorganized around `Module` struct:
 https://www.github.com/ziglang/zig/pull/18160 .
 * Changes for ReleaseSafe: error return tracing is now off by default.

Signed-off-by: Eric Joldasov <bratishkaerik@landless-city.net>
2024-03-20 23:02:48 +05:00
Eric Joldasov
1fa40ae498
src/ui.zig: update to language changes in Zig 0.12.0-dev.1808+69195d0cd
* New `redundant inline keyword in comptime scope` error
 introduced in https://github.com/ziglang/zig/pull/18227 .

Signed-off-by: Eric Joldasov <bratishkaerik@landless-city.net>
2024-03-20 23:02:42 +05:00
Eric Joldasov
f03eee5443
src: update to stdlib changes in Zig 0.12.0-dev.1710+2bffd8101
* std.fs.Dir/IterableDir separation was reverted in https://www.github.com/ziglang/zig/pull/18076 ;
 this fix breaks the ability to compile with Zig 0.11.0. The revert had been planned since at least October 16th:
 https://github.com/ziglang/zig/pull/12060#issuecomment-1763671541 .

Signed-off-by: Eric Joldasov <bratishkaerik@landless-city.net>
2024-03-20 23:02:38 +05:00
Yorhel
491988d9a5 Rewrite man page in mdoc
Still not a fan of roff, but even less a fan of build system stuff and a
dependency on a tool that is getting less ubiquitous over time.

I've removed the "hard links" section from the man page for now. Such a
section might be useful, but much of it was outdated.
2024-01-21 09:51:42 +01:00
Yorhel
a2eb84e7d3 Update parent dir suberr on refresh
Fixes #233
2023-12-05 12:06:14 +01:00
Eric Joldasov
c83159f076
fix new "var never mutated" error on Zig 0.12.0-dev.1663+6b1a823b2
Fixes these errors (introduced in https://github.com/ziglang/zig/pull/18017
and 6b1a823b2b ):

```
src/main.zig:290:13: error: local variable is never mutated
        var line_ = line_fbs.getWritten();
            ^~~~~
src/main.zig:290:13: note: consider using 'const'
src/main.zig:450:17: error: local variable is never mutated
            var path = std.fs.path.joinZ(allocator, &.{p, "ncdu", "config"}) catch unreachable;
                ^~~~
src/main.zig:450:17: note: consider using 'const'

...
```

This will be included in the upcoming Zig 0.12; the fix is backward compatible:
ncdu still builds and runs fine on Zig 0.11.0.

Signed-off-by: Eric Joldasov <bratishkaerik@getgoogleoff.me>
2023-11-20 14:45:02 +06:00
Eric Joldasov
115de253a8
replace ncurses_refs.c workaround with pure Zig workaround
Signed-off-by: Eric Joldasov <bratishkaerik@getgoogleoff.me>
2023-11-19 14:37:52 +06:00
Yorhel
a71bc6eca5 Add --quit-after-scan CLI flag for benchmarking 2023-08-08 10:30:33 +02:00
Yorhel
ec99218645 Version 2.3 2023-08-04 16:05:31 +02:00
Yorhel
83d3630ca7 Makefile: Honor ZIG variable + fix static build for x86 2023-08-04 12:43:27 +02:00
Eric Joldasov
ab6dc5be75
Update to Zig 0.11.0
Signed-off-by: Eric Joldasov <bratishkaerik@getgoogleoff.me>
2023-08-04 14:41:49 +06:00
Eric Joldasov
0d99781c67
build.zig: add option for building PIE
Might be useful for package maintainers.

Signed-off-by: Eric Joldasov <bratishkaerik@getgoogleoff.me>
2023-04-09 21:41:06 +06:00
Yorhel
e6cfacfa06 scan.zig: Add explicit cast for struct statfs.f_type
Hopefully fixes #221.
2023-04-02 11:58:41 +02:00
Florian Schmaus
74be277249 Makefile: Add ZIG variable and build target
The ZIG variable helps to test ncdu with different zig installations,
and it allows Gentoo to inject the zig version that should be used to
build ncdu into the Makefile.

Also add a phony 'build' target as first target to the Makefile so
that it becomes the default target. This allows the Gentoo package to
use the default src_compile() function.

See also https://bugs.gentoo.org/900547
2023-03-09 16:01:40 +01:00
Yorhel
46b88bcb5c Add --(enable|disable)-natsort options 2023-03-05 08:31:31 +01:00
Yorhel
ca1f293310 UI: Add * indicator to apparent size/disk usage selection + spacing
More visible than just bold.
2023-03-03 08:42:09 +01:00
Carlo Cabrera
07a13d9c73
Set headerpad_max_install_names on Darwin
This is useful for building binary distributions because it allows
references to library dependencies on the build machine to be
rewritten appropriately upon installation on the user's machine.

Zig also does this in their `build.zig`:

    b52be973df/build.zig (L551-L554)
2023-02-22 13:51:08 +08:00
Yorhel
54d50e0443 Oops, forgot to update the README 2023-01-19 08:14:55 +01:00
Yorhel
ec233ff33a Version 2.2.2 + copyright year bump 2023-01-19 08:00:27 +01:00
Yorhel
c002d9fa92 Work around a Zig ReleaseSafe mode performance regression
With a little help from IRC:

<ifreund> Ayo: its probaly stupidly copying that array to the stack to do the
          safety check, pretty sure there's an open issue on this still
<ifreund> you may be able to work around the compiler's stupidity by using a
          pointer to the array or slice or something
<Ayo> ifreund: Yup, (&self.rdbuf)[self.rdoff] does the trick, thanks.
<ifreund> no problem! should get fixed eventually
2023-01-11 10:39:49 +01:00
Yorhel
cebaaf0972 Minor doc formatting fix & error message fix 2023-01-11 08:42:54 +01:00
Yorhel
4d124c7c3d Fix struct copy and invalid pointer access in Link.path()
Interesting case of
https://ziglang.org/download/0.10.0/release-notes.html#Escaped-Pointer-to-Parameter
2022-11-02 14:52:41 +01:00
Yorhel
890e5a4af7 Slightly less hacky Entry struct allocation and initialization 2022-11-02 14:39:05 +01:00
Yorhel
91281ef11f Use extern instead of packed structs for the data model
Still using a few embedded packed structs for those fields that benefit
from bit packing. This isn't much cleaner than using packed structs for
everything, but it does have better semantics. In particular, all fields
(except those inside nested packed structs) are now guaranteed to be
byte-aligned and I don't have to worry about the memory representation
of integers when pointer-casting between the different Entry types.
2022-11-02 11:32:35 +01:00
Yorhel
1452b91032 Some fixes for building with Zig stage2
Building is currently broken on packed struct alignment issues. :/
2022-10-26 13:34:27 +02:00
Yorhel
f7e774ee6e Fixes for stdlib changes 2022-10-26 13:34:06 +02:00
Yorhel
f37362af36 Version 2.2.1 2022-10-25 08:14:36 +02:00
Yorhel
0d16b9f33e Fix colors on FreeBSD (and MacOS?) again
Broken in 1548f9276f because I'm an idiot.

Probably also fixes #210, but I don't have a Mac to test.
2022-10-23 09:16:05 +02:00
Yorhel
34dafffc62 Version 2.2 2022-10-17 12:37:59 +02:00
Yorhel
1548f9276f Fix type signature of ncdu_init_pair() 2022-10-16 08:49:47 +02:00
Torbjörn Lönnemark
d6728bca95 Fix incorrect format string causing invalid export files
Zig requires alignment to be specified when specifying a fill character,
as otherwise digits specified after ':' are interpreted as part of the
field width.

The missing alignment specifier caused character codes < 0x10 to be
serialized incorrectly, producing an export file ncdu could not import.

For example, a character with code 1 would be serialized as '\u00 1'
instead of '\u0001'.

A directory of test files can be generated using:

    mkdir test_files; i=1; while [ $i -le 255 ]; do c="$(printf "$(printf "\\\\x%02xZ" "$i")")"; c="${c%Z}"; touch "test_files/$c"; i=$((i+1)); done
2022-10-15 21:00:17 +02:00
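
In Zig's format spec the fill character is only recognized when an alignment follows it, so the before/after looks roughly like this (the exact format string in ncdu's exporter is an assumption):

```zig
// Illustrative only, not the actual exporter code.
fn writeEscaped(writer: anytype, ch: u8) !void {
    // Broken: "{x:02}" -- "02" is parsed entirely as the width, the fill
    // stays ' ', and code point 1 comes out as "\u00 1".
    // Fixed: fill '0', right alignment '>', width 2 => "\u0001".
    try writer.print("\\u00{x:0>2}", .{ch});
}
```
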
Yorhel
d523a77fdc Improve exclude pattern matching performance (and behavior, a bit)
Behavioral changes:
- A single wildcard ('*') does not cross directory boundary anymore.
  Previously 'a*b' would also match 'a/b', but no other tool that I am
  aware of matches paths that way. This change breaks compatibility with
  old exclude patterns but improves consistency with other tools.
- Patterns with a trailing '/' now prevent recursing into the directory.
  Previously any directory excluded with such a pattern would show up as
  a regular directory with all its contents excluded, but now the
  directory entry itself shows up as excluded.
- If the path given to ncdu matches one of the exclude patterns, the old
  implementation would exclude every file/dir being read, this new
  implementation would instead ignore the rule. Not quite sure how to
  best handle this case, perhaps just exit with an error message?

Performance wise, I haven't yet found a scenario where this
implementation is slower than the old one and it's *significantly*
faster in some cases - in particular when using a large amount of
patterns, especially with literal paths and file names.

That's not to say this implementation is anywhere near optimal:
- A list of relevant patterns is constructed for each directory being
  scanned. It may be possible to merge pattern lists that share
  the same prefix, which could both reduce memory use and the number of
  patterns that need to be matched upon entering a directory.
- A hash table with dynamic arrays as values is just garbage from a
  memory allocation point of view.
- This still uses libc fnmatch(), but there's an opportunity to
  precompile patterns for faster matching.
2022-08-10 09:46:39 +02:00
Yorhel
f0764ea24e Fix unreferenced test in model.zig
The other files were already indirectly referenced, but it's good to
make it explicit.
2022-08-08 18:23:53 +02:00
Yorhel
058b26bf9a Set default attributes to the whole window during curses init
Based on #204.
2022-06-15 06:17:38 +02:00
Yorhel
e6806059e6 Version 2.1.2 2022-04-28 11:19:43 +02:00
Yorhel
bb98939e24 Fix build with zig 0.10.0-dev.1946+6f4343b61
I wasn't planning on (publicly) keeping up with Zig master before the
next release, but it's looking like 0.10 will mainly focus on the new
stage2 compiler and there might not be any significant language/stdlib
changes. If that's the case, might as well pull in this little change in
order to increase chances of ncdu working out of the box when 0.10 is
out.
2022-04-28 11:03:19 +02:00
Yorhel
0fc14173f2 Fix panic when shortening strings with unicode variation selectors
Fixes #199.
That's not to say it handles variation selectors or combining marks
well, though. This is kind of messy. :(
2022-04-16 20:05:43 +02:00
Yorhel
2e4f0f0bce Version 2.1.1 2022-03-25 12:38:47 +01:00
Yorhel
5f383966a9 Fix bad assertion in scan.zig:addSpecial()
While it's true that the root item can't be a special, the first item to
be added is not necessarily the root item. In particular, it isn't when
refreshing.

Probably fixes #194
2022-03-24 07:32:55 +01:00
Yorhel
3942722eba Revert default --graph-style to "hash"
Because, even in 2022, there are systems where the libc locale is not,
in fact, UTF-8. Fixes #186.
2022-03-16 09:53:02 +01:00
Yorhel
1a3de55e68 Still accept "eigth-block" typo argument for compat 2022-03-14 15:58:41 +01:00
Phil Jones
1f46dacf12 Fix typo in --graph-style option
Change "eigth-block" to "eighth-block"
2022-03-14 13:31:01 +00:00
Yorhel
35dd631e55 Version 2.1; remove 1.x changes from the ChangeLog 2022-02-07 13:59:22 +01:00
Yorhel
f79ae654f3 Fix compilation on 32bit systems
Broken in 7d2905952d
2022-02-07 13:59:22 +01:00
Yorhel
e42db579a0 scan: Add UI message when counting hard links
That *usually* doesn't take longer than a few milliseconds, but it can
take a few seconds for some extremely large dirs, on very slow computers
or with optimizations disabled. Better display a message than make it
seem as if ncdu has stopped doing anything.
2022-02-05 09:19:15 +01:00
Yorhel
7d2905952d Add --graph-style option and Unicode graph drawing
And also adjust the graph width calculation to do a better job when the
largest item is smaller than the number of columns used for the graph,
which would previously draw either nothing (if size = 0) or a full bar
(if size > 0).

Fixes #172.
2022-02-03 16:10:18 +01:00
Yorhel
edf48f6f11 Use natsort when sorting by name
Fixes #181, now also for Zig.
2022-02-03 10:59:44 +01:00
Yorhel
41f7ecafcb Mention --ignore-config flag when reading config fails 2022-02-02 12:32:41 +01:00
Yorhel
f46c7ec65d Ignore ENOTDIR when trying to open config files 2022-02-02 11:49:04 +01:00
Yorhel
1b918a5a74 browser: Fix long file name overflow + unique size display glitch 2022-02-02 10:30:36 +01:00
Yorhel
01f1e9188a Version 2.0.1 + copyright year bump 2022-01-01 16:01:47 +01:00
Yorhel
ba26e6621b Makefile: Add ZIG_FLAGS variable
Fixes #185
2022-01-01 15:49:50 +01:00
Yorhel
2b23951e4f ui.zig: Really fix import of wcwidth() this time
Fixes #183
2021-12-26 11:02:42 +01:00
Yorhel
a6f5678088 ui.zig: Fix typo in setting _XOPEN_SOURCE feature test macro 2021-12-21 15:20:07 +01:00
Yorhel
23c59f2874 Version 2.0
I'm tagging this as a "stable" 2.0 release because the 2.0-beta#
numbering will get confusing when I'm working on new features and fixes.
It's still only usable for people who can use the particular Zig version
that's required (0.9.0 currently) and it will certainly break on
different Zig versions. But once you have a working binary for a
supported arch, it's perfectly stable.
2021-12-21 10:56:51 +01:00
Yorhel
6a68cd9b89 Fixes and updates for Zig 0.9.0 2021-12-21 10:34:44 +01:00
Yorhel
14b90444c9 Version 2.0-beta3 2021-11-09 09:11:35 +01:00
Yorhel
5b462cfb7a Fix export feature
...by making sure that Context.parents is properly initialized to null
when not scanning to RAM.

Fixes #179.
2021-11-02 15:29:12 +01:00
Yorhel
7efd2c6251 Make options, keys and file flags bold in man page
Port of 96a9231927
2021-10-07 10:33:01 +02:00
Yorhel
90873ef956 Fix defaults of scan_ui and --enable-* flags
Bit pointless to make these options nullable when you never assign null
to them.
2021-10-06 15:32:49 +02:00
Yorhel
8a23525cac Fix double-slash prefix in path display when scanning root 2021-10-06 14:49:40 +02:00
Yorhel
929cc75675 Fix import of "special" dirs and excluded items 2021-10-06 14:32:02 +02:00
Yorhel
fdb93bb9e6 Fix use-after-free in argument parsing
Introduced in 53d3e4c112
2021-10-06 14:06:50 +02:00
Yorhel
d1adcde15c Add --ignore-config command line option 2021-10-06 13:59:14 +02:00
Yorhel
39a137c132 Add reference to "man ncdu" in --help text
Not going to bloat the help output with all those settings...
2021-10-06 13:52:08 +02:00
Yorhel
53d3e4c112 Make argument parsing code non-generic and simplify config file parsing
Saves about 15k on the binary size. It does allocate a bit more, but it
also frees the memory this time.
2021-10-06 11:52:37 +02:00
Yorhel
4b1da95835 Add configuration file support 2021-10-06 11:05:56 +02:00
Yorhel
88c8f13c35 Add CLI options for default sort 2021-10-06 09:21:13 +02:00
Yorhel
900d31f6fd Add CLI options for all UI settings
+ reorder manpage a bit, since the scan options tend to be more relevant
than all those UI options.

Again, these are mainly useful with a config file.
2021-10-05 17:17:01 +02:00
Yorhel
d005e7c685 Document the 'u' key
Might as well keep it. The quick-config menu popup idea can always be
implemented later on, we're not running out of keys quite yet.
2021-10-05 16:32:36 +02:00
Yorhel
b3c6f0f48a Add CLI options for individual -r features and to counter previous options
The --enable-* options also work for imported files, this fixes #120.

Most other options are not super useful on their own, but these will be
useful when there's a config file.
2021-10-05 16:27:23 +02:00
Yorhel
bfead635e4 Don't enable -x by default
That was an oversight. Especially useless when there's no option to
disable -x.
2021-09-28 17:56:09 +02:00
Yorhel
f448e8ea67 Add dark-bg color scheme + enable colors by default if !NO_COLOR
Same thing as commit 376aad0d35 in the C
version.
2021-08-16 16:33:23 +02:00
Yorhel
1de70064e7 Version 2.0-beta2 + more convenient static binary generation 2021-07-31 07:14:04 +02:00
Yorhel
5929bf57cc Keep track of uncounted hard links to speed up refresh+delete operations 2021-07-28 20:12:50 +02:00
Yorhel
ba14c0938f Fix Dir.fmtPath() when given the root dir 2021-07-28 20:09:48 +02:00
Yorhel
3acab71fce Fix reporting of fatal scan error in -0 or -1 UIs 2021-07-28 11:13:03 +02:00
Yorhel
0d314ca0ca Implement a more efficient hard link counting approach
As alluded to in the previous commit. This approach keeps track of hard
links information much the same way as ncdu 1.16, with the main
difference being that the actual /counting/ of hard link sizes is
deferred until the scan is complete, thus allowing the use of a more
efficient algorithm and amortizing the counting costs.

As an additional benefit, the links listing in the information window
now doesn't need a full scan through the in-memory tree anymore.

A few memory usage benchmarks:

              1.16  2.0-beta1  this commit
root:          429        162          164
backup:       3969       1686         1601
many links:    155        194          106
many links2*:  155        602          106

(I'm surprised my backup dir had enough hard links for this to be an
improvement)
(* this is the same as the "many links" benchmarks, but with a few
parent directories added to increase the tree depth. 2.0-beta1 doesn't
like that at all)

Performance-wise, refresh and delete operations can still be improved a
bit.
2021-07-28 10:35:56 +02:00
Yorhel
36bc405a69 Add parent node pointers to Dir struct + remove Parents abstraction
While this simplifies the code a bit, it's a regression in the sense
that it increases memory use.

This commit is yak shaving for another hard link counting approach I'd
like to try out, which should be a *LOT* less memory hungry compared to
the current approach, even though it does, indeed, add the extra cost of
these parent node pointers.
2021-07-26 14:03:10 +02:00
Yorhel
b94db184f4 ChangeLog, too 2021-07-23 06:38:50 +02:00
Yorhel
7055903677 Fix README Zig version oopsie 2021-07-23 06:28:56 +02:00
Yorhel
e72768b86b Tagging this as a 2.0-beta1 release 2021-07-22 16:29:55 +02:00
Yorhel
a915fc0836 Fix counting of sizes for new directories 2021-07-19 16:58:34 +02:00
Yorhel
f473f3605e Fix building of static binaries
It's a bit ugly, but appears to work. I've not tested the 32bit arm
version, but the others run.

The static binaries are about twice as large as the ncdu 1.x
counterparts.
2021-07-19 16:44:53 +02:00
Yorhel
b96587c25f scan: Don't allocate directory iterator on the stack
I had planned to check out async functions here so I could avoid
recursing onto the stack altogether, but it's still unclear to me how
to safely call into libc from async functions, so let's wait for all that
to get fleshed out a bit more.
2021-07-18 16:43:02 +02:00
Yorhel
6f07a36923 Implement help window
The rewrite is now at feature parity with ncdu 1.x. What remains is
bugfixing and polishing.
2021-07-18 16:39:19 +02:00
Yorhel
c8636b8982 Add REUSE-compliant copyright headers 2021-07-18 11:50:50 +02:00
Yorhel
ee92f403ef Add Makefile with some standard/handy tools
+ a failed initial attempt at producing static binaries.
2021-07-18 09:40:59 +02:00
Yorhel
e9c8d12c0f Store Ext before Entry
Which is slightly simpler and should provide a minor performance
improvement.
2021-07-16 19:13:04 +02:00
Yorhel
5a196125dc Use @errorName() fallback in ui.errorString()
Sticking to "compiletime-known" error types will essentially just bring
in *every* possible error anyway, so might as well take advantage of
@errorName.
2021-07-16 18:35:21 +02:00
Yorhel
3a21dea2cd Implement file deletion + a bunch of bug fixes 2021-07-16 16:18:13 +02:00
Yorhel
448fa9e7a6 Implement shell spawning 2021-07-14 11:24:19 +02:00
Yorhel
6c2ab5001c Implement directory refresh
This complicated the scan code more than I had anticipated and has a
few inherent bugs with respect to calculating shared hardlink sizes.

Still, the merge approach avoids creating a full copy of the subtree, so
that's another memory usage related win compared to the C version.
On the other hand, it does leak memory if nodes can't be reused.

Not quite as well tested as it should have been, so I'm sure there are bugs.
2021-07-13 13:45:08 +02:00
Yorhel
ff3e3bccc6 Add link path listing to information window
Two differences compared to the C version:
- You can now select individual paths in the listing, pressing enter
  will open the selected path in the browser window.
- Creating this listing is much slower and requires, in the worst case,
  a full traversal through the in-memory tree. I've tested this without
  the same-dev and shared-parent optimizations (i.e. worst case) on an
  import with 30M files and performance was still quite acceptable - the
  listing completed in a second - so I didn't bother adding a loading
  indicator. On slower systems and even larger trees this may be a
  little annoying, though.

(also, calling nonl() apparently breaks detection of the return key,
neither \n nor KEY_ENTER are emitted for some reason)
2021-07-06 18:33:31 +02:00
Yorhel
618972b82b Add item info window
Doesn't display the item's path anymore (seems rather redundant) but
adds a few other fields.
2021-06-11 13:12:00 +02:00
Yorhel
d910ed8b9f Add workaround for Zig bug on FreeBSD
The good news is: apart from this little thing, everything seems to just
work(tm) on FreeBSD. Think I had more trouble with C because of minor
header file differences.
2021-06-07 11:21:55 +02:00
Yorhel
40f9dff5d6 Update for Zig 0.8 HashMap changes
I had used them as a HashSet with mutable keys already in order to avoid
padding problems. This is not always necessary anymore now that Zig's
new HashMap uses separate arrays for keys and values, but I still need
the HashSet trick for the link_count nodes table, as the key itself
would otherwise have padding.
2021-06-07 10:57:30 +02:00
Yorhel
cc1966d6a9 Make some space for shared size in UI + speed up JSON import a bit
It still feels kind of sluggish, but not entirely sure how to improve
it.
2021-06-01 16:14:01 +02:00
Yorhel
e6b2cff356 Support hard link counts when importing old ncdu dumps
Under the assumption that there are no external references to files
mentioned in the dump, i.e. a file's nlink count matches the number of
times the file occurs in the dump.

This machinery could also be used for regular scans, when you want to
scan an individual directory without caring about external hard links.
Maybe that should be the default, even? Not sure...
2021-06-01 13:00:58 +02:00
Yorhel
5264be76c7 UI: Display shared/unique sizes + hide some columns when no space 2021-05-30 17:02:57 +02:00
Yorhel
59ef5fd27b Improved error reporting + minor cleanup 2021-05-29 19:22:00 +02:00
Yorhel
2390308883 Handle allocation failures
In a similar way to the C version of ncdu: by wrapping malloc(). It's
simpler to handle allocation failures at the source to allow for easy
retries, pushing the retries up the stack will complicate code somewhat
more. Likewise, this is a best-effort approach to handling OOM,
allocation failures in ncurses aren't handled and display glitches may
occur when we get an OOM inside a drawing function.

This is a somewhat un-Zig-like way of handling errors and adds
scary-looking 'catch unreachable's all over the code, but that's okay.
2021-05-29 13:18:23 +02:00
Yorhel
c077c5bed5 Implement JSON file import
Performance is looking great, but the code is rather ugly and
potentially buggy. Also doesn't handle hard links without an "nlink"
field yet.

Error handling of the import code is different from what I've been doing
until now. That's intentional, I'll change error handling of other
pieces to call ui.die() directly rather than propagating error enums.
The approach is less testable but conceptually simpler, it's perfectly
fine for a tiny application like ncdu.
2021-05-29 10:54:45 +02:00
Yorhel
9474aa4329 Only keep total_items + Zig test update + pointless churn 2021-05-24 11:02:26 +02:00
Yorhel
7b3ebf9241 Implement all existing browsing display options + some fixes
I plan to add more display options, but ran out of keys to bind.
Probably going for a quick-select menu thingy so that we can keep the
old key bindings for people accustomed to it.

The graph width algorithm is slightly different, but I think this one's
a minor improvement.
2021-05-23 17:34:40 +02:00
Yorhel
231ab1037d Implement export to file
The exported file format is fully compatible with ncdu 1.x, but has a
few minor differences. I've backported these changes in
ca51d4ed1a
2021-05-12 11:32:52 +02:00
Yorhel
4cc422d628 Implement confirm quit
(+ 2 minor crash fixes due to out-of-bounds cursor_idx)
2021-05-11 13:16:47 +02:00
Yorhel
b0e81ea4e9 Implement scanning UI (-0,-1,-2) 2021-05-09 20:59:09 +02:00
Yorhel
9b59d3dac4 README updates 2021-05-08 16:17:51 +02:00
Yorhel
e12eb4556d UI: Implement dir navigation & remember view of past dirs
Now we're getting somewhere. This works surprisingly well, too. Existing
ncdu behavior is to remember which entry was previously selected but not
which entry was displayed at the top, so the view would be slightly
different when switching directories. This new approach remembers both
the entry and the offset.
2021-05-07 17:16:39 +02:00
Yorhel
d1eb7ba007 Initial keyboard input handling + item&sort selection 2021-05-07 12:01:00 +02:00
Yorhel
27cb599e22 More UI stuff + shave off 16 bytes from model.Dir
I initially wanted to keep a directory's block count and size as a
separate field so that exporting an in-memory tree to a JSON dump would
be easier to do, but that doesn't seem like a common operation to
optimize for. We'll probably need the algorithms to subtract sub-items
from directory counts anyway, so such an export can still be
implemented, albeit slower.
2021-05-06 19:20:55 +02:00
Yorhel
a54c10bffb More UI stuff: nice string handling/shortening + Zig bug workaround
libc locale-dependent APIs are pure madness, but I can't avoid them as
long as I use ncurses. libtickit seems like a much saner alternative (at
first glance), but no popular application seems to use it. :(
2021-05-05 08:03:27 +02:00
Yorhel
a28a0788c3 Implement --exclude-kernfs and --exclude-pattern
Easier to implement now that we're linking against libc.

But exclude pattern matching is extremely slow, so it should really be
rewritten with a custom fnmatch implementation. It's exactly as slow as
in ncdu 1.x as well; I'm surprised nobody's complained about it yet.
And while I'm at it, supporting .gitignore-style patterns would be
pretty neat, too.
2021-05-03 14:41:50 +02:00
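For reference, a sketch of the libc-backed matching the message alludes to: delegating each pattern to fnmatch(3) via translate-c. This assumes libc headers are available and is not ncdu's real exclude handling; one fnmatch() call per pattern per path is exactly what makes the approach slow on long pattern lists.

```zig
const c = @cImport(@cInclude("fnmatch.h"));

// Return true if any exclude pattern matches the given path.
fn isExcluded(patterns: []const [:0]const u8, path: [:0]const u8) bool {
    for (patterns) |pat| {
        if (c.fnmatch(pat.ptr, path.ptr, 0) == 0) return true;
    }
    return false;
}
```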
Yorhel
826c2fc067 Link to ncurses + some rudimentary TUI frameworky stuff
I tried playing with zbox (pure Zig termbox-like lib) for a bit, but I
don't think I want to have to deal with the terminal support issues that
will inevitably come with it. I already stumbled upon one myself: it
doesn't properly put the terminal in a sensible state after cleanup in
tmux. As much as I dislike ncurses, it /is/ ubiquitous and tends to kind
of work.
2021-05-03 08:01:18 +02:00
Yorhel
3e27d37012 Correct int truncating/saturating + avoid one toPosixPath() 2021-05-01 11:10:24 +02:00
Yorhel
097f49d9e6 Fix some scanning bugs + support --exclude-caches and --follow-symlinks
Supporting kernfs checking is going to be a bit more annoying.
And so are exclude patterns. Ugh.
2021-04-30 19:15:29 +02:00
Yorhel
e2805da076 Add CLI argument parsing 2021-04-29 18:59:25 +02:00
Yorhel
0783d35793 WIP: Experimenting with a rewrite to Zig & a new data model
The new data model is supposed to solve a few problems with ncdu 1.x's
'struct dir':
- Reduce memory overhead,
- Fix extremely slow counting of hard links in some scenarios
  (issue #121)
- Add support for counting 'shared' data with other directories
  (issue #36)

Quick memory usage comparison of my root directory with ~3.5 million
files (normal / extended mode):

  ncdu 1.15.1:     379M / 451M
  new (unaligned): 145M / 178M
  new (aligned):   155M / 200M

There's still a /lot/ of to-dos left before this is usable, however,
and there are a bunch of issues I haven't decided on yet, such as
which TUI library to use.

Backporting this data model to the C version of ncdu is also possible,
but somewhat painful. Let's first see how far I get with Zig.
2021-04-29 12:48:52 +02:00
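Not the actual data model, but a small sketch of the hard-link idea behind it (issue #121): track inodes by a (device, inode) key so each inode's size is added only once, instead of rescanning link lists as ncdu 1.x does. Type and function names are assumptions.

```zig
const std = @import("std");

const InodeKey = struct { dev: u64, ino: u64 };

// Add `size` to `total` only the first time this (dev, ino) pair is seen.
fn countOnce(
    seen: *std.AutoHashMap(InodeKey, void),
    key: InodeKey,
    size: u64,
    total: *u64,
) !void {
    const gop = try seen.getOrPut(key);
    if (!gop.found_existing) total.* += size;
}
```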
Yorhel
9337cdc99e Test for read error while reading the --exclude-from file
Fixes #171
2021-03-04 16:07:48 +01:00
Christian Göttsche
a216bc2d35 Scale size bar with max column size
Use 'max(10, column_size / 7)' instead of a fixed size of 10
2020-07-12 18:30:02 +02:00
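The rule from this commit expressed as a tiny function; the original change is in the C codebase, this is just an illustrative Zig equivalent.

```zig
// The graph grows with the terminal width but never drops below the old
// fixed width of 10 columns.
fn graphWidth(column_size: u32) u32 {
    return @max(10, column_size / 7);
}
```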
Yorhel
1035aed81a Version bump for 1.15.1 2020-06-10 12:24:34 +02:00
Yorhel
a389443c9a Add --exclude-firmlinks and follow firmlinks by default
What a mess.

https://code.blicky.net/yorhel/ncdu/issues/153#issuecomment-764
2020-06-07 10:03:11 +02:00
Christian Göttsche
c340980b80 is_kernfs: Check only defined magic numbers
Avoid undeclared identifiers when compiling with older kernel headers.
2020-06-05 18:04:11 +02:00
Christian Göttsche
19cfe9b15c Correct misspellings 2020-05-30 19:26:00 +02:00
Yorhel
239bbf542f Version bump for 1.15 2020-05-30 10:02:02 +02:00
Yorhel
d018dc0be6 dir_import.c: Remove already-implemented TODO comment 2020-05-15 09:09:35 +02:00
Yorhel
1c4d191193 help.c: Mention "F" flag + make the flag list scrollable 2020-05-15 09:02:16 +02:00
Yorhel
bff5da3547 man page: Mention --follow-firmlinks 2020-05-15 08:51:08 +02:00
Yorhel
08564ec7b6 dir_scan.c: Call statfs() with relative path
So we get around the PATH_MAX limitation. Also a tiny bit more
efficient, I hope.
2020-05-15 08:43:45 +02:00
Saagar Jha
c9ce16a633 Support excluding firmlinks on macOS 2020-05-13 11:29:55 -07:00
Saagar Jha
684e9e04ad Typo: exlude → exclude 2020-05-07 16:10:07 -07:00
Yorhel
9a3727759c Fix calculating of directory apparent sizes with hard links
Silly one-character typo that causes directory apparent sizes to be very
off in some scenarios.

Reported & patched by Andrew Neitsch.
2020-05-06 07:04:36 +02:00
Yorhel
4a2def5223 dir_scan.c: Fix integer overflow when list of file names in dir exceeds 2GiB
Fixes #150
2020-04-21 14:13:51 +02:00
Yorhel
1563e56223 help.c: Mention new ^ file flag 2020-04-08 18:35:01 +02:00
Christian Göttsche
c209b012b1 Add option --exclude-kernfs to skip scanning Linux pseudo filesystems
(cherry picked from commit a076ac714a)
2020-04-08 18:32:11 +02:00
Christian Göttsche
50b48a6435 Mention supported color schemes in help text 2020-04-08 17:17:06 +02:00
Christian Göttsche
e3742f0c80 Remove redundant cast to the same type
(cherry picked from commit ef7b4e5c28)
2020-04-08 11:00:41 +02:00
Christian Göttsche
3959210051 Drop never read initialization
(cherry picked from commit 9f28920a64)
2020-04-08 10:59:25 +02:00
Christian Göttsche
84834ff370 Declare file local variables static
(cherry picked from commit ad5b7fce74)
2020-04-08 10:58:53 +02:00
Christian Göttsche
53e5080d9a Avoid using extension of variable length array folded to constant array
(cherry picked from commit 2faefc3b24)
2020-04-08 10:57:23 +02:00
Christian Göttsche
61d268764d Drop extra ';' outside of a function
(cherry picked from commit 32b77d0064)
2020-04-08 10:55:57 +02:00
Christian Göttsche
2bd83b3f22 Avoid using GNU empty initializer extension
(cherry picked from commit ce7036d249)
2020-04-08 10:55:36 +02:00
Christian Göttsche
70f439d9a9 Enforce const correctness on strings
(cherry picked from commit 9801f46ece)
2020-04-08 10:53:21 +02:00
Christian Göttsche
39709aa665 Use strict prototypes
(cherry picked from commit e4e8ebd9e0)
2020-04-08 10:52:25 +02:00
Christian Göttsche
bd22bf42ee Update configure.ac
* Use AS_HELP_STRING instead of deprecated AC_HELP_STRING
* Use AC_OUTPUT without arguments
* Enclose AC_INIT argument in brackets
* Add automake option std-options

(cherry picked from commit 53a33e1db2)
2020-04-08 10:48:14 +02:00
Christian Göttsche
227cdb35ae Ignore generated script compile in git
(cherry picked from commit fd75bd0c22)
2020-04-08 10:45:24 +02:00
Christian Göttsche
2fd4d8b406 Remove trailing whitespaces 2020-04-07 21:49:14 +02:00
Yorhel
888bd663c6 Also quit on EIO from getch()
Fixes #141
2020-04-01 16:54:57 +02:00
57 changed files with 7749 additions and 6541 deletions

31
.gitignore vendored

@ -1,22 +1,11 @@
Makefile
Makefile.in
aclocal.m4
autom4te.cache/
config.h
config.h.in
config.log
config.status
configure
depcomp
install-sh
missing
.deps/
.dirstamp
*.o
stamp-h1
ncdu
ncdu.1
*~
# SPDX-FileCopyrightText: Yorhel <projects@yorhel.nl>
# SPDX-License-Identifier: MIT
*.swp
static/*
!static/build.sh
*~
ncurses
zstd
static-*/
zig-cache/
zig-out/
.zig-cache/

20
COPYING

@ -1,20 +0,0 @@
Copyright (c) 2007-2020 Yoran Heling
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

279
ChangeLog

@ -1,169 +1,148 @@
1.14.2 - 2020-02-10
- Fix compilation with GCC 10 (-fno-common)
- Fix minor display issue when scanning 10M+ files
- Slightly reduce memory usage for hard link detection
# SPDX-FileCopyrightText: Yorhel <projects@yorhel.nl>
# SPDX-License-Identifier: MIT
1.14.1 - 2019-08-05
- Fix occasional early exit on OS X
- Fix --exclude-caches
- Improve handling of out-of-memory situations
2.9.2 - 2025-10-24
- Still requires Zig 0.14 or 0.15
- Fix hang on loading config file when compiled with Zig 0.15.2
1.14 - 2019-02-04
- Add mtime display and sorting (Alex Wilson)
- Add (limited) --follow-symlinks option (Simon Doppler)
- Display larger file counts in browser UI
- Add -V, --version, and --help alias flags
- Fix crash when attempting to sort an empty directory
- Fix 100% CPU bug when ncdu loses the terminal
- Fix '--color=off' flag
- Fix some typos
2.9.1 - 2025-08-21
- Add support for building with Zig 0.15
- Zig 0.14 is still supported
1.13 - 2018-01-29
- Add "extended information" mode and -e flag
- Add file mode, modification time and uid/gid to info window with -e
- Add experimental color support and --color flag
- Add -rr option to disable shell spawning
- Remove directory nesting limit on file import
- Fix handling of interrupts during file import
- Fix undefined behavior that triggered crash on OS X
2.9 - 2025-08-16
- Still requires Zig 0.14
- Add --delete-command option to replace the built-in file deletion
- Move term cursor to selected option in delete confirmation window
- Support binary import on older Linux kernels lacking statx() (may break
again in the future, Zig does not officially support such old kernels)
1.12 - 2016-08-24
- Add NCDU_SHELL environment variable
- Add --confirm-quit flag
- Fix compilation due to missing sys/wait.h include
2.8.2 - 2025-05-01
- Still requires Zig 0.14
- Fix a build error on MacOS
1.11 - 2015-04-05
- Added 'b' key to spawn shell in the current directory
- Support scanning (and refreshing) of empty directories
- Added --si flag for base 10 prefixes
- Fix toggle dirs before files
2.8.1 - 2025-04-28
- Still requires Zig 0.14
- Fix integer overflow in binary export
- Fix crash when `fstatat()` returns EINVAL
- Minor build system improvements
1.10 - 2013-05-09
- Added 'c' key to display item counts
- Added 'C' key to order by item counts
- Added CACHEDIR.TAG support and --exclude-caches option
- Use locale-dependent thousand separator
- Use pkg-config to detect ncurses
- Clip file/dir sizes to 8 EiB minus one byte
- Fix buffer overflow when formatting huge file sizes
2.8 - 2025-03-05
- Now requires Zig 0.14
- Add support for @-prefixed lines to ignore errors in config file
- List all supported options in `--help`
- Use `kB` instead of `KB` in `--si` mode
1.9 - 2012-09-27
- Added option to dump scanned directory information to a file (-o)
- Added option to load scanned directory information from a file (-f)
- Added multiple scan and load interfaces (-0,-1,-2)
- Fit loading and error windows to the terminal width (#13)
- Fix symlink resolving bug (#18)
- Fix path display when scanning an empty directory (#15)
- Fix hang when terminal is resized to a too small size while loading
- Use top-level automake build
- Remove useless AUTHORS, INSTALL and NEWS files
- ncdu.1 now uses POD as source format
2.7 - 2024-11-19
- Still requires Zig 0.12 or 0.13
- Support transparent reading/writing of zstandard-compressed JSON
- Add `--compress` and `--export-block-size` options
- Perform tilde expansion on paths in the config file
- Fix JSON import of escaped UTF-16 surrogate pairs
- Fix incorrect field in root item when exporting to the binary format
- Add -Dstrip build flag
1.8 - 2011-11-03
- Use hash table to speed up hard link detection
- Added read-only option (-r)
- Use KiB instead of kiB (#3399279)
2.6 - 2024-09-27
- Still requires Zig 0.12 or 0.13
- Add dependency on libzstd
- Add new export format to support threaded export and low-memory browsing
- Add `-O` and `--compress-level` CLI flags
- Add progress indicator to hardlink counting stage
- Fix displaying and exporting zero values when extended info is not available
- Fix clearing screen in some error cases
- Fix uncommon edge case in hardlink counting on refresh
- Use integer math instead of floating point to format numbers
1.7 - 2010-08-13
- List the detected hard links in file info window
- Count the size of a hard linked file once for each directory it appears in
- Fixed crash on browsing dirs with a small window size (#2991787)
- Fixed buffer overflow when some directories can't be scanned (#2981704)
- Fixed segfault when launched on a nonexistent directory (#3012787)
- Fixed segfault when root dir only contains hidden files
- Improved browsing performance
- More intuitive multi-page browsing
- Display size graph by default
- Various minor fixes
2.5 - 2024-07-24
- Still requires Zig 0.12 or 0.13
- Add parallel scanning with `-t,--threads` CLI flags
- Improve JSON export and import performance
- `--exclude-kernfs` is no longer checked on the top-level scan path
- Fix entries sometimes not showing up after refresh
- Fix file descriptor leak with `--exclude-caches` checking
- Fix possible crash on invalid UTF8 when scanning in `-1` UI mode
- Fix JSON export and import of the "other filesystem" flag
- Fix JSON import containing directories with a read error
- Fix mtime display of 'special' files
- Fix edge case bad performance when deleting hardlinks with many links
- Increased memory use for hardlinks (by ~10% in extreme cases, sorry)
1.6 - 2009-10-23
- Implemented hard link detection
- Properly select the next item after deletion
- Removed reliance on dirfd()
- Fixed non-void return in void delete_process()
- Fixed several tiny memory leaks
- Return to previously opened directory on failed recalculation
- Properly display MiB units instead of MB (IEEE 1541 - bug #2831412)
- Link to ncursesw when available
- Improved support for non-ASCII characters
- VIM keybindings for browsing through the tree (#2788249, #1880622)
2.4 - 2024-04-21
- Now requires Zig 0.12
- Revert default color scheme back to 'off'
- Rewrite man page in mdoc, drop pod2man dependency
- Fix updating parent dir error status on refresh
1.5 - 2009-05-02
- Fixed incorrect apparent size on directory refresh
- Browsing keys now work while file info window is displayed
- Current directory is assumed when no directory is specified
- Size graph uses the apparent size if that is displayed
- Items are ordered by displayed size rather than disk usage
- Removed switching between powers of 1000/1024
- Don't rely on the availability of suseconds_t
- Correctly handle paths longer than PATH_MAX
- Fixed various bugs related to rpath()
- Major code rewrite
- Fixed line width when displaying 100%
2.3 - 2023-08-04
- Now requires Zig 0.11
- Add --(enable|disable)-natsort options
- Add indicator to apparent size/disk usage selection in the footer
- Fix build on armv7l (hopefully)
- Minor build system additions
1.4 - 2008-09-10
- Removed the startup window
- Filenames ending with a tilde (~) will now also
be hidden with the 'h'-key
- Fixed buffer overflow when supplying a path longer
than PATH_MAX (patch by Tobias Stoeckmann)
- Used S_BLKSIZE instead of a hardcoded block size of 512
- Fixed display of disk usage and apparent sizes
- Updated ncdu -h
- Included patches for Cygwin
- Cursor now follows the selected item
- Added spaces around path (debian #472194)
- Fixed segfault on empty directory (debian #472294)
- A few code rewrites and improvements
2.2.2 - 2023-01-19
- Now requires Zig 0.10 or 0.10.1
- That's it, pretty much.
1.3 - 2007-08-05
- Added 'r'-key to refresh the current directory
- Removed option to calculate apparent size: both
the disk usage and the apparent size are calculated.
- Added 'a'-key to switch between showing apparent
size and disk usage.
- Added 'i'-key to display information about the
selected item.
- Small performance improvements
- configure checks for ncurses.h (bug #1764304)
2.2.1 - 2022-10-25
- Still requires Zig 0.9.0 or 0.9.1
- Fix bug with 'dark' and 'off' color themes on FreeBSD and MacOS
1.2 - 2007-07-24
- Fixed some bugs on cygwin
- Added du-like exclude patterns
- Fixed bug #1758403: large directories work fine now
- Rewrote a large part of the code
- Fixed a bug with wide characters
- Performance improvements when browsing large dirs
2.2 - 2022-10-17
- Still requires Zig 0.9.0 or 0.9.1
- (breaking) Wildcards in exclude patterns don't cross directory boundary anymore
- Improve exclude pattern matching performance
- Set full background in default dark-bg color scheme
- Fix broken JSON export when a filename contains control characters below 0x10
1.1 - 2007-04-30
- Deleting files and directories is now possible from
within ncdu.
- The key for sorting directories between files has
changed to 't' instead of 'd'. The 'd'-key is now
used for deleting files.
2.1.2 - 2022-04-28
- Still requires Zig 0.9.0 or 0.9.1
- Fix possible crash on shortening file names with unicode variation
selectors or combining marks
1.0 - 2007-04-06
- First stable release
- Small code cleanup
- Added a key to toggle between sorting dirs before
files and dirs between files
- Added graphs and percentages to the directory
browser (can be enabled or disabled with the 'g'-key)
2.1.1 - 2022-03-25
- Still requires Zig 0.9.0 or 0.9.1
- Fix potential crash when refreshing
- Fix typo in --graph-style=eighth-block
- Revert default --graph-style to hash characters
0.3 - 2007-03-04
- When browsing back to the previous directory, the
directory you're getting back from will be selected.
- Added directory scanning in quiet mode to save
bandwidth on remote connections.
2.1 - 2022-02-07
- Still requires Zig 0.9.0
- Use natural sort order when sorting by file name
- Use Unicode box drawing characters for the file size bar
- Add --graph-style option to change drawing style for the file size bar
- Fix early exit if a configuration directory does not exist
- Fix display glitch for long file names
- Fix display glitch with drawing unique/shared size column
0.2 - 2007-02-26
- Fixed POSIX compliance: replaced realpath() with my
own implementation, and gettimeofday() is not
required anymore (but highly recommended)
- Added a warning for terminals smaller than 60x16
- Mountpoints (or any other directory pointing to
another filesystem) are now considered to be
directories rather than files.
2.0.1 - 2022-01-01
- Still requires Zig 0.9.0
- Fix build failure to find 'wcwidth' on some systems
- Add ZIG_FLAGS option to Makefile
0.1 - 2007-02-21
- Initial version
2.0 - 2021-12-21
- Requires Zig 0.9.0
- That's the only change.
2.0-beta3 - 2021-11-09
- Requires Zig 0.8 or 0.8.1
- Add lots of new CLI flags to configure ncdu
- Add configuration file support
- Add 'dark-bg' color scheme and use that by default
- Fix not enabling -x by default
- Fix export feature
- Fix import of "special" dirs and files
- Fix double-slash display in file browser
2.0-beta2 - 2021-07-31
- Requires Zig 0.8
- Significantly reduce memory usage for hard links
- Slightly increase memory usage for directory entries
- Fix reporting of fatal errors in the -0 and -1 scanning UIs
2.0-beta1 - 2021-07-22
- Full release announcement: https://dev.yorhel.nl/doc/ncdu2
- Requires Zig 0.8
- Features and UI based on ncdu 1.16
- Lower memory use in most scenarios (except with many hard links)
- Improved performance of hard link counting
- Extra column for shared/unique directory sizes

9
LICENSES/MIT.txt Normal file

@ -0,0 +1,9 @@
MIT License
Copyright (c) <year> <copyright holders>
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

114
Makefile Normal file

@ -0,0 +1,114 @@
# SPDX-FileCopyrightText: Yorhel <projects@yorhel.nl>
# SPDX-License-Identifier: MIT
# Optional semi-standard Makefile with some handy tools.
# Ncdu itself can be built with just the zig build system.
ZIG ?= zig
PREFIX ?= /usr/local
BINDIR ?= ${PREFIX}/bin
MANDIR ?= ${PREFIX}/share/man/man1
ZIG_FLAGS ?= --release=fast -Dstrip
NCDU_VERSION=$(shell grep 'program_version = "' src/main.zig | sed -e 's/^.*"\(.\+\)".*$$/\1/')
.PHONY: build test
build: release
release:
$(ZIG) build ${ZIG_FLAGS}
debug:
$(ZIG) build
clean:
rm -rf zig-cache zig-out
install: install-bin install-doc
install-bin: release
mkdir -p ${BINDIR}
install -m0755 zig-out/bin/ncdu ${BINDIR}/
install-doc:
mkdir -p ${MANDIR}
install -m0644 ncdu.1 ${MANDIR}/
uninstall: uninstall-bin uninstall-doc
# XXX: Ideally, these would also remove the directories created by 'install' if they are empty.
uninstall-bin:
rm -f ${BINDIR}/ncdu
uninstall-doc:
rm -f ${MANDIR}/ncdu.1
dist:
rm -f ncdu-${NCDU_VERSION}.tar.gz
mkdir ncdu-${NCDU_VERSION}
for f in `git ls-files | grep -v ^\.gitignore`; do mkdir -p ncdu-${NCDU_VERSION}/`dirname $$f`; ln -s "`pwd`/$$f" ncdu-${NCDU_VERSION}/$$f; done
tar -cophzf ncdu-${NCDU_VERSION}.tar.gz --sort=name ncdu-${NCDU_VERSION}
rm -rf ncdu-${NCDU_VERSION}
# ASSUMPTION:
# - the ncurses source tree has been extracted into ncurses/
# - the zstd source tree has been extracted into zstd/
# Would be nicer to do all this with the Zig build system, but no way am I
# going to write build.zig's for these projects.
static-%.tar.gz:
mkdir -p static-$*/nc static-$*/inst/pkg
cp -R zstd/lib static-$*/zstd
make -C static-$*/zstd -j8 libzstd.a V=1\
ZSTD_LIB_DICTBUILDER=0\
ZSTD_LIB_MINIFY=1\
ZSTD_LIB_EXCLUDE_COMPRESSORS_DFAST_AND_UP=1\
CC="${ZIG} cc --target=$*"\
LD="${ZIG} cc --target=$*"\
AR="${ZIG} ar" RANLIB="${ZIG} ranlib"
cd static-$*/nc && ../../ncurses/configure --prefix="`pwd`/../inst"\
--without-cxx --without-cxx-binding --without-ada --without-manpages --without-progs\
--without-tests --disable-pc-files --without-pkg-config --without-shared --without-debug\
--without-gpm --without-sysmouse --enable-widec --with-default-terminfo-dir=/usr/share/terminfo\
--with-terminfo-dirs=/usr/share/terminfo:/lib/terminfo:/usr/local/share/terminfo\
--with-fallbacks="screen linux vt100 xterm xterm-256color" --host=$*\
CC="${ZIG} cc --target=$*"\
LD="${ZIG} cc --target=$*"\
AR="${ZIG} ar" RANLIB="${ZIG} ranlib"\
CPPFLAGS=-D_GNU_SOURCE && make -j8
@# zig-build - cleaner approach but doesn't work, results in a dynamically linked binary.
@#cd static-$* && PKG_CONFIG_LIBDIR="`pwd`/inst/pkg" zig build -Dtarget=$*
@# --build-file ../build.zig --search-prefix inst/ --cache-dir zig -Drelease-fast=true
@# Alternative approach, bypassing zig-build
cd static-$* && ${ZIG} build-exe -target $*\
-Inc/include -Izstd -lc nc/lib/libncursesw.a zstd/libzstd.a\
--cache-dir zig-cache -static -fstrip -O ReleaseFast ../src/main.zig
@# My system's strip can't deal with arm binaries and zig doesn't wrap a strip alternative.
@# Whatever, just let it error for those.
strip -R .eh_frame -R .eh_frame_hdr static-$*/main || true
cd static-$* && mv main ncdu && tar -czf ../static-$*.tar.gz ncdu
rm -rf static-$*
static-linux-x86_64: static-x86_64-linux-musl.tar.gz
mv $< ncdu-${NCDU_VERSION}-linux-x86_64.tar.gz
static-linux-x86: static-x86-linux-musl.tar.gz
mv $< ncdu-${NCDU_VERSION}-linux-x86.tar.gz
static-linux-aarch64: static-aarch64-linux-musl.tar.gz
mv $< ncdu-${NCDU_VERSION}-linux-aarch64.tar.gz
static-linux-arm: static-arm-linux-musleabi.tar.gz
mv $< ncdu-${NCDU_VERSION}-linux-arm.tar.gz
static:\
static-linux-x86_64 \
static-linux-x86 \
static-linux-aarch64 \
static-linux-arm
test:
zig build test
mandoc -T lint ncdu.1
reuse lint

47
Makefile.am

@ -1,47 +0,0 @@
AM_CPPFLAGS=-I$(srcdir)/deps
bin_PROGRAMS=ncdu
ncdu_SOURCES=\
src/browser.c\
src/delete.c\
src/dirlist.c\
src/dir_common.c\
src/dir_export.c\
src/dir_import.c\
src/dir_mem.c\
src/dir_scan.c\
src/exclude.c\
src/help.c\
src/shell.c\
src/quit.c\
src/main.c\
src/path.c\
src/util.c
noinst_HEADERS=\
deps/yopt.h\
deps/khashl.h\
src/browser.h\
src/delete.h\
src/dir.h\
src/dirlist.h\
src/exclude.h\
src/global.h\
src/help.h\
src/shell.h\
src/quit.h\
src/path.h\
src/util.h
man_MANS=ncdu.1
EXTRA_DIST=ncdu.1 doc/ncdu.pod
# Don't "clean" ncdu.1, it should be in the tarball so that pod2man isn't a
# build dependency for those who use the tarball.
ncdu.1: $(srcdir)/doc/ncdu.pod
pod2man --center "ncdu manual" --release "@PACKAGE@-@VERSION@" "$(srcdir)/doc/ncdu.pod" >ncdu.1
update-deps:
wget -q https://raw.github.com/attractivechaos/klib/master/khashl.h -O "$(srcdir)/deps/khashl.h"
wget -q http://g.blicky.net/ylib.git/plain/yopt.h -O "$(srcdir)/deps/yopt.h"

55
README

@ -1,55 +0,0 @@
ncdu 1.14.2
===========
DESCRIPTION
ncdu (NCurses Disk Usage) is a curses-based version of
the well-known 'du', and provides a fast way to see what
directories are using your disk space.
REQUIREMENTS
In order to compile and install ncdu, you need to have
at least...
- a POSIX-compliant operating system (Linux, BSD, etc)
- curses libraries and header files
INSTALL
The usual:
./configure --prefix=/usr
make
make install
If you're building directly from the git repository, make sure you have perl
(or rather, pod2man), pkg-config and GNU autoconf/automake installed, then
run 'autoreconf -i', and you're ready to continue with the usual ./configure
and make route.
COPYING
Copyright (c) 2007-2020 Yoran Heling
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

36
README.md Normal file

@ -0,0 +1,36 @@
<!--
SPDX-FileCopyrightText: Yorhel <projects@yorhel.nl>
SPDX-License-Identifier: MIT
-->
# ncdu-zig
## Description
Ncdu is a disk usage analyzer with an ncurses interface. It is designed to find
space hogs on a remote server where you don't have an entire graphical setup
available, but it is a useful tool even on regular desktop systems. Ncdu aims
to be fast, simple and easy to use, and should be able to run in any minimal
POSIX-like environment with ncurses installed.
See the [ncdu 2 release announcement](https://dev.yorhel.nl/doc/ncdu2) for
information about the differences between this Zig implementation (2.x) and the
C version (1.x).
## Requirements
- Zig 0.14 or 0.15
- Some sort of POSIX-like OS
- ncurses
- libzstd
## Install
You can use the Zig build system if you're familiar with that.
There's also a handy Makefile that supports the typical targets, e.g.:
```
make
sudo make install PREFIX=/usr
```

53
build.zig Normal file

@ -0,0 +1,53 @@
// SPDX-FileCopyrightText: Yorhel <projects@yorhel.nl>
// SPDX-License-Identifier: MIT
const std = @import("std");
pub fn build(b: *std.Build) void {
const target = b.standardTargetOptions(.{});
const optimize = b.standardOptimizeOption(.{});
const pie = b.option(bool, "pie", "Build with PIE support (by default: target-dependant)");
const strip = b.option(bool, "strip", "Strip debugging info (by default false)") orelse false;
const main_mod = b.createModule(.{
.root_source_file = b.path("src/main.zig"),
.target = target,
.optimize = optimize,
.strip = strip,
.link_libc = true,
});
main_mod.linkSystemLibrary("ncursesw", .{});
main_mod.linkSystemLibrary("zstd", .{});
const exe = b.addExecutable(.{
.name = "ncdu",
.root_module = main_mod,
});
exe.pie = pie;
// https://github.com/ziglang/zig/blob/faccd79ca5debbe22fe168193b8de54393257604/build.zig#L745-L748
if (target.result.os.tag.isDarwin()) {
// useful for package maintainers
exe.headerpad_max_install_names = true;
}
b.installArtifact(exe);
const run_cmd = b.addRunArtifact(exe);
run_cmd.step.dependOn(b.getInstallStep());
if (b.args) |args| {
run_cmd.addArgs(args);
}
const run_step = b.step("run", "Run the app");
run_step.dependOn(&run_cmd.step);
const unit_tests = b.addTest(.{
.root_module = main_mod,
});
unit_tests.pie = pie;
const run_unit_tests = b.addRunArtifact(unit_tests);
const test_step = b.step("test", "Run unit tests");
test_step.dependOn(&run_unit_tests.step);
}

68
configure.ac

@ -1,68 +0,0 @@
AC_INIT(ncdu, 1.14.2, projects@yorhel.nl)
AC_CONFIG_SRCDIR([src/global.h])
AC_CONFIG_HEADER([config.h])
AM_INIT_AUTOMAKE([foreign subdir-objects])
# Check for programs.
AC_PROG_CC
AC_PROG_INSTALL
AC_PROG_RANLIB
PKG_PROG_PKG_CONFIG
# Check for header files.
AC_CHECK_HEADERS(
[limits.h sys/time.h sys/types.h sys/stat.h dirent.h unistd.h fnmatch.h ncurses.h],[],
AC_MSG_ERROR([required header file not found]))
AC_CHECK_HEADERS(locale.h)
# Check for typedefs, structures, and compiler characteristics.
AC_TYPE_INT64_T
AC_TYPE_UINT64_T
AC_SYS_LARGEFILE
AC_STRUCT_ST_BLOCKS
# Check for library functions.
AC_CHECK_FUNCS(
[getcwd gettimeofday fnmatch chdir rmdir unlink lstat system getenv],[],
AC_MSG_ERROR([required function missing]))
# Look for ncurses library to link to
ncurses=auto
AC_ARG_WITH([ncurses],
AC_HELP_STRING([--with-ncurses], [compile/link with ncurses library] ),
[ncurses=ncurses])
AC_ARG_WITH([ncursesw],
AC_HELP_STRING([--with-ncursesw], [compile/link with wide-char ncurses library @<:@default@:>@]),
[ncurses=ncursesw])
if test "$ncurses" = "auto" -o "$ncurses" = "ncursesw"; then
PKG_CHECK_MODULES([NCURSES], [ncursesw], [LIBS="$LIBS $NCURSES_LIBS"; ncurses=ncursesw],
[AC_CHECK_LIB([ncursesw],
[initscr],
[LIBS="$LIBS -lncursesw"; ncurses=ncursesw],
[ncurses=ncurses])
])
fi
if test "$ncurses" = "ncurses"; then
PKG_CHECK_MODULES([NCURSES], [ncurses], [LIBS="$LIBS $NCURSES_LIBS"],
[AC_CHECK_LIB([ncurses],
[initscr],
[LIBS="$LIBS -lncurses"],
[AC_MSG_ERROR(ncurses library is required)])
])
fi
# Configure default shell for spawning shell when $SHELL is not set
AC_ARG_WITH([shell],
[AS_HELP_STRING([--with-shell],
[used interpreter as default shell (default is /bin/sh)])],
[DEFAULT_SHELL=$withval],
[DEFAULT_SHELL=/bin/sh])
AC_MSG_NOTICE([Using $DEFAULT_SHELL as the default shell if \$SHELL is not set])
AC_DEFINE_UNQUOTED(DEFAULT_SHELL, "$DEFAULT_SHELL", [Used default shell interpreter])
AC_OUTPUT([Makefile])

349
deps/khashl.h vendored

@ -1,349 +0,0 @@
/* The MIT License
Copyright (c) 2019 by Attractive Chaos <attractor@live.co.uk>
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
*/
#ifndef __AC_KHASHL_H
#define __AC_KHASHL_H
#define AC_VERSION_KHASHL_H "0.1"
#include <stdlib.h>
#include <string.h>
#include <limits.h>
/************************************
* Compiler specific configurations *
************************************/
#if UINT_MAX == 0xffffffffu
typedef unsigned int khint32_t;
#elif ULONG_MAX == 0xffffffffu
typedef unsigned long khint32_t;
#endif
#if ULONG_MAX == ULLONG_MAX
typedef unsigned long khint64_t;
#else
typedef unsigned long long khint64_t;
#endif
#ifndef kh_inline
#ifdef _MSC_VER
#define kh_inline __inline
#else
#define kh_inline inline
#endif
#endif /* kh_inline */
#ifndef klib_unused
#if (defined __clang__ && __clang_major__ >= 3) || (defined __GNUC__ && __GNUC__ >= 3)
#define klib_unused __attribute__ ((__unused__))
#else
#define klib_unused
#endif
#endif /* klib_unused */
#define KH_LOCAL static kh_inline klib_unused
typedef khint32_t khint_t;
/******************
* malloc aliases *
******************/
#ifndef kcalloc
#define kcalloc(N,Z) calloc(N,Z)
#endif
#ifndef kmalloc
#define kmalloc(Z) malloc(Z)
#endif
#ifndef krealloc
#define krealloc(P,Z) realloc(P,Z)
#endif
#ifndef kfree
#define kfree(P) free(P)
#endif
/****************************
* Simple private functions *
****************************/
#define __kh_used(flag, i) (flag[i>>5] >> (i&0x1fU) & 1U)
#define __kh_set_used(flag, i) (flag[i>>5] |= 1U<<(i&0x1fU))
#define __kh_set_unused(flag, i) (flag[i>>5] &= ~(1U<<(i&0x1fU)))
#define __kh_fsize(m) ((m) < 32? 1 : (m)>>5)
static kh_inline khint_t __kh_h2b(khint_t hash, khint_t bits) { return hash * 2654435769U >> (32 - bits); }
/*******************
* Hash table base *
*******************/
#define __KHASHL_TYPE(HType, khkey_t) \
typedef struct { \
khint_t bits, count; \
khint32_t *used; \
khkey_t *keys; \
} HType;
#define __KHASHL_PROTOTYPES(HType, prefix, khkey_t) \
extern HType *prefix##_init(void); \
extern void prefix##_destroy(HType *h); \
extern void prefix##_clear(HType *h); \
extern khint_t prefix##_getp(const HType *h, const khkey_t *key); \
extern int prefix##_resize(HType *h, khint_t new_n_buckets); \
extern khint_t prefix##_putp(HType *h, const khkey_t *key, int *absent); \
extern void prefix##_del(HType *h, khint_t k);
#define __KHASHL_IMPL_BASIC(SCOPE, HType, prefix) \
SCOPE HType *prefix##_init(void) { \
return (HType*)kcalloc(1, sizeof(HType)); \
} \
SCOPE void prefix##_destroy(HType *h) { \
if (!h) return; \
kfree((void *)h->keys); kfree(h->used); \
kfree(h); \
} \
SCOPE void prefix##_clear(HType *h) { \
if (h && h->used) { \
uint32_t n_buckets = 1U << h->bits; \
memset(h->used, 0, __kh_fsize(n_buckets) * sizeof(khint32_t)); \
h->count = 0; \
} \
}
#define __KHASHL_IMPL_GET(SCOPE, HType, prefix, khkey_t, __hash_fn, __hash_eq) \
SCOPE khint_t prefix##_getp(const HType *h, const khkey_t *key) { \
khint_t i, last, n_buckets, mask; \
if (h->keys == 0) return 0; \
n_buckets = 1U << h->bits; \
mask = n_buckets - 1U; \
i = last = __kh_h2b(__hash_fn(*key), h->bits); \
while (__kh_used(h->used, i) && !__hash_eq(h->keys[i], *key)) { \
i = (i + 1U) & mask; \
if (i == last) return n_buckets; \
} \
return !__kh_used(h->used, i)? n_buckets : i; \
} \
SCOPE khint_t prefix##_get(const HType *h, khkey_t key) { return prefix##_getp(h, &key); }
#define __KHASHL_IMPL_RESIZE(SCOPE, HType, prefix, khkey_t, __hash_fn, __hash_eq) \
SCOPE int prefix##_resize(HType *h, khint_t new_n_buckets) { \
khint32_t *new_used = 0; \
khint_t j = 0, x = new_n_buckets, n_buckets, new_bits, new_mask; \
while ((x >>= 1) != 0) ++j; \
if (new_n_buckets & (new_n_buckets - 1)) ++j; \
new_bits = j > 2? j : 2; \
new_n_buckets = 1U << new_bits; \
if (h->count > (new_n_buckets>>1) + (new_n_buckets>>2)) return 0; /* requested size is too small */ \
new_used = (khint32_t*)kmalloc(__kh_fsize(new_n_buckets) * sizeof(khint32_t)); \
memset(new_used, 0, __kh_fsize(new_n_buckets) * sizeof(khint32_t)); \
if (!new_used) return -1; /* not enough memory */ \
n_buckets = h->keys? 1U<<h->bits : 0U; \
if (n_buckets < new_n_buckets) { /* expand */ \
khkey_t *new_keys = (khkey_t*)krealloc((void*)h->keys, new_n_buckets * sizeof(khkey_t)); \
if (!new_keys) { kfree(new_used); return -1; } \
h->keys = new_keys; \
} /* otherwise shrink */ \
new_mask = new_n_buckets - 1; \
for (j = 0; j != n_buckets; ++j) { \
khkey_t key; \
if (!__kh_used(h->used, j)) continue; \
key = h->keys[j]; \
__kh_set_unused(h->used, j); \
while (1) { /* kick-out process; sort of like in Cuckoo hashing */ \
khint_t i; \
i = __kh_h2b(__hash_fn(key), new_bits); \
while (__kh_used(new_used, i)) i = (i + 1) & new_mask; \
__kh_set_used(new_used, i); \
if (i < n_buckets && __kh_used(h->used, i)) { /* kick out the existing element */ \
{ khkey_t tmp = h->keys[i]; h->keys[i] = key; key = tmp; } \
__kh_set_unused(h->used, i); /* mark it as deleted in the old hash table */ \
} else { /* write the element and jump out of the loop */ \
h->keys[i] = key; \
break; \
} \
} \
} \
if (n_buckets > new_n_buckets) /* shrink the hash table */ \
h->keys = (khkey_t*)krealloc((void *)h->keys, new_n_buckets * sizeof(khkey_t)); \
kfree(h->used); /* free the working space */ \
h->used = new_used, h->bits = new_bits; \
return 0; \
}
#define __KHASHL_IMPL_PUT(SCOPE, HType, prefix, khkey_t, __hash_fn, __hash_eq) \
SCOPE khint_t prefix##_putp(HType *h, const khkey_t *key, int *absent) { \
khint_t n_buckets, i, last, mask; \
n_buckets = h->keys? 1U<<h->bits : 0U; \
*absent = -1; \
if (h->count >= (n_buckets>>1) + (n_buckets>>2)) { /* rehashing */ \
if (prefix##_resize(h, n_buckets + 1U) < 0) \
return n_buckets; \
n_buckets = 1U<<h->bits; \
} /* TODO: to implement automatically shrinking; resize() already support shrinking */ \
mask = n_buckets - 1; \
i = last = __kh_h2b(__hash_fn(*key), h->bits); \
while (__kh_used(h->used, i) && !__hash_eq(h->keys[i], *key)) { \
i = (i + 1U) & mask; \
if (i == last) break; \
} \
if (!__kh_used(h->used, i)) { /* not present at all */ \
h->keys[i] = *key; \
__kh_set_used(h->used, i); \
++h->count; \
*absent = 1; \
} else *absent = 0; /* Don't touch h->keys[i] if present */ \
return i; \
} \
SCOPE khint_t prefix##_put(HType *h, khkey_t key, int *absent) { return prefix##_putp(h, &key, absent); }
#define __KHASHL_IMPL_DEL(SCOPE, HType, prefix, khkey_t, __hash_fn) \
SCOPE int prefix##_del(HType *h, khint_t i) { \
khint_t j = i, k, mask, n_buckets; \
if (h->keys == 0) return 0; \
n_buckets = 1U<<h->bits; \
mask = n_buckets - 1U; \
while (1) { \
j = (j + 1U) & mask; \
if (j == i || !__kh_used(h->used, j)) break; /* j==i only when the table is completely full */ \
k = __kh_h2b(__hash_fn(h->keys[j]), h->bits); \
if ((j > i && (k <= i || k > j)) || (j < i && (k <= i && k > j))) \
h->keys[i] = h->keys[j], i = j; \
} \
__kh_set_unused(h->used, i); \
--h->count; \
return 1; \
}
#define KHASHL_DECLARE(HType, prefix, khkey_t) \
__KHASHL_TYPE(HType, khkey_t) \
__KHASHL_PROTOTYPES(HType, prefix, khkey_t)
#define KHASHL_INIT(SCOPE, HType, prefix, khkey_t, __hash_fn, __hash_eq) \
__KHASHL_TYPE(HType, khkey_t) \
__KHASHL_IMPL_BASIC(SCOPE, HType, prefix) \
__KHASHL_IMPL_GET(SCOPE, HType, prefix, khkey_t, __hash_fn, __hash_eq) \
__KHASHL_IMPL_RESIZE(SCOPE, HType, prefix, khkey_t, __hash_fn, __hash_eq) \
__KHASHL_IMPL_PUT(SCOPE, HType, prefix, khkey_t, __hash_fn, __hash_eq) \
__KHASHL_IMPL_DEL(SCOPE, HType, prefix, khkey_t, __hash_fn)
/*****************************
* More convenient interface *
*****************************/
#define __kh_packed __attribute__ ((__packed__))
#define __kh_cached_hash(x) ((x).hash)
#define KHASHL_SET_INIT(SCOPE, HType, prefix, khkey_t, __hash_fn, __hash_eq) \
typedef struct { khkey_t key; } __kh_packed HType##_s_bucket_t; \
static kh_inline khint_t prefix##_s_hash(HType##_s_bucket_t x) { return __hash_fn(x.key); } \
static kh_inline int prefix##_s_eq(HType##_s_bucket_t x, HType##_s_bucket_t y) { return __hash_eq(x.key, y.key); } \
KHASHL_INIT(KH_LOCAL, HType, prefix##_s, HType##_s_bucket_t, prefix##_s_hash, prefix##_s_eq) \
SCOPE HType *prefix##_init(void) { return prefix##_s_init(); } \
SCOPE void prefix##_destroy(HType *h) { prefix##_s_destroy(h); } \
SCOPE khint_t prefix##_get(const HType *h, khkey_t key) { HType##_s_bucket_t t; t.key = key; return prefix##_s_getp(h, &t); } \
SCOPE int prefix##_del(HType *h, khint_t k) { return prefix##_s_del(h, k); } \
SCOPE khint_t prefix##_put(HType *h, khkey_t key, int *absent) { HType##_s_bucket_t t; t.key = key; return prefix##_s_putp(h, &t, absent); }
#define KHASHL_MAP_INIT(SCOPE, HType, prefix, khkey_t, kh_val_t, __hash_fn, __hash_eq) \
typedef struct { khkey_t key; kh_val_t val; } __kh_packed HType##_m_bucket_t; \
static kh_inline khint_t prefix##_m_hash(HType##_m_bucket_t x) { return __hash_fn(x.key); } \
static kh_inline int prefix##_m_eq(HType##_m_bucket_t x, HType##_m_bucket_t y) { return __hash_eq(x.key, y.key); } \
KHASHL_INIT(KH_LOCAL, HType, prefix##_m, HType##_m_bucket_t, prefix##_m_hash, prefix##_m_eq) \
SCOPE HType *prefix##_init(void) { return prefix##_m_init(); } \
SCOPE void prefix##_destroy(HType *h) { prefix##_m_destroy(h); } \
SCOPE khint_t prefix##_get(const HType *h, khkey_t key) { HType##_m_bucket_t t; t.key = key; return prefix##_m_getp(h, &t); } \
SCOPE int prefix##_del(HType *h, khint_t k) { return prefix##_m_del(h, k); } \
SCOPE khint_t prefix##_put(HType *h, khkey_t key, int *absent) { HType##_m_bucket_t t; t.key = key; return prefix##_m_putp(h, &t, absent); }
#define KHASHL_CSET_INIT(SCOPE, HType, prefix, khkey_t, __hash_fn, __hash_eq) \
typedef struct { khkey_t key; khint_t hash; } __kh_packed HType##_cs_bucket_t; \
static kh_inline int prefix##_cs_eq(HType##_cs_bucket_t x, HType##_cs_bucket_t y) { return x.hash == y.hash && __hash_eq(x.key, y.key); } \
KHASHL_INIT(KH_LOCAL, HType, prefix##_cs, HType##_cs_bucket_t, __kh_cached_hash, prefix##_cs_eq) \
SCOPE HType *prefix##_init(void) { return prefix##_cs_init(); } \
SCOPE void prefix##_destroy(HType *h) { prefix##_cs_destroy(h); } \
SCOPE khint_t prefix##_get(const HType *h, khkey_t key) { HType##_cs_bucket_t t; t.key = key; t.hash = __hash_fn(key); return prefix##_cs_getp(h, &t); } \
SCOPE int prefix##_del(HType *h, khint_t k) { return prefix##_cs_del(h, k); } \
SCOPE khint_t prefix##_put(HType *h, khkey_t key, int *absent) { HType##_cs_bucket_t t; t.key = key, t.hash = __hash_fn(key); return prefix##_cs_putp(h, &t, absent); }
#define KHASHL_CMAP_INIT(SCOPE, HType, prefix, khkey_t, kh_val_t, __hash_fn, __hash_eq) \
typedef struct { khkey_t key; kh_val_t val; khint_t hash; } __kh_packed HType##_cm_bucket_t; \
static kh_inline int prefix##_cm_eq(HType##_cm_bucket_t x, HType##_cm_bucket_t y) { return x.hash == y.hash && __hash_eq(x.key, y.key); } \
KHASHL_INIT(KH_LOCAL, HType, prefix##_cm, HType##_cm_bucket_t, __kh_cached_hash, prefix##_cm_eq) \
SCOPE HType *prefix##_init(void) { return prefix##_cm_init(); } \
SCOPE void prefix##_destroy(HType *h) { prefix##_cm_destroy(h); } \
SCOPE khint_t prefix##_get(const HType *h, khkey_t key) { HType##_cm_bucket_t t; t.key = key; t.hash = __hash_fn(key); return prefix##_cm_getp(h, &t); } \
SCOPE int prefix##_del(HType *h, khint_t k) { return prefix##_cm_del(h, k); } \
SCOPE khint_t prefix##_put(HType *h, khkey_t key, int *absent) { HType##_cm_bucket_t t; t.key = key, t.hash = __hash_fn(key); return prefix##_cm_putp(h, &t, absent); }
/**************************
* Public macro functions *
**************************/
#define kh_bucket(h, x) ((h)->keys[x])
#define kh_size(h) ((h)->count)
#define kh_capacity(h) ((h)->keys? 1U<<(h)->bits : 0U)
#define kh_end(h) kh_capacity(h)
#define kh_key(h, x) ((h)->keys[x].key)
#define kh_val(h, x) ((h)->keys[x].val)
/**************************************
* Common hash and equality functions *
**************************************/
#define kh_eq_generic(a, b) ((a) == (b))
#define kh_eq_str(a, b) (strcmp((a), (b)) == 0)
#define kh_hash_dummy(x) ((khint_t)(x))
static kh_inline khint_t kh_hash_uint32(khint_t key) {
key += ~(key << 15);
key ^= (key >> 10);
key += (key << 3);
key ^= (key >> 6);
key += ~(key << 11);
key ^= (key >> 16);
return key;
}
static kh_inline khint_t kh_hash_uint64(khint64_t key) {
key = ~key + (key << 21);
key = key ^ key >> 24;
key = (key + (key << 3)) + (key << 8);
key = key ^ key >> 14;
key = (key + (key << 2)) + (key << 4);
key = key ^ key >> 28;
key = key + (key << 31);
return (khint_t)key;
}
static kh_inline khint_t kh_hash_str(const char *s) {
khint_t h = (khint_t)*s;
if (h) for (++s ; *s; ++s) h = (h << 5) - h + (khint_t)*s;
return h;
}
#endif /* __AC_KHASHL_H */

198
deps/yopt.h vendored

@ -1,198 +0,0 @@
/* Copyright (c) 2012-2013 Yoran Heling
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
/* This is a simple command-line option parser. Operation is similar to
* getopt_long(), except with a cleaner API.
*
* This is implemented in a single header file, as it's pretty small and you
* generally only use an option parser in a single .c file in your program.
*
* Supports (examples from GNU tar(1)):
* "--gzip"
* "--file <arg>"
* "--file=<arg>"
* "-z"
* "-f <arg>"
* "-f<arg>"
* "-zf <arg>"
* "-zf<arg>"
* "--" (To stop looking for futher options)
* "<arg>" (Non-option arguments)
*
* Issues/non-features:
* - An option either requires an argument or it doesn't.
* - No way to specify how often an option can/should be used.
* - No way to specify the type of an argument (filename/integer/enum/whatever)
*/
#ifndef YOPT_H
#define YOPT_H
#include <string.h>
#include <stdarg.h>
#include <stdlib.h>
#include <stdio.h>
typedef struct {
/* Value yopt_next() will return for this option */
int val;
/* Whether this option needs an argument */
int needarg;
/* Name(s) of this option, prefixed with '-' or '--' and separated by a
* comma. E.g. "-z", "--gzip", "-z,--gzip".
* An option can have any number of aliases.
*/
const char *name;
} yopt_opt_t;
typedef struct {
int argc;
int cur;
int argsep; /* '--' found */
char **argv;
char *sh;
const yopt_opt_t *opts;
char errbuf[128];
} yopt_t;
/* opts must be an array of options, terminated with an option with val=0 */
static inline void yopt_init(yopt_t *o, int argc, char **argv, const yopt_opt_t *opts) {
o->argc = argc;
o->argv = argv;
o->opts = opts;
o->cur = 0;
o->argsep = 0;
o->sh = NULL;
}
static inline const yopt_opt_t *_yopt_find(const yopt_opt_t *o, const char *v) {
const char *tn, *tv;
for(; o->val; o++) {
tn = o->name;
while(*tn) {
tv = v;
while(*tn && *tn != ',' && *tv && *tv != '=' && *tn == *tv) {
tn++;
tv++;
}
if(!(*tn && *tn != ',') && !(*tv && *tv != '='))
return o;
while(*tn && *tn != ',')
tn++;
while(*tn == ',')
tn++;
}
}
return NULL;
}
static inline int _yopt_err(yopt_t *o, char **val, const char *fmt, ...) {
va_list va;
va_start(va, fmt);
vsnprintf(o->errbuf, sizeof(o->errbuf), fmt, va);
va_end(va);
*val = o->errbuf;
return -2;
}
/* Return values:
* 0 -> Non-option argument, val is its value
* -1 -> Last argument has been processed
* -2 -> Error, val will contain the error message.
* x -> Option with val = x found. If the option requires an argument, its
* value will be in val.
*/
static inline int yopt_next(yopt_t *o, char **val) {
const yopt_opt_t *opt;
char sh[3];
*val = NULL;
if(o->sh)
goto inshort;
if(++o->cur >= o->argc)
return -1;
if(!o->argsep && o->argv[o->cur][0] == '-' && o->argv[o->cur][1] == '-' && o->argv[o->cur][2] == 0) {
o->argsep = 1;
if(++o->cur >= o->argc)
return -1;
}
if(o->argsep || *o->argv[o->cur] != '-') {
*val = o->argv[o->cur];
return 0;
}
if(o->argv[o->cur][1] != '-') {
o->sh = o->argv[o->cur]+1;
goto inshort;
}
/* Now we're supposed to have a long option */
if(!(opt = _yopt_find(o->opts, o->argv[o->cur])))
return _yopt_err(o, val, "Unknown option '%s'", o->argv[o->cur]);
if((*val = strchr(o->argv[o->cur], '=')) != NULL)
(*val)++;
if(!opt->needarg && *val)
return _yopt_err(o, val, "Option '%s' does not accept an argument", o->argv[o->cur]);
if(opt->needarg && !*val) {
if(o->cur+1 >= o->argc)
return _yopt_err(o, val, "Option '%s' requires an argument", o->argv[o->cur]);
*val = o->argv[++o->cur];
}
return opt->val;
/* And here we're supposed to have a short option */
inshort:
sh[0] = '-';
sh[1] = *o->sh;
sh[2] = 0;
if(!(opt = _yopt_find(o->opts, sh)))
return _yopt_err(o, val, "Unknown option '%s'", sh);
o->sh++;
if(opt->needarg && *o->sh)
*val = o->sh;
else if(opt->needarg) {
if(++o->cur >= o->argc)
return _yopt_err(o, val, "Option '%s' requires an argument", sh);
*val = o->argv[o->cur];
}
if(!*o->sh || opt->needarg)
o->sh = NULL;
return opt->val;
}
#endif
/* vim: set noet sw=4 ts=4: */

425
doc/ncdu.pod

@ -1,425 +0,0 @@
=head1 NAME
B<ncdu> - NCurses Disk Usage
=head1 SYNOPSIS
B<ncdu> [I<options>] I<dir>
=head1 DESCRIPTION
ncdu (NCurses Disk Usage) is a curses-based version of the well-known 'du', and
provides a fast way to see what directories are using your disk space.
=head1 OPTIONS
=head2 Mode Selection
=over
=item -h, --help
Print a short help message and quit.
=item -v, -V, --version
Print ncdu version and quit.
=item -f I<FILE>
Load the given file, which has earlier been created with the C<-o> option. If
I<FILE> is equivalent to C<->, the file is read from standard input.
For the sake of preventing a screw-up, the current version of ncdu will assume
that the directory information in the imported file does not represent the
filesystem on which the file is being imported. That is, the refresh, file
deletion and shell spawning options in the browser will be disabled.
=item I<dir>
Scan the given directory.
=item -o I<FILE>
Export all necessary information to I<FILE> instead of opening the browser
interface. If I<FILE> is C<->, the data is written to standard output. See the
examples section below for some handy use cases.
Be warned that the exported data may grow quite large when exporting a
directory with many files. 10.000 files will get you an export in the order of
600 to 700 KiB uncompressed, or a little over 100 KiB when compressed with
gzip. This scales linearly, so be prepared to handle a few tens of megabytes
when dealing with millions of files.
=item -e
Enable extended information mode. This will, in addition to the usual file
information, also read the ownership, permissions and last modification time
for each file. This will result in higher memory usage (by roughly ~30%) and in
a larger output file when exporting.
When using the file export/import function, this flag will need to be added
both when exporting (to make sure the information is added to the export), and
when importing (to read this extra information in memory). This flag has no
effect when importing a file that has been exported without the extended
information.
This enables viewing and sorting by the latest child mtime, or modified time,
using 'm' and 'M', respectively.
=back
=head2 Interface options
=over
=item -0
Don't give any feedback while scanning a directory or importing a file, other
than when a fatal error occurs. Ncurses will not be initialized until the scan
is complete. When exporting the data with C<-o>, ncurses will not be
initialized at all. This option is the default when exporting to standard
output.
=item -1
Similar to C<-0>, but does give feedback on the scanning progress with a single
line of output. This option is the default when exporting to a file.
In some cases, the ncurses browser interface which you'll see after the
scan/import is complete may look garbled when using this option. If you're not
exporting to a file, C<-2> is probably a better choice.
=item -2
Provide a full-screen ncurses interface while scanning a directory or importing
a file. This is the only interface that provides feedback on any non-fatal
errors while scanning.
=item -q
Quiet mode. While scanning or importing the directory, ncdu will update the
screen 10 times a second by default, this will be decreased to once every 2
seconds in quiet mode. Use this feature to save bandwidth over remote
connections. This option has no effect when C<-0> is used.
=item -r
Read-only mode. This will disable the built-in file deletion feature. This
option has no effect when C<-o> is used, because there will not be a browser
interface in that case. It has no effect when C<-f> is used, either, because
the deletion feature is disabled in that case anyway.
WARNING: This option will only prevent deletion through the file browser. It is
still possible to spawn a shell from ncdu and delete or modify files from
there. To disable that feature as well, pass the C<-r> option twice (see
C<-rr>).
=item -rr
In addition to C<-r>, this will also disable the shell spawning feature of the
file browser.
=item --si
List sizes using base 10 prefixes, that is, powers of 1000 (KB, MB, etc), as
defined in the International System of Units (SI), instead of the usual base 2
prefixes, that is, powers of 1024 (KiB, MiB, etc).
=item --confirm-quit
Requires a confirmation before quitting ncdu. Very helpful when you
accidentally press 'q' during or after a very long scan.
=item --color I<SCHEME>
Select a color scheme. Currently only two schemes are recognized: I<off> to
disable colors (the default) and I<dark> for a color scheme intended for dark
backgrounds.
=back
=head2 Scan Options
These options affect the scanning progress, and have no effect when importing
directory information from a file.
=over
=item -x
Do not cross filesystem boundaries, i.e. only count files and directories on
the same filesystem as the directory being scanned.
=item --exclude I<PATTERN>
Exclude files that match I<PATTERN>. The files will still be displayed by
default, but are not counted towards the disk usage statistics. This argument
can be added multiple times to add more patterns.
=item -X I<FILE>, --exclude-from I<FILE>
Exclude files that match any pattern in I<FILE>. Patterns should be separated
by a newline.
=item --exclude-caches
Exclude directories containing CACHEDIR.TAG. The directories will still be
displayed, but not their content, and they are not counted towards the disk
usage statistics.
See http://www.brynosaurus.com/cachedir/
=item -L, --follow-symlinks
Follow symlinks and count the size of the file they point to. As of ncdu 1.14,
this option will not follow symlinks to directories and will count each
symlinked file as a unique file (i.e. unlike how hard links are handled). This
is subject to change in later versions.
=back
=head1 KEYS
=over
=item ?
Show help + keys + about screen
=item up, down j, k
Cycle through the items
=item right, enter, l
Open selected directory
=item left, <, h
Go to parent directory
=item n
Order by filename (press again for descending order)
=item s
Order by filesize (press again for descending order)
=item C
Order by number of items (press again for descending order)
=item a
Toggle between showing disk usage and showing apparent size.
=item M
Order by latest child mtime, or modified time. (press again for descending order)
Requires the -e flag.
=item d
Delete the selected file or directory. An error message will be shown when the
contents of the directory do not match or do not exist anymore on the
filesystem.
=item t
Toggle dirs before files when sorting.
=item g
Toggle between showing percentage, graph, both, or none. Percentage is relative
to the size of the current directory, graph is relative to the largest item in
the current directory.
=item c
Toggle display of child item counts.
=item m
Toggle display of latest child mtime, or modified time. Requires the -e flag.
=item e
Show/hide 'hidden' or 'excluded' files and directories. Please note that even
though you can't see the hidden files and directories, they are still there and
they are still included in the directory sizes. If you suspect that the totals
shown at the bottom of the screen are not correct, make sure you haven't
enabled this option.
=item i
Show information about the current selected item.
=item r
Refresh/recalculate the current directory.
=item b
Spawn shell in current directory.
Ncdu will determine your preferred shell from the C<NCDU_SHELL> or C<SHELL>
variable (in that order), or will call C</bin/sh> if neither are set. This
allows you to also configure another command to be run when the 'b' key is
pressed. For example, to spawn the L<vifm(1)> file manager instead of a shell,
run ncdu as follows:
export NCDU_SHELL=vifm
ncdu
=item q
Quit
=back
=head1 FILE FLAGS
Entries in the browser interface may be prefixed by a one-character flag. These
flags have the following meaning:
=over
=item !
An error occurred while reading this directory.
=item .
An error occurred while reading a subdirectory, so the indicated size may not be
correct.
=item <
File or directory is excluded from the statistics by using exclude patterns.
=item >
Directory is on another filesystem.
=item @
This is neither a file nor a folder (symlink, socket, ...).
=item H
Same file was already counted (hard link).
=item e
Empty directory.
=back
=head1 EXAMPLES
To scan and browse the directory you're currently in, all you need is a simple:
ncdu
If you want to scan a full filesystem, your root filesystem, for example, then
you'll want to use C<-x>:
ncdu -x /
Since scanning a large directory may take a while, you can scan a directory and
export the results for later viewing:
ncdu -1xo- / | gzip >export.gz
# ...some time later:
zcat export.gz | ncdu -f-
To export from a cron job, make sure to replace C<-1> with C<-0> to suppress
any unnecessary output.
You can also export a directory and browse it once scanning is done:
ncdu -o- | tee export.file | ./ncdu -f-
The same is possible with gzip compression, but is a bit kludgey:
ncdu -o- | gzip | tee export.gz | gunzip | ./ncdu -f-
To scan a system remotely, but browse through the files locally:
ssh -C user@system ncdu -o- / | ./ncdu -f-
The C<-C> option to ssh enables compression, which will be very useful over
slow links. Remote scanning and local viewing have two major advantages when
compared to running ncdu directly on the remote system: You can browse through
the scanned directory on the local system without any network latency, and ncdu
does not keep the entire directory structure in memory when exporting, so you
won't consume much memory on the remote system.
=head1 HARD LINKS
Every disk usage analysis utility has its own way of (not) counting hard links.
There does not seem to be any universally agreed method of handling hard links,
and it is even inconsistent among different versions of ncdu. This section
explains what each version of ncdu does.
ncdu 1.5 and below does not support any hard link detection at all: each link
is considered a separate inode and its size is counted for every link. This
means that the displayed directory sizes are incorrect when analyzing
directories which contain hard links.
ncdu 1.6 has basic hard link detection: When a link to a previously encountered
inode is detected, the link is considered to have a file size of zero bytes.
Its size is not counted again, and the link is indicated in the browser
interface with a 'H' mark. The displayed directory sizes are only correct when
all links to an inode reside within that directory. When this is not the case,
the sizes may or may not be correct, depending on which links were considered
as "duplicate" and which as "original". The indicated size of the topmost
directory (that is, the one specified on the command line upon starting ncdu)
is always correct.
ncdu 1.7 and later has improved hard link detection. Each file that has more
than one link has the "H" mark visible in the browser interface. Each hard
link is counted exactly once for every directory it appears in. The indicated
size of each directory is therefore, correctly, the sum of the sizes of all
unique inodes that can be found in that directory. Note, however, that this may
not always be the same as the space that will be reclaimed after deleting the
directory, as some inodes may still be accessible from hard links outside it.
=head1 BUGS
Directory hard links are not supported. They will not be detected as being hard
links, and will thus be scanned and counted multiple times.
Some minor glitches may appear when displaying filenames that contain multibyte
or multicolumn characters.
All sizes are internally represented as a signed 64-bit integer. If you have a
directory larger than 8 EiB minus one byte, ncdu will clip its size to 8 EiB
minus one byte. When deleting items in a directory with a clipped size, the
resulting sizes will be incorrect.
Item counts are stored in a signed 32-bit integer without overflow detection.
If you have a directory with more than 2 billion files, quite literally
anything can happen.
Please report any other bugs you may find at the bug tracker, which can be
found on the web site at https://dev.yorhel.nl/ncdu
=head1 AUTHOR
Written by Yoran Heling <projects@yorhel.nl>.
=head1 SEE ALSO
L<du(1)>

ncdu.1 Normal file
@ -0,0 +1,620 @@
.\" SPDX-FileCopyrightText: Yorhel <projects@yorhel.nl>
.\" SPDX-License-Identifier: MIT
.Dd August 16, 2025
.Dt NCDU 1
.Os
.Sh NAME
.Nm ncdu
.Nd NCurses Disk Usage
.
.Sh SYNOPSIS
.Nm
.Op Fl f Ar file
.Op Fl o Ar file
.Op Fl O Ar file
.Op Fl e , \-extended , \-no\-extended
.Op Fl \-ignore\-config
.Op Fl x , \-one\-file\-system , \-cross\-file\-system
.Op Fl \-exclude Ar pattern
.Op Fl X , \-exclude\-from Ar file
.Op Fl \-include\-caches , \-exclude\-caches
.Op Fl L , \-follow\-symlinks , \-no\-follow\-symlinks
.Op Fl \-include\-kernfs , \-exclude\-kernfs
.Op Fl t , \-threads Ar num
.Op Fl c , \-compress , \-no\-compress
.Op Fl \-compress\-level Ar num
.Op Fl \-export\-block\-size Ar num
.Op Fl 0 , 1 , 2
.Op Fl q , \-slow\-ui\-updates , \-fast\-ui\-updates
.Op Fl \-enable\-shell , \-disable\-shell
.Op Fl \-enable\-delete , \-disable\-delete
.Op Fl \-enable\-refresh , \-disable\-refresh
.Op Fl r
.Op Fl \-si , \-no\-si
.Op Fl \-disk\-usage , \-apparent\-size
.Op Fl \-show\-hidden , \-hide\-hidden
.Op Fl \-show\-itemcount , \-hide\-itemcount
.Op Fl \-show\-mtime , \-hide\-mtime
.Op Fl \-show\-graph , \-hide\-graph
.Op Fl \-show\-percent , \-hide\-percent
.Op Fl \-graph\-style Ar hash | half\-block | eighth\-block
.Op Fl \-shared\-column Ar off | shared | unique
.Op Fl \-sort Ar column
.Op Fl \-enable\-natsort , \-disable\-natsort
.Op Fl \-group\-directories\-first , \-no\-group\-directories\-first
.Op Fl \-confirm\-quit , \-no\-confirm\-quit
.Op Fl \-confirm\-delete , \-no\-confirm\-delete
.Op Fl \-delete\-command Ar command
.Op Fl \-color Ar off | dark | dark-bg
.Op Ar path
.Nm
.Op Fl h , \-help
.Nm
.Op Fl v , V , \-version
.
.Sh DESCRIPTION
.Nm
(NCurses Disk Usage) is an interactive curses-based version of the well-known
.Xr du 1 ,
and provides a fast way to see what directories are using your disk space.
.
.Sh OPTIONS
.Ss Mode Selection
.Bl -tag -width Ds
.It Fl h , \-help
Print a short help message and quit.
.It Fl v , V , \-version
Print version and quit.
.It Fl f Ar file
Load the given file, which has earlier been created with the
.Fl o
or
.Fl O
flag.
If
.Ar file
is equivalent to '\-', the file is read from standard input.
Reading from standard input is only supported for the JSON format.
.Pp
For the sake of preventing a screw-up, the current version of
.Nm
will assume that the directory information in the imported file does not
represent the filesystem on which the file is being imported.
That is, the refresh, file deletion and shell spawning options in the browser
will be disabled.
.It Ar path
Scan the given directory.
.It Fl o Ar file
Export the directory tree in JSON format to
.Ar file
instead of opening the browser interface.
If
.Ar file
is '\-', the data is written to standard output.
See the examples section below for some handy use cases.
.Pp
Be warned that the exported data may grow quite large when exporting a
directory with many files.
10,000 files will get you an export in the order of 600 to 700 KiB
uncompressed, or a little over 100 KiB when compressed with gzip.
This scales linearly, so be prepared to handle a few tens of megabytes when
dealing with millions of files.
.Pp
Consider enabling
.Fl c
to output Zstandard-compressed JSON, which can significantly reduce the size of
exported data.
.Pp
When running a multi-threaded scan or when scanning a directory tree that may
not fit in memory, consider using
.Fl O
instead.
.It Fl O Ar file
Export the directory tree in binary format to
.Ar file
instead of opening the browser interface.
If
.Ar file
is '\-', the data is written to standard output.
The binary format has built-in compression, supports low-memory multi-threaded
export (in combination with
.Fl t )
and can be browsed without importing the entire directory tree into memory.
.It Fl e , \-extended , \-no\-extended
Enable/disable extended information mode.
This will, in addition to the usual file information, also read the ownership,
permissions and last modification time for each file.
This will result in higher memory usage (by roughly 30%) and in a larger
output file when exporting.
.Pp
When using the file export/import function, this flag should be added both when
exporting (to make sure the information is added to the export) and when
importing (to read this extra information in memory).
This flag has no effect when importing a file that has been exported without
the extended information.
.Pp
This enables viewing and sorting by the latest child mtime, or modified time,
using 'm' and 'M', respectively.
.It Fl \-ignore\-config
Do not attempt to load any configuration files.
.El
.
.Ss Scan Options
These options affect the scanning process and have no effect when importing
directory information from a file.
.Bl -tag -width Ds
.It Fl x , \-one\-file\-system
Do not cross filesystem boundaries, i.e. only count files and directories on
the same filesystem as the directory being scanned.
.It Fl \-cross\-file\-system
Do cross filesystem boundaries.
This is the default, but can be specified to overrule a previously configured
.Fl x .
.It Fl \-exclude Ar pattern
Exclude files that match
.Ar pattern .
The files are still displayed by default, but are not counted towards the disk
usage statistics.
This argument can be added multiple times to add more patterns.
.It Fl X , \-exclude\-from Ar file
Exclude files that match any pattern in
.Ar file .
Patterns should be separated by a newline.
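.Pp
For example, a hypothetical exclude file (the patterns and paths shown here
are only an illustration):
.Bd -literal -offset indent
\&.git
node_modules
*.tmp
.Ed
.Dl ncdu -X ~/.ncduexcludes /home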
.It Fl \-include\-caches , \-exclude\-caches
Include (default) or exclude directories containing
.Pa CACHEDIR.TAG .
Excluded cache directories are still displayed, but their contents will not be
scanned or counted towards the disk usage statistics.
.Lk https://bford.info/cachedir/
.It Fl L , \-follow\-symlinks , \-no\-follow\-symlinks
Follow (or not) symlinks and count the size of the file they point to.
This option does not follow symlinks to directories and will cause each
symlinked file to count as a unique file.
This is different from how hard links are handled.
The exact counting behavior of this flag is subject to change in the future.
.It Fl \-include\-kernfs , \-exclude\-kernfs
(Linux only) Include (default) or exclude Linux pseudo filesystems such as
.Pa /proc
(procfs) and
.Pa /sys
(sysfs).
.Pp
The complete list of currently known pseudo filesystems is: binfmt, bpf, cgroup,
cgroup2, debug, devpts, proc, pstore, security, selinux, sys, trace.
.It Fl t , \-threads Ar num
Number of threads to use when scanning the filesystem, defaults to 1.
.Pp
In single-threaded mode, the JSON export (see
.Fl o )
can operate with very little memory, but in multi-threaded mode the entire
directory tree is first constructed in memory and written out after the
filesystem scan has completed.
This causes a delay in output and requires significantly more memory for large
directory trees.
The binary format (see
.Fl O )
does not have this problem and supports efficient exporting with any number of
threads.
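.Pp
For example, a possible four-thread scan exported to the binary format (the
thread count and file name here are only an illustration):
.Dl ncdu -t 4 -O export.ncdu /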
.El
.
.Ss Export Options
These options affect behavior when exporting to file with the
.Fl o
or
.Fl O
options.
.Bl -tag -width Ds
.It Fl c , \-compress , \-no\-compress
Enable or disable Zstandard compression when exporting to JSON (see
.Fl o ) .
.It Fl \-compress\-level Ar num
Set the Zstandard compression level when using
.Fl O
or
.Fl c .
Valid values are 1 (fastest) to 19 (slowest).
Defaults to 4.
.It Fl \-export\-block\-size Ar num
Set the block size, in kibibytes, for the binary export format (see
.Fl O ) .
Larger blocks require more memory but result in better compression efficiency.
This option can be combined with a higher
.Fl \-compress\-level
for even better compression.
.Pp
Accepted values are between 4 and 16000.
The default is to start at 64 KiB and then gradually increase the block size
for large exports.
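.Pp
For example, a possible invocation that trades memory and CPU time for a
smaller export (the values and paths here are only an illustration):
.Dl ncdu -O export.ncdu --export-block-size 1024 --compress-level 19 /srv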
.El
.
.Ss Interface Options
.Bl -tag -width Ds
.It Fl 0
Don't give any feedback while scanning a directory or importing a file, except
when a fatal error occurs.
Ncurses will not be initialized until the scan is complete.
When exporting the data with
.Fl o ,
ncurses will not be initialized at all.
This option is the default when exporting to standard output.
.It Fl 1
Write progress information to the terminal, but don't open a full-screen
ncurses interface.
This option is the default when exporting to a file.
.Pp
In some cases, the ncurses browser interface which you'll see after the
scan/import is complete may look garbled when using this option.
If you're not exporting to a file,
.Fl 2
is usually a better choice.
.It Fl 2
Show a full-screen ncurses interface while scanning a directory or importing
a file.
This is the only interface that provides feedback on any non-fatal errors while
scanning.
.It Fl q , \-slow\-ui\-updates , \-fast\-ui\-updates
Change the UI update interval while scanning or importing.
.Nm
updates the screen 10 times a second by default (with
.Fl \-fast\-ui\-updates
); this can be decreased to once every 2 seconds with
.Fl q
or
.Fl \-slow\-ui\-updates .
This option can be used to save bandwidth over remote connections.
This option has no effect in combination with
.Fl 0 .
.It Fl \-enable\-shell , \-disable\-shell
Enable or disable shell spawning from the file browser.
This feature is enabled by default when scanning a live directory and disabled
when importing from file.
.It Fl \-enable\-delete , \-disable\-delete
Enable or disable the built-in file deletion feature.
This feature is enabled by default when scanning a live directory and disabled
when importing from file.
Explicitly disabling the deletion feature can work as a safeguard to prevent
accidental data loss.
.It Fl \-enable\-refresh , \-disable\-refresh
Enable or disable directory refreshing from the file browser.
This feature is enabled by default when scanning a live directory and disabled
when importing from file.
.It Fl r
Read-only mode.
When given once, this is an alias for
.Fl \-disable\-delete ,
when given twice it will also add
.Fl \-disable\-shell ,
thus ensuring that there is no way to modify the file system from within
.Nm .
.It Fl \-si , \-no\-si
List sizes using base 10 prefixes, that is, powers of 1000 (kB, MB, etc), as
defined in the International System of Units (SI), instead of the usual base 2
prefixes (KiB, MiB, etc).
.It Fl \-disk\-usage , \-apparent\-size
Select whether to display disk usage (default) or apparent sizes.
Can also be toggled in the file browser with the 'a' key.
.It Fl \-show\-hidden , \-hide\-hidden
Show (default) or hide "hidden" and excluded files.
Can also be toggled in the file browser with the 'e' key.
.It Fl \-show\-itemcount , \-hide\-itemcount
Show or hide (default) the item counts column.
Can also be toggled in the file browser with the 'c' key.
.It Fl \-show\-mtime , \-hide\-mtime
Show or hide (default) the last modification time column.
Can also be toggled in the file browser with the 'm' key.
This option is ignored when not in extended mode, see
.Fl e .
.It Fl \-show\-graph , \-hide\-graph
Show (default) or hide the relative size bar column.
Can also be toggled in the file browser with the 'g' key.
.It Fl \-show\-percent , \-hide\-percent
Show (default) or hide the relative size percent column.
Can also be toggled in the file browser with the 'g' key.
.It Fl \-graph\-style Ar hash | half\-block | eighth\-block
Change the way that the relative size bar column is drawn.
Recognized values are
.Ar hash
to draw ASCII '#' characters (default and most portable),
.Ar half\-block
to use half-block drawing characters or
.Ar eighth\-block
to use eighth-block drawing characters.
Eighth-block characters are the most precise but may not render correctly in
all terminals.
.It Fl \-shared\-column Ar off | shared | unique
Set to
.Ar off
to disable the shared size column for directories,
.Ar shared
(default) to display shared directory sizes as a separate column or
.Ar unique
to display unique directory sizes as a separate column.
These options can also be cycled through in the file browser with the 'u' key.
.It Fl \-sort Ar column
Change the default column to sort on.
Accepted values are
.Ar disk\-usage
(the default),
.Ar name , apparent\-size , itemcount
or
.Ar mtime .
The latter only makes sense in extended mode, see
.Fl e .
.Pp
The column name can be suffixed with
.Li \-asc
or
.Li \-desc
to change the order to ascending or descending, respectively.
For example,
.Li \-\-sort=name\-desc
to sort by name in descending order.
.It Fl \-enable\-natsort , \-disable\-natsort
Enable (default) or disable natural sort when sorting by file name.
.It Fl \-group\-directories\-first , \-no\-group\-directories\-first
Sort (or not) directories before files.
.It Fl \-confirm\-quit , \-no\-confirm\-quit
Require a confirmation before quitting ncdu.
Can be helpful when you accidentally press 'q' during or after a very long scan.
.It Fl \-confirm\-delete , \-no\-confirm\-delete
Require a confirmation before deleting a file or directory.
Enabled by default, but can be disabled if you're absolutely sure you won't
accidentally press 'd'.
.It Fl \-delete\-command Ar command
When set to a non-empty string, replace the built-in file deletion feature with
a custom shell command.
.Pp
The absolute path of the item to be deleted is appended to the given command
and the result is evaluated in a shell.
The command is run from the same directory that ncdu itself was started in.
The
.Ev NCDU_DELETE_PATH
environment variable is set to the absolute path of the item to be deleted and
.Ev NCDU_LEVEL
is set in the same fashion as when spawning a shell from within ncdu.
.Pp
After command completion, the in-memory view of the selected item is refreshed
and directory sizes are adjusted as necessary.
This is not a full refresh of the complete directory tree, so if the item has
been renamed or moved to another directory, its new location is not
automatically picked up.
.Pp
For example, to use
.Xr rm 1
in interactive mode to prompt before each deletion:
.Dl ncdu --no-confirm-delete --delete-command \[aq]rm -ri --\[aq]
Or to move files to trash:
.Dl ncdu --delete-command \[aq]gio trash --\[aq]
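.Pp
As a sketch of how this can be combined with the environment variables, a
hypothetical wrapper script (its name and logging behavior are not part of
ncdu):
.Bd -literal -offset indent
#!/bin/sh
# ncdu appends the path as "$1" and also exports NCDU_DELETE_PATH.
echo "deleting: $NCDU_DELETE_PATH" >> "$HOME/ncdu-deleted.log"
exec rm -r -- "$1"
.Ed
.Dl ncdu --delete-command ~/bin/ncdu-delete.sh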
.It Fl \-color Ar off | dark | dark-bg
Set the color scheme.
The following schemes are recognized:
.Ar off
to disable colors,
.Ar dark
for a color scheme intended for dark backgrounds and
.Ar dark\-bg
for a variation of the
.Ar dark
color scheme that also works in terminals with a light background.
.Pp
The default is
.Ar off .
.El
.
.Sh CONFIGURATION
.Nm
can be configured by placing command-line options in
.Pa /etc/ncdu.conf
or
.Pa $HOME/.config/ncdu/config .
If both files exist, the system configuration will be loaded before the user
configuration, allowing users to override options set in the system
configuration.
Options given on the command line will override options set in the
configuration files.
The files will not be read at all when
.Fl \-ignore\-config
is given on the command line.
.Pp
The configuration file format is simply one command line option per line.
Lines starting with '#' are ignored.
A line can be prefixed with '@' to suppress errors while parsing the option.
Example configuration file:
.Bd -literal -offset indent
# Always enable extended mode
\-e
# Disable file deletion
\-\-disable\-delete
# Exclude .git directories
\-\-exclude .git
# Read excludes from ~/.ncduexcludes, ignore error if the file does not exist
@--exclude-from ~/.ncduexcludes
.Ed
.
.Sh KEYS
.Bl -tag -width Ds
.It ?
Open help + keys + about screen
.It up , down , j , k
Cycle through the items
.It right, enter, l
Open selected directory
.It left, <, h
Go to parent directory
.It n
Order by filename (press again for descending order)
.It s
Order by filesize (press again for descending order)
.It C
Order by number of items (press again for descending order)
.It a
Toggle between showing disk usage and showing apparent size.
.It M
Order by latest child mtime, or modified time (press again for descending
order).
Requires the
.Fl e
flag.
.It d
Delete the selected file or directory.
An error message will be shown when the contents of the directory do not match
or do not exist anymore on the filesystem.
.It t
Toggle dirs before files when sorting.
.It g
Toggle between showing percentage, graph, both, or none.
Percentage is relative to the size of the current directory, graph is relative
to the largest item in the current directory.
.It u
Toggle display of the shared / unique size column for directories that share
hard links.
This column is only visible if the current listing contains directories with
shared hard links.
.It c
Toggle display of child item counts.
.It m
Toggle display of latest child mtime, or modified time.
Requires the
.Fl e
flag.
.It e
Show/hide 'hidden' or 'excluded' files and directories.
Be aware that even if you can't see the hidden files and directories, they are
still there and they are still included in the directory sizes.
If you suspect that the totals shown at the bottom of the screen are not
correct, make sure you haven't enabled this option.
.It i
Show information about the current selected item.
.It r
Refresh/recalculate the current directory.
.It b
Spawn shell in current directory.
.Pp
.Nm
determines your preferred shell from the
.Ev NCDU_SHELL
or
.Ev SHELL
environment variable (in that order), or calls
.Pa /bin/sh
if neither are set.
This allows you to also configure another command to be run when the 'b' key is
pressed.
For example, to spawn the
.Xr vifm 1
file manager instead of a shell, run
.Nm
as follows:
.Dl NCDU_SHELL=vifm ncdu
The
.Ev NCDU_LEVEL
environment variable is set or incremented before spawning the shell, allowing
you to detect if your shell is running from within
.Nm .
This can be useful to avoid nesting multiple instances, although
.Nm
itself does not (currently) warn about or prevent this situation.
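.Pp
For example, a hypothetical snippet for a shell startup file that makes the
nesting visible (the message is only an illustration):
.Bd -literal -offset indent
if [ -n "$NCDU_LEVEL" ]; then
    echo "This shell was spawned from ncdu (level $NCDU_LEVEL)."
fi
.Ed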
.It q
Quit
.El
.
.Sh FILE FLAGS
Entries in the browser interface may be prefixed by a one\-character flag.
These flags have the following meaning:
.Bl -tag -width Ds
.It !
An error occurred while reading this directory.
.It \.
An error occurred while reading a subdirectory, so the indicated size may not
be correct.
.It <
File or directory is excluded from the statistics by using exclude patterns.
.It >
Directory is on another filesystem.
.It ^
Directory is excluded from the statistics due to being a Linux pseudo
filesystem.
.It @
This is neither a file nor a folder (symlink, socket, ...).
.It H
Same file was already counted (hard link).
.It e
Empty directory.
.El
.
.Sh EXAMPLES
To scan and browse the directory you're currently in, all you need is a simple:
.Dl ncdu
To scan a full filesystem, for example your root filesystem, you'll want to use
.Fl x :
.Dl ncdu \-x /
.Pp
Since scanning a large directory may take a while, you can scan a directory and
export the results for later viewing:
.Bd -literal -offset indent
ncdu \-1xO export.ncdu /
# ...some time later:
ncdu \-f export.ncdu
.Ed
To export from a cron job, make sure to replace
.Fl 1
with
.Fl 0
to suppress unnecessary progress output.
.Pp
You can also export a directory and browse it once scanning is done:
.Dl ncdu \-co\- | tee export.json.zst | ./ncdu \-f\-
.Pp
To scan a system remotely, but browse through the files locally:
.Dl ssh user@system ncdu \-co\- / | ./ncdu \-f\-
Remote scanning and local viewing have two major advantages when
compared to running
.Nm
directly on the remote system: You can browse through the scanned directory on
the local system without any network latency, and
.Nm
does not keep the entire directory structure in memory when exporting, so this
won't consume much memory on the remote system.
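.Pp
To include the extended information (ownership, permissions, mtime) in an
export and have it available when browsing later, pass
.Fl e
both when exporting and when importing:
.Bd -literal -offset indent
ncdu \-1eO export.ncdu /
# ...some time later:
ncdu \-ef export.ncdu
.Ed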
.
.Sh SEE ALSO
.Xr du 1 ,
.Xr tree 1 .
.Pp
.Nm
has a website:
.Lk https://dev.yorhel.nl/ncdu
.
.Sh AUTHORS
Written by
.An Yorhel Aq Mt projects@yorhel.nl
.
.Sh BUGS
Directory hard links and firmlinks (MacOS) are not supported.
They are not detected as being hard links and will thus get scanned and counted
multiple times.
.Pp
Some minor glitches may appear when displaying filenames that contain multibyte
or multicolumn characters.
.Pp
The unique and shared directory sizes are calculated based on the assumption
that the link count of hard links does not change during a filesystem scan or
in between refreshes.
If this does happen, for example when a hard link is deleted, then these
numbers will be very much incorrect and a full refresh by restarting ncdu is
needed to get correct numbers again.
.Pp
All sizes are internally represented as a signed 64-bit integer.
If you have a directory larger than 8 EiB minus one byte, ncdu will clip its
size to 8 EiB minus one byte.
When deleting or refreshing items in a directory with a clipped size, the
resulting sizes will be incorrect.
Likewise, item counts are stored in a 32-bit integer, so will be incorrect in
the unlikely event that you happen to have more than 4 billion items in a
directory.
.Pp
Please report any other bugs you may find at the bug tracker, which can be
found on the web site at
.Lk https://dev.yorhel.nl/ncdu

src/bin_export.zig Normal file
@ -0,0 +1,468 @@
// SPDX-FileCopyrightText: Yorhel <projects@yorhel.nl>
// SPDX-License-Identifier: MIT
const std = @import("std");
const main = @import("main.zig");
const model = @import("model.zig");
const sink = @import("sink.zig");
const util = @import("util.zig");
const ui = @import("ui.zig");
const c = @import("c.zig").c;
pub const global = struct {
var fd: std.fs.File = undefined;
var index: std.ArrayListUnmanaged(u8) = .empty;
var file_off: u64 = 0;
var lock: std.Thread.Mutex = .{};
var root_itemref: u64 = 0;
};
pub const SIGNATURE = "\xbfncduEX1";
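// File layout: the 8-byte SIGNATURE, then zstd-compressed data blocks (each
// framed by a type-0 block header and footer carrying the block's total
// length), and finally a single type-1 index block whose footer occupies the
// last 4 bytes of the file.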
pub const ItemKey = enum(u5) {
// all items
type = 0, // EType
name = 1, // bytes
prev = 2, // itemref
// Only for non-specials
asize = 3, // u64
dsize = 4, // u64
// Only for .dir
dev = 5, // u64 only if different from parent dir
rderr = 6, // bool true = error reading directory list, false = error in sub-item, absent = no error
cumasize = 7, // u64
cumdsize = 8, // u64
shrasize = 9, // u64
shrdsize = 10, // u64
items = 11, // u64
sub = 12, // itemref only if dir is not empty
// Only for .link
ino = 13, // u64
nlink = 14, // u32
// Extended mode
uid = 15, // u32
gid = 16, // u32
mode = 17, // u16
mtime = 18, // u64
_,
};
// Pessimistic upper bound on the encoded size of an item, excluding the name field.
// 2 bytes for map start/end, 11 per field (2 for the key, 9 for a full u64).
const MAX_ITEM_LEN = 2 + 11 * @typeInfo(ItemKey).@"enum".fields.len;
pub const CborMajor = enum(u3) { pos, neg, bytes, text, array, map, tag, simple };
inline fn bigu16(v: u16) [2]u8 { return @bitCast(std.mem.nativeToBig(u16, v)); }
inline fn bigu32(v: u32) [4]u8 { return @bitCast(std.mem.nativeToBig(u32, v)); }
inline fn bigu64(v: u64) [8]u8 { return @bitCast(std.mem.nativeToBig(u64, v)); }
inline fn blockHeader(id: u4, len: u28) [4]u8 { return bigu32((@as(u32, id) << 28) | len); }
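// cborByte() builds a CBOR head byte: the 3-bit major type in the high bits
// and a 5-bit argument in the low bits, where the values 24..27 indicate that
// a 1/2/4/8-byte big-endian argument follows (see cborHead() in Thread below).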
inline fn cborByte(major: CborMajor, arg: u5) u8 { return (@as(u8, @intFromEnum(major)) << 5) | arg; }
// (Uncompressed) data block size.
// Start with 64k, then use increasingly larger block sizes as the export file
// grows. This is both to stay within the block number limit of the index block
// and because, with a larger index block, the reader will end up using more
// memory anyway.
fn blockSize(num: u32) usize {
// block size uncompressed data in this num range
// # mil # KiB # GiB
return main.config.export_block_size
orelse if (num < ( 1<<20)) 64<<10 // 64
else if (num < ( 2<<20)) 128<<10 // 128
else if (num < ( 4<<20)) 256<<10 // 512
else if (num < ( 8<<20)) 512<<10 // 2048
else if (num < (16<<20)) 1024<<10 // 8192
else 2048<<10; // 32768
}
// Upper bound on the return value of blockSize()
// (config.export_block_size may be larger than the sizes listed above, let's
// stick with the maximum block size supported by the file format to be safe)
const MAX_BLOCK_SIZE: usize = 1<<28;
pub const Thread = struct {
buf: []u8 = undefined,
off: usize = MAX_BLOCK_SIZE, // pretend we have a full block to trigger a flush() for the first write
block_num: u32 = std.math.maxInt(u32),
itemref: u64 = 0, // ref of item currently being written
// unused, but kept around for easy debugging
fn compressNone(in: []const u8, out: []u8) usize {
@memcpy(out[0..in.len], in);
return in.len;
}
fn compressZstd(in: []const u8, out: []u8) usize {
while (true) {
const r = c.ZSTD_compress(out.ptr, out.len, in.ptr, in.len, main.config.complevel);
if (c.ZSTD_isError(r) == 0) return r;
ui.oom(); // That *ought* to be the only reason the above call can fail.
}
}
fn createBlock(t: *Thread) std.ArrayListUnmanaged(u8) {
var out: std.ArrayListUnmanaged(u8) = .empty;
if (t.block_num == std.math.maxInt(u32) or t.off == 0) return out;
out.ensureTotalCapacityPrecise(main.allocator, 12 + @as(usize, @intCast(c.ZSTD_COMPRESSBOUND(@as(c_int, @intCast(t.off)))))) catch unreachable;
out.items.len = out.capacity;
const bodylen = compressZstd(t.buf[0..t.off], out.items[8..]);
out.items.len = 12 + bodylen;
out.items[0..4].* = blockHeader(0, @intCast(out.items.len));
out.items[4..8].* = bigu32(t.block_num);
out.items[8+bodylen..][0..4].* = blockHeader(0, @intCast(out.items.len));
return out;
}
fn flush(t: *Thread, expected_len: usize) void {
@branchHint(.unlikely);
var block = createBlock(t);
defer block.deinit(main.allocator);
global.lock.lock();
defer global.lock.unlock();
// This can only really happen when the root path exceeds our block size,
// in which case we would probably have error'ed out earlier anyway.
if (expected_len > t.buf.len) ui.die("Error writing data: path too long.\n", .{});
if (block.items.len > 0) {
if (global.file_off >= (1<<40)) ui.die("Export data file has grown too large, please report a bug.\n", .{});
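// Index entry for this block: (file offset << 24) | compressed block length,
// i.e. a 40-bit file offset and a 24-bit length packed into one u64.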
global.index.items[4..][t.block_num*8..][0..8].* = bigu64((global.file_off << 24) + block.items.len);
global.file_off += block.items.len;
global.fd.writeAll(block.items) catch |e|
ui.die("Error writing to file: {s}.\n", .{ ui.errorString(e) });
}
t.off = 0;
t.block_num = @intCast((global.index.items.len - 4) / 8);
global.index.appendSlice(main.allocator, &[1]u8{0}**8) catch unreachable;
if (global.index.items.len + 12 >= (1<<28)) ui.die("Too many data blocks, please report a bug.\n", .{});
const newsize = blockSize(t.block_num);
if (t.buf.len != newsize) t.buf = main.allocator.realloc(t.buf, newsize) catch unreachable;
}
fn cborHead(t: *Thread, major: CborMajor, arg: u64) void {
if (arg <= 23) {
t.buf[t.off] = cborByte(major, @intCast(arg));
t.off += 1;
} else if (arg <= std.math.maxInt(u8)) {
t.buf[t.off] = cborByte(major, 24);
t.buf[t.off+1] = @truncate(arg);
t.off += 2;
} else if (arg <= std.math.maxInt(u16)) {
t.buf[t.off] = cborByte(major, 25);
t.buf[t.off+1..][0..2].* = bigu16(@intCast(arg));
t.off += 3;
} else if (arg <= std.math.maxInt(u32)) {
t.buf[t.off] = cborByte(major, 26);
t.buf[t.off+1..][0..4].* = bigu32(@intCast(arg));
t.off += 5;
} else {
t.buf[t.off] = cborByte(major, 27);
t.buf[t.off+1..][0..8].* = bigu64(arg);
t.off += 9;
}
}
fn cborIndef(t: *Thread, major: CborMajor) void {
t.buf[t.off] = cborByte(major, 31);
t.off += 1;
}
fn itemKey(t: *Thread, key: ItemKey) void {
t.cborHead(.pos, @intFromEnum(key));
}
fn itemRef(t: *Thread, key: ItemKey, ref: ?u64) void {
const r = ref orelse return;
t.itemKey(key);
// Full references compress like shit and most of the references point
// into the same block, so optimize that case by using a negative
// offset instead.
if ((r >> 24) == t.block_num) t.cborHead(.neg, t.itemref - r - 1)
else t.cborHead(.pos, r);
}
// Reserve space for a new item, write out the type, prev and name fields and return the itemref.
fn itemStart(t: *Thread, itype: model.EType, prev_item: ?u64, name: []const u8) u64 {
const min_len = name.len + MAX_ITEM_LEN;
if (t.off + min_len > t.buf.len) t.flush(min_len);
t.itemref = (@as(u64, t.block_num) << 24) | t.off;
t.cborIndef(.map);
t.itemKey(.type);
if (@intFromEnum(itype) >= 0) t.cborHead(.pos, @intCast(@intFromEnum(itype)))
else t.cborHead(.neg, @intCast(-1 - @intFromEnum(itype)));
t.itemKey(.name);
t.cborHead(.bytes, name.len);
@memcpy(t.buf[t.off..][0..name.len], name);
t.off += name.len;
t.itemRef(.prev, prev_item);
return t.itemref;
}
fn itemExt(t: *Thread, stat: *const sink.Stat) void {
if (!main.config.extended) return;
if (stat.ext.pack.hasuid) {
t.itemKey(.uid);
t.cborHead(.pos, stat.ext.uid);
}
if (stat.ext.pack.hasgid) {
t.itemKey(.gid);
t.cborHead(.pos, stat.ext.gid);
}
if (stat.ext.pack.hasmode) {
t.itemKey(.mode);
t.cborHead(.pos, stat.ext.mode);
}
if (stat.ext.pack.hasmtime) {
t.itemKey(.mtime);
t.cborHead(.pos, stat.ext.mtime);
}
}
fn itemEnd(t: *Thread) void {
t.cborIndef(.simple);
}
};
pub const Dir = struct {
// TODO: When items are written out into blocks depth-first, parent dirs
// will end up getting their items distributed over many blocks, which will
// significantly slow down reading that dir's listing. It may be worth
// buffering some items at the Dir level before flushing them out to the
// Thread buffer.
// The lock protects all of the below, and is necessary because final()
// accesses the parent dir and may be called from other threads.
// I'm not expecting much lock contention, but it's possible to turn
// last_item into an atomic integer and other fields could be split up for
// subdir use.
lock: std.Thread.Mutex = .{},
last_sub: ?u64 = null,
stat: sink.Stat,
items: u64 = 0,
size: u64 = 0,
blocks: u64 = 0,
err: bool = false,
suberr: bool = false,
shared_size: u64 = 0,
shared_blocks: u64 = 0,
inodes: Inodes = Inodes.init(main.allocator),
const Inodes = std.AutoHashMap(u64, Inode);
const Inode = struct {
size: u64,
blocks: u64,
nlink: u32,
nfound: u32,
};
pub fn addSpecial(d: *Dir, t: *Thread, name: []const u8, sp: model.EType) void {
d.lock.lock();
defer d.lock.unlock();
d.items += 1;
if (sp == .err) d.suberr = true;
d.last_sub = t.itemStart(sp, d.last_sub, name);
t.itemEnd();
}
pub fn addStat(d: *Dir, t: *Thread, name: []const u8, stat: *const sink.Stat) void {
d.lock.lock();
defer d.lock.unlock();
d.items += 1;
if (stat.etype != .link) {
d.size +|= stat.size;
d.blocks +|= stat.blocks;
}
d.last_sub = t.itemStart(stat.etype, d.last_sub, name);
t.itemKey(.asize);
t.cborHead(.pos, stat.size);
t.itemKey(.dsize);
t.cborHead(.pos, util.blocksToSize(stat.blocks));
if (stat.etype == .link) {
const lnk = d.inodes.getOrPut(stat.ino) catch unreachable;
if (!lnk.found_existing) lnk.value_ptr.* = .{
.size = stat.size,
.blocks = stat.blocks,
.nlink = stat.nlink,
.nfound = 1,
} else lnk.value_ptr.nfound += 1;
t.itemKey(.ino);
t.cborHead(.pos, stat.ino);
t.itemKey(.nlink);
t.cborHead(.pos, stat.nlink);
}
t.itemExt(stat);
t.itemEnd();
}
pub fn addDir(d: *Dir, stat: *const sink.Stat) Dir {
d.lock.lock();
defer d.lock.unlock();
d.items += 1;
d.size +|= stat.size;
d.blocks +|= stat.blocks;
return .{ .stat = stat.* };
}
pub fn setReadError(d: *Dir) void {
d.lock.lock();
defer d.lock.unlock();
d.err = true;
}
// XXX: older JSON exports did not include the nlink count and have
// this field set to '0'. We can deal with that when importing to
// mem_sink, but the hardlink counting algorithm used here really does need
// that information. Current code makes sure to count such links only once
// per dir, but does not count them towards the shared_* fields. That
// behavior is similar to ncdu 1.x, but the difference between memory
// import and this file export might be surprising.
fn countLinks(d: *Dir, parent: ?*Dir) void {
var parent_new: u32 = 0;
var it = d.inodes.iterator();
while (it.next()) |kv| {
const v = kv.value_ptr;
d.size +|= v.size;
d.blocks +|= v.blocks;
if (v.nlink > 1 and v.nfound < v.nlink) {
d.shared_size +|= v.size;
d.shared_blocks +|= v.blocks;
}
const p = parent orelse continue;
// All contained in this dir, no need to keep this entry around
if (v.nlink > 0 and v.nfound >= v.nlink) {
p.size +|= v.size;
p.blocks +|= v.blocks;
_ = d.inodes.remove(kv.key_ptr.*);
} else if (!p.inodes.contains(kv.key_ptr.*))
parent_new += 1;
}
// Merge remaining inodes into parent
const p = parent orelse return;
if (d.inodes.count() == 0) return;
// If parent is empty, just transfer
if (p.inodes.count() == 0) {
p.inodes.deinit();
p.inodes = d.inodes;
d.inodes = Inodes.init(main.allocator); // So we can deinit() without affecting parent
// Otherwise, merge
} else {
p.inodes.ensureUnusedCapacity(parent_new) catch unreachable;
it = d.inodes.iterator();
while (it.next()) |kv| {
const v = kv.value_ptr;
const plnk = p.inodes.getOrPutAssumeCapacity(kv.key_ptr.*);
if (!plnk.found_existing) plnk.value_ptr.* = v.*
else plnk.value_ptr.*.nfound += v.nfound;
}
}
}
pub fn final(d: *Dir, t: *Thread, name: []const u8, parent: ?*Dir) void {
if (parent) |p| p.lock.lock();
defer if (parent) |p| p.lock.unlock();
if (parent) |p| {
// Different dev? Don't merge the 'inodes' sets, just count the
// links here first so the sizes get added to the parent.
if (p.stat.dev != d.stat.dev) d.countLinks(null);
p.items += d.items;
p.size +|= d.size;
p.blocks +|= d.blocks;
if (d.suberr or d.err) p.suberr = true;
// Same dir, merge inodes
if (p.stat.dev == d.stat.dev) d.countLinks(p);
p.last_sub = t.itemStart(.dir, p.last_sub, name);
} else {
d.countLinks(null);
global.root_itemref = t.itemStart(.dir, null, name);
}
d.inodes.deinit();
t.itemKey(.asize);
t.cborHead(.pos, d.stat.size);
t.itemKey(.dsize);
t.cborHead(.pos, util.blocksToSize(d.stat.blocks));
if (parent == null or parent.?.stat.dev != d.stat.dev) {
t.itemKey(.dev);
t.cborHead(.pos, d.stat.dev);
}
if (d.err or d.suberr) {
t.itemKey(.rderr);
t.cborHead(.simple, if (d.err) 21 else 20);
}
t.itemKey(.cumasize);
t.cborHead(.pos, d.size +| d.stat.size);
t.itemKey(.cumdsize);
t.cborHead(.pos, util.blocksToSize(d.blocks +| d.stat.blocks));
if (d.shared_size > 0) {
t.itemKey(.shrasize);
t.cborHead(.pos, d.shared_size);
}
if (d.shared_blocks > 0) {
t.itemKey(.shrdsize);
t.cborHead(.pos, util.blocksToSize(d.shared_blocks));
}
t.itemKey(.items);
t.cborHead(.pos, d.items);
t.itemRef(.sub, d.last_sub);
t.itemExt(&d.stat);
t.itemEnd();
}
};
pub fn createRoot(stat: *const sink.Stat, threads: []sink.Thread) Dir {
for (threads) |*t| {
t.sink.bin.buf = main.allocator.alloc(u8, blockSize(0)) catch unreachable;
}
return .{ .stat = stat.* };
}
pub fn done(threads: []sink.Thread) void {
for (threads) |*t| {
t.sink.bin.flush(0);
main.allocator.free(t.sink.bin.buf);
}
while (std.mem.endsWith(u8, global.index.items, &[1]u8{0}**8))
global.index.shrinkRetainingCapacity(global.index.items.len - 8);
global.index.appendSlice(main.allocator, &bigu64(global.root_itemref)) catch unreachable;
global.index.appendSlice(main.allocator, &blockHeader(1, @intCast(global.index.items.len + 4))) catch unreachable;
global.index.items[0..4].* = blockHeader(1, @intCast(global.index.items.len));
global.fd.writeAll(global.index.items) catch |e|
ui.die("Error writing to file: {s}.\n", .{ ui.errorString(e) });
global.index.clearAndFree(main.allocator);
global.fd.close();
}
pub fn setupOutput(fd: std.fs.File) void {
global.fd = fd;
fd.writeAll(SIGNATURE) catch |e|
ui.die("Error writing to file: {s}.\n", .{ ui.errorString(e) });
global.file_off = 8;
// Placeholder for the index block header.
global.index.appendSlice(main.allocator, "aaaa") catch unreachable;
}

src/bin_reader.zig Normal file
@ -0,0 +1,521 @@
// SPDX-FileCopyrightText: Yorhel <projects@yorhel.nl>
// SPDX-License-Identifier: MIT
const std = @import("std");
const main = @import("main.zig");
const model = @import("model.zig");
const util = @import("util.zig");
const sink = @import("sink.zig");
const ui = @import("ui.zig");
const bin_export = @import("bin_export.zig");
const c = @import("c.zig").c;
const CborMajor = bin_export.CborMajor;
const ItemKey = bin_export.ItemKey;
// Two ways to read a bin export:
//
// 1. Streaming import
// - Read blocks sequentially, assemble items into model.Entry's and stitch
// them together on the go.
// - Does not use the sink.zig API, since sub-level items are read before their parent dirs.
// - Useful when:
// - User attempts to do a refresh or delete while browsing a file through (2)
// - Reading from a stream
//
// 2. Random access browsing
// - Read final block first to get the root item, then have browser.zig fetch
// dir listings from this file.
// - The default reader mode, requires much less memory than (1) and provides
// a snappier first-browsing experience.
//
// The approach from (2) can also be used to walk through the entire directory
// tree and stream it to sink.zig (either for importing or converting to JSON).
// That would allow for better code reuse and low-memory conversion, but
// performance will not be as good as a direct streaming read. Needs
// benchmarks.
//
// This file only implements (2) at the moment.
pub const global = struct {
var fd: std.fs.File = undefined;
var index: []u8 = undefined;
var blocks: [8]Block = [1]Block{.{}}**8;
var counter: u64 = 0;
// Last itemref being read/parsed. This is a hack to provide *some* context on error.
// Providing more context mainly just bloats the binary and decreases
// performance for fairly little benefit. Nobody's going to debug a corrupted export.
var lastitem: ?u64 = null;
};
const Block = struct {
num: u32 = std.math.maxInt(u32),
last: u64 = 0,
data: []u8 = undefined,
};
inline fn bigu16(v: [2]u8) u16 { return std.mem.bigToNative(u16, @bitCast(v)); }
inline fn bigu32(v: [4]u8) u32 { return std.mem.bigToNative(u32, @bitCast(v)); }
inline fn bigu64(v: [8]u8) u64 { return std.mem.bigToNative(u64, @bitCast(v)); }
fn die() noreturn {
@branchHint(.cold);
if (global.lastitem) |e| ui.die("Error reading item {x} from file\n", .{e})
else ui.die("Error reading from file\n", .{});
}
fn readBlock(num: u32) []const u8 {
// Simple linear search, only suitable if we keep the number of in-memory blocks small.
var block: *Block = &global.blocks[0];
for (&global.blocks) |*b| {
if (b.num == num) {
if (b.last != global.counter) {
global.counter += 1;
b.last = global.counter;
}
return b.data;
}
if (block.last > b.last) block = b;
}
if (block.num != std.math.maxInt(u32))
main.allocator.free(block.data);
block.num = num;
global.counter += 1;
block.last = global.counter;
if (num > global.index.len/8 - 1) die();
const offlen = bigu64(global.index[num*8..][0..8].*);
const off = offlen >> 24;
const len = offlen & 0xffffff;
if (len <= 12) die();
// Only read the compressed data part, assume block header, number and footer are correct.
const buf = main.allocator.alloc(u8, @intCast(len - 12)) catch unreachable;
defer main.allocator.free(buf);
const rdlen = global.fd.preadAll(buf, off + 8)
catch |e| ui.die("Error reading from file: {s}\n", .{ui.errorString(e)});
if (rdlen != buf.len) die();
const rawlen = c.ZSTD_getFrameContentSize(buf.ptr, buf.len);
if (rawlen <= 0 or rawlen >= (1<<24)) die();
block.data = main.allocator.alloc(u8, @intCast(rawlen)) catch unreachable;
const res = c.ZSTD_decompress(block.data.ptr, block.data.len, buf.ptr, buf.len);
if (res != block.data.len) ui.die("Error decompressing block {} (expected {} got {})\n", .{ num, block.data.len, res });
return block.data;
}
const CborReader = struct {
buf: []const u8,
fn head(r: *CborReader) CborVal {
if (r.buf.len < 1) die();
var v = CborVal{
.rd = r,
.major = @enumFromInt(r.buf[0] >> 5),
.indef = false,
.arg = 0,
};
switch (r.buf[0] & 0x1f) {
0x00...0x17 => |n| {
v.arg = n;
r.buf = r.buf[1..];
},
0x18 => {
if (r.buf.len < 2) die();
v.arg = r.buf[1];
r.buf = r.buf[2..];
},
0x19 => {
if (r.buf.len < 3) die();
v.arg = bigu16(r.buf[1..3].*);
r.buf = r.buf[3..];
},
0x1a => {
if (r.buf.len < 5) die();
v.arg = bigu32(r.buf[1..5].*);
r.buf = r.buf[5..];
},
0x1b => {
if (r.buf.len < 9) die();
v.arg = bigu64(r.buf[1..9].*);
r.buf = r.buf[9..];
},
0x1f => switch (v.major) {
.bytes, .text, .array, .map, .simple => {
v.indef = true;
r.buf = r.buf[1..];
},
else => die(),
},
else => die(),
}
return v;
}
// Read the next CBOR value, skipping any tags
fn next(r: *CborReader) CborVal {
while (true) {
const v = r.head();
if (v.major != .tag) return v;
}
}
};
const CborVal = struct {
rd: *CborReader,
major: CborMajor,
indef: bool,
arg: u64,
fn end(v: *const CborVal) bool {
return v.major == .simple and v.indef;
}
fn int(v: *const CborVal, T: type) T {
switch (v.major) {
.pos => return std.math.cast(T, v.arg) orelse die(),
.neg => {
if (std.math.minInt(T) == 0) die();
if (v.arg > std.math.maxInt(T)) die();
return -@as(T, @intCast(v.arg)) + (-1);
},
else => die(),
}
}
fn isTrue(v: *const CborVal) bool {
return v.major == .simple and v.arg == 21;
}
// Read either a byte or text string.
// Doesn't validate UTF-8 strings, doesn't support indefinite-length strings.
fn bytes(v: *const CborVal) []const u8 {
if (v.indef or (v.major != .bytes and v.major != .text)) die();
if (v.rd.buf.len < v.arg) die();
defer v.rd.buf = v.rd.buf[@intCast(v.arg)..];
return v.rd.buf[0..@intCast(v.arg)];
}
// Skip current value.
fn skip(v: *const CborVal) void {
// indefinite-length bytes, text, array or map; skip till break marker.
if (v.major != .simple and v.indef) {
while (true) {
const n = v.rd.next();
if (n.end()) return;
n.skip();
}
}
switch (v.major) {
.bytes, .text => {
if (v.rd.buf.len < v.arg) die();
v.rd.buf = v.rd.buf[@intCast(v.arg)..];
},
.array => {
if (v.arg > (1<<24)) die();
for (0..@intCast(v.arg)) |_| v.rd.next().skip();
},
.map => {
if (v.arg > (1<<24)) die();
for (0..@intCast(v.arg*|2)) |_| v.rd.next().skip();
},
else => {},
}
}
fn etype(v: *const CborVal) model.EType {
const n = v.int(i32);
return std.meta.intToEnum(model.EType, n)
catch if (n < 0) .pattern else .nonreg;
}
fn itemref(v: *const CborVal, cur: u64) u64 {
if (v.major == .pos) return v.arg;
if (v.major == .neg) {
if (v.arg >= (cur & 0xffffff)) die();
return cur - v.arg - 1;
}
return die();
}
};
test "CBOR int parsing" {
inline for (.{
.{ .in = "\x00", .t = u1, .exp = 0 },
.{ .in = "\x01", .t = u1, .exp = 1 },
.{ .in = "\x18\x18", .t = u8, .exp = 0x18 },
.{ .in = "\x18\xff", .t = u8, .exp = 0xff },
.{ .in = "\x19\x07\xff", .t = u64, .exp = 0x7ff },
.{ .in = "\x19\xff\xff", .t = u64, .exp = 0xffff },
.{ .in = "\x1a\x00\x01\x00\x00", .t = u64, .exp = 0x10000 },
.{ .in = "\x1b\x7f\xff\xff\xff\xff\xff\xff\xff", .t = i64, .exp = std.math.maxInt(i64) },
.{ .in = "\x1b\xff\xff\xff\xff\xff\xff\xff\xff", .t = u64, .exp = std.math.maxInt(u64) },
.{ .in = "\x1b\xff\xff\xff\xff\xff\xff\xff\xff", .t = i65, .exp = std.math.maxInt(u64) },
.{ .in = "\x20", .t = i1, .exp = -1 },
.{ .in = "\x38\x18", .t = i8, .exp = -0x19 },
.{ .in = "\x39\x01\xf3", .t = i16, .exp = -500 },
.{ .in = "\x3a\xfe\xdc\xba\x97", .t = i33, .exp = -0xfedc_ba98 },
.{ .in = "\x3b\x7f\xff\xff\xff\xff\xff\xff\xff", .t = i64, .exp = std.math.minInt(i64) },
.{ .in = "\x3b\xff\xff\xff\xff\xff\xff\xff\xff", .t = i65, .exp = std.math.minInt(i65) },
}) |t| {
var r = CborReader{.buf = t.in};
try std.testing.expectEqual(@as(t.t, t.exp), r.next().int(t.t));
try std.testing.expectEqual(0, r.buf.len);
}
}
test "CBOR string parsing" {
var r = CborReader{.buf="\x40"};
try std.testing.expectEqualStrings("", r.next().bytes());
r.buf = "\x45\x00\x01\x02\x03\x04x";
try std.testing.expectEqualStrings("\x00\x01\x02\x03\x04", r.next().bytes());
try std.testing.expectEqualStrings("x", r.buf);
r.buf = "\x78\x241234567890abcdefghijklmnopqrstuvwxyz-end";
try std.testing.expectEqualStrings("1234567890abcdefghijklmnopqrstuvwxyz", r.next().bytes());
try std.testing.expectEqualStrings("-end", r.buf);
}
test "CBOR skip parsing" {
inline for (.{
"\x00",
"\x40",
"\x41a",
"\x5f\xff",
"\x5f\x41a\xff",
"\x80",
"\x81\x00",
"\x9f\xff",
"\x9f\x9f\xff\xff",
"\x9f\x9f\x81\x00\xff\xff",
"\xa0",
"\xa1\x00\x01",
"\xbf\xff",
"\xbf\xc0\x00\x9f\xff\xff",
}) |s| {
var r = CborReader{.buf = s ++ "garbage"};
r.next().skip();
try std.testing.expectEqualStrings(r.buf, "garbage");
}
}
const ItemParser = struct {
r: CborReader,
len: ?u64 = null,
const Field = struct {
key: ItemKey,
val: CborVal,
};
fn init(buf: []const u8) ItemParser {
var r = ItemParser{.r = .{.buf = buf}};
const head = r.r.next();
if (head.major != .map) die();
if (!head.indef) r.len = head.arg;
return r;
}
fn key(r: *ItemParser) ?CborVal {
if (r.len) |*l| {
if (l.* == 0) return null;
l.* -= 1;
return r.r.next();
} else {
const v = r.r.next();
return if (v.end()) null else v;
}
}
// Skips over any fields that don't fit into an ItemKey.
fn next(r: *ItemParser) ?Field {
while (r.key()) |k| {
if (k.major == .pos and k.arg <= std.math.maxInt(@typeInfo(ItemKey).@"enum".tag_type)) return .{
.key = @enumFromInt(k.arg),
.val = r.r.next(),
} else {
k.skip();
r.r.next().skip();
}
}
return null;
}
};
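// An itemref packs the data block number in the bits above 24 and the byte
// offset of the item within the uncompressed block in the low 24 bits,
// matching how bin_export.zig constructs them.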
// Returned buffer is valid until the next readItem().
fn readItem(ref: u64) ItemParser {
global.lastitem = ref;
if (ref >= (1 << (24 + 32))) die();
const block = readBlock(@intCast(ref >> 24));
if ((ref & 0xffffff) >= block.len) die();
return ItemParser.init(block[@intCast(ref & 0xffffff)..]);
}
const Import = struct {
sink: *sink.Thread,
stat: sink.Stat = .{},
fields: Fields = .{},
p: ItemParser = undefined,
const Fields = struct {
name: []const u8 = "",
rderr: bool = false,
prev: ?u64 = null,
sub: ?u64 = null,
};
fn readFields(ctx: *Import, ref: u64) void {
ctx.p = readItem(ref);
var hastype = false;
while (ctx.p.next()) |kv| switch (kv.key) {
.type => {
ctx.stat.etype = kv.val.etype();
hastype = true;
},
.name => ctx.fields.name = kv.val.bytes(),
.prev => ctx.fields.prev = kv.val.itemref(ref),
.asize => ctx.stat.size = kv.val.int(u64),
.dsize => ctx.stat.blocks = @intCast(kv.val.int(u64)/512),
.dev => ctx.stat.dev = kv.val.int(u64),
.rderr => ctx.fields.rderr = kv.val.isTrue(),
.sub => ctx.fields.sub = kv.val.itemref(ref),
.ino => ctx.stat.ino = kv.val.int(u64),
.nlink => ctx.stat.nlink = kv.val.int(u31),
.uid => { ctx.stat.ext.uid = kv.val.int(u32); ctx.stat.ext.pack.hasuid = true; },
.gid => { ctx.stat.ext.gid = kv.val.int(u32); ctx.stat.ext.pack.hasgid = true; },
.mode => { ctx.stat.ext.mode = kv.val.int(u16); ctx.stat.ext.pack.hasmode = true; },
.mtime => { ctx.stat.ext.mtime = kv.val.int(u64); ctx.stat.ext.pack.hasmtime = true; },
else => kv.val.skip(),
};
if (!hastype) die();
if (ctx.fields.name.len == 0) die();
}
fn import(ctx: *Import, ref: u64, parent: ?*sink.Dir, dev: u64) void {
ctx.stat = .{ .dev = dev };
ctx.fields = .{};
ctx.readFields(ref);
if (ctx.stat.etype == .dir) {
const prev = ctx.fields.prev;
const dir =
if (parent) |d| d.addDir(ctx.sink, ctx.fields.name, &ctx.stat)
else sink.createRoot(ctx.fields.name, &ctx.stat);
ctx.sink.setDir(dir);
if (ctx.fields.rderr) dir.setReadError(ctx.sink);
ctx.fields.prev = ctx.fields.sub;
while (ctx.fields.prev) |n| ctx.import(n, dir, ctx.stat.dev);
ctx.sink.setDir(parent);
dir.unref(ctx.sink);
ctx.fields.prev = prev;
} else {
const p = parent orelse die();
if (@intFromEnum(ctx.stat.etype) < 0)
p.addSpecial(ctx.sink, ctx.fields.name, ctx.stat.etype)
else
p.addStat(ctx.sink, ctx.fields.name, &ctx.stat);
}
if ((ctx.sink.files_seen.load(.monotonic) & 65) == 0)
main.handleEvent(false, false);
}
};
// Resolve an itemref and return a newly allocated entry.
// Dir.parent and Link.next/prev are left uninitialized.
pub fn get(ref: u64, alloc: std.mem.Allocator) *model.Entry {
const parser = readItem(ref);
var etype: ?model.EType = null;
var name: []const u8 = "";
var p = parser;
var ext = model.Ext{};
while (p.next()) |kv| {
switch (kv.key) {
.type => etype = kv.val.etype(),
.name => name = kv.val.bytes(),
.uid => { ext.uid = kv.val.int(u32); ext.pack.hasuid = true; },
.gid => { ext.gid = kv.val.int(u32); ext.pack.hasgid = true; },
.mode => { ext.mode = kv.val.int(u16); ext.pack.hasmode = true; },
.mtime => { ext.mtime = kv.val.int(u64); ext.pack.hasmtime = true; },
else => kv.val.skip(),
}
}
if (etype == null or name.len == 0) die();
var entry = model.Entry.create(alloc, etype.?, main.config.extended and !ext.isEmpty(), name);
entry.next = .{ .ref = std.math.maxInt(u64) };
if (entry.ext()) |e| e.* = ext;
if (entry.dir()) |d| d.sub = .{ .ref = std.math.maxInt(u64) };
p = parser;
while (p.next()) |kv| switch (kv.key) {
.prev => entry.next = .{ .ref = kv.val.itemref(ref) },
.asize => { if (entry.pack.etype != .dir) entry.size = kv.val.int(u64); },
.dsize => { if (entry.pack.etype != .dir) entry.pack.blocks = @intCast(kv.val.int(u64)/512); },
.rderr => { if (entry.dir()) |d| {
if (kv.val.isTrue()) d.pack.err = true
else d.pack.suberr = true;
} },
.dev => { if (entry.dir()) |d| d.pack.dev = model.devices.getId(kv.val.int(u64)); },
.cumasize => entry.size = kv.val.int(u64),
.cumdsize => entry.pack.blocks = @intCast(kv.val.int(u64)/512),
.shrasize => { if (entry.dir()) |d| d.shared_size = kv.val.int(u64); },
.shrdsize => { if (entry.dir()) |d| d.shared_blocks = kv.val.int(u64)/512; },
.items => { if (entry.dir()) |d| d.items = util.castClamp(u32, kv.val.int(u64)); },
.sub => { if (entry.dir()) |d| d.sub = .{ .ref = kv.val.itemref(ref) }; },
.ino => { if (entry.link()) |l| l.ino = kv.val.int(u64); },
.nlink => { if (entry.link()) |l| l.pack.nlink = kv.val.int(u31); },
else => kv.val.skip(),
};
return entry;
}
pub fn getRoot() u64 {
return bigu64(global.index[global.index.len-8..][0..8].*);
}
// Walk through the directory tree in depth-first order and pass results to sink.zig.
// Depth-first is required for JSON export, but more efficient strategies are
// possible for other sinks. Parallel import is also an option, but that's more
// complex and likely less efficient than a streaming import.
pub fn import() void {
const sink_threads = sink.createThreads(1);
var ctx = Import{.sink = &sink_threads[0]};
ctx.import(getRoot(), null, 0);
sink.done();
}
// Assumes that the file signature has already been read and validated.
pub fn open(fd: std.fs.File) !void {
global.fd = fd;
// Do not use fd.getEndPos() because that requires newer kernels supporting statx() #261.
try fd.seekFromEnd(0);
const size = try fd.getPos();
if (size < 16) return error.EndOfStream;
// Read index block
var buf: [4]u8 = undefined;
if (try fd.preadAll(&buf, size - 4) != 4) return error.EndOfStream;
const index_header = bigu32(buf);
if ((index_header >> 28) != 1 or (index_header & 7) != 0) die();
const len = (index_header & 0x0fffffff) - 8; // excluding block header & footer
if (len >= size) die();
global.index = main.allocator.alloc(u8, len) catch unreachable;
if (try fd.preadAll(global.index, size - len - 4) != global.index.len) return error.EndOfStream;
}


@ -1,565 +0,0 @@
/* ncdu - NCurses Disk Usage
Copyright (c) 2007-2020 Yoran Heling
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
#include "global.h"
#include <string.h>
#include <stdlib.h>
#include <ncurses.h>
#include <time.h>
static int graph = 1, show_as = 0, info_show = 0, info_page = 0, info_start = 0, show_items = 0, show_mtime = 0;
static char *message = NULL;
static void browse_draw_info(struct dir *dr) {
struct dir *t;
struct dir_ext *e = dir_ext_ptr(dr);
char mbuf[46];
int i;
nccreate(11, 60, "Item info");
if(dr->hlnk) {
nctab(41, info_page == 0, 1, "Info");
nctab(50, info_page == 1, 2, "Links");
}
switch(info_page) {
case 0:
attron(A_BOLD);
ncaddstr(2, 3, "Name:");
ncaddstr(3, 3, "Path:");
if(!e)
ncaddstr(4, 3, "Type:");
else {
ncaddstr(4, 3, "Mode:");
ncaddstr(4, 21, "UID:");
ncaddstr(4, 33, "GID:");
ncaddstr(5, 3, "Last modified:");
}
ncaddstr(6, 3, " Disk usage:");
ncaddstr(7, 3, "Apparent size:");
attroff(A_BOLD);
ncaddstr(2, 9, cropstr(dr->name, 49));
ncaddstr(3, 9, cropstr(getpath(dr->parent), 49));
ncaddstr(4, 9, dr->flags & FF_DIR ? "Directory" : dr->flags & FF_FILE ? "File" : "Other");
if(e) {
ncaddstr(4, 9, fmtmode(e->mode));
ncprint(4, 26, "%d", e->uid);
ncprint(4, 38, "%d", e->gid);
time_t t = (time_t)e->mtime;
strftime(mbuf, sizeof(mbuf), "%Y-%m-%d %H:%M:%S %z", localtime(&t));
ncaddstr(5, 18, mbuf);
}
ncmove(6, 18);
printsize(UIC_DEFAULT, dr->size);
addstrc(UIC_DEFAULT, " (");
addstrc(UIC_NUM, fullsize(dr->size));
addstrc(UIC_DEFAULT, " B)");
ncmove(7, 18);
printsize(UIC_DEFAULT, dr->asize);
addstrc(UIC_DEFAULT, " (");
addstrc(UIC_NUM, fullsize(dr->asize));
addstrc(UIC_DEFAULT, " B)");
break;
case 1:
for(i=0,t=dr->hlnk; t!=dr; t=t->hlnk,i++) {
if(info_start > i)
continue;
if(i-info_start > 5)
break;
ncaddstr(2+i-info_start, 3, cropstr(getpath(t), 54));
}
if(t!=dr)
ncaddstr(8, 25, "-- more --");
break;
}
ncaddstr(9, 31, "Press ");
addchc(UIC_KEY, 'i');
addstrc(UIC_DEFAULT, " to hide this window");
}
static void browse_draw_flag(struct dir *n, int *x) {
addchc(n->flags & FF_BSEL ? UIC_FLAG_SEL : UIC_FLAG,
n == dirlist_parent ? ' ' :
n->flags & FF_EXL ? '<' :
n->flags & FF_ERR ? '!' :
n->flags & FF_SERR ? '.' :
n->flags & FF_OTHFS ? '>' :
n->flags & FF_HLNKC ? 'H' :
!(n->flags & FF_FILE
|| n->flags & FF_DIR) ? '@' :
n->flags & FF_DIR
&& n->sub == NULL ? 'e' :
' ');
*x += 2;
}
static void browse_draw_graph(struct dir *n, int *x) {
float pc = 0.0f;
int o, i;
enum ui_coltype c = n->flags & FF_BSEL ? UIC_SEL : UIC_DEFAULT;
if(!graph)
return;
*x += graph == 1 ? 13 : graph == 2 ? 9 : 20;
if(n == dirlist_parent)
return;
addchc(c, '[');
/* percentage (6 columns) */
if(graph == 2 || graph == 3) {
pc = (float)(show_as ? n->parent->asize : n->parent->size);
if(pc < 1)
pc = 1.0f;
uic_set(c == UIC_SEL ? UIC_NUM_SEL : UIC_NUM);
printw("%5.1f", ((float)(show_as ? n->asize : n->size) / pc) * 100.0f);
addchc(c, '%');
}
if(graph == 3)
addch(' ');
/* graph (10 columns) */
if(graph == 1 || graph == 3) {
uic_set(c == UIC_SEL ? UIC_GRAPH_SEL : UIC_GRAPH);
o = (int)(10.0f*(float)(show_as ? n->asize : n->size) / (float)(show_as ? dirlist_maxa : dirlist_maxs));
for(i=0; i<10; i++)
addch(i < o ? '#' : ' ');
}
addchc(c, ']');
}
static void browse_draw_items(struct dir *n, int *x) {
enum ui_coltype c = n->flags & FF_BSEL ? UIC_SEL : UIC_DEFAULT;
enum ui_coltype cn = c == UIC_SEL ? UIC_NUM_SEL : UIC_NUM;
if(!show_items)
return;
*x += 7;
if(!n->items)
return;
else if(n->items < 100*1000) {
uic_set(cn);
printw("%6s", fullsize(n->items));
} else if(n->items < 1000*1000) {
uic_set(cn);
printw("%5.1f", n->items / 1000.0);
addstrc(c, "k");
} else if(n->items < 1000*1000*1000) {
uic_set(cn);
printw("%5.1f", n->items / 1e6);
addstrc(c, "M");
} else {
addstrc(c, " > ");
addstrc(cn, "1");
addchc(c, 'B');
}
}
static void browse_draw_mtime(struct dir *n, int *x) {
enum ui_coltype c = n->flags & FF_BSEL ? UIC_SEL : UIC_DEFAULT;
char mbuf[26];
struct dir_ext *e;
time_t t;
if (n->flags & FF_EXT) {
e = dir_ext_ptr(n);
} else if (!strcmp(n->name, "..") && (n->parent->flags & FF_EXT)) {
e = dir_ext_ptr(n->parent);
} else {
snprintf(mbuf, sizeof(mbuf), "no mtime");
goto no_mtime;
}
t = (time_t)e->mtime;
strftime(mbuf, sizeof(mbuf), "%Y-%m-%d %H:%M:%S %z", localtime(&t));
uic_set(c == UIC_SEL ? UIC_NUM_SEL : UIC_NUM);
no_mtime:
printw("%26s", mbuf);
*x += 27;
}
static void browse_draw_item(struct dir *n, int row) {
int x = 0;
enum ui_coltype c = n->flags & FF_BSEL ? UIC_SEL : UIC_DEFAULT;
uic_set(c);
mvhline(row, 0, ' ', wincols);
move(row, 0);
browse_draw_flag(n, &x);
move(row, x);
if(n != dirlist_parent)
printsize(c, show_as ? n->asize : n->size);
x += 10;
move(row, x);
browse_draw_graph(n, &x);
move(row, x);
browse_draw_items(n, &x);
move(row, x);
if (extended_info && show_mtime) {
browse_draw_mtime(n, &x);
move(row, x);
}
if(n->flags & FF_DIR)
c = c == UIC_SEL ? UIC_DIR_SEL : UIC_DIR;
addchc(c, n->flags & FF_DIR ? '/' : ' ');
addstrc(c, cropstr(n->name, wincols-x-1));
}
void browse_draw() {
struct dir *t;
char *tmp;
int selected = 0, i;
erase();
t = dirlist_get(0);
/* top line - basic info */
uic_set(UIC_HD);
mvhline(0, 0, ' ', wincols);
mvprintw(0,0,"%s %s ~ Use the arrow keys to navigate, press ", PACKAGE_NAME, PACKAGE_VERSION);
addchc(UIC_KEY_HD, '?');
addstrc(UIC_HD, " for help");
if(dir_import_active)
mvaddstr(0, wincols-10, "[imported]");
else if(read_only)
mvaddstr(0, wincols-11, "[read-only]");
/* second line - the path */
mvhlinec(UIC_DEFAULT, 1, 0, '-', wincols);
if(dirlist_par) {
mvaddchc(UIC_DEFAULT, 1, 3, ' ');
tmp = getpath(dirlist_par);
mvaddstrc(UIC_DIR, 1, 4, cropstr(tmp, wincols-8));
mvaddchc(UIC_DEFAULT, 1, 4+((int)strlen(tmp) > wincols-8 ? wincols-8 : (int)strlen(tmp)), ' ');
}
/* bottom line - stats */
uic_set(UIC_HD);
mvhline(winrows-1, 0, ' ', wincols);
if(t) {
mvaddstr(winrows-1, 0, " Total disk usage: ");
printsize(UIC_HD, t->parent->size);
addstrc(UIC_HD, " Apparent size: ");
uic_set(UIC_NUM_HD);
printsize(UIC_HD, t->parent->asize);
addstrc(UIC_HD, " Items: ");
uic_set(UIC_NUM_HD);
printw("%d", t->parent->items);
} else
mvaddstr(winrows-1, 0, " No items to display.");
uic_set(UIC_DEFAULT);
/* nothing to display? stop here. */
if(!t)
return;
/* get start position */
t = dirlist_top(0);
/* print the list to the screen */
for(i=0; t && i<winrows-3; t=dirlist_next(t),i++) {
browse_draw_item(t, 2+i);
/* save the selected row number for later */
if(t->flags & FF_BSEL)
selected = i;
}
/* draw message window */
if(message) {
nccreate(6, 60, "Message");
ncaddstr(2, 2, message);
ncaddstr(4, 34, "Press any key to continue");
}
/* draw information window */
t = dirlist_get(0);
if(!message && info_show && t != dirlist_parent)
browse_draw_info(t);
/* move cursor to selected row for accessibility */
move(selected+2, 0);
}
int browse_key(int ch) {
struct dir *t, *sel;
int i, catch = 0;
/* message window overwrites all keys */
if(message) {
message = NULL;
return 0;
}
sel = dirlist_get(0);
/* info window overwrites a few keys */
if(info_show && sel)
switch(ch) {
case '1':
info_page = 0;
break;
case '2':
if(sel->hlnk)
info_page = 1;
break;
case KEY_RIGHT:
case 'l':
if(sel->hlnk) {
info_page = 1;
catch++;
}
break;
case KEY_LEFT:
case 'h':
if(sel->hlnk) {
info_page = 0;
catch++;
}
break;
case KEY_UP:
case 'k':
if(sel->hlnk && info_page == 1) {
if(info_start > 0)
info_start--;
catch++;
}
break;
case KEY_DOWN:
case 'j':
case ' ':
if(sel->hlnk && info_page == 1) {
for(i=0,t=sel->hlnk; t!=sel; t=t->hlnk)
i++;
if(i > info_start+6)
info_start++;
catch++;
}
break;
}
if(!catch)
switch(ch) {
/* selecting items */
case KEY_UP:
case 'k':
dirlist_select(dirlist_get(-1));
dirlist_top(-1);
info_start = 0;
break;
case KEY_DOWN:
case 'j':
dirlist_select(dirlist_get(1));
dirlist_top(1);
info_start = 0;
break;
case KEY_HOME:
dirlist_select(dirlist_next(NULL));
dirlist_top(2);
info_start = 0;
break;
case KEY_LL:
case KEY_END:
dirlist_select(dirlist_get(1<<30));
dirlist_top(1);
info_start = 0;
break;
case KEY_PPAGE:
dirlist_select(dirlist_get(-1*(winrows-3)));
dirlist_top(-1);
info_start = 0;
break;
case KEY_NPAGE:
dirlist_select(dirlist_get(winrows-3));
dirlist_top(1);
info_start = 0;
break;
/* sorting items */
case 'n':
dirlist_set_sort(DL_COL_NAME, dirlist_sort_col == DL_COL_NAME ? !dirlist_sort_desc : 0, DL_NOCHANGE);
info_show = 0;
break;
case 's':
i = show_as ? DL_COL_ASIZE : DL_COL_SIZE;
dirlist_set_sort(i, dirlist_sort_col == i ? !dirlist_sort_desc : 1, DL_NOCHANGE);
info_show = 0;
break;
case 'C':
dirlist_set_sort(DL_COL_ITEMS, dirlist_sort_col == DL_COL_ITEMS ? !dirlist_sort_desc : 1, DL_NOCHANGE);
info_show = 0;
break;
case 'M':
if (extended_info) {
dirlist_set_sort(DL_COL_MTIME, dirlist_sort_col == DL_COL_MTIME ? !dirlist_sort_desc : 1, DL_NOCHANGE);
info_show = 0;
}
break;
case 'e':
dirlist_set_hidden(!dirlist_hidden);
info_show = 0;
break;
case 't':
dirlist_set_sort(DL_NOCHANGE, DL_NOCHANGE, !dirlist_sort_df);
info_show = 0;
break;
case 'a':
show_as = !show_as;
if(dirlist_sort_col == DL_COL_ASIZE || dirlist_sort_col == DL_COL_SIZE)
dirlist_set_sort(show_as ? DL_COL_ASIZE : DL_COL_SIZE, DL_NOCHANGE, DL_NOCHANGE);
info_show = 0;
break;
/* browsing */
case 10:
case KEY_RIGHT:
case 'l':
if(sel != NULL && sel->flags & FF_DIR) {
dirlist_open(sel == dirlist_parent ? dirlist_par->parent : sel);
dirlist_top(-3);
}
info_show = 0;
break;
case KEY_LEFT:
case KEY_BACKSPACE:
case 'h':
case '<':
if(dirlist_par && dirlist_par->parent != NULL) {
dirlist_open(dirlist_par->parent);
dirlist_top(-3);
}
info_show = 0;
break;
/* and other stuff */
case 'r':
if(dir_import_active) {
message = "Directory imported from file, won't refresh.";
break;
}
if(dirlist_par) {
dir_ui = 2;
dir_mem_init(dirlist_par);
dir_scan_init(getpath(dirlist_par));
}
info_show = 0;
break;
case 'q':
if(info_show)
info_show = 0;
else
if (confirm_quit)
quit_init();
else return 1;
break;
case 'g':
if(++graph > 3)
graph = 0;
info_show = 0;
break;
case 'c':
show_items = !show_items;
break;
case 'm':
if (extended_info)
show_mtime = !show_mtime;
break;
case 'i':
info_show = !info_show;
break;
case '?':
help_init();
info_show = 0;
break;
case 'd':
if(read_only >= 1 || dir_import_active) {
message = read_only >= 1
? "File deletion disabled in read-only mode."
: "File deletion not available for imported directories.";
break;
}
if(sel == NULL || sel == dirlist_parent)
break;
info_show = 0;
if((t = dirlist_get(1)) == sel)
if((t = dirlist_get(-1)) == sel || t == dirlist_parent)
t = NULL;
delete_init(sel, t);
break;
case 'b':
if(read_only >= 2 || dir_import_active) {
message = read_only >= 2
? "Shell feature disabled in read-only mode."
: "Shell feature not available for imported directories.";
break;
}
shell_init();
break;
}
/* make sure the info_* options are correct */
sel = dirlist_get(0);
if(!info_show || sel == dirlist_parent)
info_show = info_page = info_start = 0;
else if(sel && !sel->hlnk)
info_page = info_start = 0;
return 0;
}
void browse_init(struct dir *par) {
pstate = ST_BROWSE;
message = NULL;
dirlist_open(par);
}


@@ -1,37 +0,0 @@
/* ncdu - NCurses Disk Usage
Copyright (c) 2007-2020 Yoran Heling
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
#ifndef _browser_h
#define _browser_h
#include "global.h"
int browse_key(int);
void browse_draw(void);
void browse_init(struct dir *);
#endif

src/browser.zig (new file, 1061 lines)

File diff suppressed because it is too large.

src/c.zig (new file, 20 lines)

@@ -0,0 +1,20 @@
// SPDX-FileCopyrightText: Yorhel <projects@yorhel.nl>
// SPDX-License-Identifier: MIT
pub const c = @cImport({
@cDefine("_XOPEN_SOURCE", "1"); // for wcwidth()
@cInclude("stdio.h"); // fopen(), used to initialize ncurses
@cInclude("string.h"); // strerror()
@cInclude("time.h"); // strftime()
@cInclude("wchar.h"); // wcwidth()
@cInclude("locale.h"); // setlocale() and localeconv()
@cInclude("fnmatch.h"); // fnmatch()
@cInclude("unistd.h"); // getuid()
@cInclude("sys/types.h"); // struct passwd
@cInclude("pwd.h"); // getpwnam(), getpwuid()
if (@import("builtin").os.tag == .linux) {
@cInclude("sys/vfs.h"); // statfs()
}
@cInclude("curses.h");
@cInclude("zstd.h");
});


@@ -1,253 +0,0 @@
/* ncdu - NCurses Disk Usage
Copyright (c) 2007-2020 Yoran Heling
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
#include "global.h"
#include <string.h>
#include <errno.h>
#include <unistd.h>
#define DS_CONFIRM 0
#define DS_PROGRESS 1
#define DS_FAILED 2
static struct dir *root, *nextsel, *curdir;
static char noconfirm = 0, ignoreerr = 0, state;
static signed char seloption;
static int lasterrno;
static void delete_draw_confirm() {
nccreate(6, 60, "Confirm delete");
ncprint(1, 2, "Are you sure you want to delete \"%s\"%c",
cropstr(root->name, 21), root->flags & FF_DIR ? ' ' : '?');
if(root->flags & FF_DIR && root->sub != NULL)
ncprint(2, 18, "and all of its contents?");
if(seloption == 0)
attron(A_REVERSE);
ncaddstr(4, 15, "yes");
attroff(A_REVERSE);
if(seloption == 1)
attron(A_REVERSE);
ncaddstr(4, 24, "no");
attroff(A_REVERSE);
if(seloption == 2)
attron(A_REVERSE);
ncaddstr(4, 31, "don't ask me again");
attroff(A_REVERSE);
ncmove(4, seloption == 0 ? 15 : seloption == 1 ? 24 : 31);
}
static void delete_draw_progress() {
nccreate(6, 60, "Deleting...");
ncaddstr(1, 2, cropstr(getpath(curdir), 47));
ncaddstr(4, 41, "Press ");
addchc(UIC_KEY, 'q');
addstrc(UIC_DEFAULT, " to abort");
}
static void delete_draw_error() {
nccreate(6, 60, "Error!");
ncprint(1, 2, "Can't delete %s:", cropstr(getpath(curdir), 42));
ncaddstr(2, 4, strerror(lasterrno));
if(seloption == 0)
attron(A_REVERSE);
ncaddstr(4, 14, "abort");
attroff(A_REVERSE);
if(seloption == 1)
attron(A_REVERSE);
ncaddstr(4, 23, "ignore");
attroff(A_REVERSE);
if(seloption == 2)
attron(A_REVERSE);
ncaddstr(4, 33, "ignore all");
attroff(A_REVERSE);
}
void delete_draw() {
browse_draw();
switch(state) {
case DS_CONFIRM: delete_draw_confirm(); break;
case DS_PROGRESS: delete_draw_progress(); break;
case DS_FAILED: delete_draw_error(); break;
}
}
int delete_key(int ch) {
/* confirm */
if(state == DS_CONFIRM)
switch(ch) {
case KEY_LEFT:
case 'h':
if(--seloption < 0)
seloption = 0;
break;
case KEY_RIGHT:
case 'l':
if(++seloption > 2)
seloption = 2;
break;
case '\n':
if(seloption == 1)
return 1;
if(seloption == 2)
noconfirm++;
state = DS_PROGRESS;
break;
case 'q':
return 1;
}
/* processing deletion */
else if(state == DS_PROGRESS)
switch(ch) {
case 'q':
return 1;
}
/* error */
else if(state == DS_FAILED)
switch(ch) {
case KEY_LEFT:
case 'h':
if(--seloption < 0)
seloption = 0;
break;
case KEY_RIGHT:
case 'l':
if(++seloption > 2)
seloption = 2;
break;
case 10:
if(seloption == 0)
return 1;
if(seloption == 2)
ignoreerr++;
state = DS_PROGRESS;
break;
case 'q':
return 1;
}
return 0;
}
static int delete_dir(struct dir *dr) {
struct dir *nxt, *cur;
int r;
/* check for input or screen resizes */
curdir = dr;
if(input_handle(1))
return 1;
/* do the actual deleting */
if(dr->flags & FF_DIR) {
if((r = chdir(dr->name)) < 0)
goto delete_nxt;
if(dr->sub != NULL) {
nxt = dr->sub;
while(nxt != NULL) {
cur = nxt;
nxt = cur->next;
if(delete_dir(cur))
return 1;
}
}
if((r = chdir("..")) < 0)
goto delete_nxt;
r = dr->sub == NULL ? rmdir(dr->name) : 0;
} else
r = unlink(dr->name);
delete_nxt:
/* error occurred, ask user what to do */
if(r == -1 && !ignoreerr) {
state = DS_FAILED;
lasterrno = errno;
curdir = dr;
while(state == DS_FAILED)
if(input_handle(0))
return 1;
} else if(!(dr->flags & FF_DIR && dr->sub != NULL)) {
freedir(dr);
return 0;
}
return root == dr ? 1 : 0;
}
void delete_process() {
struct dir *par;
/* confirm */
seloption = 1;
while(state == DS_CONFIRM && !noconfirm)
if(input_handle(0)) {
browse_init(root->parent);
return;
}
/* chdir */
if(path_chdir(getpath(root->parent)) < 0) {
state = DS_FAILED;
lasterrno = errno;
while(state == DS_FAILED)
if(input_handle(0))
return;
}
/* delete */
seloption = 0;
state = DS_PROGRESS;
par = root->parent;
delete_dir(root);
if(nextsel)
nextsel->flags |= FF_BSEL;
browse_init(par);
if(nextsel)
dirlist_top(-4);
}
void delete_init(struct dir *dr, struct dir *s) {
state = DS_CONFIRM;
root = curdir = dr;
pstate = ST_DEL;
nextsel = s;
}


@@ -1,37 +0,0 @@
/* ncdu - NCurses Disk Usage
Copyright (c) 2007-2020 Yoran Heling
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
#ifndef _delete_h
#define _delete_h
#include "global.h"
void delete_process(void);
int delete_key(int);
void delete_draw(void);
void delete_init(struct dir *, struct dir *);
#endif

src/delete.zig (new file, 301 lines)

@@ -0,0 +1,301 @@
// SPDX-FileCopyrightText: Yorhel <projects@yorhel.nl>
// SPDX-License-Identifier: MIT
const std = @import("std");
const main = @import("main.zig");
const model = @import("model.zig");
const ui = @import("ui.zig");
const browser = @import("browser.zig");
const scan = @import("scan.zig");
const sink = @import("sink.zig");
const mem_sink = @import("mem_sink.zig");
const util = @import("util.zig");
const c = @import("c.zig").c;
var parent: *model.Dir = undefined;
var entry: *model.Entry = undefined;
var next_sel: ?*model.Entry = undefined; // Which item to select if deletion succeeds
var state: enum { confirm, busy, err } = .confirm;
var confirm: enum { yes, no, ignore } = .no;
var error_option: enum { abort, ignore, all } = .abort;
var error_code: anyerror = undefined;
pub fn setup(p: *model.Dir, e: *model.Entry, n: ?*model.Entry) void {
parent = p;
entry = e;
next_sel = n;
state = if (main.config.confirm_delete) .confirm else .busy;
confirm = .no;
}
// Returns true to abort scanning.
fn err(e: anyerror) bool {
if (main.config.ignore_delete_errors)
return false;
error_code = e;
state = .err;
while (main.state == .delete and state == .err)
main.handleEvent(true, false);
return main.state != .delete;
}
fn deleteItem(dir: std.fs.Dir, path: [:0]const u8, ptr: *align(1) ?*model.Entry) bool {
entry = ptr.*.?;
main.handleEvent(false, false);
if (main.state != .delete)
return true;
if (entry.dir()) |d| {
var fd = dir.openDirZ(path, .{ .no_follow = true, .iterate = false }) catch |e| return err(e);
var it = &d.sub.ptr;
parent = d;
defer parent = parent.parent.?;
while (it.*) |n| {
if (deleteItem(fd, n.name(), it)) {
fd.close();
return true;
}
if (it.* == n) // item deletion failed, make sure to still advance to next
it = &n.next.ptr;
}
fd.close();
dir.deleteDirZ(path) catch |e|
return if (e != error.DirNotEmpty or d.sub.ptr == null) err(e) else false;
} else
dir.deleteFileZ(path) catch |e| return err(e);
ptr.*.?.zeroStats(parent);
ptr.* = ptr.*.?.next.ptr;
return false;
}
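
The pointer-to-next-pointer iteration used by deleteItem() above (it = &d.sub.ptr, advancing only when the entry was not removed) is the classic singly-linked-list removal idiom. A self-contained C toy of the same pattern, with invented names and data:

/* Unlink nodes in place while iterating, without tracking a "previous" node. */
#include <stdio.h>

struct node { int value; struct node *next; };

int main(void) {
    struct node c = { 3, NULL }, b = { 2, &c }, a = { 1, &b };
    struct node *head = &a;                     /* list: 1 -> 2 -> 3 */

    struct node **it = &head;
    while (*it) {
        struct node *n = *it;
        if (n->value % 2 == 0)
            *it = n->next;       /* "deleted": unlink, *it now points past it */
        else
            it = &n->next;       /* kept: advance to this node's next pointer */
    }
    for (struct node *n = head; n; n = n->next)
        printf("%d ", n->value); /* prints: 1 3 */
    printf("\n");
    return 0;
}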
// Returns true if the item has been deleted successfully.
fn deleteCmd(path: [:0]const u8, ptr: *align(1) ?*model.Entry) bool {
{
var env = std.process.getEnvMap(main.allocator) catch unreachable;
defer env.deinit();
env.put("NCDU_DELETE_PATH", path) catch unreachable;
// Since we're passing the path as an environment variable and go through
// the shell anyway, we can refer to the variable and avoid error-prone
// shell escaping.
const cmd = std.fmt.allocPrint(main.allocator, "{s} \"$NCDU_DELETE_PATH\"", .{main.config.delete_command}) catch unreachable;
defer main.allocator.free(cmd);
ui.runCmd(&.{"/bin/sh", "-c", cmd}, null, &env, true);
}
const stat = scan.statAt(std.fs.cwd(), path, false, null) catch {
// Stat failed. Would be nice to display an error if it's not
// 'FileNotFound', but w/e, let's just assume the item has been
// deleted as expected.
ptr.*.?.zeroStats(parent);
ptr.* = ptr.*.?.next.ptr;
return true;
};
// If either old or new entry is not a dir, remove & re-add entry in the in-memory tree.
if (ptr.*.?.pack.etype != .dir or stat.etype != .dir) {
ptr.*.?.zeroStats(parent);
const e = model.Entry.create(main.allocator, stat.etype, main.config.extended and !stat.ext.isEmpty(), ptr.*.?.name());
e.next.ptr = ptr.*.?.next.ptr;
mem_sink.statToEntry(&stat, e, parent);
ptr.* = e;
var it : ?*model.Dir = parent;
while (it) |p| : (it = p.parent) {
if (stat.etype != .link) {
p.entry.pack.blocks +|= e.pack.blocks;
p.entry.size +|= e.size;
}
p.items +|= 1;
}
}
// If new entry is a dir, recursively scan.
if (ptr.*.?.dir()) |d| {
main.state = .refresh;
sink.global.sink = .mem;
mem_sink.global.root = d;
}
return false;
}
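
The environment-variable trick in deleteCmd() above avoids shell escaping entirely: the path never appears inside the command string, only a reference to $NCDU_DELETE_PATH does, and the shell expands it verbatim. A rough standalone C equivalent (command and path are placeholders, not ncdu's actual defaults):

/* Run a user-supplied command on an arbitrary path without quoting the path. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const char *delete_command = "ls -ld";            /* stand-in for --delete-command */
    const char *path = "/tmp/name with \"quotes\"";   /* arbitrary, possibly hostile */

    if (setenv("NCDU_DELETE_PATH", path, 1) != 0)
        return 1;

    char cmd[4096];
    snprintf(cmd, sizeof(cmd), "%s \"$NCDU_DELETE_PATH\"", delete_command);
    /* system() runs through /bin/sh, which expands the variable reference. */
    return system(cmd) == -1 ? 1 : 0;
}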
// Returns the item that should be selected in the browser.
pub fn delete() ?*model.Entry {
while (main.state == .delete and state == .confirm)
main.handleEvent(true, false);
if (main.state != .delete)
return entry;
// Find the pointer to this entry
const e = entry;
var it = &parent.sub.ptr;
while (it.*) |n| : (it = &n.next.ptr)
if (it.* == entry)
break;
var path: std.ArrayListUnmanaged(u8) = .empty;
defer path.deinit(main.allocator);
parent.fmtPath(main.allocator, true, &path);
if (path.items.len == 0 or path.items[path.items.len-1] != '/')
path.append(main.allocator, '/') catch unreachable;
path.appendSlice(main.allocator, entry.name()) catch unreachable;
if (main.config.delete_command.len == 0) {
_ = deleteItem(std.fs.cwd(), util.arrayListBufZ(&path, main.allocator), it);
model.inodes.addAllStats();
return if (it.* == e) e else next_sel;
} else {
const isdel = deleteCmd(util.arrayListBufZ(&path, main.allocator), it);
model.inodes.addAllStats();
return if (isdel) next_sel else it.*;
}
}
fn drawConfirm() void {
browser.draw();
const box = ui.Box.create(6, 60, "Confirm delete");
box.move(1, 2);
if (main.config.delete_command.len == 0) {
ui.addstr("Are you sure you want to delete \"");
ui.addstr(ui.shorten(ui.toUtf8(entry.name()), 21));
ui.addch('"');
if (entry.pack.etype != .dir)
ui.addch('?')
else {
box.move(2, 18);
ui.addstr("and all of its contents?");
}
} else {
ui.addstr("Are you sure you want to run \"");
ui.addstr(ui.shorten(ui.toUtf8(main.config.delete_command), 25));
ui.addch('"');
box.move(2, 4);
ui.addstr("on \"");
ui.addstr(ui.shorten(ui.toUtf8(entry.name()), 50));
ui.addch('"');
}
box.move(4, 15);
ui.style(if (confirm == .yes) .sel else .default);
ui.addstr("yes");
box.move(4, 25);
ui.style(if (confirm == .no) .sel else .default);
ui.addstr("no");
box.move(4, 31);
ui.style(if (confirm == .ignore) .sel else .default);
ui.addstr("don't ask me again");
box.move(4, switch (confirm) {
.yes => 15,
.no => 25,
.ignore => 31
});
}
fn drawProgress() void {
var path: std.ArrayListUnmanaged(u8) = .empty;
defer path.deinit(main.allocator);
parent.fmtPath(main.allocator, false, &path);
path.append(main.allocator, '/') catch unreachable;
path.appendSlice(main.allocator, entry.name()) catch unreachable;
// TODO: Item counts and progress bar would be nice.
const box = ui.Box.create(6, 60, "Deleting...");
box.move(2, 2);
ui.addstr(ui.shorten(ui.toUtf8(util.arrayListBufZ(&path, main.allocator)), 56));
box.move(4, 41);
ui.addstr("Press ");
ui.style(.key);
ui.addch('q');
ui.style(.default);
ui.addstr(" to abort");
}
fn drawErr() void {
var path: std.ArrayListUnmanaged(u8) = .empty;
defer path.deinit(main.allocator);
parent.fmtPath(main.allocator, false, &path);
path.append(main.allocator, '/') catch unreachable;
path.appendSlice(main.allocator, entry.name()) catch unreachable;
const box = ui.Box.create(6, 60, "Error");
box.move(1, 2);
ui.addstr("Error deleting ");
ui.addstr(ui.shorten(ui.toUtf8(util.arrayListBufZ(&path, main.allocator)), 41));
box.move(2, 4);
ui.addstr(ui.errorString(error_code));
box.move(4, 14);
ui.style(if (error_option == .abort) .sel else .default);
ui.addstr("abort");
box.move(4, 23);
ui.style(if (error_option == .ignore) .sel else .default);
ui.addstr("ignore");
box.move(4, 33);
ui.style(if (error_option == .all) .sel else .default);
ui.addstr("ignore all");
}
pub fn draw() void {
switch (state) {
.confirm => drawConfirm(),
.busy => drawProgress(),
.err => drawErr(),
}
}
pub fn keyInput(ch: i32) void {
switch (state) {
.confirm => switch (ch) {
'h', c.KEY_LEFT => confirm = switch (confirm) {
.ignore => .no,
else => .yes,
},
'l', c.KEY_RIGHT => confirm = switch (confirm) {
.yes => .no,
else => .ignore,
},
'q' => main.state = .browse,
'\n' => switch (confirm) {
.yes => state = .busy,
.no => main.state = .browse,
.ignore => {
main.config.confirm_delete = false;
state = .busy;
},
},
else => {}
},
.busy => {
if (ch == 'q')
main.state = .browse;
},
.err => switch (ch) {
'h', c.KEY_LEFT => error_option = switch (error_option) {
.all => .ignore,
else => .abort,
},
'l', c.KEY_RIGHT => error_option = switch (error_option) {
.abort => .ignore,
else => .all,
},
'q' => main.state = .browse,
'\n' => switch (error_option) {
.abort => main.state = .browse,
.ignore => state = .busy,
.all => {
main.config.ignore_delete_errors = true;
state = .busy;
},
},
else => {}
},
}
}

src/dir.h (deleted, 137 lines)

@@ -1,137 +0,0 @@
/* ncdu - NCurses Disk Usage
Copyright (c) 2007-2020 Yoran Heling
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
#ifndef _dir_h
#define _dir_h
/* The dir_* functions and files implement the SCAN state and are organized as
* follows:
*
* Input:
* Responsible for getting a directory structure into ncdu. Will call the
* Output functions for data and the UI functions for feedback. Currently
* there is only one input implementation: dir_scan.c
* Output:
* Called by the Input handling code when there's some new file/directory
* information. The Output code is responsible for doing something with it
* and determines what action should follow after the Input is done.
* Currently there is only one output implementation: dir_mem.c.
* Common:
* Utility functions and UI code for use by the Input handling code to draw
* progress/error information on the screen, handle any user input and misc.
* stuff.
*/
/* "Interface" that Input code should call and Output code should implement. */
struct dir_output {
/* Called when there is new file/dir info. Call stack for an example
* directory structure:
* / item('/')
* /subdir item('subdir')
* /subdir/f item('f')
* .. item(NULL)
* /abc item('abc')
* .. item(NULL)
* Every opened dir is followed by a call to NULL. There is only one top-level
* dir item. The name of the top-level dir item is the absolute path to the
* scanned directory.
*
* The *item struct has the following fields set when item() is called:
* size, asize, ino, dev, flags (only DIR,FILE,ERR,OTHFS,EXL,HLNKC).
* All other fields/flags should be initialized to NULL or 0.
* The name and dir_ext fields are given separately.
* All pointers may be overwritten or freed in subsequent calls, so this
* function should make a copy if necessary.
*
* The function should return non-zero on error, at which point errno is
* assumed to be set to something sensible.
*/
int (*item)(struct dir *, const char *, struct dir_ext *);
/* Finalizes the output to go to the next program state or exit ncdu. Called
* after item(NULL) has been called for the root item or before any item()
* has been called at all.
* Argument indicates success (0) or failure (1).
* Failure happens when the root directory couldn't be opened, chdir, lstat,
* read, when it is empty, or when the user aborted the operation.
* Return value should be 0 to continue running ncdu, 1 to exit.
*/
int (*final)(int);
/* The output code is responsible for updating these stats. Can be 0 when not
* available. */
int64_t size;
int items;
};
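
A toy illustration of the calling convention documented above: each entry results in one item() call, and every opened directory is closed with an item(NULL) call. The stand-in below only tracks depth and an item count; it is not the real dir/dir_ext interface:

/* Minimal stand-in for the item()/item(NULL) protocol described above. */
#include <stdio.h>

static int depth = 0, items = 0;

static int item(const char *name, int is_dir) {
    if (!name) { depth--; return 0; }    /* NULL closes the most recent directory */
    printf("%*s%s\n", depth * 2, "", name);
    items++;
    if (is_dir) depth++;                 /* subsequent items are nested inside */
    return 0;
}

int main(void) {
    /* Call sequence for the example tree in the comment above. */
    item("/", 1);
    item("subdir", 1);
    item("f", 0);
    item(NULL, 0);    /* .. leave /subdir */
    item("abc", 0);
    item(NULL, 0);    /* .. leave / */
    printf("%d items\n", items);
    return 0;
}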
/* Initializes the SCAN state and dir_output for immediate browsing.
* On success:
* If a dir item is given, overwrites it with the new dir struct.
* Then calls browse_init(new_dir_struct->sub).
* On failure:
* If a dir item is given, will just call browse_init(orig).
* Otherwise, will exit ncdu.
*/
void dir_mem_init(struct dir *);
/* Initializes the SCAN state and dir_output for exporting to a file. */
int dir_export_init(const char *fn);
/* Function set by input code. Returns dir_output.final(). */
extern int (*dir_process)();
/* Scanning a live directory */
extern int dir_scan_smfs;
void dir_scan_init(const char *path);
/* Importing a file */
extern int dir_import_active;
int dir_import_init(const char *fn);
/* The currently configured output functions. */
extern struct dir_output dir_output;
/* Current path that we're working with. These are defined in dir_common.c. */
extern char *dir_curpath;
void dir_curpath_set(const char *);
void dir_curpath_enter(const char *);
void dir_curpath_leave();
/* Sets the path where the last error occurred, or reset on NULL. */
void dir_setlasterr(const char *);
/* Error message on fatal error, or NULL if there hasn't been a fatal error yet. */
extern char *dir_fatalerr;
void dir_seterr(const char *, ...);
extern int dir_ui;
int dir_key(int);
void dir_draw();
#endif


@@ -1,232 +0,0 @@
/* ncdu - NCurses Disk Usage
Copyright (c) 2007-2020 Yoran Heling
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
#include "global.h"
#include <string.h>
#include <stdlib.h>
#include <stdio.h>
#include <stdarg.h>
int (*dir_process)();
char *dir_curpath; /* Full path of the last seen item. */
struct dir_output dir_output;
char *dir_fatalerr; /* Error message on a fatal error. (NULL if there was no fatal error) */
int dir_ui; /* User interface to use */
static int confirm_quit_while_scanning_stage_1_passed; /* Additional check before quitting */
static char *lasterr; /* Path where the last error occurred. */
static int curpathl; /* Allocated length of dir_curpath */
static int lasterrl; /* ^ of lasterr */
static void curpath_resize(int s) {
if(curpathl < s) {
curpathl = s < 128 ? 128 : s < curpathl*2 ? curpathl*2 : s;
dir_curpath = xrealloc(dir_curpath, curpathl);
}
}
void dir_curpath_set(const char *path) {
curpath_resize(strlen(path)+1);
strcpy(dir_curpath, path);
}
void dir_curpath_enter(const char *name) {
curpath_resize(strlen(dir_curpath)+strlen(name)+2);
if(dir_curpath[1])
strcat(dir_curpath, "/");
strcat(dir_curpath, name);
}
/* removes last component from dir_curpath */
void dir_curpath_leave() {
char *tmp;
if((tmp = strrchr(dir_curpath, '/')) == NULL)
strcpy(dir_curpath, "/");
else if(tmp != dir_curpath)
tmp[0] = 0;
else
tmp[1] = 0;
}
void dir_setlasterr(const char *path) {
if(!path) {
free(lasterr);
lasterr = NULL;
lasterrl = 0;
return;
}
int req = strlen(path)+1;
if(lasterrl < req) {
lasterrl = req;
lasterr = xrealloc(lasterr, lasterrl);
}
strcpy(lasterr, path);
}
void dir_seterr(const char *fmt, ...) {
free(dir_fatalerr);
dir_fatalerr = NULL;
if(!fmt)
return;
va_list va;
va_start(va, fmt);
dir_fatalerr = xmalloc(1024); /* Should be enough for everything... */
vsnprintf(dir_fatalerr, 1023, fmt, va);
dir_fatalerr[1023] = 0;
va_end(va);
}
static void draw_progress() {
static const char scantext[] = "Scanning...";
static const char loadtext[] = "Loading...";
static size_t anpos = 0;
const char *antext = dir_import_active ? loadtext : scantext;
char ani[16] = {};
size_t i;
int width = wincols-5;
nccreate(10, width, antext);
ncaddstr(2, 2, "Total items: ");
uic_set(UIC_NUM);
printw("%-9d", dir_output.items);
if(dir_output.size) {
ncaddstrc(UIC_DEFAULT, 2, 24, "size: ");
printsize(UIC_DEFAULT, dir_output.size);
}
uic_set(UIC_DEFAULT);
ncprint(3, 2, "Current item: %s", cropstr(dir_curpath, width-18));
if(confirm_quit_while_scanning_stage_1_passed) {
ncaddstr(8, width-26, "Press ");
addchc(UIC_KEY, 'y');
addstrc(UIC_DEFAULT, " to confirm abort");
} else {
ncaddstr(8, width-18, "Press ");
addchc(UIC_KEY, 'q');
addstrc(UIC_DEFAULT, " to abort");
}
/* show warning if we couldn't open a dir */
if(lasterr) {
attron(A_BOLD);
ncaddstr(5, 2, "Warning:");
attroff(A_BOLD);
ncprint(5, 11, "error scanning %-32s", cropstr(lasterr, width-28));
ncaddstr(6, 3, "some directory sizes may not be correct");
}
/* animation - but only if the screen refreshes at least once every second */
if(update_delay <= 1000) {
if(++anpos == strlen(antext)*2)
anpos = 0;
memset(ani, ' ', strlen(antext));
if(anpos < strlen(antext))
for(i=0; i<=anpos; i++)
ani[i] = antext[i];
else
for(i=strlen(antext)-1; i>anpos-strlen(antext); i--)
ani[i] = antext[i];
} else
strcpy(ani, antext);
ncaddstr(8, 3, ani);
}
static void draw_error(char *cur, char *msg) {
int width = wincols-5;
nccreate(7, width, "Error!");
attron(A_BOLD);
ncaddstr(2, 2, "Error:");
attroff(A_BOLD);
ncprint(2, 9, "could not open %s", cropstr(cur, width-26));
ncprint(3, 4, "%s", cropstr(msg, width-8));
ncaddstr(5, width-30, "press any key to continue...");
}
void dir_draw() {
float f;
char *unit;
switch(dir_ui) {
case 0:
if(dir_fatalerr)
fprintf(stderr, "%s.\n", dir_fatalerr);
break;
case 1:
if(dir_fatalerr)
fprintf(stderr, "\r%s.\n", dir_fatalerr);
else if(dir_output.size) {
f = formatsize(dir_output.size, &unit);
fprintf(stderr, "\r%-55s %8d files /%5.1f %s",
cropstr(dir_curpath, 55), dir_output.items, f, unit);
} else
fprintf(stderr, "\r%-65s %8d files", cropstr(dir_curpath, 65), dir_output.items);
break;
case 2:
browse_draw();
if(dir_fatalerr)
draw_error(dir_curpath, dir_fatalerr);
else
draw_progress();
break;
}
}
/* This function can't be called unless dir_ui == 2
* (Doesn't really matter either way). */
int dir_key(int ch) {
if(dir_fatalerr)
return 1;
if(confirm_quit && confirm_quit_while_scanning_stage_1_passed) {
if (ch == 'y'|| ch == 'Y') {
return 1;
} else {
confirm_quit_while_scanning_stage_1_passed = 0;
return 0;
}
} else if(ch == 'q') {
if(confirm_quit) {
confirm_quit_while_scanning_stage_1_passed = 1;
return 0;
} else
return 1;
}
return 0;
}


@@ -1,190 +0,0 @@
/* ncdu - NCurses Disk Usage
Copyright (c) 2007-2020 Yoran Heling
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
#include "global.h"
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
static FILE *stream;
/* Stack of device IDs, also used to keep track of the level of nesting */
struct stack {
uint64_t *list;
int size, top;
} stack;
static void output_string(const char *str) {
for(; *str; str++) {
switch(*str) {
case '\n': fputs("\\n", stream); break;
case '\r': fputs("\\r", stream); break;
case '\b': fputs("\\b", stream); break;
case '\t': fputs("\\t", stream); break;
case '\f': fputs("\\f", stream); break;
case '\\': fputs("\\\\", stream); break;
case '"': fputs("\\\"", stream); break;
default:
if((unsigned char)*str <= 31 || (unsigned char)*str == 127)
fprintf(stream, "\\u00%02x", *str);
else
fputc(*str, stream);
break;
}
}
}
static void output_int(uint64_t n) {
char tmp[20];
int i = 0;
do
tmp[i++] = n % 10;
while((n /= 10) > 0);
while(i--)
fputc(tmp[i]+'0', stream);
}
static void output_info(struct dir *d, const char *name, struct dir_ext *e) {
if(!extended_info || !(d->flags & FF_EXT))
e = NULL;
fputs("{\"name\":\"", stream);
output_string(name);
fputc('"', stream);
/* No need for asize/dsize if they're 0 (which happens with excluded or failed-to-stat files) */
if(d->asize) {
fputs(",\"asize\":", stream);
output_int((uint64_t)d->asize);
}
if(d->size) {
fputs(",\"dsize\":", stream);
output_int((uint64_t)d->size);
}
if(d->dev != nstack_top(&stack, 0)) {
fputs(",\"dev\":", stream);
output_int(d->dev);
}
fputs(",\"ino\":", stream);
output_int(d->ino);
if(e) {
fputs(",\"uid\":", stream);
output_int(e->uid);
fputs(",\"gid\":", stream);
output_int(e->gid);
fputs(",\"mode\":", stream);
output_int(e->mode);
fputs(",\"mtime\":", stream);
output_int(e->mtime);
}
/* TODO: Including the actual number of links would be nicer. */
if(d->flags & FF_HLNKC)
fputs(",\"hlnkc\":true", stream);
if(d->flags & FF_ERR)
fputs(",\"read_error\":true", stream);
/* excluded/error'd files are "unknown" with respect to the "notreg" field. */
if(!(d->flags & (FF_DIR|FF_FILE|FF_ERR|FF_EXL|FF_OTHFS)))
fputs(",\"notreg\":true", stream);
if(d->flags & FF_EXL)
fputs(",\"excluded\":\"pattern\"", stream);
else if(d->flags & FF_OTHFS)
fputs(",\"excluded\":\"othfs\"", stream);
fputc('}', stream);
}
/* Note on error handling: For convenience, we just keep writing to *stream
* without checking the return values of the functions. Only at the end of each
* item() call do we check for ferror(). This greatly simplifies the code, but
* assumes that calls to fwrite()/fputs()/etc don't do any weird stuff when
* called with a stream that's in an error state. */
static int item(struct dir *item, const char *name, struct dir_ext *ext) {
if(!item) {
nstack_pop(&stack);
if(!stack.top) { /* closing of the root item */
fputs("]]", stream);
return fclose(stream);
} else /* closing of a regular directory item */
fputs("]", stream);
return ferror(stream);
}
dir_output.items++;
/* File header.
* TODO: Add scan options? */
if(!stack.top) {
fputs("[1,1,{\"progname\":\""PACKAGE"\",\"progver\":\""PACKAGE_VERSION"\",\"timestamp\":", stream);
output_int((uint64_t)time(NULL));
fputc('}', stream);
}
fputs(",\n", stream);
if(item->flags & FF_DIR)
fputc('[', stream);
output_info(item, name, ext);
if(item->flags & FF_DIR)
nstack_push(&stack, item->dev);
return ferror(stream);
}
static int final(int fail) {
nstack_free(&stack);
return fail ? 1 : 1; /* Silences -Wunused-parameter */
}
int dir_export_init(const char *fn) {
if(strcmp(fn, "-") == 0)
stream = stdout;
else if((stream = fopen(fn, "w")) == NULL)
return 1;
nstack_init(&stack);
pstate = ST_CALC;
dir_output.item = item;
dir_output.final = final;
dir_output.size = 0;
dir_output.items = 0;
return 0;
}
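
Taken together, item() and output_info() above emit output along the following lines for a root directory containing one regular file. All numbers here are invented, and optional fields (dev, uid/gid/mode/mtime, the various flags) only appear when applicable:

[1,1,{"progname":"ncdu","progver":"1.14.2","timestamp":1600000000},
[{"name":"/tmp/example","asize":4096,"dsize":4096,"dev":2049,"ino":131073},
{"name":"file.txt","asize":13,"dsize":4096,"ino":131074}]]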


@@ -1,612 +0,0 @@
/* ncdu - NCurses Disk Usage
Copyright (c) 2007-2020 Yoran Heling
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
/* This JSON parser has the following limitations:
* - No support for character encodings incompatible with ASCII (e.g.
* UTF-16/32)
* - Doesn't validate UTF-8 correctness (in fact, besides the ASCII part this
* parser doesn't know anything about encoding).
* - Doesn't validate that there are no duplicate keys in JSON objects.
* - Isn't very strict with validating non-integer numbers.
*/
#include "global.h"
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <limits.h>
/* Max. length of any JSON string we're interested in. A string may of course
* be larger, we're not going to read more than MAX_VAL in memory. If a string
* we're interested in (e.g. a file name) is longer than this, reading the
* import will result in an error. */
#define MAX_VAL (32*1024)
/* Minimum number of bytes we request from fread() */
#define MIN_READ_SIZE 1024
/* Read buffer size. Must be at least 2*MIN_READ_SIZE, everything larger
* improves performance. */
#define READ_BUF_SIZE (32*1024)
int dir_import_active = 0;
/* Use a struct for easy batch-allocation and deallocation of state data. */
struct ctx {
FILE *stream;
int line;
int byte;
int eof;
int items;
char *buf; /* points into readbuf, always zero-terminated. */
char *lastfill; /* points into readbuf, location of the zero terminator. */
/* scratch space */
struct dir *buf_dir;
struct dir_ext buf_ext[1];
char buf_name[MAX_VAL];
char val[MAX_VAL];
char readbuf[READ_BUF_SIZE];
} *ctx;
/* Fills readbuf with data from the stream. *buf will have at least n (<
* READ_BUF_SIZE) bytes available, unless the stream reached EOF or an error
* occurred. If the file data contains a null byte, this is considered an error.
* Returns 0 on success, non-zero on error. */
static int fill(int n) {
int r;
if(ctx->eof)
return 0;
r = READ_BUF_SIZE-(ctx->lastfill - ctx->readbuf); /* number of bytes left in the buffer */
if(n < r)
n = r-1;
if(n < MIN_READ_SIZE) {
r = ctx->lastfill - ctx->buf; /* number of unread bytes left in the buffer */
memcpy(ctx->readbuf, ctx->buf, r);
ctx->lastfill = ctx->readbuf + r;
ctx->buf = ctx->readbuf;
n = READ_BUF_SIZE-r-1;
}
do {
r = fread(ctx->lastfill, 1, n, ctx->stream);
if(r != n) {
if(feof(ctx->stream))
ctx->eof = 1;
else if(ferror(ctx->stream) && errno != EINTR) {
dir_seterr("Read error: %s", strerror(errno));
return 1;
}
}
ctx->lastfill[r] = 0;
if(strlen(ctx->lastfill) != (size_t)r) {
dir_seterr("Zero-byte found in JSON stream");
return 1;
}
ctx->lastfill += r;
n -= r;
} while(!ctx->eof && n > MIN_READ_SIZE);
return 0;
}
/* Two macros that break function calling behaviour, but are damn convenient */
#define E(_x, _m) do {\
if(_x) {\
if(!dir_fatalerr)\
dir_seterr("Line %d byte %d: %s", ctx->line, ctx->byte, _m);\
return 1;\
}\
} while(0)
#define C(_x) do {\
if(_x)\
return 1;\
} while(0)
/* Require at least n bytes in the buffer, throw an error on early EOF.
* (Macro to quickly handle the common case) */
#define rfill1 (!*ctx->buf && _rfill(1))
#define rfill(_n) ((ctx->lastfill - ctx->buf < (_n)) && _rfill(_n))
static int _rfill(int n) {
C(fill(n));
E(ctx->lastfill - ctx->buf < n, "Unexpected EOF");
return 0;
}
/* Consumes n bytes from the buffer. */
static inline void con(int n) {
ctx->buf += n;
ctx->byte += n;
}
/* Consumes any whitespace. If *ctx->buf == 0 after this function, we've reached EOF. */
static int cons() {
while(1) {
C(!*ctx->buf && fill(1));
switch(*ctx->buf) {
case 0x0A:
/* Special-case the newline-character with respect to consuming stuff
* from the buffer. This is the only function which *can* consume the
* newline character, so it's more efficient to handle it in here rather
* than in the more general con(). */
ctx->buf++;
ctx->line++;
ctx->byte = 0;
break;
case 0x20:
case 0x09:
case 0x0D:
con(1);
break;
default:
return 0;
}
}
}
static int rstring_esc(char **dest, int *destlen) {
unsigned int n;
C(rfill1);
#define ap(c) if(*destlen > 1) { *((*dest)++) = c; (*destlen)--; }
switch(*ctx->buf) {
case '"': ap('"'); con(1); break;
case '\\': ap('\\'); con(1); break;
case '/': ap('/'); con(1); break;
case 'b': ap(0x08); con(1); break;
case 'f': ap(0x0C); con(1); break;
case 'n': ap(0x0A); con(1); break;
case 'r': ap(0x0D); con(1); break;
case 't': ap(0x09); con(1); break;
case 'u':
C(rfill(5));
#define hn(n) (n >= '0' && n <= '9' ? n-'0' : n >= 'A' && n <= 'F' ? n-'A'+10 : n >= 'a' && n <= 'f' ? n-'a'+10 : 1<<16)
n = (hn(ctx->buf[1])<<12) + (hn(ctx->buf[2])<<8) + (hn(ctx->buf[3])<<4) + hn(ctx->buf[4]);
#undef hn
if(n <= 0x007F) {
ap(n);
} else if(n <= 0x07FF) {
ap(0xC0 | (n>>6));
ap(0x80 | (n & 0x3F));
} else if(n <= 0xFFFF) {
ap(0xE0 | (n>>12));
ap(0x80 | ((n>>6) & 0x3F));
ap(0x80 | (n & 0x3F));
} else /* this happens if there was an invalid character (n >= (1<<16)) */
E(1, "Invalid character in \\u escape");
con(5);
break;
default:
E(1, "Invalid escape sequence");
}
#undef ap
return 0;
}
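
The \u handling above is a small hand-rolled UTF-8 encoder for code points up to U+FFFF (surrogate pairs are not handled, matching the parser). The same three branches in isolation, with a few worked examples:

/* Encode a BMP code point (as parsed from a \uXXXX escape) to UTF-8. */
#include <stdio.h>

static int utf8(unsigned n, unsigned char *out) {
    if (n <= 0x007F) { out[0] = (unsigned char)n; return 1; }
    if (n <= 0x07FF) {
        out[0] = (unsigned char)(0xC0 | (n >> 6));
        out[1] = (unsigned char)(0x80 | (n & 0x3F));
        return 2;
    }
    out[0] = (unsigned char)(0xE0 | (n >> 12));
    out[1] = (unsigned char)(0x80 | ((n >> 6) & 0x3F));
    out[2] = (unsigned char)(0x80 | (n & 0x3F));
    return 3;
}

int main(void) {
    const unsigned examples[] = { 0x0041, 0x00E9, 0x20AC };  /* A, e-acute, euro sign */
    unsigned char buf[3];
    for (int i = 0; i < 3; i++) {
        int len = utf8(examples[i], buf);
        printf("U+%04X ->", examples[i]);
        for (int j = 0; j < len; j++) printf(" %02X", buf[j]);
        printf("\n");   /* U+0041 -> 41, U+00E9 -> C3 A9, U+20AC -> E2 82 AC */
    }
    return 0;
}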
/* Parse a JSON string and write it to *dest (max. destlen). Consumes but
* otherwise ignores any characters if the string is longer than destlen. *dest
* will be null-terminated, dest[destlen-1] = 0 if the string was cut just long
* enough or was cut off. That byte will be left untouched if the string is
* small enough. */
static int rstring(char *dest, int destlen) {
C(rfill1);
E(*ctx->buf != '"', "Expected string");
con(1);
while(1) {
C(rfill1);
if(*ctx->buf == '"')
break;
if(*ctx->buf == '\\') {
con(1);
C(rstring_esc(&dest, &destlen));
continue;
}
E((unsigned char)*ctx->buf <= 0x1F || (unsigned char)*ctx->buf == 0x7F, "Invalid character");
if(destlen > 1) {
*(dest++) = *ctx->buf;
destlen--;
}
con(1);
}
con(1);
if(destlen > 0)
*dest = 0;
return 0;
}
/* Parse and consume a JSON integer. Throws an error if the value does not fit
* in an uint64_t, is not an integer or is larger than 'max'. */
static int rint64(uint64_t *val, uint64_t max) {
uint64_t v;
int haschar = 0;
*val = 0;
while(1) {
C(!*ctx->buf && fill(1));
if(*ctx->buf == '0' && !haschar) {
con(1);
break;
}
if(*ctx->buf >= '0' && *ctx->buf <= '9') {
haschar = 1;
v = (*val)*10 + (*ctx->buf-'0');
E(v < *val, "Invalid (positive) integer");
*val = v;
con(1);
continue;
}
E(!haschar, "Invalid (positive) integer");
break;
}
E(*val > max, "Integer out of range");
return 0;
}
/* Parse and consume a JSON number. The result is discarded.
* TODO: Improve validation. */
static int rnum() {
int haschar = 0;
C(rfill1);
while(1) {
C(!*ctx->buf && fill(1));
if(*ctx->buf == 'e' || *ctx->buf == 'E' || *ctx->buf == '-' || *ctx->buf == '+' || *ctx->buf == '.' || (*ctx->buf >= '0' && *ctx->buf <= '9')) {
haschar = 1;
con(1);
} else {
E(!haschar, "Invalid JSON value");
break;
}
}
return 0;
}
static int rlit(const char *v, int len) {
C(rfill(len));
E(strncmp(ctx->buf, v, len) != 0, "Invalid JSON value");
con(len);
return 0;
}
/* Parse the "<space> <string> <space> : <space>" part of an object key. */
static int rkey(char *dest, int destlen) {
C(cons() || rstring(dest, destlen) || cons());
E(*ctx->buf != ':', "Expected ':'");
con(1);
return cons();
}
/* (Recursively) parse and consume any JSON value. The result is discarded. */
static int rval() {
C(rfill1);
switch(*ctx->buf) {
case 't': /* true */
C(rlit("true", 4));
break;
case 'f': /* false */
C(rlit("false", 5));
break;
case 'n': /* null */
C(rlit("null", 4));
break;
case '"': /* string */
C(rstring(NULL, 0));
break;
case '{': /* object */
con(1);
while(1) {
C(cons());
if(*ctx->buf == '}')
break;
C(rkey(NULL, 0) || rval() || cons());
if(*ctx->buf == '}')
break;
E(*ctx->buf != ',', "Expected ',' or '}'");
con(1);
}
con(1);
break;
case '[': /* array */
con(1);
while(1) {
C(cons());
if(*ctx->buf == ']')
break;
C(cons() || rval() || cons());
if(*ctx->buf == ']')
break;
E(*ctx->buf != ',', "Expected ',' or ']'");
con(1);
}
con(1);
break;
default: /* assume number */
C(rnum());
break;
}
return 0;
}
/* Consumes everything up to the root item, and checks that this item is a dir. */
static int header() {
uint64_t v;
C(cons());
E(*ctx->buf != '[', "Expected JSON array");
con(1);
C(cons() || rint64(&v, 10000) || cons());
E(v != 1, "Incompatible major format version");
E(*ctx->buf != ',', "Expected ','");
con(1);
C(cons() || rint64(&v, 10000) || cons()); /* Ignore the minor version for now */
E(*ctx->buf != ',', "Expected ','");
con(1);
/* Metadata block is currently ignored */
C(cons() || rval() || cons());
E(*ctx->buf != ',', "Expected ','");
con(1);
C(cons());
E(*ctx->buf != '[', "Top-level item must be a directory");
return 0;
}
static int item(uint64_t);
/* Read and add dir contents */
static int itemdir(uint64_t dev) {
while(1) {
C(cons());
if(*ctx->buf == ']')
break;
E(*ctx->buf != ',', "Expected ',' or ']'");
con(1);
C(cons() || item(dev));
}
con(1);
C(cons());
return 0;
}
/* Reads a JSON object representing a struct dir/dir_ext item. Writes to
* ctx->buf_dir, ctx->buf_ext and ctx->buf_name. */
static int iteminfo() {
uint64_t iv;
E(*ctx->buf != '{', "Expected JSON object");
con(1);
while(1) {
C(rkey(ctx->val, MAX_VAL));
/* TODO: strcmp() in this fashion isn't very fast. */
if(strcmp(ctx->val, "name") == 0) { /* name */
ctx->val[MAX_VAL-1] = 1;
C(rstring(ctx->val, MAX_VAL));
E(ctx->val[MAX_VAL-1] != 1, "Too large string value");
strcpy(ctx->buf_name, ctx->val);
} else if(strcmp(ctx->val, "asize") == 0) { /* asize */
C(rint64(&iv, INT64_MAX));
ctx->buf_dir->asize = iv;
} else if(strcmp(ctx->val, "dsize") == 0) { /* dsize */
C(rint64(&iv, INT64_MAX));
ctx->buf_dir->size = iv;
} else if(strcmp(ctx->val, "dev") == 0) { /* dev */
C(rint64(&iv, UINT64_MAX));
ctx->buf_dir->dev = iv;
} else if(strcmp(ctx->val, "ino") == 0) { /* ino */
C(rint64(&iv, UINT64_MAX));
ctx->buf_dir->ino = iv;
} else if(strcmp(ctx->val, "uid") == 0) { /* uid */
C(rint64(&iv, INT32_MAX));
ctx->buf_dir->flags |= FF_EXT;
ctx->buf_ext->uid = iv;
} else if(strcmp(ctx->val, "gid") == 0) { /* gid */
C(rint64(&iv, INT32_MAX));
ctx->buf_dir->flags |= FF_EXT;
ctx->buf_ext->gid = iv;
} else if(strcmp(ctx->val, "mode") == 0) { /* mode */
C(rint64(&iv, UINT16_MAX));
ctx->buf_dir->flags |= FF_EXT;
ctx->buf_ext->mode = iv;
} else if(strcmp(ctx->val, "mtime") == 0) { /* mtime */
C(rint64(&iv, UINT64_MAX));
ctx->buf_dir->flags |= FF_EXT;
ctx->buf_ext->mtime = iv;
} else if(strcmp(ctx->val, "hlnkc") == 0) { /* hlnkc */
if(*ctx->buf == 't') {
C(rlit("true", 4));
ctx->buf_dir->flags |= FF_HLNKC;
} else
C(rlit("false", 5));
} else if(strcmp(ctx->val, "read_error") == 0) { /* read_error */
if(*ctx->buf == 't') {
C(rlit("true", 4));
ctx->buf_dir->flags |= FF_ERR;
} else
C(rlit("false", 5));
} else if(strcmp(ctx->val, "excluded") == 0) { /* excluded */
C(rstring(ctx->val, 8));
if(strcmp(ctx->val, "otherfs") == 0)
ctx->buf_dir->flags |= FF_OTHFS;
else
ctx->buf_dir->flags |= FF_EXL;
} else if(strcmp(ctx->val, "notreg") == 0) { /* notreg */
if(*ctx->buf == 't') {
C(rlit("true", 4));
ctx->buf_dir->flags &= ~FF_FILE;
} else
C(rlit("false", 5));
} else
C(rval());
/* TODO: Extended attributes */
C(cons());
if(*ctx->buf == '}')
break;
E(*ctx->buf != ',', "Expected ',' or '}'");
con(1);
}
con(1);
E(!*ctx->buf_name, "No name field present in item information object");
ctx->items++;
/* Only call input_handle() once for every 32 items. Importing items is so
* fast that the time spent in input_handle() dominates when called every
* time. Don't set this value too high, either, as feedback should still be
* somewhat responsive when our import data comes from a slow-ish source. */
return !(ctx->items & 31) ? input_handle(1) : 0;
}
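
The !(ctx->items & 31) test above is a cheap "every 32nd call" check: items & 31 equals items modulo 32. Standalone illustration:

/* Call a relatively expensive hook only once every 32 iterations. */
#include <stdio.h>

int main(void) {
    for (int items = 1; items <= 100; items++)
        if (!(items & 31))
            printf("handle input at item %d\n", items);   /* 32, 64, 96 */
    return 0;
}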
/* Recursively reads a file or directory item */
static int item(uint64_t dev) {
int isdir = 0;
int isroot = ctx->items == 0;
if(*ctx->buf == '[') {
isdir = 1;
con(1);
C(cons());
}
memset(ctx->buf_dir, 0, offsetof(struct dir, name));
memset(ctx->buf_ext, 0, sizeof(struct dir_ext));
*ctx->buf_name = 0;
ctx->buf_dir->flags |= isdir ? FF_DIR : FF_FILE;
ctx->buf_dir->dev = dev;
C(iteminfo());
dev = ctx->buf_dir->dev;
if(isroot)
dir_curpath_set(ctx->buf_name);
else
dir_curpath_enter(ctx->buf_name);
if(isdir) {
if(dir_output.item(ctx->buf_dir, ctx->buf_name, ctx->buf_ext)) {
dir_seterr("Output error: %s", strerror(errno));
return 1;
}
C(itemdir(dev));
if(dir_output.item(NULL, 0, NULL)) {
dir_seterr("Output error: %s", strerror(errno));
return 1;
}
} else if(dir_output.item(ctx->buf_dir, ctx->buf_name, ctx->buf_ext)) {
dir_seterr("Output error: %s", strerror(errno));
return 1;
}
if(!isroot)
dir_curpath_leave();
return 0;
}
static int footer() {
C(cons());
E(*ctx->buf != ']', "Expected ']'");
con(1);
C(cons());
E(*ctx->buf, "Trailing garbage");
return 0;
}
static int process() {
int fail = 0;
header();
if(!dir_fatalerr)
fail = item(0);
if(!dir_fatalerr && !fail)
footer();
if(fclose(ctx->stream) && !dir_fatalerr && !fail)
dir_seterr("Error closing file: %s", strerror(errno));
free(ctx->buf_dir);
free(ctx);
while(dir_fatalerr && !input_handle(0))
;
return dir_output.final(dir_fatalerr || fail);
}
int dir_import_init(const char *fn) {
FILE *stream;
if(strcmp(fn, "-") == 0)
stream = stdin;
else if((stream = fopen(fn, "r")) == NULL)
return 1;
ctx = xmalloc(sizeof(struct ctx));
ctx->stream = stream;
ctx->line = 1;
ctx->byte = ctx->eof = ctx->items = 0;
ctx->buf = ctx->lastfill = ctx->readbuf;
ctx->buf_dir = xmalloc(dir_memsize(""));
ctx->readbuf[0] = 0;
dir_curpath_set(fn);
dir_process = process;
dir_import_active = 1;
return 0;
}


@@ -1,215 +0,0 @@
/* ncdu - NCurses Disk Usage
Copyright (c) 2007-2020 Yoran Heling
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
#include "global.h"
#include <string.h>
#include <stdlib.h>
#include <khashl.h>
static struct dir *root; /* root directory struct we're scanning */
static struct dir *curdir; /* directory item that we're currently adding items to */
static struct dir *orig; /* original directory, when refreshing an already scanned dir */
/* Table of struct dir items with more than one link (in order to detect hard links) */
#define hlink_hash(d) (kh_hash_uint64((khint64_t)d->dev) ^ kh_hash_uint64((khint64_t)d->ino))
#define hlink_equal(a, b) ((a)->dev == (b)->dev && (a)->ino == (b)->ino)
KHASHL_SET_INIT(KH_LOCAL, hl_t, hl, struct dir *, hlink_hash, hlink_equal);
static hl_t *links = NULL;
/* recursively checks a dir structure for hard links and fills the lookup table */
static void hlink_init(struct dir *d) {
struct dir *t;
for(t=d->sub; t!=NULL; t=t->next)
hlink_init(t);
if(!(d->flags & FF_HLNKC))
return;
int r;
hl_put(links, d, &r);
}
/* checks an individual file for hard links and updates its circular linked
* list, also updates the sizes of the parent dirs */
static void hlink_check(struct dir *d) {
struct dir *t, *pt, *par;
int i;
/* add to links table */
khint_t k = hl_put(links, d, &i);
/* found in the table? update hlnk */
if(!i) {
t = kh_key(links, k);
d->hlnk = t->hlnk == NULL ? t : t->hlnk;
t->hlnk = d;
}
/* now update the sizes of the parent directories.
* This works by only counting this file in the parent directories where this
* file hasn't been counted yet, which can be determined from the hlnk list.
* XXX: This may not be the most efficient algorithm to do this */
for(i=1,par=d->parent; i&&par; par=par->parent) {
if(d->hlnk)
for(t=d->hlnk; i&&t!=d; t=t->hlnk)
for(pt=t->parent; i&&pt; pt=pt->parent)
if(pt==par)
i=0;
if(i) {
par->size = adds64(par->size, d->size);
par->asize = adds64(par->asize, d->asize);
}
}
}
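
The links table above keys hard-link candidates on the (dev, ino) pair, which is what identifies one inode across the whole scan. A small self-contained sketch of that identity check; the paths are illustrative (run `ln f1 f2` beforehand to see a hit):

#include <stdio.h>
#include <sys/stat.h>

/* Returns 1 when both paths refer to the same inode on the same device. */
static int same_inode(const char *a, const char *b) {
    struct stat sa, sb;
    if (lstat(a, &sa) || lstat(b, &sb))
        return 0;
    return sa.st_dev == sb.st_dev && sa.st_ino == sb.st_ino;
}

int main(void) {
    struct stat s;
    if (lstat("f1", &s) == 0 && s.st_nlink > 1)
        printf("f1 is a hard link candidate (nlink=%ld)\n", (long)s.st_nlink);
    printf("f1 and f2 share an inode: %d\n", same_inode("f1", "f2"));
    return 0;
}
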
/* Add item to the correct place in the memory structure */
static void item_add(struct dir *item) {
if(!root) {
root = item;
/* Make sure that the *root appears to be part of the same dir structure as
* *orig, otherwise the directory size calculation will be incorrect in the
* case of hard links. */
if(orig)
root->parent = orig->parent;
} else {
item->parent = curdir;
item->next = curdir->sub;
if(item->next)
item->next->prev = item;
curdir->sub = item;
}
}
static int item(struct dir *dir, const char *name, struct dir_ext *ext) {
struct dir *t, *item;
/* Go back to parent dir */
if(!dir) {
curdir = curdir->parent;
return 0;
}
if(!root && orig)
name = orig->name;
if(!extended_info)
dir->flags &= ~FF_EXT;
item = xmalloc(dir->flags & FF_EXT ? dir_ext_memsize(name) : dir_memsize(name));
memcpy(item, dir, offsetof(struct dir, name));
strcpy(item->name, name);
if(dir->flags & FF_EXT)
memcpy(dir_ext_ptr(item), ext, sizeof(struct dir_ext));
item_add(item);
/* Ensure that any next items will go to this directory */
if(item->flags & FF_DIR)
curdir = item;
/* Special-case the name of the root item to be empty instead of "/". This is
* what getpath() expects. */
if(item == root && strcmp(item->name, "/") == 0)
item->name[0] = 0;
/* Update stats of parents. Don't update the size/asize fields if this is a
* possible hard link, because hlink_check() will take care of it in that
* case. */
if(item->flags & FF_HLNKC) {
addparentstats(item->parent, 0, 0, 0, 1);
hlink_check(item);
} else if(item->flags & FF_EXT) {
addparentstats(item->parent, item->size, item->asize, dir_ext_ptr(item)->mtime, 1);
} else {
addparentstats(item->parent, item->size, item->asize, 0, 1);
}
/* propagate ERR and SERR back up to the root */
if(item->flags & FF_SERR || item->flags & FF_ERR)
for(t=item->parent; t; t=t->parent)
t->flags |= FF_SERR;
dir_output.size = root->size;
dir_output.items = root->items;
return 0;
}
static int final(int fail) {
hl_destroy(links);
links = NULL;
if(fail) {
freedir(root);
if(orig) {
browse_init(orig);
return 0;
} else
return 1;
}
/* success, update references and free original item */
if(orig) {
root->next = orig->next;
root->prev = orig->prev;
if(root->parent && root->parent->sub == orig)
root->parent->sub = root;
if(root->prev)
root->prev->next = root;
if(root->next)
root->next->prev = root;
orig->next = orig->prev = NULL;
freedir(orig);
}
browse_init(root);
dirlist_top(-3);
return 0;
}
void dir_mem_init(struct dir *_orig) {
orig = _orig;
root = curdir = NULL;
pstate = ST_CALC;
dir_output.item = item;
dir_output.final = final;
dir_output.size = 0;
dir_output.items = 0;
/* Init hash table for hard link detection */
links = hl_init();
if(orig)
hlink_init(getroot(orig));
}


@ -1,318 +0,0 @@
/* ncdu - NCurses Disk Usage
Copyright (c) 2007-2020 Yoran Heling
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
#include "global.h"
#include <string.h>
#include <stdlib.h>
#include <errno.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <dirent.h>
/* set S_BLKSIZE if not defined already in sys/stat.h */
#ifndef S_BLKSIZE
# define S_BLKSIZE 512
#endif
int dir_scan_smfs; /* Stay on the same filesystem */
static uint64_t curdev; /* current device we're scanning on */
/* scratch space */
static struct dir *buf_dir;
static struct dir_ext buf_ext[1];
/* Populates the buf_dir and buf_ext with information from the stat struct.
* Sets everything necessary for dir_output.item() except FF_ERR and FF_EXL. */
static void stat_to_dir(struct stat *fs) {
buf_dir->flags |= FF_EXT; /* We always read extended data because it doesn't have an additional cost */
buf_dir->ino = (uint64_t)fs->st_ino;
buf_dir->dev = (uint64_t)fs->st_dev;
if(S_ISREG(fs->st_mode))
buf_dir->flags |= FF_FILE;
else if(S_ISDIR(fs->st_mode))
buf_dir->flags |= FF_DIR;
if(!S_ISDIR(fs->st_mode) && fs->st_nlink > 1)
buf_dir->flags |= FF_HLNKC;
if(dir_scan_smfs && curdev != buf_dir->dev)
buf_dir->flags |= FF_OTHFS;
if(!(buf_dir->flags & (FF_OTHFS|FF_EXL))) {
buf_dir->size = fs->st_blocks * S_BLKSIZE;
buf_dir->asize = fs->st_size;
}
buf_ext->mode = fs->st_mode;
buf_ext->mtime = fs->st_mtime;
buf_ext->uid = (int)fs->st_uid;
buf_ext->gid = (int)fs->st_gid;
}
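
stat_to_dir() is where ncdu's two size columns come from: size is disk usage (st_blocks, always counted in 512-byte units) and asize is the apparent size (st_size). A minimal sketch that prints both for a given path using the same two stat fields; try it on a sparse file to see them diverge:

#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char **argv) {
    struct stat st;
    const char *path = argc > 1 ? argv[1] : ".";
    if (lstat(path, &st)) { perror("lstat"); return 1; }
    printf("apparent size: %lld bytes\n", (long long)st.st_size);
    printf("disk usage:    %lld bytes\n", (long long)st.st_blocks * 512);
    return 0;
}
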
/* Reads all filenames in the currently chdir'ed directory and stores them as a
* nul-separated list of filenames. The list ends with an empty filename (i.e.
* two nuls). . and .. are not included. Returned memory should be freed. *err
* is set to 1 if some error occurred. Returns NULL if that error was fatal.
* The reason for reading everything in memory first and then walking through
* the list is to avoid eating too many file descriptors in a deeply recursive
* directory. */
static char *dir_read(int *err) {
DIR *dir;
struct dirent *item;
char *buf = NULL;
int buflen = 512;
int off = 0;
if((dir = opendir(".")) == NULL) {
*err = 1;
return NULL;
}
buf = xmalloc(buflen);
errno = 0;
while((item = readdir(dir)) != NULL) {
if(item->d_name[0] == '.' && (item->d_name[1] == 0 || (item->d_name[1] == '.' && item->d_name[2] == 0)))
continue;
int req = off+3+strlen(item->d_name);
if(req > buflen) {
buflen = req < buflen*2 ? buflen*2 : req;
buf = xrealloc(buf, buflen);
}
strcpy(buf+off, item->d_name);
off += strlen(item->d_name)+1;
}
if(errno)
*err = 1;
if(closedir(dir) < 0)
*err = 1;
buf[off] = 0;
buf[off+1] = 0;
return buf;
}
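
The NUL-separated list format means only one DIR handle is open per level of recursion. A small standalone sketch of building and walking such a list with made-up names; dir_walk() below iterates the real buffer the same way:

#include <stdio.h>
#include <string.h>

int main(void) {
    char list[64];
    int off = 0;
    const char *names[] = { "foo", "bar.c", "baz" };
    for (int i = 0; i < 3; i++) {
        strcpy(list + off, names[i]);
        off += (int)strlen(names[i]) + 1;
    }
    list[off] = 0;   /* the empty filename that terminates the list */

    for (char *cur = list; *cur; cur += strlen(cur) + 1)
        printf("entry: %s\n", cur);
    return 0;
}
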
static int dir_walk(char *);
/* Tries to recurse into the current directory item (buf_dir is assumed to be the current dir) */
static int dir_scan_recurse(const char *name) {
int fail = 0;
char *dir;
if(chdir(name)) {
dir_setlasterr(dir_curpath);
buf_dir->flags |= FF_ERR;
if(dir_output.item(buf_dir, name, buf_ext) || dir_output.item(NULL, 0, NULL)) {
dir_seterr("Output error: %s", strerror(errno));
return 1;
}
return 0;
}
if((dir = dir_read(&fail)) == NULL) {
dir_setlasterr(dir_curpath);
buf_dir->flags |= FF_ERR;
if(dir_output.item(buf_dir, name, buf_ext) || dir_output.item(NULL, 0, NULL)) {
dir_seterr("Output error: %s", strerror(errno));
return 1;
}
if(chdir("..")) {
dir_seterr("Error going back to parent directory: %s", strerror(errno));
return 1;
} else
return 0;
}
/* readdir() failed halfway, not fatal. */
if(fail)
buf_dir->flags |= FF_ERR;
if(dir_output.item(buf_dir, name, buf_ext)) {
dir_seterr("Output error: %s", strerror(errno));
return 1;
}
fail = dir_walk(dir);
if(dir_output.item(NULL, 0, NULL)) {
dir_seterr("Output error: %s", strerror(errno));
return 1;
}
/* Not being able to chdir back is fatal */
if(!fail && chdir("..")) {
dir_seterr("Error going back to parent directory: %s", strerror(errno));
return 1;
}
return fail;
}
/* Scans and adds a single item. Recurses into dir_walk() again if this is a
* directory. Assumes we're chdir'ed in the directory in which this item
* resides. */
static int dir_scan_item(const char *name) {
static struct stat st, stl;
int fail = 0;
#ifdef __CYGWIN__
/* /proc/registry names may contain slashes */
if(strchr(name, '/') || strchr(name, '\\')) {
buf_dir->flags |= FF_ERR;
dir_setlasterr(dir_curpath);
}
#endif
if(exclude_match(dir_curpath))
buf_dir->flags |= FF_EXL;
if(!(buf_dir->flags & (FF_ERR|FF_EXL)) && lstat(name, &st)) {
buf_dir->flags |= FF_ERR;
dir_setlasterr(dir_curpath);
}
if(!(buf_dir->flags & (FF_ERR|FF_EXL))) {
if(follow_symlinks && S_ISLNK(st.st_mode) && !stat(name, &stl) && !S_ISDIR(stl.st_mode))
stat_to_dir(&stl);
else
stat_to_dir(&st);
}
if(cachedir_tags && (buf_dir->flags & FF_DIR) && !(buf_dir->flags & (FF_ERR|FF_EXL|FF_OTHFS)))
if(has_cachedir_tag(name)) {
buf_dir->flags |= FF_EXL;
buf_dir->size = buf_dir->asize = 0;
}
/* Recurse into the dir or output the item */
if(buf_dir->flags & FF_DIR && !(buf_dir->flags & (FF_ERR|FF_EXL|FF_OTHFS)))
fail = dir_scan_recurse(name);
else if(buf_dir->flags & FF_DIR) {
if(dir_output.item(buf_dir, name, buf_ext) || dir_output.item(NULL, 0, NULL)) {
dir_seterr("Output error: %s", strerror(errno));
fail = 1;
}
} else if(dir_output.item(buf_dir, name, buf_ext)) {
dir_seterr("Output error: %s", strerror(errno));
fail = 1;
}
return fail || input_handle(1);
}
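
The follow_symlinks branch needs both calls because only lstat() sees the link itself (S_ISLNK is never set on a stat() result), while stat() describes the target; the target's data is substituted only when it is not a directory. A minimal sketch of the two views of one symlink; the path is illustrative (e.g. create it with `ln -s /etc/hostname somelink`):

#include <stdio.h>
#include <sys/stat.h>

int main(void) {
    struct stat ln, tgt;
    const char *path = "somelink";
    if (lstat(path, &ln)) { perror("lstat"); return 1; }
    printf("lstat: symlink=%d size=%lld\n",
           S_ISLNK(ln.st_mode) != 0, (long long)ln.st_size);
    if (stat(path, &tgt) == 0)   /* follows the link; fails if the target is gone */
        printf("stat:  symlink=%d size=%lld\n",
               S_ISLNK(tgt.st_mode) != 0, (long long)tgt.st_size);
    return 0;
}
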
/* Walks through the directory that we're currently chdir'ed to. *dir contains
* the filenames as returned by dir_read(), and will be freed automatically by
* this function. */
static int dir_walk(char *dir) {
int fail = 0;
char *cur;
fail = 0;
for(cur=dir; !fail&&cur&&*cur; cur+=strlen(cur)+1) {
dir_curpath_enter(cur);
memset(buf_dir, 0, offsetof(struct dir, name));
memset(buf_ext, 0, sizeof(struct dir_ext));
fail = dir_scan_item(cur);
dir_curpath_leave();
}
free(dir);
return fail;
}
static int process() {
char *path;
char *dir;
int fail = 0;
struct stat fs;
memset(buf_dir, 0, offsetof(struct dir, name));
memset(buf_ext, 0, sizeof(struct dir_ext));
if((path = path_real(dir_curpath)) == NULL)
dir_seterr("Error obtaining full path: %s", strerror(errno));
else {
dir_curpath_set(path);
free(path);
}
if(!dir_fatalerr && path_chdir(dir_curpath) < 0)
dir_seterr("Error changing directory: %s", strerror(errno));
/* Can these even fail after a chdir? */
if(!dir_fatalerr && lstat(".", &fs) != 0)
dir_seterr("Error obtaining directory information: %s", strerror(errno));
if(!dir_fatalerr && !S_ISDIR(fs.st_mode))
dir_seterr("Not a directory");
if(!dir_fatalerr && !(dir = dir_read(&fail)))
dir_seterr("Error reading directory: %s", strerror(errno));
if(!dir_fatalerr) {
curdev = (uint64_t)fs.st_dev;
if(fail)
buf_dir->flags |= FF_ERR;
stat_to_dir(&fs);
if(dir_output.item(buf_dir, dir_curpath, buf_ext)) {
dir_seterr("Output error: %s", strerror(errno));
fail = 1;
}
if(!fail)
fail = dir_walk(dir);
if(!fail && dir_output.item(NULL, 0, NULL)) {
dir_seterr("Output error: %s", strerror(errno));
fail = 1;
}
}
while(dir_fatalerr && !input_handle(0))
;
return dir_output.final(dir_fatalerr || fail);
}
void dir_scan_init(const char *path) {
dir_curpath_set(path);
dir_setlasterr(NULL);
dir_seterr(NULL);
dir_process = process;
if (!buf_dir)
buf_dir = xmalloc(dir_memsize(""));
pstate = ST_CALC;
}


@ -1,398 +0,0 @@
/* ncdu - NCurses Disk Usage
Copyright (c) 2007-2020 Yoran Heling
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
#include "global.h"
#include <string.h>
#include <stdlib.h>
/* public variables */
struct dir *dirlist_parent = NULL,
*dirlist_par = NULL;
int64_t dirlist_maxs = 0,
dirlist_maxa = 0;
int dirlist_sort_desc = 1,
dirlist_sort_col = DL_COL_SIZE,
dirlist_sort_df = 0,
dirlist_hidden = 0;
/* private state vars */
static struct dir *parent_alloc, *head, *head_real, *selected, *top = NULL;
#define ISHIDDEN(d) (dirlist_hidden && (d) != dirlist_parent && (\
(d)->flags & FF_EXL || (d)->name[0] == '.' || (d)->name[strlen((d)->name)-1] == '~'\
))
static inline int cmp_mtime(struct dir *x, struct dir*y) {
int64_t x_mtime = 0, y_mtime = 0;
if (x->flags & FF_EXT)
x_mtime = dir_ext_ptr(x)->mtime;
if (y->flags & FF_EXT)
y_mtime = dir_ext_ptr(y)->mtime;
return (x_mtime > y_mtime ? 1 : (x_mtime == y_mtime ? 0 : -1));
}
static int dirlist_cmp(struct dir *x, struct dir *y) {
int r;
/* dirs are always before files when that option is set */
if(dirlist_sort_df) {
if(y->flags & FF_DIR && !(x->flags & FF_DIR))
return 1;
else if(!(y->flags & FF_DIR) && x->flags & FF_DIR)
return -1;
}
/* sort columns:
* 1 -> 2 -> 3 -> 4
* NAME: name -> size -> asize -> items
* SIZE: size -> asize -> name -> items
* ASIZE: asize -> size -> name -> items
* ITEMS: items -> size -> asize -> name
*
* Note that the method used below is supposed to be fast, not readable :-)
*/
#define CMP_NAME strcmp(x->name, y->name)
#define CMP_SIZE (x->size > y->size ? 1 : (x->size == y->size ? 0 : -1))
#define CMP_ASIZE (x->asize > y->asize ? 1 : (x->asize == y->asize ? 0 : -1))
#define CMP_ITEMS (x->items > y->items ? 1 : (x->items == y->items ? 0 : -1))
/* try 1 */
r = dirlist_sort_col == DL_COL_NAME ? CMP_NAME :
dirlist_sort_col == DL_COL_SIZE ? CMP_SIZE :
dirlist_sort_col == DL_COL_ASIZE ? CMP_ASIZE :
dirlist_sort_col == DL_COL_ITEMS ? CMP_ITEMS :
cmp_mtime(x, y);
/* try 2 */
if(!r)
r = dirlist_sort_col == DL_COL_SIZE ? CMP_ASIZE : CMP_SIZE;
/* try 3 */
if(!r)
r = (dirlist_sort_col == DL_COL_NAME || dirlist_sort_col == DL_COL_ITEMS) ?
CMP_ASIZE : CMP_NAME;
/* try 4 */
if(!r)
r = dirlist_sort_col == DL_COL_ITEMS ? CMP_NAME : CMP_ITEMS;
/* reverse when sorting in descending order */
if(dirlist_sort_desc && r != 0)
r = r < 0 ? 1 : -1;
return r;
}
static struct dir *dirlist_sort(struct dir *list) {
struct dir *p, *q, *e, *tail;
int insize, nmerges, psize, qsize, i;
insize = 1;
while(1) {
p = list;
list = NULL;
tail = NULL;
nmerges = 0;
while(p) {
nmerges++;
q = p;
psize = 0;
for(i=0; i<insize; i++) {
psize++;
q = q->next;
if(!q) break;
}
qsize = insize;
while(psize > 0 || (qsize > 0 && q)) {
if(psize == 0) {
e = q; q = q->next; qsize--;
} else if(qsize == 0 || !q) {
e = p; p = p->next; psize--;
} else if(dirlist_cmp(p,q) <= 0) {
e = p; p = p->next; psize--;
} else {
e = q; q = q->next; qsize--;
}
if(tail) tail->next = e;
else list = e;
e->prev = tail;
tail = e;
}
p = q;
}
tail->next = NULL;
if(nmerges <= 1) {
if(list->parent)
list->parent->sub = list;
return list;
}
insize *= 2;
}
}
/* passes through the dir listing once and:
* - makes sure one, and only one, visible item is selected
* - updates the dirlist_(maxs|maxa) values
* - makes sure that the FF_BSEL bits are correct */
static void dirlist_fixup() {
struct dir *t;
/* we're going to determine the selected item from the list itself, so reset this one */
selected = NULL;
for(t=head; t; t=t->next) {
/* not visible? not selected! */
if(ISHIDDEN(t))
t->flags &= ~FF_BSEL;
else {
/* visible and selected? make sure only one item is selected */
if(t->flags & FF_BSEL) {
if(!selected)
selected = t;
else
t->flags &= ~FF_BSEL;
}
}
/* update dirlist_(maxs|maxa) */
if(t->size > dirlist_maxs)
dirlist_maxs = t->size;
if(t->asize > dirlist_maxa)
dirlist_maxa = t->asize;
}
/* no selected items found after one pass? select the first visible item */
if(!selected)
if((selected = dirlist_next(NULL)))
selected->flags |= FF_BSEL;
}
void dirlist_open(struct dir *d) {
dirlist_par = d;
/* set the head of the list */
head_real = head = d == NULL ? NULL : d->sub;
/* reset internal status */
dirlist_maxs = dirlist_maxa = 0;
/* stop if this is not a directory list we can work with */
if(d == NULL) {
dirlist_parent = NULL;
return;
}
/* sort the dir listing */
if(head)
head_real = head = dirlist_sort(head);
/* set the reference to the parent dir */
if(d->parent) {
if(!parent_alloc)
parent_alloc = xcalloc(1, dir_memsize(".."));
dirlist_parent = parent_alloc;
strcpy(dirlist_parent->name, "..");
dirlist_parent->next = head;
dirlist_parent->parent = d;
dirlist_parent->sub = d;
dirlist_parent->flags = FF_DIR;
head = dirlist_parent;
} else
dirlist_parent = NULL;
dirlist_fixup();
}
struct dir *dirlist_next(struct dir *d) {
if(!head)
return NULL;
if(!d) {
if(!ISHIDDEN(head))
return head;
else
d = head;
}
while((d = d->next)) {
if(!ISHIDDEN(d))
return d;
}
return NULL;
}
static struct dir *dirlist_prev(struct dir *d) {
if(!head || !d)
return NULL;
while((d = d->prev)) {
if(!ISHIDDEN(d))
return d;
}
if(dirlist_parent)
return dirlist_parent;
return NULL;
}
struct dir *dirlist_get(int i) {
struct dir *t = selected, *d;
if(!head)
return NULL;
if(ISHIDDEN(selected)) {
selected = dirlist_next(NULL);
return selected;
}
/* i == 0? return the selected item */
if(!i)
return selected;
/* positive number? simply move forward */
while(i > 0) {
d = dirlist_next(t);
if(!d)
return t;
t = d;
if(!--i)
return t;
}
/* otherwise, backward */
while(1) {
d = dirlist_prev(t);
if(!d)
return t;
t = d;
if(!++i)
return t;
}
}
void dirlist_select(struct dir *d) {
if(!d || !head || ISHIDDEN(d) || d->parent != head->parent)
return;
selected->flags &= ~FF_BSEL;
selected = d;
selected->flags |= FF_BSEL;
}
/* We need a hint in order to figure out which item should be on top:
* 0 = only get the current top, don't set anything
* 1 = selected has moved down
* -1 = selected has moved up
* -2 = selected = first item in the list (faster version of '1')
* -3 = top should be considered as invalid (after sorting or opening another dir)
* -4 = an item has been deleted
* -5 = hidden flag has been changed
*
* Actions:
* hint = -1 or -4 -> top = selected_is_visible ? top : selected
* hint = -2 or -3 -> top = selected-(winrows-3)/2
* hint = 1 -> top = selected_is_visible ? top : selected-(winrows-4)
* hint = 0 or -5 -> top = selected_is_visible ? top : selected-(winrows-3)/2
*
* Regardless of the hint, the returned top will always be chosen such that the
* selected item is visible.
*/
struct dir *dirlist_top(int hint) {
struct dir *t;
int i = winrows-3, visible = 0;
if(hint == -2 || hint == -3)
top = NULL;
/* check whether the current selected item is within the visible window */
if(top) {
i = winrows-3;
t = dirlist_get(0);
while(t && i--) {
if(t == top) {
visible++;
break;
}
t = dirlist_prev(t);
}
}
/* otherwise, get a new top */
if(!visible)
top = hint == -1 || hint == -4 ? dirlist_get(0) :
hint == 1 ? dirlist_get(-1*(winrows-4)) :
dirlist_get(-1*(winrows-3)/2);
/* also make sure that if the list is longer than the window and the last
* item is visible, that this last item is also the last on the window */
t = top;
i = winrows-3;
while(t && i--)
t = dirlist_next(t);
t = top;
do {
top = t;
t = dirlist_prev(t);
} while(t && i-- > 0);
return top;
}
void dirlist_set_sort(int col, int desc, int df) {
/* update config */
if(col != DL_NOCHANGE)
dirlist_sort_col = col;
if(desc != DL_NOCHANGE)
dirlist_sort_desc = desc;
if(df != DL_NOCHANGE)
dirlist_sort_df = df;
/* sort the list (excluding the parent, which is always on top) */
if(head_real)
head_real = dirlist_sort(head_real);
if(dirlist_parent)
dirlist_parent->next = head_real;
else
head = head_real;
dirlist_top(-3);
}
void dirlist_set_hidden(int hidden) {
dirlist_hidden = hidden;
dirlist_fixup();
dirlist_top(-5);
}


@ -1,86 +0,0 @@
/* ncdu - NCurses Disk Usage
Copyright (c) 2007-2020 Yoran Heling
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
/* Note: all functions below include a 'reference to parent dir' node at the
* top of the list. */
#ifndef _dirlist_h
#define _dirlist_h
#include "global.h"
#define DL_NOCHANGE -1
#define DL_COL_NAME 0
#define DL_COL_SIZE 1
#define DL_COL_ASIZE 2
#define DL_COL_ITEMS 3
#define DL_COL_MTIME 4
void dirlist_open(struct dir *);
/* Get the next non-hidden item,
* NULL = get first non-hidden item */
struct dir *dirlist_next(struct dir *);
/* Get the struct dir item relative to the selected item, or the item nearest to the requested item
* i = 0 get selected item
* hidden items aren't considered */
struct dir *dirlist_get(int i);
/* Get/set the first visible item in the list on the screen */
struct dir *dirlist_top(int hint);
/* Set selected dir (must be in the currently opened directory, obviously) */
void dirlist_select(struct dir *);
/* Change the sort settings (pass DL_NOCHANGE for any argument to keep its current value) */
void dirlist_set_sort(int column, int desc, int df);
/* Show or hide hidden and excluded files */
void dirlist_set_hidden(int hidden);
/* DO NOT WRITE TO ANY OF THE BELOW VARIABLES FROM OUTSIDE OF dirlist.c! */
/* The 'reference to parent dir' */
extern struct dir *dirlist_parent;
/* The actual parent dir */
extern struct dir *dirlist_par;
/* current sorting configuration (set with dirlist_set_sort()) */
extern int dirlist_sort_desc, dirlist_sort_col, dirlist_sort_df;
/* set with dirlist_set_hidden() */
extern int dirlist_hidden;
/* maximum size of an item in the opened dir */
extern int64_t dirlist_maxs, dirlist_maxa;
#endif


@ -1,139 +0,0 @@
/* ncdu - NCurses Disk Usage
Copyright (c) 2007-2020 Yoran Heling
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
#include "global.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fnmatch.h>
struct exclude {
char *pattern;
struct exclude *next;
} *excludes = NULL;
void exclude_add(char *pat) {
struct exclude **n;
n = &excludes;
while(*n != NULL)
n = &((*n)->next);
*n = (struct exclude *) xcalloc(1, sizeof(struct exclude));
(*n)->pattern = (char *) xmalloc(strlen(pat)+1);
strcpy((*n)->pattern, pat);
}
int exclude_addfile(char *file) {
FILE *f;
char buf[256];
int len;
if((f = fopen(file, "r")) == NULL)
return 1;
while(fgets(buf, 256, f) != NULL) {
len = strlen(buf)-1;
while(len >=0 && (buf[len] == '\r' || buf[len] == '\n'))
buf[len--] = '\0';
if(len < 0)
continue;
exclude_add(buf);
}
fclose(f);
return 0;
}
int exclude_match(char *path) {
struct exclude *n;
char *c;
for(n=excludes; n!=NULL; n=n->next) {
if(!fnmatch(n->pattern, path, 0))
return 1;
for(c = path; *c; c++)
if(*c == '/' && c[1] != '/' && !fnmatch(n->pattern, c+1, 0))
return 1;
}
return 0;
}
void exclude_clear() {
struct exclude *n, *l;
for(n=excludes; n!=NULL; n=l) {
l = n->next;
free(n->pattern);
free(n);
}
excludes = NULL;
}
/*
* Exclusion of directories that contain only cached information.
* See http://www.brynosaurus.com/cachedir/
*/
#define CACHEDIR_TAG_FILENAME "CACHEDIR.TAG"
#define CACHEDIR_TAG_SIGNATURE "Signature: 8a477f597d28d172789f06886806bc55"
int has_cachedir_tag(const char *name) {
static int path_l = 1024;
static char *path = NULL;
int l;
const size_t signature_l = sizeof CACHEDIR_TAG_SIGNATURE - 1;
char buf[signature_l];
FILE *f;
int match = 0;
/* Compute the required length for `path`. */
l = strlen(name) + sizeof CACHEDIR_TAG_FILENAME + 2;
if(l > path_l || path == NULL) {
path_l = path_l * 2;
if(path_l < l)
path_l = l;
/* We don't need to copy the content of `path`, so it's more efficient to
* use `free` + `malloc`. */
free(path);
path = xmalloc(path_l);
}
snprintf(path, path_l, "%s/%s", name, CACHEDIR_TAG_FILENAME);
f = fopen(path, "rb");
if(f != NULL) {
match = ((fread(buf, 1, signature_l, f) == signature_l) &&
!memcmp(buf, CACHEDIR_TAG_SIGNATURE, signature_l));
fclose(f);
}
return match;
}
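
For reference, a directory opts in to being skipped (when cachedir tag checking is enabled) simply by containing a CACHEDIR.TAG file that starts with the signature checked above. A minimal sketch that writes such a tag into the current directory; the comment line after the signature is illustrative, only the first 43 bytes are compared here:

#include <stdio.h>

int main(void) {
    FILE *f = fopen("CACHEDIR.TAG", "w");
    if (!f) { perror("fopen"); return 1; }
    fputs("Signature: 8a477f597d28d172789f06886806bc55\n"
          "# This directory contains cached data; scanners may skip it.\n", f);
    fclose(f);
    return 0;
}
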


@ -1,35 +0,0 @@
/* ncdu - NCurses Disk Usage
Copyright (c) 2007-2020 Yoran Heling
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
#ifndef _exclude_h
#define _exclude_h
void exclude_add(char *);
int exclude_addfile(char *);
int exclude_match(char *);
void exclude_clear();
int has_cachedir_tag(const char *name);
#endif

src/exclude.zig (new file, 322 lines)

@ -0,0 +1,322 @@
// SPDX-FileCopyrightText: Yorhel <projects@yorhel.nl>
// SPDX-License-Identifier: MIT
const std = @import("std");
const main = @import("main.zig");
const c = @import("c.zig").c;
// Reference:
// https://manned.org/glob.7
// https://manned.org/man.b4c7391e/rsync#head17
// https://manned.org/man.401d6ade/arch/gitignore#head4
// Patterns:
// Single component (none of these patterns match a '/'):
// * -> match any character sequence
// ? -> match single character
// [abc] -> match a single character in the given list
// [a-c] -> match a single character in the given range
// [!a-c] -> match a single character not in the given range
// # (these are currently still handled by calling libc fnmatch())
// Anchored patterns:
// /pattern
// /dir/pattern
// /dir/subdir/pattern
// # In both rsync and gitignore, anchored patterns are relative to the
// # directory under consideration. In ncdu they are instead anchored to
// # the filesystem root (i.e. matched against the absolute path).
// Non-anchored patterns:
// somefile
// subdir/foo
// sub*/bar
// # In .gitignore, non-anchored patterns with a slash are implicitly anchored,
// # in rsync they can match anywhere in a path. We follow rsync here.
// Dir patterns (trailing '/' matches only dirs):
// /pattern/
// somedir/
// subdir/pattern/
//
// BREAKING CHANGE:
// ncdu < 2.2 single-component matches may cross a directory boundary, e.g.
// 'a*b' matches 'a/b'. This is an old bug; the fix breaks compatibility with
// old exclude patterns.
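
The breaking change comes down to how fnmatch() treats '/': without FNM_PATHNAME, '*' and '?' happily match across a path separator, which is why old ncdu let 'a*b' match 'a/b'. Matching one path component at a time, as this module does, avoids that. A short C demonstration of the libc behaviour the wildcard branch still relies on (fnmatch() returns 0 on a match):

#include <fnmatch.h>
#include <stdio.h>

int main(void) {
    printf("a*b vs a/b, no flags:     %s\n",
           fnmatch("a*b", "a/b", 0) == 0 ? "match" : "no match");            /* match */
    printf("a*b vs a/b, FNM_PATHNAME: %s\n",
           fnmatch("a*b", "a/b", FNM_PATHNAME) == 0 ? "match" : "no match"); /* no match */
    printf("a*b vs component aXb:     %s\n",
           fnmatch("a*b", "aXb", 0) == 0 ? "match" : "no match");            /* match */
    return 0;
}
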
const Pattern = struct {
isdir: bool = undefined,
isliteral: bool = undefined,
pattern: [:0]const u8,
sub: ?*const Pattern = undefined,
fn isLiteral(str: []const u8) bool {
for (str) |chr| switch (chr) {
'[', '*', '?', '\\' => return false,
else => {},
};
return true;
}
fn parse(pat_: []const u8) *const Pattern {
var pat = std.mem.trimLeft(u8, pat_, "/");
const top = main.allocator.create(Pattern) catch unreachable;
var tail = top;
tail.sub = null;
while (std.mem.indexOfScalar(u8, pat, '/')) |idx| {
tail.pattern = main.allocator.dupeZ(u8, pat[0..idx]) catch unreachable;
tail.isdir = true;
tail.isliteral = isLiteral(tail.pattern);
pat = pat[idx+1..];
if (std.mem.allEqual(u8, pat, '/')) return top;
const next = main.allocator.create(Pattern) catch unreachable;
tail.sub = next;
tail = next;
tail.sub = null;
}
tail.pattern = main.allocator.dupeZ(u8, pat) catch unreachable;
tail.isdir = false;
tail.isliteral = isLiteral(tail.pattern);
return top;
}
};
test "parse" {
const t1 = Pattern.parse("");
try std.testing.expectEqualStrings(t1.pattern, "");
try std.testing.expectEqual(t1.isdir, false);
try std.testing.expectEqual(t1.isliteral, true);
try std.testing.expectEqual(t1.sub, null);
const t2 = Pattern.parse("//a//");
try std.testing.expectEqualStrings(t2.pattern, "a");
try std.testing.expectEqual(t2.isdir, true);
try std.testing.expectEqual(t2.isliteral, true);
try std.testing.expectEqual(t2.sub, null);
const t3 = Pattern.parse("foo*/bar.zig");
try std.testing.expectEqualStrings(t3.pattern, "foo*");
try std.testing.expectEqual(t3.isdir, true);
try std.testing.expectEqual(t3.isliteral, false);
try std.testing.expectEqualStrings(t3.sub.?.pattern, "bar.zig");
try std.testing.expectEqual(t3.sub.?.isdir, false);
try std.testing.expectEqual(t3.sub.?.isliteral, true);
try std.testing.expectEqual(t3.sub.?.sub, null);
const t4 = Pattern.parse("/?/sub/dir/");
try std.testing.expectEqualStrings(t4.pattern, "?");
try std.testing.expectEqual(t4.isdir, true);
try std.testing.expectEqual(t4.isliteral, false);
try std.testing.expectEqualStrings(t4.sub.?.pattern, "sub");
try std.testing.expectEqual(t4.sub.?.isdir, true);
try std.testing.expectEqual(t4.sub.?.isliteral, true);
try std.testing.expectEqualStrings(t4.sub.?.sub.?.pattern, "dir");
try std.testing.expectEqual(t4.sub.?.sub.?.isdir, true);
try std.testing.expectEqual(t4.sub.?.sub.?.isliteral, true);
try std.testing.expectEqual(t4.sub.?.sub.?.sub, null);
}
// List of patterns to be matched at one particular level.
// There are two different types of lists: those where every pattern has a
// sub-pointer (the pattern only matches directories at this level, and the
// match result is only used to construct the PatternList of the
// subdirectory) and those where no pattern has a sub-pointer (the match
// result determines whether the file/dir at this level should be included).
fn PatternList(comptime withsub: bool) type {
return struct {
literals: std.HashMapUnmanaged(*const Pattern, Val, Ctx, 80) = .{},
wild: std.ArrayListUnmanaged(*const Pattern) = .empty,
// Not a fan of the map-of-arrays approach in the 'withsub' case, it
// has a lot of extra allocations. Linking the Patterns together in a
// list would be nicer, but that involves mutable Patterns, which in
// turn prevents multithreaded scanning. An alternative would be a
// sorted array + binary search, but that slows down lookups. Perhaps a
// custom hashmap with support for duplicate keys?
const Val = if (withsub) std.ArrayListUnmanaged(*const Pattern) else void;
const Ctx = struct {
pub fn hash(_: Ctx, p: *const Pattern) u64 {
return std.hash.Wyhash.hash(0, p.pattern);
}
pub fn eql(_: Ctx, a: *const Pattern, b: *const Pattern) bool {
return std.mem.eql(u8, a.pattern, b.pattern);
}
};
const Self = @This();
fn append(self: *Self, pat: *const Pattern) void {
std.debug.assert((pat.sub != null) == withsub);
if (pat.isliteral) {
const e = self.literals.getOrPut(main.allocator, pat) catch unreachable;
if (!e.found_existing) {
e.key_ptr.* = pat;
e.value_ptr.* = if (withsub) .{} else {};
}
if (!withsub and !pat.isdir and e.key_ptr.*.isdir) e.key_ptr.* = pat;
if (withsub) {
if (pat.sub) |s| e.value_ptr.*.append(main.allocator, s) catch unreachable;
}
} else self.wild.append(main.allocator, pat) catch unreachable;
}
fn match(self: *const Self, name: [:0]const u8) ?bool {
var ret: ?bool = null;
if (self.literals.getKey(&.{ .pattern = name })) |p| ret = p.isdir;
for (self.wild.items) |p| {
if (ret == false) return ret;
if (c.fnmatch(p.pattern.ptr, name.ptr, 0) == 0) ret = p.isdir;
}
return ret;
}
fn enter(self: *const Self, out: *Patterns, name: [:0]const u8) void {
if (self.literals.get(&.{ .pattern = name })) |lst| for (lst.items) |sub| out.append(sub);
for (self.wild.items) |p| if (c.fnmatch(p.pattern.ptr, name.ptr, 0) == 0) out.append(p.sub.?);
}
fn deinit(self: *Self) void {
if (withsub) {
var it = self.literals.valueIterator();
while (it.next()) |e| e.deinit(main.allocator);
}
self.literals.deinit(main.allocator);
self.wild.deinit(main.allocator);
self.* = undefined;
}
};
}
// List of all patterns that should be matched at one level.
pub const Patterns = struct {
nonsub: PatternList(false) = .{},
sub: PatternList(true) = .{},
isroot: bool = false,
fn append(self: *Patterns, pat: *const Pattern) void {
if (pat.sub == null) self.nonsub.append(pat)
else self.sub.append(pat);
}
// Matches patterns in this level plus unanchored patterns.
// Returns null if nothing matches, otherwise whether the given item should
// only be excluded if it's a directory.
// (Should not be called on root_unanchored)
pub fn match(self: *const Patterns, name: [:0]const u8) ?bool {
const a = self.nonsub.match(name);
if (a == false) return false;
const b = root_unanchored.nonsub.match(name);
if (b == false) return false;
return a orelse b;
}
// Construct the list of patterns for a subdirectory.
pub fn enter(self: *const Patterns, name: [:0]const u8) Patterns {
var ret = Patterns{};
self.sub.enter(&ret, name);
root_unanchored.sub.enter(&ret, name);
return ret;
}
pub fn deinit(self: *Patterns) void {
// A getPatterns() result should be deinit()ed, except when it returns the
// root; keep that simple by never deiniting root.
if (self.isroot) return;
self.nonsub.deinit();
self.sub.deinit();
self.* = undefined;
}
};
// Unanchored patterns that should be checked at every level
var root_unanchored: Patterns = .{};
// Patterns anchored at the root
var root: Patterns = .{ .isroot = true };
pub fn addPattern(pattern: []const u8) void {
if (pattern.len == 0) return;
const p = Pattern.parse(pattern);
if (pattern[0] == '/') root.append(p)
else root_unanchored.append(p);
}
// Get the patterns for the given (absolute) path, assuming the given path
// itself hasn't been excluded. This function is slow, directory walking code
// should use Patterns.enter() instead.
pub fn getPatterns(path_: []const u8) Patterns {
var path = std.mem.trim(u8, path_, "/");
if (path.len == 0) return root;
var pat = root;
defer pat.deinit();
while (std.mem.indexOfScalar(u8, path, '/')) |idx| {
const name = main.allocator.dupeZ(u8, path[0..idx]) catch unreachable;
defer main.allocator.free(name);
path = path[idx+1..];
const sub = pat.enter(name);
pat.deinit();
pat = sub;
}
const name = main.allocator.dupeZ(u8, path) catch unreachable;
defer main.allocator.free(name);
return pat.enter(name);
}
fn testfoo(p: *const Patterns) !void {
try std.testing.expectEqual(p.match("root"), null);
try std.testing.expectEqual(p.match("bar"), false);
try std.testing.expectEqual(p.match("qoo"), false);
try std.testing.expectEqual(p.match("xyz"), false);
try std.testing.expectEqual(p.match("okay"), null);
try std.testing.expectEqual(p.match("somefile"), false);
var s = p.enter("okay");
try std.testing.expectEqual(s.match("bar"), null);
try std.testing.expectEqual(s.match("xyz"), null);
try std.testing.expectEqual(s.match("notokay"), false);
s.deinit();
}
test "Matching" {
addPattern("/foo/bar");
addPattern("/foo/qoo/");
addPattern("/foo/qoo");
addPattern("/foo/qoo/");
addPattern("/f??/xyz");
addPattern("/f??/xyz/");
addPattern("/*o/somefile");
addPattern("/a??/okay");
addPattern("/roo?");
addPattern("/root/");
addPattern("excluded");
addPattern("somefile/");
addPattern("o*y/not[o]kay");
var a0 = getPatterns("/");
try std.testing.expectEqual(a0.match("a"), null);
try std.testing.expectEqual(a0.match("excluded"), false);
try std.testing.expectEqual(a0.match("somefile"), true);
try std.testing.expectEqual(a0.match("root"), false);
var a1 = a0.enter("foo");
a0.deinit();
try testfoo(&a1);
a1.deinit();
var b0 = getPatterns("/somedir/somewhere");
try std.testing.expectEqual(b0.match("a"), null);
try std.testing.expectEqual(b0.match("excluded"), false);
try std.testing.expectEqual(b0.match("root"), null);
try std.testing.expectEqual(b0.match("okay"), null);
var b1 = b0.enter("okay");
b0.deinit();
try std.testing.expectEqual(b1.match("excluded"), false);
try std.testing.expectEqual(b1.match("okay"), null);
try std.testing.expectEqual(b1.match("notokay"), false);
b1.deinit();
var c0 = getPatterns("/foo/");
try testfoo(&c0);
c0.deinit();
}


@ -1,128 +0,0 @@
/* ncdu - NCurses Disk Usage
Copyright (c) 2007-2020 Yoran Heling
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
#ifndef _global_h
#define _global_h
#include "config.h"
#include <stdio.h>
#include <stddef.h>
#include <limits.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#ifdef HAVE_INTTYPES_H
# include <inttypes.h>
#endif
#ifdef HAVE_STDINT_H
# include <stdint.h>
#endif
/* File Flags (struct dir -> flags) */
#define FF_DIR 0x01
#define FF_FILE 0x02
#define FF_ERR 0x04 /* error while reading this item */
#define FF_OTHFS 0x08 /* excluded because it was another filesystem */
#define FF_EXL 0x10 /* excluded using exclude patterns */
#define FF_SERR 0x20 /* error in subdirectory */
#define FF_HLNKC 0x40 /* hard link candidate (file with st_nlink > 1) */
#define FF_BSEL 0x80 /* selected */
#define FF_EXT 0x100 /* extended struct available */
/* Program states */
#define ST_CALC 0
#define ST_BROWSE 1
#define ST_DEL 2
#define ST_HELP 3
#define ST_SHELL 4
#define ST_QUIT 5
/* structure representing a file or directory */
struct dir {
int64_t size, asize;
uint64_t ino, dev;
struct dir *parent, *next, *prev, *sub, *hlnk;
int items;
unsigned short flags;
char name[];
};
/* A note on the ino and dev fields above: ino is usually represented as ino_t,
* which POSIX specifies to be an unsigned integer. dev is usually represented
* as dev_t, which may be either a signed or unsigned integer, and in practice
* both are used. dev represents an index / identifier of a device or
* filesystem, and I'm unsure whether a negative value has any meaning in that
* context. Hence my choice of using an unsigned integer. Negative values, if
* we encounter them, will just get typecasted into a positive value. No
* information is lost in this conversion, and the semantics remain the same.
*/
/* Extended information for a struct dir. This struct is stored in the same
* memory region as struct dir, placed after the name field. See util.h for
* macros to help manage this. */
struct dir_ext {
uint64_t mtime;
int uid, gid;
unsigned short mode;
};
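
struct dir ends in a flexible array member and struct dir_ext is stored directly behind the name in the same allocation (dir_memsize(), dir_ext_memsize() and dir_ext_ptr() in util.h compute the offsets). A minimal, self-contained sketch of that layout trick with a simplified, hypothetical node type:

#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct node {
    long long size;
    char name[];   /* flexible array member, must be last */
};

int main(void) {
    const char *name = "example.txt";
    /* One allocation holds the fixed fields plus the name. */
    struct node *n = malloc(offsetof(struct node, name) + strlen(name) + 1);
    if (!n) return 1;
    n->size = 42;
    strcpy(n->name, name);
    printf("%s: %lld\n", n->name, n->size);
    free(n);
    return 0;
}
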
/* program state */
extern int pstate;
/* read-only flag, 1+ = disable deletion, 2+ = also disable shell */
extern int read_only;
/* minimum screen update interval when calculating, in ms */
extern long update_delay;
/* filter directories with CACHEDIR.TAG */
extern int cachedir_tags;
/* flag if we should ask for confirmation when quitting */
extern int confirm_quit;
/* flag whether we want to enable use of struct dir_ext */
extern int extended_info;
/* flag whether we want to follow symlinks */
extern int follow_symlinks;
/* handle input from keyboard and update display */
int input_handle(int);
/* de-initialize ncurses */
void close_nc();
/* import all other global functions and variables */
#include "browser.h"
#include "delete.h"
#include "dir.h"
#include "dirlist.h"
#include "exclude.h"
#include "help.h"
#include "path.h"
#include "util.h"
#include "shell.h"
#include "quit.h"
#endif


@ -1,206 +0,0 @@
/* ncdu - NCurses Disk Usage
Copyright (c) 2007-2020 Yoran Heling
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
#include "global.h"
#include <ncurses.h>
#include <string.h>
int page, start;
#define KEYS 19
char *keys[KEYS*2] = {
/*|----key----| |----------------description----------------|*/
"up, k", "Move cursor up",
"down, j", "Move cursor down",
"right/enter", "Open selected directory",
"left, <, h", "Open parent directory",
"n", "Sort by name (ascending/descending)",
"s", "Sort by size (ascending/descending)",
"C", "Sort by items (ascending/descending)",
"M", "Sort by mtime (-e flag)",
"d", "Delete selected file or directory",
"t", "Toggle dirs before files when sorting",
"g", "Show percentage and/or graph",
"a", "Toggle between apparent size and disk usage",
"c", "Toggle display of child item counts",
"m", "Toggle display of latest mtime (-e flag)",
"e", "Show/hide hidden or excluded files",
"i", "Show information about selected item",
"r", "Recalculate the current directory",
"b", "Spawn shell in current directory",
"q", "Quit ncdu"
};
void help_draw() {
int i, line;
browse_draw();
nccreate(15, 60, "ncdu help");
ncaddstr(13, 42, "Press ");
uic_set(UIC_KEY);
addch('q');
uic_set(UIC_DEFAULT);
addstr(" to close");
nctab(30, page == 1, 1, "Keys");
nctab(39, page == 2, 2, "Format");
nctab(50, page == 3, 3, "About");
switch(page) {
case 1:
line = 1;
for(i=start*2; i<start*2+20; i+=2) {
uic_set(UIC_KEY);
ncaddstr(++line, 13-strlen(keys[i]), keys[i]);
uic_set(UIC_DEFAULT);
ncaddstr(line, 15, keys[i+1]);
}
if(start != KEYS-10)
ncaddstr(12, 25, "-- more --");
break;
case 2:
attron(A_BOLD);
ncaddstr(2, 3, "X [size] [graph] [file or directory]");
attroff(A_BOLD);
ncaddstr(3, 4, "The X is only present in the following cases:");
uic_set(UIC_FLAG);
ncaddch( 5, 4, '!');
ncaddch( 6, 4, '.');
ncaddch( 7, 4, '<');
ncaddch( 8, 4, '>');
ncaddch( 9, 4, '@');
ncaddch(10, 4, 'H');
ncaddch(11, 4, 'e');
uic_set(UIC_DEFAULT);
ncaddstr( 5, 7, "An error occurred while reading this directory");
ncaddstr( 6, 7, "An error occurred while reading a subdirectory");
ncaddstr( 7, 7, "File or directory is excluded from the statistics");
ncaddstr( 8, 7, "Directory was on another filesystem");
ncaddstr( 9, 7, "This is not a file nor a dir (symlink, socket, ...)");
ncaddstr(10, 7, "Same file was already counted (hard link)");
ncaddstr(11, 7, "Empty directory");
break;
case 3:
/* Indeed, too much spare time */
attron(A_REVERSE);
#define x 12
#define y 3
/* N */
ncaddstr(y+0, x+0, " ");
ncaddstr(y+1, x+0, " ");
ncaddstr(y+2, x+0, " ");
ncaddstr(y+3, x+0, " ");
ncaddstr(y+4, x+0, " ");
ncaddstr(y+1, x+4, " ");
ncaddstr(y+2, x+4, " ");
ncaddstr(y+3, x+4, " ");
ncaddstr(y+4, x+4, " ");
/* C */
ncaddstr(y+0, x+8, " ");
ncaddstr(y+1, x+8, " ");
ncaddstr(y+2, x+8, " ");
ncaddstr(y+3, x+8, " ");
ncaddstr(y+4, x+8, " ");
/* D */
ncaddstr(y+0, x+19, " ");
ncaddstr(y+1, x+19, " ");
ncaddstr(y+2, x+15, " ");
ncaddstr(y+3, x+15, " ");
ncaddstr(y+3, x+19, " ");
ncaddstr(y+4, x+15, " ");
/* U */
ncaddstr(y+0, x+23, " ");
ncaddstr(y+1, x+23, " ");
ncaddstr(y+2, x+23, " ");
ncaddstr(y+3, x+23, " ");
ncaddstr(y+0, x+27, " ");
ncaddstr(y+1, x+27, " ");
ncaddstr(y+2, x+27, " ");
ncaddstr(y+3, x+27, " ");
ncaddstr(y+4, x+23, " ");
attroff(A_REVERSE);
ncaddstr(y+0, x+30, "NCurses");
ncaddstr(y+1, x+30, "Disk");
ncaddstr(y+2, x+30, "Usage");
ncprint( y+4, x+30, "%s", PACKAGE_VERSION);
ncaddstr( 9, 7, "Written by Yoran Heling <projects@yorhel.nl>");
ncaddstr(10, 16, "https://dev.yorhel.nl/ncdu/");
break;
}
}
int help_key(int ch) {
switch(ch) {
case '1':
case '2':
case '3':
page = ch-'0';
start = 0;
break;
case KEY_RIGHT:
case KEY_NPAGE:
case 'l':
if(++page > 3)
page = 3;
start = 0;
break;
case KEY_LEFT:
case KEY_PPAGE:
case 'h':
if(--page < 1)
page = 1;
start = 0;
break;
case KEY_DOWN:
case ' ':
case 'j':
if(start < KEYS-10)
start++;
break;
case KEY_UP:
case 'k':
if(start > 0)
start--;
break;
default:
pstate = ST_BROWSE;
}
return 0;
}
void help_init() {
page = 1;
start = 0;
pstate = ST_HELP;
}


@ -1,37 +0,0 @@
/* ncdu - NCurses Disk Usage
Copyright (c) 2007-2020 Yoran Heling
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
#ifndef _help_h
#define _help_h
#include "global.h"
int help_key(int);
void help_draw(void);
void help_init();
#endif

src/json_export.zig (new file, 270 lines)

@ -0,0 +1,270 @@
// SPDX-FileCopyrightText: Yorhel <projects@yorhel.nl>
// SPDX-License-Identifier: MIT
const std = @import("std");
const main = @import("main.zig");
const model = @import("model.zig");
const sink = @import("sink.zig");
const util = @import("util.zig");
const ui = @import("ui.zig");
const c = @import("c.zig").c;
// JSON output is necessarily single-threaded and items MUST be added depth-first.
pub const global = struct {
var writer: *Writer = undefined;
};
const ZstdWriter = struct {
ctx: ?*c.ZSTD_CStream,
out: c.ZSTD_outBuffer,
outbuf: [c.ZSTD_BLOCKSIZE_MAX + 64]u8,
fn create() *ZstdWriter {
const w = main.allocator.create(ZstdWriter) catch unreachable;
w.out = .{
.dst = &w.outbuf,
.size = w.outbuf.len,
.pos = 0,
};
while (true) {
w.ctx = c.ZSTD_createCStream();
if (w.ctx != null) break;
ui.oom();
}
_ = c.ZSTD_CCtx_setParameter(w.ctx, c.ZSTD_c_compressionLevel, main.config.complevel);
return w;
}
fn destroy(w: *ZstdWriter) void {
_ = c.ZSTD_freeCStream(w.ctx);
main.allocator.destroy(w);
}
fn write(w: *ZstdWriter, f: std.fs.File, in: []const u8, flush: bool) !void {
var arg = c.ZSTD_inBuffer{
.src = in.ptr,
.size = in.len,
.pos = 0,
};
while (true) {
const v = c.ZSTD_compressStream2(w.ctx, &w.out, &arg, if (flush) c.ZSTD_e_end else c.ZSTD_e_continue);
if (c.ZSTD_isError(v) != 0) return error.ZstdCompressError;
if (flush or w.out.pos > w.outbuf.len / 2) {
try f.writeAll(w.outbuf[0..w.out.pos]);
w.out.pos = 0;
}
if (!flush and arg.pos == arg.size) break;
if (flush and v == 0) break;
}
}
};
pub const Writer = struct {
fd: std.fs.File,
zstd: ?*ZstdWriter = null,
// Must be large enough to hold PATH_MAX*6 plus some overhead.
// (The 6 is because, in the worst case, every byte expands to a "\u####"
// escape, and we do pessimistic estimates here in order to avoid checking
// buffer lengths for each and every write operation)
buf: [64*1024]u8 = undefined,
off: usize = 0,
dir_entry_open: bool = false,
fn flush(ctx: *Writer, bytes: usize) void {
@branchHint(.unlikely);
// This can only really happen when the root path exceeds PATH_MAX,
// in which case we would probably have error'ed out earlier anyway.
if (bytes > ctx.buf.len) ui.die("Error writing JSON export: path too long.\n", .{});
const buf = ctx.buf[0..ctx.off];
(if (ctx.zstd) |z| z.write(ctx.fd, buf, bytes == 0) else ctx.fd.writeAll(buf)) catch |e|
ui.die("Error writing to file: {s}.\n", .{ ui.errorString(e) });
ctx.off = 0;
}
fn ensureSpace(ctx: *Writer, bytes: usize) void {
if (bytes > ctx.buf.len - ctx.off) ctx.flush(bytes);
}
fn write(ctx: *Writer, s: []const u8) void {
@memcpy(ctx.buf[ctx.off..][0..s.len], s);
ctx.off += s.len;
}
fn writeByte(ctx: *Writer, b: u8) void {
ctx.buf[ctx.off] = b;
ctx.off += 1;
}
// Write escaped string contents, excluding the quotes.
fn writeStr(ctx: *Writer, s: []const u8) void {
for (s) |b| {
if (b >= 0x20 and b != '"' and b != '\\' and b != 127) ctx.writeByte(b)
else switch (b) {
'\n' => ctx.write("\\n"),
'\r' => ctx.write("\\r"),
0x8 => ctx.write("\\b"),
'\t' => ctx.write("\\t"),
0xC => ctx.write("\\f"),
'\\' => ctx.write("\\\\"),
'"' => ctx.write("\\\""),
else => {
ctx.write("\\u00");
const hexdig = "0123456789abcdef";
ctx.writeByte(hexdig[b>>4]);
ctx.writeByte(hexdig[b&0xf]);
},
}
}
}
fn writeUint(ctx: *Writer, n: u64) void {
// Based on std.fmt.formatInt
var a = n;
var buf: [24]u8 = undefined;
var index: usize = buf.len;
while (a >= 100) : (a = @divTrunc(a, 100)) {
index -= 2;
buf[index..][0..2].* = std.fmt.digits2(@as(u8, @intCast(a % 100)));
}
if (a < 10) {
index -= 1;
buf[index] = '0' + @as(u8, @intCast(a));
} else {
index -= 2;
buf[index..][0..2].* = std.fmt.digits2(@as(u8, @intCast(a)));
}
ctx.write(buf[index..]);
}
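
writeUint() avoids a divide per digit by emitting two decimal digits per division, the same table-lookup trick as std.fmt. A standalone C sketch of the idea; the names are made up for the example:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

static const char digit_pairs[201] =
    "0001020304050607080910111213141516171819"
    "2021222324252627282930313233343536373839"
    "4041424344454647484950515253545556575859"
    "6061626364656667686970717273747576777879"
    "8081828384858687888990919293949596979899";

static void print_u64(uint64_t v) {
    char buf[20];              /* the largest u64 needs 20 digits */
    size_t i = sizeof buf;
    while (v >= 100) {
        i -= 2;
        memcpy(buf + i, digit_pairs + (v % 100) * 2, 2);
        v /= 100;
    }
    if (v < 10) buf[--i] = (char)('0' + v);
    else { i -= 2; memcpy(buf + i, digit_pairs + v * 2, 2); }
    fwrite(buf + i, 1, sizeof buf - i, stdout);
}

int main(void) {
    print_u64(18446744073709551615ULL);
    putchar('\n');
    return 0;
}
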
fn init(out: std.fs.File) *Writer {
var ctx = main.allocator.create(Writer) catch unreachable;
ctx.* = .{ .fd = out };
if (main.config.compress) ctx.zstd = ZstdWriter.create();
ctx.write("[1,2,{\"progname\":\"ncdu\",\"progver\":\"" ++ main.program_version ++ "\",\"timestamp\":");
ctx.writeUint(@intCast(@max(0, std.time.timestamp())));
ctx.writeByte('}');
return ctx;
}
// A newly written directory entry is left "open", i.e. the '}' to close
// the item object is not written, to allow for a setReadError() to be
// caught if one happens before the first sub entry.
// Any read errors after the first sub entry are thrown away, but that's
// just a limitation of the JSON format.
fn closeDirEntry(ctx: *Writer, rderr: bool) void {
if (ctx.dir_entry_open) {
ctx.dir_entry_open = false;
if (rderr) ctx.write(",\"read_error\":true");
ctx.writeByte('}');
}
}
fn writeSpecial(ctx: *Writer, name: []const u8, t: model.EType) void {
ctx.closeDirEntry(false);
ctx.ensureSpace(name.len*6 + 1000);
ctx.write(if (t.isDirectory()) ",\n[{\"name\":\"" else ",\n{\"name\":\"");
ctx.writeStr(name);
ctx.write(switch (t) {
.err => "\",\"read_error\":true}",
.otherfs => "\",\"excluded\":\"otherfs\"}",
.kernfs => "\",\"excluded\":\"kernfs\"}",
.pattern => "\",\"excluded\":\"pattern\"}",
else => unreachable,
});
if (t.isDirectory()) ctx.writeByte(']');
}
fn writeStat(ctx: *Writer, name: []const u8, stat: *const sink.Stat, parent_dev: u64) void {
ctx.ensureSpace(name.len*6 + 1000);
ctx.write(if (stat.etype == .dir) ",\n[{\"name\":\"" else ",\n{\"name\":\"");
ctx.writeStr(name);
ctx.writeByte('"');
if (stat.size > 0) {
ctx.write(",\"asize\":");
ctx.writeUint(stat.size);
}
if (stat.blocks > 0) {
ctx.write(",\"dsize\":");
ctx.writeUint(util.blocksToSize(stat.blocks));
}
if (stat.etype == .dir and stat.dev != parent_dev) {
ctx.write(",\"dev\":");
ctx.writeUint(stat.dev);
}
if (stat.etype == .link) {
ctx.write(",\"ino\":");
ctx.writeUint(stat.ino);
ctx.write(",\"hlnkc\":true,\"nlink\":");
ctx.writeUint(stat.nlink);
}
if (stat.etype == .nonreg) ctx.write(",\"notreg\":true");
if (main.config.extended) {
if (stat.ext.pack.hasuid) {
ctx.write(",\"uid\":");
ctx.writeUint(stat.ext.uid);
}
if (stat.ext.pack.hasgid) {
ctx.write(",\"gid\":");
ctx.writeUint(stat.ext.gid);
}
if (stat.ext.pack.hasmode) {
ctx.write(",\"mode\":");
ctx.writeUint(stat.ext.mode);
}
if (stat.ext.pack.hasmtime) {
ctx.write(",\"mtime\":");
ctx.writeUint(stat.ext.mtime);
}
}
}
};
pub const Dir = struct {
dev: u64,
pub fn addSpecial(_: *Dir, name: []const u8, sp: model.EType) void {
global.writer.writeSpecial(name, sp);
}
pub fn addStat(_: *Dir, name: []const u8, stat: *const sink.Stat) void {
global.writer.closeDirEntry(false);
global.writer.writeStat(name, stat, undefined);
global.writer.writeByte('}');
}
pub fn addDir(d: *Dir, name: []const u8, stat: *const sink.Stat) Dir {
global.writer.closeDirEntry(false);
global.writer.writeStat(name, stat, d.dev);
global.writer.dir_entry_open = true;
return .{ .dev = stat.dev };
}
pub fn setReadError(_: *Dir) void {
global.writer.closeDirEntry(true);
}
pub fn final(_: *Dir) void {
global.writer.ensureSpace(1000);
global.writer.closeDirEntry(false);
global.writer.writeByte(']');
}
};
pub fn createRoot(path: []const u8, stat: *const sink.Stat) Dir {
var root = Dir{.dev=0};
return root.addDir(path, stat);
}
pub fn done() void {
global.writer.write("]\n");
global.writer.flush(0);
if (global.writer.zstd) |z| z.destroy();
global.writer.fd.close();
main.allocator.destroy(global.writer);
}
pub fn setupOutput(out: std.fs.File) void {
global.writer = Writer.init(out);
}

562
src/json_import.zig Normal file

@@ -0,0 +1,562 @@
// SPDX-FileCopyrightText: Yorhel <projects@yorhel.nl>
// SPDX-License-Identifier: MIT
const std = @import("std");
const main = @import("main.zig");
const util = @import("util.zig");
const model = @import("model.zig");
const sink = @import("sink.zig");
const ui = @import("ui.zig");
const c = @import("c.zig").c;
const ZstdReader = struct {
ctx: ?*c.ZSTD_DStream,
in: c.ZSTD_inBuffer,
lastret: usize = 0,
inbuf: [c.ZSTD_BLOCKSIZE_MAX + 16]u8, // This ZSTD_DStreamInSize() + a little bit extra
fn create(head: []const u8) *ZstdReader {
const r = main.allocator.create(ZstdReader) catch unreachable;
@memcpy(r.inbuf[0..head.len], head);
r.in = .{
.src = &r.inbuf,
.size = head.len,
.pos = 0,
};
while (true) {
r.ctx = c.ZSTD_createDStream();
if (r.ctx != null) break;
ui.oom();
}
return r;
}
fn destroy(r: *ZstdReader) void {
_ = c.ZSTD_freeDStream(r.ctx);
main.allocator.destroy(r);
}
fn read(r: *ZstdReader, f: std.fs.File, out: []u8) !usize {
while (true) {
if (r.in.size == r.in.pos) {
r.in.pos = 0;
r.in.size = try f.read(&r.inbuf);
if (r.in.size == 0) {
if (r.lastret == 0) return 0;
return error.ZstdDecompressError; // Early EOF
}
}
var arg = c.ZSTD_outBuffer{ .dst = out.ptr, .size = out.len, .pos = 0 };
r.lastret = c.ZSTD_decompressStream(r.ctx, &arg, &r.in);
if (c.ZSTD_isError(r.lastret) != 0) return error.ZstdDecompressError;
if (arg.pos > 0) return arg.pos;
}
}
};
// Using a custom JSON parser here because, while std.json is great, it does
// perform strict UTF-8 validation. Which is correct, of course, but ncdu dumps
// are not always correct JSON as they may contain non-UTF-8 paths encoded as
// strings.
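// For example (sketch): a file named "caf\xe9" (Latin-1, not valid UTF-8) is
// written by json_export.zig's writeStr() as the raw byte 0xe9 inside a JSON
// string; std.json would reject that, while this parser passes it through as-is.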
const Parser = struct {
rd: std.fs.File,
zstd: ?*ZstdReader = null,
rdoff: usize = 0,
rdsize: usize = 0,
byte: u64 = 1,
line: u64 = 1,
buf: [129*1024]u8 = undefined,
fn die(p: *Parser, str: []const u8) noreturn {
ui.die("Error importing file on line {}:{}: {s}.\n", .{ p.line, p.byte, str });
}
// Feed back a byte that has just been returned by nextByte()
fn undoNextByte(p: *Parser, b: u8) void {
p.byte -= 1;
p.rdoff -= 1;
p.buf[p.rdoff] = b;
}
fn fill(p: *Parser) void {
p.rdoff = 0;
p.rdsize = (if (p.zstd) |z| z.read(p.rd, &p.buf) else p.rd.read(&p.buf)) catch |e| switch (e) {
error.IsDir => p.die("not a file"), // should be detected at open() time, but no flag for that...
error.SystemResources => p.die("out of memory"),
error.ZstdDecompressError => p.die("decompression error"),
else => p.die("I/O error"),
};
}
// Returns 0 on EOF.
// (or if the file contains a 0 byte, but that's invalid anyway)
// (Returning a '?u8' here is nicer but kills performance by about +30%)
fn nextByte(p: *Parser) u8 {
if (p.rdoff == p.rdsize) {
@branchHint(.unlikely);
p.fill();
if (p.rdsize == 0) return 0;
}
p.byte += 1;
defer p.rdoff += 1;
return (&p.buf)[p.rdoff];
}
// next non-whitespace byte
fn nextChr(p: *Parser) u8 {
while (true) switch (p.nextByte()) {
'\n' => {
p.line += 1;
p.byte = 1;
},
' ', '\t', '\r' => {},
else => |b| return b,
};
}
fn expectLit(p: *Parser, lit: []const u8) void {
for (lit) |b| if (b != p.nextByte()) p.die("invalid JSON");
}
fn hexdig(p: *Parser) u16 {
const b = p.nextByte();
return switch (b) {
'0'...'9' => b - '0',
'a'...'f' => b - 'a' + 10,
'A'...'F' => b - 'A' + 10,
else => p.die("invalid hex digit"),
};
}
fn stringContentSlow(p: *Parser, buf: []u8, head: u8, off: usize) []u8 {
@branchHint(.unlikely);
var b = head;
var n = off;
while (true) {
switch (b) {
'"' => break,
'\\' => switch (p.nextByte()) {
'"' => if (n < buf.len) { buf[n] = '"'; n += 1; },
'\\'=> if (n < buf.len) { buf[n] = '\\';n += 1; },
'/' => if (n < buf.len) { buf[n] = '/'; n += 1; },
'b' => if (n < buf.len) { buf[n] = 0x8; n += 1; },
'f' => if (n < buf.len) { buf[n] = 0xc; n += 1; },
'n' => if (n < buf.len) { buf[n] = 0xa; n += 1; },
'r' => if (n < buf.len) { buf[n] = 0xd; n += 1; },
't' => if (n < buf.len) { buf[n] = 0x9; n += 1; },
'u' => {
const first = (p.hexdig()<<12) + (p.hexdig()<<8) + (p.hexdig()<<4) + p.hexdig();
var unit = @as(u21, first);
if (std.unicode.utf16IsLowSurrogate(first)) p.die("Unexpected low surrogate");
if (std.unicode.utf16IsHighSurrogate(first)) {
p.expectLit("\\u");
const second = (p.hexdig()<<12) + (p.hexdig()<<8) + (p.hexdig()<<4) + p.hexdig();
unit = std.unicode.utf16DecodeSurrogatePair(&.{first, second}) catch p.die("Invalid low surrogate");
}
if (n + 6 < buf.len)
n += std.unicode.utf8Encode(unit, buf[n..n+5]) catch unreachable;
},
else => p.die("invalid escape sequence"),
},
0x20, 0x21, 0x23...0x5b, 0x5d...0xff => if (n < buf.len) { buf[n] = b; n += 1; },
else => p.die("invalid character in string"),
}
b = p.nextByte();
}
return buf[0..n];
}
// Read a string (after the ") into buf.
// Any characters beyond the size of the buffer are consumed but otherwise discarded.
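// (For example: with a 2-byte buf, the input abcd" yields "ab" while the
// parser still consumes everything up to and including the closing quote.)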
fn stringContent(p: *Parser, buf: []u8) []u8 {
// The common case (for ncdu dumps): string fits in the given buffer and does not contain any escapes.
var n: usize = 0;
var b = p.nextByte();
while (n < buf.len and b >= 0x20 and b != '"' and b != '\\') {
buf[n] = b;
n += 1;
b = p.nextByte();
}
if (b == '"') return buf[0..n];
return p.stringContentSlow(buf, b, n);
}
fn string(p: *Parser, buf: []u8) []u8 {
if (p.nextChr() != '"') p.die("expected string");
return p.stringContent(buf);
}
fn uintTail(p: *Parser, head: u8, T: anytype) T {
if (head == '0') return 0;
var v: T = head - '0'; // Assumption: T >= u8
// Assumption: we don't parse JSON "documents" that are a bare uint.
while (true) switch (p.nextByte()) {
'0'...'9' => |b| {
const newv = v *% 10 +% (b - '0');
if (newv < v) p.die("integer out of range");
v = newv;
},
else => |b| break p.undoNextByte(b),
};
if (v == 0) p.die("expected number");
return v;
}
fn uint(p: *Parser, T: anytype) T {
switch (p.nextChr()) {
'0'...'9' => |b| return p.uintTail(b, T),
else => p.die("expected number"),
}
}
fn boolean(p: *Parser) bool {
switch (p.nextChr()) {
't' => { p.expectLit("rue"); return true; },
'f' => { p.expectLit("alse"); return false; },
else => p.die("expected boolean"),
}
}
fn obj(p: *Parser) void {
if (p.nextChr() != '{') p.die("expected object");
}
fn key(p: *Parser, first: bool, buf: []u8) ?[]u8 {
const k = switch (p.nextChr()) {
',' => blk: {
if (first) p.die("invalid JSON");
break :blk p.string(buf);
},
'"' => blk: {
if (!first) p.die("invalid JSON");
break :blk p.stringContent(buf);
},
'}' => return null,
else => p.die("invalid JSON"),
};
if (p.nextChr() != ':') p.die("invalid JSON");
return k;
}
fn array(p: *Parser) void {
if (p.nextChr() != '[') p.die("expected array");
}
fn elem(p: *Parser, first: bool) bool {
switch (p.nextChr()) {
',' => if (first) p.die("invalid JSON") else return true,
']' => return false,
else => |b| {
if (!first) p.die("invalid JSON");
p.undoNextByte(b);
return true;
},
}
}
fn skipContent(p: *Parser, head: u8) void {
switch (head) {
't' => p.expectLit("rue"),
'f' => p.expectLit("alse"),
'n' => p.expectLit("ull"),
'-', '0'...'9' =>
// Numbers are kind of annoying, this "parsing" is invalid and ultra-lazy.
while (true) switch (p.nextByte()) {
'-', '+', 'e', 'E', '.', '0'...'9' => {},
else => |b| return p.undoNextByte(b),
},
'"' => _ = p.stringContent(&[0]u8{}),
'[' => {
var first = true;
while (p.elem(first)) {
first = false;
p.skip();
}
},
'{' => {
var first = true;
while (p.key(first, &[0]u8{})) |_| {
first = false;
p.skip();
}
},
else => p.die("invalid JSON"),
}
}
fn skip(p: *Parser) void {
p.skipContent(p.nextChr());
}
fn eof(p: *Parser) void {
if (p.nextChr() != 0) p.die("trailing garbage");
}
};
// Should really add some invalid JSON test cases as well, but I'd first like
// to benchmark the performance impact of using error returns instead of
// calling ui.die().
test "JSON parser" {
const json =
\\{
\\ "null": null,
\\ "true": true,
\\ "false": false,
\\ "zero":0 ,"uint": 123,
\\ "emptyObj": {},
\\ "emptyArray": [],
\\ "emptyString": "",
\\ "encString": "\"\\\/\b\f\n\uBe3F",
\\ "numbers": [0,1,20,-300, 3.4 ,0e-10 , -100.023e+13 ]
\\}
;
var p = Parser{ .rd = undefined, .rdsize = json.len };
@memcpy(p.buf[0..json.len], json);
p.skip();
p = Parser{ .rd = undefined, .rdsize = json.len };
@memcpy(p.buf[0..json.len], json);
var buf: [128]u8 = undefined;
p.obj();
try std.testing.expectEqualStrings(p.key(true, &buf).?, "null");
p.skip();
try std.testing.expectEqualStrings(p.key(false, &buf).?, "true");
try std.testing.expect(p.boolean());
try std.testing.expectEqualStrings(p.key(false, &buf).?, "false");
try std.testing.expect(!p.boolean());
try std.testing.expectEqualStrings(p.key(false, &buf).?, "zero");
try std.testing.expectEqual(0, p.uint(u8));
try std.testing.expectEqualStrings(p.key(false, &buf).?, "uint");
try std.testing.expectEqual(123, p.uint(u8));
try std.testing.expectEqualStrings(p.key(false, &buf).?, "emptyObj");
p.obj();
try std.testing.expect(p.key(true, &buf) == null);
try std.testing.expectEqualStrings(p.key(false, &buf).?, "emptyArray");
p.array();
try std.testing.expect(!p.elem(true));
try std.testing.expectEqualStrings(p.key(false, &buf).?, "emptyString");
try std.testing.expectEqualStrings(p.string(&buf), "");
try std.testing.expectEqualStrings(p.key(false, &buf).?, "encString");
try std.testing.expectEqualStrings(p.string(&buf), "\"\\/\x08\x0c\n\u{be3f}");
try std.testing.expectEqualStrings(p.key(false, &buf).?, "numbers");
p.skip();
try std.testing.expect(p.key(true, &buf) == null);
}
const Ctx = struct {
p: *Parser,
sink: *sink.Thread,
stat: sink.Stat = .{},
rderr: bool = false,
namelen: usize = 0,
namebuf: [32*1024]u8 = undefined,
};
fn itemkey(ctx: *Ctx, key: []const u8) void {
const eq = std.mem.eql;
switch (if (key.len > 0) key[0] else @as(u8,0)) {
'a' => {
if (eq(u8, key, "asize")) {
ctx.stat.size = ctx.p.uint(u64);
return;
}
},
'd' => {
if (eq(u8, key, "dsize")) {
ctx.stat.blocks = @intCast(ctx.p.uint(u64)>>9);
return;
}
if (eq(u8, key, "dev")) {
ctx.stat.dev = ctx.p.uint(u64);
return;
}
},
'e' => {
if (eq(u8, key, "excluded")) {
var buf: [32]u8 = undefined;
const typ = ctx.p.string(&buf);
// "frmlnk" is also possible, but currently considered equivalent to "pattern".
ctx.stat.etype =
if (eq(u8, typ, "otherfs") or eq(u8, typ, "othfs")) .otherfs
else if (eq(u8, typ, "kernfs")) .kernfs
else .pattern;
return;
}
},
'g' => {
if (eq(u8, key, "gid")) {
ctx.stat.ext.gid = ctx.p.uint(u32);
ctx.stat.ext.pack.hasgid = true;
return;
}
},
'h' => {
if (eq(u8, key, "hlnkc")) {
if (ctx.p.boolean()) ctx.stat.etype = .link;
return;
}
},
'i' => {
if (eq(u8, key, "ino")) {
ctx.stat.ino = ctx.p.uint(u64);
return;
}
},
'm' => {
if (eq(u8, key, "mode")) {
ctx.stat.ext.mode = ctx.p.uint(u16);
ctx.stat.ext.pack.hasmode = true;
return;
}
if (eq(u8, key, "mtime")) {
ctx.stat.ext.mtime = ctx.p.uint(u64);
ctx.stat.ext.pack.hasmtime = true;
// Accept decimal numbers, but discard the fractional part because our data model doesn't support it.
switch (ctx.p.nextByte()) {
'.' =>
while (true) switch (ctx.p.nextByte()) {
'0'...'9' => {},
else => |b| return ctx.p.undoNextByte(b),
},
else => |b| return ctx.p.undoNextByte(b),
}
}
},
'n' => {
if (eq(u8, key, "name")) {
if (ctx.namelen != 0) ctx.p.die("duplicate key");
ctx.namelen = ctx.p.string(&ctx.namebuf).len;
if (ctx.namelen > ctx.namebuf.len-5) ctx.p.die("too long file name");
return;
}
if (eq(u8, key, "nlink")) {
ctx.stat.nlink = ctx.p.uint(u31);
if (ctx.stat.etype != .dir and ctx.stat.nlink > 1)
ctx.stat.etype = .link;
return;
}
if (eq(u8, key, "notreg")) {
if (ctx.p.boolean()) ctx.stat.etype = .nonreg;
return;
}
},
'r' => {
if (eq(u8, key, "read_error")) {
if (ctx.p.boolean()) {
if (ctx.stat.etype == .dir) ctx.rderr = true
else ctx.stat.etype = .err;
}
return;
}
},
'u' => {
if (eq(u8, key, "uid")) {
ctx.stat.ext.uid = ctx.p.uint(u32);
ctx.stat.ext.pack.hasuid = true;
return;
}
},
else => {},
}
ctx.p.skip();
}
fn item(ctx: *Ctx, parent: ?*sink.Dir, dev: u64) void {
ctx.stat = .{ .dev = dev };
ctx.namelen = 0;
ctx.rderr = false;
const isdir = switch (ctx.p.nextChr()) {
'[' => blk: {
ctx.p.obj();
break :blk true;
},
'{' => false,
else => ctx.p.die("expected object or array"),
};
if (parent == null and !isdir) ctx.p.die("parent item must be a directory");
ctx.stat.etype = if (isdir) .dir else .reg;
var keybuf: [32]u8 = undefined;
var first = true;
while (ctx.p.key(first, &keybuf)) |k| {
first = false;
itemkey(ctx, k);
}
if (ctx.namelen == 0) ctx.p.die("missing \"name\" field");
const name = (&ctx.namebuf)[0..ctx.namelen];
if (ctx.stat.etype == .dir) {
const ndev = ctx.stat.dev;
const dir =
if (parent) |d| d.addDir(ctx.sink, name, &ctx.stat)
else sink.createRoot(name, &ctx.stat);
ctx.sink.setDir(dir);
if (ctx.rderr) dir.setReadError(ctx.sink);
while (ctx.p.elem(false)) item(ctx, dir, ndev);
ctx.sink.setDir(parent);
dir.unref(ctx.sink);
} else {
if (@intFromEnum(ctx.stat.etype) < 0)
parent.?.addSpecial(ctx.sink, name, ctx.stat.etype)
else
parent.?.addStat(ctx.sink, name, &ctx.stat);
if (isdir and ctx.p.elem(false)) ctx.p.die("unexpected contents in an excluded directory");
}
if ((ctx.sink.files_seen.load(.monotonic) & 65) == 0)
main.handleEvent(false, false);
}
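// Sketch of the input import() expects (values illustrative), matching what
// json_export.zig produces:
//   [1,2,{"progname":"ncdu","progver":"2.9.2","timestamp":0},
//   [{"name":"/"},
//   {"name":"some file","asize":1234,"dsize":4096}]]
// The leading 1 is the major format version checked below; the minor version
// and the metadata object are skipped.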
pub fn import(fd: std.fs.File, head: []const u8) void {
const sink_threads = sink.createThreads(1);
defer sink.done();
var p = Parser{.rd = fd};
defer if (p.zstd) |z| z.destroy();
if (head.len >= 4 and std.mem.eql(u8, head[0..4], "\x28\xb5\x2f\xfd")) {
p.zstd = ZstdReader.create(head);
} else {
p.rdsize = head.len;
@memcpy(p.buf[0..head.len], head);
}
p.array();
if (p.uint(u16) != 1) p.die("incompatible major format version");
if (!p.elem(false)) p.die("expected array element");
_ = p.uint(u16); // minor version, ignored for now
if (!p.elem(false)) p.die("expected array element");
// metadata object
p.obj();
p.skipContent('{');
// Items
if (!p.elem(false)) p.die("expected array element");
var ctx = Ctx{.p = &p, .sink = &sink_threads[0]};
item(&ctx, null, 0);
// accept more trailing elements
while (p.elem(false)) p.skip();
p.eof();
}

327
src/main.c

@@ -1,327 +0,0 @@
/* ncdu - NCurses Disk Usage
Copyright (c) 2007-2020 Yoran Heling
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
#include "global.h"
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/time.h>
#include <yopt.h>
int pstate;
int read_only = 0;
long update_delay = 100;
int cachedir_tags = 0;
int extended_info = 0;
int follow_symlinks = 0;
int confirm_quit = 0;
static int min_rows = 17, min_cols = 60;
static int ncurses_init = 0;
static int ncurses_tty = 0; /* Explicitly open /dev/tty instead of using stdio */
static long lastupdate = 999;
static void screen_draw() {
switch(pstate) {
case ST_CALC: dir_draw(); break;
case ST_BROWSE: browse_draw(); break;
case ST_HELP: help_draw(); break;
case ST_SHELL: shell_draw(); break;
case ST_DEL: delete_draw(); break;
case ST_QUIT: quit_draw(); break;
}
}
/* wait:
* -1: non-blocking, always draw screen
* 0: blocking wait for input and always draw screen
* 1: non-blocking, draw screen only if a configured delay has passed or after keypress
*/
int input_handle(int wait) {
int ch;
struct timeval tv;
if(wait != 1)
screen_draw();
else {
gettimeofday(&tv, (void *)NULL);
tv.tv_usec = (1000*(tv.tv_sec % 1000) + (tv.tv_usec / 1000)) / update_delay;
if(lastupdate != tv.tv_usec) {
screen_draw();
lastupdate = tv.tv_usec;
}
}
/* No actual input handling is done if ncurses hasn't been initialized yet. */
if(!ncurses_init)
return wait == 0 ? 1 : 0;
nodelay(stdscr, wait?1:0);
errno = 0;
while((ch = getch()) != ERR) {
if(ch == KEY_RESIZE) {
if(ncresize(min_rows, min_cols))
min_rows = min_cols = 0;
/* ncresize() may change nodelay state, make sure to revert it. */
nodelay(stdscr, wait?1:0);
screen_draw();
continue;
}
switch(pstate) {
case ST_CALC: return dir_key(ch);
case ST_BROWSE: return browse_key(ch);
case ST_HELP: return help_key(ch);
case ST_DEL: return delete_key(ch);
case ST_QUIT: return quit_key(ch);
}
screen_draw();
}
if(errno == EPIPE || errno == EBADF)
return 1;
return 0;
}
/* parse command line */
static void argv_parse(int argc, char **argv) {
yopt_t yopt;
int v;
char *val;
char *export = NULL;
char *import = NULL;
char *dir = NULL;
static yopt_opt_t opts[] = {
{ 'h', 0, "-h,-?,--help" },
{ 'q', 0, "-q" },
{ 'v', 0, "-v,-V,--version" },
{ 'x', 0, "-x" },
{ 'e', 0, "-e" },
{ 'r', 0, "-r" },
{ 'o', 1, "-o" },
{ 'f', 1, "-f" },
{ '0', 0, "-0" },
{ '1', 0, "-1" },
{ '2', 0, "-2" },
{ 1, 1, "--exclude" },
{ 'X', 1, "-X,--exclude-from" },
{ 'L', 0, "-L,--follow-symlinks" },
{ 'C', 0, "--exclude-caches" },
{ 's', 0, "--si" },
{ 'Q', 0, "--confirm-quit" },
{ 'c', 1, "--color" },
{0,0,NULL}
};
dir_ui = -1;
si = 0;
yopt_init(&yopt, argc, argv, opts);
while((v = yopt_next(&yopt, &val)) != -1) {
switch(v) {
case 0 : dir = val; break;
case 'h':
printf("ncdu <options> <directory>\n\n");
printf(" -h,--help This help message\n");
printf(" -q Quiet mode, refresh interval 2 seconds\n");
printf(" -v,-V,--version Print version\n");
printf(" -x Same filesystem\n");
printf(" -e Enable extended information\n");
printf(" -r Read only\n");
printf(" -o FILE Export scanned directory to FILE\n");
printf(" -f FILE Import scanned directory from FILE\n");
printf(" -0,-1,-2 UI to use when scanning (0=none,2=full ncurses)\n");
printf(" --si Use base 10 (SI) prefixes instead of base 2\n");
printf(" --exclude PATTERN Exclude files that match PATTERN\n");
printf(" -X, --exclude-from FILE Exclude files that match any pattern in FILE\n");
printf(" -L, --follow-symlinks Follow symbolic links (excluding directories)\n");
printf(" --exclude-caches Exclude directories containing CACHEDIR.TAG\n");
printf(" --confirm-quit Confirm quitting ncdu\n");
printf(" --color SCHEME Set color scheme\n");
exit(0);
case 'q': update_delay = 2000; break;
case 'v':
printf("ncdu %s\n", PACKAGE_VERSION);
exit(0);
case 'x': dir_scan_smfs = 1; break;
case 'e': extended_info = 1; break;
case 'r': read_only++; break;
case 's': si = 1; break;
case 'o': export = val; break;
case 'f': import = val; break;
case '0': dir_ui = 0; break;
case '1': dir_ui = 1; break;
case '2': dir_ui = 2; break;
case 'Q': confirm_quit = 1; break;
case 1 : exclude_add(val); break; /* --exclude */
case 'X':
if(exclude_addfile(val)) {
fprintf(stderr, "Can't open %s: %s\n", val, strerror(errno));
exit(1);
}
break;
case 'L': follow_symlinks = 1; break;
case 'C':
cachedir_tags = 1;
break;
case 'c':
if(strcmp(val, "off") == 0) { uic_theme = 0; }
else if(strcmp(val, "dark") == 0) { uic_theme = 1; }
else {
fprintf(stderr, "Unknown --color option: %s\n", val);
exit(1);
}
break;
case -2:
fprintf(stderr, "ncdu: %s.\n", val);
exit(1);
}
}
if(export) {
if(dir_export_init(export)) {
fprintf(stderr, "Can't open %s: %s\n", export, strerror(errno));
exit(1);
}
if(strcmp(export, "-") == 0)
ncurses_tty = 1;
} else
dir_mem_init(NULL);
if(import) {
if(dir_import_init(import)) {
fprintf(stderr, "Can't open %s: %s\n", import, strerror(errno));
exit(1);
}
if(strcmp(import, "-") == 0)
ncurses_tty = 1;
} else
dir_scan_init(dir ? dir : ".");
/* Use the single-line scan feedback by default when exporting to file, no
* feedback when exporting to stdout. */
if(dir_ui == -1)
dir_ui = export && strcmp(export, "-") == 0 ? 0 : export ? 1 : 2;
}
/* Initializes ncurses only when not done yet. */
static void init_nc() {
int ok = 0;
FILE *tty;
SCREEN *term;
if(ncurses_init)
return;
ncurses_init = 1;
if(ncurses_tty) {
tty = fopen("/dev/tty", "r+");
if(!tty) {
fprintf(stderr, "Error opening /dev/tty: %s\n", strerror(errno));
exit(1);
}
term = newterm(NULL, tty, tty);
if(term)
set_term(term);
ok = !!term;
} else {
/* Make sure the user doesn't accidentally pipe in data to ncdu's standard
* input without using "-f -". An annoying input sequence could result in
* the deletion of your files, which we want to prevent at all costs. */
if(!isatty(0)) {
fprintf(stderr, "Standard input is not a TTY. Did you mean to import a file using '-f -'?\n");
exit(1);
}
ok = !!initscr();
}
if(!ok) {
fprintf(stderr, "Error while initializing ncurses.\n");
exit(1);
}
uic_init();
cbreak();
noecho();
curs_set(0);
keypad(stdscr, TRUE);
if(ncresize(min_rows, min_cols))
min_rows = min_cols = 0;
}
void close_nc() {
if(ncurses_init) {
erase();
refresh();
endwin();
}
}
/* main program */
int main(int argc, char **argv) {
read_locale();
argv_parse(argc, argv);
if(dir_ui == 2)
init_nc();
while(1) {
/* We may need to initialize/clean up the screen when switching from the
* (sometimes non-ncurses) CALC state to something else. */
if(pstate != ST_CALC) {
if(dir_ui == 1)
fputc('\n', stderr);
init_nc();
}
if(pstate == ST_CALC) {
if(dir_process()) {
if(dir_ui == 1)
fputc('\n', stderr);
break;
}
} else if(pstate == ST_DEL)
delete_process();
else if(input_handle(0))
break;
}
close_nc();
exclude_clear();
return 0;
}

693
src/main.zig Normal file

@@ -0,0 +1,693 @@
// SPDX-FileCopyrightText: Yorhel <projects@yorhel.nl>
// SPDX-License-Identifier: MIT
pub const program_version = "2.9.2";
const std = @import("std");
const model = @import("model.zig");
const scan = @import("scan.zig");
const json_import = @import("json_import.zig");
const json_export = @import("json_export.zig");
const bin_export = @import("bin_export.zig");
const bin_reader = @import("bin_reader.zig");
const sink = @import("sink.zig");
const mem_src = @import("mem_src.zig");
const mem_sink = @import("mem_sink.zig");
const ui = @import("ui.zig");
const browser = @import("browser.zig");
const delete = @import("delete.zig");
const util = @import("util.zig");
const exclude = @import("exclude.zig");
const c = @import("c.zig").c;
test "imports" {
_ = model;
_ = scan;
_ = json_import;
_ = json_export;
_ = bin_export;
_ = bin_reader;
_ = sink;
_ = mem_src;
_ = mem_sink;
_ = ui;
_ = browser;
_ = delete;
_ = util;
_ = exclude;
}
// "Custom" allocator that wraps the libc allocator and calls ui.oom() on error.
// This allocator never returns an error, it either succeeds or causes ncdu to quit.
// (Which means you'll find a lot of "catch unreachable" sprinkled through the code,
// they look scarier than they are)
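// For example (sketch), a call such as
//   main.allocator.create(Writer) catch unreachable   // as in json_export.zig
// can never actually hit the unreachable branch: wrapAlloc() below loops on
// ui.oom() instead of ever returning null.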
fn wrapAlloc(_: *anyopaque, len: usize, ptr_alignment: std.mem.Alignment, return_address: usize) ?[*]u8 {
while (true) {
if (std.heap.c_allocator.vtable.alloc(undefined, len, ptr_alignment, return_address)) |r|
return r
else {}
ui.oom();
}
}
pub const allocator = std.mem.Allocator{
.ptr = undefined,
.vtable = &.{
.alloc = wrapAlloc,
// AFAIK, all uses of resize() to grow an allocation will fall back to alloc() on failure.
.resize = std.heap.c_allocator.vtable.resize,
.remap = std.heap.c_allocator.vtable.remap,
.free = std.heap.c_allocator.vtable.free,
},
};
// Custom panic impl to reset the terminal before spewing out an error message.
pub const panic = std.debug.FullPanic(struct {
pub fn panicFn(msg: []const u8, first_trace_addr: ?usize) noreturn {
@branchHint(.cold);
ui.deinit();
std.debug.defaultPanic(msg, first_trace_addr);
}
}.panicFn);
pub const config = struct {
pub const SortCol = enum { name, blocks, size, items, mtime };
pub const SortOrder = enum { asc, desc };
pub var same_fs: bool = false;
pub var extended: bool = false;
pub var follow_symlinks: bool = false;
pub var exclude_caches: bool = false;
pub var exclude_kernfs: bool = false;
pub var threads: usize = 1;
pub var complevel: u8 = 4;
pub var compress: bool = false;
pub var export_block_size: ?usize = null;
pub var update_delay: u64 = 100*std.time.ns_per_ms;
pub var scan_ui: ?enum { none, line, full } = null;
pub var si: bool = false;
pub var nc_tty: bool = false;
pub var ui_color: enum { off, dark, darkbg } = .off;
pub var thousands_sep: []const u8 = ",";
pub var show_hidden: bool = true;
pub var show_blocks: bool = true;
pub var show_shared: enum { off, shared, unique } = .shared;
pub var show_items: bool = false;
pub var show_mtime: bool = false;
pub var show_graph: bool = true;
pub var show_percent: bool = false;
pub var graph_style: enum { hash, half, eighth } = .hash;
pub var sort_col: SortCol = .blocks;
pub var sort_order: SortOrder = .desc;
pub var sort_dirsfirst: bool = false;
pub var sort_natural: bool = true;
pub var imported: bool = false;
pub var binreader: bool = false;
pub var can_delete: ?bool = null;
pub var can_shell: ?bool = null;
pub var can_refresh: ?bool = null;
pub var confirm_quit: bool = false;
pub var confirm_delete: bool = true;
pub var ignore_delete_errors: bool = false;
pub var delete_command: [:0]const u8 = "";
};
pub var state: enum { scan, browse, refresh, shell, delete } = .scan;
const stdin = if (@hasDecl(std.io, "getStdIn")) std.io.getStdIn() else std.fs.File.stdin();
const stdout = if (@hasDecl(std.io, "getStdOut")) std.io.getStdOut() else std.fs.File.stdout();
// Simple generic argument parser, supports getopt_long() style arguments.
const Args = struct {
lst: []const [:0]const u8,
short: ?[:0]const u8 = null, // Remainder after a short option, e.g. -x<stuff> (which may be either more short options or an argument)
last: ?[]const u8 = null,
last_arg: ?[:0]const u8 = null, // In the case of --option=<arg>
shortbuf: [2]u8 = undefined,
argsep: bool = false,
ignerror: bool = false,
const Self = @This();
const Option = struct {
opt: bool,
val: []const u8,
fn is(self: @This(), cmp: []const u8) bool {
return self.opt and std.mem.eql(u8, self.val, cmp);
}
};
fn init(lst: []const [:0]const u8) Self {
return Self{ .lst = lst };
}
fn pop(self: *Self) ?[:0]const u8 {
if (self.lst.len == 0) return null;
defer self.lst = self.lst[1..];
return self.lst[0];
}
fn shortopt(self: *Self, s: [:0]const u8) Option {
self.shortbuf[0] = '-';
self.shortbuf[1] = s[0];
self.short = if (s.len > 1) s[1.. :0] else null;
self.last = &self.shortbuf;
return .{ .opt = true, .val = &self.shortbuf };
}
pub fn die(self: *const Self, comptime msg: []const u8, args: anytype) !noreturn {
if (self.ignerror) return error.InvalidArg;
ui.die(msg, args);
}
/// Return the next option or positional argument.
/// 'opt' indicates whether it's an option or positional argument,
/// 'val' will be either -x, --something or the argument.
pub fn next(self: *Self) !?Option {
if (self.last_arg != null) try self.die("Option '{s}' does not expect an argument.\n", .{ self.last.? });
if (self.short) |s| return self.shortopt(s);
const val = self.pop() orelse return null;
if (self.argsep or val.len == 0 or val[0] != '-') return Option{ .opt = false, .val = val };
if (val.len == 1) try self.die("Invalid option '-'.\n", .{});
if (val.len == 2 and val[1] == '-') {
self.argsep = true;
return self.next();
}
if (val[1] == '-') {
if (std.mem.indexOfScalar(u8, val, '=')) |sep| {
if (sep == 2) try self.die("Invalid option '{s}'.\n", .{val});
self.last_arg = val[sep+1.. :0];
self.last = val[0..sep];
return Option{ .opt = true, .val = self.last.? };
}
self.last = val;
return Option{ .opt = true, .val = val };
}
return self.shortopt(val[1..:0]);
}
/// Returns the argument given to the last returned option. Dies with an error if no argument is provided.
pub fn arg(self: *Self) ![:0]const u8 {
if (self.short) |a| {
defer self.short = null;
return a;
}
if (self.last_arg) |a| {
defer self.last_arg = null;
return a;
}
if (self.pop()) |o| return o;
try self.die("Option '{s}' requires an argument.\n", .{ self.last.? });
}
};
fn argConfig(args: *Args, opt: Args.Option, infile: bool) !void {
if (opt.is("-q") or opt.is("--slow-ui-updates")) config.update_delay = 2*std.time.ns_per_s
else if (opt.is("--fast-ui-updates")) config.update_delay = 100*std.time.ns_per_ms
else if (opt.is("-x") or opt.is("--one-file-system")) config.same_fs = true
else if (opt.is("--cross-file-system")) config.same_fs = false
else if (opt.is("-e") or opt.is("--extended")) config.extended = true
else if (opt.is("--no-extended")) config.extended = false
else if (opt.is("-r") and !(config.can_delete orelse true)) config.can_shell = false
else if (opt.is("-r")) config.can_delete = false
else if (opt.is("--enable-shell")) config.can_shell = true
else if (opt.is("--disable-shell")) config.can_shell = false
else if (opt.is("--enable-delete")) config.can_delete = true
else if (opt.is("--disable-delete")) config.can_delete = false
else if (opt.is("--enable-refresh")) config.can_refresh = true
else if (opt.is("--disable-refresh")) config.can_refresh = false
else if (opt.is("--show-hidden")) config.show_hidden = true
else if (opt.is("--hide-hidden")) config.show_hidden = false
else if (opt.is("--show-itemcount")) config.show_items = true
else if (opt.is("--hide-itemcount")) config.show_items = false
else if (opt.is("--show-mtime")) config.show_mtime = true
else if (opt.is("--hide-mtime")) config.show_mtime = false
else if (opt.is("--show-graph")) config.show_graph = true
else if (opt.is("--hide-graph")) config.show_graph = false
else if (opt.is("--show-percent")) config.show_percent = true
else if (opt.is("--hide-percent")) config.show_percent = false
else if (opt.is("--group-directories-first")) config.sort_dirsfirst = true
else if (opt.is("--no-group-directories-first")) config.sort_dirsfirst = false
else if (opt.is("--enable-natsort")) config.sort_natural = true
else if (opt.is("--disable-natsort")) config.sort_natural = false
else if (opt.is("--graph-style")) {
const val = try args.arg();
if (std.mem.eql(u8, val, "hash")) config.graph_style = .hash
else if (std.mem.eql(u8, val, "half-block")) config.graph_style = .half
else if (std.mem.eql(u8, val, "eighth-block") or std.mem.eql(u8, val, "eigth-block")) config.graph_style = .eighth
else try args.die("Unknown --graph-style option: {s}.\n", .{val});
} else if (opt.is("--sort")) {
var val: []const u8 = try args.arg();
var ord: ?config.SortOrder = null;
if (std.mem.endsWith(u8, val, "-asc")) {
val = val[0..val.len-4];
ord = .asc;
} else if (std.mem.endsWith(u8, val, "-desc")) {
val = val[0..val.len-5];
ord = .desc;
}
if (std.mem.eql(u8, val, "name")) {
config.sort_col = .name;
config.sort_order = ord orelse .asc;
} else if (std.mem.eql(u8, val, "disk-usage")) {
config.sort_col = .blocks;
config.sort_order = ord orelse .desc;
} else if (std.mem.eql(u8, val, "apparent-size")) {
config.sort_col = .size;
config.sort_order = ord orelse .desc;
} else if (std.mem.eql(u8, val, "itemcount")) {
config.sort_col = .items;
config.sort_order = ord orelse .desc;
} else if (std.mem.eql(u8, val, "mtime")) {
config.sort_col = .mtime;
config.sort_order = ord orelse .asc;
} else try args.die("Unknown --sort option: {s}.\n", .{val});
} else if (opt.is("--shared-column")) {
const val = try args.arg();
if (std.mem.eql(u8, val, "off")) config.show_shared = .off
else if (std.mem.eql(u8, val, "shared")) config.show_shared = .shared
else if (std.mem.eql(u8, val, "unique")) config.show_shared = .unique
else try args.die("Unknown --shared-column option: {s}.\n", .{val});
} else if (opt.is("--apparent-size")) config.show_blocks = false
else if (opt.is("--disk-usage")) config.show_blocks = true
else if (opt.is("-0")) config.scan_ui = .none
else if (opt.is("-1")) config.scan_ui = .line
else if (opt.is("-2")) config.scan_ui = .full
else if (opt.is("--si")) config.si = true
else if (opt.is("--no-si")) config.si = false
else if (opt.is("-L") or opt.is("--follow-symlinks")) config.follow_symlinks = true
else if (opt.is("--no-follow-symlinks")) config.follow_symlinks = false
else if (opt.is("--exclude")) {
const arg = if (infile) (util.expanduser(try args.arg(), allocator) catch unreachable) else try args.arg();
defer if (infile) allocator.free(arg);
exclude.addPattern(arg);
} else if (opt.is("-X") or opt.is("--exclude-from")) {
const arg = if (infile) (util.expanduser(try args.arg(), allocator) catch unreachable) else try args.arg();
defer if (infile) allocator.free(arg);
readExcludeFile(arg) catch |e| try args.die("Error reading excludes from {s}: {s}.\n", .{ arg, ui.errorString(e) });
} else if (opt.is("--exclude-caches")) config.exclude_caches = true
else if (opt.is("--include-caches")) config.exclude_caches = false
else if (opt.is("--exclude-kernfs")) config.exclude_kernfs = true
else if (opt.is("--include-kernfs")) config.exclude_kernfs = false
else if (opt.is("-c") or opt.is("--compress")) config.compress = true
else if (opt.is("--no-compress")) config.compress = false
else if (opt.is("--compress-level")) {
const val = try args.arg();
const num = std.fmt.parseInt(u8, val, 10) catch try args.die("Invalid number for --compress-level: {s}.\n", .{val});
if (num <= 0 or num > 20) try args.die("Invalid number for --compress-level: {s}.\n", .{val});
config.complevel = num;
} else if (opt.is("--export-block-size")) {
const val = try args.arg();
const num = std.fmt.parseInt(u14, val, 10) catch try args.die("Invalid number for --export-block-size: {s}.\n", .{val});
if (num < 4 or num > 16000) try args.die("Invalid number for --export-block-size: {s}.\n", .{val});
config.export_block_size = @as(usize, num) * 1024;
} else if (opt.is("--confirm-quit")) config.confirm_quit = true
else if (opt.is("--no-confirm-quit")) config.confirm_quit = false
else if (opt.is("--confirm-delete")) config.confirm_delete = true
else if (opt.is("--no-confirm-delete")) config.confirm_delete = false
else if (opt.is("--delete-command")) config.delete_command = allocator.dupeZ(u8, try args.arg()) catch unreachable
else if (opt.is("--color")) {
const val = try args.arg();
if (std.mem.eql(u8, val, "off")) config.ui_color = .off
else if (std.mem.eql(u8, val, "dark")) config.ui_color = .dark
else if (std.mem.eql(u8, val, "dark-bg")) config.ui_color = .darkbg
else try args.die("Unknown --color option: {s}.\n", .{val});
} else if (opt.is("-t") or opt.is("--threads")) {
const val = try args.arg();
config.threads = std.fmt.parseInt(u8, val, 10) catch try args.die("Invalid number of --threads: {s}.\n", .{val});
} else return error.UnknownOption;
}
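// Sketch of a config file accepted by tryReadArgsFile() below: one option per
// line, written as "option", "option value" or "option=value"; '#' starts a
// comment and a leading '@' makes errors on that line non-fatal. For example:
//   # ~/.config/ncdu/config
//   --color dark
//   --si
//   @--threads=4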
fn tryReadArgsFile(path: [:0]const u8) void {
var f = std.fs.cwd().openFileZ(path, .{}) catch |e| switch (e) {
error.FileNotFound => return,
error.NotDir => return,
else => ui.die("Error opening {s}: {s}\nRun with --ignore-config to skip reading config files.\n", .{ path, ui.errorString(e) }),
};
defer f.close();
var line_buf: [4096]u8 = undefined;
var line_rd = util.LineReader.init(f, &line_buf);
while (true) {
const line_ = (line_rd.read() catch |e|
ui.die("Error reading from {s}: {s}\nRun with --ignore-config to skip reading config files.\n", .{ path, ui.errorString(e) })
) orelse break;
var argc: usize = 0;
var ignerror = false;
var arglist: [2][:0]const u8 = .{ "", "" };
var line = std.mem.trim(u8, line_, &std.ascii.whitespace);
if (line.len > 0 and line[0] == '@') {
ignerror = true;
line = line[1..];
}
if (line.len == 0 or line[0] == '#') continue;
if (std.mem.indexOfAny(u8, line, " \t=")) |i| {
arglist[argc] = allocator.dupeZ(u8, line[0..i]) catch unreachable;
argc += 1;
line = std.mem.trimLeft(u8, line[i+1..], &std.ascii.whitespace);
}
arglist[argc] = allocator.dupeZ(u8, line) catch unreachable;
argc += 1;
var args = Args.init(arglist[0..argc]);
args.ignerror = ignerror;
while (args.next() catch null) |opt| {
if (argConfig(&args, opt, true)) |_| {}
else |_| {
if (ignerror) break;
ui.die("Unrecognized option in config file '{s}': {s}.\nRun with --ignore-config to skip reading config files.\n", .{path, opt.val});
}
}
allocator.free(arglist[0]);
if (argc == 2) allocator.free(arglist[1]);
}
}
fn version() noreturn {
stdout.writeAll("ncdu " ++ program_version ++ "\n") catch {};
std.process.exit(0);
}
fn help() noreturn {
stdout.writeAll(
\\ncdu <options> <directory>
\\
\\Mode selection:
\\ -h, --help This help message
\\ -v, -V, --version Print version
\\ -f FILE Import scanned directory from FILE
\\ -o FILE Export scanned directory to FILE in JSON format
\\ -O FILE Export scanned directory to FILE in binary format
\\ -e, --extended Enable extended information
\\ --ignore-config Don't load config files
\\
\\Scan options:
\\ -x, --one-file-system Stay on the same filesystem
\\ --exclude PATTERN Exclude files that match PATTERN
\\ -X, --exclude-from FILE Exclude files that match any pattern in FILE
\\ --exclude-caches Exclude directories containing CACHEDIR.TAG
\\ -L, --follow-symlinks Follow symbolic links (excluding directories)
\\ --exclude-kernfs Exclude Linux pseudo filesystems (procfs,sysfs,cgroup,...)
\\ -t NUM Scan with NUM threads
\\
\\Export options:
\\ -c, --compress Use Zstandard compression with `-o`
\\ --compress-level NUM Set compression level
\\ --export-block-size KIB Set export block size with `-O`
\\
\\Interface options:
\\ -0, -1, -2 UI to use when scanning (0=none,2=full ncurses)
\\ -q, --slow-ui-updates "Quiet" mode, refresh interval 2 seconds
\\ --enable-shell Enable/disable shell spawning feature
\\ --enable-delete Enable/disable file deletion feature
\\ --enable-refresh Enable/disable directory refresh feature
\\ -r Read only (--disable-delete)
\\ -rr Read only++ (--disable-delete & --disable-shell)
\\ --si Use base 10 (SI) prefixes instead of base 2
\\ --apparent-size Show apparent size instead of disk usage by default
\\ --hide-hidden Hide "hidden" or excluded files by default
\\ --show-itemcount Show item count column by default
\\ --show-mtime Show mtime column by default (requires `-e`)
\\ --show-graph Show graph column by default
\\ --show-percent Show percent column by default
\\ --graph-style STYLE hash / half-block / eighth-block
\\ --shared-column off / shared / unique
\\ --sort COLUMN-(asc/desc) disk-usage / name / apparent-size / itemcount / mtime
\\ --enable-natsort Use natural order when sorting by name
\\ --group-directories-first Sort directories before files
\\ --confirm-quit Ask confirmation before quitting ncdu
\\ --no-confirm-delete Don't ask confirmation before deletion
\\ --delete-command CMD Command to run for file deletion
\\ --color SCHEME off / dark / dark-bg
\\
\\Refer to `man ncdu` for more information.
\\
) catch {};
std.process.exit(0);
}
fn readExcludeFile(path: [:0]const u8) !void {
const f = try std.fs.cwd().openFileZ(path, .{});
defer f.close();
var line_buf: [4096]u8 = undefined;
var line_rd = util.LineReader.init(f, &line_buf);
while (try line_rd.read()) |line| {
if (line.len > 0)
exclude.addPattern(line);
}
}
fn readImport(path: [:0]const u8) !void {
const fd =
if (std.mem.eql(u8, "-", path)) stdin
else try std.fs.cwd().openFileZ(path, .{});
errdefer fd.close();
var buf: [8]u8 = undefined;
if (8 != try fd.readAll(&buf)) return error.EndOfStream;
if (std.mem.eql(u8, &buf, bin_export.SIGNATURE)) {
try bin_reader.open(fd);
config.binreader = true;
} else {
json_import.import(fd, &buf);
fd.close();
}
}
pub fn main() void {
ui.main_thread = std.Thread.getCurrentId();
// Grab thousands_sep from the current C locale.
_ = c.setlocale(c.LC_ALL, "");
if (c.localeconv()) |locale| {
if (locale.*.thousands_sep) |sep| {
const span = std.mem.sliceTo(sep, 0);
if (span.len > 0)
config.thousands_sep = span;
}
}
const loadConf = blk: {
var args = std.process.ArgIteratorPosix.init();
while (args.next()) |a|
if (std.mem.eql(u8, a, "--ignore-config"))
break :blk false;
break :blk true;
};
if (loadConf) {
tryReadArgsFile("/etc/ncdu.conf");
if (std.posix.getenvZ("XDG_CONFIG_HOME")) |p| {
const path = std.fs.path.joinZ(allocator, &.{p, "ncdu", "config"}) catch unreachable;
defer allocator.free(path);
tryReadArgsFile(path);
} else if (std.posix.getenvZ("HOME")) |p| {
const path = std.fs.path.joinZ(allocator, &.{p, ".config", "ncdu", "config"}) catch unreachable;
defer allocator.free(path);
tryReadArgsFile(path);
}
}
var scan_dir: ?[:0]const u8 = null;
var import_file: ?[:0]const u8 = null;
var export_json: ?[:0]const u8 = null;
var export_bin: ?[:0]const u8 = null;
var quit_after_scan = false;
{
const arglist = std.process.argsAlloc(allocator) catch unreachable;
defer std.process.argsFree(allocator, arglist);
var args = Args.init(arglist);
_ = args.next() catch unreachable; // program name
while (args.next() catch unreachable) |opt| {
if (!opt.opt) {
// XXX: ncdu 1.x doesn't error, it just silently ignores all but the last argument.
if (scan_dir != null) ui.die("Multiple directories given, see ncdu -h for help.\n", .{});
scan_dir = allocator.dupeZ(u8, opt.val) catch unreachable;
continue;
}
if (opt.is("-h") or opt.is("-?") or opt.is("--help")) help()
else if (opt.is("-v") or opt.is("-V") or opt.is("--version")) version()
else if (opt.is("-o") and (export_json != null or export_bin != null)) ui.die("The -o flag can only be given once.\n", .{})
else if (opt.is("-o")) export_json = allocator.dupeZ(u8, args.arg() catch unreachable) catch unreachable
else if (opt.is("-O") and (export_json != null or export_bin != null)) ui.die("The -O flag can only be given once.\n", .{})
else if (opt.is("-O")) export_bin = allocator.dupeZ(u8, args.arg() catch unreachable) catch unreachable
else if (opt.is("-f") and import_file != null) ui.die("The -f flag can only be given once.\n", .{})
else if (opt.is("-f")) import_file = allocator.dupeZ(u8, args.arg() catch unreachable) catch unreachable
else if (opt.is("--ignore-config")) {}
else if (opt.is("--quit-after-scan")) quit_after_scan = true // undocumented feature to help with benchmarking scan/import
else if (argConfig(&args, opt, false)) |_| {}
else |_| ui.die("Unrecognized option '{s}'.\n", .{opt.val});
}
}
if (config.threads == 0) config.threads = std.Thread.getCpuCount() catch 1;
if (@import("builtin").os.tag != .linux and config.exclude_kernfs)
ui.die("The --exclude-kernfs flag is currently only supported on Linux.\n", .{});
const out_tty = stdout.isTty();
const in_tty = stdin.isTty();
if (config.scan_ui == null) {
if (export_json orelse export_bin) |f| {
if (!out_tty or std.mem.eql(u8, f, "-")) config.scan_ui = .none
else config.scan_ui = .line;
} else config.scan_ui = .full;
}
if (!in_tty and import_file == null and export_json == null and export_bin == null and !quit_after_scan)
ui.die("Standard input is not a TTY. Did you mean to import a file using '-f -'?\n", .{});
config.nc_tty = !in_tty or (if (export_json orelse export_bin) |f| std.mem.eql(u8, f, "-") else false);
event_delay_timer = std.time.Timer.start() catch unreachable;
defer ui.deinit();
if (export_json) |f| {
const file =
if (std.mem.eql(u8, f, "-")) stdout
else std.fs.cwd().createFileZ(f, .{})
catch |e| ui.die("Error opening export file: {s}.\n", .{ui.errorString(e)});
json_export.setupOutput(file);
sink.global.sink = .json;
} else if (export_bin) |f| {
const file =
if (std.mem.eql(u8, f, "-")) stdout
else std.fs.cwd().createFileZ(f, .{})
catch |e| ui.die("Error opening export file: {s}.\n", .{ui.errorString(e)});
bin_export.setupOutput(file);
sink.global.sink = .bin;
}
if (import_file) |f| {
readImport(f) catch |e| ui.die("Error reading file '{s}': {s}.\n", .{f, ui.errorString(e)});
config.imported = true;
if (config.binreader and (export_json != null or export_bin != null))
bin_reader.import();
} else {
var buf: [std.fs.max_path_bytes+1]u8 = @splat(0);
const path =
if (std.posix.realpathZ(scan_dir orelse ".", buf[0..buf.len-1])) |p| buf[0..p.len:0]
else |_| (scan_dir orelse ".");
scan.scan(path) catch |e| ui.die("Error opening directory: {s}.\n", .{ui.errorString(e)});
}
if (quit_after_scan or export_json != null or export_bin != null) return;
config.can_shell = config.can_shell orelse !config.imported;
config.can_delete = config.can_delete orelse !config.imported;
config.can_refresh = config.can_refresh orelse !config.imported;
config.scan_ui = .full; // in case we're refreshing from the UI, always in full mode.
ui.init();
state = .browse;
browser.initRoot();
while (true) {
switch (state) {
.refresh => {
var full_path: std.ArrayListUnmanaged(u8) = .empty;
defer full_path.deinit(allocator);
mem_sink.global.root.?.fmtPath(allocator, true, &full_path);
scan.scan(util.arrayListBufZ(&full_path, allocator)) catch {
sink.global.last_error = allocator.dupeZ(u8, full_path.items) catch unreachable;
sink.global.state = .err;
while (state == .refresh) handleEvent(true, true);
};
state = .browse;
browser.loadDir(0);
},
.shell => {
const shell = std.posix.getenvZ("NCDU_SHELL") orelse std.posix.getenvZ("SHELL") orelse "/bin/sh";
var env = std.process.getEnvMap(allocator) catch unreachable;
defer env.deinit();
ui.runCmd(&.{shell}, browser.dir_path, &env, false);
state = .browse;
},
.delete => {
const next = delete.delete();
if (state != .refresh) {
state = .browse;
browser.loadDir(if (next) |n| n.nameHash() else 0);
}
},
else => handleEvent(true, false)
}
}
}
pub var event_delay_timer: std.time.Timer = undefined;
// Draw the screen and handle the next input event.
// In non-blocking mode, screen drawing is rate-limited to keep this function fast.
pub fn handleEvent(block: bool, force_draw: bool) void {
while (ui.oom_threads.load(.monotonic) > 0) ui.oom();
if (block or force_draw or event_delay_timer.read() > config.update_delay) {
if (ui.inited) _ = c.erase();
switch (state) {
.scan, .refresh => sink.draw(),
.browse => browser.draw(),
.delete => delete.draw(),
.shell => unreachable,
}
if (ui.inited) _ = c.refresh();
event_delay_timer.reset();
}
if (!ui.inited) {
std.debug.assert(!block);
return;
}
var firstblock = block;
while (true) {
const ch = ui.getch(firstblock);
if (ch == 0) return;
if (ch == -1) return handleEvent(firstblock, true);
switch (state) {
.scan, .refresh => sink.keyInput(ch),
.browse => browser.keyInput(ch),
.delete => delete.keyInput(ch),
.shell => unreachable,
}
firstblock = false;
}
}
test "argument parser" {
const lst = [_][:0]const u8{ "a", "-abcd=e", "--opt1=arg1", "--opt2", "arg2", "-x", "foo", "", "--", "--arg", "", "-", };
const T = struct {
a: Args,
fn opt(self: *@This(), isopt: bool, val: []const u8) !void {
const o = (self.a.next() catch unreachable).?;
try std.testing.expectEqual(isopt, o.opt);
try std.testing.expectEqualStrings(val, o.val);
try std.testing.expectEqual(o.is(val), isopt);
}
fn arg(self: *@This(), val: []const u8) !void {
try std.testing.expectEqualStrings(val, self.a.arg() catch unreachable);
}
};
var t = T{ .a = Args.init(&lst) };
try t.opt(false, "a");
try t.opt(true, "-a");
try t.opt(true, "-b");
try t.arg("cd=e");
try t.opt(true, "--opt1");
try t.arg("arg1");
try t.opt(true, "--opt2");
try t.arg("arg2");
try t.opt(true, "-x");
try t.arg("foo");
try t.opt(false, "");
try t.opt(false, "--arg");
try t.opt(false, "");
try t.opt(false, "-");
try std.testing.expectEqual(t.a.next(), null);
}

212
src/mem_sink.zig Normal file

@@ -0,0 +1,212 @@
// SPDX-FileCopyrightText: Yorhel <projects@yorhel.nl>
// SPDX-License-Identifier: MIT
const std = @import("std");
const main = @import("main.zig");
const model = @import("model.zig");
const sink = @import("sink.zig");
pub const global = struct {
pub var root: ?*model.Dir = null;
pub var stats: bool = true; // calculate aggregate directory stats
};
pub const Thread = struct {
// Arena allocator for model.Entry structs, these are never freed.
arena: std.heap.ArenaAllocator = std.heap.ArenaAllocator.init(std.heap.page_allocator),
};
pub fn statToEntry(stat: *const sink.Stat, e: *model.Entry, parent: *model.Dir) void {
e.pack.blocks = stat.blocks;
e.size = stat.size;
if (e.dir()) |d| {
d.parent = parent;
d.pack.dev = model.devices.getId(stat.dev);
}
if (e.link()) |l| {
l.parent = parent;
l.ino = stat.ino;
l.pack.nlink = stat.nlink;
model.inodes.lock.lock();
defer model.inodes.lock.unlock();
l.addLink();
}
if (e.ext()) |ext| ext.* = stat.ext;
}
pub const Dir = struct {
dir: *model.Dir,
entries: Map,
own_blocks: model.Blocks,
own_bytes: u64,
// Additional counts collected from subdirectories. Subdirs may run final()
// from separate threads so these need to be protected.
blocks: model.Blocks = 0,
bytes: u64 = 0,
items: u32 = 0,
mtime: u64 = 0,
suberr: bool = false,
lock: std.Thread.Mutex = .{},
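// (Sketch of the flow: a finished sub-Dir calls final(parent), which takes
// this lock and folds its totals into the fields above; the parent only adds
// them to its own model.Dir once its own final() runs.)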
const Map = std.HashMap(*model.Entry, void, HashContext, 80);
const HashContext = struct {
pub fn hash(_: @This(), e: *model.Entry) u64 {
return std.hash.Wyhash.hash(0, e.name());
}
pub fn eql(_: @This(), a: *model.Entry, b: *model.Entry) bool {
return a == b or std.mem.eql(u8, a.name(), b.name());
}
};
const HashContextAdapted = struct {
pub fn hash(_: @This(), v: []const u8) u64 {
return std.hash.Wyhash.hash(0, v);
}
pub fn eql(_: @This(), a: []const u8, b: *model.Entry) bool {
return std.mem.eql(u8, a, b.name());
}
};
fn init(dir: *model.Dir) Dir {
var self = Dir{
.dir = dir,
.entries = Map.initContext(main.allocator, HashContext{}),
.own_blocks = dir.entry.pack.blocks,
.own_bytes = dir.entry.size,
};
var count: Map.Size = 0;
var it = dir.sub.ptr;
while (it) |e| : (it = e.next.ptr) count += 1;
self.entries.ensureUnusedCapacity(count) catch unreachable;
it = dir.sub.ptr;
while (it) |e| : (it = e.next.ptr)
self.entries.putAssumeCapacity(e, {});
return self;
}
fn getEntry(self: *Dir, t: *Thread, etype: model.EType, isext: bool, name: []const u8) *model.Entry {
if (self.entries.getKeyAdapted(name, HashContextAdapted{})) |e| {
// XXX: In-place conversion may be possible in some cases.
if (e.pack.etype.base() == etype.base() and (!isext or e.pack.isext)) {
e.pack.etype = etype;
e.pack.isext = isext;
_ = self.entries.removeAdapted(name, HashContextAdapted{});
return e;
}
}
const e = model.Entry.create(t.arena.allocator(), etype, isext, name);
e.next.ptr = self.dir.sub.ptr;
self.dir.sub.ptr = e;
return e;
}
pub fn addSpecial(self: *Dir, t: *Thread, name: []const u8, st: model.EType) void {
self.dir.items += 1;
if (st == .err) self.dir.pack.suberr = true;
_ = self.getEntry(t, st, false, name);
}
pub fn addStat(self: *Dir, t: *Thread, name: []const u8, stat: *const sink.Stat) *model.Entry {
if (global.stats) {
self.dir.items +|= 1;
if (stat.etype != .link) {
self.dir.entry.pack.blocks +|= stat.blocks;
self.dir.entry.size +|= stat.size;
}
if (self.dir.entry.ext()) |e| {
if (stat.ext.mtime > e.mtime) e.mtime = stat.ext.mtime;
}
}
const e = self.getEntry(t, stat.etype, main.config.extended and !stat.ext.isEmpty(), name);
statToEntry(stat, e, self.dir);
return e;
}
pub fn addDir(self: *Dir, t: *Thread, name: []const u8, stat: *const sink.Stat) Dir {
return init(self.addStat(t, name, stat).dir().?);
}
pub fn setReadError(self: *Dir) void {
self.dir.pack.err = true;
}
pub fn final(self: *Dir, parent: ?*Dir) void {
// Remove entries we've not seen
if (self.entries.count() > 0) {
var it = &self.dir.sub.ptr;
while (it.*) |e| {
if (self.entries.getKey(e) == e) it.* = e.next.ptr
else it = &e.next.ptr;
}
}
self.entries.deinit();
if (!global.stats) return;
// Grab counts collected from subdirectories
self.dir.entry.pack.blocks +|= self.blocks;
self.dir.entry.size +|= self.bytes;
self.dir.items +|= self.items;
if (self.suberr) self.dir.pack.suberr = true;
if (self.dir.entry.ext()) |e| {
if (self.mtime > e.mtime) e.mtime = self.mtime;
}
// Add own counts to parent
if (parent) |p| {
p.lock.lock();
defer p.lock.unlock();
p.blocks +|= self.dir.entry.pack.blocks - self.own_blocks;
p.bytes +|= self.dir.entry.size - self.own_bytes;
p.items +|= self.dir.items;
if (self.dir.entry.ext()) |e| {
if (e.mtime > p.mtime) p.mtime = e.mtime;
}
if (self.suberr or self.dir.pack.suberr or self.dir.pack.err) p.suberr = true;
}
}
};
pub fn createRoot(path: []const u8, stat: *const sink.Stat) Dir {
const p = global.root orelse blk: {
model.root = model.Entry.create(main.allocator, .dir, main.config.extended and !stat.ext.isEmpty(), path).dir().?;
break :blk model.root;
};
sink.global.state = .zeroing;
if (p.items > 10_000) main.handleEvent(false, true);
// Do the zeroStats() here, after the "root" entry has been
// stat'ed and opened, so that a fatal error on refresh won't
// zero-out the requested directory.
p.entry.zeroStats(p.parent);
sink.global.state = .running;
p.entry.pack.blocks = stat.blocks;
p.entry.size = stat.size;
p.pack.dev = model.devices.getId(stat.dev);
if (p.entry.ext()) |e| e.* = stat.ext;
return Dir.init(p);
}
pub fn done() void {
if (!global.stats) return;
sink.global.state = .hlcnt;
main.handleEvent(false, true);
const dir = global.root orelse model.root;
var it: ?*model.Dir = dir;
while (it) |p| : (it = p.parent) {
p.updateSubErr();
if (p != dir) {
p.entry.pack.blocks +|= dir.entry.pack.blocks;
p.entry.size +|= dir.entry.size;
p.items +|= dir.items + 1;
}
}
model.inodes.addAllStats();
}

73
src/mem_src.zig Normal file

@@ -0,0 +1,73 @@
// SPDX-FileCopyrightText: Yorhel <projects@yorhel.nl>
// SPDX-License-Identifier: MIT
const std = @import("std");
const main = @import("main.zig");
const model = @import("model.zig");
const sink = @import("sink.zig");
// Emit the memory tree to the sink in depth-first order from a single thread,
// suitable for JSON export.
fn toStat(e: *model.Entry) sink.Stat {
const el = e.link();
return sink.Stat{
.etype = e.pack.etype,
.blocks = e.pack.blocks,
.size = e.size,
.dev =
if (e.dir()) |d| model.devices.list.items[d.pack.dev]
else if (el) |l| model.devices.list.items[l.parent.pack.dev]
else undefined,
.ino = if (el) |l| l.ino else undefined,
.nlink = if (el) |l| l.pack.nlink else 1,
.ext = if (e.ext()) |x| x.* else .{},
};
}
const Ctx = struct {
sink: *sink.Thread,
stat: sink.Stat,
};
fn rec(ctx: *Ctx, dir: *sink.Dir, entry: *model.Entry) void {
if ((ctx.sink.files_seen.load(.monotonic) & 65) == 0)
main.handleEvent(false, false);
ctx.stat = toStat(entry);
switch (entry.pack.etype) {
.dir => {
const d = entry.dir().?;
var ndir = dir.addDir(ctx.sink, entry.name(), &ctx.stat);
ctx.sink.setDir(ndir);
if (d.pack.err) ndir.setReadError(ctx.sink);
var it = d.sub.ptr;
while (it) |e| : (it = e.next.ptr) rec(ctx, ndir, e);
ctx.sink.setDir(dir);
ndir.unref(ctx.sink);
},
.reg, .nonreg, .link => dir.addStat(ctx.sink, entry.name(), &ctx.stat),
else => dir.addSpecial(ctx.sink, entry.name(), entry.pack.etype),
}
}
pub fn run(d: *model.Dir) void {
const sink_threads = sink.createThreads(1);
var ctx: Ctx = .{
.sink = &sink_threads[0],
.stat = toStat(&d.entry),
};
var buf: std.ArrayListUnmanaged(u8) = .empty;
d.fmtPath(main.allocator, true, &buf);
const root = sink.createRoot(buf.items, &ctx.stat);
buf.deinit(main.allocator);
var it = d.sub.ptr;
while (it) |e| : (it = e.next.ptr) rec(&ctx, root, e);
root.unref(ctx.sink);
sink.done();
}

513
src/model.zig Normal file

@@ -0,0 +1,513 @@
// SPDX-FileCopyrightText: Yorhel <projects@yorhel.nl>
// SPDX-License-Identifier: MIT
const std = @import("std");
const main = @import("main.zig");
const ui = @import("ui.zig");
const util = @import("util.zig");
// Numbers are used in the binfmt export, so must be stable.
pub const EType = enum(i3) {
dir = 0,
reg = 1,
nonreg = 2,
link = 3,
err = -1,
pattern = -2,
otherfs = -3,
kernfs = -4,
pub fn base(t: EType) EType {
return switch (t) {
.dir, .link => t,
else => .reg,
};
}
// Whether this entry should be displayed as a "directory".
// Some dirs are actually represented in this data model as a File for efficiency.
pub fn isDirectory(t: EType) bool {
return switch (t) {
.dir, .otherfs, .kernfs => true,
else => false,
};
}
};
// Type for the Entry.Packed.blocks field. Smaller than a u64 to make room for flags.
pub const Blocks = u60;
// Entries read from bin_reader may refer to other entries by itemref rather than pointer.
// This is a hack that allows browser.zig to use the same types for in-memory
// and bin_reader-backed directory trees. Most code can only deal with
// in-memory trees and accesses the .ptr field directly.
pub const Ref = extern union {
ptr: ?*Entry align(1),
ref: u64 align(1),
pub fn isNull(r: Ref) bool {
if (main.config.binreader) return r.ref == std.math.maxInt(u64)
else return r.ptr == null;
}
};
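For illustration, the pointer-based iteration used throughout this file for in-memory trees (see removeLinks() and zeroStatsRec() below); code that must also handle bin_reader-backed trees, such as browser.zig, is assumed to go through isNull() instead of comparing .ptr against null. Here 'd' stands for any *Dir:

    var it = d.sub.ptr;                       // in-memory tree only
    while (it) |e| : (it = e.next.ptr) {
        // ... visit e ...
    }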
// Memory layout:
// (Ext +) Dir + name
// or: (Ext +) Link + name
// or: (Ext +) File + name
//
// Entry is always the first part of Dir, Link and File, so a pointer cast to
// *Entry is always safe and an *Entry can be casted to the full type. The Ext
// struct, if present, is placed before the *Entry pointer.
// These are all packed structs and hence do not have any alignment, which is
// great for saving memory but perhaps not very great for code size or
// performance.
pub const Entry = extern struct {
pack: Packed align(1),
size: u64 align(1) = 0,
next: Ref = .{ .ptr = null },
pub const Packed = packed struct(u64) {
etype: EType,
isext: bool,
blocks: Blocks = 0, // 512-byte blocks
};
const Self = @This();
pub fn dir(self: *Self) ?*Dir {
return if (self.pack.etype == .dir) @ptrCast(self) else null;
}
pub fn link(self: *Self) ?*Link {
return if (self.pack.etype == .link) @ptrCast(self) else null;
}
pub fn file(self: *Self) ?*File {
return if (self.pack.etype != .dir and self.pack.etype != .link) @ptrCast(self) else null;
}
pub fn name(self: *const Self) [:0]const u8 {
const self_name = switch (self.pack.etype) {
.dir => &@as(*const Dir, @ptrCast(self)).name,
.link => &@as(*const Link, @ptrCast(self)).name,
else => &@as(*const File, @ptrCast(self)).name,
};
const name_ptr: [*:0]const u8 = @ptrCast(self_name);
return std.mem.sliceTo(name_ptr, 0);
}
pub fn nameHash(self: *const Self) u64 {
return std.hash.Wyhash.hash(0, self.name());
}
pub fn ext(self: *Self) ?*Ext {
if (!self.pack.isext) return null;
return @ptrCast(@as([*]Ext, @ptrCast(self)) - 1);
}
fn alloc(comptime T: type, allocator: std.mem.Allocator, etype: EType, isext: bool, ename: []const u8) *Entry {
const size = (if (isext) @as(usize, @sizeOf(Ext)) else 0) + @sizeOf(T) + ename.len + 1;
var ptr = blk: while (true) {
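// Zig 0.14's allocWithOptions() takes its alignment as a ?u29 while 0.15 takes a std.mem.Alignment; inspect the parameter type at comptime and pass whichever form this std expects.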
const alignment = if (@typeInfo(@TypeOf(std.mem.Allocator.allocWithOptions)).@"fn".params[3].type == ?u29) 1 else std.mem.Alignment.@"1";
if (allocator.allocWithOptions(u8, size, alignment, null)) |p| break :blk p
else |_| {}
ui.oom();
};
if (isext) {
@as(*Ext, @ptrCast(ptr)).* = .{};
ptr = ptr[@sizeOf(Ext)..];
}
const e: *T = @ptrCast(ptr);
e.* = .{ .entry = .{ .pack = .{ .etype = etype, .isext = isext } } };
const n = @as([*]u8, @ptrCast(&e.name))[0..ename.len+1];
@memcpy(n[0..ename.len], ename);
n[ename.len] = 0;
return &e.entry;
}
pub fn create(allocator: std.mem.Allocator, etype: EType, isext: bool, ename: []const u8) *Entry {
return switch (etype) {
.dir => alloc(Dir, allocator, etype, isext, ename),
.link => alloc(Link, allocator, etype, isext, ename),
else => alloc(File, allocator, etype, isext, ename),
};
}
pub fn destroy(self: *Self, allocator: std.mem.Allocator) void {
const ptr: [*]u8 = if (self.ext()) |e| @ptrCast(e) else @ptrCast(self);
const esize: usize = switch (self.pack.etype) {
.dir => @sizeOf(Dir),
.link => @sizeOf(Link),
else => @sizeOf(File),
};
const size = (if (self.pack.isext) @as(usize, @sizeOf(Ext)) else 0) + esize + self.name().len + 1;
allocator.free(ptr[0..size]);
}
fn hasErr(self: *Self) bool {
return
if(self.dir()) |d| d.pack.err or d.pack.suberr
else self.pack.etype == .err;
}
fn removeLinks(self: *Entry) void {
if (self.dir()) |d| {
var it = d.sub.ptr;
while (it) |e| : (it = e.next.ptr) e.removeLinks();
}
if (self.link()) |l| l.removeLink();
}
fn zeroStatsRec(self: *Entry) void {
self.pack.blocks = 0;
self.size = 0;
if (self.dir()) |d| {
d.items = 0;
d.pack.err = false;
d.pack.suberr = false;
var it = d.sub.ptr;
while (it) |e| : (it = e.next.ptr) e.zeroStatsRec();
}
}
// Recursively set stats and those of sub-items to zero and removes counts
// from parent directories; as if this item does not exist in the tree.
// XXX: Does not update the 'suberr' flag of parent directories, make sure
// to call updateSubErr() afterwards.
pub fn zeroStats(self: *Entry, parent: ?*Dir) void {
self.removeLinks();
var it = parent;
while (it) |p| : (it = p.parent) {
p.entry.pack.blocks -|= self.pack.blocks;
p.entry.size -|= self.size;
p.items -|= 1 + (if (self.dir()) |d| d.items else 0);
}
self.zeroStatsRec();
}
};
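As the comment on zeroStats() warns, the caller is responsible for repairing the 'suberr' flags afterwards. A minimal sketch of that pattern, assuming 'entry' is a *Entry being detached from 'parent' (mem_sink's done() and the refresh/delete code walk parents the same way):

    entry.zeroStats(parent);
    var it: ?*Dir = parent;
    while (it) |p| : (it = p.parent) p.updateSubErr();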
const DevId = u30; // Can be reduced to make room for more flags in Dir.Packed.
pub const Dir = extern struct {
entry: Entry,
sub: Ref = .{ .ptr = null },
parent: ?*Dir align(1) = null,
// entry.{blocks,size}: Total size of all unique files + dirs. Non-shared hardlinks are counted only once.
// (i.e. the space you'll need if you created a filesystem with only this dir)
// shared_*: Unique hardlinks that still have references outside of this directory.
// (i.e. the space you won't reclaim by deleting this dir)
// (space reclaimed by deleting a dir =~ entry. - shared_)
shared_blocks: u64 align(1) = 0,
shared_size: u64 align(1) = 0,
items: u32 align(1) = 0,
pack: Packed align(1) = .{},
// Only used to find the @offsetOf, the name is written at this point as a 0-terminated string.
// (Old C habits die hard)
name: [0]u8 = undefined,
pub const Packed = packed struct {
// Indexes into the global 'devices.list' array
dev: DevId = 0,
err: bool = false,
suberr: bool = false,
};
pub fn fmtPath(self: *const @This(), alloc: std.mem.Allocator, withRoot: bool, out: *std.ArrayListUnmanaged(u8)) void {
if (!withRoot and self.parent == null) return;
var components: std.ArrayListUnmanaged([:0]const u8) = .empty;
defer components.deinit(main.allocator);
var it: ?*const @This() = self;
while (it) |e| : (it = e.parent)
if (withRoot or e.parent != null)
components.append(main.allocator, e.entry.name()) catch unreachable;
var i: usize = components.items.len-1;
while (true) {
if (i != components.items.len-1 and !(out.items.len != 0 and out.items[out.items.len-1] == '/'))
out.append(main.allocator, '/') catch unreachable;
out.appendSlice(alloc, components.items[i]) catch unreachable;
if (i == 0) break;
i -= 1;
}
}
// Only updates the suberr of this Dir, assumes child dirs have already
// been updated and does not propagate to parents.
pub fn updateSubErr(self: *@This()) void {
self.pack.suberr = false;
var sub = self.sub.ptr;
while (sub) |e| : (sub = e.next.ptr) {
if (e.hasErr()) {
self.pack.suberr = true;
break;
}
}
}
};
// File that's been hardlinked (i.e. nlink > 1)
pub const Link = extern struct {
entry: Entry,
parent: *Dir align(1) = undefined,
next: *Link align(1) = undefined, // circular linked list of all *Link nodes with the same dev,ino.
prev: *Link align(1) = undefined,
// dev is inherited from the parent Dir
ino: u64 align(1) = undefined,
pack: Pack align(1) = .{},
name: [0]u8 = undefined,
const Pack = packed struct(u32) {
// Whether this Inode is counted towards the parent directories.
// Is kept synchronized between all Link nodes with the same dev/ino.
counted: bool = false,
// Number of links for this inode. When set to '0', we don't know the
// actual nlink count, which happens for old JSON dumps.
nlink: u31 = undefined,
};
// Return value should be freed with main.allocator.
pub fn path(self: *const @This(), withRoot: bool) [:0]const u8 {
var out: std.ArrayListUnmanaged(u8) = .empty;
self.parent.fmtPath(main.allocator, withRoot, &out);
out.append(main.allocator, '/') catch unreachable;
out.appendSlice(main.allocator, self.entry.name()) catch unreachable;
return out.toOwnedSliceSentinel(main.allocator, 0) catch unreachable;
}
// Add this link to the inodes map and mark it as 'uncounted'.
pub fn addLink(l: *@This()) void {
const d = inodes.map.getOrPut(l) catch unreachable;
if (!d.found_existing) {
l.next = l;
l.prev = l;
} else {
inodes.setStats(d.key_ptr.*, false);
l.next = d.key_ptr.*;
l.prev = d.key_ptr.*.prev;
l.next.prev = l;
l.prev.next = l;
}
inodes.addUncounted(l);
}
// Remove this link from the inodes map and remove its stats from parent directories.
fn removeLink(l: *@This()) void {
inodes.setStats(l, false);
const entry = inodes.map.getEntry(l) orelse return;
if (l.next == l) {
_ = inodes.map.remove(l);
_ = inodes.uncounted.remove(l);
} else {
// XXX: If this link is actually removed from the filesystem, then
// the nlink count of the existing links should be updated to
// reflect that. But we can't do that here, because this function
// is also called before doing a filesystem refresh - in which case
// the nlink count likely won't change. Best we can hope for is
// that a refresh will encounter another link to the same inode and
// trigger an nlink change.
if (entry.key_ptr.* == l)
entry.key_ptr.* = l.next;
inodes.addUncounted(l.next);
l.next.prev = l.prev;
l.prev.next = l.next;
}
}
};
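Because every Link with the same dev/ino sits on a circular list, visiting all hardlinks of one inode is a do-while style walk over .next, the same shape setStats() uses below. 'l' is any *Link belonging to the inode:

    var it = l;
    while (true) {
        // ... inspect it.parent, it.entry.size, it.pack.nlink, ...
        it = it.next;
        if (it == l) break;
    }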
// Anything that's not an (indexed) directory or hardlink. Excluded directories are also "Files".
pub const File = extern struct {
entry: Entry,
name: [0]u8 = undefined,
};
pub const Ext = extern struct {
pack: Pack = .{},
mtime: u64 align(1) = 0,
uid: u32 align(1) = 0,
gid: u32 align(1) = 0,
mode: u16 align(1) = 0,
pub const Pack = packed struct(u8) {
hasmtime: bool = false,
hasuid: bool = false,
hasgid: bool = false,
hasmode: bool = false,
_pad: u4 = 0,
};
pub fn isEmpty(e: *const Ext) bool {
return !e.pack.hasmtime and !e.pack.hasuid and !e.pack.hasgid and !e.pack.hasmode;
}
};
// List of st_dev entries. Those are typically 64bits, but that's quite a waste
// of space when a typical scan won't cover many unique devices.
pub const devices = struct {
var lock = std.Thread.Mutex{};
// id -> dev
pub var list: std.ArrayListUnmanaged(u64) = .empty;
// dev -> id
var lookup = std.AutoHashMap(u64, DevId).init(main.allocator);
pub fn getId(dev: u64) DevId {
lock.lock();
defer lock.unlock();
const d = lookup.getOrPut(dev) catch unreachable;
if (!d.found_existing) {
if (list.items.len >= std.math.maxInt(DevId)) ui.die("Maximum number of device identifiers exceeded.\n", .{});
d.value_ptr.* = @as(DevId, @intCast(list.items.len));
list.append(main.allocator, dev) catch unreachable;
}
return d.value_ptr.*;
}
};
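A small round-trip sketch, assuming 'st_dev' holds a device number from a stat() call: getId() interns the 64-bit value, the compact id is what gets stored in Dir.Packed.dev, and the original value can be recovered from the list when exporting:

    const id: DevId = devices.getId(st_dev);   // interned, fits in a u30
    const dev: u64 = devices.list.items[id];   // the original st_dev again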
// Lookup table for ino -> *Link entries, used for hard link counting.
pub const inodes = struct {
// Keys are hashed by their (dev,ino), the *Link points to an arbitrary
// node in the list. Link entries with the same dev/ino are part of a
// circular linked list, so you can iterate through all of them with this
// single pointer.
const Map = std.HashMap(*Link, void, HashContext, 80);
pub var map = Map.init(main.allocator);
// List of nodes in 'map' with !counted, to speed up addAllStats().
// If this list grows large relative to the number of nodes in 'map', then
// this list is cleared and uncounted_full is set instead, so that
// addAllStats() will do a full iteration over 'map'.
var uncounted = std.HashMap(*Link, void, HashContext, 80).init(main.allocator);
var uncounted_full = true; // start with true for the initial scan
pub var lock = std.Thread.Mutex{};
const HashContext = struct {
pub fn hash(_: @This(), l: *Link) u64 {
var h = std.hash.Wyhash.init(0);
h.update(std.mem.asBytes(&@as(u32, l.parent.pack.dev)));
h.update(std.mem.asBytes(&l.ino));
return h.final();
}
pub fn eql(_: @This(), a: *Link, b: *Link) bool {
return a.ino == b.ino and a.parent.pack.dev == b.parent.pack.dev;
}
};
fn addUncounted(l: *Link) void {
if (uncounted_full) return;
if (uncounted.count() > map.count()/8) {
uncounted.clearAndFree();
uncounted_full = true;
} else
(uncounted.getOrPut(l) catch unreachable).key_ptr.* = l;
}
// Add/remove this inode from the parent Dir sizes. When removing stats,
// the list of *Links and their sizes and counts must be in the exact same
// state as when the stats were added. Hence, any modification to the Link
// state should be preceded by a setStats(.., false).
fn setStats(l: *Link, add: bool) void {
if (l.pack.counted == add) return;
var nlink: u31 = 0;
var inconsistent = false;
var dirs = std.AutoHashMap(*Dir, u32).init(main.allocator);
defer dirs.deinit();
var it = l;
while (true) {
it.pack.counted = add;
nlink += 1;
if (it.pack.nlink != l.pack.nlink) inconsistent = true;
var parent: ?*Dir = it.parent;
while (parent) |p| : (parent = p.parent) {
const de = dirs.getOrPut(p) catch unreachable;
if (de.found_existing) de.value_ptr.* += 1
else de.value_ptr.* = 1;
}
it = it.next;
if (it == l)
break;
}
// There aren't many sensible things we can do when we encounter
// inconsistent nlink counts. Current approach is to use the number of
// times we've seen this link in our tree as fallback for when the
// nlink counts aren't matching. May want to add a warning of some
// sorts to the UI at some point.
if (!inconsistent and l.pack.nlink >= nlink) nlink = l.pack.nlink;
// XXX: We're also not testing for inconsistent entry sizes, instead
// using the given 'l' size for all Links. Might warrant a warning as
// well.
var dir_iter = dirs.iterator();
if (add) {
while (dir_iter.next()) |de| {
de.key_ptr.*.entry.pack.blocks +|= l.entry.pack.blocks;
de.key_ptr.*.entry.size +|= l.entry.size;
if (de.value_ptr.* < nlink) {
de.key_ptr.*.shared_blocks +|= l.entry.pack.blocks;
de.key_ptr.*.shared_size +|= l.entry.size;
}
}
} else {
while (dir_iter.next()) |de| {
de.key_ptr.*.entry.pack.blocks -|= l.entry.pack.blocks;
de.key_ptr.*.entry.size -|= l.entry.size;
if (de.value_ptr.* < nlink) {
de.key_ptr.*.shared_blocks -|= l.entry.pack.blocks;
de.key_ptr.*.shared_size -|= l.entry.size;
}
}
}
}
// counters to track progress for addAllStats()
pub var add_total: usize = 0;
pub var add_done: usize = 0;
pub fn addAllStats() void {
if (uncounted_full) {
add_total = map.count();
add_done = 0;
var it = map.keyIterator();
while (it.next()) |e| {
setStats(e.*, true);
add_done += 1;
if ((add_done & 65) == 0) main.handleEvent(false, false);
}
} else {
add_total = uncounted.count();
add_done = 0;
var it = uncounted.keyIterator();
while (it.next()) |u| {
if (map.getKey(u.*)) |e| setStats(e, true);
add_done += 1;
if ((add_done & 65) == 0) main.handleEvent(false, false);
}
}
uncounted_full = false;
if (uncounted.count() > 0)
uncounted.clearAndFree();
}
};
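Pieced together from the functions above, the hardlink accounting life cycle is roughly: addLink() for every hardlinked file while scanning (serialized through inodes.lock, which is assumed to be what the public lock is for), then a single addAllStats() pass when the scan finishes, as mem_sink's done() does. With 'l' being a *Link that was just inserted into the tree:

    {
        inodes.lock.lock();
        defer inodes.lock.unlock();
        l.addLink();
    }
    // once, after the whole scan:
    inodes.addAllStats();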
pub var root: *Dir = undefined;
test "entry" {
var e = Entry.create(std.testing.allocator, .reg, false, "hello");
defer e.destroy(std.testing.allocator);
try std.testing.expectEqual(e.pack.etype, .reg);
try std.testing.expect(!e.pack.isext);
try std.testing.expectEqualStrings(e.name(), "hello");
}

src/path.c
View file

@@ -1,246 +0,0 @@
/* ncdu - NCurses Disk Usage
Copyright (c) 2007-2020 Yoran Heling
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
#include "global.h"
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <unistd.h>
#include <limits.h>
#ifndef LINK_MAX
# ifdef _POSIX_LINK_MAX
# define LINK_MAX _POSIX_LINK_MAX
# else
# define LINK_MAX 32
# endif
#endif
#define RPATH_CNKSZ 256
/* splits a path into components and does a bit of canonicalization.
a pointer to a reversed array of components is stored in res and the
number of components is returned.
cur is modified, and res has to be free()d after use */
static int path_split(char *cur, char ***res) {
char **old;
int i, j, n;
cur += strspn(cur, "/");
n = strlen(cur);
/* replace slashes with zeros */
for(i=j=0; i<n; i++)
if(cur[i] == '/') {
cur[i] = 0;
if(cur[i-1] != 0)
j++;
}
/* create array of the components */
old = xmalloc((j+1)*sizeof(char *));
*res = xmalloc((j+1)*sizeof(char *));
for(i=j=0; i<n; i++)
if(i == 0 || (cur[i-1] == 0 && cur[i] != 0))
old[j++] = cur+i;
/* re-order and remove parts */
for(i=n=0; --j>=0; ) {
if(!strcmp(old[j], "..")) {
n++;
continue;
}
if(!strcmp(old[j], "."))
continue;
if(n) {
n--;
continue;
}
(*res)[i++] = old[j];
}
free(old);
return i;
}
/* copies path and prepends cwd if needed, to ensure an absolute path
return value has to be free()'d manually */
static char *path_absolute(const char *path) {
int i, n;
char *ret;
/* not an absolute path? prepend cwd */
if(path[0] != '/') {
n = RPATH_CNKSZ;
ret = xmalloc(n);
errno = 0;
while(!getcwd(ret, n) && errno == ERANGE) {
n += RPATH_CNKSZ;
ret = xrealloc(ret, n);
errno = 0;
}
if(errno) {
free(ret);
return NULL;
}
i = strlen(path) + strlen(ret) + 2;
if(i > n)
ret = xrealloc(ret, i);
strcat(ret, "/");
strcat(ret, path);
/* otherwise, just make a copy */
} else {
ret = xmalloc(strlen(path)+1);
strcpy(ret, path);
}
return ret;
}
/* NOTE: cwd and the memory cur points to are unreliable after calling this
* function.
* TODO: This code is rather fragile and inefficient. A rewrite is in order.
*/
static char *path_real_rec(char *cur, int *links) {
int i, n, tmpl, lnkl = 0;
char **arr, *tmp, *lnk = NULL, *ret = NULL;
tmpl = strlen(cur)+1;
tmp = xmalloc(tmpl);
/* split path */
i = path_split(cur, &arr);
/* re-create full path */
strcpy(tmp, "/");
if(i > 0) {
lnkl = RPATH_CNKSZ;
lnk = xmalloc(lnkl);
if(chdir("/") < 0)
goto path_real_done;
}
while(--i>=0) {
if(arr[i][0] == 0)
continue;
/* check for symlink */
while((n = readlink(arr[i], lnk, lnkl)) == lnkl || (n < 0 && errno == ERANGE)) {
lnkl += RPATH_CNKSZ;
lnk = xrealloc(lnk, lnkl);
}
if(n < 0 && errno != EINVAL)
goto path_real_done;
if(n > 0) {
if(++*links > LINK_MAX) {
errno = ELOOP;
goto path_real_done;
}
lnk[n++] = 0;
/* create new path */
if(lnk[0] != '/')
n += strlen(tmp);
if(tmpl < n) {
tmpl = n;
tmp = xrealloc(tmp, tmpl);
}
if(lnk[0] != '/')
strcat(tmp, lnk);
else
strcpy(tmp, lnk);
/* append remaining directories */
while(--i>=0) {
n += strlen(arr[i])+1;
if(tmpl < n) {
tmpl = n;
tmp = xrealloc(tmp, tmpl);
}
strcat(tmp, "/");
strcat(tmp, arr[i]);
}
/* call path_real_rec() with the new path */
ret = path_real_rec(tmp, links);
goto path_real_done;
}
/* not a symlink, append component and go to the next part */
strcat(tmp, arr[i]);
if(i) {
if(chdir(arr[i]) < 0)
goto path_real_done;
strcat(tmp, "/");
}
}
ret = tmp;
path_real_done:
if(ret != tmp)
free(tmp);
if(lnkl > 0)
free(lnk);
free(arr);
return ret;
}
char *path_real(const char *orig) {
int links = 0;
char *tmp, *ret;
if(orig == NULL)
return NULL;
if((tmp = path_absolute(orig)) == NULL)
return NULL;
ret = path_real_rec(tmp, &links);
free(tmp);
return ret;
}
int path_chdir(const char *path) {
char **arr, *cur;
int i, r = -1;
if((cur = path_absolute(path)) == NULL)
return -1;
i = path_split(cur, &arr);
if(chdir("/") < 0)
goto path_chdir_done;
while(--i >= 0)
if(chdir(arr[i]) < 0)
goto path_chdir_done;
r = 0;
path_chdir_done:
free(cur);
free(arr);
return r;
}

src/path.h
View file

@@ -1,47 +0,0 @@
/* ncdu - NCurses Disk Usage
Copyright (c) 2007-2020 Yoran Heling
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
/*
path.c reimplements realpath() and chdir(), both functions accept
arbitrary long path names not limited by PATH_MAX.
Caveats/bugs:
- path_real uses chdir(), so it's not thread safe
- Process requires +x access for all directory components
- Potentially slow
- path_real doesn't check for the existence of the last component
- cwd is unreliable after path_real
*/
#ifndef _path_h
#define _path_h
/* path_real reimplements realpath(). The returned string is allocated
by malloc() and should be manually free()d by the programmer. */
extern char *path_real(const char *);
/* works exactly the same as chdir() */
extern int path_chdir(const char *);
#endif

src/quit.c
View file

@@ -1,50 +0,0 @@
/* ncdu - NCurses Disk Usage
Copyright (c) 2015-2020 Yoran Heling
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
#include "global.h"
#include <ncurses.h>
int quit_key(int ch) {
switch(ch) {
case 'y':
case 'Y':
return 1;
default:
pstate = ST_BROWSE;
}
return 0;
}
void quit_draw() {
browse_draw();
nccreate(4,30, "ncdu confirm quit");
ncaddstr(2,2, "Really quit? (y/N)");
}
void quit_init() {
pstate = ST_QUIT;
}

src/quit.h
View file

@@ -1,37 +0,0 @@
/* ncdu - NCurses Disk Usage
Copyright (c) 2015-2020 Yoran Heling
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
#ifndef _quit_h
#define _quit_h
#include "global.h"
int quit_key(int);
void quit_draw(void);
void quit_init(void);
#endif

325
src/scan.zig Normal file
View file

@@ -0,0 +1,325 @@
// SPDX-FileCopyrightText: Yorhel <projects@yorhel.nl>
// SPDX-License-Identifier: MIT
const std = @import("std");
const main = @import("main.zig");
const util = @import("util.zig");
const model = @import("model.zig");
const sink = @import("sink.zig");
const ui = @import("ui.zig");
const exclude = @import("exclude.zig");
const c = @import("c.zig").c;
// This function only works on Linux
fn isKernfs(dir: std.fs.Dir) bool {
var buf: c.struct_statfs = undefined;
if (c.fstatfs(dir.fd, &buf) != 0) return false; // silently ignoring errors isn't too nice.
const iskern = switch (util.castTruncate(u32, buf.f_type)) {
// These numbers are documented in the Linux 'statfs(2)' man page, so I assume they're stable.
0x42494e4d, // BINFMTFS_MAGIC
0xcafe4a11, // BPF_FS_MAGIC
0x27e0eb, // CGROUP_SUPER_MAGIC
0x63677270, // CGROUP2_SUPER_MAGIC
0x64626720, // DEBUGFS_MAGIC
0x1cd1, // DEVPTS_SUPER_MAGIC
0x9fa0, // PROC_SUPER_MAGIC
0x6165676c, // PSTOREFS_MAGIC
0x73636673, // SECURITYFS_MAGIC
0xf97cff8c, // SELINUX_MAGIC
0x62656572, // SYSFS_MAGIC
0x74726163 // TRACEFS_MAGIC
=> true,
else => false,
};
return iskern;
}
fn clamp(comptime T: type, comptime field: anytype, x: anytype) std.meta.fieldInfo(T, field).type {
return util.castClamp(std.meta.fieldInfo(T, field).type, x);
}
fn truncate(comptime T: type, comptime field: anytype, x: anytype) std.meta.fieldInfo(T, field).type {
return util.castTruncate(std.meta.fieldInfo(T, field).type, x);
}
pub fn statAt(parent: std.fs.Dir, name: [:0]const u8, follow: bool, symlink: ?*bool) !sink.Stat {
// std.posix.fstatatZ() in Zig 0.14 is not suitable due to https://github.com/ziglang/zig/issues/23463
var stat: std.c.Stat = undefined;
if (std.c.fstatat(parent.fd, name, &stat, if (follow) 0 else std.c.AT.SYMLINK_NOFOLLOW) != 0) {
return switch (std.c._errno().*) {
@intFromEnum(std.c.E.NOENT) => error.FileNotFound,
@intFromEnum(std.c.E.NAMETOOLONG) => error.NameTooLong,
@intFromEnum(std.c.E.NOMEM) => error.OutOfMemory,
@intFromEnum(std.c.E.ACCES) => error.AccessDenied,
else => error.Unexpected,
};
}
if (symlink) |s| s.* = std.c.S.ISLNK(stat.mode);
return sink.Stat{
.etype =
if (std.c.S.ISDIR(stat.mode)) .dir
else if (stat.nlink > 1) .link
else if (!std.c.S.ISREG(stat.mode)) .nonreg
else .reg,
.blocks = clamp(sink.Stat, .blocks, stat.blocks),
.size = clamp(sink.Stat, .size, stat.size),
.dev = truncate(sink.Stat, .dev, stat.dev),
.ino = truncate(sink.Stat, .ino, stat.ino),
.nlink = clamp(sink.Stat, .nlink, stat.nlink),
.ext = .{
.pack = .{
.hasmtime = true,
.hasuid = true,
.hasgid = true,
.hasmode = true,
},
.mtime = clamp(model.Ext, .mtime, stat.mtime().sec),
.uid = truncate(model.Ext, .uid, stat.uid),
.gid = truncate(model.Ext, .gid, stat.gid),
.mode = truncate(model.Ext, .mode, stat.mode),
},
};
}
fn isCacheDir(dir: std.fs.Dir) bool {
const sig = "Signature: 8a477f597d28d172789f06886806bc55";
const f = dir.openFileZ("CACHEDIR.TAG", .{}) catch return false;
defer f.close();
var buf: [sig.len]u8 = undefined;
const len = f.readAll(&buf) catch return false;
return len == sig.len and std.mem.eql(u8, &buf, sig);
}
const State = struct {
// Simple LIFO queue. Threads attempt to fully scan their assigned
// directory before consulting this queue for their next task, so there
// shouldn't be too much contention here.
// TODO: unless threads keep juggling around leaf nodes, need to measure
// actual use.
// There's no real reason for this to be LIFO other than that that was the
// easiest to implement. Queue order has an effect on scheduling, but it's
// impossible for me to predict how that ends up affecting performance.
queue: [QUEUE_SIZE]*Dir = undefined,
queue_len: std.atomic.Value(usize) = std.atomic.Value(usize).init(0),
queue_lock: std.Thread.Mutex = .{},
queue_cond: std.Thread.Condition = .{},
threads: []Thread,
waiting: usize = 0,
// No clue what this should be set to. Dir structs aren't small so we don't
// want to have too many of them.
const QUEUE_SIZE = 16;
// Returns true if the given Dir has been queued, false if the queue is full.
fn tryPush(self: *State, d: *Dir) bool {
if (self.queue_len.load(.acquire) == QUEUE_SIZE) return false;
{
self.queue_lock.lock();
defer self.queue_lock.unlock();
if (self.queue_len.load(.monotonic) == QUEUE_SIZE) return false;
const slot = self.queue_len.fetchAdd(1, .monotonic);
self.queue[slot] = d;
}
self.queue_cond.signal();
return true;
}
// Blocks while the queue is empty, returns null when all threads are blocking.
fn waitPop(self: *State) ?*Dir {
self.queue_lock.lock();
defer self.queue_lock.unlock();
self.waiting += 1;
while (self.queue_len.load(.monotonic) == 0) {
if (self.waiting == self.threads.len) {
self.queue_cond.broadcast();
return null;
}
self.queue_cond.wait(&self.queue_lock);
}
self.waiting -= 1;
const slot = self.queue_len.fetchSub(1, .monotonic) - 1;
defer self.queue[slot] = undefined;
return self.queue[slot];
}
};
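How the queue gets seeded and drained, condensed from scan() at the bottom of this file: the root directory is pushed first, the extra worker threads each run Thread.run() (which pops via waitPop()), and the main thread joins in as worker 0. Names follow the locals in scan(); the spawn options are simplified here:

    _ = state.tryPush(dir);                        // seed the queue with the root Dir
    for (state.threads[1..]) |*t|
        t.thread = std.Thread.spawn(.{}, Thread.run, .{t}) catch unreachable;
    state.threads[0].run();                        // the main thread scans too
    for (state.threads[1..]) |*t| t.thread.join();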
const Dir = struct {
fd: std.fs.Dir,
dev: u64,
pat: exclude.Patterns,
it: std.fs.Dir.Iterator,
sink: *sink.Dir,
fn create(fd: std.fs.Dir, dev: u64, pat: exclude.Patterns, s: *sink.Dir) *Dir {
const d = main.allocator.create(Dir) catch unreachable;
d.* = .{
.fd = fd,
.dev = dev,
.pat = pat,
.sink = s,
.it = fd.iterate(),
};
return d;
}
fn destroy(d: *Dir, t: *Thread) void {
d.pat.deinit();
d.fd.close();
d.sink.unref(t.sink);
main.allocator.destroy(d);
}
};
const Thread = struct {
thread_num: usize,
sink: *sink.Thread,
state: *State,
stack: std.ArrayListUnmanaged(*Dir) = .empty,
thread: std.Thread = undefined,
namebuf: [4096]u8 = undefined,
fn scanOne(t: *Thread, dir: *Dir, name_: []const u8) void {
if (name_.len > t.namebuf.len - 1) {
dir.sink.addSpecial(t.sink, name_, .err);
return;
}
@memcpy(t.namebuf[0..name_.len], name_);
t.namebuf[name_.len] = 0;
const name = t.namebuf[0..name_.len:0];
const excluded = dir.pat.match(name);
if (excluded == false) { // matched either a file or directory, so we can exclude this before stat()ing.
dir.sink.addSpecial(t.sink, name, .pattern);
return;
}
var symlink: bool = undefined;
var stat = statAt(dir.fd, name, false, &symlink) catch {
dir.sink.addSpecial(t.sink, name, .err);
return;
};
if (main.config.follow_symlinks and symlink) {
if (statAt(dir.fd, name, true, &symlink)) |nstat| {
if (nstat.etype != .dir) {
stat = nstat;
// Symlink targets may reside on different filesystems,
// this will break hardlink detection and counting so let's disable it.
if (stat.etype == .link and stat.dev != dir.dev) {
stat.etype = .reg;
stat.nlink = 1;
}
}
} else |_| {}
}
if (main.config.same_fs and stat.dev != dir.dev) {
dir.sink.addSpecial(t.sink, name, .otherfs);
return;
}
if (stat.etype != .dir) {
dir.sink.addStat(t.sink, name, &stat);
return;
}
if (excluded == true) {
dir.sink.addSpecial(t.sink, name, .pattern);
return;
}
var edir = dir.fd.openDirZ(name, .{ .no_follow = true, .iterate = true }) catch {
const s = dir.sink.addDir(t.sink, name, &stat);
s.setReadError(t.sink);
s.unref(t.sink);
return;
};
if (@import("builtin").os.tag == .linux
and main.config.exclude_kernfs
and stat.dev != dir.dev
and isKernfs(edir)
) {
edir.close();
dir.sink.addSpecial(t.sink, name, .kernfs);
return;
}
if (main.config.exclude_caches and isCacheDir(edir)) {
dir.sink.addSpecial(t.sink, name, .pattern);
edir.close();
return;
}
const s = dir.sink.addDir(t.sink, name, &stat);
const ndir = Dir.create(edir, stat.dev, dir.pat.enter(name), s);
if (main.config.threads == 1 or !t.state.tryPush(ndir))
t.stack.append(main.allocator, ndir) catch unreachable;
}
fn run(t: *Thread) void {
defer t.stack.deinit(main.allocator);
while (t.state.waitPop()) |dir| {
t.stack.append(main.allocator, dir) catch unreachable;
while (t.stack.items.len > 0) {
const d = t.stack.items[t.stack.items.len - 1];
t.sink.setDir(d.sink);
if (t.thread_num == 0) main.handleEvent(false, false);
const entry = d.it.next() catch blk: {
dir.sink.setReadError(t.sink);
break :blk null;
};
if (entry) |e| t.scanOne(d, e.name)
else {
t.sink.setDir(null);
t.stack.pop().?.destroy(t);
}
}
}
}
};
pub fn scan(path: [:0]const u8) !void {
const sink_threads = sink.createThreads(main.config.threads);
defer sink.done();
var symlink: bool = undefined;
const stat = try statAt(std.fs.cwd(), path, true, &symlink);
const fd = try std.fs.cwd().openDirZ(path, .{ .iterate = true });
var state = State{
.threads = main.allocator.alloc(Thread, main.config.threads) catch unreachable,
};
defer main.allocator.free(state.threads);
const root = sink.createRoot(path, &stat);
const dir = Dir.create(fd, stat.dev, exclude.getPatterns(path), root);
_ = state.tryPush(dir);
for (sink_threads, state.threads, 0..) |*s, *t, n|
t.* = .{ .sink = s, .state = &state, .thread_num = n };
// XXX: Continue with fewer threads on error?
for (state.threads[1..]) |*t| {
t.thread = std.Thread.spawn(
.{ .stack_size = 128 * 1024, .allocator = main.allocator }, Thread.run, .{t}
) catch |e| ui.die("Error spawning thread: {}\n", .{e});
}
state.threads[0].run();
for (state.threads[1..]) |*t| t.thread.join();
}

src/shell.c
View file

@@ -1,82 +0,0 @@
/* ncdu - NCurses Disk Usage
Copyright (c) 2007-2020 Yoran Heling
Shell support: Copyright (c) 2014 Thomas Jarosch
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
#include "config.h"
#include "global.h"
#include "dirlist.h"
#include "util.h"
#include <ncurses.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>
void shell_draw() {
char *full_path;
int res;
/* suspend ncurses mode */
def_prog_mode();
endwin();
full_path = getpath(dirlist_par);
res = chdir(full_path);
if (res != 0) {
reset_prog_mode();
clear();
printw("ERROR: Can't change directory: %s (errcode: %d)\n"
"\n"
"Press any key to continue.",
full_path, res);
} else {
char *shell = getenv("NCDU_SHELL");
if (shell == NULL) {
shell = getenv("SHELL");
if (shell == NULL)
shell = DEFAULT_SHELL;
}
res = system(shell);
/* resume ncurses mode */
reset_prog_mode();
if (res == -1 || !WIFEXITED(res) || WEXITSTATUS(res) == 127) {
clear();
printw("ERROR: Can't execute shell interpreter: %s\n"
"\n"
"Press any key to continue.",
shell);
}
}
refresh();
pstate = ST_BROWSE;
}
void shell_init() {
pstate = ST_SHELL;
}

src/shell.h
View file

@@ -1,35 +0,0 @@
/* ncdu - NCurses Disk Usage
Copyright (c) 2007-2020 Yoran Heling
Shell support: Copyright (c) 2014 Thomas Jarosch
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
#ifndef _shell_h
#define _shell_h
#include "global.h"
void shell_draw(void);
void shell_init();
#endif

498
src/sink.zig Normal file
View file

@@ -0,0 +1,498 @@
// SPDX-FileCopyrightText: Yorhel <projects@yorhel.nl>
// SPDX-License-Identifier: MIT
const std = @import("std");
const main = @import("main.zig");
const model = @import("model.zig");
const mem_src = @import("mem_src.zig");
const mem_sink = @import("mem_sink.zig");
const json_export = @import("json_export.zig");
const bin_export = @import("bin_export.zig");
const ui = @import("ui.zig");
const util = @import("util.zig");
// Terminology note:
// "source" is where scan results come from, these are scan.zig, mem_src.zig
// and json_import.zig.
// "sink" is where scan results go to. This file provides a generic sink API
// for sources to use. The API forwards the results to specific sink
// implementations (mem_sink.zig or json_export.zig) and provides progress
// updates.
// API for sources:
//
// Single-threaded:
//
// createThreads(1)
// dir = createRoot(name, stat)
// dir.addSpecial(name, opt)
// dir.addStat(name, stat)
// sub = dir.addDir(name, stat)
// (no dir.stuff here)
// sub.addstuff();
// sub.unref();
// dir.unref();
// done()
//
// Multi-threaded interleaving:
//
// createThreads(n)
// dir = createRoot(name, stat)
// dir.addSpecial(name, opt)
// dir.addStat(name, stat)
// sub = dir.addDir(...)
// sub.addstuff();
// sub2 = dir.addDir(..);
// sub.unref();
// dir.unref(); // <- no more direct descendants for dir, but subdirs could still be active
// sub2.addStuff();
// sub2.unref(); // <- this is where 'dir' is really done.
// done()
//
// Rule:
// No concurrent method calls on a single Dir object, but objects may be passed between threads.
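For concreteness, a single-threaded source following the convention above, written as an external module would call it; the per-thread handle is the second argument of the real methods below, and all the *_stat values stand for filled-in sink.Stat structs (scan.zig and mem_src.zig are the real implementations):

    const threads = sink.createThreads(1);
    const t = &threads[0];
    const root = sink.createRoot("/scanned/path", &root_stat);  // root_stat.etype == .dir
    root.addSpecial(t, "proc", .kernfs);
    const sub = root.addDir(t, "subdir", &subdir_stat);         // subdir_stat.etype == .dir
    sub.addStat(t, "file.txt", &file_stat);                     // anything but .dir
    sub.unref(t);
    root.unref(t);
    sink.done();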
// Concise stat struct for fields we're interested in, with the types used by the model.
pub const Stat = struct {
etype: model.EType = .reg,
blocks: model.Blocks = 0,
size: u64 = 0,
dev: u64 = 0,
ino: u64 = 0,
nlink: u31 = 0,
ext: model.Ext = .{},
};
pub const Dir = struct {
refcnt: std.atomic.Value(usize) = std.atomic.Value(usize).init(1),
name: []const u8,
parent: ?*Dir,
out: Out,
const Out = union(enum) {
mem: mem_sink.Dir,
json: json_export.Dir,
bin: bin_export.Dir,
};
pub fn addSpecial(d: *Dir, t: *Thread, name: []const u8, sp: model.EType) void {
std.debug.assert(@intFromEnum(sp) < 0); // >=0 aren't "special"
_ = t.files_seen.fetchAdd(1, .monotonic);
switch (d.out) {
.mem => |*m| m.addSpecial(&t.sink.mem, name, sp),
.json => |*j| j.addSpecial(name, sp),
.bin => |*b| b.addSpecial(&t.sink.bin, name, sp),
}
if (sp == .err) {
global.last_error_lock.lock();
defer global.last_error_lock.unlock();
if (global.last_error) |p| main.allocator.free(p);
const p = d.path();
global.last_error = std.fs.path.joinZ(main.allocator, &.{ p, name }) catch unreachable;
main.allocator.free(p);
}
}
pub fn addStat(d: *Dir, t: *Thread, name: []const u8, stat: *const Stat) void {
_ = t.files_seen.fetchAdd(1, .monotonic);
_ = t.addBytes((stat.blocks *| 512) / @max(1, stat.nlink));
std.debug.assert(stat.etype != .dir);
switch (d.out) {
.mem => |*m| _ = m.addStat(&t.sink.mem, name, stat),
.json => |*j| j.addStat(name, stat),
.bin => |*b| b.addStat(&t.sink.bin, name, stat),
}
}
pub fn addDir(d: *Dir, t: *Thread, name: []const u8, stat: *const Stat) *Dir {
_ = t.files_seen.fetchAdd(1, .monotonic);
_ = t.addBytes(stat.blocks *| 512);
std.debug.assert(stat.etype == .dir);
std.debug.assert(d.out != .json or d.refcnt.load(.monotonic) == 1);
const s = main.allocator.create(Dir) catch unreachable;
s.* = .{
.name = main.allocator.dupe(u8, name) catch unreachable,
.parent = d,
.out = switch (d.out) {
.mem => |*m| .{ .mem = m.addDir(&t.sink.mem, name, stat) },
.json => |*j| .{ .json = j.addDir(name, stat) },
.bin => |*b| .{ .bin = b.addDir(stat) },
},
};
d.ref();
return s;
}
pub fn setReadError(d: *Dir, t: *Thread) void {
_ = t;
switch (d.out) {
.mem => |*m| m.setReadError(),
.json => |*j| j.setReadError(),
.bin => |*b| b.setReadError(),
}
global.last_error_lock.lock();
defer global.last_error_lock.unlock();
if (global.last_error) |p| main.allocator.free(p);
global.last_error = d.path();
}
fn path(d: *Dir) [:0]u8 {
var components: std.ArrayListUnmanaged([]const u8) = .empty;
defer components.deinit(main.allocator);
var it: ?*Dir = d;
while (it) |e| : (it = e.parent) components.append(main.allocator, e.name) catch unreachable;
var out: std.ArrayListUnmanaged(u8) = .empty;
var i: usize = components.items.len-1;
while (true) {
if (i != components.items.len-1 and !(out.items.len != 0 and out.items[out.items.len-1] == '/'))
out.append(main.allocator, '/') catch unreachable;
out.appendSlice(main.allocator, components.items[i]) catch unreachable;
if (i == 0) break;
i -= 1;
}
return out.toOwnedSliceSentinel(main.allocator, 0) catch unreachable;
}
fn ref(d: *Dir) void {
_ = d.refcnt.fetchAdd(1, .monotonic);
}
pub fn unref(d: *Dir, t: *Thread) void {
if (d.refcnt.fetchSub(1, .release) != 1) return;
_ = d.refcnt.load(.acquire);
switch (d.out) {
.mem => |*m| m.final(if (d.parent) |p| &p.out.mem else null),
.json => |*j| j.final(),
.bin => |*b| b.final(&t.sink.bin, d.name, if (d.parent) |p| &p.out.bin else null),
}
if (d.parent) |p| p.unref(t);
if (d.name.len > 0) main.allocator.free(d.name);
main.allocator.destroy(d);
}
};
pub const Thread = struct {
current_dir: ?*Dir = null,
lock: std.Thread.Mutex = .{},
// On 32-bit architectures, bytes_seen is protected by the above mutex instead.
bytes_seen: std.atomic.Value(u64) = std.atomic.Value(u64).init(0),
files_seen: std.atomic.Value(u32) = std.atomic.Value(u32).init(0),
sink: union {
mem: mem_sink.Thread,
json: void,
bin: bin_export.Thread,
} = .{.mem = .{}},
fn addBytes(t: *Thread, bytes: u64) void {
if (@bitSizeOf(usize) >= 64) _ = t.bytes_seen.fetchAdd(bytes, .monotonic)
else {
t.lock.lock();
defer t.lock.unlock();
t.bytes_seen.raw += bytes;
}
}
fn getBytes(t: *Thread) u64 {
if (@bitSizeOf(usize) >= 64) return t.bytes_seen.load(.monotonic)
else {
t.lock.lock();
defer t.lock.unlock();
return t.bytes_seen.raw;
}
}
pub fn setDir(t: *Thread, d: ?*Dir) void {
t.lock.lock();
defer t.lock.unlock();
t.current_dir = d;
}
};
pub const global = struct {
pub var state: enum { done, err, zeroing, hlcnt, running } = .running;
pub var threads: []Thread = undefined;
pub var sink: enum { json, mem, bin } = .mem;
pub var last_error: ?[:0]u8 = null;
var last_error_lock = std.Thread.Mutex{};
var need_confirm_quit = false;
};
// Must be the first thing to call from a source; initializes global state.
pub fn createThreads(num: usize) []Thread {
// JSON export does not support multiple threads, scan into memory first.
if (global.sink == .json and num > 1) {
global.sink = .mem;
mem_sink.global.stats = false;
}
global.state = .running;
if (global.last_error) |p| main.allocator.free(p);
global.last_error = null;
global.threads = main.allocator.alloc(Thread, num) catch unreachable;
for (global.threads) |*t| t.* = .{
.sink = switch (global.sink) {
.mem => .{ .mem = .{} },
.json => .{ .json = {} },
.bin => .{ .bin = .{} },
},
};
return global.threads;
}
// Must be the last thing to call from a source.
pub fn done() void {
switch (global.sink) {
.mem => mem_sink.done(),
.json => json_export.done(),
.bin => bin_export.done(global.threads),
}
global.state = .done;
main.allocator.free(global.threads);
// We scanned into memory, now we need to scan from memory to JSON
if (global.sink == .mem and !mem_sink.global.stats) {
global.sink = .json;
mem_src.run(model.root);
}
// Clear the screen when done.
if (main.config.scan_ui == .line) main.handleEvent(false, true);
}
pub fn createRoot(path: []const u8, stat: *const Stat) *Dir {
const d = main.allocator.create(Dir) catch unreachable;
d.* = .{
.name = main.allocator.dupe(u8, path) catch unreachable,
.parent = null,
.out = switch (global.sink) {
.mem => .{ .mem = mem_sink.createRoot(path, stat) },
.json => .{ .json = json_export.createRoot(path, stat) },
.bin => .{ .bin = bin_export.createRoot(stat, global.threads) },
},
};
return d;
}
fn drawConsole() void {
const st = struct {
var ansi: ?bool = null;
var lines_written: usize = 0;
};
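// std.io.getStdErr() disappeared in the Zig 0.15 IO rework; newer std exposes the handle as std.fs.File.stderr().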
const stderr = if (@hasDecl(std.io, "getStdErr")) std.io.getStdErr() else std.fs.File.stderr();
const ansi = st.ansi orelse blk: {
const t = stderr.supportsAnsiEscapeCodes();
st.ansi = t;
break :blk t;
};
var buf: [4096]u8 = undefined;
var strm = std.io.fixedBufferStream(buf[0..]);
var wr = strm.writer();
while (ansi and st.lines_written > 0) {
wr.writeAll("\x1b[1F\x1b[2K") catch {};
st.lines_written -= 1;
}
if (global.state == .hlcnt) {
wr.writeAll("Counting hardlinks...") catch {};
if (model.inodes.add_total > 0)
wr.print(" {} / {}", .{ model.inodes.add_done, model.inodes.add_total }) catch {};
wr.writeByte('\n') catch {};
st.lines_written += 1;
} else if (global.state == .running) {
var bytes: u64 = 0;
var files: u64 = 0;
for (global.threads) |*t| {
bytes +|= t.getBytes();
files += t.files_seen.load(.monotonic);
}
const r = ui.FmtSize.fmt(bytes);
wr.print("{} files / {s}{s}\n", .{files, r.num(), r.unit}) catch {};
st.lines_written += 1;
for (global.threads, 0..) |*t, i| {
const dir = blk: {
t.lock.lock();
defer t.lock.unlock();
break :blk if (t.current_dir) |d| d.path() else null;
};
wr.print(" #{}: {s}\n", .{i+1, ui.shorten(ui.toUtf8(dir orelse "(waiting)"), 73)}) catch {};
st.lines_written += 1;
if (dir) |p| main.allocator.free(p);
}
}
stderr.writeAll(strm.getWritten()) catch {};
}
fn drawProgress() void {
const st = struct { var animation_pos: usize = 0; };
var bytes: u64 = 0;
var files: u64 = 0;
for (global.threads) |*t| {
bytes +|= t.getBytes();
files += t.files_seen.load(.monotonic);
}
ui.init();
const width = ui.cols -| 5;
const numthreads: u32 = @intCast(@min(global.threads.len, @max(1, ui.rows -| 10)));
const box = ui.Box.create(8 + numthreads, width, "Scanning...");
box.move(2, 2);
ui.addstr("Total items: ");
ui.addnum(.default, files);
if (width > 48) {
box.move(2, 30);
ui.addstr("size: ");
ui.addsize(.default, bytes);
}
for (0..numthreads) |i| {
box.move(3+@as(u32, @intCast(i)), 4);
const dir = blk: {
const t = &global.threads[i];
t.lock.lock();
defer t.lock.unlock();
break :blk if (t.current_dir) |d| d.path() else null;
};
ui.addstr(ui.shorten(ui.toUtf8(dir orelse "(waiting)"), width -| 6));
if (dir) |p| main.allocator.free(p);
}
blk: {
global.last_error_lock.lock();
defer global.last_error_lock.unlock();
const err = global.last_error orelse break :blk;
box.move(4 + numthreads, 2);
ui.style(.bold);
ui.addstr("Warning: ");
ui.style(.default);
ui.addstr("error scanning ");
ui.addstr(ui.shorten(ui.toUtf8(err), width -| 28));
box.move(5 + numthreads, 3);
ui.addstr("some directory sizes may not be correct.");
}
if (global.need_confirm_quit) {
box.move(6 + numthreads, width -| 20);
ui.addstr("Press ");
ui.style(.key);
ui.addch('y');
ui.style(.default);
ui.addstr(" to confirm");
} else {
box.move(6 + numthreads, width -| 18);
ui.addstr("Press ");
ui.style(.key);
ui.addch('q');
ui.style(.default);
ui.addstr(" to abort");
}
if (main.config.update_delay < std.time.ns_per_s and width > 40) {
const txt = "Scanning...";
st.animation_pos += 1;
if (st.animation_pos >= txt.len*2) st.animation_pos = 0;
if (st.animation_pos < txt.len) {
box.move(6 + numthreads, 2);
for (txt[0..st.animation_pos + 1]) |t| ui.addch(t);
} else {
var i: u32 = txt.len-1;
while (i > st.animation_pos-txt.len) : (i -= 1) {
box.move(6 + numthreads, 2+i);
ui.addch(txt[i]);
}
}
}
}
fn drawError() void {
const width = ui.cols -| 5;
const box = ui.Box.create(6, width, "Scan error");
box.move(2, 2);
ui.addstr("Unable to open directory:");
box.move(3, 4);
ui.addstr(ui.shorten(ui.toUtf8(global.last_error.?), width -| 10));
box.move(4, width -| 27);
ui.addstr("Press any key to continue");
}
fn drawMessage(msg: []const u8) void {
const width = ui.cols -| 5;
const box = ui.Box.create(4, width, "Scan error");
box.move(2, 2);
ui.addstr(msg);
}
pub fn draw() void {
switch (main.config.scan_ui.?) {
.none => {},
.line => drawConsole(),
.full => {
ui.init();
switch (global.state) {
.done => {},
.err => drawError(),
.zeroing => {
const box = ui.Box.create(4, ui.cols -| 5, "Initializing");
box.move(2, 2);
ui.addstr("Clearing directory counts...");
},
.hlcnt => {
const box = ui.Box.create(4, ui.cols -| 5, "Finalizing");
box.move(2, 2);
ui.addstr("Counting hardlinks... ");
if (model.inodes.add_total > 0) {
ui.addnum(.default, model.inodes.add_done);
ui.addstr(" / ");
ui.addnum(.default, model.inodes.add_total);
}
},
.running => drawProgress(),
}
},
}
}
pub fn keyInput(ch: i32) void {
switch (global.state) {
.done => {},
.err => main.state = .browse,
.zeroing => {},
.hlcnt => {},
.running => {
switch (ch) {
'q' => {
if (main.config.confirm_quit) global.need_confirm_quit = !global.need_confirm_quit
else ui.quit();
},
'y', 'Y' => if (global.need_confirm_quit) ui.quit(),
else => global.need_confirm_quit = false,
}
},
}
}

690
src/ui.zig Normal file
View file

@@ -0,0 +1,690 @@
// SPDX-FileCopyrightText: Yorhel <projects@yorhel.nl>
// SPDX-License-Identifier: MIT
// Ncurses wrappers and TUI helper functions.
const std = @import("std");
const main = @import("main.zig");
const util = @import("util.zig");
const c = @import("c.zig").c;
pub var inited: bool = false;
pub var main_thread: std.Thread.Id = undefined;
pub var oom_threads = std.atomic.Value(usize).init(0);
pub var rows: u32 = undefined;
pub var cols: u32 = undefined;
pub fn die(comptime fmt: []const u8, args: anytype) noreturn {
deinit();
std.debug.print(fmt, args);
std.process.exit(1);
}
pub fn quit() noreturn {
deinit();
std.process.exit(0);
}
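// std.time.sleep is gone in newer Zig releases; fall back to std.Thread.sleep when it's missing.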
const sleep = if (@hasDecl(std.time, "sleep")) std.time.sleep else std.Thread.sleep;
// Should be called when malloc fails. Will show a message to the user, wait
// for a second and return to give it another try.
// Glitch: this function may be called while we're in the process of drawing
// the ncurses window, in which case the deinit/reinit will cause the already
// drawn part to be discarded. A redraw will fix that, but that tends to only
// happen after user input.
// Also, init() and other ncurses-related functions may have hidden allocation,
// no clue if ncurses will consistently report OOM, but we're not handling that
// right now.
pub fn oom() void {
@branchHint(.cold);
if (main_thread == std.Thread.getCurrentId()) {
const haveui = inited;
deinit();
std.debug.print("\x1b7\x1b[JOut of memory, trying again in 1 second. Hit Ctrl-C to abort.\x1b8", .{});
sleep(std.time.ns_per_s);
if (haveui)
init();
} else {
_ = oom_threads.fetchAdd(1, .monotonic);
sleep(std.time.ns_per_s);
_ = oom_threads.fetchSub(1, .monotonic);
}
}
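The calling pattern this is designed for, mirroring Entry.alloc() in model.zig: retry the allocation in a loop and call oom() between attempts, so the user gets a chance to free up memory or abort. 'size' stands for whatever byte count is needed:

    const buf = blk: while (true) {
        if (main.allocator.alloc(u8, size)) |p| break :blk p else |_| {}
        ui.oom();   // message plus roughly a one-second delay, then the loop retries
    };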
// Dumb strerror() alternative for Zig file I/O, not complete.
// (Would be nicer if Zig just exposed errno so I could call strerror() directly)
pub fn errorString(e: anyerror) [:0]const u8 {
return switch (e) {
error.AccessDenied => "Access denied",
error.DirNotEmpty => "Directory not empty",
error.DiskQuota => "Disk quota exceeded",
error.FileBusy => "File is busy",
error.FileNotFound => "No such file or directory",
error.FileSystem => "I/O error", // This one is shit, Zig uses this for both EIO and ELOOP in execve().
error.FileTooBig => "File too big",
error.InputOutput => "I/O error",
error.InvalidExe => "Invalid executable",
error.IsDir => "Is a directory",
error.NameTooLong => "Filename too long",
error.NoSpaceLeft => "No space left on device",
error.NotDir => "Not a directory",
error.OutOfMemory, error.SystemResources => "Out of memory",
error.ProcessFdQuotaExceeded => "Process file descriptor limit exceeded",
error.ReadOnlyFilesystem => "Read-only filesystem",
error.SymlinkLoop => "Symlink loop",
error.SystemFdQuotaExceeded => "System file descriptor limit exceeded",
error.EndOfStream => "Unexpected end of file",
else => @errorName(e),
};
}
var to_utf8_buf: std.ArrayListUnmanaged(u8) = .empty;
fn toUtf8BadChar(ch: u8) bool {
return switch (ch) {
0...0x1F, 0x7F => true,
else => false
};
}
// Utility function to convert a string to valid (mostly) printable UTF-8.
// Invalid codepoints will be encoded as '\x##' strings.
// Returns the given string if it's already valid, otherwise points to an
// internal buffer that will be invalidated on the next call.
// (Doesn't check for non-printable Unicode characters)
// (This program assumes that the console locale is UTF-8, but file names may not be)
pub fn toUtf8(in: [:0]const u8) [:0]const u8 {
const hasBadChar = blk: {
for (in) |ch| if (toUtf8BadChar(ch)) break :blk true;
break :blk false;
};
if (!hasBadChar and std.unicode.utf8ValidateSlice(in)) return in;
var i: usize = 0;
to_utf8_buf.shrinkRetainingCapacity(0);
while (i < in.len) {
if (std.unicode.utf8ByteSequenceLength(in[i])) |cp_len| {
if (!toUtf8BadChar(in[i]) and i + cp_len <= in.len) {
if (std.unicode.utf8Decode(in[i .. i + cp_len])) |_| {
to_utf8_buf.appendSlice(main.allocator, in[i .. i + cp_len]) catch unreachable;
i += cp_len;
continue;
} else |_| {}
}
} else |_| {}
to_utf8_buf.writer(main.allocator).print("\\x{X:0>2}", .{in[i]}) catch unreachable;
i += 1;
}
return util.arrayListBufZ(&to_utf8_buf, main.allocator);
}
var shorten_buf: std.ArrayListUnmanaged(u8) = .empty;
// Shorten the given string to fit in the given number of columns.
// If the string is too long, only the prefix and suffix will be printed, with '...' in between.
// Input is assumed to be valid UTF-8.
// Return value points to the input string or to an internal buffer that is
// invalidated on a subsequent call.
pub fn shorten(in: [:0]const u8, max_width: u32) [:0] const u8 {
if (max_width < 4) return "...";
var total_width: u32 = 0;
var prefix_width: u32 = 0;
var prefix_end: u32 = 0;
var prefix_done = false;
var it = std.unicode.Utf8View.initUnchecked(in).iterator();
while (it.nextCodepoint()) |cp| {
// XXX: libc assumption: wchar_t is a Unicode point. True for most modern libcs?
// (The "proper" way is to use mbtowc(), but I'd rather port the musl wcwidth implementation to Zig so that I *know* it'll be Unicode.
// On the other hand, ncurses also use wcwidth() so that would cause duplicated code. Ugh)
const cp_width_ = c.wcwidth(cp);
const cp_width: u32 = @intCast(if (cp_width_ < 0) 0 else cp_width_);
const cp_len = std.unicode.utf8CodepointSequenceLength(cp) catch unreachable;
total_width += cp_width;
if (!prefix_done and prefix_width + cp_width <= @divFloor(max_width-1, 2)-1) {
prefix_width += cp_width;
prefix_end += cp_len;
} else
prefix_done = true;
}
if (total_width <= max_width) return in;
shorten_buf.shrinkRetainingCapacity(0);
shorten_buf.appendSlice(main.allocator, in[0..prefix_end]) catch unreachable;
shorten_buf.appendSlice(main.allocator, "...") catch unreachable;
var start_width: u32 = prefix_width;
var start_len: u32 = prefix_end;
it = std.unicode.Utf8View.initUnchecked(in[prefix_end..]).iterator();
while (it.nextCodepoint()) |cp| {
const cp_width_ = c.wcwidth(cp);
const cp_width: u32 = @intCast(if (cp_width_ < 0) 0 else cp_width_);
const cp_len = std.unicode.utf8CodepointSequenceLength(cp) catch unreachable;
start_width += cp_width;
start_len += cp_len;
if (total_width - start_width <= max_width - prefix_width - 3) {
shorten_buf.appendSlice(main.allocator, in[start_len..]) catch unreachable;
break;
}
}
return util.arrayListBufZ(&shorten_buf, main.allocator);
}
fn shortenTest(in: [:0]const u8, max_width: u32, out: [:0]const u8) !void {
try std.testing.expectEqualStrings(out, shorten(in, max_width));
}
test "shorten" {
_ = c.setlocale(c.LC_ALL, ""); // libc wcwidth() may not recognize Unicode without this
const t = shortenTest;
try t("abcde", 3, "...");
try t("abcde", 5, "abcde");
try t("abcde", 4, "...e");
try t("abcdefgh", 6, "a...gh");
try t("abcdefgh", 7, "ab...gh");
try t("", 16, "");
try t("", 7, "...");
try t("", 8, "...");
try t("", 9, "...");
try t("a", 8, "..."); // could optimize this, but w/e
try t("a", 8, "...a");
try t("", 15, "...");
try t("a❤a❤a", 5, "❤︎...a"); // Variation selectors; not great, there's an additional U+FE0E before 'a'.
try t("ą́ą́ą́ą́ą́ą́", 5, "ą́...̨́ą́"); // Combining marks, similarly bad.
}
const StyleAttr = struct { fg: i16, bg: i16, attr: u32 };
const StyleDef = struct {
name: [:0]const u8,
off: StyleAttr,
dark: StyleAttr,
darkbg: StyleAttr,
fn style(self: *const @This()) StyleAttr {
return switch (main.config.ui_color) {
.off => self.off,
.dark => self.dark,
.darkbg => self.darkbg,
};
}
};
const styles = [_]StyleDef{
.{ .name = "default",
.off = .{ .fg = -1, .bg = -1, .attr = 0 },
.dark = .{ .fg = -1, .bg = -1, .attr = 0 },
.darkbg = .{ .fg = c.COLOR_WHITE, .bg = c.COLOR_BLACK, .attr = 0 } },
.{ .name = "bold",
.off = .{ .fg = -1, .bg = -1, .attr = c.A_BOLD },
.dark = .{ .fg = -1, .bg = -1, .attr = c.A_BOLD },
.darkbg = .{ .fg = c.COLOR_WHITE, .bg = c.COLOR_BLACK, .attr = c.A_BOLD } },
.{ .name = "bold_hd",
.off = .{ .fg = -1, .bg = -1, .attr = c.A_BOLD|c.A_REVERSE },
.dark = .{ .fg = c.COLOR_BLACK, .bg = c.COLOR_CYAN, .attr = c.A_BOLD },
.darkbg = .{ .fg = c.COLOR_BLACK, .bg = c.COLOR_CYAN, .attr = c.A_BOLD } },
.{ .name = "box_title",
.off = .{ .fg = -1, .bg = -1, .attr = c.A_BOLD },
.dark = .{ .fg = c.COLOR_BLUE, .bg = -1, .attr = c.A_BOLD },
.darkbg = .{ .fg = c.COLOR_BLUE, .bg = c.COLOR_BLACK, .attr = c.A_BOLD } },
.{ .name = "hd", // header + footer
.off = .{ .fg = -1, .bg = -1, .attr = c.A_REVERSE },
.dark = .{ .fg = c.COLOR_BLACK, .bg = c.COLOR_CYAN, .attr = 0 },
.darkbg = .{ .fg = c.COLOR_BLACK, .bg = c.COLOR_CYAN, .attr = 0 } },
.{ .name = "sel",
.off = .{ .fg = -1, .bg = -1, .attr = c.A_REVERSE },
.dark = .{ .fg = c.COLOR_WHITE, .bg = c.COLOR_GREEN, .attr = c.A_BOLD },
.darkbg = .{ .fg = c.COLOR_WHITE, .bg = c.COLOR_GREEN, .attr = c.A_BOLD } },
.{ .name = "num",
.off = .{ .fg = -1, .bg = -1, .attr = 0 },
.dark = .{ .fg = c.COLOR_YELLOW, .bg = -1, .attr = c.A_BOLD },
.darkbg = .{ .fg = c.COLOR_YELLOW, .bg = c.COLOR_BLACK, .attr = c.A_BOLD } },
.{ .name = "num_hd",
.off = .{ .fg = -1, .bg = -1, .attr = c.A_REVERSE },
.dark = .{ .fg = c.COLOR_YELLOW, .bg = c.COLOR_CYAN, .attr = c.A_BOLD },
.darkbg = .{ .fg = c.COLOR_YELLOW, .bg = c.COLOR_CYAN, .attr = c.A_BOLD } },
.{ .name = "num_sel",
.off = .{ .fg = -1, .bg = -1, .attr = c.A_REVERSE },
.dark = .{ .fg = c.COLOR_YELLOW, .bg = c.COLOR_GREEN, .attr = c.A_BOLD },
.darkbg = .{ .fg = c.COLOR_YELLOW, .bg = c.COLOR_GREEN, .attr = c.A_BOLD } },
.{ .name = "key",
.off = .{ .fg = -1, .bg = -1, .attr = c.A_BOLD },
.dark = .{ .fg = c.COLOR_YELLOW, .bg = -1, .attr = c.A_BOLD },
.darkbg = .{ .fg = c.COLOR_YELLOW, .bg = c.COLOR_BLACK, .attr = c.A_BOLD } },
.{ .name = "key_hd",
.off = .{ .fg = -1, .bg = -1, .attr = c.A_BOLD|c.A_REVERSE },
.dark = .{ .fg = c.COLOR_YELLOW, .bg = c.COLOR_CYAN, .attr = c.A_BOLD },
.darkbg = .{ .fg = c.COLOR_YELLOW, .bg = c.COLOR_CYAN, .attr = c.A_BOLD } },
.{ .name = "dir",
.off = .{ .fg = -1, .bg = -1, .attr = 0 },
.dark = .{ .fg = c.COLOR_BLUE, .bg = -1, .attr = c.A_BOLD },
.darkbg = .{ .fg = c.COLOR_BLUE, .bg = c.COLOR_BLACK, .attr = c.A_BOLD } },
.{ .name = "dir_sel",
.off = .{ .fg = -1, .bg = -1, .attr = c.A_REVERSE },
.dark = .{ .fg = c.COLOR_BLUE, .bg = c.COLOR_GREEN, .attr = c.A_BOLD },
.darkbg = .{ .fg = c.COLOR_BLUE, .bg = c.COLOR_GREEN, .attr = c.A_BOLD } },
.{ .name = "flag",
.off = .{ .fg = -1, .bg = -1, .attr = 0 },
.dark = .{ .fg = c.COLOR_RED, .bg = -1, .attr = 0 },
.darkbg = .{ .fg = c.COLOR_RED, .bg = c.COLOR_BLACK, .attr = 0 } },
.{ .name = "flag_sel",
.off = .{ .fg = -1, .bg = -1, .attr = c.A_REVERSE },
.dark = .{ .fg = c.COLOR_RED, .bg = c.COLOR_GREEN, .attr = 0 },
.darkbg = .{ .fg = c.COLOR_RED, .bg = c.COLOR_GREEN, .attr = 0 } },
.{ .name = "graph",
.off = .{ .fg = -1, .bg = -1, .attr = 0 },
.dark = .{ .fg = c.COLOR_MAGENTA, .bg = -1, .attr = 0 },
.darkbg = .{ .fg = c.COLOR_MAGENTA, .bg = c.COLOR_BLACK, .attr = 0 } },
.{ .name = "graph_sel",
.off = .{ .fg = -1, .bg = -1, .attr = c.A_REVERSE },
.dark = .{ .fg = c.COLOR_MAGENTA, .bg = c.COLOR_GREEN, .attr = 0 },
.darkbg = .{ .fg = c.COLOR_MAGENTA, .bg = c.COLOR_GREEN, .attr = 0 } },
};
pub const Style = lbl: {
var fields: [styles.len]std.builtin.Type.EnumField = undefined;
for (&fields, styles, 0..) |*field, s, i| {
field.* = .{
.name = s.name,
.value = i,
};
}
break :lbl @Type(.{
.@"enum" = .{
.tag_type = u8,
.fields = &fields,
.decls = &[_]std.builtin.Type.Declaration{},
.is_exhaustive = true,
}
});
};
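// Example test (not in the original source): a minimal sketch showing that the
// comptime-generated Style enum above mirrors the `styles` table, so enum tag
// values can be used to index straight back into that table.
test "Style enum matches styles table" {
    try std.testing.expectEqual(@as(u8, 0), @intFromEnum(Style.default));
    try std.testing.expectEqualStrings("bold", styles[@intFromEnum(Style.bold)].name);
}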
const ui = @This();
pub const Bg = enum {
default, hd, sel,
// Set the style to the selected bg combined with the given fg.
pub fn fg(self: @This(), s: Style) void {
ui.style(switch (self) {
.default => s,
.hd =>
switch (s) {
.default => Style.hd,
.key => Style.key_hd,
.num => Style.num_hd,
else => unreachable,
},
.sel =>
switch (s) {
.default => Style.sel,
.num => Style.num_sel,
.dir => Style.dir_sel,
.flag => Style.flag_sel,
.graph => Style.graph_sel,
else => unreachable,
}
});
}
};
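// Illustrative sketch (not in the original source): how drawing code typically
// combines Bg and Style. Assumes init() has already been called so ncurses is active.
fn exampleHeaderCell() void {
    move(0, 0);
    Bg.hd.fg(.num); // resolves to Style.num_hd
    addstr("123");
    Bg.hd.fg(.default); // resolves to Style.hd
    addstr(" items");
}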
fn updateSize() void {
// getmax[yx] macros are marked as "legacy", but Zig can't deal with the "proper" getmaxyx macro.
rows = @intCast(c.getmaxy(c.stdscr));
cols = @intCast(c.getmaxx(c.stdscr));
}
fn clearScr() void {
// Send a "clear from cursor to end of screen" instruction, to clear a
// potential line left behind from scanning in -1 mode.
std.debug.print("\x1b[J", .{});
}
pub fn init() void {
if (inited) return;
clearScr();
if (main.config.nc_tty) {
const tty = c.fopen("/dev/tty", "r+");
if (tty == null) die("Error opening /dev/tty: {s}.\n", .{ c.strerror(@intFromEnum(std.posix.errno(-1))) });
const term = c.newterm(null, tty, tty);
if (term == null) die("Error initializing ncurses.\n", .{});
_ = c.set_term(term);
} else {
if (c.initscr() == null) die("Error initializing ncurses.\n", .{});
}
updateSize();
_ = c.cbreak();
_ = c.noecho();
_ = c.curs_set(0);
_ = c.keypad(c.stdscr, true);
_ = c.start_color();
_ = c.use_default_colors();
for (styles, 0..) |s, i| _ = c.init_pair(@as(i16, @intCast(i+1)), s.style().fg, s.style().bg);
_ = c.bkgd(@intCast(c.COLOR_PAIR(@intFromEnum(Style.default)+1)));
inited = true;
}
pub fn deinit() void {
if (!inited) {
clearScr();
return;
}
_ = c.erase();
_ = c.refresh();
_ = c.endwin();
inited = false;
}
pub fn style(s: Style) void {
_ = c.attr_set(styles[@intFromEnum(s)].style().attr, @intFromEnum(s)+1, null);
}
pub fn move(y: u32, x: u32) void {
_ = c.move(@as(i32, @intCast(y)), @as(i32, @intCast(x)));
}
// Wraps to the next line if the text overflows, not sure how to disable that.
// (Well, addchstr() does that, but not entirely sure I want to go that way.
// Does that even work with UTF-8? Or do I really need to go wchar madness?)
pub fn addstr(s: [:0]const u8) void {
_ = c.addstr(s.ptr);
}
// Not to be used for strings that may end up >256 bytes.
pub fn addprint(comptime fmt: []const u8, args: anytype) void {
var buf: [256:0]u8 = undefined;
const s = std.fmt.bufPrintZ(&buf, fmt, args) catch unreachable;
addstr(s);
}
pub fn addch(ch: c.chtype) void {
_ = c.addch(ch);
}
// Format an integer to a human-readable size string.
// num() = "###.#"
// unit = " XB" or " XiB"
// Concatenated, these take 8 columns in SI mode or 9 otherwise.
pub const FmtSize = struct {
buf: [5:0]u8,
unit: [:0]const u8,
fn init(u: [:0]const u8, n: u64, mul: u64, div: u64) FmtSize {
return .{
.unit = u,
.buf = util.fmt5dec(@intCast( ((n*mul) +| (div / 2)) / div )),
};
}
pub fn fmt(v: u64) FmtSize {
if (main.config.si) {
if (v < 1000) { return FmtSize.init(" B", v, 10, 1); }
else if (v < 999_950) { return FmtSize.init(" kB", v, 1, 100); }
else if (v < 999_950_000) { return FmtSize.init(" MB", v, 1, 100_000); }
else if (v < 999_950_000_000) { return FmtSize.init(" GB", v, 1, 100_000_000); }
else if (v < 999_950_000_000_000) { return FmtSize.init(" TB", v, 1, 100_000_000_000); }
else if (v < 999_950_000_000_000_000) { return FmtSize.init(" PB", v, 1, 100_000_000_000_000); }
else { return FmtSize.init(" EB", v, 1, 100_000_000_000_000_000); }
} else {
// Cutoff values are obtained by calculating 999.949999999999999999999999 * div with an infinite-precision calculator.
// (Admittedly, this precision is silly)
if (v < 1000) { return FmtSize.init(" B", v, 10, 1); }
else if (v < 1023949) { return FmtSize.init(" KiB", v, 10, 1<<10); }
else if (v < 1048523572) { return FmtSize.init(" MiB", v, 10, 1<<20); }
else if (v < 1073688136909) { return FmtSize.init(" GiB", v, 10, 1<<30); }
else if (v < 1099456652194612) { return FmtSize.init(" TiB", v, 10, 1<<40); }
else if (v < 1125843611847281869) { return FmtSize.init(" PiB", v, 10, 1<<50); }
else { return FmtSize.init(" EiB", v, 1, (1<<60)/10); }
}
}
pub fn num(self: *const FmtSize) [:0]const u8 {
return &self.buf;
}
fn testEql(self: FmtSize, exp: []const u8) !void {
var buf: [10]u8 = undefined;
try std.testing.expectEqualStrings(exp, try std.fmt.bufPrint(&buf, "{s}{s}", .{ self.num(), self.unit }));
}
};
test "fmtsize" {
main.config.si = true;
try FmtSize.fmt( 0).testEql(" 0.0 B");
try FmtSize.fmt( 999).testEql("999.0 B");
try FmtSize.fmt( 1000).testEql(" 1.0 kB");
try FmtSize.fmt( 1049).testEql(" 1.0 kB");
try FmtSize.fmt( 1050).testEql(" 1.1 kB");
try FmtSize.fmt( 999_899).testEql("999.9 kB");
try FmtSize.fmt( 999_949).testEql("999.9 kB");
try FmtSize.fmt( 999_950).testEql(" 1.0 MB");
try FmtSize.fmt( 1000_000).testEql(" 1.0 MB");
try FmtSize.fmt( 999_850_009).testEql("999.9 MB");
try FmtSize.fmt( 999_899_999).testEql("999.9 MB");
try FmtSize.fmt( 999_900_000).testEql("999.9 MB");
try FmtSize.fmt( 999_949_999).testEql("999.9 MB");
try FmtSize.fmt( 999_950_000).testEql(" 1.0 GB");
try FmtSize.fmt( 999_999_999).testEql(" 1.0 GB");
try FmtSize.fmt(std.math.maxInt(u64)).testEql(" 18.4 EB");
main.config.si = false;
try FmtSize.fmt( 0).testEql(" 0.0 B");
try FmtSize.fmt( 999).testEql("999.0 B");
try FmtSize.fmt( 1000).testEql(" 1.0 KiB");
try FmtSize.fmt( 1024).testEql(" 1.0 KiB");
try FmtSize.fmt( 102400).testEql("100.0 KiB");
try FmtSize.fmt( 1023898).testEql("999.9 KiB");
try FmtSize.fmt( 1023949).testEql(" 1.0 MiB");
try FmtSize.fmt( 1048523571).testEql("999.9 MiB");
try FmtSize.fmt( 1048523572).testEql(" 1.0 GiB");
try FmtSize.fmt( 1073688136908).testEql("999.9 GiB");
try FmtSize.fmt( 1073688136909).testEql(" 1.0 TiB");
try FmtSize.fmt( 1099456652194611).testEql("999.9 TiB");
try FmtSize.fmt( 1099456652194612).testEql(" 1.0 PiB");
try FmtSize.fmt(1125843611847281868).testEql("999.9 PiB");
try FmtSize.fmt(1125843611847281869).testEql(" 1.0 EiB");
try FmtSize.fmt(std.math.maxInt(u64)).testEql(" 16.0 EiB");
}
// Print a formatted human-readable size string onto the given background.
pub fn addsize(bg: Bg, v: u64) void {
const r = FmtSize.fmt(v);
bg.fg(.num);
addstr(r.num());
bg.fg(.default);
addstr(r.unit);
}
// Print a full decimal number with thousand separators.
// Max: 18,446,744,073,709,551,615 -> 26 columns
// (Assuming thousands_sep takes a single column)
pub fn addnum(bg: Bg, v: u64) void {
var buf: [32]u8 = undefined;
const s = std.fmt.bufPrint(&buf, "{d}", .{v}) catch unreachable;
var f: [64:0]u8 = undefined;
var i: usize = 0;
for (s, 0..) |digit, n| {
if (n != 0 and (s.len - n) % 3 == 0) {
for (main.config.thousands_sep) |ch| {
f[i] = ch;
i += 1;
}
}
f[i] = digit;
i += 1;
}
f[i] = 0;
bg.fg(.num);
addstr(&f);
bg.fg(.default);
}
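// Illustration only (not in the original source): a hypothetical helper that
// mirrors the digit-grouping rule used by addnum() above, but writes into a
// caller-supplied buffer so the logic can be exercised in a plain unit test
// without ncurses.
fn groupThousands(buf: []u8, digits: []const u8, sep: []const u8) []const u8 {
    var i: usize = 0;
    for (digits, 0..) |digit, n| {
        // Insert the separator before every remaining group of three digits.
        if (n != 0 and (digits.len - n) % 3 == 0) {
            for (sep) |ch| {
                buf[i] = ch;
                i += 1;
            }
        }
        buf[i] = digit;
        i += 1;
    }
    return buf[0..i];
}
test "groupThousands example" {
    var buf: [64]u8 = undefined;
    try std.testing.expectEqualStrings("1,234,567", groupThousands(&buf, "1234567", ","));
    try std.testing.expectEqualStrings("999", groupThousands(&buf, "999", ","));
}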
// Print a file mode, takes 10 columns
pub fn addmode(mode: u32) void {
addch(switch (mode & std.posix.S.IFMT) {
std.posix.S.IFDIR => 'd',
std.posix.S.IFREG => '-',
std.posix.S.IFLNK => 'l',
std.posix.S.IFIFO => 'p',
std.posix.S.IFSOCK => 's',
std.posix.S.IFCHR => 'c',
std.posix.S.IFBLK => 'b',
else => '?'
});
addch(if (mode & 0o400 > 0) 'r' else '-');
addch(if (mode & 0o200 > 0) 'w' else '-');
addch(if (mode & 0o4000 > 0) 's' else if (mode & 0o100 > 0) @as(u7, 'x') else '-');
addch(if (mode & 0o040 > 0) 'r' else '-');
addch(if (mode & 0o020 > 0) 'w' else '-');
addch(if (mode & 0o2000 > 0) 's' else if (mode & 0o010 > 0) @as(u7, 'x') else '-');
addch(if (mode & 0o004 > 0) 'r' else '-');
addch(if (mode & 0o002 > 0) 'w' else '-');
addch(if (mode & 0o1000 > 0) (if (std.posix.S.ISDIR(mode)) @as(u7, 't') else 'T') else if (mode & 0o001 > 0) @as(u7, 'x') else '-');
}
// Print a timestamp, takes 25 columns
pub fn addts(bg: Bg, ts: u64) void {
const t = util.castClamp(c.time_t, ts);
var buf: [32:0]u8 = undefined;
const len = c.strftime(&buf, buf.len, "%Y-%m-%d %H:%M:%S %z", c.localtime(&t));
if (len > 0) {
bg.fg(.num);
ui.addstr(buf[0..len:0]);
} else {
bg.fg(.default);
ui.addstr(" invalid mtime");
}
}
pub fn hline(ch: c.chtype, len: u32) void {
_ = c.hline(ch, @as(i32, @intCast(len)));
}
// Draws a bordered box in the center of the screen.
pub const Box = struct {
start_row: u32,
start_col: u32,
const Self = @This();
pub fn create(height: u32, width: u32, title: [:0]const u8) Self {
const s = Self{
.start_row = (rows>>1) -| (height>>1),
.start_col = (cols>>1) -| (width>>1),
};
style(.default);
if (width < 6 or height < 3) return s;
const acs_map = @extern(*[128]c.chtype, .{ .name = "acs_map" });
const ulcorner = acs_map['l'];
const llcorner = acs_map['m'];
const urcorner = acs_map['k'];
const lrcorner = acs_map['j'];
const acs_hline = acs_map['q'];
const acs_vline = acs_map['x'];
var i: u32 = 0;
while (i < height) : (i += 1) {
s.move(i, 0);
addch(if (i == 0) ulcorner else if (i == height-1) llcorner else acs_vline);
hline(if (i == 0 or i == height-1) acs_hline else ' ', width-2);
s.move(i, width-1);
addch(if (i == 0) urcorner else if (i == height-1) lrcorner else acs_vline);
}
s.move(0, 3);
style(.box_title);
addch(' ');
addstr(title);
addch(' ');
style(.default);
return s;
}
pub fn tab(s: Self, col: u32, sel: bool, num: u3, label: [:0]const u8) void {
const bg: Bg = if (sel) .hd else .default;
s.move(0, col);
bg.fg(.key);
addch('0' + @as(u8, num));
bg.fg(.default);
addch(':');
addstr(label);
style(.default);
}
// Move the global cursor to the given coordinates inside the box.
pub fn move(s: Self, row: u32, col: u32) void {
ui.move(s.start_row + row, s.start_col + col);
}
};
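// Illustrative sketch (not in the original source): a dialog drawing itself
// with Box. Assumes init() has run so rows/cols are up to date; the title,
// tab label and text are placeholders.
fn exampleDialog() void {
    const box = Box.create(5, 40, "example");
    box.tab(6, true, 1, "Info");
    box.move(2, 2);
    addstr("Hello from inside the box.");
}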
// Returns 0 if no key was pressed in non-blocking mode.
// Returns -1 if it was KEY_RESIZE, requiring a redraw of the screen.
pub fn getch(block: bool) i32 {
_ = c.nodelay(c.stdscr, !block);
// getch() has a bad tendency to not set a sensible errno when it returns ERR.
// In non-blocking mode, we can only assume that ERR means "no input yet".
// In blocking mode, give it 100 tries with a 10ms delay in between,
// then just give up and die to avoid an infinite loop and unresponsive program.
for (0..100) |_| {
const ch = c.getch();
if (ch == c.KEY_RESIZE) {
updateSize();
return -1;
}
if (ch == c.ERR) {
if (!block) return 0;
sleep(10*std.time.ns_per_ms);
continue;
}
return ch;
}
die("Error reading keyboard input, assuming TTY has been lost.\n(Potentially nonsensical error message: {s})\n",
.{ c.strerror(@intFromEnum(std.posix.errno(-1))) });
}
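// Illustrative sketch (not in the original source): a typical caller of getch().
// A return of -1 signals KEY_RESIZE (the caller should redraw); 0 only occurs
// in non-blocking mode and means no key was pressed yet.
fn exampleInputLoop() void {
    while (true) {
        // ... draw the screen here ...
        switch (getch(true)) {
            -1 => continue, // terminal resized, redraw
            'q' => return,
            else => {},
        }
    }
}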
fn waitInput() void {
if (@hasDecl(std.io, "getStdIn")) {
std.io.getStdIn().reader().skipUntilDelimiterOrEof('\n') catch unreachable;
} else {
var buf: [512]u8 = undefined;
var rd = std.fs.File.stdin().reader(&buf);
_ = rd.interface.discardDelimiterExclusive('\n') catch unreachable;
}
}
pub fn runCmd(cmd: []const []const u8, cwd: ?[]const u8, env: *std.process.EnvMap, reporterr: bool) void {
deinit();
defer init();
// NCDU_LEVEL only counts up to 9; that keeps the implementation simple.
if (env.get("NCDU_LEVEL")) |l|
env.put("NCDU_LEVEL", if (l.len == 0) "1" else switch (l[0]) {
'0'...'8' => |d| &[1] u8{d+1},
'9' => "9",
else => "1"
}) catch unreachable
else
env.put("NCDU_LEVEL", "1") catch unreachable;
var child = std.process.Child.init(cmd, main.allocator);
child.cwd = cwd;
child.env_map = env;
const term = child.spawnAndWait() catch |e| blk: {
std.debug.print("Error running command: {s}\n\nPress enter to continue.\n", .{ ui.errorString(e) });
waitInput();
break :blk std.process.Child.Term{ .Exited = 0 };
};
const n = switch (term) {
.Exited => "error",
.Signal => "signal",
.Stopped => "stopped",
.Unknown => "unknown",
};
const v = switch (term) { inline else => |v| v };
if (term != .Exited or (reporterr and v != 0)) {
std.debug.print("\nCommand returned with {s} code {}.\nPress enter to continue.\n", .{ n, v });
waitInput();
}
}


@ -1,434 +0,0 @@
/* ncdu - NCurses Disk Usage
Copyright (c) 2007-2020 Yoran Heling
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
#include "util.h"
#include <string.h>
#include <stdlib.h>
#include <ncurses.h>
#include <stdarg.h>
#include <unistd.h>
#ifdef HAVE_LOCALE_H
#include <locale.h>
#endif
int uic_theme;
int winrows, wincols;
int subwinr, subwinc;
int si;
char thou_sep;
char *cropstr(const char *from, int s) {
static char dat[4096];
int i, j, o = strlen(from);
if(o < s) {
strcpy(dat, from);
return dat;
}
j=s/2-3;
for(i=0; i<j; i++)
dat[i] = from[i];
dat[i] = '.';
dat[++i] = '.';
dat[++i] = '.';
j=o-s;
while(++i<s)
dat[i] = from[j+i];
dat[s] = '\0';
return dat;
}
float formatsize(int64_t from, char **unit) {
float r = from;
if (si) {
if(r < 1000.0f) { *unit = " B"; }
else if(r < 1e6f) { *unit = "KB"; r/=1e3f; }
else if(r < 1e9f) { *unit = "MB"; r/=1e6f; }
else if(r < 1e12f){ *unit = "GB"; r/=1e9f; }
else if(r < 1e15f){ *unit = "TB"; r/=1e12f; }
else if(r < 1e18f){ *unit = "PB"; r/=1e15f; }
else { *unit = "EB"; r/=1e18f; }
}
else {
if(r < 1000.0f) { *unit = " B"; }
else if(r < 1023e3f) { *unit = "KiB"; r/=1024.0f; }
else if(r < 1023e6f) { *unit = "MiB"; r/=1048576.0f; }
else if(r < 1023e9f) { *unit = "GiB"; r/=1073741824.0f; }
else if(r < 1023e12f){ *unit = "TiB"; r/=1099511627776.0f; }
else if(r < 1023e15f){ *unit = "PiB"; r/=1125899906842624.0f; }
else { *unit = "EiB"; r/=1152921504606846976.0f; }
}
return r;
}
void printsize(enum ui_coltype t, int64_t from) {
char *unit;
float r = formatsize(from, &unit);
uic_set(t == UIC_HD ? UIC_NUM_HD : t == UIC_SEL ? UIC_NUM_SEL : UIC_NUM);
printw("%5.1f", r);
addchc(t, ' ');
addstrc(t, unit);
}
char *fullsize(int64_t from) {
static char dat[26]; /* max: 9.223.372.036.854.775.807 (= 2^63-1) */
char tmp[26];
int64_t n = from;
int i, j;
/* the K&R method - more portable than sprintf with %lld */
i = 0;
do {
tmp[i++] = n % 10 + '0';
} while((n /= 10) > 0);
tmp[i] = '\0';
/* reverse and add thousand separators */
j = 0;
while(i--) {
dat[j++] = tmp[i];
if(i != 0 && i%3 == 0)
dat[j++] = thou_sep;
}
dat[j] = '\0';
return dat;
}
char *fmtmode(unsigned short mode) {
static char buf[11];
unsigned short ft = mode & S_IFMT;
buf[0] = ft == S_IFDIR ? 'd'
: ft == S_IFREG ? '-'
: ft == S_IFLNK ? 'l'
: ft == S_IFIFO ? 'p'
: ft == S_IFSOCK ? 's'
: ft == S_IFCHR ? 'c'
: ft == S_IFBLK ? 'b' : '?';
buf[1] = mode & 0400 ? 'r' : '-';
buf[2] = mode & 0200 ? 'w' : '-';
buf[3] = mode & 0100 ? 'x' : '-';
buf[4] = mode & 0040 ? 'r' : '-';
buf[5] = mode & 0020 ? 'w' : '-';
buf[6] = mode & 0010 ? 'x' : '-';
buf[7] = mode & 0004 ? 'r' : '-';
buf[8] = mode & 0002 ? 'w' : '-';
buf[9] = mode & 0001 ? 'x' : '-';
buf[10] = 0;
return buf;
}
void read_locale() {
thou_sep = '.';
#ifdef HAVE_LOCALE_H
setlocale(LC_ALL, "");
char *locale_thou_sep = localeconv()->thousands_sep;
if(locale_thou_sep && 1 == strlen(locale_thou_sep))
thou_sep = locale_thou_sep[0];
#endif
}
int ncresize(int minrows, int mincols) {
int ch;
getmaxyx(stdscr, winrows, wincols);
while((minrows && winrows < minrows) || (mincols && wincols < mincols)) {
erase();
mvaddstr(0, 0, "Warning: terminal too small,");
mvaddstr(1, 1, "please either resize your terminal,");
mvaddstr(2, 1, "press i to ignore, or press q to quit.");
refresh();
nodelay(stdscr, 0);
ch = getch();
getmaxyx(stdscr, winrows, wincols);
if(ch == 'q') {
erase();
refresh();
endwin();
exit(0);
}
if(ch == 'i')
return 1;
}
erase();
return 0;
}
void nccreate(int height, int width, const char *title) {
int i;
uic_set(UIC_DEFAULT);
subwinr = winrows/2-height/2;
subwinc = wincols/2-width/2;
/* clear window */
for(i=0; i<height; i++)
mvhline(subwinr+i, subwinc, ' ', width);
/* box() only works around curses windows, so create our own */
move(subwinr, subwinc);
addch(ACS_ULCORNER);
for(i=0; i<width-2; i++)
addch(ACS_HLINE);
addch(ACS_URCORNER);
move(subwinr+height-1, subwinc);
addch(ACS_LLCORNER);
for(i=0; i<width-2; i++)
addch(ACS_HLINE);
addch(ACS_LRCORNER);
mvvline(subwinr+1, subwinc, ACS_VLINE, height-2);
mvvline(subwinr+1, subwinc+width-1, ACS_VLINE, height-2);
/* title */
uic_set(UIC_BOX_TITLE);
mvaddstr(subwinr, subwinc+4, title);
uic_set(UIC_DEFAULT);
}
void ncprint(int r, int c, char *fmt, ...) {
va_list arg;
va_start(arg, fmt);
move(subwinr+r, subwinc+c);
vw_printw(stdscr, fmt, arg);
va_end(arg);
}
void nctab(int c, int sel, int num, char *str) {
uic_set(sel ? UIC_KEY_HD : UIC_KEY);
ncprint(0, c, "%d", num);
uic_set(sel ? UIC_HD : UIC_DEFAULT);
addch(':');
addstr(str);
uic_set(UIC_DEFAULT);
}
static int colors[] = {
#define C(name, ...) 0,
UI_COLORS
#undef C
0
};
static int lastcolor = 0;
static const struct {
short fg, bg;
int attr;
} color_defs[] = {
#define C(name, off_fg, off_bg, off_a, dark_fg, dark_bg, dark_a) \
{off_fg, off_bg, off_a}, \
{dark_fg, dark_bg, dark_a},
UI_COLORS
#undef C
{0,0,0}
};
void uic_init() {
size_t i, j;
start_color();
use_default_colors();
for(i=0; i<sizeof(colors)/sizeof(*colors)-1; i++) {
j = i*2 + uic_theme;
init_pair(i+1, color_defs[j].fg, color_defs[j].bg);
colors[i] = color_defs[j].attr | COLOR_PAIR(i+1);
}
}
void uic_set(enum ui_coltype c) {
attroff(lastcolor);
lastcolor = colors[(int)c];
attron(lastcolor);
}
/* removes item from the hlnk circular linked list and size counts of the parents */
static void freedir_hlnk(struct dir *d) {
struct dir *t, *par, *pt;
int i;
if(!(d->flags & FF_HLNKC))
return;
/* remove size from parents.
* This works the same as with adding: only the parents in which THIS is the
* only occurence of the hard link will be modified, if the same file still
* exists within the parent it shouldn't get removed from the count.
* XXX: Same note as for dir_mem.c / hlink_check():
* this is probably not the most efficient algorithm */
for(i=1,par=d->parent; i&&par; par=par->parent) {
if(d->hlnk)
for(t=d->hlnk; i&&t!=d; t=t->hlnk)
for(pt=t->parent; i&&pt; pt=pt->parent)
if(pt==par)
i=0;
if(i) {
par->size = adds64(par->size, -d->size);
par->asize = adds64(par->asize, -d->asize);
}
}
/* remove from hlnk */
if(d->hlnk) {
for(t=d->hlnk; t->hlnk!=d; t=t->hlnk)
;
t->hlnk = d->hlnk;
}
}
static void freedir_rec(struct dir *dr) {
struct dir *tmp, *tmp2;
tmp2 = dr;
while((tmp = tmp2) != NULL) {
freedir_hlnk(tmp);
/* remove item */
if(tmp->sub) freedir_rec(tmp->sub);
tmp2 = tmp->next;
free(tmp);
}
}
void freedir(struct dir *dr) {
if(!dr)
return;
/* free dr->sub recursively */
if(dr->sub)
freedir_rec(dr->sub);
/* update references */
if(dr->parent && dr->parent->sub == dr)
dr->parent->sub = dr->next;
if(dr->prev)
dr->prev->next = dr->next;
if(dr->next)
dr->next->prev = dr->prev;
freedir_hlnk(dr);
/* update sizes of parent directories if this isn't a hard link.
* If this is a hard link, freedir_hlnk() would have done so already
*
* mtime is 0 here because recalculating the maximum at every parent
* dir is expensive, but might be good feature to add later if desired */
addparentstats(dr->parent, dr->flags & FF_HLNKC ? 0 : -dr->size, dr->flags & FF_HLNKC ? 0 : -dr->asize, 0, -(dr->items+1));
free(dr);
}
char *getpath(struct dir *cur) {
static char *dat;
static int datl = 0;
struct dir *d, **list;
int c, i;
if(!cur->name[0])
return "/";
c = i = 1;
for(d=cur; d!=NULL; d=d->parent) {
i += strlen(d->name)+1;
c++;
}
if(datl == 0) {
datl = i;
dat = xmalloc(i);
} else if(datl < i) {
datl = i;
dat = xrealloc(dat, i);
}
list = xmalloc(c*sizeof(struct dir *));
c = 0;
for(d=cur; d!=NULL; d=d->parent)
list[c++] = d;
dat[0] = '\0';
while(c--) {
if(list[c]->parent)
strcat(dat, "/");
strcat(dat, list[c]->name);
}
free(list);
return dat;
}
struct dir *getroot(struct dir *d) {
while(d && d->parent)
d = d->parent;
return d;
}
void addparentstats(struct dir *d, int64_t size, int64_t asize, uint64_t mtime, int items) {
struct dir_ext *e;
while(d) {
d->size = adds64(d->size, size);
d->asize = adds64(d->asize, asize);
d->items += items;
if (d->flags & FF_EXT) {
e = dir_ext_ptr(d);
e->mtime = (e->mtime > mtime) ? e->mtime : mtime;
}
d = d->parent;
}
}
/* Apparently we can just resume drawing after endwin() and ncurses will pick
* up where it left off. Probably not very portable... */
#define oom_msg "\nOut of memory, press enter to try again or Ctrl-C to give up.\n"
#define wrap_oom(f) \
void *ptr;\
char buf[128];\
while((ptr = f) == NULL) {\
close_nc();\
write(2, oom_msg, sizeof(oom_msg));\
read(0, buf, sizeof(buf));\
}\
return ptr;
void *xmalloc(size_t size) { wrap_oom(malloc(size)) }
void *xcalloc(size_t n, size_t size) { wrap_oom(calloc(n, size)) }
void *xrealloc(void *mem, size_t size) { wrap_oom(realloc(mem, size)) }


@ -1,195 +0,0 @@
/* ncdu - NCurses Disk Usage
Copyright (c) 2007-2020 Yoran Heling
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
#ifndef _util_h
#define _util_h
#include "global.h"
#include <ncurses.h>
/* UI colors: (foreground, background, attrs)
* NAME OFF DARK
*/
#define UI_COLORS \
C(DEFAULT, -1,-1,0 , -1, -1, 0 )\
C(BOX_TITLE, -1,-1,A_BOLD , COLOR_BLUE, -1, A_BOLD)\
C(HD, -1,-1,A_REVERSE , COLOR_BLACK, COLOR_CYAN, 0 ) /* header & footer */\
C(SEL, -1,-1,A_REVERSE , COLOR_WHITE, COLOR_GREEN,A_BOLD)\
C(NUM, -1,-1,0 , COLOR_YELLOW, -1, A_BOLD)\
C(NUM_HD, -1,-1,A_REVERSE , COLOR_YELLOW, COLOR_CYAN, A_BOLD)\
C(NUM_SEL, -1,-1,A_REVERSE , COLOR_YELLOW, COLOR_GREEN,A_BOLD)\
C(KEY, -1,-1,A_BOLD , COLOR_YELLOW, -1, A_BOLD)\
C(KEY_HD, -1,-1,A_BOLD|A_REVERSE, COLOR_YELLOW, COLOR_CYAN, A_BOLD)\
C(DIR, -1,-1,0 , COLOR_BLUE, -1, A_BOLD)\
C(DIR_SEL, -1,-1,A_REVERSE , COLOR_BLUE, COLOR_GREEN,A_BOLD)\
C(FLAG, -1,-1,0 , COLOR_RED, -1, 0 )\
C(FLAG_SEL, -1,-1,A_REVERSE , COLOR_RED, COLOR_GREEN,0 )\
C(GRAPH, -1,-1,0 , COLOR_MAGENTA,-1, 0 )\
C(GRAPH_SEL, -1,-1,A_REVERSE , COLOR_MAGENTA,COLOR_GREEN,0 )
enum ui_coltype {
#define C(name, ...) UIC_##name,
UI_COLORS
#undef C
UIC_NONE
};
/* Color & attribute manipulation */
extern int uic_theme;
void uic_init();
void uic_set(enum ui_coltype);
/* updated when window is resized */
extern int winrows, wincols;
/* used by the nc* functions and macros */
extern int subwinr, subwinc;
/* used by formatsize to choose between base 2 or 10 prefixes */
extern int si;
/* Macros/functions for managing struct dir and struct dir_ext */
#define dir_memsize(n) (offsetof(struct dir, name)+1+strlen(n))
#define dir_ext_offset(n) ((dir_memsize(n) + 7) & ~7)
#define dir_ext_memsize(n) (dir_ext_offset(n) + sizeof(struct dir_ext))
static inline struct dir_ext *dir_ext_ptr(struct dir *d) {
return d->flags & FF_EXT
? (struct dir_ext *) ( ((char *)d) + dir_ext_offset(d->name) )
: NULL;
}
/* Instead of using several ncurses windows, we only draw to stdscr.
* the functions nccreate, ncprint and the macros ncaddstr and ncaddch
* mimic the behaviour of ncurses windows.
* This works better than using ncurses windows when all windows are
* created in the correct order: it paints directly on stdscr, so
* wrefresh, wnoutrefresh and other window-specific functions are not
* necessary.
* Also, this method doesn't require any window objects, as you can
* only create one window at a time.
*/
/* updates winrows, wincols, and displays a warning when the terminal
* is smaller than the specified minimum size. */
int ncresize(int, int);
/* creates a new centered window with border */
void nccreate(int, int, const char *);
/* printf something somewhere in the last created window */
void ncprint(int, int, char *, ...);
/* Add a "tab" to a window */
void nctab(int, int, int, char *);
/* same as the w* functions of ncurses, with a color */
#define ncaddstr(r, c, s) mvaddstr(subwinr+(r), subwinc+(c), s)
#define ncaddch(r, c, s) mvaddch(subwinr+(r), subwinc+(c), s)
#define ncmove(r, c) move(subwinr+(r), subwinc+(c))
/* add stuff with a color */
#define mvaddstrc(t, r, c, s) do { uic_set(t); mvaddstr(r, c, s); } while(0)
#define mvaddchc(t, r, c, s) do { uic_set(t); mvaddch(r, c, s); } while(0)
#define addstrc(t, s) do { uic_set(t); addstr( s); } while(0)
#define addchc(t, s) do { uic_set(t); addch( s); } while(0)
#define ncaddstrc(t, r, c, s) do { uic_set(t); ncaddstr(r, c, s); } while(0)
#define ncaddchc(t, r, c, s) do { uic_set(t); ncaddch(r, c, s); } while(0)
#define mvhlinec(t, r, c, s, n) do { uic_set(t); mvhline(r, c, s, n); } while(0)
/* crops a string into the specified length */
char *cropstr(const char *, int);
/* Converts the given size in bytes into a float (0 <= f < 1000) and a unit string */
float formatsize(int64_t, char **);
/* print size in the form of xxx.x XB */
void printsize(enum ui_coltype, int64_t);
/* int2string with thousand separators */
char *fullsize(int64_t);
/* formats a file mode as an ls -l string */
char *fmtmode(unsigned short);
/* read locale information from the environment */
void read_locale();
/* recursively free()s a directory tree */
void freedir(struct dir *);
/* generates full path from a dir item,
returned pointer will be overwritten with a subsequent call */
char *getpath(struct dir *);
/* returns the root element of the given dir struct */
struct dir *getroot(struct dir *);
/* Add two signed 64-bit integers. Returns INT64_MAX if the result would
* overflow, or 0 if it would be negative. At least one of the integers must be
* positive.
* I use uint64_t's to detect the overflow, as (a + b < 0) relies on undefined
* behaviour, and (INT64_MAX - b >= a) didn't work for some reason. */
#define adds64(a, b) ((a) > 0 && (b) > 0\
? ((uint64_t)(a) + (uint64_t)(b) > (uint64_t)INT64_MAX ? INT64_MAX : (a)+(b))\
: (a)+(b) < 0 ? 0 : (a)+(b))
/* Adds a value to the size, asize and items fields of *d and its parents */
void addparentstats(struct dir *, int64_t, int64_t, uint64_t, int);
/* A simple stack implemented in macros */
#define nstack_init(_s) do {\
(_s)->size = 10;\
(_s)->top = 0;\
(_s)->list = xmalloc(10*sizeof(*(_s)->list));\
} while(0)
#define nstack_push(_s, _v) do {\
if((_s)->size <= (_s)->top) {\
(_s)->size *= 2;\
(_s)->list = xrealloc((_s)->list, (_s)->size*sizeof(*(_s)->list));\
}\
(_s)->list[(_s)->top++] = _v;\
} while(0)
#define nstack_pop(_s) (_s)->top--
#define nstack_top(_s, _d) ((_s)->top > 0 ? (_s)->list[(_s)->top-1] : (_d))
#define nstack_free(_s) free((_s)->list)
/* Malloc wrappers that exit on OOM */
void *xmalloc(size_t);
void *xcalloc(size_t, size_t);
void *xrealloc(void *, size_t);
#endif

src/util.zig Normal file

@ -0,0 +1,249 @@
// SPDX-FileCopyrightText: Yorhel <projects@yorhel.nl>
// SPDX-License-Identifier: MIT
const std = @import("std");
const c = @import("c.zig").c;
// Cast any integer type to the target type, clamping the value to the supported maximum if necessary.
pub fn castClamp(comptime T: type, x: anytype) T {
// (adapted from std.math.cast)
if (std.math.maxInt(@TypeOf(x)) > std.math.maxInt(T) and x > std.math.maxInt(T)) {
return std.math.maxInt(T);
} else if (std.math.minInt(@TypeOf(x)) < std.math.minInt(T) and x < std.math.minInt(T)) {
return std.math.minInt(T);
} else {
return @intCast(x);
}
}
// Cast any integer type to the target type, truncating if necessary.
pub fn castTruncate(comptime T: type, x: anytype) T {
const Ti = @typeInfo(T).int;
const Xi = @typeInfo(@TypeOf(x)).int;
const nx: std.meta.Int(Ti.signedness, Xi.bits) = @bitCast(x);
return if (Xi.bits > Ti.bits) @truncate(nx) else nx;
}
// Multiplies by 512, saturating.
pub fn blocksToSize(b: u64) u64 {
return b *| 512;
}
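// Example tests (not in the original source): quick checks of the three
// helpers above, runnable with `zig test`.
test "cast helpers" {
    try std.testing.expectEqual(@as(u8, 255), castClamp(u8, @as(u32, 1000)));
    try std.testing.expectEqual(@as(i8, -128), castClamp(i8, @as(i64, -1000)));
    try std.testing.expectEqual(@as(u8, 0x34), castTruncate(u8, @as(u16, 0x1234)));
    try std.testing.expectEqual(@as(u64, std.math.maxInt(u64)), blocksToSize(std.math.maxInt(u64)));
}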
// Ensures the given arraylist buffer is zero-terminated and returns a slice
// into the buffer. The returned buffer is invalidated whenever the arraylist
// is freed or written to.
pub fn arrayListBufZ(buf: *std.ArrayListUnmanaged(u8), alloc: std.mem.Allocator) [:0]const u8 {
buf.append(alloc, 0) catch unreachable;
defer buf.items.len -= 1;
return buf.items[0..buf.items.len-1:0];
}
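// Example test (not in the original source): typical use, building a string in
// a reusable buffer and handing out a zero-terminated view of it.
test "arrayListBufZ example" {
    var buf: std.ArrayListUnmanaged(u8) = .empty;
    defer buf.deinit(std.testing.allocator);
    try buf.appendSlice(std.testing.allocator, "/some/path");
    const z = arrayListBufZ(&buf, std.testing.allocator);
    try std.testing.expectEqualStrings("/some/path", z);
    try std.testing.expectEqual(@as(u8, 0), z[z.len]); // sentinel stays in place
}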
// Format an integer as right-aligned '###.#'.
// Pretty much equivalent to:
// std.fmt.bufPrintZ(.., "{d:>5.1}", @floatFromInt(n)/10.0);
// Except this function doesn't pull in large float formatting tables.
pub fn fmt5dec(n: u14) [5:0]u8 {
std.debug.assert(n <= 9999);
var buf: [5:0]u8 = " 0.0".*;
var v = n;
buf[4] += @intCast(v % 10);
v /= 10;
buf[2] += @intCast(v % 10);
v /= 10;
if (v == 0) return buf;
buf[1] = '0' + @as(u8, @intCast(v % 10));
v /= 10;
if (v == 0) return buf;
buf[0] = '0' + @as(u8, @intCast(v));
return buf;
}
test "fmt5dec" {
const eq = std.testing.expectEqualStrings;
try eq(" 0.0", &fmt5dec(0));
try eq(" 0.5", &fmt5dec(5));
try eq(" 9.5", &fmt5dec(95));
try eq(" 12.5", &fmt5dec(125));
try eq("123.9", &fmt5dec(1239));
try eq("999.9", &fmt5dec(9999));
}
// Straightforward Zig port of strnatcmp() from https://github.com/sourcefrog/natsort/
// (Requiring nul-terminated strings is ugly, but we've got them anyway and it does simplify the code)
pub fn strnatcmp(a: [:0]const u8, b: [:0]const u8) std.math.Order {
var ai: usize = 0;
var bi: usize = 0;
const isDigit = std.ascii.isDigit;
while (true) {
while (std.ascii.isWhitespace(a[ai])) ai += 1;
while (std.ascii.isWhitespace(b[bi])) bi += 1;
if (isDigit(a[ai]) and isDigit(b[bi])) {
if (a[ai] == '0' or b[bi] == '0') { // compare_left
while (true) {
if (!isDigit(a[ai]) and !isDigit(b[bi])) break;
if (!isDigit(a[ai])) return .lt;
if (!isDigit(b[bi])) return .gt;
if (a[ai] < b[bi]) return .lt;
if (a[ai] > b[bi]) return .gt;
ai += 1;
bi += 1;
}
} else { // compare_right - for right-aligned numbers
var bias = std.math.Order.eq;
while (true) {
if (!isDigit(a[ai]) and !isDigit(b[bi])) {
if (bias != .eq or (a[ai] == 0 and b[bi] == 0)) return bias
else break;
}
if (!isDigit(a[ai])) return .lt;
if (!isDigit(b[bi])) return .gt;
if (bias == .eq) {
if (a[ai] < b[bi]) bias = .lt;
if (a[ai] > b[bi]) bias = .gt;
}
ai += 1;
bi += 1;
}
}
}
if (a[ai] == 0 and b[bi] == 0) return .eq;
if (a[ai] < b[bi]) return .lt;
if (a[ai] > b[bi]) return .gt;
ai += 1;
bi += 1;
}
}
test "strnatcmp" {
// Test strings from https://github.com/sourcefrog/natsort/
// Includes sorted-words, sorted-dates and sorted-fractions.
const w = [_][:0]const u8{
"1-02",
"1-2",
"1-20",
"1.002.01",
"1.002.03",
"1.002.08",
"1.009.02",
"1.009.10",
"1.009.20",
"1.010.12",
"1.011.02",
"10-20",
"1999-3-3",
"1999-12-25",
"2000-1-2",
"2000-1-10",
"2000-3-23",
"fred",
"jane",
"pic01",
"pic02",
"pic02a",
"pic02000",
"pic05",
"pic2",
"pic3",
"pic4",
"pic 4 else",
"pic 5",
"pic 5 ",
"pic 5 something",
"pic 6",
"pic 7",
"pic100",
"pic100a",
"pic120",
"pic121",
"tom",
"x2-g8",
"x2-y08",
"x2-y7",
"x8-y8",
};
// Test each string against each other string, simple and thorough.
const eq = std.testing.expectEqual;
for (0..w.len) |i| {
try eq(strnatcmp(w[i], w[i]), .eq);
for (0..i) |j| try eq(strnatcmp(w[i], w[j]), .gt);
for (i+1..w.len) |j| try eq(strnatcmp(w[i], w[j]), .lt);
}
}
pub fn expanduser(path: []const u8, alloc: std.mem.Allocator) ![:0]u8 {
if (path.len == 0 or path[0] != '~') return alloc.dupeZ(u8, path);
const len = std.mem.indexOfScalar(u8, path, '/') orelse path.len;
const home_raw = blk: {
const pwd = pwd: {
if (len == 1) {
if (std.posix.getenvZ("HOME")) |p| break :blk p;
break :pwd c.getpwuid(c.getuid());
} else {
const name = try alloc.dupeZ(u8, path[1..len]);
defer alloc.free(name);
break :pwd c.getpwnam(name.ptr);
}
};
if (pwd != null)
if (@as(*c.struct_passwd, pwd).pw_dir) |p|
break :blk std.mem.span(p);
return alloc.dupeZ(u8, path);
};
const home = std.mem.trimRight(u8, home_raw, "/");
if (home.len == 0 and path.len == len) return alloc.dupeZ(u8, "/");
return try std.mem.concatWithSentinel(alloc, u8, &.{ home, path[len..] }, 0);
}
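// Example test (not in the original source): expanding a leading "~". The exact
// prefix depends on $HOME and the passwd database, so only the suffix is checked.
test "expanduser example" {
    const p = try expanduser("~/some/file", std.testing.allocator);
    defer std.testing.allocator.free(p);
    try std.testing.expect(std.mem.endsWith(u8, p, "/some/file"));
}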
// Silly abstraction to read a file one line at a time. Only exists to help
// with supporting both Zig 0.14 and 0.15, can be removed once 0.14 support is
// dropped.
pub const LineReader = if (@hasDecl(std.io, "bufferedReader")) struct {
rd: std.io.BufferedReader(4096, std.fs.File.Reader),
fbs: std.io.FixedBufferStream([]u8),
pub fn init(f: std.fs.File, buf: []u8) @This() {
return .{
.rd = std.io.bufferedReader(f.reader()),
.fbs = std.io.fixedBufferStream(buf),
};
}
pub fn read(s: *@This()) !?[]u8 {
s.fbs.reset();
s.rd.reader().streamUntilDelimiter(s.fbs.writer(), '\n', s.fbs.buffer.len) catch |err| switch (err) {
error.EndOfStream => if (s.fbs.getPos() catch unreachable == 0) return null,
else => |e| return e,
};
return s.fbs.getWritten();
}
} else struct {
rd: std.fs.File.Reader,
pub fn init(f: std.fs.File, buf: []u8) @This() {
return .{ .rd = f.readerStreaming(buf) };
}
pub fn read(s: *@This()) !?[]u8 {
// Can't use takeDelimiter() because that's not available in 0.15.1,
// Can't use takeDelimiterExclusive() because that changed behavior in 0.15.2.
const r = &s.rd.interface;
const result = r.peekDelimiterInclusive('\n') catch |err| switch (err) {
error.EndOfStream => {
const remaining = r.buffer[r.seek..r.end];
if (remaining.len == 0) return null;
r.toss(remaining.len);
return remaining;
},
else => |e| return e,
};
r.toss(result.len);
return result[0 .. result.len - 1];
}
};
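// Illustrative sketch (not in the original source): reading a file line by line
// through LineReader; the same calling code works on both the 0.14 and 0.15
// code paths. The path argument is a placeholder.
fn exampleReadLines(path: []const u8) !void {
    const f = try std.fs.cwd().openFile(path, .{});
    defer f.close();
    var buf: [4096]u8 = undefined;
    var rd = LineReader.init(f, &buf);
    while (try rd.read()) |line| std.debug.print("{s}\n", .{line});
}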


@ -1,130 +0,0 @@
#!/bin/sh
# This script is based on static/build.sh from the ncdc git repo.
# Only i486 and arm arches are supported. i486 should perform well enough, so
# x86_64 isn't really necessary. I can't test any other arches.
#
# This script assumes that you have the musl-cross cross compilers installed in
# $MUSL_CROSS_PATH.
#
# Usage:
# ./build.sh $arch
# where $arch = 'arm', 'aarch64', 'i486' or 'x86_64'
MUSL_CROSS_PATH=/opt/cross
NCURSES_VERSION=6.0
export CFLAGS="-O3 -g -static"
# (The variables below are automatically set by the functions, they're defined
# here to make sure they have global scope and for documentation purposes.)
# This is the arch we're compiling for, e.g. arm/mipsel.
TARGET=
# This is the name of the toolchain we're using, and thus the value we should
# pass to autoconf's --host argument.
HOST=
# Installation prefix.
PREFIX=
# Path of the extracted source code of the package we're currently building.
srcdir=
mkdir -p tarballs
# "Fetch, Extract, Move"
fem() { # base-url name targetdir extractdir
echo "====== Fetching and extracting $1 $2"
cd tarballs
if [ -n "$4" ]; then
EDIR="$4"
else
EDIR=$(basename $(basename $(basename $2 .tar.bz2) .tar.gz) .tar.xz)
fi
if [ ! -e "$2" ]; then
wget "$1$2" || exit
fi
if [ ! -d "$3" ]; then
tar -xvf "$2" || exit
mv "$EDIR" "$3"
fi
cd ..
}
prebuild() { # dirname
if [ -e "$TARGET/$1/_built" ]; then
echo "====== Skipping build for $TARGET/$1 (assumed to be done)"
return 1
fi
echo "====== Starting build for $TARGET/$1"
rm -rf "$TARGET/$1"
mkdir -p "$TARGET/$1"
cd "$TARGET/$1"
srcdir="../../tarballs/$1"
return 0
}
postbuild() {
touch _built
cd ../..
}
getncurses() {
fem http://ftp.gnu.org/pub/gnu/ncurses/ ncurses-$NCURSES_VERSION.tar.gz ncurses
prebuild ncurses || return
$srcdir/configure --prefix=$PREFIX\
--without-cxx --without-cxx-binding --without-ada --without-manpages --without-progs\
--without-tests --without-curses-h --without-pkg-config --without-shared --without-debug\
--without-gpm --without-sysmouse --enable-widec --with-default-terminfo-dir=/usr/share/terminfo\
--with-terminfo-dirs=/usr/share/terminfo:/lib/terminfo:/usr/local/share/terminfo\
--with-fallbacks="screen linux vt100 xterm xterm-256color" --host=$HOST\
CPPFLAGS=-D_GNU_SOURCE || exit
make || exit
make install.libs || exit
postbuild
}
getncdu() {
prebuild ncdu || return
srcdir=../../..
$srcdir/configure --host=$HOST --with-ncursesw PKG_CONFIG=false\
CPPFLAGS="-I$PREFIX/include -I$PREFIX/include/ncursesw"\
LDFLAGS="-static -L$PREFIX/lib -lncursesw" CFLAGS="$CFLAGS -Wall -Wextra" || exit
make || exit
VER=`cd '../../..' && git describe --abbrev=5 --dirty= | sed s/^v//`
tar -czf ../../ncdu-linux-$TARGET-$VER-unstripped.tar.gz ncdu
$HOST-strip ncdu
tar -czf ../../ncdu-linux-$TARGET-$VER.tar.gz ncdu
echo "====== ncdu-linux-$TARGET-$VER.tar.gz and -unstripped created."
postbuild
}
buildarch() {
TARGET=$1
case $TARGET in
arm) HOST=arm-linux-musleabi DIR=arm-linux-musleabi ;;
aarch64)HOST=aarch64-linux-musl DIR=aarch64-linux-musl ;;
i486) HOST=i486-linux-musl DIR=i486-linux-musl ;;
x86_64) HOST=x86_64-linux-musl DIR=x86_64-linux-musl ;;
*) echo "Unknown target: $TARGET" ;;
esac
PREFIX="`pwd`/$TARGET/inst"
mkdir -p $TARGET $PREFIX
ln -s lib $PREFIX/lib64
OLDPATH="$PATH"
export PATH="$PATH:$MUSL_CROSS_PATH/$DIR/bin"
getncurses
getncdu
PATH="$OLDPATH"
}
buildarch $1