I've just released 1.14. This release adds mtime display & sorting and a limited form of --follow-symlinks, can display larger file counts in the file browser, and fixes a few bugs you weren't likely to trigger anyway.

Get it from

Damn, I think it's been a decade since I last wrote a release announcement for ncdu. Back then Freshmeat was still a thing... I miss that time.

@ayo wait, you're the developer of ncdu? Cool, I love this tool. Have 10€.

@ayo hold on, you wrote ncdu? I love that thing! Thanks for writing this useful TUI utility :blobcat:

@ayo NCDU is very useful! Always helps me debug which logfile decided to take up 10Gb this time...

@ayo I like the mtime stuff! 😉 Thanks for the feedback on my patches.

@blunaxela Ooh I hadn't realized you were on Mastodon! Good work on the mtime patches, most people give up after my first round of feedback. 😅

@ayo Yeah, I saw your mastodon handle on your webpage and was tempted to send patches on here instead! 😝 BTW, I have a silly change set that makes dir_scan.c pthreaded. Still pretty slow... It's maybe 2x faster for NFS, but other cases are much slower. I'll let you know if I make any real progress.

@blunaxela So you weren't kidding about that threading!

Now I'm curious about your approach. The dir_output interface doesn't allow for much parallelism; probably the only thing that can be "easily" done in parallel is the stat() calls for all files in a single directory, but I'd imagine quite a high synchronization overhead for that in many cases.

My approach has been to start a thread for each dir, up to N threads. I had to remove a number of global variables and pass the current dir context around in order to make dir_walk and recur reentrant. It also stats by full path, since chdir isn't thread-friendly. The most expensive part right now is the mallocs, and it's threading on every item instead of just dirs. There's a lock on item() to protect the tree and addstatstoparents().

Oh, I just realized that there could be a lock on item() for each independent branch of the tree as well! That might be interesting to coordinate.

@blunaxela Scanning multiple dirs in parallel doesn't seem like it'll work well with the ordering that item() expects. File export, in particular, is going to be hard to fix, I think.

Extra consideration with passing full paths to syscalls: Make sure to test nested dirs exceeding PATH_MAX.

I pass the the current dir struct around via argument and don't use globals in item() and item_add() other than the absolute root for the whole tree. So far it works with -0 output. Yeah... export would have to be redone to use the struct tree after it's done.

As for PATH_MAX, I want to switch to fstatat(). Then there's less PATH_MAX checking needed as well. Although I'm starting to wonder what's POSIX and what's Linux.

@blunaxela The point of export is that it doesn't need much memory...

I think the best approach is keeping the threading inside dir_scan.c and serializing/coordinating the calls to item() to ensure correct ordering. This complicates and limits the task distribution a bit, but it keeps the complexity from affecting other parts.

@ayo great job. I really like this tool. A coworker showed it to me recently. It was the best tool for shrinking/pruning my Docker containers. :)
