- Apr 08, 2020
-
-
Christian Göttsche authored
(cherry picked from commit 9801f46ece0ca2525f02d71464efc42296dddcb5)
-
Christian Göttsche authored
(cherry picked from commit e4e8ebd9e0037812436a1588809deb23e0f3751a)
-
- Feb 10, 2020
-
- Jul 23, 2019
-
-
Yorhel authored
This is a best-effort approach to saving ncdu's state when memory is low. There is likely allocation in libraries (ncurses, printf) that isn't being checked. Fixes #132 (it actually doesn't; that also needs a 64-bit static binary, but I'll get to that)
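A minimal sketch of the general technique, assuming allocations are routed through a checked wrapper that tries to dump state before exiting; the names xmalloc and save_state_on_oom are illustrative, not necessarily ncdu's actual symbols:

```c
/* Route allocations through a checked wrapper so that, when malloc() fails,
 * we can attempt to write the current scan state to disk before giving up.
 * Names and the dump format are made up for this sketch. */
#include <stdio.h>
#include <stdlib.h>

static void save_state_on_oom(void) {
    /* Best effort: this may itself allocate (fopen, printf), which is
     * exactly the caveat mentioned above. */
    FILE *f = fopen("ncdu-oom-dump.json", "w");
    if (f) {
        fputs("{\"note\":\"state dump placeholder\"}\n", f);
        fclose(f);
    }
}

static void *xmalloc(size_t size) {
    void *ptr = malloc(size);
    if (!ptr) {
        save_state_on_oom();
        fputs("out of memory\n", stderr);
        exit(1);
    }
    return ptr;
}

int main(void) {
    char *buf = xmalloc(64);   /* all allocations go through the wrapper */
    free(buf);
    return 0;
}
```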
-
- Feb 04, 2019
-
-
Yorhel authored
-
- Jan 21, 2019
-
-
Alex Wilson authored
This adds an 'm' command to show the latest modified time of all files in a directory. The 'M' command allows for ascending and descending mtime sorting. These are only enabled with the -e flag and overload the dir_ext mtime field.
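A rough sketch of the underlying computation, assuming a parent/child tree of nodes; the struct and field names are illustrative, and ncdu itself keeps the value in the dir_ext extension rather than in the node:

```c
#include <stdint.h>

struct node {
    struct node *sub;    /* first child */
    struct node *next;   /* next sibling */
    uint64_t mtime;      /* own mtime; for a dir, updated to the subtree max */
};

/* Returns the latest mtime in the subtree rooted at n and caches it there. */
uint64_t latest_mtime(struct node *n) {
    uint64_t max = n->mtime;
    for (struct node *c = n->sub; c; c = c->next) {
        uint64_t m = latest_mtime(c);
        if (m > max)
            max = m;
    }
    n->mtime = max;
    return max;
}
```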
-
- Jan 29, 2018
-
-
Yorhel authored
-
- Jan 23, 2018
-
-
Yorhel authored
It's looking a bit cramped, but I'm lazy.
-
Yorhel authored
Unfortunately, there wasn't a single bit free in struct dir.flags, so I had to increase its size to 16 bits. This commit is just the initial preparation; there are still a few things to do:
- Add an "extended information" CLI flag to enable/disable this functionality.
- Export and import extended information when requested.
- Do something with the data.

I also did a few memory measurements on a file list with 12,769,842 items:
- before this commit: 1.239 GiB
- without extended info: 1.318 GiB
- with extended info: 1.698 GiB

It's surprising what adding a single byte to a struct can do to the memory usage. :(
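A simplified sketch of the kind of layout this describes, assuming an FF_EXT-style flag marks nodes that carry extended info; field names, flag values, and the exact layout are illustrative rather than copied from ncdu:

```c
#include <stdint.h>
#include <sys/types.h>

#define FF_DIR 0x01u
#define FF_EXT 0x100u           /* needs more than 8 bits of flag space */

struct dir_ext {                /* only allocated when extended info is on */
    uint64_t mtime;
    uid_t uid;
    gid_t gid;
    unsigned short mode;
};

struct dir {
    int64_t size, asize;
    int items;
    unsigned short flags;       /* was 8 bits wide; now 16 to fit FF_EXT */
    char name[];                /* name stored inline after the struct */
};
```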
-
- Jan 21, 2018
-
-
Yorhel authored
I've decided not to use ls-like file name coloring for now, instead just coloring the difference between a (regular) file and a dir. Still looking for a good color scheme for light backgrounds.
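For illustration only, a minimal ncurses snippet showing the file-vs-directory distinction with two color pairs; the pair numbers and color choices are arbitrary examples, not ncdu's scheme:

```c
#include <ncurses.h>

#define PAIR_FILE 1
#define PAIR_DIR  2

int main(void) {
    initscr();
    start_color();
    use_default_colors();
    init_pair(PAIR_FILE, -1, -1);          /* default fg/bg for files */
    init_pair(PAIR_DIR, COLOR_BLUE, -1);   /* blue names for directories */

    attron(COLOR_PAIR(PAIR_DIR));
    printw("src/\n");
    attroff(COLOR_PAIR(PAIR_DIR));
    attron(COLOR_PAIR(PAIR_FILE));
    printw("main.c\n");
    attroff(COLOR_PAIR(PAIR_FILE));

    refresh();
    getch();
    endwin();
    return 0;
}
```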
-
- Jul 08, 2017
-
-
Yorhel authored
TODO:
- Add (ls-like) colors to the actual file names -> implement full $LS_COLORS handling, or something simple and custom?
- Test on a white/black terminal, and provide an alternate color scheme if necessary.
- Make colors opt-in?
-
- Aug 24, 2016
-
-
Yorhel authored
-
- Jan 22, 2014
-
-
Yorhel authored
-
- Jul 23, 2013
-
-
Yorhel authored
This is a slightly modified patch contributed at http://dev.yorhel.nl/ncdu/bug/35
-
- Jan 13, 2013
-
-
Chris West (Faux) authored
-
- Nov 22, 2012
-
-
Yorhel authored
I realized that I used addparentstats() with negative values when removing stuff, so it had to be done this way (without rewriting everything). It's a simple solution, anyway.
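A sketch of the signed-delta idea: one routine walks up the parent chain and applies the same (possibly negative) adjustment to every ancestor, so removal is just addition with negated values. The signature below is illustrative; ncdu's actual addparentstats() differs in detail:

```c
#include <stdint.h>

struct node {
    struct node *parent;
    int64_t size, asize;
    int items;
};

void addparentstats(struct node *n, int64_t size, int64_t asize, int items) {
    for (; n; n = n->parent) {
        n->size  += size;
        n->asize += asize;
        n->items += items;
    }
}

/* Removing an entry: addparentstats(entry->parent, -entry->size,
 * -entry->asize, -1); re-adding it uses the positive values. */
```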
-
Yorhel authored
This mostly avoids the issue of getting negative sizes. It's still possible to get a negative size after refresh or deletion, I'll get to that in a bit.
-
- Sep 06, 2012
-
-
Yorhel authored
-
- Aug 29, 2012
-
-
Yorhel authored
-
- Aug 27, 2012
-
-
Yorhel authored
2 billion files should be enough for everyone. You probably won't have enough memory to scan such a filesystem. int is a better choice than long, as sizeof(int) is 4 on pretty much any system where ncdu runs.
-
Yorhel authored
*Should* be equivalent, but having a clearly standardised width is much better.
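A tiny illustration of the point, assuming the change swapped long for a fixed-width type from <stdint.h>; whether these exact typedefs were used here is an assumption:

```c
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    int64_t size = INT64_C(5) * 1024 * 1024 * 1024;   /* 5 GiB always fits */
    int items = 2000000000;                           /* the "2 billion" cap */
    /* "long" is 4 bytes on 32-bit systems and on 64-bit Windows, so file
     * sizes need an explicitly 64-bit type; plain int is fine for items. */
    printf("sizeof(long)=%zu sizeof(int64_t)=%zu\n",
           sizeof(long), sizeof(int64_t));
    printf("size=%" PRId64 " items=%d\n", size, items);
    return 0;
}
```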
-
- Aug 26, 2012
-
-
Yorhel authored
The architecture is explained in dir.h. The reasons for these changes are two-fold:
- calc.c was too complex, it simply did too many things. 399ccdeb is a nice example of that: it should have been an easy fix, but it introduced a segfault (fixed in 0b49021a) and added a small memory leak.
- This architecture features a pluggable input/output system, which should make a file export/import feature relatively simple.

The current commit does not feature any user interface, so there's no feedback yet when scanning a directory. I'll get to that in a bit. I've also not tested the new scanning code very well yet, so I might have introduced some bugs.
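A rough sketch of what a pluggable output interface can look like, where the scanner only talks to a small table of callbacks so the same scan code can feed a memory tree, a UI, or a file exporter; these names are illustrative and not taken from ncdu's dir.h:

```c
#include <stdio.h>
#include <stdint.h>

struct dir_item {
    const char *name;
    int64_t size;
    int is_dir;
};

struct dir_output {
    void (*item)(struct dir_output *out, const struct dir_item *it);
    void (*final)(struct dir_output *out);
};

/* One possible backend: dump every item to stdout. */
static void print_item(struct dir_output *out, const struct dir_item *it) {
    (void)out;
    printf("%s%s\t%lld\n", it->name, it->is_dir ? "/" : "",
           (long long)it->size);
}
static void print_final(struct dir_output *out) { (void)out; }

int main(void) {
    struct dir_output out = { print_item, print_final };
    struct dir_item a = { "src", 4096, 1 }, b = { "main.c", 1234, 0 };
    out.item(&out, &a);      /* the scanner would call these per entry */
    out.item(&out, &b);
    out.final(&out);
    return 0;
}
```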
-
- Jan 18, 2012
-
-
Yorhel authored
Damn, it's 2012 already.
-
- Oct 31, 2011
-
-
Yorhel authored
-
- Feb 27, 2010
-
-
Yorhel authored
The directory sizes are now incorrect as hard links will be counted twice again (as if there wasn't any detection in the first place), but this will get fixed by adding a shared size field. This method of keeping track of hard links is a lot faster and allows adding an interface which lists the found links.
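A hedged sketch of the faster bookkeeping hinted at above: keep a registry of every hard-linked inode seen, so the links can be listed later and a shared-size field can be derived from it. The linked list and names are purely illustrative; ncdu's real structures differ:

```c
#include <stdint.h>
#include <stdlib.h>

struct hlink {
    uint64_t dev, ino;
    int64_t size;
    int nfound;              /* how many links to this inode were scanned */
    struct hlink *next;
};

static struct hlink *links;

/* Register one more link to (dev, ino); returns the registry entry. */
struct hlink *register_link(uint64_t dev, uint64_t ino, int64_t size) {
    struct hlink *l;
    for (l = links; l; l = l->next)
        if (l->dev == dev && l->ino == ino) {
            l->nfound++;     /* seen again: candidate for a "shared" total */
            return l;
        }
    l = calloc(1, sizeof *l);
    if (!l)
        return NULL;
    l->dev = dev; l->ino = ino; l->size = size; l->nfound = 1;
    l->next = links; links = l;
    return l;
}
```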
-
Yorhel authored
-
- May 12, 2009
-
-
Yorhel authored
-
- May 11, 2009
-
-
Yorhel authored
Hard link detection is now done in a separate pass on the in-memory tree, and duplicates can be 'removed' and 're-added' on the fly. When making any changes in the tree, all hard links are re-added before the operation and removed again afterwards. While this guarantees that all hard link information is correct, it does have a few drawbacks. I can currently think of two:
1. It's not the most efficient way to do it, and may be quite slow on large trees. Will have to do some benchmarks later to see whether it is anything to be concerned about.
2. The first encountered item is considered as 'counted' and all items encountered after that are considered as 'duplicate'. Because the order in which we traverse the tree doesn't always have to be the same, the items considered as 'duplicate' can vary with each deletion or re-calculation. This might cause confusion for people who aren't aware of how hard links work.
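A sketch of the duplicate-marking pass described above: the first node seen for each (dev, ino) pair is the counted one, and later occurrences are flagged as duplicates so their sizes can be excluded. The fixed-size seen-table and names are toy simplifications:

```c
#include <stdint.h>

#define FL_DUPLICATE 0x01
#define MAX_SEEN 1024            /* toy bound; real code would use a hash */

struct node {
    struct node *sub, *next;     /* first child / next sibling */
    uint64_t dev, ino;
    int nlink;
    unsigned flags;
};

static struct { uint64_t dev, ino; } seen[MAX_SEEN];
static int nseen;

static int seen_before(uint64_t dev, uint64_t ino) {
    for (int i = 0; i < nseen; i++)
        if (seen[i].dev == dev && seen[i].ino == ino)
            return 1;            /* already counted elsewhere */
    if (nseen < MAX_SEEN) {
        seen[nseen].dev = dev;
        seen[nseen].ino = ino;
        nseen++;
    }
    return 0;                    /* first occurrence: counted */
}

/* Depth-first and order-dependent, which is exactly why the node that ends
 * up "counted" can differ between runs. */
void mark_duplicates(struct node *n) {
    for (; n; n = n->next) {
        if (n->nlink > 1 && seen_before(n->dev, n->ino))
            n->flags |= FL_DUPLICATE;
        mark_duplicates(n->sub);
    }
}
```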
-
- Apr 26, 2009
-
-
Yorhel authored
So we're actually back to having one header file for everything, except it's now maintainable.
-
- Apr 23, 2009
-
-
Yorhel authored
ncdu does seem to handle longer paths now, though there are still a few places where the code relies on PATH_MAX. Will fix those later.
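A sketch of the usual way to drop the PATH_MAX dependency: keep the current path in a heap buffer that grows as components are appended; the helper name is made up for the example:

```c
#include <stdlib.h>
#include <string.h>

struct pathbuf {
    char *buf;
    size_t len, cap;
};

/* Append "/name" to the path, growing the buffer as needed. */
int path_push(struct pathbuf *p, const char *name) {
    size_t need = p->len + 1 + strlen(name) + 1;
    if (need > p->cap) {
        size_t cap = p->cap ? p->cap * 2 : 128;
        while (cap < need)
            cap *= 2;
        char *nb = realloc(p->buf, cap);
        if (!nb)
            return -1;
        p->buf = nb;
        p->cap = cap;
    }
    p->buf[p->len++] = '/';
    strcpy(p->buf + p->len, name);
    p->len += strlen(name);
    return 0;
}
```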
-
- Apr 19, 2009
-
-
Yorhel authored
-
- Apr 18, 2009
-
-
Yorhel authored
It was a totally useless feature, anyway.
-
- Apr 11, 2009