Low Disk Space
- Some general concepts
Note: If disk space is reported to be exhausted, but there does appear to be free space on the filesystem volume, consider whether end-user quotas may be having an impact. (This text currently does not have a hyperlink to a section discussing such quotas.)
- Hard Links
Hard links can result in some software reporting that a certain amount of space is being used, although that reported number may actually be larger than the amount of space actually used. For instance, Microsoft Windows Vista and newer may have this happen with the WinSxS directory. Unix users may also see similar effects with hard links, so this isn't something that is specific to Microsoft Windows.
Apparently there is quite a bit of software that may report disk space incorrectly. jayy78's post on WinSxS notes the problem that software tools “won't show the actual HDD usage.” (That's not to say that software can't detect the problem; it is simply stating that some software does have that result.) Greg D's comment on a SuperUser.com question about Microsoft Windows's %windir%\WinSxS states, “There are a lot of hard links in winsxs, so the size is also frequently over-reported.” Those quotations are referring to Microsoft Windows's “Side by side assemblies” (WinSxS) directory, but, as noted, the general concept is not limited to Microsoft Windows. Similar things can happen with hard links in Unix.
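As a minimal sketch of this over-reporting effect on a Unix-like system (the paths here are hypothetical demo locations), the same data gets counted twice when directories sharing a hard link are measured separately:

```shell
# Create two directories that share one 100 KB file via a hard link
mkdir -p /tmp/hlinkdemo/a /tmp/hlinkdemo/b
dd if=/dev/zero of=/tmp/hlinkdemo/a/file bs=1024 count=100 2>/dev/null
ln /tmp/hlinkdemo/a/file /tmp/hlinkdemo/b/file   # hard link: same data blocks

du -sk /tmp/hlinkdemo/a   # about 100 KB
du -sk /tmp/hlinkdemo/b   # about 100 KB again: summing these over-reports
du -sk /tmp/hlinkdemo     # a single scan counts the shared data only once
```

Tools that measure each directory independently behave like the first two commands, which is essentially how WinSxS sizes get over-reported.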
Jonwis's “Deleting from the WinSxS directory” notes, “Files and directories will be removed over time as the servicing system cleans up after itself. Administrators should not, for any reason, take it upon themselves to clean out the directory - doing so may” cause problems. However, it seems this advice is a bit misleading: the %windir%\WinSxS directory is known to grow over time (at least on some operating systems, including Microsoft Windows Vista).
- Microsoft Windows
- [#sefredsk]: Seeing how much space is free
Note that the percentage given may be the amount of free space meant to be available to standard user accounts. OpenBSD FAQ: Disk Setup: section about negative free space (and having more than 100% of the disk space being used) (FAQ 14.14) discusses this. (Whether this can happen may depend on how the filesystem is implemented. Filesystem tuning may affect how this works.)
A simpler option may be to use “df -i”. The purpose of the “-i” switch is to also show information about inode usage. That is only meant to be useful for some filesystem types. The concept of an “inode” exists with both Ext2fs and FFS (and successor filesystem types, such as FFS2). The inode usage on some filesystems, such as FAT32, may show as 100% used. If such a thing is seen, just ignore that unhelpful information; it is simply a result of the fact that FAT32 doesn't use “inodes” the way those other filesystem types do.
The point of showing inode usage is to help determine whether the filesystem volume may be running out of “inodes” (before available disk space is fully allocated). This tends to be a fairly uncommon issue, but the results are very similar to running out of disk space. In both cases (running out of inodes, or running out of disk space), new data cannot be written to disk space that gets tracked by an inode.
If the percentage of inodes used is higher than the percentage of disk space used, then the trend is that the filesystem volume may run out of free inodes before it runs out of free disk space. This is more likely to occur on a filesystem with many small files (which don't use up much disk space on average). A solution may involve making changes to the filesystem. This might not be urgent, but may be worth rectifying when convenient, before the issue becomes critical (and possibly extremely inconvenient).
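A quick way to compare the two percentages (a sketch; the column names shown are those used by GNU df, and other systems may vary):

```shell
df -h /    # the Use% column: percentage of disk space allocated
df -i /    # the IUse% column: percentage of inodes allocated
```

If the inode percentage consistently runs ahead of the space percentage, inode exhaustion may arrive first.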
An even simpler command line would just be to run df with no parameters. In many cases, this may report disk space in half-kilobyte (512-byte) blocks. So, usually, divide any reported numbers by two, and consider that to be the number of kilobytes. However, some implementations may use a different block size in the reported numbers, so verification/clarification may be needed. Checking the operating system's manual page for df may suitably work.
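The block-size ambiguity can also be sidestepped by requesting kilobyte units explicitly (a sketch; exact output columns vary by system):

```shell
df       # on some systems, the numbers here are 512-byte blocks
df -k    # POSIX-specified -k: report in 1024-byte (kilobyte) units
```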
There may be a command dedicated to showing free space. Even if there is not, the dir command (which is generally used for listing files, and which also reports the amount of free space) will generally work, and may be the fastest option. Other options may also exist, like using filesystem checking software for FAT: chkdsk also reports free space when its scan finishes.
- [#fndusdsk]: Finding out what is using up disk space
If there is a suspect directory/folder (or more than one), an option may be to see how much space is being used by the specific directory.
- Getting a report of disk space used
On some Unix-based operating systems, the tree command can show sizes in “human-readable” format (e.g., with the -h switch on some implementations). The MS-DOS tree command uses “high ASCII graphics to draw lines” unless the /A switch is used to force standard low ASCII characters.
- Using Unix Userland Tools
e.g., perhaps something like: find . -type f -exec du -k {} + (which lists each file's disk usage, and whose output can then be sorted).
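As a runnable sketch of this userland-tools approach (the paths are hypothetical demo locations, and --max-depth is a GNU du extension):

```shell
# Make a small demo tree
mkdir -p /tmp/dudemo/sub
dd if=/dev/zero of=/tmp/dudemo/sub/big bs=1024 count=64 2>/dev/null

# Per-subdirectory totals (kilobytes)
du -k --max-depth=1 /tmp/dudemo

# Per-file usage, in the spirit of a "find ... -type f -exec" command
find /tmp/dudemo -type f -exec du -k {} +
```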
- NCurses Solutions
- [#ncdu]: Ncdu
NCurses Disk Usage uses text mode. The home page for that software also refers to other available options, including options that use graphical displays. For instance, the “Similar projects” section refers to Filelight which shows a visual display.
More information on this program, including some sample output, is currently at another web page about this software. (Yet another web page may also help if the package isn't being found in CentOS / Fedora.)
- tdu uses GPLv2.
gdu (https://github.com/dundee/gdu/releases) seems to have an advantage over some similar programs: gdu uses parallelization to get the job done quicker. It also has releases for Unix-ish platforms and Microsoft Windows.
Beware that the program name of gdu might also be used for “gnome disk utility”. The program being mentioned here may also be known as “dundee gdu”. (At least, that was indicated by a filename seen at https://www.freshports.org/sysutils/gdu)
In August of 2021, releases included gdu v5.5.0, v5.6.0, v5.6.1, and v5.6.2. This demonstrates a fairly rapid release pace.
Running gdu --help shows the syntax:

Pretty fast disk usage analyzer written in Go.

Gdu is intended primarily for SSD disks where it can fully utilize parallel processing. However HDDs work as well, but the performance gain is not so huge.

Usage:
  gdu [directory_to_scan] [flags]

Flags:
  -h, --help                          help for gdu
  -i, --ignore-dirs strings           Absolute paths to ignore (separated by comma) (default [/proc,/dev,/sys,/run])
  -I, --ignore-dirs-pattern strings   Absolute path patterns to ignore (separated by comma)
  -X, --ignore-from string            Read absolute path patterns to ignore from file
  -f, --input-file string             Import analysis from JSON file
  -l, --log-file string               Path to a logfile (default "/dev/null")
  -m, --max-cores int                 Set max cores that GDU will use. 8 cores available (default 8)
  -c, --no-color                      Do not use colorized output
  -x, --no-cross                      Do not cross filesystem boundaries
  -H, --no-hidden                     Ignore hidden directories (beginning with dot)
  -p, --no-progress                   Do not show progress in non-interactive mode
  -n, --non-interactive               Do not run in interactive mode
  -o, --output-file string            Export all info into file as JSON
  -a, --show-apparent-size            Show apparent size
  -d, --show-disks                    Show all mounted disks
  -s, --summarize                     Show only a total in non-interactive mode
  -v, --version                       Print version

At a Microsoft Windows command prompt, an attempt to list disks produced:

Error: loading mount points: Only Linux platform is supported for listing device
The home page for “NCurses Disk Usage” and tdu's home page are both about programs that operate in text mode, rather than requiring a graphical display environment. However, these home pages are also noteworthy for a selfless feature: both have sections that mention other similar programs, including options that use graphical displays. For instance, on the Ncdu page, the “Similar projects” section refers to Filelight, which shows a visual display. (More information about Ncdu is mentioned earlier, in the section about Ncdu.)
Treemap software can report this nicely. Some options may include:
- Microsoft Windows
WinDirStat (also available is the WinDirStat home page on SourceForge) provides a nice summary, including being able (after a time-consuming scan runs) to quickly see how much space is used by each file extension. A link to the latest version of the WinDirStat installer and the most recent packages of source code files are available from WinDirStat's Permalinks for Downloads web page.
SequoiaView may be another option.
If Wine is installed, then using WinDirStat may be a nice choice.
xdiskusage can also make a similar display. It uses GPLv2+.
KDirStat is similar to WinDirStat. KDirStat may have the claim of coming earlier, but it may be a slightly less nice choice: it doesn't show as much information, such as which file extensions use up the most space. The version for KDE4 has been known to be distributed under the name k4dirstat, so that may be a name to search for when using package repositories.
Disk Usage Analyzer (probably more widely referred to as Baobab) may also be an option.
GdMap creates treemaps.
Ben Shneiderman's information on treemaps shows some background about treemaps (perhaps targeting an academic audience). The windirstat program for Microsoft Windows is a bit nicer than the program it was based on, kdirstat for KDE.
The following software can show things visually. (This might, or might not, fit the specific description of a treemap.)
For Mac OS X, Disk Inventory X
Baobab (Disk Usage Analyzer) for Linux has online documentation that mentions Treemaps, but also has support for another approach called Ringschart (which is a compound word: rings chart).
There's another piece of software, by JAM Software, named TreeSize. An older note made reference to the “limited TreeSize” software. Now, multiple names have been used for JAM Software's different releases of related software, including “TreeSize Free”, “TreeSize Personal”, and “TreeSize Professional”. JAM Software's home page for TreeSize may provide some further information about this option.
- [#lsdirsiz]: Seeing how much space is used by a specific directory
- Standard approaches
These approaches typically do not require obtaining any extra software, and can be done with whatever comes with the operating system.
Unix will likely come with du.
Built in: Unix: “du -sh directoryname”. (The -s shows just a summary total for the directory; the -h causes human-readable rounding to occur. If that rounding isn't desired, then leave the h off.)
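A runnable sketch (the directory here is a hypothetical demo location):

```shell
mkdir -p /tmp/sizedemo
dd if=/dev/zero of=/tmp/sizedemo/file bs=1024 count=32 2>/dev/null
du -sh /tmp/sizedemo   # -s: one summary line for the whole directory
du -sk /tmp/sizedemo   # -k: plain kilobytes, easier to compare numerically
```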
- Information that was here has been moved, and is now at: listing subdirectories.
- Microsoft Windows
The method(s) for DOS may work.
An option that comes with the operating system is to use the context menu of a directory (which can be done by right-clicking on the folder), and choose Properties.
There may be various versions of the du command, such as those that are ports of Unix software, or Sysinternals' (TechNet: Sysinternals Disk Usage command). http://www.ltr-data.se/opencode.html has a couple of utilities (sizdir??.zip, sizeof.zip), and available source code. (At least some of these utilities may not have been tested by the author of this text, so use at your own risk.)
Note that WinSxS may report misleading amounts of disk space used. This is because the directory contains a large number of “hard links”. Discussion: Superuser.com discussion on WinSxS size.
See also the section on finding out what is using up the disk space.
- Even more options...
NCurses Disk Usage uses text mode. The home page for that software also refers to other available options, including options that use graphical displays. So, the web page may be useful even for people seeking options that use graphical interfaces. See the “Similar projects” section.
tdu uses ncurses. Its web page also refers to other programs, including calling ncdu a “fancier” option.
The gt5 software seems to create web pages that can easily be viewed with text browsers.
- Microsoft Windows
- Other approaches/info
- [#fixlwdsk]: Dealing with having too little available disk space where it is needed
This troubleshooting section discusses how to deal with low disk space in a specific location. This can be useful when just one location is too low on disk space, including instances where that location is an entire drive.
There are various approaches. Some may work better in some situations than other approaches.
- Using rather automated approaches
In theory, this might be rather harmful. However, automated approaches can be much faster than more manual approaches, and so they may be the most attractive option when they work well. This documentation is not trying to recommend this approach as being 100% safe, but often it is safe. In some cases, certain techniques (like trying to delete old and unused “temporary files” that somehow remained) may often be rather pointless, accomplishing nearly nothing, although there may be the benefit that they don't take a lot of time to try.
- Microsoft Windows
Microsoft Windows may come with some options to try to clean up disk space rather automatically.
One item to note: jayy78's post on WinSxS noted, “There is no dedicated tool, it all got integrated into the Disk Cleanup service. One note that I would make here is that even if the option says it will remove a few hundreds of MB after I cleaned up my drive the service removed around 3GB of excess files, so it varies from system to system. Funny, now I have even more space than I had before installing SP1. Nice one, MS.” (Actually, Win7 information does provide some details about a clean-up tool.) The point being made here is: don't necessarily trust that the space reported by “Disk Cleanup” will be accurate; according to this report, the “Disk Cleanup” might free up much more space than it suggests.
- Disk Cleanup
See: Disk Cleanup, MS KB 181701. The ss64 page on cleanmgr notes that Disk Cleanup seems to be removed from Windows Server 2008; however, it can be made available. One way to make it available is to install the “Desktop Experience” role; another may be to use some available software: see Null-Byte: Missing disk cleanup utility in Windows Server “Fix”. For Windows Server 2012, see NickC's comment on an answer to NickC's ServerFault question about Disk Cleanup.
The ss64 page on cleanmgr lists command line options, and may also have hyperlinks to additional resources.
See also: MS KB 315246 for details about the options that the GUI presents, and for command line parameters. Also, command line parameters are documented by MS KB 181701 and TechNet: “Use Some (Relatively) Unknown Command-Line Switches for Disk Cleanup.
A quick review of the command line options will make it appear that the /sageset option is just about providing an easy method of saving preferences regarding which available checkboxes should be enabled. However, this option may also show more checkboxes, as noted by Windows Club. MS KB 253597 notes that the /d switch is not used with /sagerun.
As a point of discrepancy, the TechNet article (mentioned before) and the ss64 page on cleanmgr indicate that sageset can have a value of up to 65,535; the Windows Club article (mentioned before) gives a maximum value of 255. The TechNet and ss64 pages mention Windows Server 2008, while Windows Club mentions Windows 7 (and Windows 8). So the maximum value may depend on which operating system is being used.
- Additional options
Check out the ss64 page on cleanmgr, which lists command line options and may also have hyperlinks to additional resources. e.g., in WinXP/2003, another tool may be available (see its SS64 page). Another tool it refers to is CleanRoamingProfile.vbs (information on SS64).
Windows user versions mentions some “clean-up tools” that may be specific to certain service pack releases. If the operating system has a service pack applied, check that page to see if there is an option. Note that using such software may have some drawbacks, such as making it impossible to later reverse some changes.
- Removing unneeded data
Find out what data is using space. (See: Finding out what is using up disk space.)
- Moving data
If data should be kept, but does not need to be located where it is, perhaps moving data to another location will provide more free space where space is needed.
- Unmounting an unneeded separate partition
If the directory that ran out of space is on a mount point that is not on the root directory of the device, consider what would happen if the mount point (that has low free space) were unmounted. If the process needing disk space were re-attempted, the data would not go onto the mount point that doesn't have enough space. Instead, data would go to the mount point used by a parent in the hierarchy.
As an example using Unix filesystems (where it is more common to have mount points that aren't mounted right onto the root directory), if there are separate mount points used for / and /usr/ and /usr/src/ (and other mount points as well), and /usr/src/ is not big enough, consider unmounting /usr/src/. Then any data that is written to an empty /usr/src/ directory will take up the free space on the /usr/ mount point. That may work just fine if /usr/ has sufficiently more free space. (Using this example, it does not matter how much free space is on / because, in this example, /usr/ was a separate mount point.)
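Before unmounting anything, df can confirm which volume currently backs a path (a sketch; the /usr example mirrors the text, and the exact output format varies by system):

```shell
df /usr   # shows the filesystem volume backing /usr
df /      # shows the volume backing the root directory
# If /usr/src/ were a separate mount point and got unmounted, running
# "df /usr/src" afterward would show /usr's volume backing that path.
```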
- Using an unused partition
Using another unused partition seems unlikely to be available, but when it is, it may offer an easy solution. First, if a partition (which is too small) is already mounted at (or under?) the destination, then unmount that mount point. Then mount the available filesystem volume at the needed location. This approach may assume some strong control over where a mount point is created. Some operating systems, like Unix, may provide that power easily, while this approach might not be quite as easy with other operating systems. (If using a different operating system, determine how to alter mount points less conveniently, or perhaps perform the task in an operating system that does provide this option.)
- Using a RAM drive
On computers with sufficiently large amounts of available (unused) memory (RAM), note that a RAM drive (also known as an “mfs” (“memory filesystem”) mount point) may be created. Then, once that is created, simply implement the previous option.
- Adjusting the disk layout
The cleanest way to resolve the problem may be to take the (perhaps substantial) amount of time to adjust the disk layout (even if that means using a different disk). Note that this may be destructive, and/or time-consuming. So, do not expect this to necessarily be the easiest way to handle this.
- Compressing data
Perhaps some data may be compressed. A solution could involve a single large file that is being kept around but not really used often. (TOOGAM's software archive: Archivers might have some software to help with that.) Compressing an entire volume might also be a solution. (For details on that solution, see compressed drives, including myths about drive compression that even many technical professionals may erroneously believe.)
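For the single-large-file case, a minimal sketch with gzip (the file name is hypothetical):

```shell
# Create a hypothetical large, rarely-used file
yes "a very compressible line of text" | head -20000 > /tmp/bigfile
gzip -9 /tmp/bigfile      # -9: best compression; replaces it with /tmp/bigfile.gz
gzip -l /tmp/bigfile.gz   # -l: show compressed vs. uncompressed sizes
```

To use the file again later, gunzip restores the original in place.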
- Using links
If one large partition has lots of space available, that space might be able to be used even if the free space needs to be in another location. This approach might only work on filesystem types where the operating system provides support for using a “symbolic link” (perhaps also referred to as a “junction”). This approach may assume some strong control over where a mount point is created. Some operating systems, like Unix, may provide that power easily, while this approach might not be quite as easy with other operating systems. (If using a different operating system, determine how to alter mount points less conveniently, or perhaps perform the task in an operating system that does provide this option.)
Note: sometimes some software might interact with a drive in such a way that the software becomes aware of, and uses, the location that is pointed to. This sort of behavior (which might be considered a sort of jail-breaking) can cause some issues. For instance, if /tmp/symlink/ points to /home/username/ then /tmp/symlink/../ may end up pointing to /tmp/ or to /home/. Even worse is when a multi-stage task/process ends up using one behavior at one point, and another behavior at another point. Then there may be an expectation (by an end user, or perhaps even the software) that data is in one location when data is really in two separate locations. This may be uncommon, and encountered errors may not immediately point (in a very obvious manner) to what the actual problem is. Once the problem is known, finding a solution may be a bit challenging: making an adjustment that quickly works around the issue, without simply recreating the exact same problem, may not be simple. In some cases, there may be a documented method to help with this. For instance, when compiling OpenBSD's source code, an environment variable may be checked. Such a solution may sometimes be a way to help avoid such problems.
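The /tmp/symlink/ example in the note can be reproduced as follows on a typical Linux system (hypothetical demo paths; pwd -P shows the physical path that “..” actually resolves against in many programs):

```shell
mkdir -p /tmp/sldemo/home/username /tmp/sldemo/tmp
ln -s /tmp/sldemo/home/username /tmp/sldemo/tmp/symlink
cd /tmp/sldemo/tmp/symlink
pwd      # logical view:  /tmp/sldemo/tmp/symlink
pwd -P   # physical view: /tmp/sldemo/home/username
```

Software that uses the logical view treats “..” as /tmp/sldemo/tmp/, while software using the physical view treats it as /tmp/sldemo/home/ — the two-locations confusion described above.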
- The straightforward way
If the desired destination/location is already mounted, unmount the mount point. That may be needed for this next step: If the destination/location already exists, even as an empty directory, then that needs to be removed (either renamed, or deleted). Then create a destination symlink.
Information about symlinks is here; it should be moved to another location.
In Unix, such a symlink may be done with something like “ln -s source destination”. The source may be any sort of filesystem object, including the possibilities of being a file, a directory/folder, or a device object. Each destination created will be a symlink. As a generalization, a symlink to a file can be treated similarly to how a file is treated (such as being opened by a text editor), and a symlink to a directory/folder can be treated similarly to how a directory/folder is treated (such as being able to use the cd command to enter it). Some software, particularly software that handles recursion, may be exceptions to this generalization.
Multiple sources may be provided, as long as the final operand is a directory; this creates one symlink, inside that directory, for each source. For example, “ln -s /some/file /some/otherfile targetdir” creates two symlinks in targetdir: one is called file and the other is called otherfile. It is also perfectly possible to create more than two symlinks this way.
Leaving off the -s parameter will attempt to create a “hard” link instead of a “soft” link. A hard link may use one less allocation unit, and one less layer of redirection, which may improve speed and reduce the likelihood of exceeding a limit on how much redirection is used. However, there is one significant limit: a hard link can only be created to a source which is on the same filesystem volume as every link being created.
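A minimal end-to-end sketch of the symlink approach (all paths are hypothetical; /tmp/linkdemo/bigvolume stands in for a roomier filesystem volume):

```shell
mkdir -p /tmp/linkdemo/bigvolume/data
# Put a symlink where the space-hungry directory is expected to live
ln -s /tmp/linkdemo/bigvolume/data /tmp/linkdemo/appdata
echo hello > /tmp/linkdemo/appdata/note   # written "through" the symlink
cat /tmp/linkdemo/bigvolume/data/note     # same file, reached directly
ls -ld /tmp/linkdemo/appdata              # shows: appdata -> .../data
```

Anything written under the symlinked name consumes space on the volume holding the real directory, which is the point of the technique.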
- Create more space in a subdirectory
Determining the actual requirements can produce positive results. For instance, consider a scenario where it seems like /usr/src/ needs to have 5GB free, but only 3GB is free. Perhaps another drive has 3GB free. If it can be determined that /usr/src/needlots/ needs 2.5GB of the space, and 2.5GB of space is needed for the rest of /usr/src/, then a solution may be available by creating a symbolic link (or mount point) at the /usr/src/needlots/ location.
- Change the requirements
Why is free space absolutely needed at a specific location? Can software be adjusted to look for data at another location?
- Use multiple methods
- e.g., change the disk layout, make a new filesystem volume with lots of space, and then use symlinks.
- New hardware
- Often the priciest option, this might work when budget allows and when other options may not be possible/available/desirable.