Memory Usage

(At least some of this section may still be a bit preliminary...)
[#memamt]: Detecting how much memory there is available/free/used/total

Details vary based on the different operating systems. For details on how much memory exists, in total, see the sections about memory-related details specific to operating systems.

For details on seeing how much of that memory is free/available, that is often simply figured out by subtracting the amount of used memory from what is available in total.
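As a concrete illustration of that subtraction (Linux-specific; other operating systems expose this information differently), a small script can parse /proc/meminfo-style text and do the arithmetic. The sample values below are invented, purely for illustration:

```python
# Sketch: parsing Linux-style /proc/meminfo text to compute memory use.
# (Linux-specific; other operating systems report this differently.)
def parse_meminfo(text):
    """Return a dict of field name -> kibibytes from /proc/meminfo text."""
    info = {}
    for line in text.splitlines():
        name, _, rest = line.partition(":")
        if rest:
            info[name.strip()] = int(rest.split()[0])
    return info

sample = """MemTotal:        8000000 kB
MemFree:         1000000 kB
MemAvailable:    3000000 kB"""

fields = parse_meminfo(sample)
# Approximate "used" memory: total minus what remains available.
print(fields["MemTotal"] - fields["MemAvailable"])  # kB
```

On a real Linux system, the same function could be fed the contents of /proc/meminfo directly.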

For details on how much memory is used, see the section on memory usage.

[#swapsize]: The ideal swap/page (file/partition) size
No universal answer

This is a topic where there may be quite a few different opinions among experienced technicians. These opinions are often strongly held, even when they are not well informed.

https://opensource.com/article/18/9/swap-space-linux-systems discusses some of the old advice that had been frequently used closer to the start of the third millennium, which was to have swap space be double the RAM size. “This rule took into account the facts that RAM sizes were typically quite small at that time and that allocating more than 2X RAM for swap space did not improve performance. With more than twice RAM for swap, most systems spent more time thrashing than actually performing useful work.”

So, let's look at some better advice applicable today:

Mark Russinovich of Sysinternals (an organization that made programs like Process Explorer, and which was later acquired by Microsoft) discussed the page file size in this article: “Pushing the Limits of Windows: Virtual Memory” (as archived on 2015-Dec-5 by the Wayback Machine @ Archive.org) (“Pushing the Limits of Windows: Virtual Memory”, apparently re-published). After discussing some details about the “paging file” in Microsoft Windows, including the memory limitations called the “commit limit” and “commit charge”, Mark Russinovich had this to say:

Perhaps one of the most commonly asked questions related to virtual memory is, how big should I make the paging file? There’s no end of ridiculous advice out on the web and in the newsstand magazines that cover Windows, and even Microsoft has published misleading recommendations. Almost all the suggestions are based on multiplying RAM size by some factor, with common values being 1.2, 1.5 and 2. Now that you understand the role that the paging file plays in defining a system’s commit limit and how processes contribute to the commit charge, you’re well positioned to see how useless such formulas truly are.

Perhaps the most correct answer is that memory gets handled differently by different operating systems. Seeking a single and universal answer that correctly works on all computers, and which everyone agrees upon, seems like a quest that is quite unlikely to provide joyous results.

Some key factors that are likely to affect what is a good swap size include:

  • which operating system is being used
  • and perhaps more details about what software is being used
  • how much RAM is in the system
  • whether you want swap for reasons other than backing overflowing RAM, such as supporting having crash dump data getting stored in swap space, or storing data to enable a system's built-in support for hibernation

In some (quite possibly many/most) cases, there may be some struggle trying to find extensive (or any) details about the impact of specific software's effect on how much swap space is desirable. However, there are often recommendations that are made for specific operating systems, so the recommended practice is to try to at least find some documented details that take into consideration which operating system is being used. (For instance, the operating system's vendors may have helpful information.)

MS KB 889653: How to determine the appropriate page file size for 64-bit versions of Windows states, “When lots of memory is added to a computer, a paging file may not be required.” However, the article goes on to state, “Windows Domain Controllers and DFS Replication, Certificate and ADAM/LDS Servers are not supported without a configured pagefile.” Some software may be designed to require the usage of a page file for reasons other than the traditional and most famous reason of having a way to store memory when the system has run out of freely available RAM.

The key point shown by that example is that the way that a computer gets used can affect how a page file may be used. This realization, by itself, proves a lack of a single simple formula that optimizes the page file decision for all computers.

The NetBSD for Sega Dreamcast FAQ about page size addressed the question about how big a swap file should be, by saying, “This is, of course, a matter of personal choice, and depends on what you'll be doing with the system.” Guidelines are provided, but the FAQ does not provide a single answer that is recommended for all situations.

Even though there may not be a single universal answer, there may be some general philosophies that are often correct.

Common misconception

Many people think that swap space is used as follows: when a computer runs out of available physical memory, then memory is read from, and written to, swap. Well, yes, that is true, but that's not necessarily the whole story. A swap file may be helpful even when there is no data being written to, or read from, the file.

David Schwartz's answer to user1306322's SuperUser.com question about disabling swap is quite detailed and concise, although complex enough that a clear understanding can require some dedicated brain power. He describes that software may need “to reserve physical RAM to back allocations that are very unlikely to ever require it (for example, a private, modifiable file mapping), leading to a case where you can have plenty of free physical RAM and yet allocations are refused to avoid overcommitting.”

This text tries to describe his scenario, broken down into simpler thoughts, though this text is a bit longer.

In some cases, an operating system may commit to being able to provide a certain amount of memory. That memory might be reserved for a potential task that might actually not occur. Needing to keep track of that much memory might be necessary only “for possibilities that are extraordinarily unlikely.” However, since such possibilities are possible, a program may have asked the operating system to reserve enough memory to be able to perform the potential task if necessary. If the operating system has committed to being able to provide that memory as needed, then the operating system is obligated to have enough available memory to perform that task.

When an operating system can count on a paging file to be able to store lots of memory, if needed, then the operating system can utilize some of the fast physical RAM for other tasks. By having swap space available, the operating system can effectively use RAM for operations that are likely to benefit from increased speed (including being used as an effective disk cache, which can be actively helpful when a program reads data that has been saved onto a disk).

Using physical RAM as a disk cache may be useful, and may be a better way of using RAM than just reserving physical RAM in case some program might need to use it. So, there can be cases where an operating system can optimize the usage of physical RAM by having lots of “memory” available, even if much of that “memory” is just virtual memory on a disk. Even if that virtual memory is not actively storing data that a program is relying upon, the operating system can be helped just by having that memory available as a resource. Most of the time, extraordinarily unlikely possibilities aren't what happen. So, if the computer can operate in a way that is faster most of the time, the result may be a computer that is usually faster.

To recap that scenario in a single sentence: the operating system can optimize the use of fast physical RAM if it knows that it can count on a paging file being available if needed, even if the need would only be rare. However, if swapping is completely disabled, the operating system might need to reserve a bunch of fast physical memory for a program that indicated it might need that much memory. The memory needs to be available even if the program doesn't actually end up using it. With swap enabled, the memory that is required to be available can be in the form of virtual memory, instead of the fast physical RAM.
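The commit-limit bookkeeping described above can be sketched in a deliberately simplified model (an illustration of the concept, not Windows' actual algorithm; all of the numbers are invented): the commit limit is roughly RAM plus page file, and a reservation is refused once the commit charge would exceed it.

```python
# Simplified model of a commit limit: the OS promises memory up to
# RAM + page file, even if most of it is never actually touched.
def can_commit(request, commit_charge, ram_bytes, pagefile_bytes):
    """Return True if committing `request` more bytes still fits
    under the commit limit (RAM plus page file)."""
    commit_limit = ram_bytes + pagefile_bytes
    return commit_charge + request <= commit_limit

ram = 4 * 1024**3      # 4 GiB of RAM (example value)
charge = 3 * 1024**3   # 3 GiB already committed (example value)
req = 2 * 1024**3      # a program reserves 2 GiB "just in case"

print(can_commit(req, charge, ram, pagefile_bytes=0))          # no swap: refused
print(can_commit(req, charge, ram, pagefile_bytes=4 * 1024**3))  # with swap: granted
```

Note how the refusal in the first case happens even though the 2 GiB reservation may never be written to: the mere promise exceeds the limit when no page file backs it.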

Swap space compared to RAM

The traditional idea of swap is that programs could then have a way to store data when the amount of main memory was insufficient. So, if there is a larger amount of main memory, then there should be a smaller need for swap, not more. In other words, the amount of swap space needed should be inversely related to the amount of RAM: as RAM increases, the needed swap should decrease.

Some people have had different advice. The OpenBSD team has addressed this: OpenBSD version 4.3's FAQ 4.7: “How much space” had some tips that aren't as clear in the newer FAQs, including the following: “Many people follow an old rule of thumb that your swap partition should be twice the size of your main system RAM. This rule is nonsense. On a modern system, that's a LOT of swap, most people prefer that their systems never swap. You don't want your system to ever run out of RAM+swap, but you usually would rather have enough RAM in the system so it doesn't need to swap. If you are using a flash device for disk, you probably want no swap partition at all. Use what is appropriate for your needs.”

The OpenBSD FAQs have even gone so far as to state (in OpenBSD's FAQ on Swap): “There are all kinds of tips about optimizing swap (where on the disk, separate disks, etc.), but if you find yourself in a situation where optimizing swap is an issue, you probably need more RAM. In general, the best optimization for swap is to not need it.”

Well, that's all nice in theory. In reality, older machines have often been memory starved, and so budget has been known to impact whether it seems reasonable to try to go swapless.

Crash space

Both Microsoft Windows and at least some Unix implementations have been known to store crash data into swap space.

OpenBSD version 4.3's FAQ 4.7: How much space had some tips that aren't as clear in the newer FAQs, including the following: “Swap and /var spaces are used to store system core dumps on in the event of a” crash. “If this is a consideration for you, your swap space should be slightly larger than the amount of main memory you are likely to ever have in the system.” (Keep in mind the possibility of expanding system RAM to the maximum supported by the motherboard. So, if your motherboard has a certain amount of RAM, but has memory slots for more RAM, you might want to have enough disk space available to cover that larger amount of RAM. Also, keep in mind the less likely possibility of both the hard drive, and much of the operating system installation, being moved to a new motherboard which might support even more RAM. However, many times a new operating system is used when a new motherboard is used, so that scenario might not be quite as likely.) The FAQ goes on to say that part of the operating system “will attempt to save the contents of the swap partition to” “dump files. Be realistic -- few developers will want to look at your 1GB dump file, so if you aren't planning on investigating a crash locally, this is probably not a concern.”

Perhaps the idea for Unix storing stuff into a swap file is that hopefully there will be less likelihood of filesystem issues causing problems with storing the data.

Specific advice
Partition / Page file layout

If an older computer begins swapping/thrashing, the cause might be that some newer software, such as software code that was installed when security updates were installed, is using up more memory. Check to see how much disk space is used. If there are many gigabytes free, particularly if the percentage is low enough that the disk is unlikely to be filled before the machine is eventually replaced, then there should be little hesitation about using up some of that free space in order to create swap space. That may be easier to implement when swap data is, and/or can be, using an already-existing filesystem volume (storing data into one or more “swap file”(s)). If the operating system is using disk space on a “swap partition” rather than a “swap file”, this could require re-partitioning, which may be a bit more of a challenge if the available space is initially in a filesystem volume instead of being unallocated, unpartitioned space.

Page file size for Microsoft Windows

In Pushing the Limits of Windows: Virtual Memory, Mark Russinovich wrote about how to “size” the paging file, meaning how to determine the size of the paging file. He said:

“the only way to reasonably size the paging file is to know the maximum total commit charge for the programs you like to have running at the same time.” ... “Peak Commit Charge. To optimally size your paging file you should start all the applications you run at the same time, load typical data sets, and then note the commit charge peak (or look at this value after a period of time where you know maximum load was attained). Set the paging file minimum to be that value minus the amount of RAM in your system (if the value is negative, pick a minimum size to permit the kind of crash dump you are configured for).”

Actually, using “typical data sets” might be best for average circumstances, but there may be benefit to providing a size that will actually work well with the most memory-intensive data sets, including not just current data, but newer data (using data sets that might grow over time).

The advice by Mark Russinovich, which was just quoted, is best provided in hindsight, after some data can be gathered. If you want to make a decision ahead of time, you may get an idea by knowing the requirements of various software. However, Mark Russinovich's approach may help to provide a minimum size that provides full, maximum benefit while not being wastefully large.
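Russinovich's sizing rule can be written as a tiny calculation (a sketch: the crash-dump floor parameter stands for whatever minimum size your configured dump type requires, and the example numbers are invented):

```python
# Sketch of Russinovich's rule: page file minimum = peak commit charge
# minus RAM; if that is negative, fall back to whatever minimum the
# configured crash dump type requires.
def pagefile_minimum(peak_commit_bytes, ram_bytes, crash_dump_floor_bytes=0):
    size = peak_commit_bytes - ram_bytes
    if size < 0:
        size = crash_dump_floor_bytes
    return size

# Example: 10 GiB peak commit on an 8 GiB machine -> 2 GiB minimum.
print(pagefile_minimum(10 * 1024**3, 8 * 1024**3))
# Example: peak commit below RAM -> use the crash-dump requirement instead.
print(pagefile_minimum(6 * 1024**3, 8 * 1024**3, crash_dump_floor_bytes=1024**3))
```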

There are various approaches to see this “Peak Commit Charge” value.

Misc info: Wikipedia: Commit charge, Geoff Chappel: ZwQuerySystemInformation

Memory Dump info

KB 889654: How to determine the appropriate page file size for 64-bit versions of Windows Server 2003 or Windows XP. Q254649 mentions some requirements for memory dumps. It says “If you select the Complete memory dump option, you must have a paging file on the boot volume that is sufficient to hold all the physical RAM plus 1 megabyte (MB).” (Note that it also says “The Complete memory dump option is not available on computers that are running a 32-bit operating system and that have 2 gigabytes (GB) or more of RAM.”)
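Per that statement, the boot-volume page file needed for a complete memory dump is simply all of RAM plus one megabyte (this sketch interprets the extra megabyte as 1 MiB; the article does not specify which definition of “megabyte” applies):

```python
# Sketch: page file size needed for a "Complete memory dump":
# all physical RAM plus 1 MB (interpreted here as 1 MiB).
def complete_dump_pagefile(ram_bytes):
    return ram_bytes + 1024**2

print(complete_dump_pagefile(16 * 1024**3))  # for 16 GiB of RAM
```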

Static versus Dynamic

Wayback Machine @ Archive.org archive of Win95 FAQ: Part 9.6 warns of “Bottom-out”. In general, resizing the page file is good: if there is disk space to afford it and a program needs memory, the resizing should be allowed. But it is also good to know that the resizing happened, so that memory usage can be reviewed. Then the minimum size can be increased so that this doesn't happen again.


Using at least twice as much swap as RAM is recommended by the second paragraph of FreeBSD's man page on “tuning” (system performance).

As noted earlier, this could be based on some older ideas about ideal swap size space. However, since this did come from an operating system's official “man page”, it seemed relevant enough to quote.


MS-DOS does not generally have swap provided by the operating system. Some space may be able to be freed up by looking for old swap files of programs that are no longer running, which may have file extensions such as *.swp, *.tmp, or *.00?. Such orphaned files are often created when the system reboots while a program was using its swap file. Swap files may commonly be placed in %TEMP%. Platforms such as CWSDPMI may help make some swapping automatic, but this will only affect programs that were designed to use the feature.

The ideal amount of unused memory

Perhaps zero, or something close to it. Note that this topic did not say that the ideal amount of available memory is zero. The ideal amount of available memory may be infinite. However, physical memory which is going unused may be wasted. The memory may still be available, so any program requesting memory may use the memory. However, any memory which is not being used by any running program may be utilized to help allow some sort of optional task, such as storing information in a disk cache. Recognizing this, Microsoft Windows Vista will typically consume a lot more memory than its predecessor Windows XP, and use the memory for cache to help speed things up.

Once again, it should be pointed out that memory gets handled differently by different operating systems.

[#memislow]: How to handle low memory

One item that may be useful is to just figure out what is using the memory. That might (or might not) point to an easily-rectified problem.

If the problem has been identified as being related to a specific piece of software, check whether that software has any configuration settings/options that determine how much memory is being used. In some cases, upgrading to a new version of the software may worsen the situation of memory utilization, as newer software may be more optimized for newer machines with higher amounts of memory. In other cases, upgrading to a new version may help fix a “memory leak” or some other bug that used memory uselessly: this might be particularly likely if the program tends to use more memory when it runs longer (even when people are not actively using the program). If the program is suspected to be a problem, seeking support for the individual specific program might (or might not) end up being helpful.

One approach that may work, in some organizations, is to determine how old the machine is. If the machine is quite old, especially if the typical task performed by the machine is fairly important, then some organizations may choose to spend money resolving the problem by replacing the machine with a newer machine. Since updated software is often necessary for maximum information/data security, and since updated software may often be designed for modern (newer) hardware, older machines may not have an amount of memory that keeps running newer software very well. If the older machine is going to be replaced very soon, it may not be very worthwhile to deeply investigate the specific cause of memory being used up. Instead, the organization may prefer spending money on getting a faster computer. That approach may be preferable for an organization to experiencing costs from less productive labor using an older machine, plus whatever costs may come from paying authorized personnel to try to upgrade the amount of usable memory in an older machine.

Another course of action may be to get more memory. This will likely be the best solution for a machine, if it has any effect at all. However, if the problem is not a lack of RAM, but a lack of a more specific type of memory, then adding more total RAM might not have any effect on the limiting factor.

There may be configurable settings. Most well known would be settings to determine how much disk space may be used for the operating system to swap to virtual memory. Although more rare, other applications may have some settings to determine how much memory is used.

In the case of DOS, one solution may be to move software to use a different type of memory. If the issue is with low free conventional memory, a better option may be to see if modern drivers are available that use substantially less memory than some older drivers. More details are in the section about DOS. Such techniques probably do not apply much with other operating systems.

Operating systems may handle memory differently. There may be different types/categories of memory. Running out of one type can lead to problems. Such limitations come from software design, so further details are broken down by operating system.

[#memusage]: Memory usage

This section is largely about confirming how much memory (including virtual memory) is being used (by various software). This can help to verify that a system is running out of memory, and what is using the memory.

If low memory is confirmed to be an issue, and a solution is not clear just from knowing which program is hogging up all the memory, see the section about handling low memory.

(Details are provided, categorized by operating system type.)


This guide may still be very brief/cursory.

Guide to running/tuning OpenBSD network services in production explains that some memory may be allocated to a filesystem cache. This memory may not be reported by other software, so unless this is known and determined, the memory may seem to be missing/unreported. To find out how much memory like this is used, see the dmesg log, by running dmesg or by viewing the dmesg output that occurred at the time the system booted, in the /var/run/dmesg.boot file. Search for something like “using #### buffers containing ####### bytes (####K) of memory”.
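One way to pull that number out of saved dmesg text with a script is sketched below (the regular expression assumes the message format quoted above, which may vary between OpenBSD releases; the sample line is illustrative):

```python
import re

# Sketch: extract the buffer-cache size from OpenBSD-style dmesg text.
# The message wording is an assumption based on the format quoted above.
def buffer_cache_bytes(dmesg_text):
    m = re.search(r"using \d+ buffers containing (\d+) bytes", dmesg_text)
    return int(m.group(1)) if m else None

sample = "using 1254 buffers containing 64225280 bytes (62720K) of memory"
print(buffer_cache_bytes(sample))
```

On a real system, the function could be fed the contents of /var/run/dmesg.boot; it returns None if no matching line is found.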

Additional information from this guide may also have come from Guide to running/tuning OpenBSD network services in production.

One option may be to run top. If swap is substantially available but entirely unused, and if there is memory free, then the issue is not likely to be too little memory overall. In the section discussing Real memory, the tot figure includes act (which stands for “active”), and some of that memory may be freed.

Note that if the amount of available memory seems to be artificially low, there may be per-user limits being enforced. The command line shell program may have an internal ulimit command.
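On Unix-like systems, the limits that the shell's ulimit reports can also be inspected from within a program; here is a sketch using Python's standard resource module (Unix-only; RLIMIT_DATA is just one of several memory-related limits that may be enforced):

```python
import resource  # Unix-only standard library module

# Sketch: inspect a per-process memory-related limit, the same kind of
# value the shell's ulimit command reports. RLIM_INFINITY means no
# limit is being enforced for that resource.
soft, hard = resource.getrlimit(resource.RLIMIT_DATA)
if soft == resource.RLIM_INFINITY:
    print("data segment size: unlimited")
else:
    print("data segment size limited to", soft, "bytes")
```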

If memory is known to be low, and the need is to find out why: Try using ps. The options for ps are known to vary quite a lot amongst different operating systems (as noted by implementation differences for ps). In OpenBSD, -m sorts by memory, so “ ps -auxwwm | more ” may be good. Columns to check out may be %MEM and VSZ and RSS.

[#syvmstat]: Using (OpenBSD's) “ systat vmstat ”

By default, the program runs in interactive mode. To quit systat if it is in interactive mode, simply press q or Ctrl-C, just like many pager commands.

OpenBSD's systat has different reports of information. Each style of report is called a “view” (using the terminology from the OpenBSD's web page for systat).

One way to go directly into a specific view is to specify the desired view name on the command line. For example, to get to the view showing what programs are using up a lot of processor time, one may run: “ systat pigs ”. Another option is that when the program is running in interactive mode (which is the default), press the number associated with the view. (For example, 5 switches to the same view as the pigs command line option.) To see the name and number associated with a specific view, once the view is active, send the BEL character (by pressing Ctrl-G (or the lowercase equivalent, Ctrl-g, because lowercase control sequences also work)). The bottom of the screen will then show a line consisting of the name (and, therefore, the command line parameter) and number of the view. The default view is the “vmstat” view, so just running systat with no command line parameters is equivalent to running “ systat vmstat ”.

The screen will print repeatedly, delaying for a specific number of seconds between each report, and stopping only after reaching a specified count. If no value was specified for the maximum count, then the maximum is considered to be infinite. At least in OpenBSD, some of the information only gets regenerated every five seconds, so having the delay times the count span a multiple of five seconds may be likely to produce more useful information. It also may be useful to have the delay set to a value of 1. (This information is documented in OpenBSD Manual Page for vmstat: “Examples” section.) If an update is desired before it happens on its own, the screen can often be refreshed sooner by switching to another view and back. (This might not be effective when the system is operating slowly, and is simply slow to respond to any input.) So, tapping 6 and then 5 will result in a refreshed, updated screen of view #5.

Although the name of the vmstat program relates to its role to “Report virtual memory statistics” (as titled in a man page found on Debian), the program also reports a variety of details related to system performance, including details related to CPU usage and interrupts. An advantage to using the vmstat screen (in systat) is that it does show more information, so useful details about a wider variety of problems may be determined by using this screen. (If a user was very familiar with that screen, and went to the screen to check for one problem, the user might rule out that problem but also notice information that points to another problem.) However, just because there is a lot of information doesn't mean that all of the information is intuitively understood. As noted before, details are provided by the manual page.

Some additional resources describe this program in more detail:

  • An example of using systat in more detail (to troubleshoot memory/disk usage) is shown by busy disk: using OpenBSD's systat. The example shows some output from a computer that was running slowly.
  • finding how CPU is used discusses systat in a couple of sub-sections (with less detail about the vmstat screen).

Microsoft Windows

Perhaps see: Info about Memsnap.exe (for Win XP Pro, Win Server 2003).

Main/Physical memory
This section may benefit from additional review. However, the included information might be useful/helpful. Consider it to be untested information, and see if it may be beneficial. Perhaps the following may show an assortment of useful data. For those who want to type the least:
WMIC Process List Full

However, for those who can copy and paste big long command lines with ease, the following may leave out less useful details, as well as show things in a table:

WMIC Process Get Caption,CommandLine,Description,Handle,HandleCount,ParentProcessId,PeakWorkingSetSize,ProcessId,QuotaNonPagedPoolUsage,QuotaPagedPoolUsage,QuotaPeakNonPagedPoolUsage,QuotaPeakPagedPoolUsage,ThreadCount,VirtualSize,WorkingSetSize

See also:

systeminfo | find "Memory"

Note that these programs (WMIC and systeminfo) may also be used to check remote systems.

Page file usage
This section may benefit from additional review. However, the included information might be useful/helpful. Consider it to be untested information, and see if it may be beneficial.

The following may provide some additional details about, or related to, page file usage:

WMIC /OUTPUT:memuse2.txt PATH Win32_Process GET Caption,CommandLine,Handle,HandleCount,OtherOperationCount,OtherTransferCount,PageFaults,PageFileUsage,ParentprocessId,PeakPageFileUsage,PeakVirtualSize,PeakWorkingSetSize,PrivatePageCount,ProcessId,VirtualSize

(One may see if one or more settings related to a page file using:)

WMIC PageFile Get /All
WMIC PageFileSet Get /All
WMIC volume get BootVolume,Caption,Description,DeviceID,DirtyBitSet,DriveLetter,DriveType,FileSystem,FreeSpace,Label,MaximumFileNameLength,Name,PageFilePresent,SerialNumber,SystemVolume

(This program may also be used to check remote systems.)

systeminfo | find "Page File"
Kernel Memory

Information about running out of specific types of kernel memory (“Non-Paged Pool Memory”, a.k.a. “NPP”, and also “Paged Pool Memory”) may be found in the section dedicated to covering the topic.

[#winmemlk]: Troubleshooting Memory Leaks

See the various sections about detecting memory usage to verify whether the amount of used memory does indeed keep increasing over time. A leak refers to memory remaining allocated when it shouldn't be: a classic example is a program that keeps allocating memory it never releases, so usage grows the longer the program runs. The term “garbage collection” refers to some methods of noticing memory that is no longer needed (and identifying that memory as being free for use). Some programming languages and/or frameworks come with garbage collection features; otherwise, it is up to the software developer to implement such memory handling. Software developers may be able to assist “open source” software development by identifying and fixing any existing problems. For other software, one of the best ways that end users typically help is simply to provide information to a software developer, including performing tests if the software developer makes such a request.
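Since the practical check is whether usage keeps climbing over time, here is a hedged sketch of that trend test over periodic memory samples (the sampling mechanism itself is left out; any of the usage-reporting tools discussed in this guide could supply the numbers, which here are invented):

```python
# Sketch: decide whether periodic memory samples look like a leak,
# i.e. usage trends steadily upward rather than fluctuating.
def looks_like_leak(samples, min_growth=0):
    """True if every sample grows past the previous one by more
    than `min_growth` bytes."""
    return all(b - a > min_growth for a, b in zip(samples, samples[1:]))

print(looks_like_leak([100, 120, 150, 180]))  # steadily climbing
print(looks_like_leak([100, 120, 90, 110]))   # fluctuating
```

A steadily climbing trend, especially while the program is idle, is a hint worth reporting to the developer; fluctuation alone is normal behavior.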

Providing information to a developer

Fixing the problem is probably best performed by the developer of the software.

The first thing for the software developer to know is that there is a memory leak. The software developer may know the most efficient way to track down the problem. Here are some generalizations which may or may not be helpful.

Microsoft KB Q318263: How to identify memory leaks in the common language runtime refers to Visual Studio .NET's “comprehensive garbage collection package and managed memory” as one possible solution. The page notes, “Because of the garbage collection package that is implemented in the Microsoft .NET Framework, it is not possible to have a memory leak in managed code.”

Microsoft KB Q919790: How to use the IIS Debug Diagnostics tool to troubleshoot a memory leak in an IIS process involves downloading the IIS Diagnostics Toolkit (for 32-bit systems and/or Itanium systems), setting up logging, checking the “Auto-create a crash rule to get userdump on unexpected process exit” box and/or creating a dump file, stopping performance logging, and analyzing a dump file (using any needed symbols).

[#dosmempr]: DOS (memory issues/problems)

Extensive information, including using newer drivers with superior functionality compared to those typically used when DOS was more common, is available on the page for DOS memory.


Note: This information is about MacOS (version 9 and earlier), not Mac OSX.

Based on my reading, Connectix's products were known to actually do some good. Wikipedia's article on Connectix: “Products” section mentions Virtual, Mode32, and RAM Doubler, all of which were related to memory. (There was also the “Speed Doubler” program, which could often translate inefficient programs to efficient programs.) Basically, it seems like the people at Connectix did some things better than Apple's programmers. Over time, some of the benefits of these programs were of reduced importance as Apple improved their operating system.

(Perhaps see also: hardware page's section on RAM Compression.)

Microsoft Windows Process Memory Types
Working Set

Clint Huffman's “Can a process be limited on how much physical memory it uses?” references “Windows® Sysinternals Administrator’s Reference (p. 59)” (by Mark E. Russinovich, Aaron Margosis), which notes, “The amount of physical memory that a process uses is called Working Set.”

The same web page notes that the Minimum Working Set and Maximum Working Set values are not strictly guaranteed values unless a “resource management application” enforces hard limits. For instance, Windows Server 2008 R2 offers software called the Windows System Resource Manager, which is typically available but not installed by default, so someone can install that software as an operating system feature.