Selecting hardware to purchase is one aspect of this task. Determining where to obtain the hardware is a separate endeavor, and that is discussed further in the section about Researching suppliers/Marketplace offerings.

Determining what hardware will function with other hardware can require some research. Determining what hardware actually delivers satisfying performance can require its own research: the numbers used in product/model names can be misleading. If a certain type of hardware gets a reputation for having its overall quality measured by a single statistic, manufacturers often have an incentive to optimize for that statistic alone, possibly even at the cost of quality in other critical areas. Therefore, overall quality is often best judged by hands-on research, which can be prohibitively expensive, or by reading reports from members of the media who get access to free/donated/sponsored sample equipment that they can test and then write about.

This website does not currently have an extensive guide on how to become a master of hardware selection. There are, however, some resources that have been known to do a good job. As quality may vary, inclusion in the following list does not mean that the latest reports are the best available information. They may, however, be good.

This list is currently quite sparse; not much time has yet been spent on creating it. Surely there are other websites that have earned positive reputations, and solid contenders that should be added. Perhaps some focus only on certain aspects, like gaming performance of video cards.

Complete systems
  • APC Magazine's Master Builder: How to build the ultimate PC reviews various types of computers (budget, gaming-centric, extreme). All parts listed, including RAM, power supplies, and cooling fans, had brand names associated with them. (Substituting less prominent brands may offer some potential cost savings, though perhaps at the risk of lower quality.) Some of the information may be old (e.g. in December 2012, the most recent items were from July 2012 and others were even older), but fortunately the page clearly notes the month that each system was reviewed, so you can see just how recent (or not) the information is.

This broader hardware section is chock-full of information about the individual components.

The following are simply references (not necessarily strong recommendations: they may be great, but judge the value yourself) to some other resources that have been released. Other sites known to review hardware include PCMag and Tom's Hardware. e.g.: Guide to building a PC (from the year 2010), Ultimate Honor. YouTube: NewEgg's Tutorial (found from a NewEgg page featuring Do-It-Yourself (DIY) PC Combos).

Specialty: Hardware compatibility with open source

If you're fortunate, you may find an operating system vendor provides some official information about whether hardware works or not. Ubuntu Certified hardware is one example.

Gainframe provides technical reports about compatibility with an operating system, and a Slashdot article about buying hardware suggests the Phoronix review site, and manufacturers System76 and ZaReason.

Some information has been compiled about hardware brand reputation.


Information can be helpful when troubleshooting, and this section provides information related to hardware. However, the troubleshooting section may be more helpful in narrowing down a problem.

[#dethw]: Determining what hardware is available on a computer

This section details ways to get information from the hardware. Specific solutions may have details about finding out whether the hardware is supported by the operating system (or some similar task, such as displaying a list of all hardware which is detected but which is not currently being supported by any actively running drivers for the operating system).

Note that this is mostly a section about generalized approaches to successfully finding out what hardware is available. These sorts of approaches may also be useful for finding out specific details about the hardware, such as whether a detected piece of equipment supports a specific feature. Starting out with these generalized approaches will often work suitably, and so this may be the best approach/habit to use over the long run. More specific details, about specific types of hardware, may be provided in the other areas of the section discussing hardware.

This activity may sometimes be called “system profiling” (and so the software that does this may be called a “system profiler” or a “system profiling tool”).


Note: The process of detecting hardware has, at least historically, been a fairly risky procedure. Very often the risk was that the computer could hang/lock up. There might even be the possibility of hardware damage. (Note the technology warnings; the risks could be similar to the risks of hardware damage that may be common when testing systems.)

For pre-PnP computers (which are now considered to be very old), often there was no really safe way to reliably use software to communicate with hardware in order to perform a hardware detection procedure. This problem got better over time, especially after “PnP” (Plug and Play) technology started to become more commonplace.

A section about Detecting hardware provides these and many more details about the topic.

Connecting and using hardware
Internal slots

Unplug power from a computer before plugging a card into an internal slot! Some technicians have been known to attempt to plug a card into a slot of a motherboard which was turned off, but which was still physically powered. They were careful to not allow the card to touch any components other than the slot. The act of simply connecting the card to the slot has occasionally been known to cause the entire computer to start powering up (while the technician's hands were inside the case, and quite possibly still touching the card that had just been at least partially inserted). Clearly, this is dangerous.

Internal hard drives are not meant to be added or removed while the computer is turned on. There has sometimes been advice to perform a “hot swap”, possibly as a way of working around hard drive technology that serves as a sort of “key” to enable a certain hard drive to be used. People have had success using this sort of “hot swap” method. However, note that it may pose some danger to the person doing the work, more danger to the I/O board (motherboard) that the drive is connected to, and significantly more danger to the hard drive. (Clearly, for humanitarian reasons, the most crucial of those dangers is the danger posed to a human.) Therefore, the only safe recommendation to provide is to NEVER perform such a technique. (A safer technique might be to use something like a write-blocker with a power switch and a switch to turn off write-blocking functionality. This still might be using hardware in a way it wasn't designed for, but it stands a higher chance of providing connectivity cleanly, as all needed connectors may be connected before power starts to flow to the device, and it may pose less danger to human flesh.)

[#usb]: Universal Serial Bus (“USB”)
About USB

Windows.Microsoft.com Vista web page about installing a USB device notes, “If your USB device uses a power cord, you should connect the device to a power source and turn it on before connecting it.” This could be an important step, because a USB port will provide the device with some power. If the device needs more power than what the port provides, problems (possibly including damage) might occur if the device is given insufficient power. So, make sure that any device with a separate power connector has sufficient power being supplied before it is connected to the USB port. Some devices might be able to operate in a low-power mode if they can only draw power from the USB port. However, that is less common.

There is an official logo for USB. (There might also be more logos for things like more specific types of USB connectors.)

There are multiple types of USB connectors. Initially, and up through the 2010s, the most recognized kind of connector was the wider and shorter “USB Type A” connector. The “USB Type B” connector is more square. Both have four pins, and a converter may be used if needed. There have been other variations used during that time, such as “Mini USB” and the more common “Micro USB”. There have also been standardized variations of these which are slightly larger and add more bandwidth and/or more power. USB Type C is oval-ish and not much larger than “Micro USB”, but handles significantly more bandwidth and more power, and is famously “reversible” in the sense that plugging a USB Type C plug in upside down typically works exactly the same.

(Although USB Type C is widely marketed as having a reversible plug with no proper side up, allowing identical functionality if plugged in “upside down”, Pim de Groot's tweet, “Cursed USB-C 2.0”, showed a video of a circuit board with two lights. One light would show as green when a USB Type C plug was inserted, and if the same plug was inserted upside down, then the other light would show as green. This gets discussed further in the Hackaday.com article by Maya Posch, “Cursed USB[ Type]-C: When Plug Orientation Matters”.)

It seems that the term “USB” most properly refers to the protocol that devices use to communicate, rather than being primarily focused on the connector type. Using that definition, proprietary plugs can be USB even if they aren't using one of the common USB standards. (An example of such a “proprietary plug” would be the ports meant for the controllers on the original Xbox, which did accept standard USB devices (including some data storage devices) using the official Xbox USB Keyboard Adapter for Phantasy Star Online obtainable from Microsoft, and some unofficial adapters that did the same thing.)

Wikipedia's article about USB: section showing “Host interface receptacles” shows that the Micro-AB receptacle is more flexible, but has the rounded corners on the long side near the pins while the Micro-B receptacle (which might be more common) has the more rounded inner edge on the long side further from the pins. As those are opposite sides, one should not be too concerned about which inner connector edge has the rounded side.

There are multiple versions of USB, which may have some differences. Generally, the most noticeable/impactful of these differences is speed.

There are multiple official logos related to USB. (The “USB trident” is related to USB 2.0, as shown by PDF file about USB 2.0 icon.)

USB devices may get some power from the USB port. Some devices may need additional power; a smaller number of devices might support one mode that uses additional power for full functionality, and also support a low power mode that just draws power from the USB port. The understanding of the author of this text (which might be based on some very old, possibly wrong rumor) is that USB hardware, including a USB device and also a USB port, may be physically damaged if a USB device is unplugged while it is still drawing power from the port. The further understanding is that this is part of USB's design, so hardware does not violate any sort of USB specifications just for being susceptible to this sort of damage.

In practice, it is commonplace for people to yank out USB memory sticks, which may also pose a threat to the integrity of the filesystem. Whether or not it is really true that USB ports are highly susceptible to hardware damage may be a bit more questionable. NirSoft's USBDeview's history section, for version 1.92, states, “As opposed to Windows XP, Windows 7/2008/Vista doesn't turn off the USB device when you disable or 'Safely Remove' the device.” That sounds alarming, although the referenced MS KB 2401954 does explain that the software is still given a “Safely remove” command. What actually changed, according to Microsoft's article, is that (by default) the newer operating systems no longer effectively disable the USB port.

[#usbopsys]: Operating system support for USB
USB in Microsoft Windows
Some older operating systems

USB support may have first been properly added in Windows 98. See Microsoft KB 263218: General USB Troubleshooting in Windows 98, Windows 98 Second Edition, and Windows Me.

Windows 98 Second Edition
Decim's upgrade
Maximus Decim's Native USB (version 3.3) is an upgrade available for Windows 98 SE. This may also be available from Win98 USB Mass Storage Device Drivers (particularly the referred page that says, Win98SE USB Mass Storage Device Drivers).
Updating more
Win98 SE Updates + Fixes may have yet another update to install after Maximus Decim's drivers.
Win98 First Edition

Win98 USB Mass Storage Device Drivers has a hyperlink to Windows 98 First Edition USB drivers, which offers a version of Maximus Decim's drivers that “have been modified for Windows 98FE by” a user identified by the name “PassingBy”.


See: detecting USB devices


Use the “Safely Remove” graphical interface, which can be brought up by running:

rundll32.exe shell32.dll,Control_RunDLL hotplug.dll
Other/Misc info

NirSoft's USBDeview may support command lines. (However, the “System Requirement and Limitations” section has been known to say that a specific version of USB may not yet be supported. It may be worth checking if this is the case, particularly if there is a relatively new USB standard on the market.)

USB in Unix

There is a PDF file about a suspend and resume framework in OpenBSD (visible via Google Docs) which describes some of the early support/design. For instance, it mentions autoconf being involved, and what drivers may often get involved when a USB stick is inserted.

USB Ports

Calomel's guide to installing NUT provided some guidance used when researching this process. The guide has, as an early step, installing a package called libusb. Install that package if it isn't already installed. (For details on checking what packages are installed, and installing more packages, see software installation.)

The ports should be visible to the software. It may be helpful to check /var/log/messages and/or dmesg output. Great things to see are references to “usb” (e.g. “usb0”) and “uhub” (e.g. “uhub0”). (An example of some output is shown on Calomel's guide to NUT.)

To elaborate upon that a bit further, and quite possibly further than what is often necessary: according to a paper (called “OpenBSD's New Suspend and Resume Framework”, from March 11, 2011) at http://openbsd.org/papers/zzz.pdf, typically a USB hub (“uhub”) device connects to a USB controller (“usb”) which may often connect to a PCI bus, so “the connection path” after the USB controller may often be ehci, then pci, then mainbus (and then root). (The point of mentioning this connection path is that devices tend to be named after their connection, so the first detected USB hub may have a name such as “uhub0” and the next couple of detected USB hubs may have names such as “uhub1” and “uhub2”.) So, running commands like the following may make sense:

dmesg | grep -i uhub
dmesg | grep -i usb

If the ports are not visible to the computer, check that they actually function. Review BIOS settings to make sure they are enabled. Also, check cabling, as external USB ports will be fairly useless if there isn't a data cable connecting the USB port to the USB controller.

It is very possible that USB hubs may be daisy-chained; this may happen on a technical level if a device is using some sort of embedded controller, even if there is just one physical device connected to the USB port. The dmesg output may look something like the following real-world example:

uhub8 at uhub0 port 2 "Terminus Technology USB 2.0 Hub [MTT]" rev 2.00/1.00 addr 2
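A line like the one above encodes the attachment hierarchy. As a small sketch, standard text tools can pull out the parent hub and port number; the sample line below is a stand-in copied from the example above, not output from your machine:

```shell
# Stand-in dmesg line showing a hub (uhub8) attached to another hub (uhub0).
line='uhub8 at uhub0 port 2 "Terminus Technology USB 2.0 Hub [MTT]" rev 2.00/1.00 addr 2'

# Field 1 is the child device, field 3 its parent, field 5 the port number.
child=$(printf '%s\n' "$line" | awk '{print $1}')
parent=$(printf '%s\n' "$line" | awk '{print $3}')
port=$(printf '%s\n' "$line" | awk '{print $5}')

echo "$child is attached to $parent on port $port"
# prints: uhub8 is attached to uhub0 on port 2
```

The same field positions should hold for other “X at Y port N” attachment lines in dmesg output of this style.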
USB Devices

To get more information about what is plugged into each USB port, and which filesystem objects (e.g. /dev/usb0) refer to the physical ports that may have something plugged into them, list the details available from the following command:

usbdevs -dv
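The exact output format can vary between systems and versions. As a hedged sketch of how such output might be skimmed with standard tools, the sample text below is an invented stand-in (the controller path and device names are assumptions), not real usbdevs output:

```shell
# Invented stand-in for usbdevs -dv output; real output varies by system.
cat > usbdevs-sample.txt <<'EOF'
Controller /dev/usb0:
addr 1: high speed, self powered, config 1, EHCI root hub(0x0000), Intel(0x8086), rev 1.00
  uhub0
addr 2: high speed, power 100 mA, config 1, USB 2.0 Hub(0x0000), Terminus(0x1a40), rev 1.00
  uhub8
EOF

# Show which /dev node each controller uses:
grep '^Controller' usbdevs-sample.txt

# Show the driver instances attached to the detected devices:
awk '/^  /{print $1}' usbdevs-sample.txt

rm -f usbdevs-sample.txt
```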

A USB device will hopefully be detected, and an appropriate driver may then be used. The ugen driver may be used for most types of devices, with a notable exception (because there are many devices that fit this category) being mass-storage devices, which may use umass.

According to a paper (called “OpenBSD's New Suspend and Resume Framework”, from March 11, 2011) at http://openbsd.org/papers/zzz.pdf, a USB memory storage device will “likely be visible to the user as an sd” disk, so “the connection path” has sd connecting to scsibus which connects to umass which connects to uhub.

If there is an immediate plan for a device to be accessed at a USB port, then check permissions. If the planned username does not have the needed permissions, then consider adding the user to a group which is specifically designed for the purpose of allowing full permissions to the relevant USB devices. (This way, if any other user later needs to be able to use the same filesystem object, especially if that object is a controller and/or hub which may be used by multiple USB devices, then granting permissions may later be as easy as adding the additional user to the needed group.) Then, make sure that the group has the needed access (probably both read and write) to the filesystem objects, modifying attributes of the filesystem objects as required.
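As a sketch of that approach: the group name (usbusers), username (exampleuser), and device node (/dev/usb0) below are all hypothetical placeholders, and the demonstration uses an ordinary file as a stand-in for the device node so the permission change can be shown safely:

```shell
# Stand-in for a device node such as /dev/usb0 (hypothetical path).
node=./usb0-standin
touch "$node"

# Give the owner and the owning group read/write access; deny everyone else.
chmod 660 "$node"
ls -l "$node" | cut -c1-10
# prints: -rw-rw----

# On a real system (as root), the remaining steps might look like:
#   chgrp usbusers /dev/usb0             # hand the node to a dedicated group
#   usermod -a -G usbusers exampleuser   # add the user to that group
rm -f "$node"
```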

USB Length Limits

There have been some claimed limits to the USB cable lengths. Synopsys.com Blogs: To USB or Not to USB: USB 3.2 Cable Lengths... says, “The specifications don’t actually specific cable length. They specify the amount of signal loss through a “cable” that is allowable.” (Emphasis, via boldness, was added to the quote and not part of the original, cited text.) A loss measured as 6 dB is used as an example limit. The article notes, “The resistance of the cable can be reduced by having thicker cables (smaller gauges) or using materials with lower resistance. Most cable makers will minimize cost by using the thinnest copper wire they can and still make specification for the length. So a thick gold cable would be much more expensive, but it could be longer.”

Even if you don't use more expensive materials like gold, and limit the main connectivity material to copper, using thicker cabling (wire with a lower “gauge” measurement) could help. (Different wires within the cable may require different minimum thickness based on whether they are meant for data or power.)
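To make the effect of gauge concrete, here is a rough back-of-the-envelope voltage-drop calculation for a 5 meter copper cable carrying 500 mA. The resistance-per-meter figures are approximate reference values (assumptions on my part, not taken from the cited article), and the length is doubled because current flows out on one wire and back on another:

```shell
# Approximate voltage drop: V = I * R_per_meter * length * 2 (round trip).
# The per-meter resistances are approximate reference values for copper.
awk 'BEGIN {
  len = 5; amps = 0.5;
  drop24 = amps * 0.0842 * len * 2;   # 24 AWG (thicker wire), ~0.0842 ohm/m
  drop28 = amps * 0.2128 * len * 2;   # 28 AWG (thinner wire), ~0.2128 ohm/m
  printf "24 AWG: %.2f V drop\n28 AWG: %.2f V drop\n", drop24, drop28;
}'
# prints: 24 AWG: 0.42 V drop
#         28 AWG: 1.06 V drop
```

On a nominally 5 V bus, losing over a volt in the thinner cable illustrates why longer runs call for thicker conductors.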

There are some limits of common economic cables. These limits will depend on what type of USB connections are desired. The previously-mentioned article at Synopsys.com notes some cable lengths:

  • “When USB 2.0 launched in 1999 it maintained the 5 meter cable length.” (“maintained” suggests that earlier standards used the same length.)
  • “With USB 3.0 the cable length dropped to about 2-3 meters for 5 Gbps.”
  • “And with USB 3.1 it dropped to 1 meter for 10 Gbps.” That was using a single lane of traffic.
  • USB 3.2 added support for two lanes of traffic without requiring a different cable length. “USB 3.2 cables can be 1 meter because it uses 2 lanes of 10 Gbps.”


Although old, here's a reference with text originating from an official source: USB.org Archived by the Wayback Machine @ Archive.org: “USB Info: Frequently Asked Questions”, “USB Cables, Connectors, and Networking with USB” says, “A1: In practice, the USB specification limits the length of a cable between full speed devices to 5 meters (a little under 16 feet 5 inches). For a low speed device the limit is 3 meters (9 feet 10 inches).”

(This is sometimes referred to as 16.4 feet; 16 feet 5 inches is about 16.42 feet, since 5 inches is roughly 41.7% of a foot.) The phrase “In practice” was probably used since that text provided common usage scenarios, whereas the actual specification probably technically discussed signal loss rather than length.
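The conversion itself is simple arithmetic (the international foot is defined as exactly 0.3048 m), and can be checked quickly:

```shell
# Convert the 5 meter full-speed limit into feet and inches.
awk 'BEGIN {
  ft = 5 / 0.3048;            # 1 ft = 0.3048 m exactly
  whole = int(ft);
  inches = (ft - whole) * 12;
  printf "5 m = %.4f ft = %d ft %.1f in\n", ft, whole, inches;
}'
# prints: 5 m = 16.4042 ft = 16 ft 4.9 in
```

That 4.85 inches is the “little under 16 feet 5 inches” mentioned in the FAQ text.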

USB Hubs
USB 3.1 Gen 2 Type A Hubs

USB 3.1 Gen 2 is the last USB version to support a Type A connector (at least with its fastest connectivity methods). However, most USB 3.1 devices are either USB 3.1 Gen 1 (meaning they operate using the same connectivity as USB 3.0) or they use a USB Type C connector.

While a USB Type C connector and USB Type C port are nicer than their Type A variations, if a computer has a Type A port then using a Type A connector can be more convenient, especially if you want to minimize signal loss (possibly in order to use the longest cables that are usable).

Some heavy searching indicated that USB 3.1 Gen 2 hubs often don't get fed with a Type A connector. Only two viable options seem to have been found, and one of those seemed to fit the description only because it used a Type A to Type C cord while actually using a Type C connector.

The Inland IH8003 / Y-HB08003 adapter (UPC 6 18996 72485 7, MC 434720, 434720 SKU) comes with four USB 3.1 Gen 2 ports for devices. While its main feed is USB Type C, it comes with a USB Type C to USB Type A cord, so it is marketed as being available for USB Type A devices. It also comes with an optional plug to supply it with DC 12V, which can be helpful for having the USB hub power some hungry devices like data storage drives. This has been seen available at: Inland IH8003 USB 3.1 Gen 2 Type-A 4-Port Hub @ MicroCenter.com Computers & Electronics and at Inland IH8003 USB 3.1 Gen 2 Type-A 4-Port Hub @ CaptainComputers.com (both offering this for $33.99 in July of 2021).

There is also an option from Orico. The listing for the ORICO Powered USB 3.1 Hub M3H4-G2 @ Amazon.com states, “It supports connecting 4 peripherals simultaneously, and guarantee steady operation (This is a data hub, cannot be used for charging)”. If you know that you don't want to be trying to use this hub for charging, that warning might seem non-severe.

If searching for more alternatives, in addition to searching for Type A, make sure to be searching for USB 3.1 Gen 2 or searching for 10 Gbps. There may be some USB 3.0 devices identifying themselves as USB 3.1 when they are really just USB 3.1 Gen 1 (better referred to as USB 3.0), so simply searching for “USB 3.1” is not sufficient to exclude such devices.
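A sketch of such a search-side filter: given a list of product titles (the three listings below are made up for illustration), matching on “Gen 2” or “10 Gbps” excludes the Gen 1 devices that a bare “USB 3.1” search would admit:

```shell
# Made-up product titles standing in for marketplace search results.
cat > listings-sample.txt <<'EOF'
Hub A: USB 3.1 Gen 1 4-Port Hub, Type A
Hub B: USB 3.1 Gen 2 4-Port Hub, Type A
Hub C: 10 Gbps 4-Port Hub, Type A
EOF

# Keep only listings that explicitly claim Gen 2 speeds.
grep -Ei 'gen 2|10 ?gbps' listings-sample.txt
# prints the Hub B and Hub C lines only

rm -f listings-sample.txt
```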


Dignited.com article, “USB-C is not perfect. Top 5 issues facing USB-C connector” referenced a review by Benson Leung at Amazon.com which says a cable “seriously damaged the laptop computer” he used, causing “permanent damage” because “they completely miswired the cable”. “This is a total recipie for disaster and I have 3 pieces of electronics dead to show for it” ... “Needless to say, this cable is fundamentally dangerous. Do not buy this under any circumstances.”

[#portps2]: PS/2 ports

There are two types of PS/2 ports: a PS/2 mouse port and a PS/2 keyboard port. Both of the physical connectors are shaped identically, using the 6-pin Mini-DIN standard. The difference is that a PS/2 mouse port will typically use up IRQ 12, and PS/2 mice require IRQ 12. Although the PS/2 mouse port may have IRQ12 be disabled by a BIOS setting, the BIOS typically does not have an option to enable IRQ12 on any PS/2 port except for the one PS/2 mouse port. If a laptop has only one PS/2 port, it is likely a PS/2 mouse port.

A PS/2 keyboard will also work just fine in a PS/2 mouse port. However, a system with a PS/2 keyboard port will typically also have a PS/2 mouse port, and a PS/2 mouse won't work with the PS/2 keyboard port. The reason the mouse won't work in the keyboard port is because the mouse typically requires that IRQ12 is enabled for the PS/2 port it uses, and IRQ12 is not being used by the PS/2 keyboard port. The simple reason that the keyboard port doesn't use IRQ12 if there is an active PS/2 mouse port is because IRQ12 is already in use (by the PS/2 mouse port).
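On a Linux system, one way to see which IRQs the i8042 controller (the chip behind PS/2 ports) has claimed is to filter /proc/interrupts. The snippet below demonstrates the filtering against an invented sample, since the real file's contents vary per machine:

```shell
# Invented stand-in for two lines of /proc/interrupts on a machine with a
# PS/2 keyboard (IRQ 1) and a PS/2 mouse (IRQ 12); real content varies.
cat > interrupts-sample.txt <<'EOF'
  1:       9093   IO-APIC    1-edge      i8042
 12:     153642   IO-APIC   12-edge      i8042
EOF

# On a real Linux system this would be: grep i8042 /proc/interrupts
grep i8042 interrupts-sample.txt | awk '{print "IRQ " $1}'
# prints "IRQ 1:" then "IRQ 12:"

rm -f interrupts-sample.txt
```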

[#ps2prtpw]: Warning about hardware damage

It is critical to know that a PS/2 device should never be plugged into a PS/2 port that is powered on. (Typically, powering off a PS/2 port involves powering off the whole computer system.) This is because on rare occasions, plugging in a PS/2 device into a PS/2 port may permanently damage the port, rendering it permanently unusable. (This seems to be something mentioned more about keyboards than mice.) Since the port is typically part of the motherboard, replacing a damaged port typically involves replacing the whole motherboard. The proper way to plug a device into the port is to first shut down the system, and plug the device in while the system is powered off. Working around the reboot requirement is typically an available option on systems that support USB (by plugging in a USB device, such as a USB keyboard or a USB rodent or perhaps an adapter that plugs into USB and provides PS/2 ports).

To clarify, this warning is referring to an actual PS/2 port. If a device is plugged into a PS/2 port that is part of a USB adapter, there should generally be no danger to plug the USB adapter into a USB port.

There are even reports that the damage might extend beyond just the PS/2 port, and even cause an entire motherboard to be rendered useless. Some reading indicates possibilities of electrical shorts or even sparks. It seems like the problem may be that electricity ends up going somewhere that electricity is not designed to go, which is a type of uncontrolled situation that could have varying effects. (For instance, forum post of a user who reported issues with a power supply.)

Note that in many cases, people have plugged a PS/2 keyboard into a port of a powered-on machine. This often has no ill effects whatsoever; in some uncommon cases the system might receive a few (dozen) keystrokes, but then the keyboard might work fine. In some other cases (perhaps especially if no keyboard was plugged in while the BIOS was starting up), a plugged-in device might not work until the system is rebooted (or powered off and then back on), but there may have been no long-standing damage from the incident. However, because of the possibility of permanent hardware damage to the component which is not the keyboard (which is generally the more expensive component), this is just generally not recommended. As an example of some varying opinions, Tom's Hardware forum about PS/2 port damage has one person state, “in my 4-5 years there I build and repaired thousands of systems and hot plugged too many PS/2 devices to count and never witnessed ANY problems. And, neither did any of the other technicians.” However, multiple other people did chime in on that forum post with reports of physically damaged hardware. In another forum post about PS/2 damage (Sunner's January 8, 2003 2:31am post), Sunner stated the problem did “happen like once or twice when I worked in the support dept of a mid sized OEM.” The quality of individual machines may vary. In a forum post about PS/2 damage (prosaic's January 8, 6:49am post), prosaic notes that “Not all damage from this sort of excursion gives evidence of itself by frank failure”. In other words, electricity might do something undesirable and cause some damage other than a complete failure of the PS/2 port or motherboard. The user “prosaic” goes on to say that technicians “often figure that no harm is done, despite the fact that they should know better, because nothing catastrophic happens. They do their damage” and then do not handle (and might not even be aware of) long-term consequences.

The obvious solutions: If the PS/2 mouse port is damaged, then a USB mouse may be used instead of the PS/2 mouse. If the PS/2 keyboard port is damaged, one option may be to use a USB keyboard instead of a PS/2 keyboard. Now, one other solution that might require a bit of thinking to realize: Another option of how to handle a situation with a broken PS/2 keyboard port may be to use a USB mouse instead of a PS/2 mouse, and then (while the computer is turned off) plug the PS/2 keyboard into the PS/2 mouse port.


The “PC 97” standard, one of the versions of the PC System Design Guide by Intel and Microsoft, standardized the colors of green for mouse connectors, and purple for keyboard connectors. Prior to this standardization, Compaq systems had often used purple for mouse connectors, and orange for keyboard connectors. Therefore, remembering that green is for mouse is typically correct, whereas remembering the purpose of purple connectors may lead one to make an incorrect connection in some cases. (Other, older PS/2 ports would often not have special color-coding. The connectors might have been white/beige, most commonly. The ports themselves may have most commonly been black.)

Some ports may show themselves as being half-green and half-purple. These are actually PS/2 mouse ports, and so making them entirely green would be sensible. (Remember, a keyboard will work just fine in a PS/2 mouse port. All PS/2 mouse ports are able to effectively work as a mouse port or a keyboard port.) This is typically only seen on laptops which, probably for some space-related concerns, only have one PS/2 port. Hardware designers probably just figured out that average consumers would understand that the port can be used for both purposes if it is showing both colors.

Eventually USB became more commonly supported, and USB to PS/2 adapters became cheap enough that keyboard and mouse manufacturers would include a USB to PS/2 adapter with the keyboard and/or mouse. By doing so, the keyboard or mouse would be USB, and so the single product on a store shelf would work well with both USB and PS/2 systems. The USB to PS/2 adapters that came with one USB keyboard or mouse were not necessarily universal enough to work with a different model of USB keyboard or mouse, but they did work pretty reliably with whatever device the adapter was shipped with. (The only real problem with this method is that the length of the adapter meant there was an additional requirement for space behind the computer. However, that requirement was often not a problem, since it is typically good general practice to have plenty of space behind the system anyway, to accommodate good air flow.)

The PS/2 keyboard connector largely replaced the earlier 5-pin DIN (DIN 41524, 5/180°) connector. Some keyboards were released with these 5-pin DIN connectors and were bundled with adapters to go from the 5-pin DIN connector to the newer standard PS/2 keyboard connector, so keyboards with these adapters worked flawlessly with PS/2 connectors. That paved the way for newer motherboards to stop including the 5-pin DIN keyboard connectors.

The PS/2 mouse connector essentially replaced using DE-9 serial port connectors for mice (which were often called DB-9 connectors, as noted by text about “D sub DB shell”). Using a PS/2 connector was nicer because it used IRQ12, which had previously been largely unused, and using a PS/2 mouse port freed up the serial port for other devices. The serial ports were also a bit problematic because they used up an IRQ for a COM port, and that had a chance of conflicting with other devices that may use a COM port, such as an internal dial-up modem, which was the primary method of remote computer connectivity in the timeframe that PS/2 ports started to be popular. (Often that problem was worked around by having the mouse use COM1 and having the modem use COM2, but the chances of an IRQ conflict were even lower if the mouse didn't use either COM1 or COM2 because it was using the PS/2 mouse port's IRQ, IRQ12.) The other substantial benefit perceived with the PS/2 mouse port was that it was physically smaller, which was considered a benefit when hardware designers were designing laptops. Laptops were starting to become more common at about the same time that PS/2 mouse ports were. (Prior to that, issues like price and battery life had made laptops much more rare, even among business travelers.) There were also serial port to PS/2 mouse port converters (to allow a serial port mouse to work in a PS/2 mouse port), although they were not very commonly used.


The case is basically the metal and/or plastic that surrounds the motherboard.

Overview of case styles

Historically (in the late 20th century, and probably the first decade of the 21st century), there was little about a case that mattered a whole lot. Laptops were an obvious variation from typical desktop (horizontal) or tower (vertical) computers, with the most significant differences in technology being an embedded screen and that the whole computer could operate on battery power.

Many cases were designed around a common “form factor”, which mostly meant that the case had space and connections for popular sizes of motherboards. For businesses, servers were often designed to be mounted onto a standard “rack”. Some of those servers were 1U tall, some 2U, and some taller. The U came from the phrase “rack unit”; one rack unit is 1.75 inches (44.45 mm). A full-sized rack was typically 42U tall, though taller variants (such as 45U or 48U) also existed.
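Rack heights follow directly from the unit definition (1U = 1.75 inches), so converting a “U” count to physical mounting space is simple arithmetic:

```python
RACK_UNIT_INCHES = 1.75             # 1U is defined as 1.75 in (44.45 mm)

def rack_height_inches(units):
    """Total vertical mounting space for a given number of rack units."""
    return units * RACK_UNIT_INCHES

print(rack_height_inches(1))        # 1.75  (a thin "pizza box" server)
print(rack_height_inches(42))       # 73.5  (a common full-sized rack)
```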

Starting during the second decade of the 21st century, mobile devices started to become more popular. First, “smart phones”, which were phones that had enough circuitry to allow a person to perform tasks that other people used computers for, became quite popular with Apple's iPhone (which used an operating system that Apple created and named iOS), and subsequent competition came from phones using Google's operating system (which was named Android, and which Google bought and then enhanced). A key feature of these devices was the touch screen, which was just starting to become affordable. After phones became popular, “tablet” computers were released; they were similar to phones but lacked some key antennas and/or circuitry that would let them communicate with the towers operated by the wireless phone companies. Around this time, hardware enthusiasts also enjoyed smaller computers, like the Raspberry Pi.

Case features

Often, the most significant feature that makes one case actually technically superior to another is good handling of heat. Many computers generate enough heat that the computer could overheat if the heat is not properly dispersed.


One thing that can have a notable effect on the amount of heat in a system is how well the system is kept clean from build-up of dust. The effect was real enough and substantial enough that many companies were known to purchase cans of compressed air so that employees could clean systems. Full-blown air compressors could be more pleasant to use, but more costly to purchase initially, and more inconvenient to maintain and to move.


Be very wary of BES page: section about CPU heat's advice, “First switch on your vacuum cleaner, and while sucking the air around the heat sink using your left hand, blow the dust off with canned compressed air using your right hand. This way, you can remove the dust quite completely. (If you don't have canned air, you can use your mouth at your own risk. Be careful not to spit anyway.)”

That sounds effective for the goal of removing dust, but might be dangerous advice. (Presumably this is suggesting to use something that has a hose, like a “shop-vac”.) First of all, many sources advise that many vacuum cleaners generate substantial amounts of static electricity, which is highly likely to cause significant damage to electronics like those found inside a computer. Anti-static vacuum cleaners may exist (and may be the basis of the “Data-Vac” models?), but are pricey enough to not generally be worthwhile for home use (because it makes little sense to spend a huge chunk of money on a specialized vacuum cleaner, rather than using that money to just get a new computer, or using some alternate way of cleaning out dust).

Second of all, there may be risk of sucking up something unintended, like a jumper.

Compressed air

There are some things to know about these cans of compressed air. First, they should ideally spray just air, not liquid. There are also cans that may shoot out liquid, such as “contact cleaner”. Those cans are a different product; do not get them confused.

Second: the contents of the can may become liquid if the can is shaken or held at too much of an angle from its normal upright orientation. This can cause the can to spray liquid, which is VERY UNDESIRABLE. Many cans may start spraying liquid if they are tilted by about 22.5 degrees (a sixteenth of a circle). Try to keep the cans unshaken and upright.

Spraying for a length of time (several seconds) can cause the cans to become cold. This can be minimized by spraying in shorter bursts, although sometimes longer bursts may seem more effective in cleaning. Feeling coldness come from a can that was room temperature just a minute ago may seem nifty, but do not expose skin to the outside of such a can for a substantial length of time. Know that the coldness can actually be a medical risk, causing frostbite.

The cans do lose pressure fairly quickly. Once they start to seem weaker, set the can aside for some time (like 10-30 minutes?). It will then regain some of the pressure that it formerly had. However, doing this more than once (or perhaps twice, if a person is lucky) is often less effective, because the cans do eventually run out.

Such cans can be purchased for about $4. Using them for entertainment purposes can be dangerous, and rather expensive. It really is advised to just use them for their intended purpose. Trying to inhale this gas can be exceedingly dangerous; some cans have been known to say, “contains a bitterant to help discourage inhalent abuse.” (On a side note: some people have become familiar with inhaling other things, like helium from a helium balloon. This is potentially extremely dangerous; people have become unconscious and even died from such a thing. Wikipedia article on Helium: “Hazards” section makes reference to several deaths. StackExchange discusses the science.)

Air compressor

An air compressor that has remained unused for some time (overnight... maybe many minutes?) may accumulate a bit of condensation from water vapor. The air compressor might spit out that condensation in liquid form. The easy (and rather fun) way to resolve this is to just spray the air compressor into the air for a second or two; if a drop or two of liquid goes into the air, then that will likely be a non-issue. This advice may be assuming that the cleaning is done outdoors, which is often done because a ventilated area is needed for the dust that gets released by the cleaning.

Screws used for PC cases

Although it seems that IBM-branded computers may have originally used “standard” screws (meant for a “flat-head”-style screwdriver), at some time (by the mid 1990s, and probably earlier) that switched, and most computers could be opened with only one tool: a Phillips screwdriver (which handles the screws where the ridges on top form a rough cross shape).

Fancier systems would sometimes use screwless designs. They may or may not work well; at least one case is known to have a part that swings over the corner of a card in order to latch, and if the card has jumper pins there, then that mechanism won't work well. Some of the screwless designs may have held things less securely than typical designs that required screws. However, when screwless designs were designed well enough that they worked very well, they were quite nice. In some cases, the only drawback was that this was a bit of a luxury and so had a cost, but it could make life more pleasant when people were making changes to what parts were inside of a computer.

The topic of screws used in computers is covered further: screws in personal computers.


Information about keyboards is on the Keyboard page.

Testing hardware
See: testing hardware.
[#videoout]: Video Output
Text mode
(Not much information here at the moment...) (See also text-mode graphics.)
Graphics Platforms/Subsystems
[#txtmdgfx]: Text-mode graphics

Even in “text mode”, where “graphics” are drawn using pre-defined groups of pixels (each group is called a “character”, and the definitions are in a “character map”) instead of individual pixels, software which supported such interfaces would sometimes refer to such an interface as a “graphical” interface.
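As an illustration of the “character” idea: in the classic VGA color text mode, the screen is an 80x25 grid of cells, and each cell is stored in video memory (at segment 0xB800) as two bytes, a character code and a color attribute. A sketch that simulates that buffer with a plain bytearray:

```python
COLS, ROWS = 80, 25            # classic VGA color text mode
CELL_BYTES = 2                 # one character byte + one attribute byte per cell
vram = bytearray(COLS * ROWS * CELL_BYTES)   # stands in for memory at B800:0000

def put_char(row, col, ch, attr=0x07):       # 0x07 = light grey on black
    """Place a character cell at (row, col), mimicking text-mode addressing."""
    offset = (row * COLS + col) * CELL_BYTES
    vram[offset] = ord(ch)
    vram[offset + 1] = attr

put_char(0, 0, 'A')
print(vram[0], hex(vram[1]))   # 65 0x7
```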

Graphics hardware

A lot of early graphics hardware followed some well-known hardware standards. The most famous of these may be Video Graphics Array, which is more commonly known by its abbreviation, VGA, and its successor, “Super VGA” (“SVGA”). Earlier standards also exist. Newer standards, such as XGA, also exist. However, hardware that supported such newer standards would often be marketed as supporting Super VGA. In practice, the term SVGA basically refers to common SVGA resolutions (640x480 with 8-bit color, or 800x600 at any color depth) and also to any higher resolutions.
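These resolution and color-depth combinations map directly onto video memory requirements: one frame needs width x height x bytes-per-pixel. That arithmetic is roughly why, for example, 8-bit color at 800x600 still fit on a 512 KiB card while 1024x768 did not. A quick check:

```python
def frame_bytes(width, height, bits_per_pixel):
    """Memory for one full frame, ignoring any alignment padding."""
    return width * height * bits_per_pixel // 8

print(frame_bytes(640, 480, 8))    # 307200  (300 KiB)
print(frame_bytes(800, 600, 8))    # 480000  (~469 KiB; fits in 512 KiB)
print(frame_bytes(1024, 768, 8))   # 786432  (768 KiB; needs a 1 MiB card)
```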

Support for these standards is often built into operating systems. TOOGAM's Software Archive: Video Card Drivers may provide some drivers for some operating systems that don't support one of the standards. However, drivers that are distributed by the manufacturer of the video card may often support much greater speed, and better compatibility with certain software/platforms. For instance, a freely downloadable VESA driver for Win9x may support higher resolutions on many video cards, although it may be dreadfully slow and may not support commonly implemented DirectX APIs.

X Windows

Similar: See: Wikipedia's article on “Fresco (windowing system)”.

Basic functionality


Changing video mode resolutions


Ubuntu page about changing resolutions has some info, including some “Xrandr Graphical Front End GUI” options.

See also: Wikipedia article on Mode-setting.

Video acceleration
See: Wikipedia's article on Graphics hardware and FOSS.
[#libsdl]: Simple DirectMedia Layer (“SDL”) library

Available for multiple platforms. (Although the official logo includes the words “Simple Directmedia Layer” with the second “m” lowercase, even the project's main home page uses a capital M, as does Wikipedia's page about Simple DirectMedia Layer.)


Wikipedia's page for “Framebuffer” says “A framebuffer is a video output device that drives a video display from a memory buffer containing a complete frame of data.” The term also refers to the portion of memory that contains the complete frame of data. Wikipedia article on Linux framebuffer.
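The “portion of memory” sense of the term can be sketched as a flat byte buffer plus a rule for locating a pixel: offset = y * stride + x * bytes_per_pixel, where the stride is the length of one scanline in bytes. A minimal simulation (the 320x200 geometry is just a familiar example):

```python
WIDTH, HEIGHT, BPP = 320, 200, 1     # 1 byte per pixel (8-bit indexed color)
STRIDE = WIDTH * BPP                 # bytes per scanline (no padding in this sketch)
fb = bytearray(STRIDE * HEIGHT)      # the "framebuffer": one complete frame in memory

def set_pixel(x, y, color):
    """Write one pixel using linear framebuffer addressing."""
    fb[y * STRIDE + x * BPP] = color

set_pixel(10, 5, 0x0F)
print(fb[5 * 320 + 10])              # 15
```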

OpenBSD FAQ 11.1.2 shows there is no framebuffer console driver. “Some operating systems provide this, but there is not currently one for OpenBSD, nor is there much interest among developers for one.”

Of possible use: OpenGL Frame Buffer Object (FBO)

SVGAlib (“Linux SuperVGA Graphics Library”)
A neat little trick for those who enjoy “shooter” games involving moving around in 3D space that is rendered on a 2D display: DOOM (and Quake?) could be played with the graphics card being in text mode.
Wikipedia's page for “Quartz (graphics layer)”
[#opengl]: OpenGL
A widely supported, cross-platform graphics API. (For similar options, Wikipedia's page of alternatives may provide some resources.)

Video Electronics Standards Association was associated with video compatibility. There was a standard called VBE (VESA BIOS Extensions).

SciTech Software released software called UniVBE (Universal VBE) and SciTech Display Doctor for DOS and OS/2. TOOGAM's Software Archive: section about VESA may have some further references.

Intel info about a graphics chip notes “XFree86 and most Linux distributions also provide a VESA/VBE driver which is compatible with these graphics controllers.”

[#mesa]: Mesa
freedesktop.org wiki's page on Gallium describes this as a platform for graphics drivers, and that this platform has flexibility/portability advantages over older platforms such as Direct Rendering Infrastructure (DRI).
Mesa 3D

In addition to the above standards, which may be more hardware-specific and more compatible with various software, another common approach for software compatibility may be to use software that acts as a standardized platform/API. The following section(s) may be fairly specific to a single operating system platform.

Microsoft Windows
[#mswnif2d]: Simple 2D interface
Changing resolutions
Display Settings

The most commonly known method is to visit the “Display Settings” page. One way to do that is to run:

rundll32.exe shell32.dll,Control_RunDLL desk.cpl,,3

Other ways to get there are to use a Control Panel applet, or to right-click on the desktop background and choose an option near the bottom of that shortcut/context menu:

  • In Win Vista: Start, Control Panel, (if in classic view) Personalization. Or, right-click on the desktop background, and choose “Personalize”. Once at the “Personalization” screen, choose “Display Settings”.
System Tray Icons

Microsoft has released some software that shows up in the system tray and allows the video mode resolution to be changed. The software was called QuickRes, and was part of the collection of software called “Microsoft Windows 95 Power Toys”. (Microsoft Windows 95 Power Toys Readme describes version 2.1 as including bug fixes, changing a UI, and not expiring.) This software was also mentioned in Microsoft TechNet page about Windows 95 Resource Kit Utilities, and is referred to by the name “Quick Resolution Changer”. MS KB Q282436: The Quickres Utility Is Not Available in Windows XP notes that “the Display Properties” may have had a “show settings icon on taskbar” option. “This option is not available in Windows XP because Windows XP does not include the Quickres utility.”

Video card manufacturers/distributors may provide some software that can run in the system tray and may be able to change video modes.

Command line options

The consensus seems to be that there isn't a way to instantly change the video mode simply by using a command line program that comes with Windows. e.g. Ars Technica OpenForum post about changing resolutions refers to multiple solutions involving third party software, including at least some of the following options.

Warning: Software listed here may include some third party options which have not been heavily used. Inclusion in this list is not meant to suggest that the software is non-malicious, or even safe.

  • Microsoft's QuickRes: Rob Vanderwoude's notes on changing screen resolution note a command line like “ RunDLL DESKCP16.DLL,QUICKRES_RUNDLLENTRY 800x600x8 ” (for 8-bit color on an 800x600 display). (Note: Rob Vanderwoude is of no known relation to the creator/founder of the ][CyberPillar][ website, Scott Conrad VanderWoude. Also, Rob Vanderwoude gives credit to William Allen for posting this onto the alt.msdos.batch Usenet group.) This might only be available for Win9X systems that have QuickRes already installed.
  • Source code Copyright © Herfried K. Wagner
  • Software discussed by alter.org.au: vmctl, the uictl software which provides additional functionality, Personal Display Settings for Windows
  • NirSoft's NirCmd (which is freeware that provides some additional (and unrelated) functionality)
  • MultiRes may support a command line interface as well as providing an icon for the system tray (similar to what Microsoft's QuickRes does).
  • Anders Kjersem's freeware called QRes
  • “Social” section of TechNet: post regarding Display Settings says “The only way to change the display resolution in Server Core is by modifying the registry.” The post notes, “If you are just using the Standard VGA Adapter”... so perhaps this is not as valid when using other display drivers. Note also that the post states, “You will need to logoff/logon for the changes to take affect.” It looks like the only real advantage to this inconvenient method is that it might be available without using programs that need to be downloaded. First, a Display GUID needs to be obtained. Then, the example provided looks something like:

    Reg add HKLM\SYSTEM\CurrentControlSet\Control\Video\DisplayGUID\0000 /v DefaultSettings.XResolution /t REG_DWORD /d 1024

    and similarly for the vertical resolution (DefaultSettings.YResolution).

Seeing resolution settings

Besides the above steps for changing resolution, the following steps may be methods to display the current resolution:

  • Microsoft's “Hey, Scripting Guy!” Blog: page about screen resolution notes,

    For better or worse (and yes, we agree that this qualifies as “for worse”) there's no built-in way to change the screen resolution using a script. Sorry.

    So, even using a custom script (using VBScript), there may be no “built-in” support for making changes. However, getting the resolution may be a bit easier than trying to make the change. See:

    WMIC PATH Win32_DisplayConfiguration Get BitsPerPel,DeviceName,DisplayFrequency,PelsWidth,PelsHeight
    WMIC PATH Win32_DisplayConfiguration Get /ALL
    WMIC PATH Win32_DisplayConfiguration Get /?

    Since these examples are using WMIC, a natural and correct conclusion would be that this is using WMI and that there are various other approaches. See: Microsoft's “Hey, Scripting Guy!” Blog: page about screen resolution.

    Note: On a laptop, those WMIC commands seemed to only show the resolution of the main screen (built into the laptop), and not a secondary display (using an HDMI cable).

    The following showed only the HDMI output (and not the laptop screen's resolution).

    WMIC DESKTOPMONITOR Get Caption,Description,DeviceID,MonitorManufacturer,MonitorType,Name,ScreenHeight,ScreenWidth,Status

There may be some option(s) related to system acceleration. This can help if acceleration seems to be causing problems, and sometimes reducing the acceleration may cause no noticeable reduction in performance.

Note: If acceleration seems to be problematic for a specific application, check the program to see if a menu option can disable acceleration for just that one program. Doing this has been known to help achieve useful results when trying to copy graphical data (so the data may be pasted in another application). (Notably, this has been known to work for Windows Media Player, a.k.a. WiMP. When using the “Print Screen” key to capture the program's output, pixels that were part of a video affected by being “accelerated” may end up entirely black when copied. Simply disabling the unneeded acceleration seemed to cause no noticeable negative consequence, but made copying work well.)

MS KB 263391 (as archived by the Wayback Machine @ Archive.org) notes going to Start, Control Panel, System (is this simply sysdm.cpl?), “Performance” tab, and Graphics. (Positive effects may be noticed by moving the hardware to the “Basic” acceleration setting, which is just one notch to the right of the “None” setting that is all the way to the left.)
Newer Microsoft Windows

In some newer versions of Microsoft Windows, wikiHow.com: “How to Turn Off Hardware Acceleration” notes, “This option may not be available on newer computers,” and later, “Not all computers will support this. Most newer computers using Nvidia or AMD/ATI graphics cards will not have the ability to change the amount of acceleration through. These options are typically only available on older computers or computers using onboard video.” That page then cites EightForums forum: Turning off hardware acceleration.

The wikiHow page had directions for both the video/display card/adapter, and the monitor. In both cases, finding the relevant slider bar involved going to a tab called “Troubleshoot”. However, on some newer computers, that might only lead to a button that leads to a greyed-out slider bar, or that button itself might be disabled, or the entire “Troubleshoot” tab might not even exist.


In Microsoft Windows 10, when using a display with a sufficiently high DPI (“dots per inch”), Microsoft Windows may pop up a notification asking if you'd like help with “Blurry” applications. Based on TenForums: How to Turn On or Off Fix Scaling for Apps that are Blurry in Windows 10, it looks like that may start with Windows 10 build 17063, and be less likely to happen starting with build 18277 (19H1), since the setting became the default.

That same TenForums: How to Turn On or Off Fix Scaling for Apps that are Blurry in Windows 10 also provided a number of additional details. Based on those details, the following information is provided:

  • Reg QUERY "HKCU\Control Panel\Desktop" /v EnablePerProcessSystemDPI
    Reg ADD "HKCU\Control Panel\Desktop" /v EnablePerProcessSystemDPI /t REG_DWORD /d 1
    • “1 = On”, and “(delete) = Off”, so apparently removing this setting just involves deleting this registry value...
  • Reg QUERY "HKCU\Software\Policies\Microsoft\Windows\Control Panel\Desktop" /v EnablePerProcessSystemDPI
    Reg ADD "HKCU\Software\Policies\Microsoft\Windows\Control Panel\Desktop" /v EnablePerProcessSystemDPI /t REG_DWORD /d 1
    • HKLM was also shown as an alternative
    • (Actually, on the web page that this information was found on, the name of the key ended with “Desktopr”, but that is presumed to simply be an error.)
    • values included: “(delete) = Default”, “0 = Disable”, “1 = Enable”
  • Reg QUERY "HKCU\Software\Policies\Microsoft\Windows\Display" /v EnablePerProcessSystemDPIForProcesses
    Reg QUERY "HKCU\Software\Policies\Microsoft\Windows\Display" /v DisablePerProcessSystemDPIForProcesses
    Reg ADD "HKCU\Software\Policies\Microsoft\Windows\Display" /v EnablePerProcessSystemDPIForProcesses /t REG_??? /d ???
    Reg ADD "HKCU\Software\Policies\Microsoft\Windows\Display" /v DisablePerProcessSystemDPIForProcesses /t REG_??? /d ???
    • The documentation identified this as a string type (so presumably REG_SZ or REG_MULTI_SZ), and the only option noted was “(delete) = Default”.

Besides those registry entries, there were directions for using the graphical interface, or using “Local Group Policy Editor” to go to either “Computer Configuration” or “User Configuration”, and then the screenshots showed the tree view of “Administrative Templates\System\Display”, “Configure Per-Process System DPI settings”.


DirectX is basically a software package that supports multiple technologies including DirectDraw, Direct3D, DirectSound, and DirectInput. (Not all of those mentioned are graphics standards, but they can be useful when making interactive multimedia applications.)

The DirectX software package, and platform/specifications, are fairly targeted to the Microsoft Windows platform. (There have been some attempts to use DirectX on other platforms: Slashdot article commentary #42759545 and sub-comments mention some options.)

QuickDraw (and “QuickDraw 3D”)
QuickDraw: 2D (see Wikipedia's page about QuickDraw), “QuickDraw 3D” (Wikipedia's page about QuickDraw 3D)
Slashdot commentary: Microsoft Phases Out XNA and DirectX? : comment # 42759127 notes that there have been multiple application programmer interfaces, including “GDI, GDI+, WPF 2D/3D, DX, MDX, XNA”.
Other graphics approaches

Hardware acceleration had its birth in various standards. The following may largely be a trip down memory lane, but there is a point: There were several standards that were specific to hardware vendors before some newer platforms became more standard.

This section may need to be cleaned up a bit (including adding hyperlinks to some of these old standards).

S3 Virtual Reality Graphics Engine (“S3 ViRGE”)
(Wikipedia's page on “S3 ViRGE”: section about “Support” describes some software that supported that S3D hardware accelerated graphics platform. Non-supporting software could actually run slower when using the hardware-specific coding. Wikipedia's page on “S3 ViRGE”: section on Performance notes, “While revolutionary in delivering an affordable 3D accelerator with good quality 2D performance, the ViRGE earned the unofficial title as the world's first "graphics decelerator" due to its abysmal 3D performance.”)
S3 Savage
Wikipedia's article on the “S3 Savage” line of cards: section on the “Savage4” card notes “this old card can do "Direct Rendering" in Unix and Linux operating systems using the "savage" driver. This opens the possibility of composite rendering” options.
Rendition's Vérité standards

Vendor-specific standards were Speedy3D (API for DOS) and RRedline (for Microsoft Windows).

Wikipedia's article on the Rendition company: section about the “Vérité V1000” has glowing quotations of John Carmack. Specifically, the three quotations are: “We at id have been fans of the Vérité architecture since we first saw the spec, several months back.” “Now that we have some experience with the chip, we're even more pleased with it; in fact, it's our clear favorite among 3D accelerators.” And finally, “Vérité will be the premier platform for Quake.” Those were some powerful words during the timeframe between the release of the popular game DOOM and the release of the upcoming game Quake. There was, indeed, a Vérité-specific release of Quake for DOS, titled VQuake. However, the Wikipedia article goes on to cite the book “Masters of Doom” (written by David Kushner), stating “Carmack cited bad experiences with programming the Vérité as the reason for iD's shift away from proprietary APIs toward the industry-standard OpenGL.”

Wikipedia's article on the Rendition company: “Downfall” section mentions sabotage “- the result was that the chip couldn't be fully clocked.” The article goes on to say, “Today the Rendition brand exists only as the value line of RAM by Micron Technology's consumer memory division, Crucial Technology.”


Wikipedia's article on the Rendition company: section titled “Dreamcast” notes that 3dfx was used by the Sega prototype named Blackbelt, though this prototype was rejected in favor of a competing prototype named “Katana” which eventually moved beyond the prototype stage and went on to become known as the “Dreamcast”.

“3Dfx Interactive”'s “Voodoo” graphics chipset, Glide API ( Wikipedia's article on 3dfx: “Glide API” section, Wikipedia's article on “Glide API”) is an interface for 3dfx hardware. Among emulation fans, this was famously used by UltraHLE (see Wikipedia's article on UltraHLE).

Wikipedia's article on 3dfx Interactive: section on Voodoo2 notes “The Voodoo2 required three chips and a separate VGA graphics card, whereas new competing 3D products” could use a single chip that resulted in lower manufacturing costs. If the single chip solution also provided higher performance, that naturally ended up becoming difficult for 3dfx. Wikipedia's article on 3dfx Interactive: section about the company's “Cause of decline” notes some difficulties by the company. Wikipedia's article on 3dfx Interactive: section called “Acquisition and bankruptcy” notes creditors initiated bankruptcy proceedings. “3dfx, as a whole, would have had virtually no chance of successfully contesting these proceedings, and instead opted to be bought by Nvidia; ceasing to exist as a company.” “The resolution and legality of those arrangements (with respect to the purchase, 3dfx's creditors and its bankruptcy proceedings) were still being worked through the courts as of February 2009[update], nearly 9 years after the sale.”

Wikipedia's article about the Allegro library: “History” section says, “Note that, combined with Glide and MesaFX (using 3dfx hardware), AllegroGL is one of the few available opensource solutions for hardware accelerated 3D under DOS.”

nVidia's “Real-time Interactive Video and Animation accelerator” (abbreviated “RIVA”) “TwiN Texel” (abbreviated “TNT”)
nVidia's RIVA TNT products represented yet another standard that competed with others. Wikipedia's article on “RIVA TNT”: “Overview” section notes, “After all, unlike the rest of the competition, Nvidia had come close to the Voodoo2 in performance in some games, and beaten it in 32bit image quality.” Later, nVidia's products started using the brand name “GeForce”.

Allegro web site: hyperlinks to other projects (“Libraries”) section may list some other options. Allegro may be an older library with support for DOS and other platforms, with SDL supporting many newer platforms.

Other standards/implementations that may exist may include:

Wikipedia's article on the Rendition company: list of competing chipsets

Wikipedia's page on Quake 1's engine: section called “Hardware 3D acceleration” cites VQuake and other supported chipsets.

General/Kernel Graphics Interface

See: Kernel Graphics Interface

Wikipedia's page on “General Graphics Interface” says “The project was originally started to make switching back and forth between virtual consoles, svgalib, and X subsystems on Linux more reliable.” Whether that text was meant to refer to the idea of changing a running program from one such interface to another, or if switching back and forth just meant being able to run new instances of programs in one environment or another with minimal effort, the concept of the former is appealing. The GGI Project's main page states that “GGI no longer aims to manage direct access to graphics hardware”.

(Inclusion of this standard in the list is not being done because of known widespread use, but rather as an excuse to proceed with discussing the theoretical concept, of being able to move a program from one standard display technology to another. For text programs, such an idea may be somewhat implementable by using a terminal multiplexer, and then attaching to the terminal multiplexer in different environments. However, there may be some more flexibility that could be obtained if software used a generic output method, and had the ability to switch actual video output methods without needing to restart the program.)
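That theoretical concept can be sketched in code (an entirely hypothetical illustration; the class names below are invented and GGI itself does not work this way): a program draws only through an abstract surface, so the concrete output backend can be swapped at runtime without restarting the program.

```python
class Backend:
    """Abstract output method; concrete backends implement draw_text."""
    def draw_text(self, s):
        raise NotImplementedError

class ConsoleBackend(Backend):
    def draw_text(self, s):
        return f"[console] {s}"

class X11Backend(Backend):
    def draw_text(self, s):
        return f"[x11] {s}"

class Program:
    """Draws only through the abstract interface, so the output
    technology can change underneath it mid-run."""
    def __init__(self, backend):
        self.backend = backend
    def render(self):
        return self.backend.draw_text("hello")

p = Program(ConsoleBackend())
print(p.render())            # [console] hello
p.backend = X11Backend()     # "switching" output methods without a restart
print(p.render())            # [x11] hello
```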

[#mltvidou]: Multiple video output displays

Outputting video to multiple displays can be very simple, or extremely challenging (or impossible), depending on various factors like what hardware and software is being used.

Using multiple displays in Microsoft Windows

Although multiple VGA-or-better cards were supported as of some version (Windows 98 introduced multiple-monitor support), the task was often very prone to not working well, if at all, prior to Windows XP.

Windows XP

With Windows XP, the operating system typically did support this.

Compared to earlier operating systems, Windows XP has many more success stories of using multiple displays effectively, such as using multiple displays to support a desktop that is extended to use the pixels of each display.

The only real remaining sad part about Windows XP's support for multiple displays is that it seemed quite driver dependent. For instance, software may be able to switch operations from duplicating a screen onto multiple display devices, to extending a desktop to offer a larger desktop that is essentially stretched over multiple monitors. That's great. The downside is that the typical interface to do this is often found in drivers coming from the hardware's manufacturer, and not a standardized interface provided by Microsoft.

MultiMonitorTool (by NirSoft) may be helpful. (e.g., see eadmaster's answer to his own SuperUser.com question.) The NirCmd command may also support this (see: “ monitor off ”, or “ setprimarydisplay ”).

DualMonitorTool 1.4 documents Dismon*. (There are newer versions of the software.)

Newer Windows (7?)

(This might have been introduced with Windows 7. Or Vista?)

In Windows 7 and newer, holding the Start key and pressing P will end up running a program. This may be displayswitch. According to information obtained by a forum post by Shawn, these command lines are supported:

  • /internal: use primary display (only)
  • /external: use secondary display (only)
  • /extend: extend desktop to use two displays
  • /clone: have both displays show the same thing
  • (no parameters: causes the software to display a graphical interface)

Why Start+P? People giving presentations would often use this to set how a laptop interacts with a projector. So P presumably stands for “presentation”, or “projector”.

The useful site ss64.com notes, “netproj” as well as “displayswitch”. (ss64.com run)

Multiple displays in DOS

This was not widely supported. However, perhaps the most widely supported option is to use a VGA card and an EGA (or older, e.g. CGA) card in the same computer. For instance, this was supported by Borland Turbo C++ 3.1 (for DOS), if I recall correctly.

This couldn't be done with two VGA cards, because the two VGA cards would both try to use the same I/O port/address.

As an example of how uncommon it was for a single computer to use multiple displays: early versions of a game called DOOM supported using multiple displays, but it involved using multiple computers (and IPX networking) to pull off this feat.

[#elecpowr]: Power
[#psu]: Power Supply Unit(s) (“PSU”)

Power supplies have been created with varying degrees of quality. Poor power supplies are a very unfortunate thing. A very bad power supply might refuse to power on. Other power supplies may provide (extremely) low quality power which can reduce the life span of other components.

HardwareSecrets.com article 410: “Why 99% of Power Supply Reviews Are Wrong” (old URL as archived by the Wayback Machine @ Archive.org) starts out with some warnings about information being published. In particular, part of the first paragraph notes, “contrary to other hardware parts like CPUs, motherboards and video cards, one must have deep electronics knowledge in order to test a power supply. Since most reviewers are simply users with a above-the-average knowledge in computers - but not in electronics - almost all PSU reviews posted on the web are completely wrong and they do more harm than good,”

Watts are good

A power supply with a higher amount of maximum output will generally indicate a higher quality device. Even if the power supply is capable of producing more electricity than what is needed, this is a good thing and not a bad thing. Decent power supply units do not tend to fry devices by actually forcing devices to take more electricity than what the devices request. As a generalization, a power supply unit capable of outputting a lot of power may be a unit that was manufactured using some high quality components, and so it may have some of the better power efficiency numbers available.

Quality is worthwhile

A person with a very limited budget (perhaps a young person who, as a legal minor, has not yet had much legitimate opportunity to acquire substantial amounts of money) might be tempted to place dollars elsewhere. A video card that is 30% better might result in a more pleasurable experience, while a power supply unit that is 30% better than the minimum required might provide... no superior experience that is easily and instantly detectable by an end user. So, there seems to be an incentive to not direct limited budget dollars toward a power supply. However, low quality power supplies have rendered systems inoperable on many occasions.

Generally every device inside a computer, and possibly many devices outside of the computer case (such as a keyboard and a mouse) will be using power that ends up going through this power supply. A malfunctioning power supply that delivers bad power could shorten the lifespan of other equipment. The decrease in lifespan may be substantial (20%?) or complete (100%, by instantly frying devices).

The power supply units have sometimes been compared to a heart. A heart gets blood to circulate to where it needs to go, and a power supply unit feeds electricity, which is essentially the “lifeblood” of electrical devices, to the devices where it needs to go. The difference is that the power supply unit also performs basic roles of liver and kidneys: ensuring the quality of what gets circulated. Spending a whole lot on a power supply might provide no immediately perceived benefit, but spending way less can cause equipment to not function. If a person is lucky, perhaps the computer equipment will simply refuse to do anything, and not work until some decent power is provided. In less fortunate cases, usage of an inferior power supply might be permanently destructive.

Make sure that quality is part of any purchase.


For a typical Power Supply Unit designed for an ATX form factor, or the older AT form factor, power supplies should be boxes, which are the three-dimensional equivalent of rectangles. Smaller units, like those for Nano-ITX boards, may be an entirely different, and much smaller, design. However, for a regular PSU designed for full-sized desktop and tower systems, the typical shape is rather boxy. With this traditional shape, each side should be flat. If “flat” does not describe the shape of a power supply unit (e.g. if a side looks like it is bulging), chances are that the PSU is damaged or, less likely, that it is made of fairly flimsy metal (which also isn't a very promising sign), or worse, both. In such a case, see the “Power supply shape incorrect” section in the Troubleshooting section.

Make sure it supplies the needed plugs; if not, see if there are adapters to address the problem.
A power supply with higher watts is likely better. Having higher efficiency is a good sign.
The Trusted

Some power supply manufacturers are reputable: examples include Antec and Thermaltake USA (sometimes abbreviated “Tt”). (Both of these companies are known for quality, even if pricey, power supplies. Both companies are also known for fancy, if pricey, computer cases.)

Also, Corsair Components seems to have entered the business of providing power supplies. The Corsair name had received some amount of recognition for having a good quality product for RAM created by Corsair. (Like many other opinions about high quality, there are also some dissenters.)

APC Magazine: Best High-End Gaming PC buildable today gives glowing praise for another contender: “Enermax have been around for a long time, and if you haven't heard the name, you should look them up. Their PSUs are well known for their reliability and longevity” ... “and the Revolution 87+ line's the most efficient on the market, with 1,000w listed running 87-93% efficiency at 20-100% load.” APC Magazine: Most Extreme PC buildable today, another article on the same site, notes the Enermax MaxRevo 1,500W featured “1,650w max. output and 94% efficient”. Since capacity for extra watts is a good thing, it is comforting to see a review note that the name of the product is “1,500W” but the reality is capable of outputting 10% higher.
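
To illustrate why those efficiency figures matter, the wall-side draw can be computed from the output wattage and the efficiency. (The function names below are just for illustration; the numbers are plain arithmetic, not measurements of any particular unit.)

```python
def wall_draw_watts(output_watts, efficiency):
    """Watts pulled from the wall to deliver output_watts to the components."""
    return output_watts / efficiency

def waste_heat_watts(output_watts, efficiency):
    """The difference is dissipated as heat inside the PSU."""
    return wall_draw_watts(output_watts, efficiency) - output_watts

# Delivering 1,000 W at 90% efficiency pulls about 1,111 W from the wall;
# at 87% efficiency, about 1,149 W, so roughly 38 W more becomes heat.
```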


Some manufacturers have obtained reputations for being unreliable. Until they alter their practices enough to achieve superiority long enough to generate a different reputation, the wisest approach may be to not entrust these manufacturers with the electrical health of critical system components.

If an informational resource, filled with solid technical details, spreads information about a company, the results might be rather damaging to a manufacturer. This seems like quite a condemning penalty for a small number of incidents, such as a single bad power supply made by a company that creates many copies of the power supply. So, the inclusion of such a list is something that has only been offered after some thought was put into the subject.

An article that is hyperlinked later, an independent review by the JonnyGuru.com website, indicates that some manufacturers have obtained some really bad reputations on a wider scale. Combined with experience of multiple failures from one such company, a different reality became clear: some companies may have a visible presence because of their skill in marketing products at low cost. However, their history has shown a willingness to make a buck by passing on an inferior product. Such a practice is likely to cause hardware failures and disappointments.

Companies utilizing such a practice should not just be tolerated and ignored. The responsible thing to do is to share warnings when this is due.


Often offered as a low cost alternative. JonnyGuru.com's “The Bargain Basement Power Supply Roundup Review” starts off with a pitiful Chieftec. The wiring of that PSU is so thin that it couldn't possibly handle the amount of current promised by the power rating claimed by the unit. The power supply failed in a test that used less than a third of the amount of power that was claimed by the stickers on this PSU.

Is it really worthwhile to spend less money on a power supply so that more money can be spent on a faster video card (which will likely have higher power requirements), just so the power draw can threaten the ability of other components to remain in working order? Is the pursuit of a faster video card worth threatening the safety of that very same video card, as well as other components such as a data storage device (an SSD, or perhaps a “hard drive”)?


Is this manufacturer named Po-Work? Apparently not: The label (as shown on the review site about to be mentioned) may show “POW” with inverse colors from “ORK”, suggesting the syllable separation is in the very middle of the name.

JonnyGuru.com's review of a Powork PW-650 notes a hard time trying to make progress reviewing the product instead of laughing at the name. (However, the actual product wasn't given praise, either.)

For more losers, see the rest of the power supply units described by JonnyGuru.com's “The Bargain Basement Power Supply Roundup Review”.


Again, keep in mind the HardwareSecrets.com article 410: “Why 99% of Power Supply Reviews Are Wrong”, which starts out with some warnings about information being published.

Printer-friendly version of JonnyGuru.com's “The Bargain Basement Power Supply Roundup Review” (related to JonnyGuru.com's “The Bargain Basement Power Supply Roundup Review”), Xbit Labs review of some Chieftec power supplies (conclusion page) (earlier pages show some photos), Review on Tom's Hardware site, by Frank Vö, called “Inadequate and Deceptive Product Labeling: Comparison of 21 Power Supplies”.

In a pinch, to compare two power supplies with seemingly identical specifications but different manufacturers, and if research doesn't seem to be an option (because of a combination of factors like a time crunch, a need to get it up and running, a store that's about to close, etc.), the heavier one probably has more resistors and capacitors and so forth, and is likely the better one to go with. It might also mean that the manufacturer wasn't trying to cut corners by using flimsy metal that is light weight and easier to ship. (Although not telltale, that's probably not a bad sign about the manufacturing process.)
Power cord details
Power cord connectors

On the external side of a “power supply unit” of many computers there may be a three-pronged port which is implemented using a standard shape defined by the IEC 60320 C14 standard. The cords that plug into such a port utilize the IEC 60320 C13 standard. These are often called C14 ports and C13 ends of power cords/cables. Many servers will have multiple power supplies for redundancy, so either or preferably both of the C14 ports can be plugged in. (If both are plugged in, and one power supply becomes problematic, many servers will allow that power supply to be shut off and replaced while not rebooting the computer, because the other power supply can keep things going.) Many laptops may come with a power cord which leads to a power “block”/“brick” which then has a C14 port (and then people can use a standard C13 cable).

Since different parts of the world use different connectors for a standard wall outlet, there are different types of C13 cables used throughout the world. The term C13 refers to the end that gets plugged into the C14 port. The other end of a C13 cable will typically plug into a standard wall outlet.


AWG, which stands for “American Wire Gauge”, uses smaller numbers for thicker wire. So a 14 AWG cable is thicker than a 16 AWG cable. Gauge requirements and/or recommendations may vary.

One professional indicated that 16 AWG might be fine for a typical computer desktop using a PSU of less than (or maybe equal to) 700 Watts. However, some situations may provide good reasons to use thicker-gauge cable, such as when providing power to devices that require more power, and/or when more critical/centralized infrastructure/components get used, and/or situations where there may be multiple devices (such as any sort of “Y”-cable splitter, or a multi-outlet situation like using a multi-port surge protector or power tap).
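
The relationship between gauge number and wire thickness follows a standard formula (ASTM B258), which this small sketch just evaluates to show that 14 AWG really is thicker than 16 AWG:

```python
def awg_diameter_mm(gauge):
    """Diameter of solid wire in millimeters, per the standard AWG formula."""
    return 0.127 * 92 ** ((36 - gauge) / 39)

# 14 AWG works out to about 1.63 mm across; 16 AWG to about 1.29 mm.
# So the smaller gauge number is the thicker (higher-capacity) wire.
```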

Battery backup
[#olbatery]: Old batteries

Old batteries may experience various problems, such as safety hazards (leaking toxic chemicals, or heating up which could be a fire hazard). Sometimes the batteries have been known to swell/bulge, which may even make them challenging or infeasible to easily remove from the UPS (making replacement a challenge). (ServerFault.com question about a UPS giving off a “Rotten Eggs” smell tells a tale about “batteries were deformed”. Later commentary notes “the smell may actually seem sweet” depending on the concentration level.) These types of problems are well known to be a problem with a UPS that has been in operation for a long time. For example, DSL Reports about batteries indicates a buzzing sound may be hydrogen leaking out; both that forum thread and MCBSys technical blog: “APC Battery Disconnected” Message “May Mean Overheating” discuss batteries that were fairly hot even after they stopped being used for hours.

A safe recommendation is to replace a battery which has been used beyond its expected lifetime. This is particularly a good approach for unattended devices, like UPS units, which have batteries that are not regularly seen by any end user. As a generalization, that may last years. (This topic is mentioned further in the section about UPS costs. See: UPS costs: battery life.) By the time that a UPS battery's lifetime has passed, battery technology may improve to the point that a better type of battery may be on the market. (So, in the case of a UPS, getting an entire, better UPS unit may make more economic sense.)

Laptop batteries

Laptops are known to operate off of battery power. A SuperUser answer about battery usage notes that laptops may be designed to use more power than what a battery can suitably supply, but power reduction techniques may simply reduce the system's power usage when it is on battery power. This results in the system being less powerful in this condition. One of the comments on the page notes that a VBIOS (video BIOS) may cause a GPU to make adjustments. Another comment on the page notes that this may be done to prevent fire.

[#upspower]: “uninterruptable power supply” (“UPS”) battery backup units
Buying a UPS

If looking to make a buying decision, consider which companies have been supportive by viewing NUT acknowledgements (archived from the Wayback Machine @ Archive.org). For those philosophically/altruistically motivated, this endorsement may provide some companies a leg up over some of the alternatives.

Supporters of NUT
[#eatonoss]: Eaton (previous supporter of open source software solutions)

This organization has demonstrated support for NUT and continues to be friendly to the open source community. The NUT acknowledgements page called Eaton “Our main supporter”.

In the past, this website tried to point out Eaton's support of NUT. That support of NUT is now historical. Eaton page (American Edition) about Open Source software has stated (when checked June 7, 2014), “Eaton no longer supports Open Source software.” NUT's web page of acknowledgements: section on Eaton says “Eaton does not support NUT anymore.” Also, “please do not consider anymore that buying Eaton products will provide you with official support from Eaton, or a better level of device support in NUT.” (Presumably Eaton may offer some official support for a bought product, but would not be providing any official support for NUT.)

Prior to Eaton, NUT had identified and thanked MGE UPS SYSTEMS as the prominent supporter. At that time, MGE included what is now Eaton. MGE UPS SYSTEMS basically got split at one point, and Eaton was the name for the portion of the company called MGE Office Protection Systems (which is the portion that handled products up to 10 KVA). So the people who run Eaton seem to have a history of supporting solutions using open source software code (regardless of whether they were working for the group called MGE Office Protection Systems or, later, a company named Eaton).

When MGE was split, one portion continued under the name of Eaton. The other portion of the split continued to use the MGE name, and was acquired by Schneider Electric, which presumably now competes with Eaton. There was once an address at http://opensource.mgeups.com but it seems the mgeups.com domain name is now affiliated with Schneider Electric. The website at the http://opensource.mgeups.com address did not seem to be providing information when checked in early February, 2013. So, it seems the brand name of MGE may not have continued to be related to supporters of “open source” after the split. Later on, http://mgeups.com redirected to a page at www.apc.com.

Information related to supporting open source software solutions may be seen at: Eaton's page about Open Source (also: Eaton Open Source (alternate URL), Eaton Open Source (destination of redirection)).

Although NUT's acknowledgements clearly documented Eaton as the primary supporter, other companies have also provided hardware, and other companies provided information. The NUT Acknowledgements page mentions several companies.

A recognized brand

For many years, the company most famous for being the leading brand name has been “American Power Conversion Corp” (“APC Corp”, “APCC”, or “APC”). APC has since been bought by Schneider Electric, which is a French company (meaning that APC is no longer quite so “American”, despite the origin of the name).

MGE UPS SYSTEMS has also been partially swallowed up by Schneider Electric. However, Eaton's position on open source shows that Eaton seems to have inherited and continued the practice of being friendly to open source environments.

Unpacking a UPS

When unpacking a UPS made by APC, check for the slip of paper showing whether the unit passed the quality control checks. At least one UPS has been unpacked and found to not be working. Sure enough, the slip of paper showing the QC check showed that APC's tests had detected that it didn't work. Way to be checking quality! (<cough> <cough> {hack} wheeze... That's sarcasm at its fullest: a decent quality control program would have prevented such a failed unit from being fully packaged and shipped out from the factory.) Still, despite those “ferociously underqualified, bad, awful results” (“FUBAR”), the documentation of the problem meant it didn't take a lot of further investigation to figure out whether the technician installing the unit had made a technical mistake with perfectly working equipment.

[#upsbatlf]: Ongoing costs (UPS Battery Life)

Some related reading: also see the note about old batteries (which does hyperlink back to this section).

APC FAQ: Expected life of APC UPS batteries says, “Most APC batteries should last three to five years. There are many factors which affect Battery life including environment and number of discharges.” (Content from that quoted page also available at Schneider Electric Technical FAQ: What is the expected life of my APC UPS battery?. At least some APC equipment has been known to sport the Schneider Electric name. APC FAQ: “What is the expected life of my APC UPS battery?” might be a newer URL.) It then goes on to explain how/why results vary.

Some people tend to recommend less time. KVar's response on an APC thread says, “Some APC Back UPS models may have a shorter battery life expectancy. Please reference the user's manual of your APC Back UPS to determine the exact battery life expectancy.” Multiple people said 2 years as a response to sharptooth's ServerFault.com question, “What is the lifespan of an typical UPS battery?”.

People also suggest having batteries with more power than what may be needed. Actually, they often re-phrase that as suggesting that people don't use as much power as what the battery can support. Such recommendations have been made to suggest increasing battery longevity and runtime. Load recommendations have included 80% (an APC page) and 50% (cited by KVar's response on an APC thread, and Ward's answer to sharptooth's ServerFault.com question, “What is the lifespan of an typical UPS battery?”).
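
Those load guidelines boil down to simple arithmetic. (The 50% and 80% figures come from the sources cited above; the function name below is just for illustration.)

```python
def max_recommended_load_va(rated_va, load_fraction=0.5):
    """Keep the attached load under a fraction of the UPS's rated capacity."""
    return rated_va * load_fraction

# A 1500 VA unit under the conservative 50% guideline: stay below 750 VA.
# Under the 80% guideline: stay below 1200 VA.
```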

APC's Ask the Experts: question about expected battery lifespan may show the same sort of thing. Additional guidelines may be provided from those pages, such as details about storing batteries.

Other possible resources/discussion: Discussion about heat.

Note that a battery does degrade, losing capacity and/or life span, through usage. However, it also degrades (to a lesser extent) when sitting on the shelf. (This is according to a very experienced and extremely trustworthy professional who routinely sold batteries.) Therefore, if you have a chance to buy batteries at a nice price, getting extra stock to be used years later might not be the best approach. (One reason is that batteries degrade when sitting on the shelf. The other reason is that improvements to battery technology have been getting developed over time, so purchasing a battery later might result in a superior battery being obtained.)

Don't over-discharge

APC FAQ: Expected life of APC UPS batteries notes, “There are many factors which affect Battery life including environment and number of discharges.” (Then, the article elaborates more later. Part of that text presumes that an APC-branded battery is being discussed.) “Only perform runtime calibrations on your UPS one or two times a year, if necessary. Some of our customers want to check their systems to verify that their runtime is sufficient. However, consistently performing these calibrations can significantly decrease the life expectancy of your APC battery. It is also important to remember that a UPS is not designed for constant deep discharges. We do not reccomend using your APC Smart-UPS/Back-UPS as a portable power supply that is repeatedly discharged/recharged (like on a portable equipment cart). These units are designed for emergency use during unexpected power outages or momentary loss of power.”

APC's Ask the Experts: question about expected lifespan says, “Do not exceed 80 percent of a UPS unit's rated capacity due to the reduction in run time. When loads increase, runtime lessens. In the event of a power failure, a UPS loaded to full capacity will drain and discharge its battery quickly and will lessen the life expectancy.”

See also: APC's UPS Selector, APC UPS Upgrade Selector
Interacting with a UPS
Testing a UPS

Commentary from “sawdust” at Superuser notes, “Testing a UPS by pulling the plug out of the wall socket is a bad idea, because you are removing earth ground while a battery source continues to provide 120VAC to the equipment.” Instead, the recommendation is to flip the power switch on a surge protector that is being used between the UPS and the wall outlet. This more faithfully reproduces the situation of what will happen when electricity stops flowing to the UPS, without removing the earth ground (which would still exist even if electricity stops flowing to the UPS).

Of course, a person could theoretically pull an electrical plug from the wall, and a UPS should be able to handle such a situation. Still, being unnecessarily hard on equipment may be less than ideal.

Software commands may have the system perform a calibration. See: communicating with the UPS.

[#upscomms]: Communicating with the UPS

A UPS may have some sort of data connector, possibly as part of the UPS or, in some cases, as an add-on card that is placed into a UPS. The specifics vary depending on what model of UPS is used. Connector types could include Ethernet, USB, or RS-232 (serial port). Note that some UPS units may have jacks that look similar to standard ports, but which aren't. For instance, APC may expect that special serial cables, using some custom pin-out wiring, are used instead of more standard RS-232 pin-outs. (APC FAQ (#5): “Which is the appropriate serial cable to use with my UPS?”).

Note: a message about being disconnected may indicate a problem with the unit. Perhaps the connection was solid, but the unit just failed to communicate. MCBSys technical blog: “APC Battery Disconnected” Message “May Mean Overheating” indicates the issue may be a result of batteries. Check if the battery is old. If so, consider replacing it. Otherwise, check if the batteries are hot. If so, quickly do what is needed to remove those batteries, and replace them.

Zonker's APC UPS Console Clues may provide some wiring details. Also, Jon Steiger's APC SNMP details

So, once the data connection is made, how does one interact with the UPS? Again, implementations may differ, but there may potentially be multiple ways available. Consider any of the following:

Standardized protocols

Try communicating with the UPS using some standard protocols, such as: HTTPS, HTTP, SSH, SNMP, telnet

Of those, HTTPS and HTTP may be the most likely to have a more customized, pleasant interface for manual interaction, while those same protocols may be more challenging (than the alternatives mentioned) to use for automation.

Specialized software
Software built into the operating system

For modern systems, there may often be some support bundled with the operating system. If the system uses a graphical user interface and has a control panel, checking there for an icon related to “Power” (or perhaps “Battery”?) might be fruitful.

Microsoft Windows

Here is some information.

Windows 7

(The following was written while the effects/results were still untested, but this seemed reasonable.)

Command line

A command called PowerCfg might be useful. Superuser commentary suggests using “ Powercfg.exe -qh ” to “also display power related settings that are not shown within Control Panel.”

Using the graphical interface

Control Panel, Power Options. Choose a plan to use for viewing or altering the settings. (If desired, first use the “Create a power plan” hyperlink. Then choose the newly-created plan.) Choose the “Change plan settings” hyperlink that is related to the desired power plan. Then, choose “Change advanced power settings”. (Then, there might be a UAC hyperlink that says “Change settings that are currently unavailable” which might make some more settings visible? Screenshot shown by: Superuser commentary.) One related setting may be in the section called “Sleep”, under “Allow hybrid sleep”, where the setting is called “On battery”. Other settings may be in a section called “Battery”.

Windows XP

e.g. Wikipedia's article on PowerChute: section about the Windows XP UPS service.

Other software
[#upsnut]: Network UPS Tools (“NUT”)

Connect the UPS to the computer. Then, try to make sure that connectivity is actually working. For example, if a USB cable is being used, see the section on USB ports to install software as needed.

NUT in Unix

Then, as for configuring the actual UPS software, the section about responding to a signal from a UPS currently has more details than this guide.
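
As a rough sketch of what declaring a UPS to NUT can look like on a Unix system (the UPS name and description below are made up; usbhid-ups is one of NUT's common drivers, but the correct driver depends on the specific hardware):

```
# /etc/nut/ups.conf -- declare one USB-attached UPS for NUT
[officeups]
    driver = usbhid-ups
    port = auto
    desc = "Workstation UPS"
```

After that, running “upsdrvctl start” should start the driver, and “upsc officeups” should report the UPS's status variables.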

Using NUT in Microsoft Windows
Google Code for WinNUT may have a newer version than the software found on the old WinNUT project page.
Software from the UPS's manufacturer

The following software may provide much of the same functionality, but have some key differences (such as providing better support for communicating using IP over Ethernet, rather than USB, or vice versa).

Network Shutdown
Editions of PowerChute include Business Edition and Personal Edition. Oh, and on a historical note, there's also the version built into Windows XP.
[#surgeprt]: Surge Protectors
  • The following is true for surge protectors. At least, typical cheap-ish surge protectors that are based on using a component called a “metal oxide varistor” (often abbreviated as “MOV”)...
Some More Notes On Surge Protectors

note: NYTimes.com Wirecutter: The Best Surge Protector links to University of California Office of the President: Risk Services, BSAS (“Be Smart About Safety”), “Surge Protector and Power Strip Safety” (“Information from Yale University Office of the Fire Marshal”), which seems to have good advice, but also says “Surge protectors or power strips should have a cord of no more than 6 feet in length.” However, many professionally manufactured surge protectors do come with longer cords, and presuming that the manufacturers make safe equipment, that advice (and maybe other advice in the same document) may be a rule intended for a particular place (a college) and a certain audience, rather than general advice that is universally recognized.

NYTimes.com Wirecutter: The Best Surge Protector has stated, “Most estimates put the average lifespan of a surge protector at three to five years.” Even more specifically, elsewhere on the page, that text is also stated and expanded on a bit: “Yep, that’s right: Surge protectors don’t last forever. Most estimates put the average lifespan of a surge protector at three to five years. And if your home is subject to frequent brownouts or blackouts, you might want to replace your surge protectors as often as every two years.” (Emphasis, via boldness, was added and not part of the original quote.) That quotation cited a HowToGeek.com article which provided a time-based recommendation of “every two years or so, but any recommendation like this one can only be a rule of thumb.” The reason that the estimate isn't more precise is described elsewhere in that article, where it explains that after a “1000 joule surge protector takes an 1000 joule hit, it’s done for. But it’s also done for if it takes ten 100 joule hits or if it takes a thousand one joule hits. It’s all cumulative.” “Surge protector lifespans aren’t measured in years they’re measured in joules.” So, the numbers of years cited in prior paragraphs are estimates.
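
The “it's all cumulative” idea can be modeled with a trivial sketch (the class and the numbers below are illustrative only; real surge protectors do not report their remaining budget):

```python
class SurgeProtector:
    """MOV-based protector: the joule rating is a cumulative budget."""
    def __init__(self, rating_joules):
        self.remaining = rating_joules

    def absorb(self, surge_joules):
        """Record a hit; returns True while protection presumably remains."""
        self.remaining -= surge_joules
        return self.remaining > 0

# A 1000-joule unit is spent after one 1000 J hit, ten 100 J hits, or a
# thousand 1 J hits. The owner usually has no way to know which has
# happened, which is why time-based replacement estimates exist at all.
```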

“If you never want to replace your surge protectors again, you can look for high-end series mode or hybrid surge protectors that can last indefinitely. But their prices can easily soar to 10 times the price of our main pick and runner-up, making them an impractical option for most people.” Audiogon.com Discussion Forum: Surge Protector - Brick Wall/Zero Surge vs SurgeX shows some pricing information (in the top post with the questions), and has a post by “Jim”, “President of Zero Surge”.

This text recommends at least marking the surge protectors. When you plug in a surge protector, see if it has been written on already. If not, use a marker and write (on the bottom or back side of the surge protector) the date that it started getting used. Maybe also note the warranty length and the level of Joules protection if that is written on packaging that is about to be lost. Ideally, equipment will then get replaced in a suitably timely fashion due to a well-organized effort to stay on top of tasks. However, even if financial challenges or disorganization delay replacement past the ideal time, at least having the date written down will allow a person (years later) to clearly confirm any suspicion that the device is old. While replacement might get delayed for one reason or another, there seems to be little downside to enabling future decisions to be made from a more well-informed position.
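
Writing the install date on the unit makes a later age check trivial. Here is a small sketch of that check (the three-year default is just the low end of the lifespan estimates quoted above; the function names are illustrative):

```python
from datetime import date

def years_in_service(marked: date, today: date) -> float:
    """Approximate age in years, from the date written on the unit."""
    return (today - marked).days / 365.25

def due_for_replacement(marked: date, today: date,
                        lifespan_years: float = 3.0) -> bool:
    """True once the unit has outlived the assumed lifespan estimate."""
    return years_in_service(marked, today) >= lifespan_years

# A unit marked 2015-06-01 and checked on 2019-06-01 is about four years
# old, which is past the low end of the three-to-five-year estimates.
```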

Refrigerators, air conditioners, and some other devices should not be protected with a common surge protector. A page on GE Appliances says, “We do not recommend connecting a refrigerator or freezer to a surge protector. The compressor is sensitive to temperature and current overloads, and will shut itself down with a surge. ... A surge protector will override this system, and if there is a power surge, your refrigerator may not restart.” A Home Depot page was quoted by Google to say, “it is not recommended for newer GE Refrigerators to be used with a surge protector”. Such devices ARE good to be protected by a surge protector, but not necessarily a common surge protector. Special surge protectors have been made that better support such devices. They are sometimes called surge protectors intended for “large appliances”.

ThisOldHouse.com article: “Whole House Surge Protectors: How Effective Are They?” reports, “damage inflicted by” “minor power fluctuations” might “not show up for some time.” Perhaps no immediate damage is readily noticeable, but that web page quoted Andy Ligor, a consultant with A.M.I. Systems Inc., a firm that installs both residential and commercial surge-protection systems, as he explained a possibility: “Then a year or so later your microwave stops working.” This damage could come from various possible sources, “even the cycling on and off of laser printers, electric dryers, air conditioners, refrigerators, and other energy-sucking devices in the home.” Electricity usage from off the property might even have some impact.

TheWireCutter.com Reviews: “The Best Surge Protector” seemed like a pretty thorough review, including having someone with electrical expertise perform some testing. In the late 2017 review, the electrical expert went so far as to disassemble the devices to check their components. For the 2018 version, the article notes, “In 2016 and 2017, Johnson dissected each surge protector to assess the components inside. He compared the thickness of the wiring, the size and arrangement of the MOVs, whether any filters or capacitors were incorporated into the designs, and the overall construction quality. For reputable brands selling surge protectors in the $15-50 range, the guts were so similar that the dissection didn’t yield any useful information, so we didn’t tear down the seven models we tested at the end of 2018.” Still, electrical testing was done for the 2018 review.

TheWireCutter.com Reviews: “The Best Surge Protector” noted, “According to the Institute of Electrical and Electronics Engineers, no home would ever experience a power surge over 6,000 volts, and most don’t even come close. The major exception to this would be direct lightning strikes, but at upwards of one billion volts, no home surge protector is going to save your TV from one of those.”

So why do surge protector marketing materials show lightning if they can't protect against a direct lightning strike? Maybe because the idea is that they might be able to prevent some indirect damage when the lightning strike isn't quite so direct, e.g., when lightning strikes somewhere down the block.

Should Joules be distrusted?

TheWireCutter.com Reviews: “The Best Surge Protector” had some comments available in February 2019. Some sharable URLs were provided; however, URLs like http://disq.us/p/1zh84sx and http://disq.us/p/1x1fj1a just redirected to the article, and the web browser didn't even jump down to the comment. Also, the URLs didn't include a reference to the year, so presumably the comments may get replaced when a new year's content gets released (e.g., in 2020).

So, once those comments disappear, a person may not be able to verify the quoted source. Still, the following text was found in a user's comment on that page, and the perhaps-unsourceable text seemed worthwhile to quote.

The quoted comments in the following couple of paragraphs were made by “Always-Learning”.

“One last comment on joule ratings, take it with a grain of salt. Unless the manufacture states how the number is derived, it's basically meaningless. The manufacture sums the joule ratings of all the protection elements. There are at least 3 across each prong of an outlet and one or more on the Ethernet, Telephone and/or Ethernet protection elements. If a manufacture puts two protection elements in parallel he can claim twice the number of joule ratings, even though in reality one protection element will never turn on. By way of an analogy, consider a tire that is rated for 50,000 miles of tread wear. Slap 4 on your vehicle and a 5th as a spare and you can claim 250,000 miles of tread wear. It's silly but now you understand how silly or meaningless a single high joule number is without an explanation.”

“the other MOV's in parallel never entered the picture for protection but were placed there so that the company could claim a higher joule rating which is "marketing hype". In this case 6 dual MOV's would cost about $1.50 instead of $0.75 but the company can claim twice as many JOULES as their competitor.” (The next paragraph was the following text.) “Recall that JOULES are what companies use to lure consumers. Consider a tire rated for 50K miles of tread wear. Install 4 on a car and you can claim 200K miles of tread wear, put a 5th on the spare in the trunk and you can claim 250K miles of tread wear.”
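The tire-style arithmetic in that comment can be sketched with made-up numbers (the 500 J per-MOV rating below is hypothetical, not taken from the quoted comment):

```shell
# Hypothetical per-MOV rating; real MOVs vary widely.
per_mov_joules=500
movs_in_parallel=6
# Marketing-style claim: sum the ratings of every protection element.
echo $(( per_mov_joules * movs_in_parallel ))   # prints 3000
# More realistic view if only one element ever conducts during a surge.
echo $per_mov_joules                            # prints 500
```

The headline number is six times larger, even though (per the comment) the parallel elements may never share the load.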

One of the early comments on that page recommended “hybrid device using both a GDT (Gas Discharge Tube) and MOV (Metal Oxide Varistor) like the Morgan Mfg models”... “Google KF7P Metalwerks (as they are the new owners of the Morgan devices)” However, Always-Learning seemed to indicate that such a technique doesn't add significant protection. Instead, he recommended using a “Series Mode” protector, and noted, “The 3 companies that make these products are Zero Surge, Inc., Brickwall (a private label of Zero Surge) and SurgeX.”

Always-Learning also noted, “Besides many MOV based protectors that "protect" ETHERNET can only do 10/100 megabit connections”, so “they cannot secure 1000” megabit, a.k.a. “gigabit connections.” (Actually, what was said was “1000 gigabit connections”, but given the common networking speeds of the year 2019, it is more likely that was a typo than an attempt to refer to terabit.)

Comments also noted that a “P3 Kill a watt” (a device for measuring power usage) can be nice.

Amazon.com: Digital Energy: Surge Protector says, “An avg home gets 345 power surges per year”. (“avg” is a standard abbreviation for “average”.) The next statement says, “Power surges are the #1 cause of data loss”. (Actually, insufficient backups are likely a bigger cause of data loss, although noting that is less useful for trying to sell that particular surge protector.)

That quote came from a page that sells a surge protector with ten outlets, two USB ports, two RJ-45 (Ethernet) ports, three RJ-11 (landline telephone) ports, two coax cable (“F-Type Connector”) ports, and indicator lights for being grounded and for actively protecting. The cord length can vary from 8 feet to 25 feet.

Another feature-filled option is the Tripp Lite TLP1208TELTV, which comes with a shorter power cable and no Ethernet ports, but it does support a larger total of twelve electrical outlets. If just a large number of outlets is desired, Maxuni's 12-port power strip has even more USB ports (and is cheaper), but lacks the RJ-45, RJ-11, and coax ports.

[#elcpwtap]: Power Taps

These look like surge protectors, and might be. Some power taps are definitely not surge protectors. However, many surge protectors have been known to call themselves power taps. Therefore, if a device is marketed as a power tap, it might or might not be a surge protector.

Power taps which are not surge protectors have the following advantages over those that are surge protectors:

  • Power taps which are not surge protectors wouldn't fall in the same “don't daisy chain” category. Fire hazard concerns and overloading concerns may be some reasons to not daisy chain much, if at all. However, they shouldn't contribute to the issue where a surge protector can cause another surge protector to protect less, leading to an overall reduction in the ability to protect against surges.
  • Power taps which are not surge protectors are likely to not have the “metal oxide varistor” (“MOV”) component that is used in most cheap surge protectors, and which tends to age most quickly in a surge protector. So, power taps which are not surge protectors may remain fully functional longer than most surge protectors.
  • Power taps which are not surge protectors are likely to be cheaper than surge protectors.

https://www.techwalla.com/articles/power-tap-vs-surge-protector says that power taps use the [safety] “standard from Underwriter Laboratories (UL), the evaluator of power taps and surge protectors. Surge protectors meet UL 1449 while power strips meet UL 1283 standards.” This is different than an “extension cord”, because “Extension cords are for temporary use only and have a different standard”.

https://zerosurge.com/certifications/ indicates UL 1283 actually covers surge protectors. The relocatable power tap may be UL 1363.

Some terms that might be more common to describe a power tap which is not a surge protector are “relocatable power tap” or “power switch”, and/or “power block” (and/or perhaps “power tap”).

UL 1363A may indicate devices that are not as easily movable. (PDF page 5)


Mains Power

Dave Tweed's answer to JYelton's Electronics.StackExchange.com question on voltage levels says, “In the US, the electric utilities are supposed to deliver power to residential customers at anywhere between 110 and 125 VAC RMS.” A comment to that answer notes, “in the UK, voltage was generally specified as 240V +/- 6% while mainland Europe used 220V, in both cases with some regional variations until the grids were tied up. As part of EU harmonisation, the compromise of 230V +/-10% was reached which covered both previous ranges”.

A quick glance indicates that most resources identify small differences, like 110V vs. 120V (or 115V), as insignificant and not worth fussing over. However, the difference between 120V and 220V is considered noteworthy enough that such considerations must be handled correctly. Failure to handle such large differences correctly could result in equipment with fried electronics inside.
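A tiny sketch of why the large difference matters, using the 110-125 VAC delivery range quoted above (the in_range helper is purely illustrative, not a real utility):

```shell
# Report whether a measured supply voltage falls inside a device's rated range.
in_range() {  # usage: in_range volts min max
  if [ "$1" -ge "$2" ] && [ "$1" -le "$3" ]; then echo yes; else echo no; fi
}
in_range 117 110 125   # typical US delivery into a US-rated device: prints yes
in_range 230 110 125   # European mains into the same device: prints no
```

The second case is the one that can fry electronics: the supply is nearly double the device's rated maximum.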

[#elcpwapi]: Power interfaces

An “application programming interface” (“API”) related to power can instruct hardware to use less power. (Although the concept of an API is more of a software focus than a hardware focus, using such an API is often done with a mindset of controlling hardware. Also, hardware needs to support the API. For this reason, information has been placed here, in the hardware section.)

General power-related standards

(These standards might be about the whole computer, or perhaps about the motherboard. However, they do support power, and so are in this section about power interfaces.)

These standards are most well known for helping to save power. Slowing down a CPU may be one method to save power. (Laptops have been known to have an option to operate more slowly when running off of battery power, to try to waste less energy when speed is not needed.) So, as noted in the section about APM, there may be some details available about interacting with CPU speed.

[#advpwrmg]: Advanced Power Management (“APM”)

Some brief information is available at Ubuntu tips about a program called hdparm (which might be specific to working with hard drives, and has some support for interacting with APM). See also: OpenBSD Laptop mini-HOWTO.

The section about CPU usage: power standards may also have some applicable info.

[#acpi]: Advanced Configuration and Power Interface (“ACPI”)

PDF file about a suspend/resume framework in OpenBSD (visible via Google Docs) section 3.2 says “OpenBSD has its own” ACPI “implementation. The only other open source alternative which most operating systems are using is Intel's ACPICA”. “There is also a third closed-source implementation used in Microsoft Windows.” (The PDF file provides further references.)

This “standard” may be more loosely designed and less standardized than some alternatives (like APM). It seems that ACPI was a theme for the OpenBSD 4.5 release, although the OpenBSD 4.5 song: “Games” showed, “[Sorry, no commentary]”. TechNet page on PnP has referred to “the Advanced Configuration and Power Interface (ACPI) Specification, a hardware and software interface specification that combines and enhances the Plug and Play and Advanced Power Management (APM) standards.”

Perhaps try: “ acpidump -o firstPartOfOutputFilenames ” and “ dmesg | grep -i acpi ” (or check the /var/run/dmesg.boot file).

Some further info might be available at: stopping a system with ACPI.

Some further references to ACPI (or perhaps similar-ish technologies, like PnP) may be on the page about detecting hardware.

Operating system support may vary. ArchLinux guide to pm-utils (suspend/powerstate setting framework), ArchLinux guide to Uswsusp: user space software suspend, ArchLinux guide to suspending to RAM using hibernate-script, and hyperlinks.

Some specific power standards
[#vesadpms]: Display Power Management Signaling (“DPMS”)

(This DPMS is unrelated to DOS Protected Mode Services, another technology that uses the same abbreviation.)

This may also be known as VESA DPMS (where the term VESA refers to the Video Electronics Standards Association). PC Monitor DPMS specification explanation explains how “Suspend” mode saves more power than “Standby” mode.

DPMS On
If a web page is being read on a display, then the “On” state is being used.
[#dpmstndb]: DPMS Standby

PC Monitor DPMS specification explanation indicates this is set by turning off the horizontal sync signal, but leaving the vertical sync on. The referenced web page also indicates that turns off the “RGB guns”, while keeping the power supply on and tube filaments energized. (Some of that terminology may be a bit specific to a specific implementation: a CRT monitor. Presumably other displays would simply do a similar approach, turning off whatever method of RGB output is used.)

[#dpmsuspn]: DPMS Suspend

PC Monitor DPMS specification explanation indicates this is set by leaving the horizontal sync signal on, but turning off the vertical sync. The referenced web page also indicates that turns off the RGB output (“RGB guns” for CRT displays) and turns off a power supply in the display, saving more power, but still keeping tube filaments energized.

[#dpmsoff]: DPMS Off
Some circuitry may still monitor the horizontal sync and vertical sync signals so that the system knows when to switch to another DPMS display mode. However, substantial other functionality may be turned off to reduce power consumption.

Soft Off
Allows software to cause a signal to be sent to the power supply so that the power supply will turn off the computer. (For information about sending this signal, see the section about shutting down a system.) While the computer is powered “off”, there may still be some electricity being supplied to, and used by, the computer. This is mainly to watch for certain signals to power the system back on. Such signals may include WOL (“Wake on LAN”), keyboard input (often just some specific keyboard input, but this depends on the BIOS and/or BIOS settings, and perhaps any key and/or mouse movement may trigger the power on), waking up at a certain time, waking up in response to a “ring” (from a modem), or perhaps some other signal from a PCI device. Which of these signals (and perhaps others?) may be responded to commonly varies a bit between computers, based largely on what the BIOS supports.

Other power standards
Energy Star
The Swedish name “Tjänstemännens Centralorganisation” is abbreviated as TCO. The English translation of the name is “Swedish Confederation of Professional Employees”.
Software to report power
Microsoft Windows

Windows Vista has a command line tool, as noted by TechNet Documentation for Powercfg Command-Line Options.

That list of options may not include absolutely everything. Superuser commentary suggests using “ Powercfg.exe -qh ” to “also display power related settings that are not shown within Control Panel.”

However, the official list of options (from TechNet as noted above) does provide details of some options (which is apparently somewhat incomplete when it comes to -q). Here are some highlights:

Note that the program seems to accept a slash or a hyphen before the name of an option.

Powercfg -l

“ Powercfg -l ” or “ Powercfg -list ” will show a list of power schemes on the system. For instance:

C:\> PowerCfg /LIST

Existing Power Schemes (* Active)
Power Scheme GUID: 381b4222-f694-41f0-9685-ff5bb260df2e  (Balanced) *
Power Scheme GUID: 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c  (High performance)
Power Scheme GUID: a1841308-3541-4fab-bc81-f71556f20b4a  (Power saver)


Note: Some OEM computers may come with their own scheme, such as “8759706d-706b-4c22-b2ec-f91e1ef6ed38  (HP Optimized (recommended)) *” in addition to the three shown above, or “49ef8fc0-bb7f-488e-b6a0-f1fc77ec649b  (Dell) *” (alongside Balanced, and instead of the “Power saver” or “High performance” schemes commonly found elsewhere).

PowerCfg -aliases
Shows some GUIDs and some shorter names (aliases) that can be used any time the PowerCfg program can take a GUID. Some of these aliases may correspond to the GUIDs used by Power Schemes, while others may refer to GUIDs that correspond to various specific settings/options. For example, the “Balanced” scheme's GUID may be shown next to its alias:
381b4222-f694-41f0-9685-ff5bb260df2e  SCHEME_BALANCED
PowerCfg -s GUID
or PowerCfg /SETACTIVE GUIDorAlias

(Use one of the GUIDs revealed from the command that listed available schemes. If you have determined an alias for one of the GUIDs, you can use that alias instead.)

C:\> PowerCfg /SETACTIVE 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c

Here is a sample series of commands utilizing an alias. This determines the current scheme and, after figuring out an alias for the current scheme, sets the profile to that same scheme. (The point of this example is mainly to show what the command will accept. In a more realistic scenario, the destination power scheme would probably be something else seen from “ PowerCfg -list ”.)

C:\> PowerCfg /LIST
[... info cut for brevity]
Power Scheme GUID: 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c  (High performance)
[... info cut for brevity]

C:\> PowerCfg -aliases | find "8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c"
8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c  SCHEME_MIN
C:\> PowerCfg /SETACTIVE 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c
PowerCfg -export outfile GUIDorAlias
Specify a filename, and a GUID (or an alias for a GUID)
PowerCfg -import infile optional-GUID
Specify a filename. If a GUID is specified, the settings will go into that GUID. If not, a new GUID will be made.

(These options might be somewhat newer?)

PowerCfg -duplicatescheme oldGUID newGUID
PowerCfg -changename newGUID NameForNewGUID OptionalDescription
PowerCfg -delete GUID
Specify a GUID (or an alias) to remove the entire scheme with “ PowerCfg -d ” or “ PowerCfg -delete ”.
PowerCfg -q optional-GUID-or-alias optional-subGUID-or-alias
queries a scheme. By default, the current power scheme is queried. Another alternative is to specify either just a scheme GUID (or an alias to a GUID) to query, or to specify both a scheme GUID (or an alias to a GUID) and then a “Sub_GUID” which is either “the GUID of the subgroup to display” (quoting the TechNet page), or an alias for such a Sub_GUID.
Easy options

“ PowerCfg -x ” (or “ PowerCfg -change ”) allows changing some values. For instance:

(The TechNet page indicates a form of “-change settingvalue” at the time of this writing. However, the sample command line also lacks some of the required spaces, so this is likely a typo that should have said “-change setting value”.)

C:\> PowerCfg -x -monitor-timeout-dc 5

The possible settings listed on the TechNet page are:

  • -monitor-timeout-ac
  • -monitor-timeout-dc
  • -disk-timeout-ac
  • -disk-timeout-dc
  • -standby-timeout-ac
  • -standby-timeout-dc
  • -hibernate-timeout-ac
  • -hibernate-timeout-dc

The possible values are all the same: a number of minutes.

However, there may be more options than what the TechNet article provided. For instance, Windows Event Messages, Kernel-Processor-Power shows an example which helps enable Intel SpeedStep technology. It looks something like:

C:\> PowerCfg -x Portable/Laptop /processor-throttle-ac ADAPTIVE
Thorough flexibility

PowerCfg -setacvalueindex and PowerCfg -setdcvalueindex

Both take 4 parameters, which the TechNet page describes as:

Scheme_GUID Sub_GUID Setting_GUID SettingIndex

Recall that an alias can be used. (Perhaps an example can be shown here, in a future version of this document, using “PowerCfg -q ” to determine the details for “LIDACTION”, and then use the LIDACTION alias?)

sleep states

“ PowerCfg -a ” or “ PowerCfg -availablesleepstates ”

PowerCfg -devicequery query-flag


There may be an icon called “Power” in the Control Panel. The powercfg.cpl file is related to the Control Panel's interface for power.

Tray icon

More conveniently, there may be an icon related to power in the “System Tray”/“Message Notification Area”.

Third party

Nirsoft BatteryInfoView may support providing information in easily handled data formats.

[#elcpwlwr]: Power reduction

Note that turning off video output can save power. Simply blanking a screen, by having it output black pixels, might lower power consumption, but generally insignificantly. (However, if the screen is placed in DPMS's “Standby” mode, that may save some amount of power.) Having flashy screensavers generally does not result in any serious amount of power reduction. (On the contrary, if the screensavers use more advanced video card features to support 3D environments and/or partial transparency, additional calculations may actually drive up the power usage.)

Following are some discussions about turning off the screen (placing it in a standby mode). Some information may also be provided about blanking a screen, despite the fact that such a practice doesn't save nearly as much power as placing the video display in a standby mode.

Turning off the display in Unix

As documented by OpenBSD FAQ 7 (Keyboard and Display Controls): section about blanking the console (FAQ 7.7), this may be doable with the wsconsctl command:

  • Running “ wsconsctl display.kbdact=on ” will enable the screensaver and cause keyboard input to restore the screen's contents.
  • Running “ wsconsctl display.outact=on ” will enable the screensaver and cause the screen's contents to be restored when output is sent to the display.
  • Setting display.vblank=on causes the vertical sync to be turned off, which may cause supporting (non-ancient) monitors to go into an energy-saving mode.
  • Setting display.screen_off=10000, or some other specified number of milliseconds, will cause the screensaver's timeout value to be the specified amount of milliseconds (e.g. 10,000 ms = 10 s).

These settings may be affected by an /etc/wsconsctl.conf file.

The xset command may come with X, and be able to switch the video to various DPMS states by using a parameter that contains the phrase dpms. Additionally, a screen blanker may come with X, which is also able to be used with xset command, by using the s parameter.

(If other options are desired, they are discussed in the screensavers section.)

The following command lines show some of the related xset commands, also described by X.org documentation: the xset command.

Using xset to control DPMS
xset +dpms
Enables DPMS features
xset -dpms
Disables the DPMS features
xset force on
Sets the DPMS setting to the DPMS “on” state.
xset force standby
Sets the DPMS setting to the DPMS “standby” state.
xset force suspend
Sets the DPMS setting to the DPMS “suspend” state.
xset force off
Sets the DPMS setting to the DPMS “off” state.
xset dpms number(s)

e.g.: xset dpms 10 20 30

If the first parameter after the word “dpms” is a number, then that parameter specifies a number of seconds before the system changes the video output to use the DPMS “standby” state. That number may be the first in a set of numbers (separated by a space): the remaining numeric parameters are optional. The second parameter (after the word “dpms”), if it exists, specifies a number of seconds before the video output should be changed to the DPMS “suspend” state. The third parameter, if it exists, specifies a number of seconds before the video output should go to the DPMS “off” state. Specifying zero (“0”) seconds for one of these states will disable that state.

xset s other-parameters
If the goal is simply to show a pattern such as a blank screen, see the section about using xset s.
Turning off the display in Microsoft Windows
Turning off circuitry to save power

This may be commonly implemented by having the operating system perform a shutdown procedure.

Command line software

Perhaps (in Win Vista... older as well?): use PowerCfg.

(If that doesn't exist, perhaps NirSoft's freeware called NirCmd may provide an option, using “ nircmd monitor off ”.)

Also, MultiMonitorTool (by NirSoft) is another product that could be used to turn off the display. Alternatively, running “ nircmd cmdwait 1000 monitor off ” waits a second and then turns off the monitor (based on information from the example at HowToGeek.com: Create a Shortcut or Hotkey to Turn Off the Monitor).

For a hybrid solution that starts with a command line but then may require some user interaction, use the command line to start the control panel applet. MS KB Q192806: Control Panel tools accessible by command line shows that in Windows 98, this may be with control powercfg.cpl while Windows 95 used the slightly more convoluted syntax of control main.cpl power.

Windows Vista
Control Panel. If not using “Classic View”, may need to choose “Hardware and Sound”. Then (whether in “Classic View” or not), there is an option called “Power Options”. (If not in Classic View, there may be some additional hyperlinks, like the one called “Change power-saving settings”.)

Rob van der woude's page about shutting down and rebooting has several details. (Note: despite the name similarity, Rob van der woude is not clearly known to be related to Scott Conrad VanderWoude, the creator of the Cyber Pillar website.)

There might be further details on the page about stopping a computer.

Blanking the screen
Modern versions of Microsoft Windows come with screensaver support. Simply use a screensaver with a name of “Blank”. The effect might also be simulated with some other screensavers, such as “Photos” (if directed to look at a location which contains only a black image), or perhaps a Marquee screensaver with blank text.
Hibernating in Microsoft Windows

After installing UltraDefrag, a program may be found at “ C:\Windows\System32\hibernate4win.exe ” (or perhaps something similar, using %windir%). UltraDefrag source code: Hibernate shows source code for a fairly simple program that uses Microsoft's SetSuspendState code. The program only hibernates if the “now” parameter is provided.

The program is mentioned by UltraDefrag Handbook: Console Interface. It appears older versions of that handbook contained text saying the program was: “included in the UltraDefrag package. It was especially created to hibernate the computer through the command line after a disk defragmentation, which may take some time to finish.”

Networking/Communications hardware

See: Networking/Communications hardware. (The [#potsmodm] : Dial-up modems section has been moved to there.)

[#sysclock]: Clock

The hardware clock is pretty commonly supported. On the PC architecture, IRQ 0 is dedicated to supporting the system clock.

OpenBSD FAQ 6: section on OpenNTPD (section 6.12) notes, “many people have noticed that their $5 watch can keep better time than their $2000 computer.”

This section is about not just setting the clock, but similar concepts like software support for the concept of time zones. (Similar information may be at Networking technologies section on time.)

Seeing the current time
  • This might often be able to be done by using software that provides an ability to change time. Often this software may be called date or time.
  • Some websites will report the current time.
[#setclock]: Setting the clock's time
Desired order of operations

If a computer is a “host computer” running virtual machine software, then it makes sense to set the time accurately in the host computer's environment before trying to make changes for the virtual machine. This is simply because the virtual machine may use the information provided by the host computer, so setting up the virtual machine first, and then changing the host computer, may cause changes on the virtual machine as well. (That may be less of a big deal if the time is simply being adjusted by minutes, and if the virtual machine is set to automatically set time. However, it may cause a more substantial adjustment if time zone settings cause a computer's clock to be shifted by a matter of hours.)

See: OpenBSD manual page for the settimeofday() system call, “Caveats” section. This discusses a topic using language meant for computer software programmers, but what it basically says is that making small time adjustments is preferred over making large time adjustments. That being said, if a large time adjustment is needed, doing that once right away is probably more sensible than trying to allow many small adjustments to be placed over a great length of time. However, the manual page's other recommendation probably is good to follow: “Time jumps can cause” ... “programs to malfunction in unexpected ways.  If the time must be set, consider rebooting the machine for safety.”
Overview: why to set things initially manually

One option is generally to keep the clock's time automatically. In theory, just getting that set up will also take care of any initial incorrect time set. However, there are some reasons why it may be nice to set things with a more manual method:

  • Changing the time with a more manual method may be faster and easier to do than to take the time to set up an automated method.
  • Automated methods may require use of some network bandwidth. This generally isn't an issue due to the tiny amount of bandwidth, but it could be an impacting factor for offline systems.
  • Some software may limit how large a single time adjustment may be. The impact is shown by OpenBSD FAQ 6: section on OpenNTPD (section 6.12) which describes this effect with OpenNTPD: “Once your clock is accurately set, ntpd will hold it at a high degree of accuracy, however, if your clock is more than a few minutes off, it is highly recommended that you bring it to close to accurate initially, as it may take days or weeks to bring a very-off clock to sync.” (Emphasis in original quoted text, not added to the quote.)
  • If there is a clock that is known to be wrong, or suspected to not be accurate, then setting the value to a correct or nearly-correct value may limit the negative impact that the machine has on other machines which may compare with peers.
Manually setting the time

Note that this section is mainly about setting the time once, initially, and not necessarily keeping it up to date (especially after the machine may have been powered off, during time changes (mainly handling Daylight Savings Time, but perhaps also coordinating with GPS to change local time), and during any changes made between time standards, like UTC's changes relative to TAI/GPS/EAL).

If this hasn't yet been done, set the current time and the time zone. (Ideally this is done as soon as possible, or as soon as networking is supported if NTP is being used. However, security is usually an even higher priority, and getting remote access working can be preferable. It may make sense to do this after installing some packages, such as NTPD. However, in case some software installation process includes a time in a log, perhaps noting when the software was first installed, it makes sense to try to get this set before installing a ton of software.)
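As a minimal first step, the current time can be checked from a Unix command line before any setting is attempted (the set-time form shown in the comment requires root, so it is only sketched, not run):

```shell
# Show the current time in UTC; works on most Unix-like systems.
LC_ALL=C date -u
# Setting the clock (root only) takes an argument of date/time digits,
# e.g. "date 202501021636" on OpenBSD-style systems; sketched only, not run here.
```

Comparing that output against a trusted clock quickly shows whether a large or small correction is needed.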

Setting time in OpenBSD
Setting a time zone file

For OpenBSD, run:

ls -l /etc/localtime

Example output:

lrwxr-xr-x  1 root  wheel  27 Jan  2 16:36 /etc/localtime -> /usr/share/zoneinfo/PST8PDT

Make sure that /etc/localtime is a symbolic link that points to a desired time zone information file that is located under /usr/share/zoneinfo/. (If not, properly re-create the symlink. (Admittedly, it would be nicer if more detailed directions were here.))
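Here is a sketch of re-creating such a symlink, demonstrated in a scratch directory so that no root access is needed (on a real system the link would be /etc/localtime itself, and the desired zoneinfo file varies by location):

```shell
# Make a scratch directory standing in for /etc.
demo=$(mktemp -d)
# -f replaces any existing link; -s makes a symbolic link to a zoneinfo path.
ln -fs /usr/share/zoneinfo/PST8PDT "$demo/localtime"
# Confirm where the symlink points.
readlink "$demo/localtime"   # prints /usr/share/zoneinfo/PST8PDT
```

On the real system, the same `ln -fs` command (run as root, targeting /etc/localtime) performs the re-creation.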

[#obclklcl]: Getting the kernel to use the desired offset, so clock may be set to local time

Make sure the correct time zone offset is being used by the kernel. If the system clock is using local time, either adjust the operating system to work with that clock, or adjust the clock. OpenBSD FAQ 8.25: Dealing with clocks that aren't set to UTC has details about adjusting the operating system kernel to use a system clock that is set to local time. The following also summarizes making that change.

First, as a standard precaution, back up the file.

cpytobak /bsd

If the kernel is /bsd, then run:

sudo config -ef /bsd

At the ukc> prompt, type timezone (and press Enter). That will show how many minutes are added to the current time to reach UTC.

Determining the DST setting to use

Daylight Savings Time is in effect from spring to autumn. As an example, in the US, the dates tend to be early March to early November (e.g., in 2015 AD, it was March 8 to November 1).

OpenBSD tends to release at the start of May (May 1) and the start of November (November 1). This means that the release of even-numbered versions comes right around the day of the autumn DST change.

As a generalization, this means that nations are typically using DST for times when the latest version of OpenBSD ends with an odd number. (e.g., version “4.7”. When determining whether the release is an “even number” or an “odd number”, this text is only referring to the last digit, after the decimal point.) There might be a small amount of time when that isn't the case (e.g., if DST ends on November 3 instead of November 1), but that's pretty close to being true.

When an even-numbered release is the latest version, that period will start off very near (and possibly exactly on) the date when DST stops being active (the winter months). After a certain date (such as early March), DST is then active for the remaining time that the even-numbered release is current. So, if the time is approximately early March, think about whether the DST time change has happened recently.

If you wish to make DST decisions only when installing the operating system (which might not always be extremely accurate, but may be a generalization that is simple to apply and which may be accurate much of the time), then Daylight Savings Time should be in effect whenever the latest OpenBSD release version is odd-numbered (again, we're just looking at the last digit). This holds the vast majority of the time (possibly excluding just a few days): odd-numbered versions are released around the start of May, when Daylight Savings Time is already active, and Daylight Savings Time continues to be active until a date extremely close to the next release date of OpenBSD.

Specifying Daylight Savings Time

For example, in winter months, the Pacific Time Zone is in PST (Pacific Standard Time), which is UTC -0800.

ukc> timezone 480

In summer months, Pacific Time Zone is in PDT (Pacific Daylight Time), which is UTC -0700.

For PDT, the way this gets specified, ideally, is to specify an offset of 8 hours, but to also specify that daylight savings time will cause the time to be an hour ahead of that. So, specify 480 minutes (8 hours) with a 60 minute (1 hour) offset. That will result in the desired 420 minutes (7 hours).
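That arithmetic can be double-checked with a quick sketch in Python (the variable names here are just for illustration):

```python
# Kernel timezone setting: minutes west of UTC, plus a DST adjustment.
minutes_west = 480       # 8 hours west of UTC (PST, UTC -0800)
dst_adjustment = 60      # DST moves local time 1 hour ahead

effective_minutes_west = minutes_west - dst_adjustment
print(effective_minutes_west)        # 420 minutes
print(effective_minutes_west / 60)   # 7.0 hours (PDT, UTC -0700)
```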

Set this daylight savings preference as needed by typing the following into the Usermode Kernel Config prompt:

ukc> timezone
timezone = 0, dst = 0
ukc> timezone 480 1
timezone = 480, dst = 1
ukc> timezone
timezone = 480, dst = 1

The second parameter of the timezone command at the User-mode Kernel Config (ukc> ) prompt affects the Daylight Savings Time additional offset. Setting the second parameter to one is effectively similar to subtracting 60 from the timezone offset (in minutes). When typing the command, an omitted parameter defaults to not changing the corresponding value. Therefore, running timezone at the User-mode Kernel Config (ukc> ) prompt, without any parameters (as shown in the example above), simply prints the current values. To change the DST value, the first parameter (the timezone offset) must also be supplied.

OpenBSD “manual page” about kernel “options”: section about “Operation Related Options” states about the “TIMEZONE” option, “Double quotes are needed when specifying a negative value.” However, that doesn't seem to be necessary when using the User-mode Kernel Config (ukc> ) prompt.

Making changes effective
ukc> quit
Saving modified kernel.

(The quit command saves, unlike the exit command or pressing Ctrl-C.)

Changes may not be visible until the kernel is reloaded. (That involves rebooting the operating system.) However, before rebooting, it may be desirable to set up OpenNTPD first, so that changes to /etc/rc.conf.local can be tested when the reboot happens.

Setting the time in OpenBSD

After setting the time zone and the kernel's time offset, the recommended way to set the time is to just quickly set up OpenNTPD. Other ways could be:

  • using rdate to get the time from a providing server (e.g., “rdate -v pool.ntp.org”)
    • although that may often not work: a mail archive page states “Most of the servers in the pool will only support ntp. rdate is a different (and nowhere near as good) protocol.”
  • or to use the date command.

If ntpd is working, but is adjusting the clock slowly, try running the following commands.

sudo ntpd -n
sudo ntpd -dsv

If successful, the second command will not exit (on its own). So, after the program adjusts the clock, feel free to stop the program manually, by using another method to stop the software. (For details on how this may be done, see options from adjusting what is running.)

Misc notes
Would running a command to set the time accurately, on each bootup, eliminate the need for some configuration (and also be easier)?
DOS commands to set time
Basic time handling

Many versions of DOS, probably including most popular versions when DOS usage was more mainstream, may not support any sort of complexity related to time zones. To change the current time, run time (which is a command internal to the command line interpreter). To change the current date, run date. To see the current time or date without changing it, one may be able to use a “/T” command line switch with either command. (Use a /? command line parameter to check, or just try it.) Alternatively, just run the command with no parameters, and press Enter once or twice. (Twice may be needed if the implementation asks for both the date and the time.)

Alternatively, include the new information on the command line. For example, time 1:00 will generally set the time to 1:00am while date 10-13-98 will generally set the date to October 13, 1998 (if using dates in the format common in North America). Different versions of DOS may handle some things differently, such as 24 hour time or specifying am or pm or a or p, issues with dates outside the expected range (before 1980 or after 1999), and internationalization support.

Getting time from a remote source
Perhaps use net (from LAN Manager and similar: predecessor Workgroup Add-on for MS-DOS and successor NTLM)?
Microsoft Windows

The commands from DOS (date and time) may have an effect. For some reason, when using the time command interactively, the input to the command may end up showing up in the command line history (as if it was a command). (This has been seen in Windows Vista.)

The clock in the lower-right corner may be able to be used to set the time.

There are multiple other possible solutions that may work well, depending on what operating system is being used. Relevant commands may be:

net time /?
w32tm /?

The latter option may show information similar/identical to: TechNet: Windows XP help for the W32tm command.

For further details related to network synchronization, see Network-synchronized time.

Interface Hall of Shame notes (describing Windows 95): “changes to the date are accepted as they are entered, causing an immediate change to the system date, without the user's having selected the OK or Apply buttons.” (Font decoration has been removed from quoted text.) For further discussion, see the referenced web page.

Other operating systems/Implementations
The section about Network-synchronized time may have some options.
Keeping a clock accurate
Ongoing synchronization is covered by the section about Network-synchronized time.
Some standards
[#midisnd]: MIDI sound

Details about the MIDI file format are available.

For computers, the collection of sounds (called a “sound bank”) may be stored on the sound card, or handled by drivers that are meant to be specific to the sound card. Therefore, many solutions have been fairly specific to certain hardware, which is why MIDI is covered in this hardware section (even though the exact sound assigned to each instrument could be implemented using software).

Some quality sound banks may include:

Microsoft GS Wavetable SW Synth

This might be available after installing some drivers, including “ES1688 AudioDrive (WDM)”, a driver that was meant for ESS Technology, Inc.'s “Sound Blaster”-compatible ES1688 AudioDrive circuitry. Note that a non-WDM driver (simply called “ES1688 AudioDrive”) might not result in Microsoft GS Wavetable SW Synth being installed. (This information came from using Win98SE.)

QuickTime
This may require a separate download, but QuickTime player has provided a fairly good option for a MIDI sound bank.

Eawpatches (version 1.2 or 12?) is recommended by PrBoom's Download Page for software designed for Microsoft Windows.
Sound banks for a GUS

Historical note: This information is likely to be of only very limited benefit for people in the future, since Gravis UltraSound sound cards were primarily ISA cards that don't work with many modern motherboards. However, the quality was outstanding enough that it may be worth mentioning just for historical value. Anyone fortunate enough to use such a card may be able to benefit from some very nice sounds.

A very good set was made available for owners of the Gravis UltraSound sound cards, distributed by Gravis for use with the sound cards. However, when Timidity software for Linux started using these, a company called “Eye & I” objected to the sound being freely used by many people who were not using Gravis UltraSound equipment. (The company called Eye & I may have also been referred to as “INI Productions”, where INI presumably refers to sounding like Eye & I when each letter is stated.)

TOOGAM's Software Archive: Gravis UltraSound may discuss this in more detail, and have reference to a third party shareware sound bank that was used with GUS cards.

It seems that Eye & I may have provided licenses to other sound card manufacturers. Voice Crystal's History page says “The Voice Crystal sound sets have been incorporated with some of the top selling PC sound cards on the market, including the following products: Turtle beach Maui, Rio and Monterey; Advanced Gravis UltraSound, UltraSound Max, UltraSound Pro and UltraSound Plug & Play.”

[#sndac97]: AC '97

Many sound devices seem to support a standard called Audio Codec '97 (very commonly, probably most commonly, abbreviated to something like AC'97 or AC97). Perhaps this standard is most commonly supported by sound circuitry built onto a motherboard. Some BIOS setup programs may provide an option to enable AC'97 support for the embedded hardware. This would make a single working AC'97 driver seem highly desirable (because of the wide variety of hardware that such a driver might support).

However, there are enough differences in hardware implementation that a driver that works with one card may not work with another card. So, such a universal AC'97 driver may not really be all that practical to create.

A little bit of further information, including some drivers that may work with some (but not all) AC'97 hardware, may be on TOOGAM's Software Archive: section about sound cards.

[#sndpcspk]: PC Speaker

Info at: Wikipedia page for “PC Speaker”. A lot of people are familiar with this only being capable of making beeps. Actually, the PC Speaker could be used to play WAVe files (which is a standard way of recording actual sound), as noted by TOOGAM's Software Archive: Windows 3.x, which mentions a driver for Windows 3.x and Win9x. That driver was notable for having an option to set a number that determined the quality, and the driver also had a characteristic which was quite unusual for a sound driver: it could slow down responsiveness of the entire system while a sound was actively playing.

Even after modern “sound card” circuitry became standard in PCs, many PCs continued to have the PC Speaker, despite that being a cost and despite many people not caring about it (or even knowing about it). A key reason why the PC Speaker remained was compatibility with the tradition of being able to use the PC Speaker as an output device even if other common/primary output devices were not working, such as when there was no functional video circuitry, or a major electrical issue like an unkeyed (E)IDE connection to a hard drive being plugged in upside down.

OS-specific Details
Microsoft Windows interfaces

rundll32.exe shell32.dll,Control_RunDLL mmsys.cpl,,0 displays tab number zero of the Multimedia properties page. rundll32.exe shell32.dll,Control_RunDLL mmsys.cpl @1 may have a similar effect. Using the first syntax, selecting various numbers after the commas may show different tabs. With Windows Vista, there may be as few as three tabs (“,,0”, “,,1”, and “,,2”).

Audio may be adjusted by software, including Nirsoft NirCmd which may be a free way to interact from the command line.

See also the section about supporting sound cards in DOS.


There are various standards for sound cards. The most commonly used/supported may have been the PC Speaker, Adlib (which was older), Sound Blaster (which fully supported Adlib), General MIDI, and Gravis UltraSound. Many older pieces of software would only support some of those. Newer software might support those and/or other sound cards. Allegro version 4 (e.g. version 4.4.2) became a good option for supporting multiple sound cards with software that required a 386 or newer. (A web page, About Allegro.cc, refers to Allegro.sf.net which identifies itself as the main page.)

Some drivers may be available at TOOGAM's Software Archive: Sound card drivers.

Sound Blaster support

Many/most well-written programs that supported the Sound Blaster would check for an environment variable called BLASTER. The variable would store multiple values. The exact values that were impactful could vary, particularly depending on what sound card was being used. The most commonly supported were:

  • I/O port address: most commonly set to 220, with 240 being the next most commonly used. Note that 240 is an address that may also be commonly used by NE2000 cards.
  • IRQ: most commonly set to 5, although this may have defaulted to 7 with the original Sound Blaster card
  • DMA: most commonly set to 1 or 3
  • Sound card type

As an example of setting an environment variable for the BLASTER variable:

set BLASTER=A220 I5 D1 T1

(The above was meant as a cursory overview. More hyperlinks to sections like IRQ, and details about the Type value, may be added at a later time.)
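As an illustration of how a program might read that variable, here is a sketch in Python (the parse_blaster helper is hypothetical, for illustration only, and is not taken from any real driver):

```python
def parse_blaster(value):
    """Split a BLASTER-style setting string into letter/value pairs.

    Hypothetical helper: each whitespace-separated token starts with a
    letter (A, I, D, T, ...) followed by that setting's value.
    """
    return {token[0]: token[1:] for token in value.split()}

settings = parse_blaster("A220 I5 D1 T1")
print(settings)                 # {'A': '220', 'I': '5', 'D': '1', 'T': '1'}
print(int(settings["A"], 16))   # 544: the I/O port "220" is hexadecimal (0x220)
```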

Some info may be at: ZSNES documentation: DOS sound.

Data storage
[#fixeddsk]: Fixed disks

The term “fixed” refers to the idea that the drives have typically been installed and then not moved frequently. The term “fixed disk” is meant to be a concept in contrast to a drive that uses “removable” media.

[#hdd]: Hard drive (a.k.a. “Hard Disk”, less commonly “Hard Disk Drive”, quite commonly abbreviated “HDD”, sometimes abbreviated “HD” (though the latter abbreviation may often refer to “High-Def”))

The term often refers, mostly correctly (in traditional computers), to the primary storage system on a computer. (The term has often been used, very incorrectly, by laypeople to refer to the computer case/tower that houses the hard drive. Such mis-terminology can easily cause a bit of confusion to people more familiar with correct terms.) Perhaps this confusion came from people explaining that saving a file to the “hard drive” means that it is then stored in the main case/tower.

The primary features that most people know about hard drives are size and speed. There may be others, such as TLER and hard drive locking (which is a term not meant to refer to the older characteristic of hard drive parking).


An annualized failure rate may be a more telltale metric than MTBF. More information may be in the “disk failures” section of disk checking.
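As a rough illustration of how the two metrics can relate, assuming a constant failure rate (an exponential failure model; the sample MTBF figure below is made up):

```python
import math

mtbf_hours = 1_200_000    # example manufacturer MTBF claim (made-up figure)
hours_per_year = 8766     # average hours in a year (365.25 days)

# Under a constant-failure-rate assumption, the annualized failure
# rate (AFR) follows from the exponential survival function:
afr = 1 - math.exp(-hours_per_year / mtbf_hours)
print(f"{afr:.2%}")       # roughly 0.73%
```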

[#hdderc]: Limiting time for self-recovery

Wikipedia's discussion about Desktop edition drives and RAID (Enterprise) Edition (“RE”) drives provides a nice write-up about a difference: The RAID edition drives may use “time-limited error recovery” (“TLER”) so that such a hard drive does not enter a “deep recovery cycle”.

TLER may limit a drive's self-recoverability attempt to a short period of time, like seven seconds, realizing that RAID hardware may give up on a drive, and drop the drive from the RAID array, if the drive does not respond within a short amount of time such as seven to fifteen seconds. A deep recovery cycle may cause a drive to take longer, perhaps even minutes, in an attempt to report correct data.

The feature to limit recovery time may have different names. Seagate drives may support “Error Recovery Control” (“ERC”). Western Digital may have a feature called “Time Limited Error Recovery” (“TLER”). Hitachi may support “Command Completion Time Limit” (“CCTL”).

WD notes that TLER may not be disabled. smartctl -l seterc /dev/sd0a may show current values (even if using WD's TLER rather than Seagate's ERC), while additional comma-separated parameters may set the values. [H]ard|Forum website: a post about hard drives and features seems to indicate that at least one setting on at least one type of drive might not be settable with newer hard drive firmware versions, so YMMV.

Active hard drive protection
Wikipedia's article for “Active hard drive protection” discusses technology for a drive to detect physical acceleration, which might indicate the hard drive is in a falling object which will soon experience a bump. Various companies have provided various names for implementing such a feature.

Wikipedia's discussion about Desktop edition drives and RAID (Enterprise) Edition (“RE”) drives discusses TLER and notes, “WD Caviar Black, Caviar Green, and Caviar Blue hard drives are not recommended for and are not warranted for use in RAID environments utilizing Enterprise HBAs and/or expanders and in multi-bay chassis, as they are not designed for, nor tested in, these specific types of RAID applications. For all Business Critical RAID applications, please consider WD's Enterprise Hard Drives that are specifically designed with RAID-specific, time-limited error recovery (TLER), are tested extensively in 24x7 RAID applications, and include features like enhanced RAFF technology and thermal extended burn-in testing.”

There are claims that drives labelled as Enterprise-grade may simply mean that the drives come from a batch (of manufactured hard drives) that went through some additional testing to indicate it was made successfully, and that there is really very little if any actual increase in drive reliability.

Wikipedia article for “hard disk drive”: section titled “Failures and metrics” noted, “Typically enterprise drives (all enterprise drives, including SCSI, SAS, enterprise SATA, and FC) experience between 0.70%-0.78% annual failure rates from the total installed drives.” (The quoted text was followed with a “citation needed” tag.)

Hard drive parking

Load Cycle Count is noted by [H]ard|Forum website: a post about hard drives and features. A WordPress article says “a HDD can do this 600000” (times?). The Wordpress blog entry states, “Harddrive manufacturers seem to claim most harddrives can handle at least 600.000 Load_Cycles but this is probably an average under ideal circumstances. My harddrive started to die slowly when at a Load_Cycle_Count of 200.000.”

Very old hard drives (pre-IDE?) may have often been parked, particularly by hobbyists, as a method to prolong drive lifespan by reducing the likelihood of a “head crash”. This may have had more merit with older hard drives, particularly those using the MFM and RLL standards that pre-dated ATA/ATAPI/IDE.

Perhaps see: Hard Disk Drive Myths (page 3)

Compatibility: Size limits

One size limit is going to be the stated capacity of the drive. However, there can also be additional limits. The most famous/painful are some of the limits that have been imposed by startup code (BIOS), or perhaps by a hard drive controller. A lot of these affected people when IDE hard drives were more common, involving the 127.5 GB limit and smaller limits. There are multiple websites that document several limits, including PCGuide page on hard drive limits (see the left frame of the page), and another site offering information on Hard Drive Size Barriers, In Depth. Some more details about bytes in a BIOS are provided by a page on BIOS IDE Harddisk Limitations. Some of the most famous ones have been:

[#hddhpa]: Artificial limits: HPA

Hard drives may support a “host”/“hidden” protected area (“HPA”), which is typically not supported by most standard disk utilities. This can allow for some “hidden” data. This has been used by some hardware to keep certain data, such as software used to try to fix hardware, from being overwritten by operating system installation programs. It could also increase compatibility: a 160GB drive with a 32GB HPA might look like a 128GB drive, and then the first 128GB may work fully on a computer that has LBA-28 limitations that cause problems when the computer accesses a drive with more than 128GB of capacity. Wikipedia's article on “Host protected area”: “Use” section notes, “Some vendor-specific external drive enclosures” ... “are known to use HPA to limit the capacity of unknown replacement hard drives installed into the enclosure. When this occurs, the drive may appear to be limited in size (e.g. 128 GB), which can look like a BIOS or dynamic drive overlay (DDO) problem.” (Hyperlink added to quoted text.) See also: discussion on drives misreporting size.

Whether an HPA is used, and the size of it, may be easily configurable for people comfortable with using Linux. skrilnetz.net (“Taming the Penguin”) - “The Truth About” “How to Securely Erase a Solid State Drive” mentions how to use variations of “hdparm -N”. (Namely, be sure to use p if you want a permanent change that survives a reboot.)

The 2TB limit

The MBR disk structure caused a limit of 2TB. The need to be able to support larger drives came out around the same time as CPUs migrating from 32-bit to 64-bit. Traditional compatible BIOS code was often being excluded from newer computers which instead relied on (U)EFI. (For further commentary on BIOS/(U)EFI, see system startup.)

The LBA28 limit (127.5 GB)

LBA-28 (28-bit LBA) supported up to 267,386,880 sectors. With the sector size still being the standard half-kilobyte, this led to a maximum size of 127.5 GB (130,560 MB = 133,693,440 KB = 136,902,082,560 bytes). The number of sectors, which is 267,386,880, comes from multiplying 65,536 cylinders x 16 heads/tracks x 255 sectors per track. This is not the exact same as the number reached by raising 2 to the 28th power (268,435,456 sectors), because only 255 sectors per track (not 256) are commonly supported.
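Those figures can be verified with a little Python:

```python
# LBA-28 limit: cylinders * heads/tracks * sectors per track
sectors = 65_536 * 16 * 255
print(sectors)                     # = 267,386,880 sectors

bytes_total = sectors * 512        # half-kilobyte (512-byte) sectors
print(bytes_total)                 # = 136,902,082,560 bytes
print(bytes_total // 1024)         # = 133,693,440 KB
print(bytes_total // 1024**2)      # = 130,560 MB
print(bytes_total / 1024**3)       # = 127.5 GB

print(2**28)                       # = 268,435,456 (assumes 256 sectors/track)
```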

By sheer coincidence, this is very close to the maximum size represented by a file allocation table of 16,320 KB storing 4-byte representations of 32KB clusters, so this ends up being very close to ScanDisk's limit (FAT filesystem: FAT32 maximum size limit). That limit actually came from how information is stored in RAM, so the fact that it was so close to LBA-28's limit seems relatively coincidental. (Of course, many of these numbers are based on powers of two, so it's not too surprising when some similarities crop up.)

This got resolved by adding support for LBA-48 (48-bit LBA).

504 MB and 7.875GB limits

The 504 MB limit (1024 cylinders x 16 heads/tracks x 63 sectors per track x 1/2 KB per sector) came from a combination of various limits.

A workaround to the 504 MB limit was sometimes implemented, supporting a Bit Shift Translation that faked the number of cylinders and heads. Doing this allowed addressing up to a maximum of 7.875 GB (8,064 MB), which came from the following math: 1,024 cylinders x 256 heads/tracks x 63 sectors per track x 1/2 KB per sector.
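Both figures can be re-derived in Python:

```python
KB = 1024

# The original combination of limits:
limit_bytes = 1024 * 16 * 63 * 512       # cylinders * heads * sectors * bytes
print(limit_bytes // KB**2)              # = 504 MB

# With Bit Shift Translation faking up to 256 heads:
translated_bytes = 1024 * 256 * 63 * 512
print(translated_bytes // KB**2)         # = 8,064 MB
print(translated_bytes / KB**3)          # = 7.875 GB
```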

That limit was resolved by enhancing code that could be called using the BIOS address of the twentieth interrupt, which is interrupt nineteen when using a zero-based decimal count, and written out as 13 when using the hexadecimal notation that is more common when discussing BIOS interrupts. So, these became known as the INT13h extensions, and they provided a feature known as “Logical Block Addressing”.

Disk layouts: partition size mentions limits can also come from the various filesystems that may be used.

Actual size (Hidden Space)

Maumee River's response to Russel's SuperUser.com question refers to the HPA (“Host Protected Area” or “Hidden Protected Area”) or DCO. Maumee River's post indicated that new drives with bad sectors may simply have those sectors marked as bad, effectively hidden into the HPA. Wikipedia's page for “Host protected area” notes, “Computer manufacturers may use the area to contain a preloaded OS for install and recovery purposes (instead of providing DVD or CD media).” Wikipedia's article on “Device configuration overlay” says, “Usually when information is stored in either the DCO or host protected area (HPA), it is not accessible by the BIOS, OS, or the user. However, certain tools can be used” to access these sections of disk space.

[#solsttdv]: Solid State Drive (“SSD”)

Faster, more durable (to impacts/shaking), and physically smaller than hard drives. When solid state drives were first introduced, these were pricier and had smaller capacity than hard drives.

There has been some discussion about SSDs having “write fatigue”, similar to flash drives. (For further details about that, read the section about USB drives.)

[#rmvbldrv]: Removable drive/media

Some media that plugs into an externally accessible connector might be treated more like a fixed disk. This may not matter, but if there is any impact between whether a device is treated like a fixed disk or a removable media, it is generally better if the device is treated as removable media. (This might require making changes such as altering software settings, or perhaps even plugging a drive into a different connector.)

[#usbdrive]: USB

Many USB-enabled devices are identified, by many operating systems, as being removable storage media. Some devices are made solely to act as a storage device, including memory card readers and USB memory sticks, which (like memory cards) have sometimes been called “flash memory”. Imaging devices (digital cameras, scanners, and even printers) may often have support for being treated as USB media.

It would not seem surprising if many USB-based methods of storage media had characteristics similar to “flash memory” (such as “write fatigue”).

With Microsoft Windows, drivers may sometimes be installed when the drive is plugged in. (Further information may be available in the section about Automatic installation of Windows drivers for USB devices.)

(For more information, see the (very next section, which is the) section about “flash memory”.)

[#flashmem]: Flash memory

Flash memory has often been implemented as a USB drive, including flash memory that is permanently part of a USB stick, and situations where removable “flash memory” is inserted into a reader that gets plugged into USB.


OpenBSD FAQ on Disk Setup: section about using Flash memory as bootable storage (FAQ 14.17.2) has a section called “Write fatigue”, which says, “Much has been written about the finite number of times an individual flash cell can be rewritten before failure.” People have had the same sort of discussions about SSD drives. The FAQ states, “Most users with most” such “devices will not have to worry about "write fatigue". You would probably experience more down time due to failure of "clever" tricks done to avoid writing to the” ... “drive than you will by just using the drives as read-write media.”

That said, the disks might benefit by not participating in non-required disk intensive tasks that involve heavy writing to a disk, most notably “defragmentation” (or other such “disk optimizing”). However, when trying to do much more advanced things, like altering what drive is storing the main data, consider the advice of the previous paragraph.

Optical disc

(This section is about newer format(s) that have been discussed on the world wide web. After this section, there are details about other, already existing formats.)

Wikipedia: “Optical disc” article, “History” section contains sub-sections such as Wikipedia: “Optical disc” article, “Fourth-generation” section.

TheInquirer page about Archival Disc says, “At launch, a single Archival Disc will hold 300GB of data, but the alliance, both of which participated in the marketing of Blu-ray, have committed to perfect the format with a view to eventually supporting 1TB of storage though implementation of multi-layering, increased data density and” other technologies/improvements.

Wikipedia's article on “Blu-Ray Disc”: “Ongoing development” section describes some of the results of efforts to make discs with a higher capacity than the standard Blu-ray discs. (There is some further discussion in the upcoming section about UHD Blu-ray.)

[#bluray]: Blu-ray

A Blu-ray Disc (“BD”, or sometimes “BRD”) stores more data than a DVD.


Hugh's News: BD FAQ: Disc Capacity provides some specific figures.

The figures here do not represent the overhead of a filesystem. (So, when a filesystem such as UDF is used, the capacity for a user's data is less.)

A common capacity for Blu-Ray discs is 25,025,314,816 bytes which is 23,866 MB (about 23.30 GB). That can be calculated by multiplying 381,856 times the size of a 64KB (65,536 byte) cluster. (Each such cluster contains 32 of the 2KB sectors.)

Yet the same site also mentions a size of 25,025,315,816 bytes (which is 1,000 bytes higher).

The site offers similar figures for other formats.
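The 25,025,314,816-byte figure, and its rounded forms, can be re-derived in Python:

```python
clusters = 381_856
cluster_size = 64 * 1024      # a 64KB cluster holds 32 of the 2KB sectors

capacity_bytes = clusters * cluster_size
print(capacity_bytes)                  # = 25,025,314,816 bytes
print(capacity_bytes // 1024**2)       # = 23,866 MB
print(capacity_bytes / 1024**3)        # = about 23.30 GB
```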

Wikipedia's article for Blu-Ray: section called “Physical media” also starts out with a chart showing some capacities.

The term Blu-ray was used as a marketing term, and ended up describing multiple technical formats over the years. These include:


Wikipedia's article on Blu-Ray describes LTH (Low-to-High) as a newer process that enables manufacturers to make discs for a lower cost, but these lower-cost discs don't work with some Blu-Ray players and, perhaps more critically, France “conducted a study” which “indicated that the overall quality of LTH discs is worse than HTL discs.” So, for maximum compatibility and better quality, go for HTL (High-to-Low).

[#uhdblray]: 4K Ultra High Definition (“Ultra HD”/“UHD”) Blu-ray

Supports greater resolution. The support for “High Dynamic Range” (“HDR”) refers to an “expanded color range” which “significantly expands the range between the brightest and darkest elements” (quoting Wikipedia's article for Blu-ray (last update from April 2016) (“Ongoing development” section)).

Blu-ray 4K is officially called Ultra HD Blu-ray - major new details on the spec extension from the Blu-ray Disc Association notes that, at the CES 2015 (convention), a format called “Ultra HD Blu-ray” was announced which will come with 66GB, with the possibility of a triple-layer disc that gets things up to a 100GB format. That's not much larger than the 50 GB dual-layer discs introduced in 2006; at least, not in comparison to some of the larger sizes that have been worked on. For instance, Digitimes.com: Pioneer showcases optical disc details announced in 2008 that some target capacities included having 400 GB by 2008-2010, re-writable versions of that by 2010-2012, and a 1TB disc in 2013. In comparison, TweakTown.com: Blu-ray format successor officially called 'Ultra HD Blu-ray' says, “It's expected that Ultra HD Blu-ray will hit the market early in 2016.” So the 66 GB format, that people are expecting, does not offer nearly the capacity of the Archival Disc.

3D Blu-ray
3D Blu-ray was not initially supported by the Playstation 3, which was released in 2006 with support for the original Blu-ray standard. However, the Playstation 3 did add support for 3D Blu-ray (starting September 21, 2010, per Wikipedia's article for Blu-Ray: “Blu-ray 3D” section).
(Conventional) Blu-ray
This was the format that competed with HD-DVD, and was initially supported by the Playstation 3.

Panasonic: Blu-Ray: Archive Grade describes “Highly reliable discs featuring a 50-year archival life”. (These appear to be BD-Rs, as the page mentions they can be written to once.)

HD-DVD
HD-DVD didn't really catch on significantly. Ultimately, it goes down in history as a “failed format”: its goal was to become the next widely supported format, but it faced competition from another format called Blu-Ray, and eventually the producers of HD-DVD stopped competing with Blu-ray.

DVD Demystified: FAQ 1.1: What is DVD? has a subsection (1.1.1), “What do the letters DVD stand for?” It acknowledges various meanings (including usage by the “DVD Forum”), and concludes that there is no official meaning.

Some DVD-R discs by Sony have been labelled “120min/4.7GB”. However, those who treat the word “gigabyte” as using the “binary-based” numbers, rather than SI-notation, may be a bit disappointed to find available space is less than the famed “4.7 GB”. Wikipedia article on “DVD”, section called “Capacity” contains multiple charts, including one whose data is noted here:

Type                 Sectors (2 KB)   Bytes           Megabytes
DVD-R Single-Layer   2,298,496        4,707,319,808   4,489 1/4
DVD+R Single-Layer   2,295,104        4,700,372,992   4,482 5/8
DVD-R Dual-Layer     4,171,712        8,543,666,176   8,147 7/8
DVD+R Dual-Layer     4,173,824        8,547,991,552   8,152

That places single-layer DVD capacity at between 4.375 and 4.385 gigabytes, and dual-layer at between 7.95 and 7.965 GB. (Those somewhat-rounded figures were reached by dividing the precise megabytes by 1024.)
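As a quick sanity check, the byte and gigabyte figures follow directly from the sector counts (a small Python sketch; each DVD sector holds 2,048 bytes of user data):

```python
# Sanity check of the DVD capacity chart above.
# Sector counts come from the quoted Wikipedia chart; each sector is 2,048 bytes.
SECTOR_BYTES = 2048

discs = {
    "DVD-R Single-Layer": 2_298_496,
    "DVD+R Single-Layer": 2_295_104,
    "DVD-R Dual-Layer":   4_171_712,
    "DVD+R Dual-Layer":   4_173_824,
}

for name, sectors in discs.items():
    size = sectors * SECTOR_BYTES
    print(f"{name}: {size:,} bytes"
          f" = {size / 1024**2:,.3f} MiB"
          f" = {size / 1024**3:.3f} GiB")
```

Running this shows single-layer discs landing near 4.38 binary gigabytes, which is why a disc marketed as “4.7 GB” (decimal) shows less space in tools that count in binary units.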

Brendan Kidwell's SuperUser.com Question and Answer about DVD capacity notes that there may be filesystem overhead that may reduce that capacity a bit. (Disk image filesystems mentions some details about supported filesystems.)

Naturally, there may be other limits along the way. Most DVD players should support ISO 9660, but when doing so, some limits may exist. Wikipedia article on ISO 9660: section on disc image size limits of 2 GB and 4 GB notes limits of one byte under 2GB and one byte under 4 GB. The exact limits encountered may vary by operating system. One work-around may be to utilize DVD's other officially-supported filesystem, the Universal Disk Format (“UDF”). (Disk image filesystems has information on this format.) Naturally, that won't necessarily resolve all possible problems, such as some software being unable to process data files that exceed certain sizes. (A single 4.7 GB file may not be handled well by software that expects file location pointers to fit in a 32-bit number.)
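Those “one byte under” figures are what 32-bit file-size fields allow; this sketch simply shows where the two numbers come from (assuming a signed interpretation for the 2 GB case and an unsigned one for the 4 GB case):

```python
# Maximum file sizes representable in 32-bit size fields.
signed_max = 2**31 - 1     # one byte under 2 GiB (signed 32-bit)
unsigned_max = 2**32 - 1   # one byte under 4 GiB (unsigned 32-bit)
print(f"{signed_max:,} bytes, {unsigned_max:,} bytes")
```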


e.g. standard CDs (frequently 80 minutes, although 74 minutes may be a slightly more compatible standard), and mini CDs (8 cm discs)

The most compatible limit for CDs may be 650MB (actually 333,000 sectors times 2 KB = 681,984,000 bytes, which is just over 650MB), although support for 800MB discs may be pretty common.

CDs store uncompressed audio into 2,352 byte sectors. (As noted by Wikipedia's article on “Compact Disc Digital Audio”, “Digital Audio Extraction” (“DAE”) technology used a size of 2,352 bytes. That size comes from “98 channel-data frames” times “two bytes x two channels x six samples”. Abbreviating that slightly, 98x24 = 2,352.) Wikipedia's article on “CD-ROM”, section titled “CD-ROM format” states, “Like audio CDs (CD-DA), a CD-ROM sector contains 2,352 bytes of user data, composed of 98 frames, each consisting of 33-bytes (24 bytes for the user data, 8 bytes for error correction, and 1 byte for the subcode).” Mode 1 stores 2KB of user data per sector, while Mode 2 gets 2,336 bytes per sector. Besides Mode 2, there is also CD-ROM XA “Mode 2 Form 1” and CD-ROM XA “Mode 2 Form 2”.

Later, Wikipedia's article on “CD-ROM” states, “On a 74-minute CD-R, it is possible to fit larger disc images using raw mode, up to 333,000 x 2,352 = 783,216,000 bytes (~747 MiB). This is the upper limit for raw images created on a 74 min or ≈650 MiB Red Book CD. The 14.8% increase is due to the discarding of error correction data.” That's the 74-minute variation; 80-minute variations are also common. Wikipedia's article on “CD-ROM”, section titled “Capacity” states that an 80-minute disc “can actually hold about 737 MB (703 MiB) of data with error correction (or 847 MB total).” That same web page section notes that “90 and 99 minute discs are not standard”.
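The quoted capacity arithmetic checks out; a sketch for the 74-minute (333,000-sector) case:

```python
# CD capacity arithmetic for a 74-minute disc (333,000 sectors).
SECTORS = 333_000
mode1 = SECTORS * 2048   # Mode 1 user data per sector
raw = SECTORS * 2352     # raw sector size (error correction discarded)

print(f"Mode 1: {mode1:,} bytes = {mode1 / 1024**2:.1f} MiB")
print(f"Raw:    {raw:,} bytes = {raw / 1024**2:.1f} MiB")
print(f"Raw gain: {raw / mode1 - 1:.1%}")
```

The last line reproduces the 14.8% figure quoted from Wikipedia (2,352 / 2,048 ≈ 1.148).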

[#floppy]: Floppy disks

There are various sizes of floppy disks. They are all considered obsolete, having three notable disadvantages compared to USB-based memory sticks: reliability, capacity, and speed. It is generally recommended not to store data on floppy disks. For those who care about data (which is generally why information is stored on a data storage device), the most important shortcoming is reliability. Floppy disks could be fairly unreliable media, and some floppy drives have been known to affect floppy disks so that they stop working fairly quickly as the floppies continue to be used (by the same drive or even other drives).

Using floppy disks causes wear and tear that leads to data unreliability, with increased usage leading to disks wearing out sooner. The wear may be even more pronounced when a disk is used by multiple different drives. This unreliability could happen even with careful use, and the disks themselves were also pretty fragile.

Even with working drives, though, the media may have some limits to its “shelf life”. Exact numbers vary; many people consider floppy disks to not be suitable for archival storage. AtariMagazines.com article on floppy disk handling says:


As long as a disk is stored in the proper temperature range (50°F to 125°F) and humidity range (8% to 8%), the shelf life is 30 years (according to Verbatim) or "practically forever' (according to Maxell). As we all know, in personal computing, 30 years is practically forever. Magnetic tapes are susceptible to the oxide flaking off after a period of time; disks do not have this problem, and the magnetic life is virtually infinite.

[sec: the humidity range there looks wrong; perhaps was meant to be -8% to 8%? Also, “practically forever” was not symmetrically quoted.]

It is recommended that any data which is on a floppy disk, but which may still be desired, get copied on another format. USB memory sticks may sometimes be fairly unreliable by today's standards, but they are probably more likely to be reliable than floppy disks.

Another significant factor is the maximum capacity, which was most commonly 2MB or smaller. They were also unpopular because the drives were typically pretty slow (much slower than optical drives).

[#ninetymm]: 90mm “Three and a half inch” floppy

Have you ever heard of a 1.44MB “3-and-a-half-inch” floppy disk? If so, then you've heard two mistakes!

Ever try to measure one of these things? They are bigger than three and a half inches, and are not square, according to the “There is no such thing as a 3.5 inch floppy” article. The article notes the dimensions are 90mm (9cm) x 94mm (and 3.3mm thick), and so calls the term “3.5 inch floppy” a misnomer.

Then again, 1.44MB never matched reality either. (This is discussed more in the section about high density floppy disks.)

Even standard “3.5 inch” floppy disks, which max out at 2.0MB unformatted when used by a standard HD (high-density) drive, could hold more data if the magnetic material was handled in a way that left the data incompatible with standard drives.

[#fd32mb] FD32MB / FD-32MB

FD32MB (sometimes written as “FD-32MB”) technology could format such disks up to 32MB (according to various sources, including Forum post with attached data). This technology was supported by (at least some) LS-240 drives.

Wayback Machine @ Archive.org's cache of an article on MacWorld, about the then-upcoming FD32MB format says, “Matsushita has developed a technology that takes a conventional floppy with 80 circumference-shaped tracks and increased that number to 777.” To be clear, the article mentions the drive will support “high-density (HD) diskettes, but the new drive will re-write the 1.44MB format and store up to 32MB using Matsushita's new "FD32MB" technology.” To elaborate further (than what is likely necessary), “Instead of the tracks on a floppy being 187.5 microns, the new technology reformats the diskette with a track pitch of only 18.8 microns.” (Everything2.com web page about FD32MB's reference to “87.5 microns” seems to be missing the initial digit.) Slashdot Article about 32MB on a floppy noted “Matsushita's FD32MB system employs zone bit recording”. The MacWorld article goes on to say, “The new technology increases the number of sectors per track to between 36-53 sectors, compared with its current number of 18 sectors, and its memory capacity per track can be raised from 9.2KB-18.4KB to 27KB.”

(The Slashdot article referenced in the previous paragraph cites Wayback Machine @ Archive.org's cache of an article on PC Market. That PC Market thread quotes the MacWorld article that may be read in a hyperlink provided in the prior paragraph.)
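As a rough consistency check on the numbers quoted above, the ratio of track pitches should roughly match the ratio of track counts:

```python
# Consistency check on the quoted FD32MB figures.
standard_pitch_um = 187.5  # track pitch of a standard HD floppy (quoted)
fd32mb_pitch_um = 18.8     # FD32MB track pitch (quoted)
standard_tracks = 80
fd32mb_tracks = 777

print(f"pitch ratio: {standard_pitch_um / fd32mb_pitch_um:.2f}x")
print(f"track ratio: {fd32mb_tracks / standard_tracks:.2f}x")
```

Both ratios land near 10x, which is consistent: packing tracks roughly ten times more densely yields roughly ten times as many of them.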

“Extra High Density” Floppy Disks

The 4MB disk, which had the same physical shape as the 90x94mm “3.5 inch” “1.44MB” disks, could be formatted to a “2.88MB format”.

These disks, and the drives for them, were exceedingly rare. By the time that the Extra High Density drives were released, CD-ROMs started becoming more common. Newer computers were typically sold with the much more common “High Density” drives that were limited to 2.0MB unformatted (and often called “1.44MB”).

However, what was common was BIOS support for these drives. Therefore, many people heard of the legendary “2.88MB disks” thanks to seeing a reference to them in an option in a BIOS. This BIOS support ended up being quite nice, because people who used the “El Torito” standard for creating bootable CDs aimed to use a disk image which was one of the standard sizes supported by the BIOS. Since so many BIOS boot processes supported the 2,880 KB format, creators of such disk images had much more flexibility than if they had only 1,440 KB available for that task.
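For reference, the standard floppy image sizes accepted for El Torito floppy emulation (1,200 KB, 1,440 KB, and 2,880 KB, using 1,024-byte kilobytes) work out to these byte counts:

```python
# Byte sizes of the standard El Torito floppy-emulation image sizes.
KB = 1024
for size_kb in (1200, 1440, 2880):
    print(f"{size_kb} KB image = {size_kb * KB:,} bytes")
```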

[#hdfdd]: “High Density” Floppy Disks

Sometimes erroneously referred to as a 1.44MB floppy, these disks were sold with 2.0 MB of unformatted capacity. “There is no such thing as a 1.44MB standard format floppy” article discusses this. The formatted capacity of 1,474,560 bytes is 1,440 KB (using 1,024 bytes per kilobyte), which is less than 1.44 MB (since 1,474,560 is less than 1.44 times 1,024 times 1,024, which would be 1,509,949.44). So if you want to look at millions of bytes, it is over 1.47 million. If you want to look at binary-based numbers, it is 1.40625 MB. The only way to reach 1.44 is to use binary-based kilobytes of 1,024 bytes, and then flip to decimal-based numbers half-way through. Worse than a “binary megabyte”/“mebibyte” or a number using just SI-recognized prefixes, these “megabytes” are half one thing, and half the other. Yuck!
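The arithmetic criticized above, spelled out (assuming the standard HD geometry of 2 sides x 80 tracks x 18 sectors x 512 bytes per sector):

```python
# How the same 1,474,560 bytes yields several different "sizes".
size = 2 * 80 * 18 * 512   # sides x tracks x sectors x bytes per sector
assert size == 1_474_560

print(size / 1024)         # 1440.0   binary kilobytes
print(size / 1024**2)      # 1.40625  MiB ("binary megabytes")
print(size / 1000**2)      # 1.47456  decimal megabytes
print(size / 1024 / 1000)  # 1.44     the mixed-unit "megabyte"
```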

Windows Vista's “format /?” help text still makes a reference to “1.44”. However, Microsoft KB Q121839 (MS KB 121839) is an article acknowledging the actual size, and even says, “Note that in Windows 95, the properties for a blank, formatted 3.5-inch 1.44-MB disk show that there are 1.38 MB of free disk space.”


Some drives might be designed to use only half of a disk's storage capacity unless the disk is flipped over. This was never done with the “three and a half inch” floppy format, but older formats were sometimes supported in this fashion. There's really no difference at all between the physical disks: the difference just had to do with how the drives handled the disk. The Apple //e's double-density 5.25" floppy drives did this; the x86 platform's drives did not.

Imation's SuperDisk brand name was applied to the LS-120 format, and later the LS-240 format. These “SuperDisk” drives could use media designed for these drives, which were 120MB (for the LS-120 format) and 240MB (for the LS-240 format). The drives could also use standard floppy disks, and could even format them at a much higher capacity.

Drives designed to do this could get 1,250 tracks per inch using the same physical material that provided just 135 tracks per inch with standard drives (according to Wikipedia's article on Floptical). This might sound impressive until one considers the even denser implementation of FD32MB.


Some data storage methods are typically much more reliable than others.

Linus Torvalds, creator of the Linux kernel, once wrote a Usenet post about data storage where he described a situation where he thought a data storage device (a “hard drive”) might malfunction. He used a second FTP site and stated:

(Only wimps use tape backup: _real_ men just upload their important stuff on ftp, and let the rest of the world mirror it ;)

To translate that a bit for people who are less computer-savvy: he is saying that people like him place their data on public servers and allow the rest of the world to make multiple copies of the data.

Media for long term storage

Don't rely on floppy disks, and don't count on flash memory for archival storage either. Forbes article on keeping data states, “The JEDEC JESD218A endurance specification states that if flash power off temperature is at 25 degrees C then retention is 101 weeks; that isn't quite 2 years. So it appears conventional flash memory may not have good media archive life and should only be used for storing transitory data.”

Multiple places (including a SuperUser.com comment and The Rosetta Project's Very Long-Term Backup) have noted that using certain types of ink on non-acidic paper can hold up for a very long time, including thousands of years. The Rosetta Project's Very Long-Term Backup sought to use gold and silicon on a disk that was especially created for the purpose. However, “You need a 750-power optical microscope to read the pages.” That just isn't very practical. Writing on paper may be, but most paper is far more acidic than what is recommended. Also, although human-readable handwriting is relatively easy to create, there are other ways to store data that are far more efficient in the amount of space, materials, and time used per bit.

So, what's a bit more practical?

Using optical media

This guide isn't really trying to recommend using discs that are (optically) read with reflections of lasers, but it is trying to share some information that has been written about that topic.

According to Google's summary showing some cached contents, Verbatim's page about Archival grade gold DVD-R discs stated “Storage digital media for 100 years with Gold archival DVD-R discs.” The quoted page by Verbatim has stated, “In proper environmental conditions, these discs are designed to last as long as 100 years.” LinuxTECH.NET: Best Reliable Long-term Data Storage Media cited “a thorough long-term stress test by the well regarded German c't magazine (c't 16/2008, pages 116-123). According to that test, the Verbatim Gold Archival DVD-R has a minimum durability of 18 years and an average durability of 32 to 127 years (at 25C, 50% humidity). No other disc came anywhere close to these values, the second best DVD-R had a minimum durability of only 5 years.”

Facebook's Giovanni Coglitore stated, as quoted by ArsTechnica: Why Facebook thinks Blu-ray discs are perfect for the data center, “Each disc is certified for 50 years of operation; you can actually get some discs that are certified for 1,000 years of reliability,”

ComputerWorld: New Blu-ray Disc offers 'lifetime of storage' and 1000 year DVD discuss M-DISC, and another format is described by Wired: “Move over Blu-ray, the Archival Disc is here”. Sony and Panasonic unveil Archival Disc as Blu-ray successor also discusses Archival Disc.

A SuperUser answer may suggest doing what the big guys do: Facebook has expressed interest in Blu-ray; Google has used tape. BD-R info mentioned that TDK uses multiple layers.

Avoid Low-To-High Blu-ray discs. See also: Wikipedia: Optical media preservation, dvdisaster.


Processing Power
[#cpu]: Central Processing Unit (“CPU”)
[#detctcpu]: Detecting what kind of CPU is used
Microsoft Windows

The system info tab may report this. (Generally the fastest ways: hold the Windows key and press Pause/Break, or right-click on “My Computer” and select Properties. Otherwise, go to Control Panel and select “System”.)

See also the section about detecting hardware for some software which may detect CPU information, and often other types of hardware as well.
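For a quick scripted look at basic CPU identification, Python's standard platform module works across operating systems (a sketch; the reported strings vary by OS and may be empty on some systems):

```python
import platform

# Basic, cross-platform CPU identification from the standard library.
print("machine:  ", platform.machine())    # e.g. "x86_64", "AMD64", "arm64"
print("processor:", platform.processor())  # free-form CPU description, if any
print("system:   ", platform.system())
```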

More notes about downloadable software


psinfo may provide the processor type. In general this will show less info than systeminfo, although some operating systems that psinfo runs on may not have systeminfo (true???), and psinfo may support remote systems using RDP.

Other software that detects low level hardware details may provide more details, such as the software in the #envctrlm(huh? bad hyperlink?) section.

Some of the software listed in Wikipedia's article on “System profiler” may also help.

Wikipedia's article on CPUID: “External links” section

Wikipedia's article on “System monitor” might help as well.


Software such as CPU-Z (at cpuid.com; this uses CPUID according to Wikipedia's article on CPUID: “See also” section) may provide info.

WCPUID ( H.Oda!'s Home Page: Download page (WCPUID and more)) also references an X port called XCPUID.


See “ dmesg | grep -i cpu ”.

Also, uname -v may list the kernel version which might provide some insight.

Perhaps see xcpuid (without much help from H.Oda!'s Home Page: Download page (WCPUID and more) which says “Members only” for xcpu* although the Win32 software is available).

Perhaps also see Wikipedia's web page about system profiler: section about Linux

Tracking an individual processor
Wikipedia's article about “Pentium III”: section entitled “Controversy about privacy issues”
CPU usage

To determine details about what is using the CPU, and/or controlling some of those details, see the section about CPU usage.

Maxing CPU usage

If the goal is to perform hardware testing, see hardware testing. There may be some software which can simultaneously put some stress on the CPU, and also perform tests on some other hardware at the same time. That may be more efficient than just performing one type of test at a time.

For any other reason, or simply for more information, see: Maximizing CPU usage.

[#cpuemu]: Emulating a CPU

(Note: This section was based on some old notes. (It may not be very cleaned-up at the time of this writing.) However, this section is still being provided as a possibly useful resource, in case this possibly unpolished section is still of use to somebody anyway.)

For the 386 emulator that runs on a 286, see TOOGAM's software archive: Drivers: CPU. Also, there is software that tries to maximize usage of a CPU. (See also testing software, like the Breakin suite, and other things which may do other tests like maybe a video card tester???)

Note: there may be information about emulation/compatibility: see sections about virtualization and making compatibility.

Floating point processing unit (“FPU”)

The floating point processor was, at one time, a separate chip. For example, an 80387 coprocessor could be placed alongside an Intel 80386 in supporting systems. That would allow much faster work to be done with numbers that were not integers.

Shortly after such floating point processing units started to be used, their functionality started to be integrated into the main CPU. Now, they are not typically separate hardware. References to an FPU might really just be a reference to a CPU's specialized support of handling operations that involve “floating point” numbers.

[#gpu]: Graphical Processing Unit (“GPU”)
(This should be covered in more details by the section about video card circuitry.) (The hyperlink to this section may move to the video output section. That probably should happen.)
[#physpu]: Physics Processing Unit (“PPU”)

There have been cards designed to help process virtual physics, in order to provide a superior experience for people interacting in real time with virtual worlds. Essentially, these specialized cards were meant for people playing computer games.

PhysX was a standard released by a company called Ageia Technologies (or AGEIA). That company was purchased by nVidia. Wikipedia's article for “Physics processing unit”: section about AGEIA PhysX notes, nVidia “announced that PhysX will also be available for some of their released graphics cards just by downloading some new drivers.” Since this sort of technology may be integrated into graphical processors, the section about video output may have details that help lead to successful use of this sort of PPU technology. (This approach seems to have more positive qualities overall, since bundling the physics functionality with video card circuitry avoids needing a separate card that uses up another slot on the motherboard.)

Wayback Machine @ Archive.org: CustomPC: nVidia offers PhysX support notes, nVidia “confirms its commitment to making PhysX an open standard for everyone”, stating that nVidia was “making PhysX a free API that's available to anyone.” The article was clearly announcing that the standard would be available for nVidia's competitor ATI (which has been purchased by AMD).

[#rdmacmem]: RAM (“random access memory”, frequently just called “memory”)

In the olden days, such as the late 1980s, the term “memory” was considered more ambiguous (perhaps at least among users of Apple ][ computer systems). The term may have referred to temporary storage such as RAM, or possibly to long-term storage such as data saved to a disk. By the time Microsoft Windows 95 was released, if not earlier, the term “memory” generally referred to RAM.

[#ramandsw]: RAM details specific to certain software (like an operating system)
Memory drivers

This software may provide support for using certain types of memory, or perhaps simply for using memory in specific ways: supporting a specific standard access method that software may recognize, providing features such as skipping over certain memory regions/segments (due to bad memory and/or some other reason, such as conflicts with certain implementations of hardware), or offering some sort of internal compression. In many cases, support for memory is simply built into the operating system, allowing straightforward memory usage by computer users who don't typically interact much with drivers.

Memory tends to be fully supported by whatever drivers are included in the operating system. In the past there have been some examples where some special software might be needed in order to access some specific memory: Some older equipment may have required special drivers to support the special add-on cards that provided additional memory. However, such convolution hasn't been a modern issue. With modern hardware, memory should typically be automatically detected and used to the best extent that the operating system will support.

A well-known example of how this didn't used to be the case is MS-DOS's drivers that added specific types of memory (called XMS and EMS), which were accessed differently from “conventional” memory (and HMA/UMA/whatever/etc. (details????)). Some replacement drivers have been released, which offer improvements (but in some cases may have additional requirements or somehow reduce compatibility): UMBPCI and HIRAM. Converting one type of special-access RAM (XMS) to another type of special-access RAM (EMS) was a design that seems fairly complicated.


e.g.: The badram patch for Linux is often not recommended for ongoing use. This is simply because it is recommended, for reliability, to use hardware that is 100% good, instead of something with identified defects.

[#hwmemdos]: Memory in DOS

Memory issues: Note: some of this information may be rather redundant with DOS memory issues/problems. That section may also have some similar information.

Some features, such as DPMS, DPMI, and Microsoft's EMM386 software, require the use of a CPU which is compatible with Intel's 80386 model of CPU, including an 80486, Pentium, and compatible processors from competitors.

Many MS-DOS programs required the usage of “conventional” memory. Some programs supported more memory, and may have required more memory. There were different strategies used for this memory. The main ones supported with the drivers that came bundled with MS-DOS were XMS and EMS.

MS Q 37242: A General Tutorial on the Various Forms of Memory
MS Q95555: Overview of Memory-Management Functionality in MS-DOS

XMS (“Extended Memory”)

XMS 3.0 Specification

[#himemsys]: HIMEM.SYS

XMS was most famously provided by a driver called HIMEM.SYS which needed to be loaded during CONFIG.SYS processing.

Q82712 (URL-???): HIMEM.SYS /EISA Switch

Documentation may be included with the operating system: see, e.g., CONFIG.TXT in Win98; the options may be similar to those of EMM386.EXE.

XMSMMGR.EXE could be run from the command line and provided the same functionality as HIMEM.SYS. This came bundled with some/all of the Microsoft Windows operating systems that used MS-DOS 7.x (Windows 95, 98, 98 SE, and ME). One problem experienced with this software (which seems very odd since everything else seemed to work as well as HIMEM.SYS, including providing the support needed to start the 32-bit Windows GUI) is that when XMSMMGR.EXE was used, Microsoft's MEM.EXE would freeze/crash the computer instead of displaying useful information. Another issue is that the program, like other *.EXE files, generally was not easy to run during the operating system's CONFIG.SYS processing, so any other software that had to be loaded during CONFIG.SYS processing and which required XMS, such as EMM386.EXE, wouldn't have the needed memory in time. This might (???) have been able to be worked around using techniques to run executables from the CONFIG.SYS. (Some further information about such software has been added to the section about processing the CONFIG.SYS file.)

HIRAM.EXE may be obtained at the “Links” section at the bottom of Uwe Sieber's UMB_PCI page (English), including source code. MDGx/AXCEL216's HIRAM.EXE section describes how to load DOS high and enable UMBs, then load UMBPCI.SYS to enable some memory (although not making it visible to DOS at this point), then to “make the UMA visible to DOS through a small XMS 2.0 handler” by loading HIRAM.EXE, and then using DEVICEHIGH to load HIMEM.SYS (note to old DOS experts: yes, that did say the seemingly absurd thing that it just said) so HIMEM.SYS is loaded into the high memory area.

EMS (“Expanded Memory”)

EMS may have been available on 8086 systems (noted by FreeDOS Wiki: Ponderings about EMM286) but more commonly was provided by a driver that used a system's memory to provide EMS that conformed to the LIM 4.0 specification. LIM comes from the initials of Lotus, Intel, and Microsoft, who cooperated in making the specification.


Uwe Sieber's UMB_PCI page (English) is also available in German.

This software has two functions: One is to enable the UMA (as noted by MDGx/AXCEL216's HIRAM.EXE section which shows how this function may be useful by itself) and the other is to provide EMS. This software does not switch the processor from standard mode into protected mode, which was the number one cause of incompatibilities when using EMM386.

This software does have some compatibility requirements of its own: namely, chipsets need to be specifically supported. The author's page says “Generally it seems that UMBPCI doesn't work on 486 class computers even” [though] “they have a PCI chipset. That's because their PCI BIOS doesn't support read and write access to the PCI configuration registers.” Newer versions tend to be larger, with the benefit being that more hardware chipsets are supported.

This software is free to obtain, although the author does have restrictions regarding redistribution. This was “based upon the source-code published by c't in 1995”. For the updated versions, “Source code (TASM 3.x) is available on request.”

EMM286 was available as a freeware download (for non-commercial use). For downloads/details, see TOOGAM's downloads (section on EMM286).

EMM386 was used to provide EMS and may have had some additional functionality, such as providing UMB support. In some versions of DR-DOS (including OpenDOS), EMM386 may have helped to provide support for multitasking (as buggy as the operating system's multitasking may have been).

Different filename extensions

Some operating systems (including MS-DOS 6) came with a program called EMM386.EXE while other releases were called EMM386.SYS. Both the *.EXE and *.SYS versions were generally designed to be loaded during system startup, by being loaded during processing of the \CONFIG.SYS file. Versions that ended with the *.EXE extension were able to interact with a previously-started copy. The initialization was not designed to work simply by having the command be run at the DOS command line.

MS-DOS's EMM386 requires XMS to be usable before EMM386 may be loaded from the \CONFIG.SYS. Although earlier EMM386 system files were called EMM386.SYS, later ones were named EMM386.EXE and could be executed from the command prompt in order to interact with the driver that was already loaded from the CONFIG.SYS. Running EMM386.EXE from the command prompt was rather useless (at least in MS-DOS) if it wasn't already initialized earlier during CONFIG.SYS processing.


(Admittedly, this information may benefit from some review/cleanup.)
In MS-DOS 6, one may run HELP EMM386.EXE. (Does this work with MS-DOS 5???)
(Does this work with Win9x releases???)
(Or could QBASIC.EXE/QHELP ???)
In Win9x, may also have info in CONFIG.TXT.
To get EMS, use parameters like
RAM 32768 (or a lesser value)
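A minimal CONFIG.SYS sketch showing those parameters in context (the C:\DOS paths are assumptions; adjust them to the actual install location). HIMEM.SYS is loaded first so that EMM386 has XMS available, as described above:

```
REM Load the XMS driver first; EMM386 requires XMS to already be usable.
DEVICE=C:\DOS\HIMEM.SYS
REM Provide EMS (and UMB support) with a 32MB (or smaller) page pool.
DEVICE=C:\DOS\EMM386.EXE RAM 32768
DOS=HIGH,UMB
```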

Other parameters that may help are:

Could gain some extra memory on some systems. However, might also cause the computer to freeze on other computer systems.
On most computers, the three finger salute would cause a software-initiated reboot. On some systems, perhaps especially after EMM386 is loaded (perhaps due to being flipped into protected mode?), that keyboard combination might have no impact or might freeze the system. On some computers with such problems, using ALTBOOT might fix the problem. However, at least one of those symptoms could occur on an even smaller number of systems when ALTBOOT is used unnecessarily.

Other (third party) memory managers known to work with EMS could include Quarterdeck's QEMM, 386Max, CEMM

Other memory areas
High memory area, upper memory area, upper memory blocks, etc. MS-DOS 6.x MemMaker, Quarterdeck's Optimize (modified system startup files)
Virtual Control Program Interface (“VCPI”)
Provided by EMS managers, this sort of memory was incompatible with environments including Windows “386 Enhanced Mode” and compatible environments. Wikipedia's page on VCPI says “VCPI runs programs in Ring 0, which defeated the purpose of x86 protection.” This wasn't widely used, and perhaps with good reason.

Some shareware has been released that combines the function of a RAM driver with disk cache / virtual memory / etc.?

RAM Compression

RAM Compression got a bad name for itself. e.g.: Wayback Machine @ Archive.org archive of Win95 FAQ: Part 9.7: Does RAM compression really work? (No.)

There has been plenty of reason for such widespread skepticism about RAM compression. However, the theory remains rather sound. And, apparently there actually was some pretty good RAM compression software for Mac OS (pre-OSX). The software may have had a lot of its positive impact by improving upon the implementation used by Apple's operating system. Connectix's product called “RAM Doubler”, and an application called “Virtual”, earned a positive reputation for this type of software. (See Wikipedia's article about Connectix, page about Connectix RAM Doubler.) The software may have also optimized some other system functions, improving operating performance by using techniques other than just RAM compression. Another product designed to double RAM on early Macs was Optimem RAM Charger. (e.g. web page proclaiming OptiMem's benefits for Apple's System 7 operating system)

The popularity of a good solution or two for early Macs may have been tempting for people who wanted to make a similar product for Microsoft Windows. Likely due to the claim that “an elephant never forgets”, an elephant is sometimes used as a mascot for memory, and this was demonstrated on a drawing of a product's box that showed an elephant getting squeezed into a box (or... something like that... a picture of such a box has not been located). There was a market ripe for such a product for Microsoft Windows, and such a product was delivered upon an unsuspecting customer base.

Syncronys SoftRAM

However, such software for Microsoft Windows actually attempted to make the computer appear to have more RAM available, while actually having little to no positive impact and sometimes doing things that substantially harmed the machine's stability and/or other system performance. The United States of America (“USA”) Federal Government's Federal Trade Commission (“FTC”) case against Syncronys Softcorp about SoftRAM and SoftRAM95 shed some light which wasn't very favorable for the company. So, this sort of software got a very bad reputation. PCWorld's “25 Worst Tech Products of All Time” (page 2) remembered just how terrible the software was.

In Search of Stupidity, The Hall of Stupid High-Tech Products calls this “The Jerry Seinfeld of Software. It did nothing”, and the page essentially claims that Syncronys didn't know what they were doing, and so the Federal Trade Commission (FTC) did not prosecute. It is an interesting claim, because the FTC records show that action was taken by the FTC.

Microsoft KB Q135737 says that a “problem can occur if SoftRam version 1.03 is installed on your computer. SoftRam 1.03 is incompatible with Windows.”

More details may be mentioned at: Wikipedia's page about SoftRAM, Archive of a negative review that had uninspiring words about another similar product as well.


Like the section about CHD myths, it seems wise not to dismiss the whole concept just because of bad implementations. However, RAM compression is probably often not practical. When compressed data gets used, generally the uncompressed version is stored in RAM. The fact that generalized data is not guaranteed to be compressible could complicate any theoretical savings. RAM contents also tend to change more frequently than other types of data, which could require re-compression far more often than for files that may be saved much less frequently. Also, RAM is commonly used for tasks where speed is of fairly high importance, and data compression may not deliver that.

So, unlike disk compression (which is widely scorned, though often due to some of the CHD myths) which has seen some practical use, RAM compression might often be an idea that just is not quite so good.

Misc notes

(Here are some unorganized notes related to this topic.)

http://www.lowtek.com/maxram/rd.html (downloads/updates???)

Positive review: http://www.atpm.com/2.08/page12.shtml (says RAM Doubler may triple).

Let the OS handle it: A forum post (at http://forum.oldversion.com/programs-support/4098-what-replaced-ram-doubler.html) stated "the OS can take away...". A program that is able to free up memory that would otherwise be used can provide benefits. These benefits might even offset the cost of needing to put data back into memory (by re-reading or re-creating the data) at a later time, if the computer system is less utilized (having more available resources) at that later time. For an operating system to do this effectively may require the author of a memory driver to do a superior job (compared to the applications' programmers) at deciding when the programs need less memory.

RAM Drive

See: “RAM Drive”/“memory filesystem”

Memory holes

Memory Hole on PCGuide.com discusses how some cards may want certain specific address ranges to be used to support the card. This effectively can make the RAM unavailable to other applications.

Many BIOS setup programs support enabling a “memory hole” from the address range of 15MB - 16MB. This may relate to OS/2, such as AMI's “OS/2 Compatible Mode” ( http://www.dewassoc.com/support/bios/amisetup.htm ). The Linux-ready Firmware Developer Kit's code for the OS/2 gap says in a comment that the “OS/2 memory hole” “breaks linux bootloaders”. An OnLamp article on BSD Disk Images refers to “the famous Compaq 16MB memory hole”, which might refer to this.

Manufacturer reputation

Some people do claim that certain brand names and/or manufacturers have a reputation for making memory sticks that are less likely to have bad bits.

The following is based on some general reputations: There are people who have strong differing opinions. This website is not, in any way, trying to suggest that these reputations accurately reflect reality.

Micron Technology makes quite a bit of RAM, and one of its brand names, “Crucial Technology”, has obtained a reputation of being the best. Another company, Corsair, also has its share of fans. These brand names may often be considered to be the top two.

A company called “G.Skill” was initially viewed as trying to market frivolous, flashy components to gamers, especially with the Ripjaws series that had cooling fins on the RAM chips. (The result is a top that looks a bit like the “teeth” of a sawblade.) However, over time it seems that their reputation has been getting increasingly positive.

Kingston is a well-known manufacturer that often sells RAM for a lower cost, and it also has a reputation of being lower quality than some of the other pricier options. Micron Technology also markets memory under the name Lexar.

Detecting RAM details
Serial Presence Detect
WMIC PATH Win32_SMBIOSMemory Get DeviceID /FORMAT:LIST
(See MSDN: Win32_SMBIOSMemory class.)
WMIC PATH Win32_PhysicalMemory Get /FORMAT:LIST
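The /FORMAT:LIST output is a series of blank-line-separated KEY=VALUE blocks, one block per memory module, which is straightforward to parse if the output needs to be processed by a script. A hedged Python sketch (the field names and values in the sample are made up for illustration; actual Win32_PhysicalMemory output has many more fields):

```python
def parse_wmic_list(text):
    """Parse WMIC /FORMAT:LIST output: blank-line-separated KEY=VALUE blocks."""
    records = []
    current = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            # A blank line ends the current record (if one was started).
            if current:
                records.append(current)
                current = {}
            continue
        key, _, value = line.partition("=")
        current[key] = value
    if current:
        records.append(current)
    return records

# Hypothetical output shape for a system with two identical sticks:
sample = "Capacity=8589934592\nSpeed=1600\n\nCapacity=8589934592\nSpeed=1600\n"
print(parse_wmic_list(sample))
```

On a real system, the sample string would instead come from running the WMIC command via something like subprocess and capturing its standard output.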
Selecting RAM

Getting memory that operates at the same speed as other memory in the system is strongly recommended, to prevent potential severe degradation in overall speed. Even getting memory by the same manufacturer is sometimes recommended.

Crucial Technology's page about the “Crucial Technology”/“Micron Technology” company has stated, “We guarantee that the upgrades you find through the Crucial Memory Advisor tool will be compatible with your system, or your money back.”

Remote access hardware

Often, remote access software provides a workable solution. However, if an operating system freezes up, then some other solution may be needed.

Options outside the computer
IP-capable KVM switch
These can be nice. And pricey. Note that if the motherboard is locked up to the point where keyboard input doesn't work any better than network-based remote access, this may not resolve the problem.

If a UPS can be interacted with, some units may be able to shut off power to one or more electric outlets, while not turning off the UPS's ability to be remotely managed. If those electric outlets can then be turned back on, that may be one way to turn a computer off and then back on.

Granted, the amount of remote control options provided in this sort of case is fairly limited.

Dedicated circuitry
[#rmtacscd]: Remote Access Card (“RAC”)

A “remote access card” may be called “Out of band” management, or a “Lights-Out” solution. In this case, the term “out of band” likely refers to being able to be used without requiring an operating system (similar to how an “out-of-band” management method for a managed “switch” does not require using up the basic I/O of a switch, which would use up the switch's bandwidth).

These may often provide interaction via HTTP(S) (when supported web client software is being used), SSH, Telnet, and/or perhaps SNMP.

The SSH/Telnet interfaces may use a standard, or somewhat standardish, interface, providing a prompt that provides commands based on what directory is “cd”'ed to. This may be using a standard called “Intelligent Platform Management Interface” (“IPMI”).
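As an illustration of what an IPMI-based interface can do from a remote machine, here is a sketch using the widely-available ipmitool utility. (The hostname and credentials are placeholders, and feature availability varies between BMC implementations.)

```shell
# Query the power state of the managed motherboard
ipmitool -I lanplus -H bmc.example.com -U admin -P changeme chassis power status

# Power-cycle the motherboard (the management controller itself keeps running)
ipmitool -I lanplus -H bmc.example.com -U admin -P changeme chassis power cycle

# Attach to the text console via "Serial Over LAN" (if the BMC supports/enables it)
ipmitool -I lanplus -H bmc.example.com -U admin -P changeme sol activate
```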

Common features include remote access to the machine when it is showing text mode during the bootup sequence, including allowing interaction to enter the system's (BIOS) setup. Remote access for graphical displays may be available, although in some cases this may be a feature that requires a paid license. That remote access functionality may be designed to be used with a web plugin, such as Java. Powering the main motherboard off and on is typically supported, and perhaps other restart options. Getting more information using multiple interfaces is also common.

Be sure to disable any default accounts, or else there is a security vulnerability.

Often, specialized hardware (like the circuitry found in these “RAC” solutions) is not necessary to provide remote access. There are also software solutions for remote access. However, those solutions are generally only available after an operating system is fully loaded. By that time, the CPU may have already been set to operate in a specific mode (such as 32-bit protected mode, or x64's 64-bit “Long mode” as described by Wikipedia's article x64: section on operating modes). Usage of these RACs can often offer access to initial system setup (such as when running a BIOS/CMOS setup program), or access to option ROMs (which might be used to configure a hardware RAID controller).

Intel Remote Management Module
When tested, it was found that this web interface nicely allows graphical usage (as well as text mode usage).
Source code
Download page for Kiratool Open Source Code (kira_open_source_sdk-kira-kimasmig4-asmidc-intel_040200-5306_replacedREADME.tar.gz) seems ridiculously large: 143.22MB. (That may not be large for some modern software, but compared to the 2.41MB download containing executable files for multiple platforms, it is quite a large size for source code.) Perhaps see also: Kiratool Source download
Executable code
Description page for: RMM2 Kiratool Utility Executables for DOS, Red Hat, SUSE Linux, Windows server 2003, Windows Server 2008, EFI
More source
Source code downloads
Intel RMM3 user guide and/or Kiratool Source download
Integrated Lights-Out (“iLO”), by HP

Older versions of this were named Remote Insight Lights-Out Edition (“RILOE”).


This may involve searching for a service that is running. The name of the service may have a reference to Cpq (likely a reference to Compaq, a computer company which was bought out by HP).

WMIC PATH Win32_BaseService GET

Search for CpqCiDrv for iLO, CpqRib for RILOE, or CpqRib2. Also, the phrase “HP iLO” may show up.

Severe issues with older software
TechNet: CPQCIDRV.SYS driver update notes that versions of the HP Integrated Lights-Out Management Interface driver file CPQCIDRV.SYS which are older than version 1.8 “may cause kernel memory allocation errors.” (So, upgrade.)
Dell Remote Access Card/Controller (“DRAC”), including Integrated DRAC (“iDRAC”)
Remote Access
Virtual Console

For some reason, the iDRAC 8 platform seems to default to behavior that doesn't work easily. Fortunately, changing such options is not too challenging.

  • In the left frame, expand “Overview”, and then choose “Virtual Console”.
  • Next to “Plug-in Type”, choose “HTML5”, not the first/default option (“Native”, which might just be a duplicate of the second option) or “Java”.
  • Then, in the left frame, under “Overview”, click on “Server”.
  • In the “Virtual Console Preview” window, choose “Launch”.
  • This pops up a window in response to the click, which then pops up another window. Web browsers may respond with some sort of anti-popup behavior that allows the first pop-up (initiated with a click), but resists the second pop-up that gets automatically created. Instead of creating the pop-up, the browser may show a message pointing to an icon related to handling pop-ups. Use that icon to allow the pop-ups from that site. Once you do, the pop-up might still not appear, but refresh the page (Ctrl-R or F5 are likely to work) and then you may see the virtual console.
    • (You can press the “Keyboard” button on the screen, which will then show a virtual keyboard in case you want to send certain keystrokes to the remote system without having the local web browser and/or operating system interpret those keystrokes as having a special purpose on the local computer.)
  • If the iDRAC website seems to be having troubles communicating with the iDRAC, then on the page reached by choosing “Overview” and then “Server”, in the left frame, you can choose “Reset iDRAC”, and confirm. (That might log you out of the website??) Then you may be locked out from interacting with the iDRAC for a while (maybe even 2-4 minutes??), but may then get in successfully. Note that this does not reboot the main motherboard, so the computer's main system should not be restarted or, in any way, affected by this.
PCI scan
It seems this may show up in a PCI scan: Detecting DRAC Option showed lspci (in SuSE) showing lines that included the name “ Dell ” and also the phrase “ Remote Access ”.
Using Dell's OpenManage Server Administrator, and WMI

This may require that Dell's OpenManage Server Administrator software is installed. Forum post indicates this can be done with WMI (and states “this will not work on Linux/Unix machines”, perhaps because the forum poster expected WMI can't work in Linux, though currently there may be a solution for that).

Perhaps (this hasn't been definitively tested):
WMIC /NAMESPACE:\\root\cimv2\DELL PATH Dell_RemoteAccessServicePort GET AccessInfo,AccessInfoIPv4,AccessInfoIPv6
WMIC /NAMESPACE:\\root\cimv2\DELL PATH Dell_RemoteAccessServicePort GET AccessInfo

If a value is provided for the AccessInfo property, then that may mean the Dell software is working (because, after all, what is being shown is the property's value, rather than an error message), but perhaps it isn't configured.

American Megatrends' MegaRAC Remote Management sounds, from its name, like similar technology. (The term RAC likely refers to the concept of a “remote access card”.) See: AMI MegaRAC Service Processor.
[#pcscrews]: Screws

If there is one tool that a computer technician needs more frequently than any other, it is likely a Phillips screwdriver. Some fancy cases have been designed to reduce or eliminate the need for a screwdriver (although some “thumb screws” may still have a +/X on top so that a screwdriver can be used if they seem too tight), but such boutique cases tend to be pricier. So, there are still lots of computers that a technician may need a screwdriver for.

Although very early IBM PCs tended to use “standard”/“flat-head” screws to help get the case off, Phillips screws later became much more prevalent and are currently the modern standard for getting the case off, removing expansion cards, and serving as “mounting screws” (used for mounting drives and motherboards).


Sometimes these may also have a hexagonal top, which may be useful with an appropriately-sized nut driver, such as a 1/4 inch hex nut driver. (Note that computer technician toolkits may also commonly include a 3/16 inch hex nut driver, which can be useful for motherboard standoffs. That nut driver would be smaller than ideal for these screw tops, and quite possibly unusably small.)

Wikipedia's page on “Computer case screws”, section titled “Gallery” shows some various tops. Of those, the thumb screws are often most convenient. (Some thumb screws have a larger grippable portion than what is shown by that example.) The hex-style head can be nicer if you happen to have a hex driver. Those are also probably more common. The “pan head” is a bit smaller.

Further pictures may be seen with some of the samples shown in the following text.

6-32 (Case screws)

These use a #6-32 UNC screw (e.g., Wikipedia's page on “Computer case screws”, section titled “#6-32 UNC screw”), often called a “#6-32”, or just “6-32”, or perhaps even “6”. Commonly about 1/4 inch long.

By standard, they can easily be handled with a #2 Phillips screwdriver. (The exception might be if using a fancier top, such as a “thumb screw” with ridges that hopefully support easy gripping with a thumb and finger. Even then, sometimes the thumb screws will have a +/X shaped hole to support a screwdriver.) The tip of a #1 Phillips screwdriver may be less ideal, but is often close enough to be able to get the job done.

6-32 screws are often used for the outside of cases.


Sample, using Hex/Phillips top

This is, in the opinion of the author of this text, the ideal top for a screw that needs to take up little space. (The only thing commonly nicer is a “thumb screw”, but besides being pricier, they can take up more space which can be less desirable.)

Besides the option of using a nice screwdriver, these Hex/Phillips tops can easily be handled with a 1/4 inch hex nut driver. Being easily supported by two different types of tools can be some nice flexibility.

e.g., a rather close picture at the following NewEgg page:

Mounting screws

These use M3-0.50 screws (citation: SuperUser.com: jcrawfordor's answer to Keltari's question about names for screw standards). It seems that more commonly, people simply refer to them as M3 screws.

Comparing M3 to 6-32

An image showing a side-by-side comparison of a 6-32 screw (on top) next to an M3-0.50 screw, which seems to just be a text-added modification of the image seen at Wikipedia's page on “Computer case screws”, section titled “M3 screw”.

SuperUser.com: jcrawfordor's answer to Keltari's question about names for screw standards had this great summary of the reason this difference can be notable. He noted, “rather annoying that these screws are fairly similar in size (although not in threading). This means that an M3 screw will fit in a 6-32 hole, and will even seem to screw in, but it won't stay in place well. A 6-32 screw will go in an M3 hole if you try hard enough, which results in stripping.”

SuperUser.com: Brian Minton's comment to jcrawfordor's answer to Keltari's question about names for screw standards notes, “3.5" HDDs use 6-32, but 2.5" SSDs use M3.”

With such closeness, many technicians have found themselves having one screw available when the other would be preferable.
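The near-miss between the two standards can be quantified: a #6 screw's nominal major diameter is 0.060 + 6 × 0.013 = 0.138 inches (about 3.51 mm) versus M3's 3.00 mm, while 32 threads per inch works out to a 0.794 mm pitch versus M3's 0.50 mm. A quick Python check of that arithmetic:

```python
MM_PER_INCH = 25.4

# Unified numbered screw sizes: major diameter = 0.060" + 0.013" * size number
major_6_32_mm = (0.060 + 6 * 0.013) * MM_PER_INCH   # ~3.51 mm
major_m3_mm = 3.0

# Pitch: 6-32 means 32 threads per inch; M3-0.50 has a 0.50 mm pitch
pitch_6_32_mm = MM_PER_INCH / 32                    # ~0.794 mm
pitch_m3_mm = 0.50

print(round(major_6_32_mm, 2), round(pitch_6_32_mm, 3))
```

The diameters differ by only about half a millimeter (which is why an M3 screw rattles loosely in a 6-32 hole), while the pitches differ by roughly 60% (which is why the threads never properly mate, and forcing the issue strips them).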

Ideally, use a #1 Phillips screwdriver, or perhaps a #0 Phillips screwdriver (as noted by BuildAComputer101.com: Computer Tools). A #2 Phillips screwdriver may work, but take care to get the best grip possible, to reduce the likelihood of the screwdriver stripping the metal inside the screw head. Such destruction makes the screw extremely difficult to work with (even simply to remove it), and sometimes the solution may require patience as well as another tool like pliers or an adjustable wrench.

There are at least a few different ways that the top of the screws may look like. Here are some examples of each:

Hex top
StarTech.com: SCREWM3H15: Hex Head M3 Optical Drive Mounting PC Screws (15 Pack)
Flat top

These things are tiny!

Rounded top

In the opinion of this author, there is no significant functional advantage to this style. The roundness is unlikely to provide any significant safety benefit over the flat top, and despite the extra material, these are not usable with a hexagonal nut driver. There just doesn't seem to be any notable benefit to using this style over the other options.

  • StarTech.com PC Mounting Computer Screws M3 x 1/4in Long Standoff - 50 Pack
    • Being available at Office Depot, which has many stores in many locations, may be a nice advantage. Otherwise, avoid these. Besides being round-topped, as can be seen by the picture from Office Depot's website, it looks like they don't even have a strong +/X shape for the screw. Instead, the intersection is a bit rounded, which will probably result in the screwdriver having a harder time getting a nice grip, and perhaps increasing the likelihood of having a screwdriver gouge out the inside hole which makes the screw extremely difficult to work with (sometimes requiring pliers or an adjustable wrench, and patience, to resolve).
    • Office Depot: mounting screws (though they have a round top... it might be nicer for them to have a flat top, so they are just a bit smaller, or maybe a hexagonal top, so that a hex driver could also be used with them.)
Forum post about Magnetic screwdriver
[#printers]: Printers

See: Printers.

Input devices

Many modern “game pads” (or “gamepads”) and joysticks may use “DInput”, or the less flexible (but perhaps easier to typically auto-configure) “XInput”. (Some discussion of this is seen at Reddit.com: MTCKC's announcement of ProconXInput, an XInput driver for Nintendo Switch Pro, comment by MTCKC.)

Nintendo controllers on PC
Wii controllers on PC

WiinUPro open source announcement refers to WiinUPro and Easy-Pair. However, MTCKC's comment said, “About the only feature” that ProconXInput provided which WiinUPro likely lacked was “connection over USB”.

However, one notable program is Steam. Steam may support the Nintendo Switch Pro controller without other add-on software.

Nintendo Switch controllers on PC
Super NES-style Switch controllers on PC

Nintendo released some controllers which looked rather like standard “Super NES” controllers. They were only available using a “Nintendo Switch Online” subscription.

The controllers could be used with a computer by pairing the devices using Bluetooth. AntiMicro (main website, page called “Releases”) may be helpful (to help map the controller buttons to desired input). (Related: AntiMicro Wiki.)

Despite these facts:

  • such controllers came with a USB Type C port that could be used for charging
  • such controllers came bundled with a USB Type C to USB Type A (male) cable
  • when controllers were plugged in with such a cable, they would be detected by Windows 10 as being Super NES controllers

... such a connection did not work well.

Microsoft Xbox controllers on PC
  • Controllers for the “Xbox One” system have an ability to work with Microsoft Windows??
  • There was an “Xbox 360 controller for Windows” which came with a standard wireless Xbox 360 controller, and also a USB device that could communicate with it. There were also some third party USB devices that should have been able to accomplish the same task, but didn't work as well. The official USB devices were labelled “Microsoft”.
  • For the original Xbox, this might not be very viable.
Gravis Gamepad

Before the USB-based “Gravis Gamepad Pro”, there was the “Gravis Gamepad” which used a “game port”. The Gravis Gamepad was notable for being mentioned within the Wolfenstein 3D game. The typical joystick standard of the day supported 2 joysticks with two buttons each. The Gravis Gamepad had buttons that acted like the 2nd joystick's 2 buttons, and for a while, quite a few pieces of software supported this method in order to support 4 buttons for one player.

Configuration hardware
(Other hardware...)

misc notes:

https://msdn.microsoft.com/en-us/library/aa389273(v=vs.85).aspx shows various hardware

WMIC PATH Win32_SystemSlot Get /FORMAT:list

https://www.experts-exchange.com/questions/23137751/How-to-access-SPD-information-in-DIMM-Memory-Modules.html had someone claim that Intel was not easily gathering info about memory.

Microsoft Word Document format: SMBIOS Support in Windows (WinHEC 2005 Update - April 20, 2005)

https://www.cnet.com/news/9-things-you-should-know-about-surge-protectors/ mentions power conditioners