This topic goes by multiple names; for example, “Disaster Recovery” efforts often involve using a “Business Continuity Plan”.
Many elements can be considered part of disaster recovery. One of the most commonplace, and often most critical, is a working solution for data backup.
The best way of handling a potential disaster is to prevent the disaster. Of course, there are many examples of possible failures that may be prevented. A related point, though, is that the impact of certain failures can be mitigated so that they are not severe enough to be disastrous.
For example, a business that relies on a single server may have a plan for dealing with the potential disaster of that server going down. However, if every feature provided by that server is load balanced between redundant devices, and one device goes down, then the result isn't nearly as much of a “disaster”. It may still be a problem that needs to be rectified, and it may even affect end users, who may notice that responses aren't as quick as when the desired service is handled by two machines. However, it may not be as disastrous.
There may be some spare equipment. Such equipment may be “hot”, meaning that it is ready, or “cold”, meaning that it will need to be turned on, or otherwise enabled, before it may be used. The terms “hot” and “cold” can also refer to things such as a service (such as Internet access), or a site which may or may not have electrical service and Internet service already set up. In the case of electrical devices, “hot” is often an indicator that electricity is flowing through the device.
As an example of some redundancy, RAID 5 requires at least three hard drives. If there are four hard drives available, RAID 5 could be set up to use all four hard drives, or RAID 5 could be set up to use just three of them. The fourth hard drive could be largely unused, simply serving as a ready replacement for any one of the three drives that fails. Another option would be to use the fourth hard drive a different way, such as by using RAID 6. There are some advantages and disadvantages to using RAID 6. One disadvantage may be speed (depending on the implementation), and another is that the ability to recover from two drive failures requires additional bits to be dedicated to recovery data. An advantage is that any two of the in-use drives could fail simultaneously and RAID 6 could still recover.
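The single-drive recovery that RAID 5 provides is based on XOR parity. The following sketch (not a real RAID implementation; real implementations stripe blocks across drives and rotate the parity block) demonstrates just the parity math: one parity block allows any one missing data block to be rebuilt.

```python
# Toy demonstration of RAID 5-style XOR parity. All names here are
# illustrative; this only shows the math, not real disk striping.

def xor_parity(blocks):
    """Compute a parity block as the byte-wise XOR of all data blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def reconstruct(surviving_blocks, parity):
    """Rebuild the one missing data block from the survivors plus parity.

    Works because XOR-ing the parity with the surviving blocks cancels
    them out, leaving only the missing block's bytes.
    """
    return xor_parity(list(surviving_blocks) + [parity])

data = [b"AAAA", b"BBBB", b"CCCC"]   # three "drives" worth of data
parity = xor_parity(data)

# Simulate losing the second drive and rebuilding it from the rest:
rebuilt = reconstruct([data[0], data[2]], parity)
assert rebuilt == data[1]            # the lost block is fully recovered
```

RAID 6 uses a second, differently-computed parity block (not plain XOR) so that any two missing blocks can be rebuilt; that math is more involved and not shown here.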
- [#backup]: Backups
See the backup section for details about regularly creating a backup (that is, backing up data).
- Recovery documentation
- Noting things such as where credentials are stored, where software installation keys exist, and where media is located.
- [#recovrfl]: Recovering files
- Recovering from backups may be the best way. See the section about restoring data (such as the subsection about Restoring files and directories/folders from backup).
- File cache
If a copy of the file hasn't been fully deleted, obtaining that not-yet-deleted copy may be a way to recover the file. For instance, if a copy of the file is in a location used by a web browser's cache, retrieving the file from that sort of location may be doable until the data is deleted.
- [#undelete]: Unerase
The strategy of unerasing can restore a reliable copy of data in some cases. In other cases, that method does not work so well. However, in cases where the data can be unerased, an unerase strategy might be one of the fastest ways to get the data back. Since this method is not necessarily reliable, it should not be relied upon as the only safeguard for critical data.
There are some general rules to this. First, writing to the disk may reduce the likelihood of data being recoverable, so it is best to attempt recovery soon (before additional data is written). If possible, and especially if more than one file is being restored, it may be best to recover to another filesystem volume. (It has sometimes been seen that software designed to undelete a file may claim that a file looks safe to undelete, but then that file lost its ability to be safely undeleted when another file was undeleted.) If there is any other software running that is likely to write to the disk (especially if the software is likely to write a large amount of data), causing that software to stop writing to the disk (perhaps by pausing it) is desirable. Shutting down software, including the operating system, may be undesirable, since the shutdown process itself may write to the disk.
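The reason undeleting can work at all, and the reason further disk writes are dangerous, can be illustrated with a toy model: deleting a file typically just marks its blocks as free without erasing them, so the bytes survive until a later write happens to reuse that space. Everything below (class name, block layout) is illustrative, not how any real filesystem is implemented.

```python
# Toy filesystem model: "delete" frees blocks but does not erase them,
# so an undelete that re-reads the old blocks works -- until a new
# write reuses the freed space. All names here are illustrative.

class ToyFS:
    def __init__(self, nblocks=8, blocksize=4):
        self.blocks = [b"\x00" * blocksize] * nblocks
        self.free = set(range(nblocks))
        self.files = {}      # name -> list of block numbers in use
        self.deleted = {}    # name -> old block numbers (maybe recoverable)
        self.blocksize = blocksize

    def write(self, name, data):
        needed = -(-len(data) // self.blocksize)   # ceiling division
        blks = sorted(self.free)[:needed]          # reuse lowest free blocks
        self.free -= set(blks)
        for i, b in enumerate(blks):
            self.blocks[b] = data[i * self.blocksize:(i + 1) * self.blocksize]
        self.files[name] = blks

    def delete(self, name):
        blks = self.files.pop(name)
        self.free |= set(blks)       # blocks marked free, NOT erased
        self.deleted[name] = blks

    def undelete(self, name):
        return b"".join(self.blocks[b] for b in self.deleted[name])

fs = ToyFS()
fs.write("a.txt", b"hello!!!")
fs.delete("a.txt")
print(fs.undelete("a.txt"))      # prints b'hello!!!' -- data still intact

fs.write("b.txt", b"OVERWRITE")  # new write reuses the freed blocks
print(fs.undelete("a.txt"))      # now corrupted -- the old bytes are gone
```

This is why the advice above says to stop disk writes and attempt recovery promptly: each new write risks landing on the freed blocks.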
- TERRIBLE advice
UndeletePlus.com's support page had a Q&A. The question: “Can I do something in advance to increase my chances of file recovery?” (Commentary: It is a great question.) The answer: “Yes, defragment your drive and check it for errors on a regular basis. Note: If you have accidentally deleted a file or files, DO NOT defragment your drive until after you have recovered your files.” Yipes!
In the opinion of the author of this text, that is awful advice! Defragmenting on a regular basis will increase the likelihood of being unable to recover a deleted file, as the defragmentation process may overwrite sections of the disk that held deleted files. This is not to say that defragmentation is generally a problem, since undeleting is typically not an activity that is heavily performed, and defragmenting a filesystem may have some benefits. However, defragmenting is not an effective way to “increase” ... “chances of file recovery”.
Although defragmenting might, in theory, increase the chances that a single file is fully recovered, rather than only partially recovered (because another part, in another location, has been overwritten), this simply turns more recoveries into “all-or-nothing” situations rather than partial ones. That sounds rather okay, in theory, but the more likely scenario is that the defragmentation overwrites important data before a person realizes that the data has been deleted. Many defragmenters will not only defragment individual files (by storing all of the data for a single file in nearby spots on a disk), but will also defragment the volume (by consolidating all data in a single area, typically at the beginning of the volume). So, rather than using disk space that has never been used, an operating system may be more prone to reuse disk space that previously held files.
Here is some software for various file systems. There may be some other options (some of which used to be freely offered but which then stopped being quite as free).
See FAT data recovery.
Also, ext2fs data recovery has some information about additional option(s).
- See ext3fs data recovery (and possibly other information in the section about recovering data from the Ext filesystem types).
- Other formats
- [#testdisk]: TestDisk
TestDisk's website has a section listing supported filesystems, which includes several popular and historically popular types. The software is also open source, portable, and comes with bootable media. This generalization might not hold in every case, but TestDisk is mentioned because it may work as a solution for many of the more likely scenarios.
To undelete, run TestDisk. Choose the desired logging level, a disk, and the partition table type to anticipate.
Then, before going to “Advanced” (“Filesystem Utils”), go to “Options” (“Modify options”). Turn on “Expert mode”, set “Cylinder boundary” to “No”, and set “Allow partial last cylinder” to “Yes”. After being “Done” with the “Options” screen, go ahead and choose “Advanced” (“Filesystem Utils”).
The next screen may look like it is asking to choose a “mount point” (a “drive letter”/partition) to perform a task on. However, do not start by simply using the vertical arrow keys to select the drive and then pressing Enter. First, be sure to use the horizontal arrow keys to select a task.
If some filenames/paths do not seem to be shown in their entirety, it may help to increase the size of the terminal. (The TestDisk program for Microsoft Windows may not have an option to become wider than 80 columns, although the vertical size may be adjustable.)
- Other options
In addition to TestDisk, other details may be listed in the section describing different filesystems. Information is available for at least the following: HPFS data recovery, NTFS data recovery, and FFS/UFS data recovery.
- [#flsigscn]: Scanning a drive for signatures
Even if a filesystem's file structure is completely lost, and so no filenames are available, it is possible to locate certain types of data. For instance, graphics (images) will often start with a certain sequence of bytes. Scanning an entire drive for that sequence of bytes may locate certain types of data.
There are some drawbacks to this approach. Disk encryption will generally defeat this approach. File fragmentation can also cause a file's data to be stored in multiple parts of a disk, and so a simple scan for a file header may retrieve only a portion of the file's data. This approach often involves scanning an entire drive, and so may be fairly time consuming. Custom file formats may not have a recognized signature to scan for. The approach of scanning for file signatures might be useful when backups have not been sufficiently created, and when “undelete” software did not find a specific file. However, the drawbacks mean that this should typically be reserved for last-ditch efforts. (The best way to be able to retrieve data is to use previously-created backups, and so pre-planned efforts should involve making sure that backups are enabled.)
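The core idea of signature scanning can be sketched in a few lines. The signatures below are the well-known “magic bytes” for PNG, JPEG, and PDF; everything else (the function name, the sector-sized step, the in-memory “disk image”) is illustrative. Real carving tools such as the ones mentioned below handle far more formats and also try to determine where each file ends.

```python
# Minimal sketch of scanning raw disk data for known file signatures
# ("magic bytes"). Illustrative only; real tools (PhotoRec, Foremost,
# Scalpel) are far more thorough.

SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "png",   # PNG file header
    b"\xff\xd8\xff": "jpeg",       # JPEG/JFIF start-of-image marker
    b"%PDF-": "pdf",               # PDF header
}

def scan_for_signatures(data, chunk=512):
    """Yield (offset, type) for each chunk boundary matching a signature.

    Filesystems usually place file data on sector boundaries, so checking
    only every `chunk` bytes is much faster than checking every offset.
    """
    for offset in range(0, len(data), chunk):
        for sig, kind in SIGNATURES.items():
            if data[offset:offset + len(sig)] == sig:
                yield offset, kind

# Example with a small in-memory "disk image":
image = bytearray(2048)
image[512:512 + 3] = b"\xff\xd8\xff"   # plant a JPEG header at sector 1
image[1024:1024 + 5] = b"%PDF-"        # plant a PDF header at sector 2
print(list(scan_for_signatures(bytes(image))))
# Output: [(512, 'jpeg'), (1024, 'pdf')]
```

Note that this illustrates the fragmentation drawback directly: the scan finds where a file *starts*, but says nothing about whether the rest of the file is stored contiguously after that point.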
- GPL. Runs under DOS (including DOS within Win9x), WinNT4/W2k/XP/newer, Linux, OpenBSD/FreeBSD/NetBSD, SunOS, and Mac OS X. There's also a Live Rescue CD provided.
- Some other options
- Forensics software may help. HowToGeek.com's guide to recovering data with an Ubuntu Live CD describes some options: Foremost (“originally developed by US Air Force Office of Special Investigations”), Scalpel, and Helix.
- Professional Recovery
Some experts may resort to taking a drive apart. Techniques may be even more advanced, using “clean room” environments that minimize the risk of damage (even from minor things like dust).
DriveSavers reports performing a very large amount of recovery work. An ArsTechnica.com article, “Files on nearly 200 floppy disks belonging to Star Trek creator recovered”, says DriveSavers recovered data from computers that used custom operating systems, with no working computers available. They “spent three months writing software that could read the disks in the absence of any documentation or manuals for the custom-built OS.”
- Kroll OnTrack
Perhaps made even more famous by publishing “Kroll OnTrack: Most Unusual Data Disaster Horror Stories for 2007”. (Also available: Kroll OnTrack's “Worst Data Disasters from 2014” article, which states Kroll “announced its 12th annual list of the top 10 data disasters from 2014.”)
According to a forum thread, it seems the cost can typically run to thousands of dollars (even in simple cases).