Testing/Repairing Filesystem Volumes
- [#fsfixhow]: Repair notes
- Whether to start with a forensic copy
Note that actual repairing may be unwise to perform before obtaining a forensic copy of the data. The reason is that repairs, especially if they involve anything more substantial than marking a disk block as bad, may not go smoothly. Incorrect data may be used to try to repair other data, and this can make things much worse rather than better.
Perhaps the only possible exception to this is if it is believed that an automatic repair will be able to rectify the problem without human intervention, resulting in a repair which is done sooner. This may mean that network services (perhaps most usefully: remote access, so a technician can work on the system remotely) can be brought online sooner. The theory is that repairing an issue sooner, before the filesystem gets written to further, may be more likely to succeed.
However, there is a gaping hole/flaw in the logic of that possible exception: It would generally be safer (increasing the chance of a quick successful repair) for the filesystem volume to not be written to. If there is an issue with a filesystem volume, and the issue is software-based, in some cases the filesystem volume may be used read-only (if the operating system supports doing such a thing) with little risk of further damaging data. However, a flawed filesystem volume might expose problems that lead to instability, possibly for the entire operating system. If the issue is actually caused by hardware that has a problem, even reading a disk sector could substantially increase the chances of the hardware causing substantial further damage, so it is best for the hardware to remain entirely unused until a forensic copy (of all data on that hardware) is ready to be made. (Then, proceed with making the forensic copy.)
Yes, the term “forensic copy” was used a couple of times in the last paragraphs, and making a forensic copy may be time-consuming and potentially expensive (if more hardware needs to be purchased). However, some data may be valuable enough to be worth such an expense, so the safest recommendation is provided (even if it may not be easy/convenient to put into practice). The term is meant to imply a bit-for-bit image: specialized software recognized by professionals in the legal field may be used, but such specialized software is not typically needed just for the purposes of increasing data reliability.
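As a concrete illustration of the bit-for-bit copying just described, here is a minimal sketch using dd. In real use the source would be a whole device such as /dev/sdX (a placeholder name); to keep the sketch safe to run, an ordinary file stands in for the device.

```shell
# Sketch: making a bit-for-bit image with dd. In real use, "$src"
# would be a whole device such as /dev/sdX (a placeholder name).
src=$(mktemp)                     # stand-in for the source device
img=$(mktemp)                     # destination image file
printf 'example volume data' > "$src"

# Copy every byte. On marginal hardware, adding conv=noerror,sync
# would keep dd going past read errors (padding bad blocks with NULs).
dd if="$src" of="$img" bs=64k 2>/dev/null

# Verify the image before doing anything else with the original.
cmp -s "$src" "$img" && echo "image matches source"
```

Specialized recovery-oriented imagers (for example, GNU ddrescue) handle failing media more gracefully; dd is shown here only because it is so widely available.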
This section is specific to Unix.
If there are errors, there may be many, many errors. If the information/prompts are cryptic or just plain not understood well enough to make an informed and educated decision, simply allowing an automated process may be a sensible way to go. (This should only be done after it is abundantly clear that any critical data is suitably backed up by enough redundant, currently working, trusted copies of the data.)
If doing things automatedly, try using “fsck -p” instead of “fsck -y”. The simple idea here is that “fsck -p” will only automatically say “yes” to lesser problems that the disk checker thinks can be correctly repaired. Then, if that is successful, perhaps a bit of peace of mind can be obtained by knowing that things weren't so bad that a “fsck -p” encountered problems. If the “fsck -p” does give up, then the situation may be a bit more desperate, and a “fsck -y” may be an appropriate response.
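The escalation path just described (try the conservative pass first, and only consider a more aggressive pass if it gives up) can be sketched as a small shell wrapper. This is purely illustrative: the true/false commands stand in for real fsck invocations, since a genuine check needs an unmounted device.

```shell
# Illustrative escalation logic. "$@" stands in for a real command
# line such as: fsck -p /dev/sdXN
attempt_conservative_repair() {
    if "$@"; then
        echo "conservative pass succeeded; things were not too bad"
    else
        echo "conservative pass gave up; consider 'fsck -y' (after backups)"
    fi
}

attempt_conservative_repair true    # stands in for a clean 'fsck -p' run
attempt_conservative_repair false   # stands in for a failed 'fsck -p' run
```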
- If a file integrity checking program has been used, it may be able to provide a report of which objects on the filesystem have been removed or otherwise changed. This can help prevent data from being lost without at least knowing which filesystem objects were affected. This can ease a proper restore process, and help provide some real, solid confirmation that things are in good shape.
- [#mixfstl]: Mixing filesystem tools
A word of caution: Be sure to use the right filesystem checking utility. In some cases, there may be multiple software packages that can work with a filesystem. It is possible that more than one implementation of handling a filesystem may come with a computer, and mixing software from different implementations, or (unfortunately) even using software from one implementation on data that has been created/modified by software from another implementation, may result in incompatibilities caused by variations of the filesystem implementations. These incompatibilities could result in software (especially filesystem checking software) detecting what it considers to be a problem. Worse, software may possibly even cause data loss, by trying to compensate for a perceived problem (e.g. by attempting to repair the “damage”).
Scenarios may vary based on filesystem types (e.g. FAT, ext2, FFS, etc.) and perhaps implementations (e.g. filesystem drivers that came with a particular operating system, or other disk checking software). Although the following example focuses on ext2 volumes specifically, there might be similar situations possible with different scenarios. Become familiar with the danger of substantial data loss posed by mixing e2fsprogs with other software packages. A possible problem is if the operating system automatically runs “fsck” based on details like the fields in the file system table located in the /etc/fstab file.
- Desired frequency
There may not be a very hard and fast rule. The more frequently a test occurs, the more likely that a problem will be detected quickly. However, heavy testing might result in unnecessarily heavier wear and tear for the data storage system. If the extra testing serves no purpose, then the added wear has no offsetting benefit.
It is clear that Theodore Ts'o finds “never” to be an undesirable frequency: the (online) manual page for e2fsprogs's tune2fs command says (in documentation of the -i option), “It is strongly recommended that either -c (mount-count-dependent) or -i (time-dependent) checking be enabled to force periodic full e2fsck(8) checking of the filesystem. Failure to do so may lead to filesystem corruption (due to bad disks, cables, memory, or kernel bugs) going unnoticed, ultimately resulting in data loss or corruption.” Additional warning text is given (earlier in the document, by the -c option): “You should strongly consider the consequences of disabling mount-count-dependent checking entirely. Bad disk drives, cables, memory, and kernel bugs could all corrupt a filesystem without marking the filesystem dirty or in error. If you are using journaling on your filesystem, your filesystem will never be marked dirty, so it will not normally be checked. A filesystem error detected by the kernel will still force an fsck on the next reboot, but it may already be too late to prevent data loss at that point.”
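As an example of enabling the periodic checking that the quoted text recommends, a tune2fs command line might look like the following. The device name is a placeholder, so this sketch only prints the command rather than running it.

```shell
# Enable periodic full checks on an ext2/ext3 volume: -c sets the
# maximum mount count between checks, and -i sets a time interval
# (d/w/m suffixes for days, weeks, months).
DEV=/dev/sdXN   # placeholder device name; substitute a real one
echo tune2fs -c 30 -i 3m "$DEV"   # printed, not run, in this sketch
```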
It is probably best to have systems check the filesystems regularly, just as it may be worthwhile to perform hardware testing on a regular basis. Details about automating/scheduling are currently something that a hyperlink is needed for.... See also: reporting events.
- OS-Specific details
- [#unixfsck]: Performing a filesystem check/repair in Unix
A filesystem repair process should not be performed while the filesystem is mounted. Even if the drive is mounted in read-only mode, there may be problems from repairing the drive. The likely reason is that the driver for the actively mounted filesystem may not be prepared to handle changes made directly on disk. A forum post (by Otto Moerbeek, Apr 02, 2011; 11:27pm) states that checking a “rw” mounted filesystem “does not give ANY useful information. You can have both false positives and false negatives.”
Having such problems encountered by the filesystem may not be nice to experience: filesystem drivers (used to mount drives) have historically often been provided with high levels of privilege, and so problems experienced by the filesystem drivers may affect the entire running instance of the operating system (perhaps causing a kernel panic). Some of that risk might be alleviated if the drivers are restricted to resources available to a standard user, as implemented by “Filesystem in Userspace” (“FUSE”).
(The following paragraph may not be widely true, but is written as a potential caution (which might not be completely necessary). It may have been written after having used a tool which performs “non-destructive” writing, possibly badblocks in its non-destructive read-write mode.)
Performing even read-only tests might also be unsafe on a mounted drive. If that is true, the reason is probably caused by some sort of implementation-specific assumptions. To avoid any such problems, it is probably best to ensure the filesystem is not mounted before checking a filesystem (with the exception being if the filesystem check is automatically started by the process of mounting the drive).
- Manually performing a filesystem check/repair in Unix
This may be done manually by running fsck. If the command line appropriately includes a filespec, then fsck will check the device specified on the command line. If no such filespec is provided, then the command will check the filesystems listed in the system-wide file system table in the /etc/fstab file.
The fsck command will need to know what type of filesystem to use. If the filesystem checking software is using the system-wide file system table in the /etc/fstab file, then the type of the filesystem may be specified in the third field/column of that file. Otherwise, this may be specified with the -t parameter (which might be optional in some cases). (It might also be looked up using the file system table located in the /etc/fstab file, if the name of the device object is located in that file.) (If desired, see: OpenBSD manual page describing the columns in the file system table located in the /etc/fstab file.) The effect of the -t parameter may be to run another command. For instance, a command line that starts with “fsck -t ext2” may end up just running a command called fsck_ext2fs in OpenBSD, although a command called fsck.ext2 may be what gets run in Debian. The benefit of just using a command starting with “fsck” is that this same command may work with multiple variations of Unix.
The name of the device “object” needs to be specified. This device object's name is similar to what may be passed to the mount command (although the mount command may be able to use a directory name and the file system table located in the /etc/fstab file, while this is probably not accepted by fsck). For example: “fsck -t ext2 /dev/sda1”. This may test an ext2 filesystem found on the first partition of a SCSI device. The actual device name may vary between operating systems, so expect to need to customize that portion of the command line.
A word of caution: Be sure to use the right filesystem checking utility. Mixing filesystem tools can be a problem.
Some parameters may affect how the fsck program responds if it does detect a perceived error. Perhaps the safest option is to use the -n parameter, which will cause the program to exit with an error code. If this error condition can be detected, and then responded to manually, that may be safer than trying to automatically fix errors. This, therefore, is the recommended approach. If errors are detected, consider what sort of actions are likely to be the most favored approach. One method may be to try to mount the drive read-only, and then to back up the latest version of the drive's most critical data, before running any repair software. While performing any such backup, it is also wise to not delete the oldest version of the backup that is known to predate the last successful filesystem check. Also back up all other data on the drive, if that is reasonably convenient, because any portion of the filesystem volume, and even the entire filesystem volume, may soon need to be re-created (using backed up data).
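The detect-then-respond approach described above might look something like this in outline. The device name is a placeholder, and the actual fsck line is shown but commented out, so the sketch can be run without a spare volume.

```shell
# Run the checker in no-write mode, then decide what to do based on
# the exit status. DEV is a placeholder name.
DEV=/dev/sdXN
# fsck -n "$DEV"; status=$?      # the real no-write check
status=0                         # pretend the check came back clean

if [ "$status" -eq 0 ]; then
    echo "no errors reported; no repair needed"
else
    echo "errors reported; mount read-only and back up before any repair"
    # e.g. mount -o ro "$DEV" /mnt   # then copy critical data off first
fi
```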
If an even more automated approach is needed, which tries to automatically fix things instead of just reporting a problem that needs manual fixes, then that might be accomplished by using the -p parameter, which enables “preen mode”. When operating in “preen mode”, the disk checking software will act as authorized to automatically fix some types of errors (such as errors that seem likely to be successfully fixed), while not acting as authorized to fix some more serious errors. (Note that there is no guarantee that “preen mode” is safe: it has in fact been known to cause data loss, particularly when mixing filesystem tools.) However, before using -p (for “preen mode”) or -y (to say “yes”, to affirm, every question asked by the disk checker about whether it should proceed to make changes), know that attempted repairs may result in data becoming less accessible. If there are errors on the disk, it might, or might not, be more worthwhile to first try to mount the drive read-only, instead of running disk repair software, and then to copy data from the drive (especially logs, and other files that have changed more recently than the most recent successful data backup).
The filesystem checking software may attempt to rely on whether the filesystem volume data claims that the filesystem volume is in a “clean” state. If the filesystem volume is claimed to be “clean”, then the disk checking software may conclude that a more extensive check is not necessary, and so success will be reported without performing any more extensive checking. To “force” a full disk check, even if the filesystem claims cleanliness, use the -f command line switch.
More verbose output may be available by using a -v command line switch.
For possible further details, check to see if there is any information available which may be specific to the format/type of filesystem. (For instance, there is an available section about testing/repairing Ext(2+) filesystem volumes.) To perform additional testing, see the section on hardware testing of a drive.
Design and Implementation of the Second Extended Filesystem does discuss what happens in each of 5 passes performed by e2fsck. This information may vary a bit when using other implementations (especially if using other filesystem types). However, the referenced documentation may still be useful, as there may frequently be quite a few similarities.
- [#fschkdos]: Performing a filesystem check in DOS
If there is software called “ScanDisk”, that is preferable over the older chkdsk software. In MS-DOS 6.2x, this is called scandisk. In Win9x, there may be a graphical variation (called SCANDSKW).
For MS-DOS 6.0, it is recommended to obtain a (legally) free copy of the MS-DOS 6.0 to 6.22 upgrade so that Microsoft DoubleGuard may be used. For MS-DOS 6.22, Windows 95, Windows 98, Windows 98 Second Edition, and Windows Millennium Edition, using ScanDisk is recommended over using chkdsk. (As an additional bonus, the DOS-compatible scandisk from the installation CDs of those mentioned 32-bit Windows operating systems was an executable which was able to be run from various versions of DOS before the Windows installer ran. In contrast, MS-DOS executables may be more likely to report that the software is being run under an “Incorrect DOS version.” That can be worked around with SETVER, but it may just be more convenient to use executables that aren't limited to a specific recognized version of DOS.)
On FAT, ScanDisk may store found file fragments as \FILE????.CHK files (where the ???? represents a four digit number, using leading zero digits as needed). If any such files are found, see the section on found file fragments.
- [#ckdskfat]: Chkdsk (with FAT)
The command to check a disk may vary in different operating systems. ScanDisk is preferred when available. The classic executable name for disk checking was chkdsk, so that is probably the best bet. However, a longer command name (a bit longer, although still abbreviating away the letter “i”) is sometimes written out, so checking for an executable with such a filename might be helpful in some cases.
This command is available with many variations of DOS. Using “chkdsk c:” starts a scan of the file system named C:. Using “chkdsk c: /f” starts a scan and may fix errors found.
On a side note, in at least some versions, chkdsk may report how much DOS conventional memory is free.
Historical notes: Q80496: MS-DOS 5 Chkdsk/Undelete had some issues fixed with MS-DOS 5.0a. Win98 RK Chapter 10: Disks and File Systems (section on “Troubleshooting Disks and File Systems”) has a section called “FAT32” which notes, “Chkdsk will not fix errors on FAT32 drives; instead, use ScanDisk.”
Note about software which isn't quite as old: Microsoft Windows may have a chkdsk command. There may be issues with that command, as described by the section about checking filesystems in Microsoft Windows. If using Microsoft Windows, be sure to become familiar with those issues before just trusting the historical reliability of the chkdsk command.
Like the successor ScanDisk, chkdsk may store found file fragments as \FILE????.CHK files (where the ???? represents a four digit number, using leading zero digits as needed). If any such files are found, see the section on found file fragments.
Chkdsk's /b parameter can erase information in a filesystem about which clusters are bad. (This might be only valid for newer versions of Chkdsk, like those built into Microsoft Windows versions.) This option may imply /r, and may be useful after copying a filesystem image from a partially broken hard drive to a new hard drive that presumably doesn't have bad hardware clusters at the same location(s).
- [#ndiskdoc]: Norton Disk Doctor
The technology of this commercial product was licensed by Microsoft and was then released for DOS under the name ScanDisk. (Norton's disk defragmentation software was also licensed and included with Microsoft's DOS.)
- [#spinrite]: SpinRite (by Gibson Research Corporation)
This software is not bundled with DOS. Why SpinRite is not recommended as a sole method for data recovery expresses some concerns also mentioned by Wikipedia's article on SpinRite.
- [#fschkmsw]: Testing a filesystem in Microsoft Windows
Checking a FAT drive may be done similarly to checking NTFS drives in Microsoft Windows. It might also be possible to do this using software similar to what is used for filesystem checking in DOS, but there may be some issues. Most notably, a lack of multitasking support by non-native software could lead to errors during checking and/or repairing, but another possible issue might be a lack of support for Windows devices/permissions/etc.
Notes about using chkdsk
Warning: Do not rely on chkdsk on NTFS filesystems/volumes when the specified file system may be in use by any other software. (Details are provided in the following text.)
- [#nochkntf]: Microsoft's recommendations about not using Chkdsk
TechNet article on Chkdsk says “As a rule, run chkdsk only on volumes that are known to be corrupt.”
KB 837326 provides the following quoted text: “To obtain accurate results from the Check Disk tool (Chkdsk.exe), you must run the tool against a volume that is offline. You cannot always do this in a production environment.” So in non-test, non-debugging, “production” environments where the software is actually used for businesses, Microsoft says this cannot always be done.
Yes, Microsoft really is backing off from providing a Chkdsk tool that it confidently recommends using. Despite the fact that Chkdsk is an ancient tool harkening back to the days when Microsoft promoted MS-DOS, this software has been tossed by the wayside. For those who were trained eons ago, such an astoundingly strong statement may require backing up. That can be done, as there is an abundant amount of Microsoft-released documentation warning of problems.
On top of false positives, where Chkdsk reports errors that may not exist, various issues have occurred, such as Chkdsk trying to fix what it incorrectly believes are errors and causing problems while trying to fix them. In some cases, operating system instances were left unable to start up or even continue to run. Microsoft even attempted to resolve this by releasing an additional utility, VrfyDsk.exe, although even that was unsuccessful enough that Microsoft now recommends against using it with newer releases of the operating system, and no alternative remedying solution is provided.
Why has all this happened to Chkdsk? Simply put, Chkdsk has not adapted well in supporting two newer technologies: multi-tasking environments and the newer file system formats, namely the various versions of NTFS that have been released over the years. The tried and truer (though still not fully trustworthy, particularly with MS-DOS 5.0) Chkdsk programs from MS-DOS were not designed to support either of those technologies.
So where does that leave IT professionals? Again, the precise wording of the rule was, “As a rule, run chkdsk only on volumes that are known to be corrupt.” Following that rule literally would mean that even suspected, but not confirmed, corruption would not even be a reason to run Chkdsk. Running Chkdsk in read-write mode has been known to cause problems, so there might not be any reason to take the risk if it isn't needed. Running in read-only mode simply takes time and produces a report with results that may not be trustworthy anyway, unless the file system is not mounted which would most frequently occur when AutoChk is checking a file system before the multitasking operating system is started. In those cases, Chkdsk might not introduce a lot of problems other than using up time when there is no particular reason to believe that a problem exists.
- Unreliable reporting
KB 837326: Using Vrfydsk.exe says “if you run chkdsk on an active computer, on a startup volume, or on a data volume that another program or another process is using, chkdsk might report nonexistent errors.”
Troubleshooting Disks and File Systems says “The read-only chkdsk process can complete only if no significant corruption is found.” Earlier on that page, there is a section called “Chkdsk might fail in read-only mode or might report false errors.” That section says, “Chkdsk is prone to falsely reporting errors when in read-only mode, and it might report that a volume is corrupted even when no corruption is present. For example, chkdsk might report corruption if NTFS modifies an area of the disk on behalf of a program at the same time chkdsk is examining the same area. To verify a volume correctly, the volume must be in a static state, and the only way to guarantee that state is to lock the volume. Chkdsk locks the volume only when you specify the /f, /r, or /x parameters. Thus, you might need to run chkdsk more than once for chkdsk to complete all stages in read-only mode.”
Vrfydsk.exe: A non-solution
With Windows Server 2003, a tool named vrfydsk.exe was included. In theory, that software would be able to run on a currently used partition, and the results would indicate whether the disk really needs to be taken offline so that disk repair (such as using chkdsk with a parameter that enables read-write mode) can fix the partition's file system.
Microsoft KB 837326 (How to use the Vrfydsk.exe tool to check a volume for errors without taking the volume offline in Windows Server 2003) says vrfydsk is not for SP1: “If you have Windows Server 2003 SP1 installed on a system, we recommend that you use the Chkdsk.exe tool in read-only mode instead of using the Vrfydsk.exe tool. Read-only mode does not use switches.” Note that this statement does not clearly suggest using chkdsk on an unlocked partition. On the contrary, the KB article still starts out by saying “To obtain accurate results from the Check Disk tool (Chkdsk.exe), you must run the tool against a volume that is offline.” Carefully considering this statement shows that it is not providing a recommendation/solution for how an unlocked partition may be checked/verified: the way to follow the recommendation being provided would be to perform the implied step of locking the volume, and then, as stated, using chkdsk in read-only mode instead of using vrfydsk.
Having pointed this out, the curious have still been able to get a copy of vrfydsk by downloading it. Microsoft may have the software available from the Download Details web page about Win Svr 2003 RK Tools. (That redirects to the Download of Win Svr 2003 RK Tools executable.) Some other documentation may simply pre-date the recommendation to stop using vrfydsk with Win Svr 2003's first service pack:
MS page on Storage: “Fact and Fiction” has stated that only if the (read-only) status report reveals an error on the volume does it become necessary to run a read-write repair to fix the errors.
A Google conversion of a DOC file says: “One such example is vrfydsk, which helps to verify whether chkdsk should be run”.
- Other known limits/issues
Some versions of chkdsk have been known to cause some problems, rather than fixing them. This may always be a risk when there is file system corruption, but certain old versions could actually cause errors when there were no problems before chkdsk ran. Chkdsk (XP and Server 2003) deleting in-use security descriptors, and chkdsk in Win2K deleting in-use security descriptors, describe chkdsk incorrectly detecting a problem, then making changes to attempt to fix the non-existent problem, and causing problems by making those changes.
There have also been some other known problems, although some of these may have since been fixed.
Q283340: Windows XP chkdsk may not detect corruption when run in read-only mode; Q121393: Error Message Claims NTFS Files Corrupt (but files aren't corrupt) (Win NT 3.1 pre-service-pack); Q160451: Chkdsk /f causes Win NT 4 to halt; KB 872952: Win NT 4 SP 4's Chkdsk does not support the version of NTFS that the operating system uses.
- [#ntfschkd]: Notes about using chkdsk with NTFS
- Additional/Misc info (Windows)
Additional information is in the section on Autoscan.
- Graphical approach for disk checking
Some guides also provide a list of directions for starting a graphical interface for some disk checking. (This might just apply to Windows XP and newer?) See MS KB 315265.
The first step is to get to the properties of a drive. There are multiple ways to do that. One method is to use Windows Explorer, go to (My) Computer, and select the context menu of a drive. Another way is to go to the Disk Management MSC (Microsoft Console), and access the context menu of a volume (from either the top portion of the window, or the bottom portion).
From there, go to the Tools tab. Then press the “Check now...” button. In Windows 7 (and probably Vista too?), there will be a UAC shield. The UAC prompt can be used to show that Microsoft Windows will try to run CLSID A4C31131-FF70-4984-ADD6-0609CED53AD6.
Before finding what UAC reported, some preliminary testing was performed to see if there is a simple executable file that can be used to run this graphical interface. Apparently, there isn't. Using Process Explorer to look for a process that read the disk heavily, it appeared that the activity of this graphical disk scanner would show up under the Explorer process (if using “My Computer”), or under the MMC process (if using Disk Management within Computer Management). Getting to that graphical interface for disk checking is a task that can be done using the methods already described. The easiest alternate method might involve using that CLSID, and probably does not involve just running an executable file from the command line.
However, there may be little to no advantage to using this approach instead of just using the version of chkdsk which is meant to be run from a command line (and which runs in text mode).
Wikipedia's article for chkdsk states, “The results of a chkdsk conducted on restart using Windows 2000 or later operating systems are written to the Application Log”, with some variations on what that log entry looks like. (That partially quoted statement might have meant Win2K or XP or newer. Although Windows ME was actually released after Windows 2000, many people treat Windows 2000 as if it were newer than Windows ME.) Specifically, the log entry contains a “Source”, and that source may vary between different operating systems. The source may be “Wininit” or “Winlogon” or “Chkdsk”. In fact, the source of “Chkdsk” is used “on some instances of the Windows 7 operating system” according to Wikipedia's article for chkdsk. This implies that different releases of the same operating system may vary in this way, so do not expect the source to be easily and consistently predictable.
- [#cscofsck]: Cisco IOS
- [#autockfs]: Handling automatic checking of filesystem volumes
- Automatic checking upon system startup
Here is some code used by OpenBSD, in the /etc/rc file (available online: OpenBSD's /etc/rc file):

if [ -e /fastboot ]; then
	echo "Fast boot: skipping disk checks."
elif [ "X$1" = X"autoboot" ]; then
	echo "Automatic boot in progress: starting file system checks."
	fsck -p

(The quoted code above is chopped off: the remaining code then continues by checking the value of $? as returned by fsck -p, and then performing various actions or inaction based on that result, after some error handling.)
(Note: technically this is old information. Minor adjustments have been made after OpenBSD's /etc/rc file version 1.428. However, the change has to do with layout of information, not significant changes to the logic of what actually happens.)
The support for checking for /fastboot exists in OpenBSD but might not happen in other operating systems. (Implementations may vary.) Note that OpenBSD may later delete /fastboot in the same /etc/rc file. This might be difficult to circumvent without modifying the /etc/rc file, although the file can quickly be routinely replaced by placing a line in another file that gets automatically executed. (Further details about files that are automatically used during the system startup may be available in the section about system startup: files that get started automatically.)
This support for the /fastboot file is documented by OpenBSD's manual page for the shutdown command (and, more specifically, it is the “-f” parameter that mentions the /fastboot file).
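The marker-file pattern that /fastboot implements can be demonstrated harmlessly by using a temporary directory in place of the root filesystem:

```shell
# The /fastboot marker-file pattern, using a temporary directory in
# place of / so the demonstration is harmless.
root=$(mktemp -d)
touch "$root/fastboot"        # roughly what a fast-reboot request arranges

if [ -e "$root/fastboot" ]; then
    echo "Fast boot: skipping disk checks."
    rm -f "$root/fastboot"    # consume the flag: it only skips one boot
else
    echo "Automatic boot in progress: starting file system checks."
fi
```

Deleting the flag after honoring it is the detail discussed above: the skip applies to one boot only, unless something re-creates the file.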
- Checking when a filesystem is mounted
Ext2 filesystems are known to be automatically checked when mounted. That type of filesystem is not the only type to do so. (Ext3, at least, is another example of a filesystem type that does this.) This behavior may be adjusted by adjusting/tuning a filesystem volume before mounting the volume.
Automatic disk checking may happen when the filesystem volume is mounted by manually running a command, as well as if the filesystem volume is mounted automatically (such as during a system startup process). The fsck command may be run automatically when a partition is mounted. (This could substantially affect how soon the system starts services, including remote control services, when the system is booting up.) Factors may involve the 5th (“fs_freq”) and 6th (“fs_passno”) columns in the file system table located in the /etc/fstab file, as well as parameters used with filesystem tuning.
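For illustration, /etc/fstab entries with those columns might look like the following (the device names and mount options are made up for the example). A fs_passno of 0 disables boot-time checking, 1 is conventionally reserved for the root filesystem, and higher numbers are checked afterward:

```
# <fs_spec>  <fs_file>  <fs_vfstype>  <fs_mntops>  <fs_freq>  <fs_passno>
/dev/sd0a    /          ffs           rw           1          1
/dev/sd0d    /home      ffs           rw,nodev     1          2
```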
- [#autdscan]: Automatic disk checking in Microsoft Windows
- Windows 98
- Win98 RK Chapter 10: Disks and File Systems (section on “Troubleshooting Disks and File Systems”) had a section mentioning AutoScan (in the text configuration file that is typically called \MSDOS.SYS in that operating system).
- Dealing with errors
- See: repair notes.
Unix may store found file fragments inside a lost+found/ subdirectory. This subdirectory might not be off of the system's root directory, but may be directly underneath the most relevant mount point. For more information on dealing with these files, see the section about found file fragments. (Ext2 filesystems and successors: the section about making a filesystem volume has a reference to a mklost+found command.)
- [#chkfiles]: Found file fragments
If a filesystem check found some data that didn't seem to properly be in a location in the filesystem hierarchy, the data may be moved from the non-location to a standardized location (which may vary depending on what software was being used to check the drives). Unix's fsck may use the most relevant lost+found/ subdirectory that it can find/make. DOS may create files named \FILE????.CHK (where those question marks represent a four digit number, starting with four zeros, and then incrementing when there are multiple files).
According to some reports seen online, some newer Microsoft operating systems might use the same filenames (FILE????.CHK) but may place those files in a directory named something like \FOUND.000 (and, for a later run, even a different directory like \FOUND.001).
The filenames may be mangled beyond all recognition, often/always because the filenames themselves were totally lost by the computer. This means that some rather unknown files may exist.
To determine what the file is, consider viewing its contents. (Familiarity with file formats may be helpful.) If that doesn't clearly indicate exactly what file it is, and if the file seems to have binary characters other than common ASCII text, a quick execution of the “file” command may help to determine what type of file it was.
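For example, the following creates a stand-in for a recovered fragment and asks “file” what it appears to be. The fragment's contents are fabricated so the commands can be run anywhere.

```shell
# Ask file(1) to guess the type of a recovered fragment.
frag=$(mktemp)
printf 'some recovered readable text\n' > "$frag"

file -b "$frag"    # -b prints the type without repeating the filename
```

For plain contents like this, typical output mentions “ASCII text”; for a binary fragment, file inspects magic numbers near the start of the file instead.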
If those actions don't provide enough clues to clearly identify the file, and if the file is whole (rather than being just a part of a file), then hopefully some information previously stored using file integrity checking software may help to identify the file. (Thanks to AIDE's manual: “Miscellaneous” section for the excellent idea.)
Once a file has been identified, compare the found version of the file with any file that currently exists where the found file belongs. If the file in the current location is identical, then the lost file may not be needed. If the found version of the file is older than the newer file, then restoring the old version of the file may be undesirable. Then again, this version of the file may have had some desired changes. Some investigation into the file's contents, by someone who used the file, may help to determine what needs to happen.
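The comparison step described above can be as simple as a cmp invocation. Both files here are fabricated so the sketch runs anywhere; in practice one would be the found FILE????.CHK or lost+found entry, and the other the file currently in place.

```shell
# Compare a recovered copy against the file currently in place.
current=$(mktemp)
recovered=$(mktemp)
printf 'config version 2\n' > "$current"
printf 'config version 2\n' > "$recovered"

if cmp -s "$current" "$recovered"; then
    echo "identical: the recovered copy adds nothing"
else
    echo "differs: review both versions before deciding which to keep"
fi
```

When the files differ, comparing timestamps and inspecting the actual differences (for text, a diff) helps decide whether the recovered version holds changes worth keeping.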