This tutorial is meant to apply to newly installed operating systems, both as found on physical machines and for disk images such as those used for virtual machines.

It is recommended to first review Getting Trust, so that the new operating system installation can be trusted, and so that it has the best possible trustworthiness over the long term.

File status: Basic initial draft is complete. It would be better to have some greater care about backing up existing files. Some information about setting up the network, specifically about choosing a network address, may be better off going into a different tutorial.

Introduction

As with other tutorials, return to this guide after completing a referenced section.

This section may have quite a few subsections entitled “Overview”. Such sections probably don't have any actions that need to be taken. So, when reading such sections, don't try too hard to see what needs to happen. Just plan to absorb the information, in order to have an increased understanding of what is needed. If this information is known from a previous reading, and this guide is being used as a sort of checklist, then skipping past the overview sections may frequently be a very sensible approach.

Keeping a log

Keeping a log will likely slow down the process of setting up the computer. However, a thorough log will allow operating system re-installs to be re-implemented much more quickly. Many professionals do not see a need to be able to re-install quickly, because they believe that a re-install should generally be unnecessary. Even so, being unable to quickly re-implement a system's configuration may mean having to search for details (which might be missing), or failing to apply some changes (which are hopefully fairly minor and easy to fix). Instead of aiming for a mostly smooth transition where problems are relatively few in number, and hopefully easy enough to fix quickly for the result to be tolerable, wouldn't it be even nicer to have a more perfect installation with even fewer (even zero) problems?

Ideally, keep a record of every manually performed procedure (like installing some software), as well as every single file and option that needed to be manually updated.

This guide simplifies things by making a backup of files before manually changing them. By doing this, a list of the files that are modified can be seen by viewing the contents of the directory storing the backups. This won't include a list of files modified by procedures that involve installing packages. However, a list of packages that have been installed (and centrally registered by the operating system's package/program management system) can also be easily obtained. That should take care of most changes, allowing the remaining log to be relatively small (and, therefore, not quite so cumbersome to maintain).

It is advised not to rely on this guide as the only record. This guide includes a lot of details about how to perform tasks, and so is likely to be longer than the log would need to be. (Also, references to a website could lead to some surprises if the website's content ever changes.)

Backing up

This guide may involve making several changes to a system. Backing up files that are about to be changed is an excellent standard practice to adopt, and this guide makes several references to backing up files. Prepare to be able to back up files both quickly and easily, to minimize the distraction caused by making quick backups.

In Unix, this can be done by making a file called cpytobak using the guidelines from the section about backups: section about backing up by copying. This isn't too difficult to set up and then to regularly use, so take the time to do that.
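For readers who have not yet visited that section, the following is only a rough sketch of the general idea (the /origbak destination, the timestamped layout, and the minimal error handling are assumptions made for this illustration; the referenced section describes the recommended implementation):

#!/bin/sh
# cpytobak - rough sketch of a “copy a file to a backup area” helper.
# The destination directory and layout here are illustrative assumptions.
# (Expects an absolute path, e.g.: cpytobak /etc/myname )
SRCFILE=$1
BAKROOT=/origbak/$(date +%Y%m%d-%H%M%S)
mkdir -p "${BAKROOT}$(dirname "${SRCFILE}")" || exit 1
cp -p "${SRCFILE}" "${BAKROOT}${SRCFILE}"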

In Microsoft Windows, the situation may be a bit more complicated since configuration often involves not just changing configuration text files, but making changes in a registry. Also, many technicians have commonly made changes by interacting with an interface, without clearly knowing how the computer keeps track of those changes. It can be more difficult to back up the changed settings when there isn't much clarity on where that information really gets stored. This guide may not have a suitable quick solution yet. Sadly, the common practice may often be to make backups rarely, hope that problems don't occur, and expect to spend quite a bit of time resolving the situation if a problem does occur.

Have elevated privileges

Some example command lines here may show the command sudo.

If sudo is causing problems (since it hasn't been set up yet), and if you are a superuser (often indicated by the command prompt ending with a number sign (“#”) instead of a dollar sign (“$”)), then just leave off the word sudo.

Have write permissions

Note: This is typically not an issue for a new operating system installation just made from official media.

Run the mount command and see if any mount points are marked as being “read only”. (This might be indicated by having a comma-separated “ro” flag being shown.)
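For example, run:

mount

A read-only mount point might show up with output resembling “ /dev/wd0a on / type ffs (ro, local) ” (the device name and filesystem type are illustrative and will vary by system); the “ro” among the comma-separated options is what indicates read-only mode.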

If any partitions are marked as read-only, and if those partitions might (or certainly will) be written to, then re-mount them in “read/write” mode. In particular, early on in this process, /etc/ will need to be writable. (If /etc/ is not its own mount point, then make sure that / is writable.)

For instance, if the / directory is read-only and needs to be made writable, run “ mount -uw / ” to update the mount parameters to support writing. That example mount command takes effect immediately, but the change is not permanent: rebooting will usually restore the previously configured settings. Keep this in mind when rebooting, or alter the startup process (temporarily, if desired) so that the partitions are mounted in a mode that allows writing.
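On many Unix systems, the long-term mount options live in /etc/fstab. As a hedged illustration only (the device name and filesystem type are placeholders; check the existing file before editing), a line granting read/write access might resemble:

/dev/wd0a / ffs rw 1 1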

[#newsysec]: First steps common for many new machines: quick identification, and then early steps for security and remote access

Naturally, a lot of these steps assume that the person performing the steps has full permission to make the changes desired. (If the equipment is not owned by the person using the equipment, and the person has not been properly granted permission to use the equipment, perhaps the person should not be using the equipment.) This obvious statement is being brought up because many of the following steps will be unimplementable without those needed permissions.

Identify the machine

At least if this is quick to do, set the machine's local name early on. By doing so, the machine name can be referred to later. This way, one can easily verify which machine is being worked on, which helps ensure the important detail that the desired changes are being made to the correct machine.

[#hostname]: Setting the host name

Choose a good name. (See the references from the section titled “Commonly used DNS names”, which refers to IETF BCP 17. Beyond that, the host name guidelines may have some pointers, so follow those guidelines.)

Setting the system host name

Running the hostname command should show a host name.

(The referenced cpytobak is just a quick way to make a copy of the file by using the cpytobak program.)

Note that tee is not using -a, so this will just overwrite (and not append).

Some quick steps to perform

This should work on OpenBSD, and many other systems that store the host name in file using the /etc/myname location. For some other systems, this may need testing/adjustment. Back off from trying these quick steps if the /etc/myname file does not exist.

cat /etc/myname
cpytobak /etc/myname
echo newname.newdomain | sudo -n tee /etc/myname
cat /etc/myname
sudo hostname $(cat /etc/myname)
Fuller review/discussion about changing the host name

This host name can be adjusted most quickly by first backing up the old name (using “ cpytobak /etc/myname ”, if that file exists), and then using the syntax: “ sudo hostname newname.newdomain ” (or “ hostname newname.newdomain ” if sufficient permissions are available but sudo is unavailable). However, in some operating systems, the value may also be routinely set using the contents of the /etc/myname file. If that file exists, view the contents of the text file: specifically the /etc/myname file. (Unless someone made an error, like reverting to an earlier copy of a disk image, which does seem quite possible, some early usage indicated that running “ sudo hostname newname.newdomain ” seemed to affect a computer for a single session, but then did not have a long-term effect.) So, do not just trust the results of “ sudo hostname newname.newdomain ” without also checking that /etc/myname has been sufficiently updated (if that file exists).

Backing up the old /etc/myname

If the contents of the /etc/myname file are wrong, then those contents may need to be altered. The first thing to do is to make sure that those contents can be easily restored from backup. The old contents may not be something that is expected to be useful. However, as a general rule, when making changes, it is definitely wise to have a backup, and it can often be convenient to have a recent backup handy. So, let's make one.

Unix users who want a quick implementation, which can likely be used with software that is already included in the operating system, see backups: section on copying files. For other users, perhaps review that section or just see the more generalized section on backups.

If following the example way of doing things on Unix systems, one may be able to run:

cpytobak /etc/myname

Hopefully it is just that simple, because then many other examples for backing up files will be equally simple.

If the host name in that text file is wrong, using the appropriate symbol for redirection to a text file may be the fast way to handle that, although really any method of editing a text file may work. Perhaps check the manual page for the hostname command (e.g. OpenBSD's manual page for hostname) to determine what the file should look like. The following example might work well:

cat /etc/myname
cpytobak /etc/myname
sudo rm /etc/myname
echo newsys.localnet | sudo -n tee -a /etc/myname
cat /etc/myname

Once the file has been updated, either reboot (which will be far more time consuming, unless the system is going to be getting shut down anyway) or run:

sudo hostname $( cat /etc/myname )

The desired result here is that running the hostname command, with no command line parameters, should show the desired host name. Also, if the prompt has been customized to show the host name, then any newly created prompts will start to show the updated hostname.

(The following may just be speculation: until this document is clarified, if /etc/domainname does not exist, and if yp isn't being used for authentication, then don't worry about it.) The domainname command may read from the /etc/domainname file. (e.g. OpenBSD's manual page for domainname)

For possibly related information, there may also be a manual page for some functions. e.g. OpenBSD's manual page for gethostname

Updating the /etc/hosts file

UW's notes on Alpine state, “The fully-qualified name should be listed before any abbreviations.” This is not referring to an FQDN ending with a period. What this is saying is that a full domain name, such as sysname.example.com, should appear before any partial/relative names, such as just sysname. For instance, a full line might look like:

192.0.2.210 sysname.localnet sysname
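As an additional illustration (the addresses and names are placeholders, and many systems will already have loopback entries resembling the first two lines), a small /etc/hosts file might look like:

127.0.0.1       localhost
::1             localhost
192.0.2.210     sysname.localnet sysname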
[#vissysid]: Easily Visible Indicators for Quick System Identification
Customizing the system's prompt(s)
Overview: Rationale

This may be more critical when a command prompt is going to be the primary user interface (e.g. Unix in text mode, or Windows Server 2008 Core).

Many professionals may consider the cosmetic action of customizing a prompt to be less important than, say, setting up security. However, this may be more than just cosmetic: having a unique command prompt may help a person keep track of which system is being impacted by any one particular window. A nice reason to get the prompt customized is so that one doesn't end up elevating permissions on the wrong machine, which may also mean that the right machine doesn't get elevated permissions where those permissions will be needed. Such a mistake may be a bit cumbersome to figure out, and may be more than a bit cumbersome to correct if permissions were revoked for one user because of a belief that another user had them elevated. Especially if resuming a terminal multiplexing session after a reboot, it could be easy to think a prompt represents an ssh session to a remote machine when it really represents a local machine, or perhaps vice versa. There is some real potential for confusion about which machine a prompt belongs to.

Now may not be a bad time to adjust how the command prompt is displayed.
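As a minimal sketch (which assumes a Bourne-style shell, and which simply bakes the short host name into the prompt at the time the variable is assigned), something like the following could be placed in a profile script:

PS1="$(hostname -s)\$ "
export PS1

A fuller example, which sets the prompt along with some other defaults, appears later in the section about providing some nice defaults.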

Graphical background
Quite a bit of time could be spent creating a custom background for machines. The following may represent some quick ways to have a customized background.
Microsoft Windows machines
For Microsoft Windows machines, Sysinternals BgInfo is a downloaded application that may be a way to add a host name fairly quickly/easily. After starting the program, click on the button, near the upper-right corner, which is showing a timer counting down. Otherwise the program will automatically proceed when the timer is done.

Note: There may be more files that could be customized, including logon banners. However, focusing a lot further on assigning the computer name may be a task to delay until after some more security measures are taken. When the time comes, such details are covered in the section about providing a system name.

Securing/disabling privileged accounts from a new account

There is an implication that securing/disabling from a new account will require that a new account is created. (So, this section starts by discussing creating/using a new account. Then, this section will move on to details about handling existing accounts.)

A trusted account that has a secure password should be used to secure all existing highly privileged accounts. This includes accounts that may be “superuser” accounts or “Administrator” accounts, and preferably this task of securing these most important accounts will be followed up with the task of also securing “service” accounts. One desirable implementation for securing an account is to do two things: set the password to a secure password that is not known by untrusted people (or by automated programs that will cause harm), and make very sure that the new password is sufficiently documented in a secured location. The other option for securing an account, which may be done after applying the previous technique described, is to disable and/or delete the account.

[#lndsblsu]: Overview: Why the recommended approach is (logging in as, and) using a new account to disable the “superusers”

The recommended way to remove administrator access from an account (possibly by adjusting permissions, or using one of the various methods for Disabling a user account) is to use an account which will continue to have Administrator credentials. (This long-term account may be a new account: the history of the account doesn't matter so much. What matters is the current and expected future status of the account.) This is especially true if the remaining account is a new account for which certain processes might not have been fully tested, including being able to log in with the expected credentials, being allowed to log in remotely, and obtaining elevated permissions.

By using the account which is expected to remain, if the remaining account has a problem performing the administrative task, then the account's failure to make the changes results in no changes being made. The administrator account which isn't yet disabled may still have full permissions to help out.

There are a number of potential problems that could turn out to be actual problems: problems may exist with being authenticated to use any remote access methods that are needed, logging in, or running a program that elevates permissions. Even if the user is allowed to run the program that elevates permissions, the program may conclude that the user isn't a member of a necessary group, or the program may not acknowledge that group as being a group that has the necessary permissions. (The group might not have the needed permissions because it wasn't marked as an Administrator by an authentication database/server, or perhaps the group has those permissions but the machine isn't yet trusting the authentication database/server.) Some of these problems may be more likely to occur on some types of systems (such as a Unix system which doesn't have a permissions-elevating program installed) than on other setups (where the elevation program might be pre-installed). Still, making sure to use the new account which will retain permissions is a simple task. That simple task has the potential of noticing work-stopping roadblocks with any of those steps, and of having the problem be glaringly noticeable before permissions are reduced on an existing account (which could make those problems harder to resolve).

If any of those potential problems mentioned above turn out to be real problems when tested, then the new account may be unable to successfully revoke the superuser access from the old account. The good news about that is that it means that the old account likely still has superuser access. Therefore, the old account's superuser access can probably be used to correct the problem. If the new account is able to successfully revoke the old account's superuser access, then the new account does have all the configuration needed to make changes to the permissions of an account. In that case, the new account probably will have no technical restrictions that prevent the account from making other changes that might later seem desirable (such as if a need arises to re-instate the superuser access to the old account).

For these reasons, it is a good habit to use the new account to make changes.

Note that short-term success does not indicate things are set up sufficiently: there may be some configurations that affect how the system operates over the long term (especially after any reboot/restart procedure applied to the system). For example, perhaps a network address was assigned or a remote access method was enabled (by starting a service/program), but not in a way which will cause the same things to happen again when the system is rebooted. After the system is rebooted, for either of those example reasons, the result is the same unpleasant one: remote access is unavailable. In such a case, the new account's superuser access may not be any less sufficient for fixing the problem than the superuser access that was, and perhaps still is, possessed by the older account. So, once the new user account works, that doesn't mean that everything, including all long-term configurations, has been set up and tested sufficiently. It simply means that some of the early potential problems weren't work-stopping roadblocks when the work was performed.

[#mkrtacct]: Preparing an account which will be running with administrative privileges

This isn't just about creating the account, but also making it ready to be a remote administrator, including enabling remote access and so forth.

This may be helpful in identifying accounts to disable, and most implementations will likely require this in order to actually proceed with making changes to user accounts.

[#adsupusr]: Adding a user and providing superuser privileges to an account
Backing up the user database

The first step of this process is going to be to back up files that are going to be changed. (Granted, the original user database may not be all that exciting, and not be worth backing up, but this documentation encourages the regular practice of making sure things are backed up before changes are made. Not only does practicing this regular habit let somebody revert changes in order to go back to a previous version, but doing so also allows one to compare files to see what things looked like before the changes. In addition, documenting implementation of this practice serves as a way to list the files that a process is anticipated to change, so the documentation ends up being more usefully complete by including these details.)

Go ahead and back up the user database: Handling basic user operations: backing up/importing/exporting users may have some information on this.

Creating the new user
Choosing an initial password

Determine what this account is intended for. For an operating system installation which will be written to a disk image, it may make sense to use an easy password. This way people don't need to struggle with a more complex password anytime they start to use the disk image (or any derived disk image). Examples of where this may make sense include a disk image for a bootable CD, or a disk image that will be used as a parent/base image (with the intent that child images will be used on one or more virtual machines).

Details about using an easy password may be described in the text at #ezpwuse which refers to the text at #useezpw. Or, instead of lacking any strong characteristics whatsoever, it could be well-known as in “P@ssw0rd” (without quotation marks). (The password of P@ssw0rd is meant to be fairly memorable while having a capital letter, an easily-typed symbol, a number, and at least one lowercase letter). The password could even be a username/password combination meant to be a message to anyone using it. For example, if there is a username called stopthis and a password of acctnow, then the username and password may serve as a reminder to anybody who has to type that username and password. (Using that example may be a reminder, to anybody who needs to type the password, that the default account should be disabled quickly.)

On the other hand, systems that will be put into production use should have accounts with credentials that are as strong as feasible. (That statement sounds rather weak to those who would not want security taking a back seat to ease of use. Those interested in even more security may research and implement more advanced authentication methods, to increase the feasibility of using even stronger credentials. Others may benefit from a quick tutorial such as the one in the section about making passwords look random.)

Adding the user

This portion may not be needed if the account already exists. Some operating system installation procedures may provide an option for adding the user account to the new system.

The basic process is at Handling basic user operations: Adding a user. (Actually, depending on the operating system/environment, it may be faster to combine this step with the next few steps. However, the simpler instructions to follow may be to simply create a new user first, and then advance to the next steps.)
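As a hedged example only (the account name newadmin is a placeholder, command names and options vary between operating systems, and the referenced section remains the authoritative set of steps), on OpenBSD and many other Unix systems something like the following may create the account and set its password, with the -G wheel option combining in the later step of putting the user into an administrative group:

sudo useradd -m -G wheel newadmin
sudo passwd newadmin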

Configuring settings for the new user
New user procedure: common follow-up
Setting/creating groups

If the process of adding a user provides an easy way to specify one or more pre-existing groups for the new user to become a part of, then it may be convenient to add the group first so that it exists when the user is created. That way, adding the user to the group can be done using that easy method. Some details may (depending on the operating system/environment) be available in the process of creating a new user and/or putting a user into a group.

Adding the user to any needed groups is recommended if there is such an easy, known, convenient method to do so. Otherwise, if there is no such method known, there is no need to fret: this guide will address this more thoroughly later. If the method is not convenient, such as if there is a known method that involves using an unpleasantly unfamiliar text editor, it may also be nicer to just skip this for now. However, if a method is known and convenient, going ahead and doing so now may just save some steps later.

[#prvideft]: Providing some nice defaults

Some environment settings may be nice to customize, with the intention being to make life nicer for the end user when the end user logs in.

In Unix, this could be done by setting values in the /etc/profile file, although the end user may not have much control to make changes to such customizations. (Or, even if the end user could remove the values of variables, it seems rather counter-productive to force some settings to be changed (when /etc/profile is run), and then expect a standard end user to force those settings to later be unchanged (when ~/.profile gets run). Instead, just edit a file that the end user can control, so then a user (who desires to do so) may simply prevent the global preferences from occurring in the first place.)

For instance, in Unix, back up and then edit a text file named ~username/.profile to run one or more commands. (Those commands could be in a centralized location like /etc/profdef if desired.)

Examples of some things to set are environment variables for the prompt (for easy system identification), and the default editor (for convenience and/or compatibility with scripts), and enabling a history file (to help troubleshoot past events). (Other great things to set include the TERMinal type and the file execution PATH, but those are usually pre-handled by default.) For example, perhaps run the following (customizing the specified username in the first line):

NEWUSRHM=~username

... and then the following may be copied (potentially verbatim... there is a script file with a name that may be considered custom, although it may be used as is without needing customization)...

sudo cat ${NEWUSRHM}/.profile | sudo tee -a ${NEWUSRHM}/.proforig
sudo rm ${NEWUSRHM}/.profile
echo . ${NEWUSRHM}/.proforig | sudo tee -a ${NEWUSRHM}/.profile
echo . /etc/profdef | sudo tee -a ${NEWUSRHM}/.profile
sudo cat ${NEWUSRHM}/.profile
sudo cat ${NEWUSRHM}/.proforig
NEWUSRHM=
or, here are some alternate directions to do the same type of thing...
[ -f ${NEWUSRHM}/.proforig ] || cp ${NEWUSRHM}/.profile ${NEWUSRHM}/.proforig
cpytobak ${NEWUSRHM}/.profile
echo . $(echo ${NEWUSRHM})/.proforig | sudo tee -a ${NEWUSRHM}/.profile
echo . /etc/profdef | sudo tee -a ${NEWUSRHM}/.profile
cat ${NEWUSRHM}/.profile

Then, unless this is a child image of a base image that already has this file pre-existing (feel free to check by viewing the file if needed), create/edit the text file named /etc/profdef so that the file includes the following:

[ "$VISUAL" ] || export VISUAL="nano -w"
[ "$EDITOR" ] || export EDITOR=$VISUAL
[ "$PAGER" ] || export PAGER=${VISUAL:=less}
[ X"$PS1" = X"\$ " ] && export PS1=""
[ "$PS1" ] || export PS1=\{\\!\}\\u@\\l:\\h:\\w/\\\$" "
[ "$HISTFILE" ] || export HISTFILE=$HOME/.sh_history

These lines set variables only if they don't already have a non-empty string for a pre-existing value. The line will cause PAGER to point at VISUAL if it is set (which it presumably will be), or else the less command. (The syntax is supported by OpenBSD's ksh Korn shell.) Some people may think it is a dumb idea to run an editor when a program is trying to call a pager, which may be a simpler program. End users could simply override this example (by pre-setting the PAGER environment variable before this script runs, or just setting the PAGER environment variable (to a desired value) after this script runs). (A main reason it was deemed good to set that value was simply to show off some syntax for this demonstration file.)

Verify whether the user can remotely access the system

On a very newly installed operating system installation, dealing with remote access settings may make sense to do later (after setting up remote access). However, if remote access is already set up and working, it may be worthwhile to consider whether a user should be able to access the account remotely. Also, regardless of whether or not it is desirable for the user to be able to access the account remotely, it may be worthwhile to determine whether or not the user is, in reality, actually able to access the account remotely. This is a very sensible step to be in the habit of doing whenever a user account is created. If anything isn't right, fix it. If anything is unclear (such as if an organization's management requested a new user account to be created for a new user, but hasn't specified whether remote access is desired), then either find out immediately, or make the account with a sensible choice (like not allowing remote access) and note that this uncertainty should be followed up on so that clarity is obtained.

Providing the user with superuser privileges (a.k.a. Elevating to have privileged permissions)

To proceed to modify user accounts, first get Administrator privileges/rights. Having these privileges available does involve being logged in, and there are two basic approaches to obtaining those privileges. One way to have these privileges/rights is to make sure that the system was logged into with an account that has these privileges/rights granted automatically whenever the account is logged in. The other way is to use another account to run a program that grants these privileges/rights when requested (and when the request is authorized, passing any credentials check that may be needed).

See Running programs as a “superuser”.

In Unix
The section on Running programs as a “superuser” has notes about modifying file(s) as needed to allow a user to run as a “superuser”.
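As a minimal sketch of the sort of change that section might have one make (this assumes sudo is installed and that the “wheel” group is the chosen administrative group; edit the file with visudo rather than directly), a /etc/sudoers line allowing members of wheel to run commands as root might resemble:

%wheel ALL=(ALL) ALL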
In Microsoft Windows
Put a user into a specific group (in Microsoft Windows): specifically have the user be in the machine's local group called “Administrators”. This may be done directly, by adding the user, or indirectly, by adding a group that the user is a part of. (If this machine is going to be part of a Microsoft Windows Active Directory domain, and if the user is going to be a domain administrator, one way to do this is to make the “Domain Admins” group part of the machine's local Administrators group. This is generally done when the machine joins the domain, although it may be worth checking to make sure that worked right. Then, naturally, make sure that the user is part of the domain's group called Domain Admins.)
Perform customizations

If a custom prompt was made for the old user account (which may be a default account that is about to be discarded), determine whether that prompt should be used by the new user as well. If so, provide that customization for the new user. (One way to do this is to back up any default files, and then copy the files from another user who has the standardized customizations, like a preferred prompt, already applied.) However, if doing this, make sure that the newly created copy is owned by the user who will need to use the file, and that the file has any needed permissions so that the new user will be able to receive the benefits of having the file there. Also make sure that the file's contents are as expected: if changes were made by the user whose file is being copied, then such customizations might not be desirable for the new user.

Securing the accounts

First is information about the methods for securing/disabling the accounts: that is followed by information about identifying which accounts to secure. If there is a desire to review a list of accounts to be affected before changes get made, people are welcome to create such a list before making any changes. This order was chosen to potentially save time so that information, about how to disable accounts, may be conveniently applied immediately while reviewing the accounts to disable.

Overview and reason for the method used

Using a new account to disable a superuser will help to ensure that the new account can be effectively used, including having elevated permissions to perform user maintenance. If this task is being done after remote access has been set up, using the new account also makes sure that the new account has whatever authorization is needed for the remote administration to be a workable option.

How to secure/disable an account
Securing the account(s)

Escalated privileges are generally needed. (These privileges should be available if the steps for preparing an account to use with administrative privileges were just performed. By this point, Running programs as a super user should be a readily available option.)

Disabling a user account

Disabling an account may be preferred because the credentials aren't changed, and so the same credentials are available again once the account is enabled. To perform this, see Disabling a user account.

Changing the credentials

Another way to help secure an account is to change (or disable) all the credentials that could be accepted by an account. A crucial aspect of this approach is to make sure that all accepted credentials are changed and/or disabled. For example, User Authentication: Simple logins has information about how to change a user account's password. However, if a computer may be set up to allow access to anyone possessing a certain key, then any such key needs to be disabled.

The main difference between this and the approach of temporarily changing the password (discussed in the area about disabling access to an account) is the long-term plan. The plan in this case is not to temporarily change the password and then perhaps change it back later. The plan is to change the password to something that is not known by whoever had the previous password, with no plans to share the new password with whoever had it before.

Changing the password, instead of disabling the account, allows the account to be easily used by anyone who uses the new password. If the account is set up to automatically perform activities, this approach may lead to the account still being able to try to perform those activities. (Perhaps the attempt will be successful, and that could be a good thing. Perhaps the attempt will fail due to the password change, which could result in an error message that can alert an administrator to take care of that problem. In contrast, if the account were disabled, the automated task might not even be attempted.)

Note, however, that simply changing a primary set of credentials may not be sufficient to fully block someone from using the account. For instance, a Unix account may have at least three sets of credentials that are accepted, all using technologies which are fairly commonly available (installed by default on an operating system). (Those are standard passwords, key files, and one time passwords.) So, simply changing the credentials may not, alone, be a sufficient approach.

Identifying accounts to be secured

Detailed instructions, such as using a separate account when making these changes, do follow, but the first step in disabling certain account(s) is to identify which account(s) to disable. So, this first step is to gather some information.

To be very clear, it is not desired to simply go through this list and start disabling accounts without taking appropriate steps like making sure that an accessible, highly privileged account will still be available after disabling all the accounts in this list. Details will follow, but this first step is simply to identify all the accounts to remove.

At least some of the information may require using a privileged account to perform the steps.

Secure any account named “Administrator”
An account named “Administrator” is often targeted by attackers who may believe, and probably correctly so, that such accounts are more likely to have higher permissions. If there is an account named “Administrator” on a machine using Microsoft Windows, some technicians might think that disabling the Administrator account is not a good idea. This may require some further research to determine whether that is advisable in Microsoft Windows. However, at minimum, set a password so that the account may not be used by any untrusted individuals.
Disable any account named “root”

Use an account that isn't named “root” to disable an account named “root”. This account is often targeted by attackers because machines that use Unix and similar operating systems will traditionally, and commonly by default, have an account named “root” that has maximum privileges. Even users of other operating systems, such as Microsoft Windows, should probably avoid having an active account under this name so that attackers don't have an easier time obtaining an account (even if the account isn't privileged).

This is especially likely to be critical if the operating system is using one of the generally well-known passwords that are often used by default with some software configurations. If a protected machine has any of those passwords, make sure this gets changed!

Additional Administrators
Finding additional Administrators in Unix
  • Find out who is in a specific group called “wheel”. (A quick command for doing this is shown after this list.)
  • Also review the /etc/sudoers file. Run the following single command line and see if it provides any output. (If so, investigate any such users/groups.)
    grep -v -e ^root -e ^\# -e ^\%wheel -e \^\$ -e "^Defaults env_keep" /etc/sudoers

    This searches for users and/or groups other than root (which is commonly the name of a user) or wheel (which is commonly the name of a group), comments, or lines that set some defaults. Any other lines in this file are uncommon. That may not be a bad thing, as it simply indicates some customizing took place. (Later, this guide may cause such a customization, so if this guide is used for a base image then a child image may show that such customization has occurred.) However, a “customization” could also be an unwarranted change made by an attacker. Investigating that customization, so that it is understood well enough to be safely accepted, may be worthwhile.
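For the first item in the list above, the following may list the members of the “wheel” group on many Unix systems (note that this only shows the group's secondary members from /etc/group; a user whose primary group is wheel in /etc/passwd would not appear here):

grep '^wheel:' /etc/group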

Finding additional Administrators in Microsoft Windows
For any Microsoft Windows machines (whether on an Active Directory domain or not), check the local group “Administrators”. (See the section about Microsoft Windows in the instructions to “Find out who is in a specific group”.)
For machines using external authentication servers
Check the external authentication servers as well. For example, for machines trusting a Microsoft Windows Active Directory network domain, check the network domain account “Domain Admins”.
Other common names

Some names have historically/commonly been used for administrator accounts with more frequency than others. For example, any account with "admin" in the name, such as "admin" or "netadmin", may indicate a network administrator. Names including “sys” (especially the username “sysop”), “serv”, or “srv” may also indicate administrator or service accounts.

Accounts from a disk image

If this is a child/snapshot image, and if the “parent/base image”/“backing file” has any other Administrator accounts, then any accounts that were “inherited” by the child machine should probably be disabled. This is done so that those credentials are not accepted on this machine, even if they are breached due to a security problem on another machine that may use the same disk image (or any other disk image which is similar enough that it somehow has a record of the same credentials).

As an example of what is trying to be prevented: if an account exists from a “parent/base image”/“backing file”, a concern is that anyone who had access to read the parent image could then create a snapshot/child image and then perform password cracking attempts. If the extra image is moved onto a different machine, possibly one which is even less monitored by an organization's IT staff (such as if it gets copied onto an off-site machine that the attacker controls), then the organization may not be able to feasibly determine the full extent of damage done. However, even worse than obtaining the password from the parent image is if that same username and password provide a working combination on other child/snapshot machines. On the plus side, if multiple machines are compromised, the bigger problems might make it more likely that the problems will be detected quicker. However, generally it is nicer if bigger problems are not created.

As for how to find such accounts, the preferred way is to look at completely accurate documentation that was created when the disk image is made. Naturally, only a completely trusted disk image is recommendable.

If such documentation is lost, it could be re-created by comparing the list of accounts to the operating system default accounts. This may need some familiarity with the accounts that typically come with the operating system. A disk image used primarily for an administrative task, rather than serving end user accounts, probably will not need many custom accounts.

Service accounts
Finding service accounts

A pseudo-standard (implemented by some technicians, yet not by others, many of whom may not even know of the standard) in Unix may be to have service account names start with an underscore. So, running “ grep -i ^_ /etc/passwd ” may turn up some results.

In Microsoft Windows, a graphical method may be to use services.msc and then sort by the “Log On As” column. Any account other than “Local Service” or “Local System” or “Network Service” may be a customization to be aware of.

To find more, consider looking at all accounts that haven't logged in (like a normal user) for a dozen days. (This is enough time to cover someone who was gone for a solid week and a three-day weekend.) This particular method may be prone to show quite a few false positives, though those names that appear may be interesting to know about even if they aren't service accounts.

These sort of service accounts may not be as prone to be accounts that should be disabled or deleted, but rather they should be checked to make sure that their passwords are not old and likely to be known by others. Also the accounts should be restricted to not provide more privileges than needed. Securing these accounts effectively, without causing problems, may require being familiar with how the services are used (so that any other software, perhaps on other machines, doesn't break because of an unchanged password).

There may be some patterns to try to detect various service accounts, such as users that don't have both a first and a last name. One way to see active service accounts is to check what services are running. Another way to potentially find some others may be to review scheduled tasks. A common standard in Unix/similar operating systems is to have a service account called “nobody” which may be particularly restricted. Another indicator that the accounts aren't meant to be used by standard users is if the account doesn't have a user directory in a location where actual users do have home directories, such as if /var/empty is the home directory. For Unix, that can be checked in the /etc/passwd file, as can the user's shell to see if it is a restricted shell (such as /sbin/nologin).
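As a hedged example of checking for those indicators on a Unix system (the field positions assume the traditional seven-field /etc/passwd format, and the patterns shown are only common conventions, not guarantees), the following may list accounts whose home directory is /var/empty or whose shell mentions nologin:

awk -F: '$6 == "/var/empty" || $7 ~ /nologin/ {print $1}' /etc/passwd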

Service accounts may be treated specially. With some organizations, that might mean that they don't have standard password policies enforced. So, they may have some credentials that may work even though those credentials may be quite old. However, these types of accounts may be used by automated software, possibly on a different machine. So, while changing them regularly may be a great idea, some care may need to be taken.

Check if any such service accounts are being given an unnecessarily high privilege level.

A full review of such accounts might be best initiated at a later stage. For now, though, review any known accounts and consider whether it is desirable to change their passwords at this time.

Other accounts to target

Another common account is one called “Guest”. This may be a default account on Microsoft Windows systems. Disable it if there's no chance that anybody should be using it. A “Guest” account should not be used by regular users. This is true simply because regular users should likely have their own credentials (which probably provide them with fewer restrictions).

If a publicly-accessible “Guest” account is seriously desired (which may make a ton of sense in some settings, although being highly inadvisable in other settings), give due consideration to factors such as how much access that account needs. It may be worthwhile to provide such access only on certain machines that have restrictions, such as not being on the same network as the company network. It may also be worthwhile to implement this with a different account name (and then document the account name to use). The benefit to using a different account name, even if it is documented on a sign right by the computer, is that the other account name might not be automatically targeted by as many (remote) attackers who routinely, automatically, try to use an account name of “Guest” during some attack attempts. Such attempts may be more easily noticeable if there are not also legitimate uses of an account named “Guest”.

[#netifup]: Determining, and setting, a (temporarily) usable network address
Determining addresses that may be desirable
Determining pre-existing addresses
In all likelihood, manually checking the currently used network addresses from a local display (or remote viewing of a local display, such as remotely viewing the contents of the video output of a virtual machine) may be the fastest way to see what addresses are already in use. However, in cases where that isn't nearly so convenient, time might be saved by reviewing methods to determine the network addresses that a machine may already have configured and running. For instance, the MAC address of a NIC that is part of a virtual machine can often be configured when the virtual machine is powered down. If a machine is automatically configured to use a specific address, then using that address may be desirable.
General rules for address selection

Those with substantial skills/experience in the basic networking protocols being used (such as both IPv6 and IPv4) may just want to check out the alternatives to commonly used addresses. If even one of the basic communications protocols being used is less familiar, consider gleaning from the following text.

Before setting an address, it will be useful to determine whether or not the potential address will be suitable.

Subnetting
Some knowledge that will be useful is an understanding of which addresses communicate with each other without requiring traffic routing/forwarding. Such knowledge can help one test connections early and possibly to enable and test some services very early. (It is known that a hyperlink to a guide for subnetting, either a rather full one or just one that deals with something like IPv4 /24 address ranges, would be helpful for this text. The Basics page may have some information added in the future.)
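As a brief worked example of the concept: with an IPv4 /24 prefix length, the first 24 bits identify the subnet, so 192.0.2.10/24 and 192.0.2.99/24 are on the same subnet and can normally communicate without any routing/forwarding, while 192.0.2.10/24 and 198.51.100.5/24 are on different subnets and would need routing between them.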
[#altcmnad]: Alternatives that are better than commonly used addresses
The page about avoiding commonly used addresses (like early subnets within 192.168/16) contains information that may be useful even to many networking experts.
Things to be aware of regarding link-local addresses
This is discussed further on the page about Network Addressing (perhaps Network Addressing: link-local/temporary addresses?)
Choosing an address for use

If this computer is not the first device to be configured on the desired network, identify the subnet which is used to communicate with the other devices on the subnet. This can generally be done by investigating the IP address and prefix length of another device on the network. (Many devices and computers that use IPv4 provide an IPv4 prefix length indirectly by providing a subnet mask.) The main thing to be careful of is to avoid a conflict with an address that is currently being used, or which may be handed out or reserved. (Further details to elaborate that would be useful.)

If this is the first device, determine what subnet the device should be placed on. The network addressing page has details about planning what subnets/addresses to use.

[#cfgipfxd]: Configuring reserved IP addresses

Note: this section only really makes sense after the network is supporting reserved IP addresses. (This may not be true for the first machine on the network; if appropriate, then skip this section for now.)

If this machine is going to be getting assigned a static IP address, then do the following:

  • If using the stateful automatic addressing protocol of DHCPv6, identify the DHCPv6 UID (which may, non-uniquely, be abbreviated as “DUID”).

    Details about how to do this may vary based on which DHCPv6 client software is being used. (This software might need to be installed; in that case, either install the software (using a dynamic, non-fixed IP address if necessary) or just deal with this later.) Details about how to do this are described in the section about DHCPv6 clients; look for information about locating/creating a DUID.

    (For details about why this is being done, one may see automatic IPv6 addressing: DHCPv6 Unique Identifier (“DUID”) addressing.)

  • Update the automatic addressing servers. (For IPv6, see the section about the DHCPv6 server software being used.)
  • (Upon updating the configuration that will be used by the automatic addressing server, make sure that the running server software is using the updated configuration. In Unix, this might require doing something like re-starting the server software.)
  • Make sure that the client has any needed software to be able to support that automatic addressing. If that client software requires any sort of setup (e.g. WIDE-DHCPv6's dhcp6c may require a configuration file to usefully work), take care of that.
  • Testing the automatic addressing may be a great idea.

Then automatic addressing may be used to change the currently used network address.
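As a hedged illustration of what “updating the automatic addressing servers” might involve (this assumes an ISC-style DHCP server is what provides automatic addressing; the host name, hardware address, and IP address below are all placeholders), a reservation entry in the server's configuration might resemble:

host newsys {
        hardware ethernet 00:11:22:33:44:55;
        fixed-address 192.0.2.210;
}

For DHCPv6, the reservation would typically match on the client's DUID rather than a hardware address; the exact syntax depends on the server software being used.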

[#chnetady]: Changing the (current) network address
Setting an address

See information about setting a long term network address. (See: setting a network address.)

Once a network card has one known address, some methods to assign a different IP address include:

  • One may try manually initiating a (re-)request for an address to be assigned automatically. However, unless changes were made, chances may be fairly high that the re-attempt will simply result in the same address being re-assigned. Therefore, to effectively implement a change with this preferred method, make whatever changes are needed to the automatic addressing server. (The specific way to do this will vary based on the implementation that is being used for automatic network addressing.)
  • Assigning an address temporarily (by changing some current settings, possibly in a way which does not save the changes long term)
  • or, assigning an address long term. (Note that changing the long term settings, which will likely take effect after a reboot, may or may not take immediate effect. Whether or not there is an immediate effect may depend on the operating environment, although the changes can typically be made immediate in some fashion or another. Having the network adapter re-initialize the address, based on the newly-saved settings, may result in the network card being configured with the new long-term settings.)

(The following paragraph may already be summarized elsewhere...)

Note that for a virtual machine using a hard drive image that is expected to be a “parent/base image”/“backing file”, it may be best not to store a specific, temporary network configuration long term. Naturally, one option to make a short term change may be to make a change that both has immediate effect and which also could take effect long term, but then remembering to reverse the changes so that the effect of the changes ends up being only temporary, short term.

Do some sort of ifconfig/netsh command lines need to be here? Or are they somewhere else?
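In the meantime, here are some sketches only (the interface name em0, the connection name, and the addresses are placeholders, and the exact syntax varies by operating system and version). On a Unix system, temporarily assigning an IPv4 address might look like:

sudo ifconfig em0 inet 192.0.2.210 netmask 255.255.255.0

On Microsoft Windows, a roughly equivalent command might be:

netsh interface ip set address name="Local Area Connection" static 192.0.2.210 255.255.255.0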

Scope of these instructions

(This text provides background and details to satisfy curiosity, and is not essential reading for those wanting to just complete the steps most quickly.)

In order to set up a network interface so that it is usable, an address needs to be assigned. There are different approaches to setting up network addresses: these early instructions are simply meant to get the most basic network settings set, at least temporarily, enough so that the hardware can be tested and so that communication with another machine may be achieved and demonstrated.

Depending on routing settings, communication may only work from another machine on the same localized network/subnet. Communications with more remote machines may not work until routing is also set up: Adding routing settings is covered in other instructions. Getting things to automatically work again later is also covered in another section. (It is acknowledged that further details, or even just references to those details, would be good to have here.) Getting that to work may (depending on how things are done automatically) be more advanced, so the first step is to make sure basic communications are working. These basic steps may also be enough to make some remote access methods feasible, which may be very nice for some systems where using the local console hasn't been made comfortably convenient (which is often the case, due to practical reasons, for systems which will not typically be used locally very often).

(These limitations are covered, but to keep these early settings simple, they are covered in a later section, after setting and testing/verifying the IP address.)

Performing what needs to happen
NIC preparation

Several of these steps may not be needed in many of the cases: some may be able to skip straight down to setting a new IP address. However, these additional steps are listed in case they are helpful/needed, and should be referred to if troubleshooting becomes necessary.

Device needs to be visible

In Windows, make sure the device shows up in Device Manager and in the output of IPConfig.

In Unix, generally the device should be shown in the list of available NICs. In some cases, the network interface may not be fully visible until networking is enabled.
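For example, the following is a common way to list all recognized interfaces on many Unix systems:

ifconfig -a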

[#enablnet]: Make sure networking is enabled

In many cases, this isn't needed because networking is generally enabled by default. However, since this might be needed in some scenarios, remember this as a possible troubleshooting step.

OpenBSD
Probably not an issue, as noted by OpenBSD FAQ 5.11.3: My IPv6-less system doesn't work!. (That FAQ is simply using an attempted IPv6 customization as an example of a generalized concept, so it applies even if IPv4 is not being used.)
Linux
The following commands might be needed:
Debian
The command “ /etc/init.d/networking start ” may be helpful (or use command line parameters of restart or stop)
Redhat
The command “ service network start ” may be helpful. (A similar command that may be able to be run would be “ service network restart ”.) (Running “ service network stop ” may stop the service.) Or, perhaps try “ /etc/init.d/network restart ”.

These commands may not be needed in many cases. See networking in BackTrack.

Microsoft Windows

This is probably not an issue: Basically the way to check if a networking protocol is enabled is to check network bindings on the device. (Further details may be added here at a later time.)

For Microsoft Windows 3.1 (and newer versions, possibly including Win95 but not OSR2?), IP-based networking may typically not be provided until added. (In modern networking, it likely makes sense to support a LAN connection by default. That probably was not done with Win98.)

Naturally, the specific networking protocols will need to be supported. For example, for Windows 2000 and Windows XP, IPv6 may need to be installed/enabled. (Further details may need to be added.) IPX had been enabled with older versions of OpenBSD, although such support has been dropped.

Media sense/state

Information about this has been moved. See: network link light.

Making sure the device is enabled

This sort of step may be something that must be taken care of before an IP address is assigned. This is the case for Microsoft Windows.

In Unix, the interface will need to have a flag of “UP”. (Verify: this may require a valid network address to be assigned.)
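For example, on many Unix systems the following may bring up an interface (em0 is a placeholder for the actual interface name):

sudo ifconfig em0 up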

[#stnwipad]: Setting an IP address
Generally the steps involve setting up a network address that is on the same “subnet” (or “network” in the case of using a networking technology that doesn't use subnets) as other (current) network devices.
Assigning addresses the easy way: automatically
Overview

(This text helps determine whether an automated solution should be used. It also provides background and details to satisfy curiosity, and is not essential reading for those who both know whether or not to use an automated method, and who also want to just complete the steps most quickly.)

Reasons why to consider not doing things automatically (even once)

If an automatic addressing service is not set up on the network, this “easy way” may be best to skip. This will be true for a brand new network with no devices other than the first computer that is currently being set up. Also, chances are that a machine which will provide automatic addressing services should itself use manual network addressing, so if this computer will fill that role, it probably does NOT want to use this route.

Despite these exceptions (common on new networks), this option is being mentioned first because, for many machines (which these instructions may be valid for), the automated approach ends up being simplest.

Note that although using automatic addressing may be one option, using such a service may affect more than just the network address and prefix length. Routing and name resolution are commonly affected, but aren't in all cases. So, before blindly deciding to proceed with changing a couple of things (like an IP address and subnet mask) automatically, just be warned that actions attempting to automatically update some settings may end up affecting other things.

For a system that is setting up its first network card, altering settings like routing and name resolution will usually not be a problem. (In fact, changing those settings may be the opposite of a problem: it may take care of a task that would otherwise be taken care of in the near future, so it ends up being a pleasant time saver.) However, changing some of these sorts of settings may be a problem that occurs when setting up a second network card for DHCP on the same system.

For some systems (namely those providing at least some network services), relying on a dynamically-assigned address may not be preferred (at least by some network technicians) for a long-term solution. However, those sorts of objections likely don't apply to this section: as mentioned earlier, these instructions aren't necessarily about setting network settings for long term use. If an automated solution sets IP addresses quickly, then that may be the super-fast way to achieve some immediate goals, so that probably is not a reason to skip automated addressing.

If the expected effects of automatic settings are deemed acceptable, and if the network already has an automatic addressing service set up, then using automatic addressing may be a very simple method to get a valid address assigned. Therefore, the generally safe, good, and useful recommendation is to go ahead and use that method (if it is available and simple).

Proceeding with using automated steps

The section on supporting IPv6 router advertisements on a client starts with some steps that may commonly only need customizing when supporting IPv6 (e.g. supporting the kernel configuration, enabling NDP on the interface). Make sure that the changes are made long-term.

If a specific address is going to be reserved, make sure that information is in the automatic addressing server. (See configuring reserved IP addresses.)

For further details, see instructions for setting up a client for automatic network addressing.
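(As a minimal, hedged example of what the automated route may look like: on many Unix systems using ISC's dhclient, the following may obtain an IPv4 lease, where em0 is an assumed interface name. In Windows, “ ipconfig /renew ” may serve a similar purpose.)

# Request an IPv4 address (and possibly other settings) from a DHCP server
sudo dhclient em0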

Setting up network addresses the manual way

For an IP network, determine what subnet to assign the addresses onto. If there are other machines on the network, find out the address of an existing card on the same subnet. Pick another address in the same subnet to use. Otherwise, select an address and a subnet. It is good to try to choose an address and a subnet that won't interfere with accessing other accessible machines. The network addressing section has some recommendations (such as avoiding commonly used addresses, like early subnets within 192.168/16).

DAD

(Note to self: simplify text? Referring to ARP tables? Actually, should DAD (duplicate address detection) be a separate section in Techn's that gets referred to here?) Attempt to communicate with that address: ping it. If there are no ICMP replies, abort the ping process and check the ARP tables. (That may reliably show a device.) If there is an ARP entry, try choosing another address.
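(A hedged example of that check; 192.0.2.70 is only a documentation-range placeholder for the candidate address. Windows ping omits the -c option, and on Windows the “ arp -a ” output may simply be reviewed by eye.)

# Send a few pings to the candidate address
ping -c 4 192.0.2.70
# See whether any device answered an ARP request for that address
arp -a | grep 192.0.2.70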

If the network does, or might, use an automatic addressing server, the safe way to proceed is to verify the scope of automatically assigned addresses, and to choose an address outside that scope. That won't be a concern if it is known that automatic addressing isn't yet being provided. Choosing and temporarily using an address that might be, or is, in the scope will probably run the risk of creating an address conflict. That can cause serious problems, although doing so temporarily may be a risk that is acceptable in some scenarios (and not others). (For example, on a home network without remote access, where it is known that nobody else is actively using the specific network being worked on, the unpleasantness of accidentally interfering with another device may be less undesirable than delaying a working network setup.) In other, more rigid and strict environments where even temporary problems should be avoided even at substantial costs, such a risk might be much less likely to be deemed acceptable.

If the intended address is not going to be a good one, re-selecting an address and a subnet may prevent problems.

Proceeding with the change
Once a good choice has been made regarding what address to use, see: setting network addressing manually.
Verification

Once it is believed that an address has been set, this may be verified by checking what network address(es) are in use.

Setting up remote access

(This section covers not only the initial setup of the ability to use a remote access protocol, but also how to set up authentication methods that remote users may use. Even if remote access is working with basic passwords, check out this section's details about using stronger methods for authentication.)

In some cases, remote access may be a low priority feature. (That may be especially true in cases where the system is not currently connected to a network.) However, in other cases, a machine might not be set up for convenient, comfortable local interaction. Spending resources, including time and effort, to fix such a problem (soon) may not be deemed worthwhile if the likelihood is that the machine will not be used locally very often. Virtual machines might be accessed through an interface which, although workable, isn't as convenient to use as a nice remote access method. Therefore, for some setups, getting this working nicely is going to be an early priority.

These instructions will require at least simplistic networking to be functional. More advanced features, such as network routing/forwarding and tunneling traffic over a VPN, may not be required, depending on what machines will be involved. Especially if such networking features are desirable in order to get remote access working well and tested, feel free to temporarily skip this section to make sure those items work. These instructions may be able to be followed more simply once more networking features/settings have been set up. In other cases, though, there may be no need to delay, so it makes sense to introduce this section nice and early.

Setting up and running the remote access server

Set up some remote access.

Choose a protocol and implementation

Although there are several choices available at the page about remote access solutions, this guide recommends reading the following guidelines/recommendations before proceeding to that page. That page will likely have further details.

If using Unix
Overview of recommendations
For command-line Unix, the recommended remote access solution involves using software that implements the SSH protocol to provide a remote command line. For a graphical environment, a very popular route has been to use OpenSSH to implement encryption, and then to use some other software documented by the “remote access software”: section on sharing visual/graphical screens.
Steps to take

Whether performing steps graphically or via text mode, either way involves using the SSH protocol. The section on running an OpenSSH server is recommended; particularly the section on restricting which groups of users are allowed to log into the OpenSSH server. (That is within the section about changing OpenSSH options.) If any such options are changed, be sure to reload the OpenSSH configuration file.
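(As a hedged sketch of restricting logins to a group: the group name sshusers is only an assumed example, the paths shown are common OpenSSH defaults, and the reload step assumes the server keeps its process ID in /var/run/sshd.pid.)

# Back up, then append an AllowGroups restriction to the OpenSSH server configuration
cpytobak /etc/ssh/sshd_config
echo AllowGroups sshusers | sudo tee -a /etc/ssh/sshd_config
# Ask the running server to re-read its configuration
sudo kill -HUP $(cat /var/run/sshd.pid)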

Then, for a graphical system, after getting the SSH server set up, create a port forwarding rule using SSH and effectively use that to tunnel the traffic from some sort of solution to remotely access a graphical/visual display. (For now, this guide may be most complete when using a Remote Framebuffer (VNC) implementation, so Remote Framebuffer (VNC) is the current recommendation if a less experimental path is desired.) Remote Framebuffer (VNC) implementations have been a popular method, a method which may be implemented freely, and a method which works with a number of graphical operating system/environment platforms (including non-Unix platforms). (Although X forwarding is probably the option which is most compatible among all sorts of different flavors of Unix, there are multiple security holes that can be introduced if it isn't set up correctly. Therefore, it is not the solution being generally recommended (by this guide) for people with minimal to no experience with Unix.)
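(A hedged example of such tunneling: assuming a VNC server on the remote machine listens on port 5900 (display :0), and that remoteuser and remotehost are placeholders, the following forwards a local port through SSH. A VNC viewer would then be pointed at localhost:5901.)

# Forward local port 5901 to port 5900 on the remote machine, over the encrypted SSH connection
ssh -L 5901:localhost:5900 remoteuser@remotehost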

Some other options which may be more specific to X11, and which might be better than Remote Framebuffer (VNC) implementations, are X Persistent Remote Applications (“Xpra”) or NX. Some of these options may replace Remote Framebuffer (VNC) as the general recommendation in the near future. Instead, Remote Framebuffer (VNC) may become a recommendation for those who simply want widespread compatibility, and Xpra or NX for those who prefer higher performance even if the higher performance solution is X11-specific.

Options compatible with Microsoft Windows

For Microsoft Windows platforms, a very popular protocol has been the RDC, although it is imperative that such a protocol be secured through port forwarding with SSH or some alternative strategy to secure the connection, such as the protection that RWW offers. The Remote Framebuffer (VNC) option is another remote access method which is quite popular, although it also needs to be secured, a task which may be done with port forwarding with SSH. A nice thing about using Remote Framebuffer (VNC) is that the same protocol (and so some of the same software) is also widely used for other operating systems.

More options may be available from the remote access solutions page.

Determine how the connection will be locked down so that all connections are sufficiently secured
Ensure the traffic will be able to route to local software (so, adjust firewalls as needed)

If a firewall is blocking the traffic, including a software-based firewall solution, the remote access software may seem to not work. Plan to open up the traffic before attempting to make a remote connection.

Note: Some of this information may be moved to a page about traffic routing, firewall configuring, etc.

Unix
TCP Wrappers

If implemented, this can be very easy to overlook. This probably isn't implemented, but if there are troubles, consider checking for /etc/hosts.deny (and/or /etc/hosts.allow). For more information, see FreeBSD TCP Wrappers Guide, OpenBSD's man page for TCP Wrappers. (For even more reading: TCP Wrapper example found on netbsd.org site.)

As a quick command that may help in some circumstances, it may be useful to back up the relevant file and, as a superuser, use:

echo sshd : ALL >> /etc/hosts.allow
Firewall rules

If a firewall is enabled, perhaps it needs to have its behavior adjusted. This might be done by editing the configuration file(s) used to define the rules, by temporarily/dynamically changing firewall rules (e.g., if using OpenBSD's default firewall, with pfctl), or by disabling the firewall entirely (if that is safe to do; in the long term this is probably not preferable over just changing rules as appropriate).

(If this is suspected, see firewall rules for details.)
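(Hedged examples for OpenBSD's pf; other firewall software will use different commands, and disabling a firewall should only be done if that is safe in the environment.)

# Show the firewall rules currently loaded
sudo pfctl -sr
# Reload the rules after editing the configuration file
sudo pfctl -f /etc/pf.conf
# Temporarily disable the firewall, and later re-enable it
sudo pfctl -d
sudo pfctl -e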

Note: Platforms that support this might also support TCP wrappers?

Microsoft Windows
Windows Firewall

The Windows Firewall built into Microsoft operating systems from Windows XP Service Pack 2 and newer may be blocking traffic. The following must be tested/verified: an MS Goodies blog post at Blogspot (page on RDC) describes opening ports in Windows Firewall via the registry. Under HKLM\System\CCS\services\SharedAccess\Parameters\FirewallPolicy\DomainProfile\GloballyOpenPort, find the value 3389:TCP:*:Enabled:RemoteDesktop; changing 'Enabled' to 'Disabled' removes the opening. Repeat the same for HKLM\System\CCS\services\SharedAccess\Parameters\FirewallPolicy\StandardProfile\GloballyOpenPort.

Testing the remote access server
Overview: motivation of testing versus adding security

(This might be overly verbose/trimmable...)

(This overview section describes, for the curious, why things are being presented in the order they are. This overview does not contain steps that need to be followed.)

(This overview was a bit long, and so has been moved to a separate page: why early testing of remote access, before security, is recommended.)

Creating a remote connection
Using SSH
See remote access: SSH terminal clients.
Using the remote connection

Once the remote connection is established, make sure to follow up to make sure that security is working. (Perhaps this information should be moved to later?) Then make sure that the user on the remote machine has the desired permissions. For example, make sure that privilege escalation works. If so, there is little reason to remain logged into the local console if the machine is at all inferior (due to the interface being used, or perhaps due to its position being less comfortable to use) compared to the experience of remotely accessing the machine. One may still wish to be near the machine until some other items are completed, such as making sure that everything (including running the remote access server, the generally earlier step of network address assignment, and the even earlier step of determining which media to boot off of) will work after the system initiates a reboot. If it does, then, finally, at that point, it may make sense to leave the site.

[#rmtacsec]: Improving the security of remote logins

Note: If network addresses that are currently being used are temporary and are not going to be used long term, do not make long term configurations that restrict remote logins to only occur from those specific network addresses.

If the machine being created is meant to be a template, and so the hard drive will be used as a “base image”, then adding authentication methods may not be worthwhile. If any credentials supplied need to be revoked, then the additional acceptable credentials will just be another problem to take care of whenever a new child image is created.

However, for systems that aren't going to be having the hard drive image copied to other (virtual, or physical) computers, do consider using key files and single use credentials. These processes are recommended for those who haven't used them before, even if there's no immediately-recognized need for them. They may be worthwhile to set up just for the educational aspect. Once they are understood, their usefulness may be more apparent, so potentially useful situations may be better recognized.

Using key files

Key files tend to be more secure than passwords, although key files may be more difficult to control than passwords and, like passwords, they lose their effectiveness if they are not controlled. The main reason that key files are preferred, from a security standpoint, is because the key length of the shared secret tends to provide more complexity than passwords, and that leads to the shared secret being more difficult to compromise through brute force measures. (Such keys are typically generated with some randomness and so are also less likely to be set to a value that is easily guessed by somebody who knows the user that the account is meant for.)

Although there may be some additional difficulty in logistically handling the files to make sure that every file is accessible when needed, key files can actually be easier to use (e.g. if an agent for SSH keys is being used), actually leading to people's lives being easier. Although they may take some more time to set up, the general idea is that after they are set up, the end user just needs to type a password once each time the user logs into the local system. Then the user can log into as many remote systems (which are set up to support the SSH key), as many times as desired, without requiring any effort dealing with passwords.

See: Using key files for authorization. (This may take a while, at least several minutes, the first time that this guide is followed.)
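(As a hedged outline of the general idea, not a replacement for that section: the key type and size, the filenames, and remoteuser/remotehost are assumptions. If the remote account's ~/.ssh directory does not exist yet, or has overly loose permissions, the remote server may still refuse the key.)

# Create a key pair (protect the private key with a good passphrase)
ssh-keygen -t rsa -b 4096
# Append the public key to the remote account's list of authorized keys
cat ~/.ssh/id_rsa.pub | ssh remoteuser@remotehost "cat >> ~/.ssh/authorized_keys"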

Single use credentials
One-time passwords
It IS possible to type a working password on a computer with a keystroke logger, and not be susceptible to attacks from that password being used. To learn about an automated way of doing this, see the section on One-time passwords.
Reviewing previous steps and preparing for upcoming steps
Reviewing previous steps

Going through this guide without skipping steps is a great, recommended way to cover the material. However, in some controlled environments, there may be interest in getting a decent text editor installed before following other steps (such as editing text that disables unneeded accounts that have Administrator access). If there is the potential that such steps were skipped, review the prior steps (to take care of any remaining issue(s) that may have existed from skipping steps). Some steps may not seem quite as crucial to double-check because security ramifications (if the steps are not taken) may be anticipated to be minimal, and if the step is skipped then there is a high likelihood that the problem will be clearly noticed. However, others may have security ramifications and be something easily forgotten about. Those may be more worth the effort to double check. Here are some items to consider re-checking.

Preparing for upcoming steps
Preparing software package management

One step which may be worthwhile to do (and could already be done, but which is otherwise likely good to do at this time) is to make software package installation easier. Upcoming steps will likely involve installing software. To make software package installation easier, useful steps could include becoming familiar with package management tools, and making sure that the package management software has a valid pointer (to a location where the archived software packages are stored). If this guide has been followed closely, then the network is enabled, and the system is at least somewhat secure, so now may be a good time for this (by following the steps that are about to be provided).

Find out what software package management system(s) may commonly be used with the operating system that is being used. (That may be covered by the section on operating systems.)

Check out the section about software package installation: software package systems for a software package management system that is commonly used with the operating system being used, and see if there are details about how to set the software repositories.
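(As one hedged example: for OpenBSD's pkg_add, a PKG_PATH value along the following lines has commonly been documented. The mirror shown is just one possibility (a nearby mirror may be preferable), and nano is simply a sample package name. Run these as a superuser so pkg_add can write to the system.)

# Point pkg_add at a package repository matching this release and architecture
export PKG_PATH=ftp://ftp.openbsd.org/pub/OpenBSD/$(uname -r)/packages/$(uname -m)/
# Verify the pointer works by installing a small sample package
pkg_add -v nano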

Communicating with the Internet

Once an IP address is assigned to a NIC, see if a remote site (like google.com) can be reached. If so, then Internet communication seems to be working. If not, then see if a remote Internet site can be reached via IP address. For instance, if using IPv4, try to “ ping 8.8.8.8 ” (or perhaps one of the other usable DNS servers, since many of them will also respond nicely to ICMP.) If so, then the issue is likely just with name resolution. (The section about usable DNS servers may have more information.) If the remote system can't be reached, but the default gateway can, then there may be a couple of possible causes. One possible cause may be that routing isn't set up. Adding a default gateway may be the key. This is described in the section about network traffic routing.
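(A hedged example of that testing order; 192.0.2.1 is only a placeholder for whatever address the default gateway actually uses, and Windows ping omits the -c option.)

# Test reaching a remote site by name (exercises routing and name resolution)
ping -c 4 google.com
# Test reaching a remote site by address (exercises routing without name resolution)
ping -c 4 8.8.8.8
# Test reaching the local default gateway (placeholder address shown)
ping -c 4 192.0.2.1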

Serving as key/core network infrastructure

Setting up programs that provide network services is something that may not be needed at all by many computers. In other scenarios, it may make a lot of sense to delay this step until performing certain other tasks, such as installing protection software, and making sure there is a decent program for editing configurations. (Namely, this recommendation of installing a text editor especially refers to Unix systems where having a preferred editor of text files is nice, so installing such a program may be a worthwhile early step. Installing a preferred text editor isn't considered to be quite as crucial for machines running DOS or compatible operating systems, since by MS-DOS 5 and Windows 3 those machines usually come with a program that edits text files and is at least somewhat “user friendly”.) So, in many cases it may make a whole lot of sense to skip this section until later, after some other steps are performed.

However, opinions may vary about whether it is nicer to perform some steps before or after other steps. So, this information is provided now: feel free to refer to it as convenient. (For first-time users, going through this process for an educational purpose, it is recommended to wait until quite a bit later. For those who are trying to restore previous functionality ASAP, spending a bit of time getting critical infrastructure available for multiple machines may be more worthwhile.)

Some of these directions may assume that earlier steps have already been taken. Most notably, that key security (such as securing an Administrator's account) is already taken care of, and that the machine has an IP address with working Internet access.

Basically, this is a hodgepodge collage of various critical services that a single computer may be responsible for. Appropriate hyperlinks should be getting added to this section at a later time. (At least for now, feel very free to skip this section!)

Running virtual machines

At least the first virtual machine may take a while to set up, and so may not be a very high priority. If the virtual machines don't exist yet, then other tasks may be much higher priorities. However, if already-configured hard drives for virtual machines do already exist, services may be able to be made available simply by starting a virtual machine.

Information about creating virtual machines is included in the guide for making a virtual machine (which is a guide that may refer to this guide). There is also a guide to setting up multiple virtual machines.

Allowing traffic forwarding

The scenario envisioned here is that the newly-created computer may serve as a router/firewall. Perhaps another machine can successfully communicate to this newly installed operating system (using ping or some other software, possibly using a protocol to provide remote access). And software running on this newly installed operating system seems to be able to reach the Internet. However, the other machine doesn't seem to be able to reach the Internet.

This may even be a higher priority than automatic network address assignment. One reason is because preventing traffic from forwarding properly can be a showstopper that prevents things from working, even for machines that may have the IP addresses set correctly (possibly due to being set up manually). The other machines cannot manually work around the issue. Once traffic forwarding works, other machines may be able to work (possibly by manually setting certain settings like the IP addresses), but until this works, other machines may not be able to communicate to the Internet.

Starting to forward network traffic may be all that is needed. If that doesn't resolve things, it may be worthwhile to ensure that the traffic is getting to the newly-installed operating system. (This may be less likely to be the cause, but it can be the cause, and this possible solution may be faster to check and fix.) Namely, make sure the other computer routes traffic to the newly-installed operating system. This is generally done by making sure the newly-installed operating system is what the other computer uses as a “default gateway”. (Checking and setting such a route may be covered in the section about routing tables, covered in the section about routing network traffic.)

Another possible cause of traffic not seeming to flow as desired might be that the traffic is being rejected. See: traffic forwarding discussion: section describing rejected traffic.

Another, similar possible cause is that firewalling may be preventing the traffic from going where it needs to be. Useful details about handling this issue might be found in the section about setting up firewalling software.

Network address assignment

Setting up DHCP may be even more useful than setting up DNS. A key reason for that is because DHCP can be an effective way to distribute name resolution settings.

Name resolution: Domain name lookups

DNS is one of the most widely used protocols on the Internet. Name resolution (like DNS) can also be pretty quick to set up, at least well enough that some domain names are effectively working.

It may be wise to try to get software protection operational before getting DNS working well enough that many computers can look up external domain names. Some malware may use DNS, but more impactfully, a lot of malware ends up getting distributed through some method that involves DNS working. If computers are sufficiently secure, including having whatever software protection is needed, then it may be nice for name resolution to be made available.

DNS does require working network access, including any sort of traffic routing that is needed to get the DNS traffic to and from the DNS server. If a DNS server can be reached with other traffic, such as the ping command, then DNS traffic will generally also be able to work with that DNS server. This generalization isn't absolute: there may be exceptions such as if a firewall blocked DNS traffic. However, such exceptions would not be common.

Once a computer can communicate with the Internet, getting a working DNS server (quickly) will probably be nice for people trying to use the computer.

Determining what server to use

First, identify what DNS server(s) to use.

Using an available internal DNS server

If internal domain names have previously been used on the network, and if at least one internal DNS server is available, then it may be best if computers start using that DNS server. That way, internal domain names can start working.

Using one or more available internal DNS servers might have even fewer network requirements than working Internet access. (For instance, broken routing might not affect a machine's ability to look up the name of an internal computer from a DNS server that is on the same subnet.)

If pre-existing DNS servers work to provide internal domain names, but if (uncached) external domain names aren't working, make sure that the DNS server has working Internet access. If that is indeed the problem, then getting that working is often more worthwhile than trying to switch other computers to start using external DNS servers. (Trying to work around the issue from another computer may not be the quickest approach. Often, in such a case, other internal DNS servers may also be affected in the same way. Also, if the DNS server cannot communicate with the Internet, the problem affecting Internet access may effectively prevent other computers from being able to successfully use an external DNS server.)

Using an external DNS server

If a computer can communicate to the Internet (directly, or indirectly using forwarded/routed traffic), such as if ping is successful, then the computer can generally use one or more external DNS servers to effectively get working DNS for external site names (like google.com).

If a pre-existing internal DNS server is not readily available, then using a publicly available DNS server may be a very fast way to start allowing DNS to work for external sites. This will allow many people and/or computer programs to be able to effectively use full Internet access. This doesn't allow people to fully access everything they may be used to accessing, such as resources located on an internal file server on the network. However, providing computers (and the people who are using those computers) with partial access may help some people. Even people waiting on access to internal resources may be pleased to know that external Internet is working. If for no other reason, just knowing that external Internet started to work is a visible sign of progress being made, so that may help inspire some confidence.

Knowing some publicly available DNS settings can be useful for this. The section about usable DNS servers has details about what options may exist for DNS servers.

Testing DNS manually

If a computer can get results by doing manual DNS name lookups, then it is probably worthwhile to point the computer's name resolution settings to a working DNS server. However, if there are any problems with the automated name resolution, it can be nice to know how to check whether manual requests for name resolution produce positive results. If the results are not positive, then the computer's settings for automatic name resolution are not likely to be useful.

On an IPv4 network, this can be tested by running “ nslookup google.com 8.8.8.8 ”.

Setting up DNS clients

Details on setting up DNS may be provided by the section about DNS client software.

If automatic IPv4 address assignment using DHCP is working, then that can often be used to help other computers be able to successfully look up domain names. This can allow computers to start using external DNS quickly. Then those computers can have working external DNS while internal DNS is set up. Once internal DNS is set up for internal host names, determine if DNS forwarding is also working for external host names. If not, determine whether internal DNS or external DNS is more useful. Once DNS forwarding is working, or if internal DNS is more useful, update the server(s) providing automatic address assignment. Then spread the word, among those who will understand, that the clients should each renew the DHCP lease being used by the client. For those who do not understand those instructions, a longer (but simpler to understand) approach that generally works is to reboot/restart, or perhaps power cycle, the computer/device that isn't working yet.
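(Hedged examples of renewing a lease: in Windows, “ ipconfig /release ” followed by “ ipconfig /renew ”; on Unix systems using ISC's dhclient, something like the following, where em0 is a placeholder interface name.)

# Release the current lease (the -r option is not supported by every dhclient version), then request a new one
sudo dhclient -r em0
sudo dhclient em0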

Setting up name resolution so that a small number of computers can access the Internet is something that may be quick to do by manually assigning which DNS servers get used. Manually setting to use external DNS servers will not allow internal DNS to work to provide names for internal systems, and so this approach will need to be reversed later. However, it can be quick to set up, and so this approach can at least provide the benefit of quickly allowing systems to be able to start accessing external Internet sites. That can often be at least partially useful.

Other initial steps: common settings
[#givsysid]: Identifying this machine

Edit files that cause this machine to be easily identifiable. This is just yet another one of the steps that is ideally done early. When previously-discussed steps have been performed, the machine may be remotely accessed more often, and so it may be useful to have an easy way to identify the machine.

(This section may provide few details, at least in some areas.)

[#hstnmrul]: Naming guidelines

DNS will likely want to be used, so see DNS domain rules. Consider also reviewing commonly used DNS names.

There have been various strategies for choosing domain names, such as naming computers after characters from cartoons or video games or professional sports teams. Those names could work great as long as intellectual property infringement isn't a cause of concern. However, more practical names can sometimes simply be more useful. Naming the machine after its purpose can be a great idea. For larger scale operations where machines may perform identical functions, if machines are located in multiple geographical locations (like different cities), referencing the name of the location can be effective. The server could be named after a city, a street name, or a memorable feature local to that area. As an example, an organization with a single location near Seattle, WA, might use a name like “mtsaint”, referencing the (self-destructed) Mount St. Helens.

Before deciding to use a name, make sure there will be no naming conflicts. If someone wished to name a router after the Alaskan/Canadian “Mt. Saint Elias”, then naming the router “mtsaint” might not be good if the same organization is already using “mtsaint” as a reference to Mount Saint Helens. So check existing documentation (including partially-made documentation describing current work) before solidifying a name. (And, once a name is solidified, document it, to prevent other new name conflicts from occurring.)

System's name

This may have been at least partially implemented, but there may be more than one location where the name should be set.

System name specified by name resolution
Modify DNS

If there is a DNS server, and if it hasn't yet been made to recognize this machine, and if the DNS for this machine should be handled manually rather than automatically, then edit the configuration used by the DNS server. (For further details, see the section about setting up a DNS server and editing the configuration of the DNS server.)

RFC 1912 (“Common DNS Operational and Configuration Errors”) Section 2.5: “MX Records” says, “It is a good idea to give every host an MX record, even if it points to itself!” ... “Put MX records even on hosts that aren't intended to send or receive e-mail.” This is not common practice, and this guide does not necessarily recommend this practice (despite being in an RFC). However, the author of this guide did decide it is a novel enough idea that it is worth mentioning for consideration. The basic reason provided is so that anybody who wants to complain about this machine will know what domain accepts E-Mail for such a complaint. Of course, RFC 1912 (“Common DNS Operational and Configuration Errors”) Section 2.6.4: “RP” provides another approach for this same sort of concept. If such a concept is to be implemented (via MX or via RP), and if such extra DNS RR's are being added manually, then adding this extra DNS (MX or RP) RR at the same time as editing the DNS AAAA and A records is probably a sensible course of action.

In the hosts file

This is more commonly used by Unix. (It may also be used by Microsoft Windows, but perhaps less commonly so. If starting to use Unix after being more familiar with this file from Windows, do not decide to skip updating this file just because it wasn't commonly used in a Microsoft Windows environment.) See the location of the hosts file and editing a text file. There may even be an entry per NIC with OpenBSD's default install.
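(A hedged example of adding such an entry: the address and domain are placeholders, and mySysNam simply matches the sample name used elsewhere in this guide.)

# Back up the hosts file, then add an entry mapping an address to this machine's names
cpytobak /etc/hosts
echo 192.0.2.70 mySysNam.example.com mySysNam | sudo tee -a /etc/hosts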

Others
If there are any additional name resolution methods, update those as needed.
Host name

See Setting the host name.
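(As a hedged OpenBSD-specific example; other operating systems store this elsewhere, such as /etc/hostname on some Linux distributions, and the name shown is a placeholder.)

# Record the fully qualified name so it persists across reboots (OpenBSD keeps it in /etc/myname)
cpytobak /etc/myname
echo mySysNam.example.com | sudo tee /etc/myname
# Also set the name for the currently running system
sudo hostname mySysNam.example.com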

DHCP host name

This may be the name that the DHCP client provides to the DHCP server. (Additional review may be needed.)

Perhaps this is already done by default? Check the contents of /etc/dhclient.conf before getting too crazy.

Possible example:

cp /etc/dhclient.conf /etc/dhclient.conf.orig
echo initial-interval 1\; >> /etc/dhclient.conf
echo send host-name \"mySysNam\"\; >> /etc/dhclient.conf
echo request subnet-mask, broadcast-address, routers, domain-name, >> /etc/dhclient.conf
printf '\tdomain-name-servers, host-name;\n' >> /etc/dhclient.conf

Customize mySysNam from the above example.

The idea is that the last line of the text file shown there looks like:

domain-name-servers, host-name;
Quick methods for system identification

This may have been taken care of earlier in the guide. In case this was delayed, consider now working on setting up easily visible system identifiers.

Host ID

In Unix, run “which hostid”. If there is no such command, then congrats: there may be nothing to worry about. If there is such a command, then run hostid. That will output the host ID.

Another method might be to use a function called sethostid. OpenBSD's page for sethostid: History section notes, “The gethostid() and sethostid() syscalls appeared in 4.2BSD and were dropped in 4.4BSD.” Furthermore, the next section of the document, called “Bugs”, notes, “32 bits for the identifier is too small.”

That being said, there has been a guide available to running the C function sethostid(hostid), passing that function an unsigned long as needed, and rebooting. A guide (aimed for IT administrators, rather than programmers) to accomplish this is shown by https://calomel.org's guide to faking/changing the Host ID.

Banners (announcements/login messages)

This should certainly be widely viewed as optional: it can be nice, but can also be disruptive (to programs that require a hushed login), so this should generally not be considered required. (Some legal staff may advise using banners for some purpose or another.)

Perhaps something may be mentioned by /etc/motd and/or sshd_config (probably in /etc/ssh/) or ~/.profile (where ~ refers to the home directory of a user; perhaps ~root/.profile may be different than other users). (This may not be a preferable method: see hushed logins.)

Other banners may include server software for protocols such as SMTP (for which clients must support using a banner), FTP (which traditionally has often had a banner at many sites, so client software should be okay with that), and perhaps telnet.

[#setclock]: Setting the time/clock

Setting the clock, at least once initially, is good to do very early on so that subsequent logs and filenames aren't needlessly inaccurate.

See: Supporting hardware: the system clock. It has information about setting times, including setting what local time zone to use (even though support for the concept of a “time zone” might not be implemented fully (or at all) by the system startup (BIOS, or similar) software).

Make sure that changes made so far will stick

There are some changes that may have been made that affect things in the short term, although they may not happen automatically during system startup. This means those configurations may/will have no effect when the system is restarted. Make sure changes are appropriately saved to disk. Here is a checklist of some things to consider.

[#axcsetip]: Network configurations
Setting network addresses/settings
Overview: List of configurations to check on
IPv6
IPv6 addresses, prefix lengths, and routes. Has a long term setup been made, using reserved non-temporary addresses that may easily be routed as needed?
IPv4
IPv4 addresses, netmasks/“prefix lengths”, and routes. Has a long term setup been made, using reserved non-temporary addresses that are easily routable?
Other (e.g. forwarding)
Other network configurations, such as if forwarding had been enabled on a non-permanent basis (such as, for example, by just running sysctl to change a sysctl value and never modifying a text file to make the change last after the reboot)
Methods of handling these configurations

This will vary based on the operating system. (Eventually this information should be moved, perhaps to system startup procedures. For now, though, here is some general/specific advice that may help.)

Perhaps see: the network addressing page and the available NICs page.

Setting IP addresses in OpenBSD

See /etc/hostname.* files. The file extension of each hostname.* file is named after the interface name, which is generally named after the driver and then a number to represent the specific interface.

First, determine what the name of the file should be. The file's name should be named after the NIC, so this requires finding out the name of available NICs. Then, see what files exist with that filename pattern, using a command such as “ ls -lF /etc/hostname.* ”. To see what configuration(s) may pre-exist, see viewing files.

The full details about the syntax are in OpenBSD manual page for “hostname.if” (/etc/hostname.* files). However, here's some simple summaries of common commands supported by that file:

  • IPv6

    To support router advertisements, include the phrase rtsol inside the relevant network configuration file.

    To support both router discovery (router advertisements) and also stateful addressing by DHCPv6, see the details from the section about IPv6 automatic addressing. Since the section about “Chaining DHCPv6 to NDP” is not giving useful results at the time of this writing, simply run DHCPv6 separately.

    Details on the command lines may be in the section about DHCPv6 clients.

    e.g., if the desired file has no configuration lines that currently seem to be useful, the following will remove the existing file and create a useful dual-stack configuration file that uses WIDE-DHCPv6.

    CURNIC=if0
    cpytobak /etc/hostname.$CURNIC
    sudo rm /etc/hostname.$CURNIC
    echo \# IPv4 automatic addressing | sudo tee -a /etc/hostname.$CURNIC
    echo dhcp | sudo tee -a /etc/hostname.$CURNIC
    echo \# Stack-neutral NIC configuration | sudo tee -a /etc/hostname.$CURNIC
    echo up | sudo tee -a /etc/hostname.$CURNIC
    echo \# IPv6 automatic addressing | sudo tee -a /etc/hostname.$CURNIC
    echo rtsol | sudo tee -a /etc/hostname.$CURNIC
    echo !/usr/local/sbin/dhcp6c -c /etc/dhc6c\$if.cfg -p /etc/dhc6c\$if.pid \$if | sudo tee -a /etc/hostname.$CURNIC
    CURNIC=

    Also, be sure that the support for router advertisements is set to be enabled when the system reboots. (e.g., for OpenBSD, make sure the long term support is enabled as described. Do this using information in, and referenced by, the section on support for router advertisements.)

    cpytobak /etc/sysctl.conf
    echo net.inet6.ip6.accept_rtadv=1 | sudo tee -a /etc/sysctl.conf
  • IPv4
    • The simplest way to set an IPv4 address automatically is to use DHCP. OpenBSD manual page for “hostname.if” (/etc/hostname.* files): section on “Dynamic Address Configuration” notes, “The OpenBSD installation script will create hostname.if with options of ``NONE NONE NONE'' when DHCP configuration is chosen. This is the same as specifying just ``dhcp''.” This could be used to set up routing, but could also be used for other things.
    • To manually set a single IPv4 address, use something like:
      inet 192.0.2.100 255.255.255.0 192.0.2.255

      The first address is the IPv4 address. The second thing that looks like an address is actually a subnet mask which corresponds to the desired network prefix. The third address is the broadcast address, which is most commonly the last address in the network block. Additional options may then be included after the broadcast address, although in general there is not necessarily any need to specify additional options. The possible options may be identical to what could be sent to an ifconfig command.

    • If multiple IPv4 addresses are desired, the same syntax is followed except the word alias is inserted after the network family and before the first address. For example:
      inet alias 192.0.2.100 255.255.255.0 192.0.2.255
  • Don't forget, after specifying the address(es) that will be used, to make sure the card is set to be up. The up command may be used on a line by itself.
  • To run a specific command on the command line, start the line on the text file with an exclamation point (“!”) and then run the command. For example:
    !echo $if
    The man page states, “It is worth noting that” the $if environment variable may be specified and it “will be replaced by the interface name.” A possible use of this is to set up routing.

Once these files are configured, the fastest way to test them may be to run:

. /etc/netstart if0

The if0 shown here refers to the specific network interface name. That command line parameter is optional: if no network interface name is provided, then all network interfaces will be initialized. (The other option would be to reboot.)

Forwarding
The section on traffic forwarding may have some information.
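(As a hedged OpenBSD example of making IPv4 forwarding survive a reboot, mirroring the /etc/sysctl.conf approach shown earlier; the sysctl name differs on other operating systems.)

# Enable forwarding for the currently running system
sudo sysctl net.inet.ip.forwarding=1
# Make the setting persist across reboots
cpytobak /etc/sysctl.conf
echo net.inet.ip.forwarding=1 | sudo tee -a /etc/sysctl.conf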
Environment variables

For example, in Unix, put them in a login file that will be used. (In Unix, for global/system-wide values, backing up and modifying /etc/profile may work, while each user's individual ~/.profile file may work for that user.)

After recording all environment variables that may be customized, log off and back in and make sure they take effect as expected.

Examples of environment variables that might have been customized:

Variables that relate to where packages are obtained. e.g. for users of pkg_add, there may be: PKG_PATH and PKG_CACHE. (However, these might not have been set yet.)

Mount points
In Unix: Run “ mount ” and notice any differences from what is in the /etc/fstab file. In Windows, check that any network drives are documented, so that they may be re-created manually if they aren't automatically re-created. Also, ideally test that the drives auto-mount okay. (That may involve logging off and back in, or rebooting the system.)
Other items to check (which may change during reboot)
Other sysctls such as forwarding?
Time

Two common aspects to setting time involve keeping the clock synchronized on a daily basis, and for those areas in the world which participate in such time changes, making sure that Daylight Saving Time is supported (using the latest rules adopted by the local government).

See: Supporting hardware: the system clock. It has information about setting times, including setting what local time zone to use (even though support for the concept of a “time zone” might not be implemented fully (or at all) by the system startup (BIOS, or similar) software).

Installing some desired software (likely to be desirable for system administrators to use)
Some common software

If there is a local repository, make sure that it is accessible. Otherwise, make sure that Internet access is working, and get in a downloading mood. Review the section about installing some common software, and obtain and install software from the various categories. (This may not be as needed for child images, if the software is pre-installed into the base.)

To clarify: download lots! Review the section about installing some common software, and install many, or even all, of the software that is mentioned. (This is commonly installed software.) Hopefully installing software from a package repository is very easy to do (without even needing to locate and click on a web page's hyperlink to download the software).

Download packages specific to this machine
If this is not meant to be a multi-purpose “parent/base image”/“backing file”, and some specific software is going to be needed to carry out this machine's task, downloading such software now (while in a spree of downloading other software) may be a sensible approach.
Supporting dynamically set IPv6 addressing
Earlier instructions discussed configuring fixed IP addressing. However, some operating systems might not include support for DHCPv6, and there might not even be an automatic addressing server on the network. Therefore, those earlier instructions noted that it may be sensible to delay that step. At this point, it may be time to re-evaluate. Make sure that commonly-installed software is installed (including the specific section about getting a client for automatic addressing), especially if this computer is going to be used as a base image that other computers will be using. If that client software is installed, and if the network is also supporting reserved addresses (because automatic addressing has been installed), there is likely no further need for any delay. If the computer being set up is supposed to have any fixed/reserved IP addresses, make sure that is set up (as described by the section on configuring fixed IP addressing and/or resources referenced by that section).
Updating the log

This is a good time for a reminder about keeping an installation log filled out. Hopefully the log at this point contains files, packages, and any other customizations performed. If not, now may be a great time to update the log. A lot of what has been done already can be recorded to a log fairly quickly.

If files have been backed up to a directory named /origbak/ then creating a list of files that were updated may be as simple as creating a list of files that are under the /origbak/ directory. The software package installation section may have details about how to view what packages have been installed. Although such information may be able to be derived from the backed up database, listing the users created, as well as any passwords that administrators will need, may also be a good idea.
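(Hedged examples of gathering that information quickly: the find command assumes /origbak/ was used for backups, and the package listing command depends on the operating system; pkg_info is shown for OpenBSD, while “ dpkg -l ” is a Debian counterpart.)

# List every file that was backed up before being modified
find /origbak -type f
# List the packages that the package management system knows about (OpenBSD example)
pkg_info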

Since much of the rest of this guide involves some more customizations, possibly by running commands and not just by editing files that will be backed up, be sure to document the remaining steps so that they may be easily re-performed.

Performing some protection

Some protection software may have been installed by some earlier instructions. This section is more about making sure that it is being used properly (such as being actively run).

Running a file integrity checker
See: File Integrity Checker(s)
Anti-Malware software

e.g. making sure that Anti-Malware software definitions are being updated and that scans are scheduled.

Unix

One possibility is to use ClamAV.

Using ClamAV

To use ClamAV, obtain and install the software called ClamAV, and configure it as needed, as described in the section about Anti-Virus for Unix. Although variations may be possible, a generally good approach is to make sure that freshclam is up and running, and that scans have been enabled.
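(Hedged examples; the directory to scan and any scheduling details will vary by system.)

# Update the virus definition databases
sudo freshclam
# Recursively scan a directory, printing only infected files
clamscan -ri /home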

(The following is rather untested by the author of this text, at the time of this writing...) Also, create an account (for whoever owns the computer network, and/or whatever organization is in charge of the computer network) at http://www.stats.clamav.net (see also FAQ about ClamAV Malware Statistics). Get a DetectionStatsHostID, and place that into /etc/freshclam.conf.

Details about running a scan are in the section about Anti-Virus for Unix. Of course, the real usefulness of a scan is to have some action be taken when there is a problem, such as removing malware and/or quarantining malware and/or at least reporting the malware. Further details are not completed in this guide, at the time of this writing. Even so, there may be benefit in having up-to-date signatures, so that they are up to date when automated scans do get added later. (Note that there may be multiple types of scans, such as a general scan of the hard drive, or targeted scans such as scanning E-Mail or networked filesystems.)

Microsoft Windows
There are details for solutions for Microsoft Windows in the Anti-Malware section.
Handling patches
Are patches set to update automatically? If not, is there some sort of scheduled trigger to help make sure the (operating system and other) software does get updated? e.g.: Operating system updates.
Monitoring logs
Perhaps have some software monitor logs and report errors. (Details are, and/or will be, located at section about reporting events.)
Adjusting permissions

User credential information should be in a file that is not accessible to other users.

An approach which some places may take is to set directory permissions so that users cannot execute any file from any directory that the user can write to. This approach may not have been fully studied by site staff, but it sounds somewhat sensible and is mentioned as an approach that might be helpful. Mainly, what hasn't been sufficiently studied yet is whether this may cause some unwanted side effects of users not being able to perform tasks that they should. This might prevent a fair amount of automated malware from working, as well as reducing the ease with which many end users can execute unauthorized/unneeded files. (This may be desired for organizations where it is preferable that most standard end users do not install software, but instead get approved staff from IT support to perform installations as requested.)

If this works sufficiently, it may be particularly useful on Microsoft Windows machines to remove execute permissions from %LOCALAPPDATA% and %TEMP% and %TMP% and %USERPROFILE% and %ALLUSERSPROFILE% and %APPDATA%\.. and %PUBLIC% (which may be some of the directories most often abused by malware).

Handling licensing/auditing requirements

Did any of the software already installed (like the operating system) require any sort of software/product activation id/keys to be entered? If so, it probably makes good sense to document those near other information about dealing with the specific computer. If not, be glad that the operating system being used does not require such hassle.

Users of Windows XP have some recommended steps on dealing with Microsoft Windows Product Activation with Microsoft Windows XP which may reduce the likelihood of the operating system re-requiring activation. (This may be good to do before trying to make a bunch of other changes, like making sure the best drivers are installed for any hardware in the machine.)

Handling disruptions

Computers have requirements to continue to run. For a physical machine, that involves continuing to use electricity. For a virtual machine, that involves continuing to use the resources on the host machine, such as CPU cycles and memory.

What happens if those requirements stop happening?

Physical machines

If the machine has no battery backup, then it should probably still be using a surge protector, but that may be all that can be done at minimal cost. If the machine is important, some investment may be warranted to get the device to be able to use a battery backup.

e.g. NUT, APCUPSD, APC's Network Shutdown software, APC's PowerChute Business/Personal Editions: For details on setting up the software (if any such communications are provided from here, at this time), see UPS communications.

Also, if this machine will be running virtual machines, after the virtual machines are configured, see the section about host servers.

[#vmshutdn]: Virtual machines

There may be multiple ways to shut this down cleanly.

[#vmdnbysh]: Using a remote command to shut down (or restart) a virtual machine

One method would be to use a technique that simply allows a program to run on the remote machine. See: stopping/rebooting a system: specifically the section about doing that by using a dedicated user account. Create a batch/script file that can easily shut down this system. The keys will need to be accepted: if they are not yet accepted then this batch/script file may end up being interactive (until the keys get accepted). For details about accepting keys, see SSH key signatures.
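(A hedged sketch of such a script: guest1.example.com, the vmstop account, the key path, and the use of sudo are all assumptions, and the shutdown options shown are OpenBSD-style (-p powers down); other systems may differ.)

#!/bin/sh
# Hypothetical /usr/local/sbin/stopvm-guest1: cleanly power down one virtual machine
ssh -i /root/.ssh/vmstop_key vmstop@guest1.example.com sudo /sbin/shutdown -p now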

Other options

There might be one or more other approaches to allowing a rather automated shutdown, using information found on the page about shutting down a computer. However, at the time of this writing, this guide may not have many details about the other options. To verify if this is still true, one may check the section about shutting down a computer. Another option would be to just take the approach of running a remote command, as described in the section about using a remote command to shut down (or restart) a virtual machine.

Once there is a command to be able to easily stop (shut down or pause or suspend or hibernate) the virtual machine, make sure that happens automatically any time the host server is being shut down. For details, the section about the process of shutting down a computer has some details about what commands may run when the system shuts down.

Note: It is recommended to use a script to shut down a system. This way, the script can be referenced by system documentation, and by other scripts such as a stopvms script. Then /etc/rc.shutdown can call the stopvms script. The purpose to using this approach is so that the shutdown process can be effectively changed with the minimum amount of work. For instance, if a key is compromised and must be replaced, one needs only to adjust the script that shuts down the machine. The system documentation, and other scripts, will require no adjustment. Another scenario is that a stopvms script can be made simply by shutting down one machine, and then another. However, as the number of virtual machines grows, it will become beneficial to allow some parallelization. With such parallelization, if one virtual machine is slow to complete a shutdown process, that won't need to hold up the whole process of letting other virtual machines start their process of shutting down. Again, this is an example of a type of improvement that could be made without needing to alter system documentation or a less localized process like the text in a script (like /etc/rc.shutdown in Unix) that runs during system shutdown.
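(A hedged sketch of such a stopvms script, building on the hypothetical per-machine scripts described above; the script names and paths are assumptions.)

#!/bin/sh
# Hypothetical stopvms script: start each guest's shutdown in parallel, then wait for all of them to finish
/usr/local/sbin/stopvm-guest1 &
/usr/local/sbin/stopvm-guest2 &
wait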

[#elcvmhst]: Disruptions for host servers running virtual machines

What happens if the host server is going to be cleanly shut down? It will be nice to have a rather automated process to handle all of the virtual machines that may be running. Perhaps this process can be part of the system's regular shutdown sequence (e.g., for some versions of Unix, put commands in an /etc/rc.shutdown file; further details may be in the section about stopping a computer).

If there are not any other available host servers

Suspend or shut down the virtual machines. Some details may be in the section about how virtual machines can handle this. There may be reasons why suspending is not preferable.

For instance, the virtual machine software might support suspending by adding information to a disk image; that information might not shrink the disk image after the suspension is over.

Or, perhaps the host server realizes that it is being rebooted for patches. If the virtual machines are also being patched, perhaps it will be simpler to just have the virtual machines complete their patching and be shut down; then the host server shuts down quickly (because there are no virtual machines to run), and then the host machine starts. Then, when the host machine starts up the virtual machines, there's no remaining need to also reboot the virtual machines. (However, this may be a bit riskier if it means that all patched machines are being rebooted at once: if there is a critical problem then perhaps all machines will need to be fixed, instead of just one.)

For instance, if there is a command that will automatically suspend/shut down a virtual machine, perhaps the host machine should simply run that command.

However, the host machine should either only run the command for virtual machines that are actively running, or else the command should check that the virtual machine is running before spending any further time/resources trying to shut down a virtual machine that isn't active. The fate to avoid here is if the virtual machine is shut down, and the host server tries to send it a signal, and the host machine's entire shutdown process is somehow delayed due to not getting a quick response from the machine that isn't even running.
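
As a hedged example (the pid file location and guest name are hypothetical), a per-guest script could give up quickly when the guest is not actually running:

#!/bin/sh
# Only spend time on the shutdown request if the guest appears to be running.
# /var/run/vm1.pid is assumed to be written when the guest is started.
if [ -f /var/run/vm1.pid ] && kill -0 "$( cat /var/run/vm1.pid )" 2>/dev/null ; then
/etc/vmscripts/stopvm1.sh
else
echo Guest vm1 does not appear to be running. Skipping it.
fi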

If there may be other available host servers

First, one should not assume there are other host servers. Maybe every other host server is also experiencing the same signal from UPS battery backup units, saying that there is a power outage, so every other host server is also either down or in the process of shutting down itself.

If the machine was asked to suspend itself, perhaps there is a reason: perhaps the user wants to be able to make the machine go somewhere quickly. However, if this is automated, such as a power event being declared (and if there is enough time before the battery will run out), or as an automated reboot (perhaps as part of patching), then perhaps this can be handled gracefully so that the virtual machines don't cause unclean shutdowns, and perhaps so that downtime of the virtual machines can be minimal (seconds, or less).

If another host server is up, and if that host server has the capability of receiving the virtual machine, then perhaps transfer a running virtual machine to the other host. This can be slick if the transfer process is fast. Repeat until all virtual machines have been transferred, if possible.

The other key to this would be for the computer to announce itself when it boots up, so that it may take load off of other computers that are running virtual machines that have previously been transferred off.

Customizations

Multiple of the following sections may apply. (If one section doesn't apply, be sure to scan for other sections that do.)

Handling panics

Perhaps the system may show a screen and wait for user input. Meanwhile, the system may be pretty non-functional. (In at least some cases, for some operating systems, there may be some limited functionality, such as the prompt of debugger software.) This interaction (waiting for the user to respond) may be interesting to some folk, and even of use to a few of them. For others, this may be a terrible inconvenience, because the system unresponsiveness, as the computer just waits for a reboot, may cause notable downtime, and the only way to fix the issue may be to perform some specific task(s) with the system. If the system is typically accessed remotely, but remote access is not working, then accessing the local display of the system may not be convenient at all. So, setting the system to auto-reboot may be useful.

In the section about system panics, search for information about the operating system being used. (For Microsoft Windows, skip to the system panics: section about system failure.) See if there is information about setting options for automatic handling. Some good options may be to have data dumped, to log certain information (so it can be determined when the panic happened, and to allow for alerts), and to have the system automatically reboot.
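
For example, on OpenBSD, the following is believed (not verified for this guide) to keep the kernel from waiting at the ddb debugger prompt after a panic, so that the system reboots on its own; check the sysctl documentation for the release being used:

cpytobak /etc/sysctl.conf
echo ddb.panic=0 | sudo -n tee -a /etc/sysctl.conf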

Microsoft Windows
Specifically, the information for setting options in Microsoft Windows is in the section about system failures in Microsoft Windows.
Handling logs
Enabling more log details
Enabling accounting in Unix

First, the directory that will store an accounting log file may need to pre-exist. The default location's directory may not exist by default, so intervene:

sudo mkdir -p /var/account

Enabling this right away might (still untested by the author of this text, at the time of this writing) be done with:

sudo touch /var/account/acct
sudo accton /var/account/acct

Although the filename may be able to be customized, using the default name may allow some pre-defined automatic handling to occur, such as rotating system logs with known names.

In OpenBSD, backing up data using cpytobak, and then setting up such accounting each time the system is rebooted, can be done by using:

cpytobak /etc/rc.conf.local
echo accounting=YES | sudo -n tee -a /etc/rc.conf.local
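
Once accounting is active, the recorded data can be reviewed with the lastcomm command (which reads /var/account/acct by default); depending on the file's permissions, sudo may be needed. e.g.:

lastcomm | head -n 20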
Increasing log verbosity
...
Log limits

Determine limits related to log files, such as how large the log files are allowed to be, or what length of time should be retained in logs. (There is fairly little detail provided here about this optional step.)

Log limits in Microsoft Windows

In the Event Viewer, these may be properties related to the individual log files. (Go ahead and check this out by running eventvwr.msc; there are currently no automatable instructions, for this particular step, here.)

In Windows Vista, and newer (and older?), the setting of “When maximum log size is reached” may be set to “Archive the log when full, do not overwrite events”. If the log gets full, look in the logs directory (%systemroot%\System32\Winevt\Logs) which, it seems from some text at Randy Franklin Smith's UltimateWindowsSecurity.com's web page about Event ID 1105, will create a file using Archive-Security-YYYY-MM-DD-HH-MM-SS-NNN.evtx as a filename template.

That seems like a great option to put on all of the logs.

Log limits in OpenBSD

These may be controlled by the /etc/newsyslog.conf file.
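
As a hedged illustration of that file's format (the log name and numbers below are arbitrary examples, not recommendations): each line names a log file, an optional owner:group, the file mode, how many old copies to keep, a maximum size in kilobytes, a rotation time (“*” meaning rotate based on size only), and flags (such as Z, meaning compress rotated copies).

# logfile               owner:group  mode  count  size  when  flags
/var/log/example.log    root:wheel   640   7      250   *     Z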

Unix
Inserting custom code in the early startup sequence

The update to version 1.198 of FreeBSD's /etc/rc file has the file stating, “If you do need to change this file for some reason, we would like to know about it.” Whoosh, it sounds like the operating system's development team has lots of faith in the startup process.

So, consider much of this to be quite optional.

Performing activities at securelevel zero
Examples for specific operating systems
OpenBSD

Back up /etc/rc.securelevel before making any changes. e.g.:

cpytobak /etc/rc.securelevel

Modify that file.

$VISUAL /etc/rc.securelevel

Ignore the commentary that says “# Place local actions here.” Instead, ABOVE the line that says “securelevel=1”, insert the following line:

. /etc/seclevz

Then place actions inside that referenced file. The point of putting actions into that file, instead of populating /etc/rc.securelevel with a bunch of actions, is that if an operating system upgrade/installation accidentally overwrites the file, then restoring customized functionality will be fairly painless.

Make sure that file exists.

echo echo Current SecureLevel: \$\(sysctl kern.securelevel\) | sudo -n tee -a /etc/seclevz

Then, additional commands may also be placed in that text file.
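
So, after a few additions, /etc/seclevz might end up looking something like this (the mount point shown is only a hypothetical example):

echo Current SecureLevel: $(sysctl kern.securelevel)
# Commands that need to run before the securelevel is raised belong here.
# For example, mounting a data drive whose raw device still needs to be
# writable for a repair attempt:
# mount /srv/bigspace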

Some actions to take at securelevel zero
Offering an easy way to run a shell
Overview

The idea here is to allow a console user to be given a root shell prompt, without any need to log in, before most of the automated scripts are processed. Some may view this as a security risk. OpenBSD's FAQ about recovering a root password shows an example about using the boot loader to get physical access to the system, and discusses the security ramifications.

There is a counter argument that making things even easier for a local attacker might allow some attacks that would otherwise be less likely to exist: an alert security guard might notice a bootable USB drive being inserted into a computer running Microsoft Windows, while simply restarting the machine and then typing on the keyboard might be less noticed. (Note: People who wish to operate under this philosophy may want to secure things further, by removing the “secure” flag from the TTYs file. This is described in a later step in this guide.)

This guide is not trying to take sides, but for those in agreement with the OpenBSD team's philosophy, this guide shows how to allow such a shell in case the boot loader scrolled by before the screen was visible. (e.g. if Qemu started up and showed the boot loader before a VNC session was created to allow interaction with the virtual machine.)

The method here is going to be to check for user input. If the user supplies accepted input within ten seconds then the system will show a shell prompt.

Enabling reading of input

This is going to run a local script containing commands in a /usr/local/bin/timedrds.sh file that probably does not pre-exist. So, make it. Users of a system that uses OpenBSD's ksh will want to use some code seen from the coding section about getting keyboard input from a command line.

Users of some other systems might be served better by placing the text “read -t $1 timedrdResults” in the text file.

Once the text file is made, make sure it is marked as an executable text file, by giving it executable permissions. (Details are in the section about handling file attributes.)
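
For those other systems (ones whose read builtin supports a timeout option), the whole file might be as small as the following sketch:

#!/bin/sh
# timedrds.sh: wait up to $1 seconds for one line of input.
# The exit code will be 0 if input arrived, and non-zero on a timeout.
read -t "$1" timedrdResults

Giving it executable permissions could then be as simple as, e.g.:

sudo chmod a+rx /usr/local/bin/timedrds.sh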

Then, the commands to put into another script file (e.g. /etc/offercli could be the filename) are:

offerCmd=${1:-/bin/sh}
# The above is a fancy way for ksh and/or bash to implement:
# if [ ! X"$1" = X"" ] ; then offerCmd="$1";else offerCmd=/;fi
echo Press the Enter key within 10 seconds if it is desired to run
echo ${offerCmd} now.
/usr/local/bin/timedrds.sh 10
# if [ Text"${timedrdResults}" = Text"" ] ; then
if [ ! "${?}" = "0" ] ; then
echo Timeout occurred. Not running the command.
else
# echo Input was detected. Line of text was:
# echo ${timedrdResults}
echo Starting ${offerCmd}
${offerCmd}
ofrcmdrs=${?}
echo The offered command...
echo ${offerCmd}
echo ... has ended with return value of ${ofrcmdrs}
fi

Note: After making this script, testing the script won't require a reboot. This would be a fairly easy/painless script to just run at the command line. This doesn't test things fully, but if there are problems when run on the command line, the problem will probably also exist during the reboot.
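
For example, a quick test from the command line (run as root, since the intended use is offering a root shell at startup) could be:

sudo sh /etc/offercli
sudo sh /etc/offercli /bin/ksh

The second example shows passing a different command as the script's optional parameter.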

Auto-fixing / mount

What happens if the power plug is pulled from a machine? Hopefully, the risk to the / (top-level “root”) directory will be minimal because it was mounted read-only. However, maybe that wasn't true.

Overview: the challenge of marking / as clean

In OpenBSD, it was found that if / (top-level “root”) was not cleanly dismounted, this directory would typically be mounted as read-only. This would cause numerous problems (including causing the shell to lock up after assigning NICs with a network address). Unlike many other directories, the / directory is always going to be mounted. OpenBSD's manual page for “securelevel” notes that secure level 1 makes a rule that “raw disk devices of mounted file systems are read-only”. The combination of those two facts means that the / directory cannot be written to by fsck after the securelevel is changed.

Other directories can be repaired by unmounting the directories, and marking them as clean. This can typically be done (perhaps painfully) without requiring a reboot (which may be even more painful). However, with /, the opportunity to do this is before the securelevel changes.

Update: An OpenBSD Hardening Guide notes a way to switch back to single user mode. The guide notes that “kill -s TERM 1” “never worked for me but "kill -15 1" worked quite reliably and "kill -TERM 1 " works as well.” So perhaps this approach isn't quite as needed for the sake of avoiding a reboot; however, it is still being provided as an example (as this can still be automated with some success).

Overview: further planning

If / was not marked as a clean shutdown, there are various approaches that a system administrator may wish to take. One may be to ensure that no further changes are made to the possibly corrupt filesystem, and to hope that a system administrator notices the problem and can then perform whatever backups are needed before trying to fix it. That approach seems to be less likely to cause data loss, and so does seem sensible, and it is the approach taken by OpenBSD's default behavior.

However, the system may not work quite right with / (and other directories, most notably /dev/, and possibly others like /etc/) being read-only. So, this approach is prone to cause downtime. The downtime may be reducible by trying to repair the disk. The files that would be likely to be affected by a botched automated filesystem repair are those which are not under any mount point other than /. If all such files are likely to be unimportant or fairly easy to restore/re-create, then avoiding longer downtime may be worth risking an automated repair.

In that spirit, here are details about trying to automatically repair when needed.

Make a file. e.g. mntckrw.sh

#!/bin/sh

#

# mntckrw.sh checks whether a mount point is writable.
# Will check / unless a different directory is given as a parameter.
# Return codes have roughly the same meaning as http://linux.die.net/man/8/fsck
# Returns 0 if all was (and is) okay. Returns 1 if problems got fixed.
# Returns 2 if the system probably just needs a reboot after this script ends.
# Returns 4 if problems likely remain.
# Returns 8 if script encountered issues.  Likely fsck returned 2 so please reboot.

mntckdir=${1:-/}
# The above is a fancy way for ksh and/or bash to implement:
# if [ ! X"$1" = X"" ] ; then offerCmd="$1";else offerCmd=/;fi
echo Attempting to make sure that ${mntckdir} is writable.
mount -uw ${mntckdir}
retCode=${?}
if [ "${retCode}" = "0" ] ; then
echo No problem detected with the ${mntckdir} mount.  It is writable.
mntckret=0
else
echo Setting ${mntckdir} to writable failed: return code ${retCode}
echo Perhaps unclean filesystem ${mntckdir} caused this.
echo Will try preening.
fsck -p ${mntckdir}
retCode=${?}
echo Results from preen were ${retCode}
if [ ! "${retCode}" -le "1" ] ; then
echo Retrying mount
mount -uw ${mntckdir}
retCode=${?}
if [ "${retCode}" = "0"] ; then
echo The preening seems to have fixed ${mntckdir}
mntckret=0
else
echo Mount ${mntckdir} still failing.  Trying more major repairs...
slshfsck=TryFull
fi
elif [ ! "${retCode}" = "2" ] ; then
echo Preen indicates that a reboot is needed. Will reboot.
echo Sleeping 15 seconds.
sleep 15
echo Initiating reboot.
reboot
echo Sleeping while the initiated reboot occurs.
sleep 999
echo Reboot seems to have failed.  Manual intervention may be needed.
/bin/sh
mntckret=8
elif [ "${retCode}" = "4" ] ; then
echo Preen was insufficient.  Trying more major repairs...
slshfsck=TryFull
elif [ "${retCode}" = "32" ] ; then
echo Seems preen was aborted manually.  Enjoy a shell.
mntckret=32
/bin/sh
mntckret=${?}
echo Shell exited with return code of ${mntckret} so setting return code to ${mntckret}
elif [ "${retCode}" gt "4" ] ; then
echo Preen results seemed problematic.  Exit code was ${?}.  Will apply major repairs.
slshfsck=TryFull
fi
fi

if [ Text"${slshfsck}" = Text"TryFull" ] ; then
echo Will try full repair.
fsck -y ${mntckdir}
retCode=${?}
echo Result from full repairs: Exit code was ${retCode}
if [ "${retCode}" -le "1" ] ; then
echo Retrying mount
mount -uw ${mntckdir}
retCode=${?}
if [ "${retCode}" = "0"] ; then
echo Full Repairs seem to have fixed ${mntckdir}
mntckret=1
else
echo Mount ${mntckdir} still failing.
mntckret=4
fi
elif [ ! "${retCode}" = "2" ] ; then
echo Full repair process indicated that a reboot is needed. Will reboot.
echo Sleeping 15 seconds.
sleep 15
echo Initiating reboot.
reboot
echo Sleeping while the initiated reboot occurs.
sleep 999
echo Reboot seems to have failed.  Manual intervention may be needed.
/bin/sh
mntckret=8
elif [ ! "${retCode}" = "32" ] ; then
echo Full repairs seem to have been aborted manually. Enjoy a shell.
mntckret=32
echo Full repairs seem to have been aborted manually. Enjoy a shell.
/bin/sh
mntckret=${?}
echo Shell exited with return code of ${mntckret} so setting return code to ${mntckret}
else
echo Results of full repair seemed problematic.  Exit code was ${retCode}.
mntckret=4
fi
fi
exit ${mntckret}

Ideas for further development: Run mount to see if there is any directory that is not read-only. If so, tee the results of fsck and mount and echo commands to such a log file.

It may be desirable to do similar things for /var and /usr, since problems with those directories could lead to /usr/bin/ssh not running. However, there is likely less need to do that at securelevel zero. For example, something like the following may be appropriate to add to /etc/rc.local:

/etc/ckkeydrv.sh

Then, place the following in that /etc/ckkeydrv.sh file. (The file should not go into /usr/local/bin/ if that directory might not be mounted okay.)

Please note that the following commands are simply an example, and may be a bit speculative. Some people may rightfully consider the following to be a bit unsafe. This assumes that there are separate mount points for / and /usr/ and /var/ and /tmp/ and that all crucial/important data is on other mount points.

echo Running filesystem checks
fsck -n
logger -s Result of fsck -n was return code $?
if [[ ! -x /usr/sbin/cron ]] ; then
echo cron is missing. Will try preening disk repair
fsck -p /
echo Disk repair done, gave exit code $?.  Now forcing for just /
fsck -f -y /
echo Disk repair done, gave exit code $?.  Doing heavier checking.
fsck -p /usr
echo Preen on /usr returned result of $?
fsck -p /var
echo Preen on /var returned result of $?
fsck -p /tmp
echo Preen on /tmp returned result of $?
echo Doing more forceful checks...
fsck -f -y /usr
echo Checks on /usr returned result of $?
fsck -f -y /var
echo Checks on /var returned result of $?
fsck -f -y /tmp
echo Checks on /tmp returned result of $?
# Might want to just do fsck -f -y without specifying any directory
echo Remounting /
mount -uw /
echo mount returned $?
echo Re-mounting of / done.  Making /fastboot
echo -n >> /fastboot
echo Now rebooting
reboot
fi

This is perhaps slightly sloppy, as a perhaps slicker version might not try to repair /tmp, but would instead just re-format that mount point. However, the implementation shown above is simpler.

The reboot is performed because the (OpenBSD) operating system is known to work fairly poorly when / is read-only and/or when other mount points are not mounted.
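
Also, since /etc/rc.local will be running /etc/ckkeydrv.sh directly, consider giving that file a #!/bin/sh first line and making sure it is executable, e.g.:

sudo chmod u+rx /etc/ckkeydrv.sh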

Make sure that any required files exist. For instance, if the system's startup process will try to run a script, make sure that script exists. The documentation in this guide may have involved creating the following files, so checking for their existence may be worthwhile:

ls -l /etc/seclevz /etc/offercli /etc/mntckrw.sh /usr/local/bin/gettext.sh /usr/local/bin/timedrds.sh /etc/ckkeydrv.sh

Also, this guide (for OpenBSD) may have involved backing up and modifying /etc/rc.securelevel and /etc/rc.local

[#mntwhnbt]: Choosing when to mount a drive (during the boot-up process)
Overview: Why to carefully control when a drive mounts

If a Unix system has troubles mounting a drive, it may decide to take some unpreferred actions. For example, it may try to automatically run a disk repair program. There are multiple reasons why that may be undesirable:

  • This may take a long time while the system is still in single-user mode. Instead, taking such actions after the computer is in multi-user mode, and after remote access services are started, may be preferable. An administrator may be unable to remotely interact with a machine in this state.
  • The operating system may automatically respond in a less than ideal manner
    • This could cause data corruption. (Further details about possible data corruption are available in the section about Ext2fs. (That section might also hyperlink to this section.))
    • The system may try to automatically reboot (e.g. OpenBSD, if “fsck -p” provides an exit/return code of 4). That may be completely unnecessary, depending on which drive failed to mount. This may be sensible behavior as a sort of last-ditch effort when the system fails to mount a partition that is required for it to start its most basic operations, but for other data the easiest and best way to fix a problem may be to take some action using a remote access program running on a system that is in multi-user mode. Even if such remote access isn't the desired method to fix the issue, automatic rebooting may be a completely unnecessary step (and could even prevent easy fixing of the problem).

In addition, mounting a drive may just take some amount of time. (This may be more true for some types of drives, such as large filesystem volumes or remote filesystem volumes.) That delay may be nicer to have happen after the system is in multiuser mode and running certain critical services. (However, other services may rely on certain data being available, so the order of things might be good to carefully control/customize).

As a generalization, it can be a great idea to place information about mounting drives in the filesystem table that is stored in the /etc/fstab file. However, for all data that isn't critical to the most basic tasks (booting, starting remote access services, providing basic system administration), it may be best to delay such mounting.

The exact way to deal with a filesystem volume that doesn't mount may depend on which filesystem is not mounting correctly. If an issue exists with the /usr/obj/ mount point (if that directory is a separate mount point), or the /home/ directory, or even the /srv/ directory, then the desired approach may be quite different than if the top-level “root” / directory has an issue.

Overview: Why mounting a drive later may be better

Many times, large data drives may be set to be automatically mounted as an early part of the system startup. However, it may be better to delay the mounting of the drive. There are a couple of reasons for that, both related to problems mounting the drive.

One is that the system may abort its regular startup procedure if there is a problem mounting a drive too early in the startup process. Instead, the system might determine that manual intervention should be done to mark a drive clean. This may be determined before all of the partitions are mounted. (An example of this is OpenBSD FAQ 10: section on starting daemons (section 10.3), which shows that file systems are checked, then more activity is performed, then file systems are mounted. All of that is done after running /etc/rc, so this presumes that /etc/, generally part of /, is mounted (at least with read-only access).) If the shell is on a partition (such as /usr/) that doesn't get mounted, the system may prompt to determine what shell to use (e.g. “Enter path name of shell or RETURN for sh: ”). This has been seen with OpenBSD; other systems will probably vary in exact implementation, not necessarily looking like the example just shown. When this happens, the system might wait for a user to enter something at a prompt. (In this sort of damaged boot process, the system may be readily available for use by any user who approaches the local console. The user does not need to be a person who has already logged in.) The system's startup process might never progress to the point of starting network services such as remote access. Lacking remote access can be particularly problematic for an administrator who does not have easy local access to the console.

Perhaps less painful is when the system doesn't get stuck at a manual prompt, but still completes the process of checking disks before it runs network services such as remote access. As an example, perhaps a system is running the disk-checking processes because the system was improperly rebooted. From the remote administrator's perspective, all that person can determine is that the system isn't fully responding to standard network services, including remote access. (ICMP might still work.) When the system still isn't being very responsive several minutes later, the system administrator may determine that local interaction with the console appears to be required. At that point, the system administrator may be prone to start arranging local access, and to pay less attention to further attempts to get remote access. For instance, the administrator might get in a vehicle and start driving some distance, unaware that the system became more responsive a minute or two after the system administrator gave up on waiting for the computer.

If, on the other hand, the drive was mounted as a later part of the system's startup process, the system may be more prone to mount some “drives”/“mount points” that can be successfully mounted, and possibly even be providing some network services such as remote access. The system might still be configured to perform the recommended process of performing a disk check, but if some things are functional, then at least the system administrator can log in remotely, see that the system is partially functional, and perhaps determine what is happening. The system administrator may be able to view logs (if the logs are on a drive that was mounted). So the benefits include the system being less likely to really freeze up badly, and also allowing system administrators to be more knowledgeable. The system administrator can even interact with the system, although there may be little to nothing that is recommended to do.

In contrast, if the computer is more prone to refuse to try doing these good things until all local partitions are mounted, including allowing disk checks to be fully complete, the system administrator may be less knowledgeable and also be unable to cause any changes to occur.

The recommended mounting process

Separate data from the operating system. (There are more benefits than just dealing with the mounting process, such as making operating system upgrades likely to be easier.)

When a large data drive is included in /etc/fstab, consider delaying the point when that filesystem may be checked for errors.

(This is still being tested/revised. Perhaps see also: mounting data, mounting a CD image.)

OpenBSD

Note that even other BSD operating systems may differ in what file to use. These instructions should not be applied verbatim to other operating systems.

Note: This process may not be very standard, or even recommended by the operating system's development team. In fact, there may even be some problems with it: output like the following may be seen:

NO WRITE ACCESS
/dev/rwd0a: UNEXPECTED INCONSISTENCY; RUN fsck_ffs MANUALLY.

Fortunately, no data loss seems to have resulted (yet), but the message does seem fairly scary...

Related notes: a nabble posting states:

fsck -p is not possible to do in multi-user because of

# fsck -p /extra
NO WRITE ACCESS
/dev/rwd0m: UNEXPECTED INCONSISTENCY; RUN fsck_ffs MANUALLY.

Otto Moerbeek replies later on that thread, “Of course. What's the point of checking a mounted filesystem”

It does seem that rc handles fsck before running the rc.securelevel file. Perhaps a solution is to modify rc.securelevel so it runs a custom script, then have that script re-mount / to ro (mount -ur /) and run the “fsck -n ” command. However, if we're in single-user mode, this is countering the whole point of trying to get sshd running first.

With OpenBSD, one simple way to cause drives to not be automatically checked (using the standard checks pre-configured by default with the operating system) is as simple as having /fastboot exist. If that file exists, the standard filesystem checking is skipped. (This is because of a line in /etc/rc that says, “if [ -e /fastboot ]; then”. For ksh, further details about the parameters to the if command are documented in more detail in the man page for ksh in the section about the test command. However, documentation for the test command may often be similar.)

OpenBSD's manual page for some startup files including rc notes that (when /etc/rc is run), “The file is then removed so that fsck will be run on subsequent boots.” So, if that file's effects are desirable, re-create that file during a later part of the bootup process. (See system startup process.)

An example of how to do all of this in OpenBSD would be to run:

echo touch /fastboot >> /etc/rc.local
echo fsck -n >> /etc/rc.local
echo logger -s Result of fsck -n was return code \$\? >> /etc/rc.local
echo mount -a >> /etc/rc.local

To more thoroughly mimic OpenBSD's startup process, including not continuing if there are problems with the preening “ fsck -p ”, view / copy the code which is from /etc/rc and, more specifically, the code which is related to “ fsck -p ”.

Using noauto

Here are some older/other instructions that might apply better for some other operating systems:

The first step to doing this is to make sure that the sixth column/field is set to a zero (0) if it even exists.

It also may make sense to prevent the drive from being mounted before it can be checked for errors. To do that, add “noauto” to the comma-separated list of options in the 4th column of the file system table. (The first three columns specify the device, destination mount point, and file system type.) (So, at the end of whatever is already in the fourth column, the line would say “,noauto”.) If doing this, also make sure that there isn't an option that simply says auto. This will cause the file system to not be automatically mounted as an early part of the system startup. (Further details about the file just mentioned are in OpenBSD's “manual page” for the file system table stored in /etc/fstab.) One drawback to this method is that the drive won't be mounted when “ mount -A ” is used.

A reason to edit /etc/fstab to specify that the partition should not be automatically mounted, instead of just commenting out the line entirely, is that the line then still allows an easy reference with the mount command. Many of the parameters that are needed to mount the drive, such as the “type” of the file system to be mounted, become optional if the mount command can locate a relevant line (that isn't commented out) in the /etc/fstab file. In many cases, the only required parameter to mount will be the mount point, as the example below illustrates.
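
As a hypothetical example (the device name, filesystem type, and mount point shown are placeholders that will vary per system), such a line might end up looking like:

/dev/sd1a   /srv/bigspace   ffs   rw,noauto   0 0

Later on (manually, or from a startup script), the drive could then be mounted by naming only the mount point:

mount /srv/bigspace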

Chances are quite high that a large data drive will be something that needs to be mounted. Find the generally desirable way in the system startup procedures to simply run a disk check command and then a mount command (or more than one, as appropriate). The precise details will vary based on the operating system. As some quick examples (which assume the drive to be mounted will be mounted at /srv/bigspace/):

  • In OpenBSD, place the mount command in /etc/rc.securelevel
  • (More examples are expected to be added here at a later time.)

If the operating system being used is not in the above list of examples, the /etc/rc likely controls how the system is started. One idea that may work, although it may not be recommended, would be to just add the details to the end of the /etc/rc file. The main reason this might not be recommended is that some automated processes may assume that this file isn't customized, and that customizations are instead stored in another location that is referred to by “/etc/rc”. As an example of this, OpenBSD FAQ 10: section on starting daemons (section 10.3) says, “We strongly suggest you never touch /etc/rc.conf”, and explains why. The same sort of logic for rc.conf.local and rc.conf also applies to rc.local and rc.

Instead of mounting the partition, it may be easier to mount the devices. For example, if using OpenBSD non-SATA drives which use the devices at /dev/wd0g and /dev/wd0h and so on, up through /dev/wd0p, then the following may work (even if the mount points are scattered at different locations):

for drvletr in /dev/wd0[g-p] ; do mount $drvletr ; done

(If such a command were run manually, it would probably be desirable to run “ sudo mount ”. However, since the command is part of the system startup sequence, presumably the command will have permissions to mount. Not including sudo unnecessarily may reduce fragility in circumstances such as an error existing in the /etc/sudoers file.)

Make sure that the mount is started, and completed, before running any software that may rely on data which is stored on the mount point. That may require placing this sort of command towards the beginning of a customized list of commands that get run when the system is started. (So, place it at the beginning of an rc.local file if that is being used.) In some cases, that might still not be good enough. For instance, OpenBSD FAQ 10: section on starting daemons (section 10.3) shows that system “services” (also called “daemons”) are run before running the /etc/rc.local command. Also, quotas are checked before /etc/rc.local is run. So if those need to be run after a drive is mounted from /etc/rc.local, then be sure that is taken care of.

If the mount point is a large batch of directories storing data for end users, it might be desirable not to allow end users to log in (and see that their data appears to be missing, and perhaps start to panic about thinking that data has been permanently lost). Before deciding to delay remote access for end users, remember that having remote access for administrators is something that is nice to have running early on. This could be implemented by using multiple instances of remote access, perhaps by having administrators use a customized TCP port that is listened to early in the system startup process (while another port, used by more end users, may not be accepting connections until a later point in the system startup process).

Ext3 file systems

Wikipedia's article on Ext3: section called “No checksumming in journal” notes that improved safety may occur on some hardware if barrier=1 is used as one of the mount options. (This may be placed in the /etc/fstab file.)
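
For example (the device and mount point here are placeholders), the relevant line in a Linux system's /etc/fstab might become:

/dev/sda5   /srv/bigspace   ext3   defaults,barrier=1   0 2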

(Perhaps: See also the sections about tuning the file system (near the documentation about creating a file system), mounting.)

OpenBSD
Speeding up FFS drives

Check out the idea of using FFS Soft Updates.

(If disk performance is a problem that is worth investing further effort into, see OpenBSD FAQ 14: Disks: section on optimizing speed.)

Forcing a security run

cron runs periodic system maintenance scripts (see OpenBSD's “manual pages” for “daily”/“weekly”/“monthly”). The /etc/daily file runs a command called “security”.

(The OpenBSD “manual page” for the security command: “Synopsis” section identifies this as /usr/libexec/security but the file may actually be at /etc/security and be correctly referred to by the /etc/daily command.)

The following seems like a reasonable idea, but it hasn't been widely recommended or tested here, so it might not be a good idea. Perhaps the daily script should be run instead?

So, why wait until tomorrow? Check it out by running:

sudo time ksh /etc/security
[#obemlorg]: Checking mail

The root account should have received some mail. A piece of mail is sent to root when the operating system is installed. (The file may also be seen by following a “markup” hyperlink at the OpenBSD CVSWeb for the src/etc/root/root.mail file. If a local copy of the source code is installed, this file may be found under that location; as the most common “default” location for OpenBSD source code is under /usr/, look under /usr/src/etc/root/ for the root.mail file.) Also, if /etc/security has been run, that may generate another piece of mail.

There are various methods of checking mail, including using third party packages. At least some operating systems may have one or more built-in tools.

For example, OpenBSD's installer suggests using the mail command. Specifically, the last message of the installer may contain the following text:


CONGRATULATIONS! Your OpenBSD install has been successfully completed!
To boot the new system, enter 'reboot' at the command prompt.
When you login to your new system for the first time, please read your mail
using the 'mail' command.

The reference to using the command called mail is likely just meant as a hint on how to perform the task, rather than a strong recommendation to use that method instead of another suitable/safe/secure method that might be preferred.

Advantages of the built-in mail command are that the program can be pretty fast to use if one has become very accustomed to it, and conveniently it may be built in with the base operating system. However, it may not be the easiest option to use. OpenBSD's “manual page” for the mail command states, mail “has a command syntax reminiscent of ed” “with lines replaced by messages.” Eek! Beware, as the user interface may not be very intuitive to people without experience with ed or a similar program.

For the daring, who would like to learn yet another computer program, some details are available at the E-Mail page: section on basics of the mail command. For those who want an easier experience, it may be preferred to install software as needed to have another E-Mail MUA (Mail User Agent) program that may have a nicer interface.

Altering TTY settings

Back up the /etc/ttys file, e.g. by using a command to make a simple copy of a file:

cpytobak /etc/ttys
Adding more consoles

If consoles such as ttyC6 through ttyCb are off, and if the keyboard has F7 through F12 keys, go ahead and enable those extra consoles. Simply make sure the status is set to “on”, as in the example below.
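
For example, lines in /etc/ttys similar to the following (keep the getty arguments and terminal type exactly as they already appear in the file) just need their status word changed to “on”:

ttyC6   "/usr/libexec/getty std.9600"   vt220   on  secure
ttyC7   "/usr/libexec/getty std.9600"   vt220   on  secure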

[#ttynosec]: Identify local console as insecure

OpenBSD's manual page for the “init” command states, “If the console entry in the” /etc/ttys “file does not contain the ``secure'' flag, then init will require that the superuser password be entered before the system will start a single-user shell. The password check is skipped if the console is marked as ``secure''.” This sort of effect is also documented by OpenBSD's manual page for the /etc/ttys file. (At the time of this writing, this change has not been heavily tested by the author of this guide. As the quoted text refers to “the superuser password”, this sounds like there is only one; presumably this would be the password for whatever account UID zero maps to, which would usually be “root”. That could be an issue if, due to a disabled password, the “root” account is only able to be logged in indirectly, by using sudo.)

People in a trusted environment, such as a virtual machine that has the console relatively controlled, or perhaps a home computer, may want to continue having the console treated as if it is already “secure”. That will allow relatively easy changes of a root password, as described by OpenBSD FAQ 8: Handling a lost root password. By marking the console as insecure, lost root credentials may require booting off of other media (and not using any kernel that would use the same /etc/ttys file; e.g., booting off of another drive might work). If the machine works (but just can't be logged into), this might involve less downtime for a virtual machine, as the hard drive image could just be copied and then the changes made to the copy; then the virtual machine would likely need to be shut down to use the altered copy.

People who are absolutely confident in their ability to remember the root password, and who are not quite as concerned about the major inconvenience of dealing with the problem, could remove the word secure from the line that starts with the word “console”. (The “secure” setting does not mean to imply that software operates in a more restricted mode, to help make a secure environment. Instead, the setting indicates that the environment is considered to be “secure”.)

Removing the word “secure” from the other lines has the different effect of not allowing the superuser account to log in. This may not be desired.

To re-add the word, the word should show up in the list of flags after the main status word, which is either “on” or “off”. These flags are separated by one or more spaces.
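
As a before/after illustration (again, keep the getty arguments and terminal type as they already appear in the file), a “console” line that is treated as secure, and then the same line with that flag removed, might look like:

console "/usr/libexec/getty std.9600"   unknown off secure
console "/usr/libexec/getty std.9600"   unknown off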

Automatically running code from a second drive

The method of data layout on the disk has been named COALOD (Copy On Autostart: Location Of Data). A mascot/logo for this concept has not yet been released. (At this time, it isn't created, although the idea of being obsessed (overdosed on) coal might provide some idea...)

Handling the startup
Providing flexibility for automatic mounting
Overview

On supporting operating systems, this may involve manually assigning a DUID. This step is rather optional: there may be advantages, but this is not strictly required.

Creating a custom DUID will allow a replacement hard drive to be manually assigned the same new DUID. The alternatives are to either record the DUID, or to mount the drive based on a name other than the DUID. The problem with the latter method is that device naming depends on the type of drive, so the drive might be referenced as /dev/sd1 or /dev/wd1, or something else for USB (perhaps /dev/uhd0?), or perhaps something else. Another option could even be to just have OpenBSD try mounting each of these possibilities, causing generally harmless errors to occur during each reboot. That, however, seems less classy.

To see the drive's current DUID (using the appropriate device name; e.g. sd1 or wd1), run something like:

sudo disklabel wd1 | grep duid

That may show sixteen hex digits, perhaps something like:

duid: fdb975310eca8642

Find that line in /etc/fstab

Comment it out

Copy it

Uncomment the copy

Rename the DUID to something easy/memorable.

e.g., instead of starting with

fdb975310eca8642.a:

maybe it may start with

0123456789abcdef.a:

Then edit that DUID to match.

Run:

sudo disklabel -E fdb975310eca8642

(The end of that command line is specifying the old name.)

Enter:

p
i
0123456789abcdef
w
q

It will say "No label changes.” That refers to no changes after the w command.

Mounting the drive

Make sure that the drive is detected by the system startup code (e.g. the BIOS detection) and recognized by the operating system. (Manually mount it. This will require defining the disk layout, and formatting the filesystem volume, if these steps haven't yet been taken.) Then, make sure it gets automatically mounted. (In Unix, have it be in the file system table information stored in the /etc/fstab file.)

(Perhaps related info may be at: mount points.)

Interacting with the drive

Determine the mount point used by /etc/fstab for the drive.

In this example, we will be using /srv/hddcfg

mkdir -p /srv/hddcfg

Modify /etc/seclevz

mount /srv/hddcfg
[ -f /srv/hddcfg/hddcfg.tgz ] && tar xzvf /srv/hddcfg/hddcfg.tgz -C /
[ -d /srv/hddcfg/ready/. ] && cp -Rp /srv/hddcfg/ready/. /.
[ -f /srv/hddcfg/hddcfg.sh ] && /srv/hddcfg/hddcfg.sh
Note: Many custom commands might be better placed in /etc/rc.local. The above commands could simply make sure that a desired/customized rc.local is placed where needed.

Note: The above simply copies files over. That is fine for files that will not be updated frequently, e.g. a configuration file that tells the machine how to behave. For such files, updates can be done manually so that the files in /srv/hddcfg/ are appropriately updated. For other types of files, like databases, it may be that the file on the primary drive is newer, and simply didn't get appropriately copied to /srv/hddcfg/ during shutdown. For instance, if the computer was shut down improperly (due to someone accidentally pulling on an electrical cord, or perhaps flipping a switch on the power supply unit), then the data on the main drive may be the most up to date data. Rather than blindly/dumbly just copying over the data from the second drive, some sort of synchronization method may be more prudent.

At the time of this writing, this concern is not handled, as it is not really so important for some types of machines. (This is expected to be elaborated upon further at a later time.)
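
As a purely speculative sketch of one possible direction (rsync is a third-party package, and this has not been tested for this guide), rsync's -u option skips files that are already newer at the destination, which may help avoid clobbering newer data on the primary drive:

# Copy from the second drive only those files that are missing from, or newer
# than, what is already on the primary drive.
rsync -au /srv/hddcfg/ready/. /.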

Handling the shutdown

Make sure that file updates that should be kept, such as database updates (which are likely crucial to successfully keep), log files (which may be appropriate to store long term), and DHCP leases (which might not be disastrous to lose in many cases, but which would be preferable to keep), will be kept.

Identify how to interact with the standard preferred system shutdown process. e.g., perhaps this simply involves editing a text file named /etc/rc.shutdown (or some other method; details may be specific to the operating system being used).

e.g.:

Some other directions about automatically running customized code (specifically the part referencing /srv/sysset) may basically be doing this same thing (duplicated instructions).
export HDDCfgRd=/srv/hddcfg/ready/.
cp -Pr /var/db/. ${HDDCfgRd}
cp -Pr /var/spool/. ${HDDCfgRd}
cp -Pr /var/mail/. ${HDDCfgRd}
cp -Pr /var/log/. ${HDDCfgRd}

Other ideas:

Some logs may be in a location other than /var/log/, such as /var/www/logs/.

State files, e.g. pfsync state (and, as mentioned before, DHCP leases). E-Mail queues (/var/spool/ and especially any subdirectories of that location) would probably be great to not lose.

The details about making configuration files for the File Integrity Checker(s) may mention some other files that are regularly updated. Consider which of those files might be keepers. (Some entries, like the TTY filesystem objects, won't need to be copied. Files which are regularly automatically generated, including files that come from data that is remotely supplied (e.g. /etc/resolv.conf might be automatically generated from a DHCP(v6) client that is getting data that is remotely supplied), may not need to be backed up during the system shutdown.)

Misc changes
...

The following may be merged in with some earlier instructions.

Client support for DHCPv6

Run these commands to put needed lines into the /etc/sysctl.conf file:

cpytobak /etc/sysctl.conf
echo net.inet6.ip6.accept_rtadv=1 | sudo -n tee -a /etc/sysctl.conf
echo net.inet6.icmp6.rediraccept=1 | sudo -n tee -a /etc/sysctl.conf
echo net.inet6.ip6.forwarding=0 | sudo -n tee -a /etc/sysctl.conf
The following was meant for WIDE-DHCPv6.

Have the /etc/hostname.em0 file say:

dhcp
up
!echo rtsol -O /etc/dhc6c${if}.sh ${if}
!rtsol -O /etc/dhc6c${if}.sh ${if}
!route add -inet6 default fd00:0:0:5::1

Then make /etc/dhc6cem0.sh say:

#!/bin/sh
#This script is meant for use with WIDE-DHCPv6
sudo kill $( cat /var/run/dhc6c${1}.pid )
#The idea here is that ${1} will represent the NIC.  (rtsol -O passes that.)
/usr/local/sbin/dhcp6c -c /etc/dhc6c${1}.cfg -p ${1}
# Output the desired nameservers. This script does not set them.
/usr/local/sbin/dhcp6c -i -c /etc/dhc6c${1}.cfg ${1}
# Note - this output may be ignored, and not show up, if called by rtsol/rtsold

Then run:

chmod ug+x /etc/dhc6cem0.sh

Contents of /etc/dhc6cem0.cfg:

interface em0 {
# send ia-pd 0;
send ia-na 0;
send rapid-commit;
send domain-name-servers;
request domain-name-servers;
};

id-assoc na {
};
Stop hiding information
Adjusting what file directory listings look like
See: customizing the listings of files within a directory/folder.
Enable the visibility of Access Keys in Microsoft Windows
Display hotkeys.
Adjusting text mode configuration
Text-mode Scrollback
MS-DOS
See: TOOGAM's Software Archive: User interface software: Scrollback software for DOS.
Unix

If available, this might be accessed by using Shift-PageUp and Shift-PageDn, PageUp and PageDown, and/or Ctrl-PageUp and Ctrl-PageDown.

The precise way isn't universal, and may not even exist. For instance, although scrollback is generally supported by OpenBSD when VGA-compatible cards are used, OpenBSD FAQ 7: section about “Accessing the Console Scrollback Buffer” notes this isn't available when booted from the installation kernels.

Another option may be to run terminal multiplexing software. For instance, in the software called screen, scrollback (“copy mode”) may be entered by pressing screen's command key (Ctrl-A by default) and then the Esc key (or the [ key); then arrow keys, or the (“vi-like”) “movement keys” of h, j, k, l, and others described by screen's man page, can be used. (The -h parameter may affect the scrollback buffer.)

Graphical interfaces
Microsoft Windows
(Information documented from a Windows Vista session.) View the properties of a command line window. Look for a reference to “Buffer Size” as well as “Number of Buffers” on the default (“Options”) tab. Also view the “Layout” tab for a “Screen Buffer Size” section. (Increase Height as needed.)
[#txtmdrow]: Increasing the number of rows in a text mode

This may be considered purely cosmetic. (It was particularly useful in MS-DOS, which didn't traditionally provide scrollback.)

The two basic methods involve switching to the use of an already existing font on a video card, or uploading a custom font. Not all video cards support uploading a custom font.

The most common resolution is 80x25 (meaning 80 columns by 25 rows). The most popular alternative may be VGA's 80x50, which is achieved by using an 8x8 font. EGA could also use that same font, but supported fewer rows and so would display an 80x43 format. Even CGA may support multiple resolutions, by supporting some resolutions lower than 80x25.

[#wrntxrow]: Warning

Note that Linux From Scratch's svgatextmode.txt file does mention possible hardware damage. (The safest approach may be to look up some hardware documentation from the vendor to verify that a mode is supported before switching to the mode. Just because something worked on one computer does not mean the same thing will work on another computer: It is quite possible that a different video card model (which may even be made by an entirely different manufacturer) may support different video modes.)

The most typical resolutions (like 80x25) will likely work for a VGA card. Some settings, particularly with various refresh rates, may damage some hardware (some monitors may be the most likely hardware to be damaged, followed in likelihood by the video card).

In MS-DOS

Running “mode.com co80,50” seemed to work on more systems. However, there were some systems where that didn't work, but where “mode.com 80,50” actually did work. To adjust this even earlier, during the processing of the \CONFIG.SYS file (or, for some DOS implementations, a similar file), this may be done by a third party offering called Confix.sys. To see this file, or other command line offerings for changing video modes (including a 16-byte program), see TOOGAM's software archive: User interface software: Text mode resolution setter.

Uploading a custom font may be possible with some third party software.

VESA text mode support

(See the warning of possible hardware damage.)

Linux

(See the warning of possible hardware damage.)

Boot loaders will typically allow passing parameters to the Linux kernel. Generally Linux will switch to an 80x25 video mode display, but it can be told to leave the current video mode unchanged. Also, one may use vga=ask to be prompted on bootup, and then enter a command called “scan” to determine some other settings available. This is documented by the svga.txt file which is part of the Linux kernel documentation. Other values that may be passed to the kernel include vga=extended for the 8-pixel high font (resulting in 80x50 on VGA and 80x43 on EGA), vga=normal for 80x25, or some numeric references. (Again, this is covered by the svga.txt file.) A Linux kernel video mode number may be equivalent to a VESA mode number plus 512 (0x200). As an example, Ed Halley's Red Hat Configuration HOWTO: Customizing the Linux Console notes that 132x50 can be obtained with vga=9. That may correspond to VBE mode 267 (0x010B), a value which is mentioned (by the name 010Bh) at Wikipedia's article on VESA BIOS Extensions (VBE): section on Linux video mode numbers.

Perhaps see also Linux source/documentation related to framebuffer: modedb default video mode support and the fb/vesafb.txt file which is part of Linux source/documentation. This vesafb.txt file notes that either vgacon or vgafb will be used to take over the console. Which one “depends on whenever the specified mode is text or graphics.”

To change things after the system is booted, perhaps also see: SVGATextMode/savetextmode/textmode (Linux From Scratch's svgatextmode.txt file).

In OpenBSD
Load fonts to the video card using software. A guide for this is at OpenBSD FAQ 7: section on using an alternate console resolution such as 80x50 (OpenBSD FAQ 7.5). This only affects some parts of OpenBSD. Quite notably, the FAQ states a non-option: “not possible to change the resolution of the primary console device (i.e., ttyC0).” That is the one display that would likely be most useful to change...
OS/2
Solutions for MS-DOS may work. Notably, a $10 shareware program called ROW.exe was released. See: TOOGAM's software archive: User interface software: Text mode resolution setter.
Additional terminal consoles

This text may be specific to Unix and similar operating systems.

Before using different pre-existing consoles, note that switching consoles may erase some scrollback history. Also, the X Window system may be configured to use a specific console. Avoid using that same console for a text mode interface. (The specific console used for the X Window system may vary based on operating systems, but generally comes after at least some of the existing text mode consoles. Look for a console that shows a blank screen instead of showing a login prompt.)

To use the pre-existing consoles, try using Ctrl-Alt-F2 if inside X Windows. If not in X-Windows, try using Alt-F2: If that doesn't work, then try the same method as what is used in the X window session, Ctrl-Alt-F2. These are the likely keystrokes for switching to the second console. There may be multiple consoles reachable by pressing different numeric Function keys (such as F1, F3, F4, and perhaps more). (This may be defined by the /etc/ttys file. One of the consoles with a name starting with “ttyC” (and then ending with a single number or letter) may typically be in an initial status of “off”. That console is likely intended to be reserved for use by X Windows.)

For resolutions other than the default (generally 80x25), see also the section about supporting other text video modes.

Note that this sort of functionality may be offered by using terminal multiplexer software. Such software may also provide additional advantages, such as not losing scrollback history when switching from viewing one program to viewing another program, and providing an easy interface to view programs that are running in the background. Also, if/when remote sessions are used, such software may help to protect a program from becoming inaccessible (and possibly being closed) after an unexpected disconnection of a remote session.

OpenBSD FAQ 7: switching consoles (OpenBSD FAQ 7.4) discusses switching consoles but also discusses creating consoles with wsconscfg.

Copying installation media

In many cases, having a copy of installation files on the system may be desirable. If there is an available option to copy files from a CD to the hard drive, consider whether that should be done. If so, it may be best to copy those files before installing the software, so that the software may be installed more quickly (from the hard drive). A reason this can be nice is in case the installation files are needed to install more software and/or optional components of software, or in case the files are needed during a process of updating software (by applying patches) or the process of installing a driver. Note that this practice may be commonly performed on physical machines with plenty of free hard drive space which could otherwise go unused, but on virtual machines this might just make a dynamically-sized image unnecessarily large. If the virtual machine is going to be backed up regularly, the time it takes to back up the virtual machine may be lessened by not having such a CD image on the virtual machine. (It may be just as effective/convenient to have the CD image on the host machine, and back up that data separately if there's a desire to back it up.)

Copying installation media for Unix

Often a method will involve just making an image of a bootable CD. For details, see: making data images of an optical disc, creating disk images.
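
For example (a sketch, assuming the first optical drive appears as cd0; the output file name is arbitrary), a raw image of an installation CD might be made with:

dd if=/dev/rcd0c of=install-cd.iso bs=64k   # copy the whole disc into an image file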

In case any of the installation media has any files that may help support any hardware, this might be nice to do before doing a check for unsupported hardware.

Checking for unsupported hardware

If hardware is completely unsupported, the operating system might not even notice it. Traditionally, attempting to interact with such hardware (which in some cases might occur rather automatically after detection) could result in instability. However, in modern computers, plug-and-play and other hardware detection methods tend to be able to identify many types of hardware without substantial risk of the computer freezing up just from trying to detect the hardware.

Since hardware driver installation may require that the system be rebooted, just before the next system reboot may be a sensible time to install drivers. (An exception would be if this operating system installation is going to be used to create an image, and if it is desirable for the image to not be cluttered with largely unused drivers.)

For details, see detecting hardware.

Reporting detected hardware

Note to users of OpenBSD: the section about detecting hardware (in the section about boot logging related to detecting hardware) includes a section on the subject of “Reporting success”. The developers of that operating system request information about detected hardware. After installing this (freely available) operating system, please help the developers of this operating system, by submitting requested feedback.
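
For example, OpenBSD's developers ask that the boot-time hardware detection messages be mailed to them. A minimal sketch (the subject line is just a placeholder description of the machine):

dmesg | mail -s "brief description of this machine" dmesg@openbsd.org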

(It would not be surprising if other operating system developers might also like such feedback. People who know about such a public request, for another operating system not mentioned in this section, are currently welcome to help CyberPillar by sharing such information.)

Rebooting

There may have been quite a few changes made so far. There are multiple reasons why rebooting at this point may be worthwhile.

  • Presumably the announcement hasn't yet been made that the new system is all ready to go, and so people aren't using it yet. Therefore, people (including the person who oversees the system's operation) won't be terribly inconvenienced by this reboot. This may be a more convenient time than later, when more services are not only running, but are actually being used by people.
  • This may be an opportune time to reboot the system and make any adjustments to the BIOS if they haven't been done yet.
    System startup procedure (e.g. BIOS) settings
    Main screen: Time and basic drive info

    This stuff is often found on a BIOS screen

    SATA drives
    If using the motherboard's onboard RAID (which may itself be a cause for consideration, as such implementations may be incompatible with other/replacement motherboards), this might be set to RAID. Otherwise, using AHCI will likely be best for speed, though that may have some compatibility issues with some (older) operating systems. Microsoft Windows ME, NT 4.0, and older may want to use IDE mode (or a nearby option such as Compatibility Mode, rather than Enhanced mode).
    CPU features
    Compatibility/features
    Virtualization
    If using virtualization, enable this. Some/most software that runs virtual machines can take advantage of this feature, and so may run a little bit, or significantly, faster. Some software using virtual machines has been known to require this feature. (For instance, an old version of Microsoft's Windows Virtual PC (for Windows 7) had a requirement for “Hardware Assisted Virtualization” until the update described by MS KB 977206.)
    No Execute (NX) / Intel's Execute Disable (XD)

    Allow (enable) this feature (which allows memory to be marked as non-executable). Doing so could potentially cause some code to stop functioning. Such code should be updated to become compatible with these security-enhancing features. So, unless there is a known compelling reason to not use these features, do use them.

    A20M
    May be needed for compatibility with older operating systems?
    Handling speed/power usage
    Intel SpeedStep
    Allows an operating system to slow the processor. Unless there are problems with this, the best option is probably to just enable this in the BIOS (and deal with any problems from within the operating system). That way changes won't require going into the BIOS.
    Intel TurboMode Tech
    (seems this was found enabled by default)
    Intel C-State

    (seems this was found enabled by default)

    Some quoted material here may come from Tom's Hardware: forum post about C-State, which itself was apparently quoting Intel, possibly at http://www.intel.com/cd/ids/developer/asmo-na/eng/dc/centrino/286122.htm?page=3 (but that page is now currently gone, and didn't seem to be fully archived publicly).

    If this is enabled, then Intel SpeedStep Technology may use “multiple processor sleep states (referred to as C-State; higher C-states such as C4 refers to deeper sleep state) that reduce the overall power consumption significantly.” However, this doesn't always save much power. “Every interrupt will pull back the CPU from a deeper sleep state to C0 due to the interrupt handler that services the interrupts. This impacts the sleep state residencies that are critical to optimize the power consumed. There is also an energy cost associated in transitioning between multiple C-states.” The quoted text went on to say (twice) that high/aggressive interrupt rates can negate the power savings. (Presumably they might even cost more power.)

    If enabled, there may be further options relating to C-states. Some of those C-states are C1, C3, C6, and C7.

    Powering on

    It probably makes sense to allow (enable) powering up by PCI, PCI Express (PCIe), Wake-on-LAN (WOL), etc. The main reason is that these sorts of techniques usually don't work anyway unless other factors are taken care of, such as compatibility issues, hardware being hooked up (like a WOL cable going from an Ethernet card to the motherboard), and having a client able to send the needed Ethernet packet. If enough effort is made that all of those other factors are taken care of, it is probably not desirable for the BIOS configuration to be the remaining complication.

    Powering on from the RTC (real-time clock) Alarm is probably not needed. Enabling it may require additional settings to be considered (like when the system should come online).

    Powering on by keyboard is an option that some people may like, and others may not. The keystroke(s) needed to turn on the system may be configurable, possibly just by selecting from a pre-defined list of options. Not all systems may have the same sequences to choose from. Ctrl-Esc may be one of the more commonly supported options. A power key might also be an option, but note that many, many keyboards do not have power-related keys.

    Other/Misc
    Shadow copy, ...
  • Testing that the system still successfully boots the desired code by default. That may be the new, desired OS, or it might be a boot manager.
  • Make sure that the startup process happens as desired
    • Make sure that all mount points that should be mounted are successfully mounted automatically after a reboot
      • Make sure they mount at the desired time. Specifically, it may be good to mount after running any needed disk checking, and it may be good to check disks after starting at least enough remote access services that an administrator can check on and/or alter things
    • Make sure that all NICs have the desired IP address(es)
    • Make sure all desired software is running successfully
    • In BSD, check that any custom sysctl configuration settings are still set. (Especially, if this system is a router, make sure that forwarding is still set; a quick check is sketched just after this list.)
    • Check some of the main logs, particularly standardized operating system logs, for errors (or warnings).
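
For example, on a BSD system acting as a router, the forwarding settings mentioned above might be verified with a quick sketch like this (OpenBSD sysctl names shown):

sysctl net.inet.ip.forwarding       # should report 1 on an IPv4 router
sysctl net.inet6.ip6.forwarding     # should report 1 on an IPv6 router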

Viewing messages from the operating system manufacturer, designed for new users of the operating system
Unix/BSD

See if there is a “Manual Page” called “afterboot”. (e.g. OpenBSD's “manual page” called “afterboot”.) That manual page may provide some recommendations, many of which may already have been addressed in this guide. That manual page may be read online or by running “man afterboot”.

There may also be a page called “intro” (e.g. OpenBSD's “manual page” called “intro”) which might have some useful information.

Mail may have been automatically sent to an account as part of the installation process. For instance, in OpenBSD, the user named “root” gets E-Mailed. (Checking for OpenBSD's original E-Mail contains details for that operating system; those details would likely also work for Unix/BSD systems.)

Microsoft Windows

There may be a program designed for new users. The program may have one or more of the following terms as part of its name: Welcome, Tutorial, OOBE. (“OOBE” stands for “Out-of-the-Box Experience”.)

Windows Vista Home Premium may have a “C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Accessories\Welcome Center.lnk” file designed to cause the following to be run:

%SystemRoot%\system32\control.exe /name Microsoft.WelcomeCenter
[#autrunsy]: Automatically running customized code

There may be some benefit to having the machine automatically run code which is found on an extra fixed disk, or on a removable disk, or on a network location (or perhaps multiple of these types of drives). An example of how this could be put to use is covered by the tutorial about making a network of virtual machines. To summarize: this can be a good idea to do for a base image, as it can more easily allow automation of software installation, software configuration, and software startup.

To do this, the operating system needs to be able to locate (and, as needed, copy) a file, and then to be able to run the file. Any other sort of advanced configuration can be handled by carefully changing that file, even if this operating system installation becomes read-only. The simple trick is that support for such a file needs to occur before the operating system becomes read-only.

At least most of the steps to do this are fairly straightforward. (Taking the time to perform all of them may take a bit: perhaps one minute or a small number of minutes.)

  • Create a series of steps that will happen automatically during system startup. This may be doable by creating a batch/script file that gets run during system startup. For further details about where such a file may be, see the section about some files that are automatically run during system start. For details about creating such a file, see creating/editing a text file.
  • Make sure the specified drive exists (for testing). If the file is going to be on a network drive, make sure the file is shared. If the file is going to be on a local filesystem, make sure the virtual machine is using the hard drive. Make sure that hard drive has the needed disk layout (e.g. partitioning scheme; for BSD operating systems, make sure there is a suitable BSDLabel/Disklabel and, preferably, assign a UID to the drive). Make sure the specified partition (or BSDLabel/Disklabel entry) has a filesystem volume (making a filesystem volume if needed).
  • As the first of the steps to perform automatically, plan to mount the drive. The method to do this may vary depending on the type of drive (e.g. supporting a remote networked location may use notably different parameters, or even an entirely different command, than supporting a local fixed disk), and on what name is going to be assigned to that device (so that it may then be given a mount point). For details, perhaps see: mount points, disk layout, device namespace.

    Notes for BSD: Since the filesystem object (e.g. in OpenBSD: /dev/sd1a or /dev/wd1a) may have a different name based on what hardware is used (and this could, at least in theory, change if a change is ever made regarding what virtual machine software is being used), it may make sense to support a specific BSDlabel/Disklabel UID (rather than relying on a named filesystem object). If doing this, it makes sense to use a manually typed UID, rather than an automatically-generated one, so that the UID may more easily be cloned to a new disk if desired. The most common and centralized place for storing such information would be in the filesystem table (in the /etc/fstab file).

    Notes for other operating systems: The advice for BSD may or may not apply, depending on factors like whether the device names are based on what driver is used (which is based on what hardware is detected).

  • See if a script at the specific location is available.
  • Clearly, this could be a security risk if somebody unauthorized is able to place a custom script at the location which is checked. It may be wise to make sure that the script is authorized. (Perhaps some sort of code signing technique could be helpful. Details may be in the section about signed code and/or making code signatures.) (At the time of this writing, these details may be preliminary, and perhaps insufficient.)
  • If the script is trusted, then go ahead and make it so that the batch/script file will be run. (See: batch/script files.)
  • As a reminder of standard practice (particularly for Unix, where this customization may be more typically necessary): If a text file is created, and the intent is to run that text file, then edit file attributes so that permissions allow the program to run.
Example(s)
OpenBSD

Here is an example of code for OpenBSD. This does not perform the code-signing bit, and it is assuming a BSDlabel/Disklabel UID has already been set up.

mkdir -p /srv/sysset
~/cpytobak /etc/fstab
echo 0123456789abcdef.a /srv/sysset ext2fs ro,nodev,nosuid,noauto 1 2 >> /etc/fstab
~/cpytobak /etc/rc.local
$EDITOR /etc/rc.local

And here is what to add to the /etc/rc.local file:

[ -r /srv/sysset/archived/$( hostname -s)/sysdata.tgz ] && \
tar -xzvvf /srv/sysset/archived/$( hostname -s)/sysdata.tgz
[ -d /srv/sysset/sysdata/$( hostname -s)/. ] && \
cp -pR /srv/sysset/sysdata/$( hostname -s)/. /
[ -x /srv/sysset/sysstart/$( hostname -s)/autoran ] && \
/srv/sysset/sysstart/$( hostname -s)/autoran

The way that works is that the ksh shell will perform a test to see if the specified file exists and if the file matches some other property (being a standard readable file when -r is used, being a directory when -d is used, or being an executable when -x is used). If so, then the part between the square brackets returns an exit status of zero (success). The double-ampersand causes an evaluation using AND logic. The end result of the evaluation isn't used (nor important), but the second command will only be run if the part before the double-ampersand returned zero. The backslash at the end of the first line escapes the newline character (sequence), and so the next line is treated like a part of the first line shown.

Last-minute changes

Particularly if this is about to be used as a base image, perform some last-minute changes so that the base image is as pristine as possible:

System name

In Unix, the system name may be stored in the /etc/myname file. See if /etc/myname.anc exists. (It likely will not.) If it doesn't, then run:

cp /etc/myname /etc/myname.anc

If the file does already exist (because this is a child image which is now being used as a base image), then run:
cat /etc/myname | sudo tee -a /etc/myname.anc

This allows the /etc/myname.anc file to effectively be a backup of any useful contents of the /etc/myname file. It may also help a system administrator, down the road, to realize just what parent disk images are being used. (If this process is done multiple times for a grandchild image, then the text file ends up being a list of ancestor images.)

Internet addresses

e.g., if this is a base image, make sure it is set to use dynamic IP addressing. (This means running the client, and making sure any other settings are set appropriately. e.g. for IPv6, ensure that router discovery packets will be accepted by the client. Further details about making such changes may be in the section about automatic network addressing. Naturally, the changes should be made so that they take effect when the system is restarted.)
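
As a rough sketch for OpenBSD (the interface name em0 is just an example, and the exact keywords vary by release, so check hostname.if(5) on the installed version), a base image might be set up for automatic addressing with something like:

echo dhcp > /etc/hostname.em0     # request an IPv4 address via DHCP at boot
echo rtsol >> /etc/hostname.em0   # accept IPv6 router advertisements (keyword on older releases)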

Make sure that the specified DNS server is set to a value that will work well for most programs.

Handling child images

If this is a child image, see if there are any special standardized directions that should be followed whenever making a child image from the base image that was used. These standardized directions may be supplied by a technician (perhaps yourself!) who may be more familiar with the specific parent image.

For example, perhaps a certain configuration was missed when creating the base image. If so, and if other child images use the same base image, perhaps the easiest way to fix the problem was for a technician to specify that certain steps should be performed on all new child images. Hopefully such a decision was clearly documented, along with specific details on how to quickly implement those fixes. Check for such documentation, and follow whatever may exist. By performing a simple copy-and-paste job now, tremendous amounts of effort (re-creating all of the other child images) may be effectively avoided. However, the key to really making the whole process work is to perform those steps now that a child image has been created.

If the system is set to automatically run a file from a specific location (which would probably be a good thing), perhaps the procedure to perform will be as simple as putting some pre-created commands into that file that gets automatically run. There may be a template file containing those commands, so this is simply a matter of copying the commands from that template file to the top of the file that gets automatically run.

There's no particular example to provide. The most customized part of this process is simply determining how such information may have been documented, and then locating that documentation. A possible location for such a file, which may have been used in some example documentation, may be shown in the section about automatically running a file from a specific location. However, this may involve running a file that has a rather customized name, so it may not be set in stone.

Handling base images

It may be desirable to remove some files that are meant to be unique for every system, particularly if these files get automatically re-created if they do not exist when the system reboots. (If it is not super-certain if they will be re-created, perhaps just rename them to *.old and then see what happens during a reboot.)

Review a list of files that are automatically regenerated, as needed, when this operating system reboots. If such a list is not pre-created, go ahead and create one. A way to do that, and be thorough (removing all files that should be removed), and avoid removing files that don't get automatically re-generated in the operating system being used, is to review what happens when the operating system is started. Details about the automatically started files may provide some clues how to do this. For instance, in OpenBSD, these sorts of commands are in /etc/rc. Reviewing that file for each occurrence of the word “gen”, and considering what the nearby code does, allowed a list to be created. (The following keys came from checking out the startup sequence from OpenBSD 4.9.)
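
For instance, the occurrences of “gen” in OpenBSD's startup script can be located quickly (a simple sketch) with:

grep -n gen /etc/rc   # show each matching line along with its line number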

[#syssshky]: OpenSSH Keys

/etc/ssh/ssh_host_*key and /etc/ssh/ssh_host_*key.pub which includes the following:

DSA keys
/etc/ssh/ssh_host_dsa_key and /etc/ssh/ssh_host_dsa_key.pub
ECDSA keys
/etc/ssh/ssh_host_ecdsa_key and /etc/ssh/ssh_host_ecdsa_key.pub
RSA keys
/etc/ssh/ssh_host_rsa_key and /etc/ssh/ssh_host_rsa_key.pub
RSA1 keys
/etc/ssh/ssh_host_key and /etc/ssh/ssh_host_key.pub
ISAKMPd/IKEd
ISAKMPd
/etc/isakmpd/private/local.key and /etc/isakmpd/local.pub
IKEd
/etc/iked/private/local.key (which gets copied from the /etc/isakmpd/private/local.key file) and /etc/iked/local.pub (which gets copied from the /etc/isakmpd/local.pub file)
DHCPv6 UID

If there is a possibility that DHCPv6 is being used, then hopefully DHCPv6 software was installed when following either the step of configuring fixed IP addresses or installing some common software (section about getting a client for automatic addressing). (If not, be sure to install software to address that task.)

Since multiple machines should not be using the same DHCPv6 DUID, base images should not have a pre-configured DUID. (At least some software may auto-recreate the files as needed.)

Here are some details for specific DHCPv6 client software.

If WIDE-DHCPv6 is used
Remove the dhcp6c_duid file which may be located in /var/db/ (or location such as /var/lib/ or perhaps some other location).
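
A sketch of that removal (the /var/db/ path is a common location, but verify where the installed package actually stores the file):

rm -f /var/db/dhcp6c_duid   # the client should generate a fresh DUID on its next run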

For any other software, perhaps information may be available in the section about that software, in the section on DHCPv6 clients.

Increasing security

Consider editing the sudoers file (using visudo). Specifically, consider whether or not to keep a line that provides the wheel group with NOPASSWD support. The idea here is that most of the work that requires running a lot of stuff as root may have been done, so in the future it may not be as painful to simply require a password for superuser access. To implement this, do make sure there is still a line that provides wheel with access to run all commands. Note that requiring a password might not be as great of an idea if the user doesn't have a known working password (possibly because the user logs in by using keyfiles). If there is a line providing NOPASSWD support, and if this sort of change is desirable, then copy that line, comment out one of the copies, and in the other copy remove the phrase “ NOPASSWD:”. The result is a line that provides wheel with access to run all commands, plus a commented-out copy of the line that provided NOPASSWD support.

Other possible actions could be to set log_input and log_output.
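
A hedged sketch of what the relevant sudoers lines might end up looking like (the wheel group and option names are as commonly used by sudo; adjust to local policy):

# wheel may run all commands, but a password is required:
%wheel ALL=(ALL) ALL
# The old passwordless line, kept only as a comment for reference:
# %wheel ALL=(ALL) NOPASSWD: ALL
# Optionally record the terminal input/output of sudo sessions:
Defaults log_input, log_output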

[#mntwopt]: Adjusting drive mounting
Overview:

Different people have different attitudes/ideas about whether certain directories should have specific mount options. Some people may think that certain areas should have specific treatment. Other people may believe the problems from that treatment are too undesirable.

This guide does not intend to get into the thick of arguments over these relatively controversial filesystem layout decisions. Instead, this guide mentions the topic so that people can at least be familiar with it, and decide on their own how (or whether) to implement this.

If in doubt, a sensible approach may be to just leave mount points set to the defaults chosen by the operating system vendor, at least initially. The idea there is simply that there may be many other initial problems, so administrators who are becoming introduced to the operating system may have more pressing matters to focus efforts on. (Trying to second-guess the defaults chosen by the operating system distribution maintainers might not be the best use of early efforts to become familiar with how the system works.) However, for those who have already explored such customizations on a platform, this topic is being mentioned as a reminder to consider implementing such customizations that have been verified to work out okay. Note, though, that a setup which works as a very hardened/secured setup on one platform may not work at all on another platform.

An OpenBSD hardening guide @ GeodSoft.com: section about “Costs of Non Standard Mount Options” discusses some problems experienced.

[#okmntro]: Read-only for security and resiliency

In Unix, some mount points may be able to be mounted as read-only. Doing this can reduce the likelihood of the mount point being marked as unclean after an improper shutdown, and may prevent unauthorized users from being able to make changes unless they have the ability to adjust how the partition is mounted.

This may not be a good idea for some directories in a base image. For instance, in a base image, making certain directories read-only may not be very sensible, as many of the first steps to occur may involve customizing the system's name, adjusting local user/authentication/credential information, and perhaps also adding programs. (However, hard drive images which are not meant as a base image, such as a virtual machine using a child image, would be drives where it is more sensible to make /usr/ read-only once any required programs have been added.)

Directories that might be able to be made read-only may vary among different operating systems. In OpenBSD, / (including directories usually on the same mount point; quite notably that includes /etc/) and /usr/ (including any other mount points underneath /usr/) may be good, sensible locations to make read-only. However, do not make the entire / mount point read-only unless there are separate (writable) mount points for /var/ and /tmp/. Also, the operating system has been known to complain significantly about /dev/ when / is mounted read-only (which may happen when the system experiences an improper shutdown). The /home/ directory is typically writable (allowing users to write data to their own area).

To make a partition read-only on startup, modify the filesystem table (the data stored in the /etc/fstab file). In the fourth column, make sure there is an ro comma-separated parameter, and make sure there is not an rw comma-separated parameter.
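
For example (a sketch: the disklabel UID and the partition letter are placeholders, and ffs is OpenBSD's usual filesystem type), an /etc/fstab line mounting /usr/ read-only might look like:

0123456789abcdef.g /usr ffs ro,nodev 1 2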

Limiting execution

This may be an option for both Unix and Microsoft Windows.

Limiting execution in Unix

There are two common mount options related to limiting executables. One is a “noexec” option that states files should not be executable files. (Interpreted scripts may still be run.) The other, “nosuid”, allows executable files, but does not allow the setUID bit to be effective on a file.

For instance, if /var/ is mounted with “noexec”, the executable permission on a file will not allow that file to execute. Files in /usr/ will be executable, but some systems will have /usr/ be a read-only partition. Limiting mount points so that executable code cannot be written may prevent many types of attacks.

Of course, the mount options offer little security benefit in stopping an attacker who has access to adjust the mount options. For others, though, this can be more limiting.

This is a simple change, although what is more time consuming is to make sure that this doesn't cause any unintended side-effects. Some programs may be designed to use certain permissions, and so revoking those permissions can cause problems.

An OpenBSD hardening guide @ GeodSoft.com notes, “It's common to find executables in /home but should be very rare to find SUID programs in /home”. (The recommendation is to set /home/ to being nosuid.)
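
Continuing the earlier fstab sketch (same caveat: the disklabel UIDs and partition letters are placeholders), /var/ and /home/ might be given the options just described:

0123456789abcdef.e /var ffs rw,nodev,nosuid,noexec 1 2
0123456789abcdef.h /home ffs rw,nodev,nosuid 1 2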

Some of these topics may be explored further by an OpenBSD hardening guide.

Re-running a file integrity check

If other programs have been installed, or other noteworthy/substantial/large changes have been made, then updating any data files for the File Integrity Checker(s) may be good to do before making an image. By getting that data up to date now, there may be less data to review repeatedly (with each child image).

Note that running a file integrity checker may be useful for more than just security. For instance, if using the COALOD disk layout method, this can provide a list of files to be copied onto the second hard drive. To avoid wasting time reviewing a longer list of files on each and every child image, get the file integrity check data file(s) updated (“rotated”) after the very last other change.

Overview: Understanding the results

Details, about the changes that have been made to this (real or virtual) computer, will be available after updating the database. There may be quite a few changes from programs that have been added, and more changes related to things like adding a new user (including changes to data files related to authentication) and providing that user with elevated permissions.

If a lot of changes were just done, the report may be relatively long (multiple screens of text). Upon viewing the report and considering all of the details that are shown, one may conclude that all the information in the report simply describes known activity. Since that activity was known and is authorized, there isn't a huge need for all those details, and so having all those details doesn't seem to be particularly useful.

This may be true, but having a long and seemingly useless report doesn't mean that the file integrity checker is useless. The computer just had multiple expected changes happening. When there aren't so many expected changes happening, and when there aren't any problems, then the report is likely to be much shorter. (If that isn't true, it may be useful to reduce the size of the report by excluding objects that get regularly modified.) The time when a file integrity checker will be much more interesting is if it reports data about activity that is not expected. Reported data that includes details about unexpected activity will be easier to see when those reports are smaller.

To see this first hand, feel free to update the file integrity checker databases again right away. Then, when the even newer report is made, the newly created report may be much smaller, since very little was changed since the prior update.

Upon seeing the shorter report, there are multiple reasons why there may be no need to fret about losing the details about the longer report. One reason is that the details in the longer report may not have been needed. Another reason may be that the longer report might have been backed up before it was overwritten. However, if the longer report was actually not backed up before it was overwritten, that situation may be okay as long as the old data files are still available. Presumably the longer report can still be re-created, without a whole lot of work/effort being needed, by having the file integrity checking program compare the older data files. (This may be more true with some file integrity checking methods, like using AIDE or Integrit, and may be less true using other methods such as mtree.)

Optimizing the disk

On some operating systems, running disk optimization software may not be worthwhile, namely because the problems it addresses (system slowness, but also heavier wear and tear on drives) aren't prominent enough to be in significant need of repair. This may be worthwhile when using Microsoft operating systems. If optimizing a filesystem volume is going to be performed, a sensible time for that action would be after installing the operating system and other software that will remain on the hard drive.

(If this will be a disk image, it makes sense to first download the software which will go onto the disk image and any child/snapshot images. That way, the software may be included as part of the volume optimization process, and defragging the software so much may not be needed again for each and every child image. However, the image should generally be made before installing software which is not part of the software to be included in all the child/snapshot images.)

Making an image

If the purpose of this machine is to be an image, which may be used for creating other new virtual machines or perhaps restoring physical hard drives to a state when they functioned well, then it may be good to shut down the machine before many other “customizations” are performed. First, though, it may be worthwhile to try to make the machine a bit more generic. For instance, if it is currently using static (manually defined) IP addresses, would it be better off being changed to try to automatically detect network settings? Will the machine be configured to trust an external authentication server? Does the machine have some sort of account (local to the machine, not requiring any external authentication server) that may be used in case the external authentication server is unavailable? (If the machine is moved to a location that is not on the network, such as being moved to a room/building where hardware repairs are performed, the machine might not have normal access to the network's external authentication server. Therefore, an administrator account, and specifically an administrator account that is “local” to the machine's own user database and requires no remote services, can be useful.) If a “local” administrator account does exist, does that account have a password which will be convenient to use in the future when this image is used days or weeks or years later?

Even if the hard drive is not intended to be used as a base image for multiple other virtual machines, it may be nice to make a copy of the hard drive's current state. That way, any work done so far may not need to be repeated. (This may be especially true with a virtual machine which can just make a child image.)

If the hard drive is perfect for using as an image, then stop using it (which generally means shutting down the operating system, and turning off the computer). If this is a hard drive image, and if data speed is a non-concern or is less concerning than disk space usage, then compress the image of the hard drive's current state. (If compression will just create a new copy of the file, don't bother copying the file first.) Otherwise, if this is a hard drive image (but compression isn't desired), then making a copy of the image file may still be quite useful (and time-saving) later, so do that. Adjust file attributes as necessary to make a read-only copy of the (compressed version of the) image.
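
A brief sketch of those last steps for a disk image file (the file name base.img is hypothetical; substitute whatever compression tool is preferred):

gzip -9 base.img        # replaces base.img with a compressed base.img.gz
chmod a-w base.img.gz   # remove write permission so the image copy stays read-only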

Note: If, after the image is made, there is a desire to update the standard behavior of a file, there may be two ways to do that. One would be to update the image, although that may be quite a bit of work, including adjusting file attributes (so changes may be written) and then, the step that may be much, much more work: adjusting/re-creating any child images. Another possibility is to modify files in the child images. If the parent image is set to automatically run a file from a specific location, then perhaps the desired effects can be obtained by modifying a file in that location. Then any sort of changes that may have been forgotten can be inserted into a template that is used for each of the child images. It's a bit sloppier, but may be an effective way to easily make standardized changes to each child image.

This completes this tutorial.

Note that there is certainly more that may be frequently done as part of an elaborate process of setting up new machines. If a brief amount of additional time is available, consider starting a time consuming process that will make progress even while a break is later taken. For example, see hardware testing for data storage devices to have a hard drive run a test. (The test may be initiated using software aware of S.M.A.R.T.)

There is much more information available to learn even more. For example, the “Multiple Virtual Machines Tutorial” refers to this guide and then goes on to discuss other steps of configuring a remote machine, such as supporting remotely initiatable shutdown options. (This is covered in the Multiple Virtual Machines Tutorial: section on configuring a new system.) There may also be other guides for hardening networks, or software to help test how secure a platform is.