OpenBSD Remote Holes

Some people who are very well known for implementing security excellently are the people behind OpenBSD. OpenBSD is an operating system, and its developers also maintain several other software projects, most famously OpenSSH. Many security-conscious people rely on OpenSSH code to encrypt their data. As a result, experts and others have scrutinized that code closely for security concerns, and it may well have received more such scrutiny than just about any other code on the planet. Both OpenBSD and OpenSSH are well-known security successes that other projects may only dream of. Here is a brief overview:

OpenBSD has two often-used taglines. If there is an unofficial slogan for the project, it is likely “Free, Functional, & Secure”. The other tagline that OpenBSD fans commonly quote is “Secure By Default”. The primary accomplishment that OpenBSD boasts is that very few known problems would allow a remote attacker to control, or to break the network functionality of, a computer running the latest release of the OpenBSD operating system.

One approach an attacker could always take is to use up all of the available network bandwidth, thereby preventing communication with a computer that runs OpenBSD. However, that vulnerability originates in the design of standardized network protocols; it is not something OpenBSD's software could possibly prevent while still supporting the global protocols that permit communication on the worldwide network referred to as the Internet.

What follows is an overview of every known remotely exploitable flaw in over 15 years of OpenBSD's history. Note how pleasantly small this list is, and how each scenario was handled.

The first remote hole: OpenSSH

The OpenBSD project began in 1995, and the main website eventually carried a tagline counting the time since 1997: “Five years without a remote hole in the default install!” In 2002, however, a bug was identified in OpenSSH versions 3.3 and earlier, which OpenBSD included. Therefore the latest release of OpenBSD was vulnerable, as was any other software product that used the OpenSSH software.

The fix was made quickly. The OpenSSH Security Advisory documents the response of the OpenSSH team, including this sentence: “We have not heard of a single machine which was broken into as a result of our release announcement method.”

Still, the OpenBSD team decided to publicly acknowledge the issue. Rather than hide the security statement on the project's main page, the team simply updated it to reflect their latest security assessment: “One remote hole in the default install, in nearly 6 years!” Then, with subsequent releases of OpenBSD, the website kept increasing the length of time. (So, a couple of years later, the tagline would refer to the operating system being secure for “Eight years”.)

In theory, OpenSSH 3.4 fixed this issue, although bugs may have persisted a bit longer than that: SecurityFocus: OpenSSH Challenge-Response Buffer Overflow Vulnerabilities states, “UPDATE: One of these issues is trivially exploitable and is still present in OpenSSH 3.5p1 and 3.4p1.” Since there were no widespread reports of massive abuse, any remaining problems were likely caught by the ongoing efforts to make the code as secure as possible.

Further details/coverage regarding this first remote hole:

OpenBSD's second remote hole

In 2007, another issue was found. The first hole was in code from the OpenSSH project, and so could have affected OpenBSD as well as other operating systems that used one of the “OpenSSH Portable” software releases. The second remote hole was specific to OpenBSD.

The issue was in code that handled a kernel memory structure called an mbuf (which stands for “memory buffer”), so the bug became known as a flaw in OpenBSD's handling of an mbuf. It only affected OpenBSD machines that accepted IPv6 packets, and the SecurityFocus article FORCED RELEASE stated that the attack “requires direct physical/logical access to the target's local network” ... “or the ability to route or tunnel IPv6 packets to the target from a remote network.” (This article also mentions some of the communications between the discoverer of the vulnerability and the OpenBSD team.)

Before becoming concerned about this bug affecting a protocol now in wide use around the planet, keep in mind that the issue was discovered in 2007. Google Report: Global IPv6 statistics: Measuring the current state of IPv6 for ordinary users (via a PDF file found on RIPE), page 13, showed that IPv6 exceeded half a percent of Internet traffic only in Russia (0.76%), France (0.65%), and Ukraine (0.64%). So, IPv6 was not widely used at that time.

Even in these circumstances, what the remote attack managed to do was drop OpenBSD into its kernel debugger, which left the machine unresponsive over the network. The attacker could not interact with the debugger running on the OpenBSD machine, so this issue did not permit the attacker to run arbitrary code, spy on traffic, or modify data on the hard drive. The only thing this feat accomplished was to make a machine stop responding, which would likely set off alerts if the machine's responsiveness was being carefully monitored. So, there were substantial limits on the types of damage that could be directly caused by the flaw that this professional team of security researchers pointed out.

Still, despite the limits on the harm the attack could cause, and despite IPv6's very limited deployment at the time making the attack infeasible against most computers, an attacker could cause a problem by using another networked machine. The problem:

  • was in code bundled with OpenBSD,
  • was not caused by the general design of the key communications protocols required for participation on the Internet, and
  • did result in real harm.

The fact that an attacker could make the computer stop responding was considered a security concern, and the attack was deemed to fit the description of a “remote hole” that an attacker could exploit from another networked machine.

So, in February or March of 2007, OpenBSD's tagline was updated from “Only one remote hole in the default install, in more than 10 years!” to “Only two remote holes in the default install, in more than 10 years!” Shortly after, the wording was rephrased to “Only two remote holes in the default install, in a heck of a long time!” (As the numbers got bigger and lengthier to pronounce, the slogan may have sounded less catchy; perhaps that is why the specific count of years was removed.)

Reaction: Calyptix Security Blog about OpenBSD's second remote hole states, “In a way, it is not too surprising to find a hole in the mbufs -- as anyone who has ever tinkered with mbufs on BSD systems would tell you, the mbuf API is a very tricky beast.”

Here are a couple of other resources related to this incident:

After a fix was released, further investigation into the issue was performed. The OpenBSD Journal article just mentioned does indicate that the problematic code “can lead to remote code execution or system crash.” Remote code execution is a far more serious problem.

There was some disagreement over how the bug and the bug reports were handled, such as how they were categorized and described. (Some sample discussions: OpenBSD Journal comments regarding “FORCED RELEASE”, OpenBSD Journal comments about bug categorization.) Even Core's bug report shows that the “proof of concept” code was created only after getting further details provided by the OpenBSD team. The consensus among people familiar with the OpenBSD team's processes and behavior is that the OpenBSD team was being straightforward with the knowledge they had.

So, the situation we have here is that in over 15 years (at least 1997 - 2012) there were only two vulnerabilities allowing remote attacks. Both were discovered by people who study Internet security, and both bugs were promptly fixed before any malicious attack is known to have caused a single real, unauthorized intrusion. One of the two was in OpenSSH, the technological basis for the “OpenSSH Portable” releases widely used by many operating system distributions other than OpenBSD.

Pleasantly, that is good enough security to let many people sleep peacefully. A second remote hole for OpenBSD notes that the OpenBSD team's “record over many years remains impressive.”

Criticism: Limiting Definition

Some people have criticized OpenBSD's claims of having so few remote holes on the grounds that OpenBSD contains far less software than some other operating systems. For instance, in 2014 a simple comparison found that Debian 7.3.0 for i386 spanned 19 CD images from http://cdimage.debian.org/debian-cd/7.3.0/i386/iso-cd/, while OpenBSD 4.8 fit in less than 446MB (under 70% of the capacity of a single CD). The comparison is a bit unfair because it excludes some OpenBSD code, like the ports tree, but OpenBSD does ship less pre-installed software than several other operating systems. Critics may claim that OpenBSD sacrifices some ease of use, and so many users will want popular software that is not part of the base OpenBSD system.

First, it should be pointed out that installing extra software can be fairly painless in many cases. From a security perspective, there is some real truth to the logic of that criticism. However, there are a couple of counter-arguments. Not everybody needs extra software; it depends on what the operating system will be used for. People using a computer only as a firewall may not care that a graphical web browser is not included, and those computers may benefit from not carrying software that is unnecessary for their intended purpose. Even if an organization installs some specific software on 20% of its computers, the other 80% get no benefit from the additional, unnecessary potential security vulnerabilities it would bring. So, streamlining the design of the operating system may limit how damage can occur.

Also, if a person does have an extra need, they can evaluate additional software and install what they choose. When deciding which software to use, people can take comfort that the core operating system has a very solid security history. With other operating systems, regardless of whether extra software is pre-included or separately installed, there may be additional reasons for concern about just how secure the latest version of the operating system is. Although fixed bugs should have no effect on a current (fully updated) product, history can indicate the general effectiveness of a project's approach, and so history may demonstrate parallels to current software.