No Total Security

(This section is informational/background/overview, and does not include actions that need to be taken.)

Security is never total: some imperfection is unavoidable. Here are some reasons why.

People with access

As an organization grows larger, the possibility of internal attacks increases. Such attacks very possibly take the form of personally profitable fraud, but may take another form, such as sabotage.

Any organization where multiple people have physical access to the hardware, including a home, leaves the computer potentially open to attacks that exploit insufficient physical security. Since every person needs to sleep, no single person can watch over even a single computer all the time.

Uncontrollable Acts

Unless one can absolutely prevent, with 100% certainty, all possible threats, then “information security” is less than total security. Some threats are not 100% manageable. For instance, threats that are generally beyond the control of any typical computer support company include simultaneous natural disasters. (Such unpleasant events have been known to be referred to as an “act of God”, even by secular (non-religious) institutions.) Another example would be artificial devastation, some instances of which have historically been so significant that they fall into the category widely referred to as an “act of war”.

[#hidnmalw]: Hidden Malware

Wikipedia's page on “Backdoor (computing)”, in its section on “Reflections on Trusting Trust”, provides an overview of why compilers offer no guarantee of trust. The simple reason is that untrustworthy code can appear exactly the same as trusted code, or at least close enough that no differences are practically detectable. (Even if a person who understands code well enough tried to inspect the code, tools like disassemblers that have been maliciously modified may cause nasty code to appear benign.) For the same reason, clean-up utilities such as anti-virus programs cannot be fully trusted. (Even if the anti-virus software were thorough in checking the code that it sees, the anti-virus software might be viewing an altered version of the code.) Therefore, this guide does not recommend obtaining trust through such clean-up programs (run on a program that needs cleaning), nor by compiling code on a machine where trust has been lost.
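To make the idea concrete, here is a minimal, hypothetical sketch (in Python; every name here, including the “letmein” password, is invented for illustration) of how a compromised build step can inject a backdoor that never appears in the source being inspected:

```python
# Hypothetical sketch of the "trusting trust" idea: a compromised build
# step injects a backdoor even though the source being compiled is clean.
# All names and the "letmein" password are invented for illustration.

CLEAN_SOURCE = """
def check_password(user, password):
    return password == lookup_real_password(user)
"""

def trojaned_compile(source: str) -> str:
    """A 'compiler' that silently patches login checks it recognizes."""
    backdoor = '    if password == "letmein": return True  # injected\n'
    out_lines = []
    for line in source.splitlines(keepends=True):
        out_lines.append(line)
        if line.strip().startswith("def check_password"):
            out_lines.append(backdoor)  # insert before the real check
    return "".join(out_lines)

compiled = trojaned_compile(CLEAN_SOURCE)

# Reading CLEAN_SOURCE reveals nothing; the backdoor exists only in the
# compiler's output.
print('"letmein"' in CLEAN_SOURCE)   # False: the source looks clean
print('"letmein"' in compiled)       # True: the output is backdoored
```

Thompson's deeper point is that the same trick can be applied to the compiler's own binary, so even recompiling the compiler from clean source does not remove the backdoor.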

Checking all code used may be impractical. For example, if firmware on a CD-RW drive wrote malicious code to a CD, then that CD could carry malware, even if the CD was made from code downloaded from a website, or even if the CD was made using software that attempted a direct copy of a good CD. CDs obtained from a store may not have that unlikely vulnerability, but could be vulnerable to an equally unlikely attack, such as the CD being intercepted during shipping and replaced with a CD containing malware. Even an attempt to write code from scratch could be thwarted by a CPU that causes data other than what is expected to be written to the hard drive.

Granted, much of that last paragraph may sound theoretical. However, take no comfort in the supposed impracticality of certain types of attacks. Some attacks that once seemed infeasible have become reality. Audio CDs have shipped with software that has been considered malware (as described by Wikipedia's article on the “Sony BMG copy protection rootkit scandal”). Also, firmware on optical drives, shipped straight from a manufacturer, has been known to come with software that anti-malware software has flagged as malware. An example is the “BlueBirds.exe” included with some models of LG drives. (It seems the Wikipedia section about “BlueBirds” on LG drives, from January 28, 2010, had some useful information which has since been removed from the current Wikipedia page. Part of that information notes that “users thought it was adware.” (Hyperlink removed from quote.) At least some anti-malware software started to identify this software as malicious, which is likely how some of those users came to think of it as adware.) A worse example may have been when an update for Yamaha's CD-R400 drives was infected with the clearly damaging CIH virus (according to the “History” section of Wikipedia's article on the CIH virus). (Surely that was not done intentionally by the company.)

The popular news site Slashdot reported on keyloggers being planted on Samsung laptops. That was based on Samsung keylogger report part 1, and, more damningly, Samsung keylogger report part 2. However, the story may leave some room for doubt: a “Samsung spokesman” denied it in another article related to keyloggers on Samsung systems. The most telltale evidence that the problem came from Samsung, rather than the local store called FutureShop, was that Samsung admitted to it. However, it seems feasible that an employee at a Samsung call center might have read a statement indicating that Samsung is responsible for anything pre-installed under C:\Windows on a Samsung computer, and ended up admitting to something that should not have been admitted. Regardless of what further investigations prove (whether this was installed by a large company, or secretly installed onto computers by a store employee with very malicious intent), the end result is that a purchase of electronics equipment, which featured a large brand name and which was sold as new equipment, was found to include software that logs keystrokes.

The “History” section of Wikipedia's article on the CIH virus also cites CNN's coverage of IBM Aptiva computers that shipped infected with CIH.

While not necessarily intended as malicious, HP laptops came pre-bundled with a sound driver by Conexant which logged keystrokes. (This may have been intended as a debugging effort to support “Mute” buttons on keyboards, but it stored all keystrokes in plain, easily understood text.)

Other examples of viruses in factory-made equipment include:

These may all be outdated examples (since these individually documented problems have been identified and presumably rectified), but they all go to show that malware can conceivably be pre-bundled with a device when it leaves the factory or manufacturing plant.

In fact, one of the quoted sources above (TrendMicro's report on the “TomTom GO 910” navigation device being pre-infected) states that “Users can derive” a lesson “from this incident.” Namely, that “nowadays, even fresh-off-the-shelf products are not completely safe from threats, so precaution is key. Any storage device can be inhabited by threats, so users are advised to scan removable devices before use.” TrendMicro reiterates this point: its article on the TomTom incident notes that the manufacturer of the device “did not do a recall, advising instead their customers to get rid of the Trojans by using antivirus products.” So, “Even with safe computing practices, unexpected cases like this still bring threats.”

Another source of malicious hardware is some USB connections that provide electricity to a “smart phone”, charging the phone's battery through a cable plugged into the USB port. Although many USB chargers are benign, the same type of cable can also be used for data transfer. Slashdot reported on an Apple iPhone being compromised by a USB charger (see the Ars Technica article about “Mactans” attack-chargers).

There have also been rumors about Ethernet ports which may perform some sort of attack (such as simply monitoring traffic, or perhaps doing something else), and about computer keyboards which contain keystroke loggers (and send the logs to remote locations, possibly by suddenly telling the computer that the keyboard is a USB-based network device that needs a connection to the Internet). “Officials warn about the dangers of using public USB charging stations” has reported, “Microcontrollers and electronic parts have become so small these days that criminals can hide mini-computers and malware inside a USB cable itself.” (From _MG_'s video on Twitter, it appears the hardware might not actually be in the cable, but rather in the end: using a circuit board smaller than a penny, it fits into the plastic cover that people generally grab in order to pull the cable loose from a USB port.)

Attack of the system startup

There is also a report about a custom BIOS that transmits data using sound beyond the range that the human ear can hear (as reported by BadBIOS). (The BadBIOS image is likely just some artwork, and not representative of a graphic that was actually seen from the BIOS.) Any computer (like a laptop) that has a built-in microphone could use its sound equipment to receive data transmitted by speakers, without the need for cables or Wi-Fi antennas. Another claim made is that attack hardware can be very small, and can even fit inside of cables. This ends up meaning that cables may be untrustworthy.

Whether or not those reports have been reliably confirmed as something that has actually happened (the keyboard one seems far more likely than some others), the theories behind each of these attacks are technically feasible. (Just because specific aspects are possible is not complete proof that such attacks have actually happened, as noted by an article about reasons that BadBIOS does not appear to be real.) Further efforts could be made to validate the pre-existence of specific attacks. Even still, other theoretical attacks may or may not actually exist. Surely there is no end to the amount of effort that could be spent trying to verify different possibilities that might actually exist.

eWeek's article “HP Enhances SureStart Tech to Protect Users From BIOS Attacks” mentions some software used in attacks, and states, “Membromi includes a keylogger in the BIOS that allows an attacker to track all keystrokes on an infected system.”

Regardless of whether some pre-existing attacks have been proven, there certainly have been other attacks that have existed. Even when source code is available, Ken Thompson noted, “No amount of source-level verification or scrutiny will protect you from using untrusted code.” (This quote comes from “Reflections on Trusting Trust”. Ken Thompson is the man credited with creating Unix, among other noteworthy achievements such as programming languages.)

Some attacks may seem unlikely, but history has shown that even small vulnerabilities may be exploited, even when an effective attack would seem challenging to pull off. With so much potential for various styles of attack, including the possibility of compromised physical hardware, there is little to no way to guarantee safety with 100% accuracy. Significant clean-up measures and tests may be performed, and may be effective a large percentage of the time, but that does not guarantee safety in every possible case.


A business news article from February 4, 2019, “Huawei Sting Offers Rare Glimpse of the U.S. Targeting a Chinese Giant”, mentions a conclusion by the U.S. federal government: “The U.S. believes Huawei poses a national security threat, in part, because it could build undetectable backdoors into 5G hardware and software, allowing the Chinese government to spy on American communications and wage cyberwarfare.”

It really does seem possible for such an attack to be “undetectable”.

Imagine, for a moment, that a company which manufactured a chip (which might be a device's main CPU, or a chip on some other device, like a “hard drive controller” chip) had an employee who caused some chips to be malicious. These chips could do something that causes security vulnerabilities. Now, perhaps these chips only misbehave after they are given a signal. For instance, they only do their malicious deed if a person multiplies three times sixteen, six times in a row. Or maybe the trigger would be something a bit more generic, like multiplying any numbers that lead to a result of 48, followed by two subtraction calculations, and then a multiplication that results in 60.
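That kind of trigger can be sketched as a tiny state machine. The sketch below is purely hypothetical (the class name and the 48/60 pattern simply follow the example above); a real hardware backdoor would be wired into silicon, not written in Python:

```python
# Hypothetical sketch: a simulated arithmetic unit that arms a backdoor
# only after a specific sequence of operations (a product of 48, two
# subtractions, then a product of 60). All names are invented.

class BackdooredALU:
    def __init__(self):
        self.stage = 0          # progress through the secret trigger pattern
        self.compromised = False

    def mul(self, a, b):
        result = a * b
        if self.stage == 3 and result == 60:
            self.compromised = True   # backdoor armed
        self.stage = 1 if result == 48 else 0
        return result

    def sub(self, a, b):
        # The two subtractions must come directly after the product of 48.
        self.stage = self.stage + 1 if self.stage in (1, 2) else 0
        return a - b

alu = BackdooredALU()
alu.mul(6, 8)    # 48: trigger begins
alu.sub(10, 3)
alu.sub(5, 1)
alu.mul(6, 10)   # 60, directly after the pattern: backdoor armed
print(alu.compromised)   # True

# A fresh unit given ordinary operations stays clean, which is why
# conventional testing would be unlikely to ever stumble onto the trigger.
clean = BackdooredALU()
clean.mul(6, 10)
clean.sub(10, 3)
print(clean.compromised)  # False
```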

Unless you knew what to look for, such as performing that specific sequence of actions, software testing would be unlikely to notice this specific vulnerability. Microscopically studying the physical hardware might be the most realistic way to detect how such an attack would work. However, pulling that off may require disassembling layers of circuitry within a chip.

Now, imagine that you had a data center with hundreds of pieces of computerized equipment, including some computers and devices such as specialized “routers” for Internet traffic. If you order a few dozen pieces of equipment, are you really going to detach every chip and disassemble all the layers of circuitry to study it microscopically? Of course not. To do this, without damaging the merchandise, would require a very extensive amount of effort by people with specialized skills.

Maybe a government might be realistically able to spend such significant efforts as part of a forensic investigation after a problem is detected, but doing this on every piece of equipment before a problem is noticed? Probably not.

So, yes, it certainly seems like it is possible for an attack to be undetectable (at least ahead of time, before the attack's effects are noticed).

What can be done

Safety may not be something that can be 100% guaranteed. What can feasibly be done, however, is to take some steps that eliminate much of the potential for certain types of vulnerabilities. For example, installing relatively trustworthy copies of software (like an operating system on a CD) and then only installing software from trusted sources may reduce the likelihood of many types of software-based attacks. (There may still be some vulnerability caused by a bug in the software that is used, but such a risk may be substantially lower than the risk of using software obtained from less trustworthy sources.)
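One small, concrete step in that direction is verifying that downloaded install media matches a checksum published by the software's distributor. Here is a minimal sketch in Python (the file name and the vendor-published hash are placeholders; note that this only helps if the published hash itself is obtained over a trustworthy channel):

```python
# Sketch: verify a downloaded file against a vendor-published SHA-256
# checksum. The file name and expected hash below are placeholders.
import hashlib

def sha256_of_file(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large ISO images don't exhaust memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# expected = "ab12..."   # hash copied from the vendor's (trusted) page
# if sha256_of_file("installer.iso") != expected:
#     raise SystemExit("Checksum mismatch: do not install this image")
```

Of course, per the earlier discussion of hidden malware, a checksum only moves the trust problem: a sufficiently compromised machine could lie about the hash it computed.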

How good is Anti-Malware?

Anti-malware software (including “anti-virus software”) is not anywhere close to 100% effective. “Why Antivirus Companies Like Mine Failed to Catch Flame and Stuxnet”, written by Mikko Hypponen, a famous researcher with the Finnish company F-Secure Corporation, shared some very interesting comments in 2012. Some of those were:

  • “What this means is that all of us had missed detecting this malware for two years, or more. That's a spectacular failure for our company, and for the antivirus industry in general.”
  • “It wasn't the first time this has happened, either. Stuxnet went undetected for more than a year after it was unleashed in the wild, and was only discovered after an antivirus firm in Belarus was called in to look at machines in Iran”
  • “The truth is, consumer-grade antivirus products can't protect against targeted malware created by well-resourced nation-states with bulging budgets.” ... Attackers “have unlimited time to perfect their attacks. It's not a fair war between the attackers and the defenders when the attackers have access to our weapons.”
  • “This story does not end with Flame. It's highly likely there are other similar attacks already underway that we haven't detected yet. Put simply, attacks like these work.”
  • “Flame was a failure for the antivirus industry. We really should have been able to do better. But we didn't. We were out of our league, in our own game.”

A similar admission came from another vendor: “Antivirus software is dead, says security expert at Symantec”, subtitled, “Information chief at Norton developer says software in general misses 55% of attacks and its future lies in responding to hacks”.

This doesn't mean that protective software is useless. A lot of attacks do get caught by such software. However, it does mean that successful backups are critical, and that people should not trust that their data is 100% safe just because they installed software that came from a big company they have heard of before.

Related Reading

See also: Get Trust (Tutorial).