Even those who already know how to count in hexadecimal (and binary) may benefit from learning the super-fast method of converting between binary and hexadecimal. However, it makes sense to start by making sure there's an understanding of what these digits are.
A guide for counting in decimal, binary, and hexadecimal is provided. (The concept of “counting in decimal” is provided just for comparison, because that comparison allows for an easy way to quickly understand how the other numbering systems work.) See: How to count.
- [#digitnam], [#howincrm], [#incrdec], [#incrbin], [#decrbin], [#incrhex]
- (The related sections have been moved. If an older hyperlink went to this section, check the address bar to see which anchor was being referenced. Then, to see the latest content, hop on over to the relevant section: [#digitnam]: Terminology note, [#howincrm]: How to increment, [#incrdec]: Counting up in decimal, [#incrbin]: Counting up in binary, [#decrbin]: Counting down in binary, [#incrhex]: Counting up and/or adding in hexadecimal.)
- [#cnvhexbn]: Super-fast converting between binary and hexadecimal
Note: It is clear that individual math skills vary. The term “super-fast” is used because this can be much faster than using a more generic style of converting bases. For those who have the binary and hexadecimal equivalents from zero through 15 memorized, the conversion can go about as quickly as a person can write individual hexadecimal digits. That may be noticeably faster than trying to do division (including subtraction of remainders) on a four-digit binary number.
What many, many, many technicians and computer programmers may not know is that there is a very easy way to convert between “long” binary and hexadecimal numbers. In this case, “long” refers to anything with at least five binary digits (and two hexadecimal digits). It really isn't any more difficult or taxing than referencing the chart of the first sixteen values. Granted, it may take a while to memorize the hexadecimal-to-binary chart for single hexadecimal digits. Once that is done, though, converting even long sequences of hexadecimal digits to binary is essentially instantaneous, about as quick as simply reading numbers. Converting the other direction involves breaking the number up into groups of four bits and then converting just as instantaneously.
However, before going into that neat time-saving conversion method, let's look at how a single digit is done.
- Converting a single hexadecimal digit (to binary/decimal)
For single digits, it is best to initially just rely on a chart. For those who will be working quite a lot with binary for a couple of months, these single-hex-digit conversions may be good to memorize.
0 0000    1 0001
2 0010    3 0011
4 0100    5 0101
6 0110    7 0111
8 1000    9 1001
A (10) 1010    B (11) 1011
C (12) 1100    D (13) 1101
E (14) 1110    F (15) 1111
(Another chart is made available at the section about hexadecimal: possible values.)
In the short term, it is good to know how to count in binary, decimal, and hexadecimal, so that such a chart can be generated on the fly in half a minute to a minute.
However, just memorizing the chart may be an even better approach for those who will interact with hexadecimal a fair amount. It may be easier than expected: by following these tips, once about 40% is solidly memorized, patterns exist so that the rest may become intuitive extremely quickly (without significant effort).
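For those who would rather generate the chart on the fly than memorize it right away, the pattern can be sketched in a few lines of Python (the language choice here is just for illustration; any language with binary and hexadecimal formatting would do):

```python
# Print the sixteen-row chart: hexadecimal digit, decimal value, binary.
for value in range(16):
    print(f"{value:X}  ({value:2d})  {value:04b}")
```

The format specifiers `:X` and `:04b` produce the hexadecimal and the zero-padded four-bit binary forms, respectively.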
- Memorizing the first sixteen hexadecimal values
For those who will proceed to take the plunge, here are some memorization tips. (These are simply recommendations, and are by no means the results of any sort of official analysis/studies.)
It's best to try to learn this over a short time, but not in one sitting. (Going too fast runs a greater risk of confusing some of the new information with other information.) Maybe a few weeks is a nice timetable to try to let things sink in.
The first two, easiest to memorize, are zero and one.
Second, try to learn the powers of two: 2, 4, and 8. Also learn the hexadecimal value for sixteen (which requires a fifth bit: 10000), and F. Key reasons these are selected are that they are among the easiest and most useful to learn. After that, learn A, which is ten decimal, and is written out like two number tens: 1010. That's probably plenty for a day, or perhaps even a few days, or, for those who feel they really do have that very down pat, perhaps just a few hours.
Many of the remaining numbers can be fairly quickly derived by using addition and subtraction, and that will help in memorizing them quickly. (This paragraph assumes that addition, at least of decimal numbers, is a skill that has been mastered, and may not work as well for students who, hopefully because of their youth, don't yet have standard decimal addition tables fully memorized.) It is predicted that five (four plus one) will be easy to memorize after four is well memorized, and seven will be fast to memorize after eight and fifteen are known.
After the first twelve hexadecimal digits (from zero to eleven), as well as the last two hexadecimal digits, can be quickly converted in one's head, there remain the digits most difficult to ingrain: C and D. Sticking with the even number will probably be easier: it will be quick to convert from 1100 to 12 (because 1100 is 8, 1000, plus 4, 0100), but remembering the conversion between C and 12 may be one last remaining sticky point that will, eventually, be worth memorizing. It might (or might not) be useful to compare the binary digits of 3 (0011), 6 (0110), and 12 (1100) and remember that the numbers with only two ones, where the two ones are right next to each other, are multiples of three.
- Fast conversion of multi-hex-digit numbers
First, an example, then an explanation of how it is quickly derived. (For the following text, it will be helpful to know that “0x” specifies hexadecimal, as indicated by the “How to count” guide, in the subsection called “Numeric terminology”.) As we can see from the hexadecimal charts, 0x9 is 1001 and 0xE is 1110. 0x9E, written out as a concatenation of hexadecimal digits, is equivalent to 10011110 binary.
To get the binary value of 0x9E (10011110), all that was needed was to write out the binary equivalent of 0x9 (1001) followed by the binary equivalent of 0xE (1110). The binary equivalent of any string of hexadecimal digits is simply a concatenation of each digit's equivalent (including leading zeros, so that each hexadecimal digit becomes a group of four binary digits).
To convert from binary to hexadecimal is fairly quick as well. To convert 10111001010 to hexadecimal, find out how many bits there are. In this case there are eleven bits. Plan to split the number into groups of four bits, but make the first group just long enough so all other groups of bits are groups of four. Since this has eleven bits, the first group of bits will be three bits long, which allows the second and third groups of bits to each be exactly four bits long. The groups, then, are 101, then 1100, then 1010. (If it is easier, one can just add enough leading zeros to the first group so that the number of bits in the entire number is a multiple of four.) After splitting the number into groups of four bits, just convert each group to hexadecimal. In the example, the resulting hexadecimal digits are 5, then C, then A. Finally, concatenate the hexadecimal digits. The resulting hexadecimal number is 5CA, which is the correct answer.
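The group-of-four-bits trick just described can be sketched in Python (a minimal illustration; the function names are made up for this example):

```python
def hex_to_binary(hex_digits):
    """Concatenate the 4-bit binary pattern for each hexadecimal digit."""
    return "".join(f"{int(digit, 16):04b}" for digit in hex_digits)

def binary_to_hex(bits):
    """Pad the front with zeros to a multiple of four bits, then
    convert each group of four bits to one hexadecimal digit."""
    padded = bits.zfill((len(bits) + 3) // 4 * 4)
    return "".join(f"{int(padded[i:i + 4], 2):X}"
                   for i in range(0, len(padded), 4))

print(hex_to_binary("9E"))           # the 0x9E example → 10011110
print(binary_to_hex("10111001010"))  # the eleven-bit example → 5CA
```

Note that neither function does any arithmetic on the number as a whole; each hexadecimal digit maps independently to its own group of four bits, which is exactly why the method is so fast by hand.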
- Converting between decimal and binary/hexadecimal
- [#cnvdecbn]: Converting from decimal to binary (fairly fast)
- The even-or-odd method
(Note: This summarization was not thoroughly tested at the time of this writing. Test this at your own risk.)
This method doesn't involve memorizing a bunch of numbers that are powers of two. It does involve determining whether a number is even or odd. It also involves dividing even numbers by two. If you can do that quickly, then this method may be the preferable way.
This might not be quite as quick as some other ways, but it is pretty simple and straightforward. Because of the simplicity, some people may find this way quicker than other ways (particularly when those people haven't yet memorized a bunch of the powers of two).
The process is:
If the number is odd:
- Write down a one. This should go to the left of all digits already written.
- Subtract one from what is left of the number that is being converted
Otherwise, since the number is even:
- Write down a zero. This should go to the left of all digits already written.
Then, in either case:
- Divide the remaining number by two.
- If the remaining number is bigger than zero, go back to the first step (the odd-or-even test).
- Otherwise, you're done. You've written down the binary number.
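The steps above can be sketched as a short Python function (an illustrative sketch; the function name is made up):

```python
def even_odd_to_binary(number):
    """Convert a non-negative decimal integer to a binary string
    using the even-or-odd method described above."""
    if number == 0:
        return "0"
    bits = ""
    while number > 0:
        if number % 2 == 1:      # odd: write a one (to the left), subtract one
            bits = "1" + bits
            number -= 1
        else:                    # even: write a zero (to the left)
            bits = "0" + bits
        number //= 2             # halve the (now even) remaining number
    return bits

print(even_odd_to_binary(72))    # → 1001000
```

Each pass writes exactly one bit, so a number needing seven bits takes seven passes.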
More information about the logic of this method may be found from: Princeton: Math Alive: Labs: Cryptography: Part 1: Conversion from Decimals to Binary (Method #2).
- Subtracting powers of two
This might be the faster of the two methods, if you're quick at identifying powers of two and subtracting.
This can often be done by many people using just their head. (Meaning: no calculators, and no scratch paper. Although, typing or writing the answer is typically helpful, as it can be easier than trying to remember the binary digits.)
For example, let's look at the message “Hi”. Looking up Code Page 437, the letters “H” and “i” have decimal ASCII values of 72 and 105. Both of these are less than 256, so each result can be stored in a single byte.
- H (72)
72 is smaller than 128. So the eighth bit (from the right, so this will be the first bit from the left) is cleared to a zero value.
72 is “at least as large as” (a.k.a. “greater-than-or-equal-to”, or “≥”, or “>=”) 64. So the seventh bit (from the right) is set to a one. So the bits currently look like: 01
72 - (0 × 128) - (1 × 64) = 8
Eight is less than 32, and less than 16, but is greater-than-or-equal-to 8. So we assign the bits: 0 for the column worth 32, 0 for the column worth 16, and 1 for the column worth 8. The result so far looks like: 01001
The remaining amount is zero, so all remaining bits can be set to zero, giving: 01001000
- i (105)
Let's go a bit more challenging.
105 is smaller than 128, but is at least as large as (greater-than-or-equal-to) 64. So the eighth bit (from the right, so this will be the first bit from the left) is cleared to a zero value, and the seventh bit (from the right) is set to a one. So the bits currently look like: 01
105 - (0 × 128) - (1 × 64) = 41
41 is at least as large as (greater-than-or-equal-to) 32. So the next bit is set to one.
41 - 32 = 9.
9 is less than 16, so the next bit is zero.
Nine is greater-than-or-equal-to eight, so we set the next bit to 1.
9 minus 8 is 1. Converting the remaining bits, using the same pattern, results in: 01101001
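The subtracting-powers-of-two walk-through for “H” and “i” can be sketched like this (an illustrative sketch, assuming eight-bit results; the function name is made up):

```python
def subtract_powers_to_binary(number, width=8):
    """Walk the columns from 128 down to 1; write a 1 and subtract
    whenever the remaining amount is at least the column's value."""
    bits = ""
    for power in range(width - 1, -1, -1):
        column = 2 ** power          # 128, 64, 32, 16, 8, 4, 2, 1
        if number >= column:
            bits += "1"
            number -= column
        else:
            bits += "0"
    return bits

print(subtract_powers_to_binary(72))   # H → 01001000
print(subtract_powers_to_binary(105))  # i → 01101001
```

This is the same left-to-right decision process as the worked examples: one comparison (and possibly one subtraction) per bit column.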
For numbers below two in binary and below ten in hexadecimal, converting the numbers to decimal does not require any change in how the numbers are written out.
Although individual results may vary, many people may find that the fastest, easiest way of converting larger numbers to and from hexadecimal is to first convert the numbers from decimal or hexadecimal into binary, and then to convert the binary number to either hexadecimal or decimal.
To convert from hexadecimal to decimal, first convert from hexadecimal to binary and then go from binary to decimal.
There is also the slower, more generic method, which sometimes feels like it needs to be deployed; this is how the process is often taught, and it may be the most sensible method if there is ever a need to do this sort of thing with other/non-standard bases (such as octal).
Note: These notes were made rather hurriedly. It may be appropriate to spruce up this guide with more visual displays. There may be better tutorials out there. For now, though, this does show the general process.
An assumption at this point is that operator precedence is understood: ^ (exponentiation) happens first, followed by multiplication and division, and then addition and subtraction.
Example: Converting 2A6 to decimal:
Convert that to: 2*16^2 + A*16^1 + 6*16^0, which is like 2*256 + 10*16 + 6*1, which is 512 + 160 + 6, which is 678.
To convert back: We'll use a number that represents an unpleasant concept, when viewed upside down with an LED display. That is, until the number is divided by 10,000 (in decimal), at which point it is a nice greeting. The number in hexadecimal is 1E36.
1E36 = 1*16^3 + E*16^2 + 3*16^1 + 6*16^0, which is 1*4,096 + 14*256 + 3*16 + 6*1 = 4,096 + 3,584 + 48 + 6 = 7,734.
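Those positional sums can be written out directly as a small sketch (the function name is made up; each digit's value is multiplied by sixteen raised to that digit's position):

```python
def hex_to_decimal(hex_digits):
    """Multiply each digit by 16 raised to its position
    (the rightmost digit is position zero) and sum the results."""
    total = 0
    for position, digit in enumerate(reversed(hex_digits)):
        total += int(digit, 16) * 16 ** position
    return total

print(hex_to_decimal("2A6"))   # 2*256 + 10*16 + 6*1
print(hex_to_decimal("1E36"))  # 1*4096 + 14*256 + 3*16 + 6*1
```

The same loop works for any base by replacing the 16s with the base in question, which is why this generic method gets taught so often.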
Example: When a Cisco device finds a physical problem which might be an issue with wiring, it may utilize the number 3,135,023,902. What is the value when written out as hexadecimal? (If, when reading that, the number one looks like a lowercase L, then what does it spell out?)
A skilled person may be referred to as...
- 2766 / 256 = about 10.8.
- Since the quotient is about 10.8, we'll take the integer part of that quotient, which is 10. 10 in decimal is A in hexadecimal. So our first letter is A.
- 10 * 256 = 2,560
- 2,766 - 2,560 = 206. That is a remainder that we still need to convert.
- 206 divided by 16 = exactly 12.875
- 12 in hexadecimal is C
- 12 * 16 = 192. 206 - 12*16 = 206-192 = 14.
- 14 divided by 1 is, unsurprisingly, 14.
- 14 in hexadecimal is E.
- A00 + C0 + E = ACE.
- 4,277,009,102 converted into hexadecimal:
- 4.2 billion is less than 68 billion (which is about 16^9), so skip the 16^9 column. For that matter, 4.277 billion is less than 4.295 billion (16^8), so skip the 16^8 column as well.
- Divide 4,277,009,102 by 16^7. 4,277,009,102 divided by 268,435,456 is about 15.93. 15 * 268,435,456 is 4,026,531,840. 4,277,009,102 - 4,026,531,840 = 250,477,262. 250,477,262 should be less than 16^7 and it is.
- We now know the first digit will be 15 in hexadecimal, which is F. We still need to convert the remainder, 250,477,262.
- The last power of 16 used was 16^7. Compare 250,477,262 to 16^6. 16^6 is 16,777,216 which is smaller than 250,477,262, so we'll use that.
- 250,477,262 divided by 16,777,216 is about 14.93. 14 * 16,777,216 is 234,881,024. 250,477,262 - 234,881,024 = 15,596,238. 15,596,238 should be less than 16^6 and it is.
- We now know that the second digit will be 14. In hexadecimal, that will be an E. We will need to convert the remainder, 15,596,238.
- 15,596,238 divided by 16^5, which is 1,048,576, comes to about 14.87. 14* 1,048,576 = 14,680,064. 15,596,238 - 14,680,064 = 916,174. That remainder should be less than 16^5 and it is.
- Again, the integer part of the quotient is 14. So we have another 14 which, when converted to hexadecimal, is an E.
- 916,174 divided by 65,536 is about 13.98. 13 * 65,536 = 851,968. 916,174 - 851,968 = 64,206 (which should be less than 65,536).
- 13 in hex is D.
- 64,206 divided by 4,096 is about 15.67. 15 * 4,096 = 61,440. 64,206 - 61,440 is 2,766.
- 15 is F
- 2,766 in decimal has already been shown.
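The repeated division shown in these examples (find the largest power of sixteen that fits, take the integer quotient as the next digit, and keep converting the remainder) can be sketched as follows (the function name is made up for this example):

```python
def decimal_to_hex(number):
    """Divide by descending powers of sixteen; each integer quotient
    becomes the next hexadecimal digit, and the remainder carries on."""
    if number == 0:
        return "0"
    power = 1
    while power * 16 <= number:    # find the highest power of 16 needed
        power *= 16
    digits = ""
    while power >= 1:
        quotient = number // power
        digits += "0123456789ABCDEF"[quotient]
        number -= quotient * power  # the remainder still needs converting
        power //= 16
    return digits

print(decimal_to_hex(2766))        # the shorter worked example
print(decimal_to_hex(4277009102))  # the longer worked example
```

Each loop iteration corresponds to one bullet point in the worked examples above: one division, one digit, one remainder.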
Here's another example: Let's see what activity restaurants help with:
So if we add those examples together, then we get to see what restaurants help people to do...
Another number to play around with: 12,648,430 converted into hexadecimal.
Example: A number used by the Java class file structure, as well as the universal Mach object (“Mach-O”) file format (when using a “universal” binary that runs on PowerPC or IA-32), is 3,405,691,582. What might have been on the mind of somebody, who may have frequented Starbucks, when coming up with such a number?
Speaking of Starbucks, they want nothing to do with the message seen behind the hexadecimal equivalent of the longer number 15,310,212,104,174 when it gets converted to a 44-bit hexadecimal number.
For more messages embedded, see: Wikipedia's article on “Hexspeak”: “Notable magic numbers”. Somehow (at the time of this writing) that list seems to be missing the mischievous 32-bit 2,880,289,470.
- [#powoftwo]: Powers of two
- Why use exponential powers of two?
Certain numbers tend to be used by computers more than other numbers, and heavy users of software will notice these numbers appearing frequently. RAM chips have also tended to have capacities that match a power of two. The values include: 2, 4, 8, 16, 32, 64, 128, 256, 512, 1,024, 2,048, 4,096, 8,192, 16,384, 32,768, 65,536, 131,072, 262,144, 524,288, and 1,048,576. These numbers are called the “powers of two”.
Also seen fairly commonly are the powers of two minus one: 1, 3, 7, 15, 31, 63, 127, 255, 511, 1,023, 2,047, 4,095, 8,191, 16,383, 32,767, and 65,535. There are reasons why many limits are related to subtracting one from a power of two, instead of being an actual power of two. First, though, this text will delve into why powers of two are used.
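Both lists can be generated mechanically, which is a quick way to double-check any entry (a small sketch):

```python
# Powers of two, and each power of two minus one, for 1 through 16 bits.
for bits in range(1, 17):
    values = 2 ** bits   # number of distinct values that many bits allow
    print(f"{bits:2d} bits: {values:6d} values, largest value {values - 1}")
```

The “largest value” column is the power of two minus one, since counting starts at zero.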
Let's take a look at a standard chart. (A similar version of this chart is also available at hex values.)
Binary  Decimal  Hexadecimal  Name
0000    0        0  0         Zero
0001    1        1  1         One
0010    2        2  2         Two
0011    3        3  3         Three
0100    4        4  4         Four
0101    5        5  5         Five
0110    6        6  6         Six
0111    7        7  7         Seven
1000    8        8  8         Eight
1001    9        9  9         Nine
1010    10       A  a         Ten
1011    11       B  b         Eleven
1100    12       C  c         Twelve
1101    13       D  d         Thirteen
1110    14       E  e         Fourteen
1111    15       F  f         Fifteen
One thing to understand is that leading zeros have no impact. So, binary 0001 is the same thing as 1 (no matter whether we are using binary or some other, larger base).
Notice that if we restrict ourselves to two bits, we have four possibilities (00 and 01 and 10 and 11).
Now, what happens when we allow three bits? There are eight possibilities. Namely, we have:
000 and 100
001 and 101
010 and 110
011 and 111
So when there are three bits, we end up having all the possibilities of two bits with a leading 0 (zero), and we also add all of the possibilities of two bits but with a 1 placed before each possibility.
In the chart above, one can also easily see what happens when going from 3 bits (which allow values from 0 through 7) up to 4 bits (which allow values from 0 up through 15 decimal). Notice that the binary patterns of the numbers 8 through 15 look fairly similar to the binary patterns used for the numbers 0 through 7. The only difference is that the numbers 8 through 15 start with a one, instead of a (leading) zero.
This same sort of thing happens every time a bit is added. All of the old possible values remain available, and what is also available is the same values coming right after a value of one.
So, if we have four bits available, there are 16 possibilities. Although we could restrict the computer to a lower number like 10, that ends up wasting some of the other possible values that could, possibly, be put to effective use. So, the main reason why computers tend to use powers of two (so frequently) is just an attempt to effectively use, rather than waste, some of the possible values.
Doubling the possibilities only happens when adding a bit. If adding a unit that is a different size, even more possibilities may become available. For example, when adding a decimal place (e.g., going from 10 to 100), the number of possibilities gets multiplied by ten. So, the prevalence of these powers of two is closely related to the reasons why people use binary as the common base for computer units (which are called “binary digits”, or “bits”).
- [#powtwomo]: Variation: Powers of two minus one
As noted in the section about powers of two, also seen fairly commonly are the powers of two minus one: 1, 3, 7, 15, 31, 63, 127, 255, 511, 1,023, 2,047, 4,095, 8,191, 16,383, 32,767, and 65,535.
There are reasons why we use the powers of two for things like the memory capacity of RAM chips. (See the sections about powers of two and/or the reasons why people use binary as the common base for computer units (which are called “binary digits”, or “bits”).) However, surely there is a reason why other numbers, like the largest addressable memory location, often end up being an odd number which is just under a power of two.
One reason for this is commonly because zero is counted. So, if there are four possible values (0, 1, 2, and 3), the possible numbers include one through three, and also zero. Having zero (instead of one) as the lowest possible value just seems to frequently be useful.
For the negative variations (e.g. -127), this can sometimes be caused by having a power-of-two range signed. One bit is used for a positive or negative sign. As an example, three bits can allow for zero through four, and also (counting down from) negative one through negative three. (All possible values, then, are: -3, -2, -1, 0, 1, 2, 3, 4. This example, or smaller examples like -1, 0, 1, 2, make it fairly easy to count the possibilities and see what is happening.) In this sort of scenario, the positive value of four is possible because zero is not taking up a spot in the list of possible positive numbers. Instead, the negative variation of the power of two becomes unrepresented. Zero, and the positive variation of the power of two, must generally be considered important to represent. Because the negative power of two is sacrificed, the positive numbers don't also need to sacrifice a possibility for the number zero. (Note that the exact endpoints depend on the representation: the common two's-complement scheme instead sacrifices the positive power of two, so three bits span -4 through 3, and a signed byte spans -128 through 127.)
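As a concrete sketch, assuming the two's-complement representation that modern hardware almost universally uses (where the extra value lands on the negative side):

```python
# Two's-complement ranges: one bit pattern effectively carries the sign,
# and zero takes a slot among the non-negative values, so the negative
# end reaches one value further than the positive end.
for bits in (3, 8, 16):
    lowest = -(2 ** (bits - 1))
    highest = 2 ** (bits - 1) - 1
    print(f"{bits:2d} bits: {lowest:,} through {highest:,}")
```

Either way, the total count of values is still exactly a power of two; only where the range's endpoints fall depends on the representation.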
- [#ybinary]: Why use binary (and bits, which are “binary digits”)?
A bit has one of just two possible values. That is the smallest number of values which still allows each bit to have multiple possible values. Keeping the number of possible values smaller can make some things simpler. Many implementations of technology are based on using a simple concept (in elaborate ways).
The term “bit” stands for “binary digit”, quite likely a portmanteau made from the leading “b” plus a trailing “it” (taken either from the end of “digit” or from within “binary”... taking letters from only the middle of “binary” just seems unlikely). In theory we could use units with three values, and call them a “trit” (from “trinary digit”, perhaps “tr” plus “it”) or, depending on the pattern being mimicked, the first letter and the last couple of letters (“t” plus “it”). However, there may be reasons why it was decided not to be spending time thinking about tits.
On a more serious note, there's actually no compelling reason why technology couldn't have used units with three possible values. Wikipedia's article on “Fast Ethernet”: “100Base-T4” section even refers to “A very unusual” method “used to convert 8 data bits into 6 base-3 digits”. See that? “base-3 digits”. So, base 3 has actually been used in technology. This sort of technology was adapted and is used by the much more common 1000Base-T connections (as described by Wikipedia's article for “Gigabit Ethernet”: “1000Base-T” section).
It is acknowledged, though, that using three-valued units is fairly uncommon compared to using two-valued binary units. Some people may wonder whether there is an advantage to using three values instead of two. However, why stop at three? Why not use 4? Or 5? Or 10? The sexagesimal system used 60 values for each digit, and has been used since ancient cultures.
So, the simple answer is: there's no reason why we couldn't have used some other base. In fact, computer programming languages may often support octal (base 8) and hexadecimal (base 16). Early computers effectively used base 128 when implementing 7-bit bytes, and then support for base 256 became widespread as computers were designed around bytes that took 8 bits and had 256 possible values. So, other values could be used, and, in fact, are.
None of these alternatives, though, offers the sheer simplicity that is available when there are only two possible choices. So, there remains a reason why binary just seems to be a very natural numbering system to use. In many cases, there just isn't any compelling reason to use the higher bases. So, binary's simplicity of implementation is reason enough to lean towards binary.
- [#datasize]: Some standard data sizes
These terms tend to refer to groups of bits, with a consistent number of bits in each group.
- [#bit]: bit
A bit is either cleared to a value of zero, or set to a value of one. Very often, the word “set” does refer to a value of 1, and the term “clear” does refer to a value of zero. So if a knowledgeable computer programmer (or even technician) refers to a bit that is “set”, there may be no need to ask what value it was set to.
- [#byte]: byte
Nowadays, the term typically refers to 8 bits. (Same as an octet.) Wikipedia's article for “Byte” notes, “With ISO/IEC 80000-13, this common meaning was codified in a formal standard.” However, the term “byte” has certainly had other meanings. Later, Wikipedia's article for “Byte” goes through some standards, noting “Early computers used a variety of 4-bit binary coded decimal (BCD) representations and the 6-bit codes for printable graphic patterns common in the U.S. Army (Fieldata) and Navy.” ASCII used 7 bits (although some standard “code pages”, which were sometimes referred to as “extended ASCII”, were eight bit). A “binary coded decimal” (“BCD”) standard used 4 bits. IBM released a “Binary Coded Decimal Interchange Code” (“BCDIC”) which was 6 bits, and later an “Extended Binary Coded Decimal Interchange Code” (“EBCDIC”) which added a couple more bits.
In practice, there could also be 10-bit groups containing 8 bits of actual data, plus a "start bit" and a "stop bit", so dial-up modem transmissions could take bits per second and divide by 10 to get bytes per second.
Although the term “byte” is most frequently understood (in modern times) to refer to a group of eight bits, the term “octet” is less ambiguous.
Wikipedia's article for “Byte” notes that byte “is a deliberate respelling” of the word “bite”, which was done “to avoid accidental mutation to” the term “bit”. Such a single-character misspelling (or mis-correction, hoping to fix an apparent error) could impact numbers by a factor of eight, so the difference in spelling reduced the likelihood of some such errors.
- [#octet]: octet
8 bits. RFC documents will often refer to octets in order to avoid any ambiguity about byte size.
- [#nibble]: nibble
- 4 bits. (Part of a byte. Sometimes alternatively spelled “nybble”.)
- word
A defined size. Like the term “group”, the term “word” can refer to different sizes in different contexts.
Perhaps the term most commonly refers to the size of data that is typically used by a CPU instruction. Therefore, an 80386 typically used 32-bit words, while older Intel CPUs used smaller words and x64 systems use 64-bit words. However, even that can vary...
RFC 793: TCP, page 16 refers to both “32 bit words” (in the “Data Offset” section) and “16 bit words” (in the “Checksum” section), on the same page of this popular standards document.
So, the term should be considered to be potentially ambiguous until clearly defined. People writing about a word should generally define the size in order to provide the necessary clarity.
- frames/packets/cells
May be a set size, e.g. Asynchronous Transfer Mode's 48-byte payload and 5-byte header, or may vary, such as: Ethernet frames that can be up to 1,500 bytes, or Jumbo Frames which have an increased maximum of perhaps 9,000 bytes, or super jumbo frames which may be larger, or IPv4 packets of up to 65,535 octets, or IPv6 jumbograms that may be up to 4,294,967,295 octets. (The OSI Model layers associated with specific protocol data units may be seen at OSI Model.)
Similarly, a data stream (e.g. a file's data) could vary in size.
Hard drive manufacturers found some marketing benefit to using base 10 measurements. RAM manufacturers have stuck with using base 2 measurements.
- Kbps verses KBps (bits vs bytes)
- (Abbreviations are discussed in the section called “Attack of the kibibits!”.)
- [#kibibit]: Attack of the kibibits!
When personal computing was just starting to become somewhat mainstream in the 1980s, and accepted more throughout the 1990s, a Kilobyte was established to be 1,024 bytes. A Megabyte was established to be 1,024 of these Kilobytes, which equates to 1,048,576 bytes. These numbers were chosen because they were considered to be “close enough” to 1,000 and 1,000,000, and these numbers were powers of two.
Eventually, some hardware companies (perhaps most famously, hard drive manufacturers) realized that they could claim larger numbers of kilobits (or groups of kilobits, such as kilobytes or megabytes) if the term “kilo” meant 1,000 instead of 1,024. By using that measurement, 1,024 bits could represent 1.024 kilobits instead of just 1.000 kilobit. So, for that extra 2.4%, the marketers started using the term differently. Note, though, that this isn't just a difference of 2.4%. It is really 2.4% per prefix level (each step being a factor of 1,024 versus 1,000). (Some numbers are shown by a web page about prefixes.) By the “Tera” level, the difference is over 9.95%, and by the Yotta level (10^24) the difference exceeds 20%. So, this does (eventually) get to be notably more significant than just 2.4%.
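The growing gap can be checked with a few lines (a small sketch, using the standard SI prefix names):

```python
# Drift between binary prefixes (powers of 1,024) and decimal
# prefixes (powers of 1,000), one prefix level at a time.
prefixes = ["kilo", "mega", "giga", "tera", "peta", "exa", "zetta", "yotta"]
for level, name in enumerate(prefixes, start=1):
    drift = (1024 ** level / 1000 ** level - 1) * 100
    print(f"{name:>5}: {drift:5.2f}% larger")
```

Because each level multiplies the ratio by another 1.024, the drift compounds: about 2.4% at kilo, just under 10% at tera, and over 20% at yotta.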
Well, computer users objected to usage of the terms that was inconsistent with how computer users had been using them. However, the hard drive manufacturers apparently decided to go after the marketing benefits.
RAM manufacturers, however, have not. They have continued to market using “binary” variations of Kilobits.
As long as there is continuing different usage, related confusion is expectable. (This is not saying “acceptable”, just “expectable”.) (See: Wikipedia's article on “binary prefix”: section called “Consumer confusion”.) This has led to lawsuits (see Wikipedia's article on “binary prefix”: section about “Legal disputes”). (A forum post notes, “This difference actually led to a class-action lawsuit against Seagate, which (hilariously) they lost.” However, records seem to indicate the companies have tended to get away with not officially admitting to any guilt/wrongdoing. Also, by settling, the companies have been avoiding a judge making an official determination of such guilt. However, Seagate class action suit records note customers being able to get “5% cash back on disk drives bought over” a six-year period.)
Even official standards organizations have not been able to get universal consensus. Wikipedia's article on “binary prefix”: “Standardization of dual definitions” shows some standards bodies noting the terms may have multiple meanings. Wikipedia's article on “kilobyte” notes, “Although the prefix kilo- means 1000, the term” ... [has] “historically been used to refer to either 1024” ... “or 1000” ... “bytes, dependent upon context, in the fields of computer science and information technology.”
Some people (perhaps primarily computer programmers) have adopted the term “decimal megabyte” to refer to a megabyte in base 10. A “megabyte using a binary base”, sometimes commonly referred to as a “binary megabyte”, has sometimes been abbreviated as “MiB” (meaning something like “Mega-, binary-based, byte”) rather than “MB”. Some people have also embraced a new term, “mebibyte”, which means the same thing (and uses the same abbreviation of “MiB”). Other units have been similarly modified by substituting the second syllable with “bi”, to represent a value of 2 raised to a multiple of ten, instead of ten raised to a multiple of three. Unit names, therefore, include: kibibit, mebibit, gibibit, and tebibit.
Sometimes the capitalization of the prefixes can be tell-tale: an uppercase letter (e.g. “K” for “Kilo”, or “M” for “Mega”) may refer to a binary-based unit, while a lowercase letter (e.g. “k” for “kilo”, or “m” for “mega”) may refer to a decimal-based unit. Another convention often utilized by the same people is to use an uppercase “B” for (octet) bytes, and a lowercase “b” for bits. Using these conventions, Kb may be “1,024 bits” while kB may be “1,000 bytes”. These conventions are convenient and short, although they are not necessarily universally followed. (So, beware when relying on them.)
Although some people have tried promoting this and similar terms (such as “kibibit”), the proposed invasion of the kibibits has met substantial resistance. This resistance (to adopting these specific new terms) has been notable enough that many professionals continue to use the traditional terms, even after being informed of the advantages of the simplicity of the new terms.
Wikipedia's “Talk” page about “Mebibyte”: comment titled “"Mebibytes" are studid” notes that the terms “decimal megabyte” and “binary megabyte” are usable and clear, and that the term “MiB” may even be expanded to “binary megabyte” (when reading the term out loud).
Although there have been some standards recognizing dual definitions, other standards have tried to clearly define “megabyte” as being base 10. Some standards that might be related: IEEE 1541-2002 (“Prefixes for Binary Multiples”) (mentioned by Wikipedia's Talk page for Mebibyte: section about name's origin), IEC 60027 (perhaps IEC 60027-3; see Wikipedia's article for “IEC 60027”), ISO 31, ISO/IEC 80000, and IEC 60617-12 (mentioned by Wikipedia's Talk page for Mebibyte). (A commenter, presumably going by the name “TheBug” (related to the German release of Wikipedia), noted, “Unfortunately IEC charges $140 for their standard. I would love to take a look at the list of contributors, I have some suspicions about the involved parties.” mentioned by Wikipedia's Talk page for Mebibyte.)
Perhaps much of the resistance to adopting the new terms would have been lessened if the proposed names were some better-sounding names. The terms “kibi” and “gibi” and perhaps especially “mebi” just tend to sound like ga-ga-goo-goo babble baby talk. Wikipedia's “Talk” page about “Mebibyte”: comment titled “"Mebibytes" are studid” quotes a user (who has gone by “WickWax”), “I assert, that if a global survey were taken, less than 5% of engineering professionals would admit to pronouncing or saying "Mebi..." or "Gibi..." at anytime, anywhere.” If that isn't what comes to people's mind, then perhaps what does come to mind is the word “kibbles”, especially since a following syllable can be “bits”. It can remind people of the childish-looking 1980s television commercial where a “wiener” dachshund is longing for “Kibbles 'n bits 'n bits 'n bits”. Wikipedia's Talk page for Mebibyte notes, “And what the layman would think? Oh...the KiB thing is a spelling error...it looks funny!” Wikipedia's Talk page for Mebibyte: section about name's origin refers to sounding like a person speaking when the person has a cold.
One point in favor of those who cling to the old terms is that, if the new terms were widely adopted, then suddenly a lot of pre-existing text would mismatch the new definitions. (Perhaps the mebibyte promoters may have had some better luck if they also tried to promote additional new terms like “medebytes”? Although, if pronounced as “med ee bite”, that might not necessarily be sufficiently different to avoid the babble factor.)
Much of the discussion on some various Wikipedia pages (Wikipedia's Talk page for Mebibyte, Wikipedia's Talk page for Kibibyte, Wikipedia's Talk page for Binary Prefix) shows that people are quite adamant about their opinions on the issue, and seems to indicate that the terms “kibibyte” and similar may be a “failed standard” which has obtained very little support beyond Wikipedia. One comment noted, “The use of these terms should be immediately banned from Wikipedia. If you look up Mebibyte or Gibibyte on Goolge you will find that more than 1/3 of the references are found on Wikipedia and the rest seem to be other sites that define the terms. This a self referencing act here.”
- [#codepage]: Code pages
TechNet: Chapter 2 - SQL Server (7.0) Setup shows/mentions some of the more popular code pages.
- [#codpg8bt]: Numbered 8-bit code pages
- [#cp858]: Code Page 858 (“CP858”)
This is a minor variant of Code Page 850, notable for having the euro sign.
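The difference can be demonstrated directly, since Python ships codecs for both code pages. The only byte the two pages interpret differently is 0xD5, which CP850 maps to a dotless i and CP858 maps to the euro sign:

```python
# CP850 and CP858 differ only at byte 0xD5: CP850 has LATIN SMALL
# LETTER DOTLESS I there, while CP858 replaced it with the euro sign.
b = bytes([0xD5])
print(b.decode("cp850"))  # ı
print(b.decode("cp858"))  # €
```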
- [#cp850]: Code Page 850 (“CP850”)
- Wikipedia's page for “Code page 437”: “Internationalization” section notes, “Later MS-DOS character sets, such as code page 850 (DOS Latin-1)” ... “filled the gaps for international use with some compatibility with code page 437 by retaining the single and double box-drawing characters, while discarding the mixed ones (e.g. horizontal double/vertical single).”
- [#cp437]: Code Page 437 (“CP437”)
Starts off with ASCII, and then contains a mix of internationalization characters and shapes that could be used for drawing boxes.
Wikipedia's page for “Code page 437”: section called “Difference from ASCII” cites the October 2, 1995 edition of Fortune Magazine as quoting Bill Gates: “We were also fascinated by dedicated word processors from Wang, because we believed that general-purpose machines could do that just as well. That's why, when it came time to design the keyboard for the IBM PC, we put the funny Wang character set into the machine—you know, smiley faces and boxes and triangles and stuff. We were thinking we'd like to do a clone of Wang word-processing software someday.”
Higher up on that Wikipedia page is a graphic showing the symbols. This is also available by visiting Wikipedia's page on Codepage-437.png.
Wikipedia's article about the “.nfo” file format (commonly used by the “warez” scene: warez is basically another name for “software piracy”) discusses some of the artistic use that led to some usage of the graphics of this code page.
- Using the code pages with modern graphical systems
MS LineDraw Version 2.00 (info page at Microsoft), MS KB Q179422: WD97: MS LineDraw Font Not Usable in Word (referring to Word 97). Wikipedia's page on “ANSI art”: “External links” section has hyperlinked to http://zeh.com.br/v12/downloads/dos437.zip. (The hyperlink called this “Perfect DOS VGA 437”.) A font called Terminus may be similar (Wikipedia's page for “Terminus (typeface)” refers to different websites): Terminus @ SF, Generating TTF files for Terminus, Older Terminus TTF font.
Currently there are quite a few comments about this in the CSS file used by this page: see comments in common CSS. (Search for “RealTerm”.) (Eventually, those comments may be re-analyzed, compared with some of the other info here, and migrated here.)
(Not all of this info has been checked, but it is provided for reference.) For some other software that may help to view or create such files, see: Wikipedia's list of text editors: section called “ASCII and ANSI art”, Wikipedia's page on “ANSI art”: “External links” section, Wikipedia's article on “.nfo” files: section called “References” and the following “External links” section, Wikipedia's article: “ASCII art converter”.
Code Page 437 (DOS-Latin-US) to Unicode table. (The “Format” section in the comments provides descriptions for the columns.)
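Mapping tables like the one above are built into many languages. For instance, Python's standard “cp437” codec performs the same CP437-to-Unicode conversion (the sample bytes below are box-drawing characters chosen for illustration):

```python
# Decode CP437 bytes to Unicode using Python's built-in "cp437" codec.
data = bytes([0xC9, 0xCD, 0xBB])  # top edge of a double-line box
print(data.decode("cp437"))       # ╔═╗

# Going the other direction, from Unicode back to CP437 bytes:
print("╔═╗".encode("cp437"))      # b'\xc9\xcd\xbb'
```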
A group of 128 characters (seven bits, for values zero through 127). When computers started supporting 8-bit characters, the resulting code pages were often referred to as “Extended ASCII”. For details on those, see Numbered 8-bit code pages.
Unicode was originally designed around 16-bit values, and has since grown to allow a much larger number of characters (over a million code points). See: Wikipedia's article for Unicode: section called “Operating systems” to review support. See also: Wikipedia's article for Unicode: section called “Versions”. As if that weren't enough versions, there are also multiple UTF encodings (Wikipedia's page for Unicode: section called “Unicode Transformation Format and Universal Character Set”).
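The distinction between a Unicode code point and its UTF encodings can be shown with one character. The euro sign is a single code point (U+20AC), but it occupies a different number of bytes depending on which UTF encoding is chosen:

```python
# One code point, three encoded sizes.
ch = "\u20ac"                    # € (EURO SIGN)
print(hex(ord(ch)))              # 0x20ac
print(ch.encode("utf-8"))        # 3 bytes: E2 82 AC
print(len(ch.encode("utf-16-le")))  # 2 bytes
print(len(ch.encode("utf-32-le")))  # 4 bytes
```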
- Some historical options are mentioned in the discussion of the term “byte”.
- Numeric Network Addresses
- Numbers related to Internet protocols
- Layer 4 Port Numbers
IANA's list of port numbers mostly consists of TCP port numbers and UDP port numbers. However, port numbers can also apply to SCTP or DCCP.
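Most operating systems ship a local copy of much of this IANA list (e.g. /etc/services on Unix-like systems), and Python's socket module can query it. This is a sketch; the lookup raises an error on systems without a services database:

```python
import socket

# A few well-known Layer 4 port numbers from the IANA registry,
# listed here for illustration.
WELL_KNOWN = {80: "http (TCP)", 443: "https (TCP)", 53: "domain (TCP and UDP)"}

# Where the OS provides a services database, look names up dynamically:
try:
    print(socket.getservbyname("http", "tcp"))  # typically 80
except OSError:
    print("no local services database available")
```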
- [#protonum]: Protocol Numbers
Many of the protocols on the Internet have been assigned a number. One set of numbers is often given the name “protocol number”. The most recent list can be found from Protocol Numbers. This is used by the “Protocol” field of an IPv4 packet, and the “Next Header” field of an IPv6 packet.
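As a quick illustration of where these numbers live on the wire, the “Protocol” field is the tenth byte (offset 9) of an IPv4 header. The following Python sketch (the function and dictionary names are made up for this example, not from any standard library) extracts and names it:

```python
# A small subset of IANA's protocol number assignments, for illustration.
PROTOCOL_NAMES = {1: "ICMP", 2: "IGMP", 6: "TCP", 17: "UDP", 58: "ICMPv6"}

def ipv4_protocol(header: bytes) -> str:
    """Return the name of the protocol carried by an IPv4 packet."""
    proto = header[9]  # the "Protocol" field sits at byte offset 9
    return PROTOCOL_NAMES.get(proto, f"protocol {proto}")

# A 20-byte IPv4 header with protocol=6 (TCP); other fields zeroed
# for brevity, so this is not a valid packet, just a demonstration.
fake_header = bytearray(20)
fake_header[9] = 6
print(ipv4_protocol(bytes(fake_header)))  # TCP
```

In IPv6 the same numbering is reused, but the field is called “Next Header” and can chain through extension headers before reaching the transport protocol.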
The list at IANA replaces some RFCs. Related RFCs have included:
RFC 349 and RFC 433 and RFC 503 and RFC 739 and RFC 750 and RFC 755 and RFC 758 and RFC 762 and RFC 770 and RFC 776 and RFC 790.
RFC 790 at tools.IETF.org says this is obsoleted by RFC 820.
RFC 820 at tools.IETF.org says this is obsoleted by RFC 870.
RFC 870 at tools.IETF.org says this is obsoleted by RFC 900.
RFC 900 at tools.IETF.org says this is obsoleted by RFC 923.
RFC 923 at tools.IETF.org says this is obsoleted by RFC 943.
RFC 943 at tools.IETF.org says this is obsoleted by RFC 960.
RFC 960 at tools.IETF.org says this is obsoleted by RFC 990.
RFC 990 at tools.IETF.org says RFC 990 was updated by RFC 997, which itself was updated by RFC 1020 and RFC 1117. RFC 1020 was obsoleted by RFC 1062 and RFC 1117 and RFC 1166. RFC 1166 was updated by RFC 5737. However, that is an alternate update chain from RFC 990. Also, RFC 990 at tools.IETF.org says that RFC 990 is obsoleted by RFC 1010.
RFC 1010 at tools.IETF.org says this is obsoleted by RFC 1060.
RFC 1060 at tools.IETF.org says this is obsoleted by RFC 1340.
RFC 1340 at tools.IETF.org says this is obsoleted by RFC 1700.
RFC 1700 at tools.IETF.org says this is obsoleted by RFC 3232.
Finally, RFC 3232: “Assigned Numbers: RFC 1700 is Replaced by an On-line Database” effectively obsoleted RFC 1700 and, along with it, IETF STD 2.
- ICMP(v6) messages
The ICMPv6 Parameters list has some message types with names similar to those in the ICMP (for IPv4) Parameters list, but they are not necessarily the same list or the same message type names. Also, even when names for the message types match between the ICMP for IPv4 and ICMPv6, the numbers do not necessarily match.
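A few entries from the two IANA registries make the mismatch concrete. “Echo Request”, for example, is type 8 in ICMP for IPv4 but type 128 in ICMPv6 (the dictionary names below are just for this illustration):

```python
# Message types sharing a name but not a number between the two registries.
ICMPV4_TYPES = {"Echo Reply": 0, "Destination Unreachable": 3, "Echo Request": 8}
ICMPV6_TYPES = {"Destination Unreachable": 1, "Echo Request": 128, "Echo Reply": 129}

for name in ICMPV4_TYPES:
    print(f"{name}: ICMPv4 type {ICMPV4_TYPES[name]}, ICMPv6 type {ICMPV6_TYPES[name]}")
```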
- IGMP Type Numbers
- IGMP (for IPv6) Type Numbers. For IPv6, RFC 2710: “Multicast Listener Discovery (MLD) for IPv6” notes that “MLD uses ICMPv6 (IP Protocol 58) message types, rather than IGMP (IP Protocol 2) message types.”
- Truth Tables