Musings

8-bit computing

One of my parenting responsibilities is serving as a mentor for my son’s FRC robotics team. I don’t have a formal role; I’m available to help with anything the students need, but I especially enjoy working with the coding team. FRC takes high school students and gives them six weeks to build a robot that competes with and against other robots in games designed by the organization. Because the build season is so short, a lot has to happen in the off season. One of the things I’m going to do with the team is give them a history lesson in computer science.

These students have a wide range of backgrounds, and many don’t take a formal coding class in high school. Those who do usually focus on a language like Python or Java and don’t get into why their code behaves the way it does. There’s nothing wrong with this approach, but if students are interested in spending some of their summer getting a peek behind the curtain, who am I to complain?

This on-and-off-again series came from a question one of the students had. We were chatting about old 8-bit processors (I’ve already written about the 6502), and he logically assumed that the next iteration would be 9-bit processors. It makes sense, right? When I was young the NES was advertised as an 8-bit console, and the Genesis as 16-bit. I had a 32-bit version of Windows and eventually upgraded to a 64-bit version. Every major upgrade in my life has been a multiple of 8 bits, so of course I understood that 9 bits didn’t come next. But it took decades before I learned why. This series will explore what I’ve learned along the way, starting with what a bit is.

The Flashlight

A flashlight has two states: on or off. A binary digit, now known as a bit (a term coined by John Tukey in 1947), also has two states: 1 or 0. For the sake of argument, let’s say that a flashlight that’s on is a 1 and one that’s off is a 0. Now let’s imagine we want to send a message to someone far away, and the only thing we can use is the flashlight. We’re limited to those two states, so we’ll obviously have to combine states together to send a message. But what’s the smallest number of states we could combine and still have something usable?

(Of course Samuel Morse already did this, but let’s pretend for now)

I could say that on meant YES and off meant NO. That’s one way to send information, but how would I ask a question, or send someone a happy birthday message? I need more.

[Chart: one bit, two options (YES or NO)]

What if I combine two states together? That gets me 4 options! I can finally send that happy birthday message, but I can’t do much else.

[Chart: two bits, four options]

Obviously I need a way to represent each letter, which means I need 26 options. But I should also have a few punctuation marks so the recipient knows whether I’m making a statement or asking a question (“more bacon.” vs. “more bacon?”). Including numbers would also speed things up, so I can send “1” instead of “O-N-E.”

So, 26 letters, plus 10 digits (0-9), and two punctuation marks means I need 38 options. How many states do I have to combine together to get that?

Well, the YES or NO is two states, but by adding one more, as seen in the chart above, I get 4 options, not just 3: each added state doubles what I had rather than adding one more. Written another way, the first chart represents 2^1 options and the second 2^2. 2^3 would give me 8! That’s a far cry from the 38 I’d like to have, but each time I add one more state (on or off) to my code, the number of options doubles. Five states only get me to 32, still short of 38, but a string of 6 states gives me 64 different options, well over the 38 I need. Perhaps I could add in some accent marks or more punctuation.

[Chart: the number of options doubling as bits are added]

So as we’ve seen, by combining multiple states (or bits) together, we can use one flashlight (or telegraph machine) to send very complex messages. For each state, or bit, we add, the number of options doubles.
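If you’d like to see the doubling for yourself, here’s a quick Python sketch (Python is just my choice; the post itself has no code) that lists every pattern of n on/off states and then computes the smallest number of bits that covers our 38 characters:

    import math
    from itertools import product

    # Every pattern of n on/off states, written as strings of 0s and 1s.
    for n in range(1, 7):
        patterns = ["".join(p) for p in product("01", repeat=n)]
        print(f"{n} bit(s): {len(patterns)} options")  # 2, 4, 8, 16, 32, 64

    # The smallest number of bits that covers our 38 characters:
    print(math.ceil(math.log2(38)))  # -> 6, since 2**5 = 32 < 38 <= 2**6 = 64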

The 8-bit standard

[Chart: the letter A in Morse code]

Using our base-2 knowledge from above, we can deduce that 8 bits gives us 256 different values. The problem with using more bits is that it can make things slower. As you can see in the picture above, the letter A in Morse code is two states long (dot and dash, an upgrade from the on and off of our earlier example). In 8-bit binary, however, the letter A is represented as 01000001. Which is easier to convey: Dot-Dash, or Dot-Dash-Dot-Dot-Dot-Dot-Dot-Dash? If I were sending a message to a friend, I’d much prefer the first!
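You can verify that representation yourself with a quick Python sketch. (The 01000001 pattern comes from ASCII, the character set most computers eventually settled on, where A is value 65.)

    # The letter A in ASCII is 65, which is 01000001 in 8-bit binary.
    print(ord("A"))                 # -> 65
    print(format(ord("A"), "08b"))  # -> 01000001

    # Morse code covers the same letter in just two signals.
    print(len(".-"))                # -> 2 signals, vs. 8 bits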

Now, while that length difference would be very noticeable for us, for a computer it’s largely negligible. That’s not to say computers are instantaneous, though. There was a real reason to limit the number of bits early computers used.

The EDSAC computer, which first ran in 1949, used 5 bits (as holes in a strip of paper tape) to represent characters, giving it a total of 32. How did it get around the limitation we mentioned earlier? It used a special shift character in slot 32. When the computer saw 11111*, it knew that the next set of bits used the chart at the bottom. So if the computer saw 00000, it interpreted it as the letter “a.” If it saw 11111 and then 00000, it interpreted it as the letter “A.”

*These representations are hypothetical, because punching a solid line of holes across the tape would cause it to rip in half. For more information on how punch cards (and tape) actually represented this information, see this video.

[Chart: the shifted character set]
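If it helps to see the mechanics in code, here’s a minimal sketch of such a shift-code decoder in Python. The bit patterns and the two tiny charts are hypothetical (per the note above), and the shift is written as one-shot, affecting only the character that follows, to match the description:

    SHIFT = 0b11111  # hypothetical shift marker in slot 32

    PLAIN = {0b00000: "a", 0b00001: "b"}    # hypothetical unshifted chart
    SHIFTED = {0b00000: "A", 0b00001: "B"}  # hypothetical shifted chart

    def decode(codes):
        """Decode a sequence of 5-bit values, honoring the shift character."""
        out = []
        shifted = False
        for code in codes:
            if code == SHIFT:
                shifted = True   # the next code comes from the shifted chart
                continue
            out.append((SHIFTED if shifted else PLAIN).get(code, "?"))
            shifted = False      # the shift applies to one character only
        return "".join(out)

    print(decode([0b00000]))           # -> "a"
    print(decode([0b11111, 0b00000]))  # -> "A" (10 bits for one letter)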

Using a shift character could certainly slow things down (a capital A here takes 10 total bits instead of the 8 we use today), but it was also cheaper to build. Every extra bit meant extra physical components in the processor, the memory, and everywhere else a value had to be stored or moved.

However, time marched on, and it became clear that while a 6-bit computer would be more expensive, it would also be significantly faster. Faster computers are often worth the price. IBM’s first 6-bit computer was the 1401. It used punched cards, rather than a hard drive, to get information into the machine. Six bits let it represent all the capital and lowercase letters plus the 10 digits, with two values left over (26 + 26 + 10 = 62, and 2^6 = 64).
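That character budget is easy to double-check in Python:

    import string

    # 26 capitals + 26 lowercase + 10 digits = 62 characters...
    chars = string.ascii_uppercase + string.ascii_lowercase + string.digits
    print(len(chars))  # -> 62

    # ...which fits in 6 bits with two values to spare.
    print(2 ** 6)      # -> 64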

So how did we get to 8 bits? Hardware people know (I’ve been told) how difficult it is to make a physical machine use an odd number of bits, so the clear upgrade from the 6-bit 1401 was a jump to 8 bits. But we already mentioned how expensive more bits could be. Here’s where IBM stepped in. It had the customer base that could afford these more expensive machines, and under project manager Fred Brooks it created the System/360 in 1964, the mainframe that standardized the 8-bit byte.

Before the System/360, anyone buying a new computer system had to scrap their existing programs and start from scratch. There were no commercial software companies, and software was customized (or custom written) for each new machine. The System/360 changed that dynamic overnight by separating software from hardware. For the first time, software written for one machine could run on any other machine in the line.

The first model could support up to 64KB of memory and perform 34,500 instructions per second. This expanded memory and processing speed also allowed for the first operating systems, which have their own fascinating history.

So 8-bit computing was born and we haven't escaped it yet.