Monday, February 28, 2011

Jump around

From last time you'll remember that computer memory can hold different types of information, namely data and instructions. Both of these are just numbers that are given special meaning either by the programmer (data) or by the computer processor itself (instructions). Last time we talked about how normally the processor reads one instruction, performs the action associated with that instruction, then reads the next instruction, performs it, and on and on. The program counter of the processor holds a number which is the memory address of the next instruction it is going to load for the processor to perform.

Imagine that all of the instructions of a computer program are in the computer memory all in a row, and the program counter points to the first one, then the next one, then the next. That's pretty much how it works, except that model doesn't allow for the repetition of any instructions (or in other words, the processor could never execute an instruction that has a lower address than its current program counter). It also would mean that you can never skip an instruction that maybe you don't want to do right now. The answer to both of these problems is collectively "jumping" and "branching." A jump is a processor instruction that UNconditionally changes the program counter (and thus the source of the next instruction) to a new value, instead of just the next one in line. A branch is a processor instruction that conditionally changes the program counter (meaning it may or may not change the program counter, depending on whatever condition the programmer dictates).

We'll talk in a later post about what the difference is between conditional branching and unconditional jumping, but first let's go through an example of regular, unconditional jumping. Let's say your program counter (PC) is currently set to 16 (PC=16). This means that the next instruction that will be executed by the processor is located at memory address 16. The processor loads the instruction at memory address 16, and it finds:

jump 60

This means that executing this instruction directly changes the value of the program counter to PC=60. For the processor's next instruction it looks at memory address 60. Let's say at memory address 60 it finds:

jump 16

This instruction will directly change the program counter to PC=16, which you'll recognize is where it just was last time. This little example is very silly because all it does is jump between PC=16 and PC=60 over and over, in an infinite loop. You would never see this in a real program, but it gives you an idea of how unconditional jumping works, jumping both backwards and forwards. In this example, if there hadn't been another jump instruction at PC=60, and instead there had been an add, or load, or store, then the program counter would have just behaved like normal after executing the instruction at PC=60, increasing to the next instruction in the list, and so on.
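If it helps, here's what that jumping-in-circles example might look like sketched in Python. This is a hypothetical toy machine, not any real instruction set: memory maps addresses to instructions, and executing a jump just overwrites the program counter.

```python
# Toy memory: each address holds one instruction (invented format).
memory = {
    16: ("jump", 60),
    60: ("jump", 16),
}

pc = 16             # program counter: address of the next instruction
trace = []
for _ in range(4):  # run four instruction fetches, then give up
    op, target = memory[pc]
    trace.append(pc)
    if op == "jump":
        pc = target  # unconditional: PC becomes the jump target
    else:
        pc += 1      # normal case: just move to the next address

print(trace)  # [16, 60, 16, 60] -- bouncing back and forth forever
```

Left to run without the four-fetch limit, this really would loop forever, which is exactly why you'd never see it in a real program.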

So in summary, jump instructions directly change the value of the processor's program counter (PC), controlling which instruction will get executed next by the processor. Next time we'll look at conditional jumping, or "branching," and where these conditions come from in the first place.

NOTE: when I say that memory location 16 contains the instruction "jump 60," I don't mean that the word "jump" is somehow written into that memory location. What really is going on is that there is a series of bits that would look pretty random to you or me at first glance, but that the processor interprets as "jump" and then the binary number 60. I'm going to talk about computer instructions in terms of English, not in terms of binary.

Sunday, February 27, 2011

Remember these things

We know computer memory is a big long list of bytes that can be read and written (loaded and stored), but now we need to talk about the types of information that are contained in the memory. There are two broad categories of information that the computer keeps track of, namely, instructions and data. Both of these are just numbers when you get right down to it, because that's all the memory can hold, so the difference between them lies in how the computer treats these different kinds of information.

Let's talk about data first. The word "data" is the plural form of the word "datum," which just means "piece of information." So "data" means "many pieces of information." All data in a computer is represented as binary numbers, but those binary numbers can stand for a variety of different things. It just depends on what meaning the computer's programmer decides to give those binary numbers (which the computer doesn't really care about, by the way). The programmer could consider the byte 0b0010 0001 to mean the decimal number 33 or the ASCII symbol '!' (ASCII is just a standard convention for treating single-byte numbers as text symbols). It is very common for things we want to represent in a computer to take up more than one byte. For example, numbers larger than 255 must be represented by more than one byte. Picture and movie files are much larger than one byte. Large quantities of text can be represented by very long lists of single bytes that are interpreted as ASCII symbols (like the example above). In the end, it is the value of the byte(s) and the context that the programmer gives them that really determines what the data in a computer's memory means.
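As a quick sketch of this idea in Python (which uses the same ASCII convention mentioned above), here is one byte given two different meanings:

```python
# The same 8-bit pattern, interpreted two ways by the programmer.
value = 0b00100001   # the byte 0b0010 0001

print(value)         # treated as a number: 33
print(chr(value))    # treated as an ASCII symbol: !
```

The bits never change; only the interpretation does.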

The other kind of information that can be stored in computer memory is computer processor instructions. I know we haven't talked about how computer processors work at all yet, but I think this one detail about their operation cannot be avoided at this time. Processors work by reading a part of the computer memory, and then, depending on what that part of memory contained (or that "instruction"), the processor will perform an action. There is a counter in the processor called the "program counter" which keeps track of the place in memory where the processor will get its next instruction. After every instruction is processed, the program counter gets increased (i.e., "counts up") to move on to the next instruction. After that instruction is processed, the program counter "counts up" again, moving on to the next instruction, and so on.

Just like how data is just a series of arbitrary bytes until the programmer steps in to give those bytes meaning, instructions are just series of arbitrary bytes until the processor's designer steps in to give those bytes meaning. There are many different types of instructions, including adding, loading, storing, jumping and branching. There are also many more, but these are the basic building blocks of computer science that we're focusing on for now. We've already talked in detail about adding, loading and storing.

Jumping and branching are special instructions that deal with changing the program counter. Normally the program counter just increases (counts up) to the next instruction after each previous instruction is completed. This doesn't allow for repeating any previous instructions (or looping, as it's called in programming). Jumping and branching allow for the program counter to be set to whatever value the programmer wants. This can include increasing the program counter, or decreasing it. Next time we'll talk more about jumping and branching and what it allows computers to do that would otherwise be impossible.

Note: Yes, I know that I was inconsistent about treating the word "data" as a plural word. This is pretty much universal in all technical and academic literature. Even though the word represents a plural concept, it is almost always treated grammatically as a singular. I will be following this convention from here on out.

Friday, February 25, 2011

Bathtubs are memorable

I'm going to get right to the point. All that nonsense about lined up bathtubs full of buckets? I made it up. I was trying to teach symbolically. Did it work?

Anyway, here's the breakdown. In that metaphor, the lined-up bathtubs collectively represent the memory system of a computer. The bathtubs represent individual bytes, and the buckets in the bathtubs represent individual bits. The numbers? Those were numbers. It would have taken too much imagination on my part to invent a replacement for numbers.

So am I trying to tell you that the memory system of a computer is a giant series of bytes full of bits? Pretty much. That's not how it's physically implemented in the computer (have you ever noticed that memory chips are square-ish, and not 3 meters long and impossibly thin?), but we're not quite ready to talk about the physical implementation of computers yet.

As for the loading and storing, I pretty much just explained that one directly also. There are fundamentally two things that a computer can do with its memory system. It can "load" values out of it, and it can "store" values into it. I know, it might seem like "load" should mean "put something into the memory," but remember that when a number is "loaded" out of the memory, it's "loaded" into somewhere else. Yeah, that doesn't really cut it for me either, but remember that the opposite of "load" is "store," and that sounds even more like you're storing something into the memory (and this time you really are).

All of the bytes in a memory system are numbered. There's an order, and all of the bytes in memory have an "address." The computer loads numbers out of a particular address, and stores numbers into other addresses. A computer might "store 128 into memory address 183,820," or "load memory address 1,024 and put its contents into variable X." One of the basic things that makes this a reliable system to use is that the contents of the memory system cannot change unless the computer explicitly changes it. That allows us to confidently store numbers into the memory and get the same number back out when we load it later.

Just to recap, a computer's memory system consists of a very long string of bytes that hold their value between accesses. These bytes can be accessed using a specific numbered address for each byte. These accesses can be loads or stores. Loads read numbers out of the memory, and stores put numbers into the memory.
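That recap can be sketched in a few lines of Python. This is just an illustration of the idea (real memory is hardware, not a list), and the addresses and values are the made-up ones from above:

```python
# Memory as a long list of bytes, each with a numbered address.
memory = [0] * 1_000_000        # a million bytes, all starting at 0

def store(address, value):
    memory[address] = value & 0xFF   # a byte only holds 0-255

def load(address):
    return memory[address]

store(183_820, 128)   # "store 128 into memory address 183,820"
x = load(183_820)     # load that address back out into variable x
print(x)              # 128 -- the value held steady between accesses
```

Load and store are the whole interface: numbered addresses in, byte values out (or in).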

Next time we'll talk about what kinds of things are stored in memory, in preparation for explaining our last big ingredient to building a working computer: branching.

Tuesday, February 22, 2011

Thanks for the memories

I apologize for the all-too-long post last time. It probably should have been two posts. Today, however, I want to paint a mental picture.

Imagine you are in an empty void, floating high above what looks like a thin, straight line starting directly beneath you, but stretching on extremely far ahead of you. You are slowly floating down to where this line begins. As you get closer, you notice that the line is not as narrow as you first had supposed. You get closer still and notice that this entire line is composed of a series of bathtubs. These bathtubs each have a sign next to them with a number. The one directly below you is marked "0." Next to it is bathtub "1." The furthest you can read with your naked eye seems to be bathtub "23," but you can tell that all of the bathtubs in the line have increasing numbers the further away they get from you, so you assume the numbering system just continues on forever.

You finally touch down next to bathtub "0." Strangely enough, this bathtub is not full of water or even Jell-o, but rather it is full of buckets: eight buckets right in a row. Of course you assume that the buckets are full of Jell-o, but this isn't the case either. The buckets have numbers in them, single binary digits, one digit per bucket, eight digits per bathtub. Some buckets have 0b0, and some have 0b1.

These bathtubs do not have regular faucets to fill them with water. Instead, where the faucet should be, there are two buttons. One is marked "load," and the other, "store." You look around, and finding yourself alone, decide that it wouldn't hurt anyone to try out these buttons. You reach out with your left hand and timidly press the load button of bathtub 0. Electricity runs through your left hand, up your arm, across your body and down to your right hand, where the binary number 0b1101 1000 suddenly appears, floating above the palm of your right hand. Don't worry, it doesn't hurt. You notice that this is the exact same binary number that is in the bathtub. It's as if the load button copied the value of the bathtub into your hand.

You next try the store button. Again, electricity runs through your body, this time from right to left, but the value of bathtub 0 remains unchanged. That's odd. Next you decide to try an experiment where you load the value of one bathtub, and try to store it somewhere else. You press load again on bathtub 0, and again the value 0b1101 1000 appears in your right hand. You move to bathtub 1, which has the value 0b0000 0000, and press its store button. Suddenly bathtub 1's value changes to match the value in your hand, and the value that was in bathtub 1 is lost forever. After some more experimentation of loading values from various bathtubs and storing them into other bathtubs, you start to reach the limits of how much fun you can have with this.

It gets boring moving numbers around if that's all you can do. You start wishing you could do something else with these numbers. You wish you could at least add them together (see, I told you addition was going to be important), and do something interesting with the loaded numbers before you store them away again.

And then you wake up. Or something. I'm not very good at endings for dream sequences. Next time we'll talk about the interpretation of the dream, and what these bathtubs and buckets have to do with real computers.

Monday, February 21, 2011

"What does it mean?" and addition examples

With a knowledge of carry bits and single-bit adders, we are ready to dive into some examples of binary addition. Sounds pretty fun, right? Well, I know it does, but before we get into that I want to comment on why we're spending so much time talking about things that might not seem very important.

You came here wanting to learn about computer science and how a computer does what it does, right? Well, believe it or not, the addition of two numbers is a very, very important part of computers. In fact, there are only two more big things you need other than addition in order to have a fully functioning computer. If you create a computer that can add two numbers together (including negative numbers, which we'll get to another time), can "branch" and "jump" (basically, this means handling if-then statements and loops), and has a memory system (meaning there are places you can store data and retrieve it later), then you can program that computer to do anything you can think of. You can program video games, word processors, web browsers or anything else. It's kind of crazy that such complex programs have been created out of such simple building blocks.

Of those extremely basic building blocks of computing, I decided that talking about adders and the binary number system was the easiest and best place to start. A working knowledge of adders and binary will make learning the other concepts far easier. This is all in stark contrast to how computer science is most often taught in the world. It is far more common to start teaching computers with high-level computer languages that hide all of the important details about how computers work. You should consider yourself lucky that I trust you enough to start with the important stuff, saving the boring high-level stuff for later.

Speaking of important stuff, let's get back to examples of 8-bit binary addition. Let's first look at an example where none of the single-bit adders do any carry-out or carry-in (sorry for the poorly formatted text in these examples, I'll figure it out eventually):

+0b1100 1100 (204 in decimal, that extra + at the beginning is just to get the text to line up)
+0b0011 0011 (51)
=
+0b1111 1111 (255)

For each bit of the answer we are adding only 0b0+0b1 (in one order or the other), giving us 0b1 for the result bit. Now for an example that uses carry:

+0b0000 0111 (7)
+0b0000 0011 (3)
=
+0b0000 1010 (10)

We had to use carry for the "ones place," "twos place," and "fours place." The addition for the "eights place" consisted of 0b0+0b0+0b1 (that last 0b1 being the carry-in from the addition of the "fours place"). Let's look at one more extreme case of carry in action:

+0b1111 1111 (255)
+0b0000 0001 (1)
=
+0b0000 0000 (???)

This makes it look like adding 1 to the maximum value of a byte (255), will result in 0 as the answer. What's really going on is that the true answer (256) cannot be represented by only 8 bits. It is impossible. The true answer consists of 8 bits of 0, and the final carry-out bit set to one. When addition results in carry-out beyond the limits of what the answer can store, we call this "overflow."

This means that the answer left in the 8 result bits is not the right answer, and that you messed up by trying to add two numbers that were too big for the number of bits you wanted the answer to fit in. The best thing to do in this case is to add two 16-bit numbers together instead, because then you will have the ability to represent the number 256 without any problems. You can convert an 8-bit number to a 16-bit number by just adding extra 0s to the front of it. This is what that addition then looks like (again, sorry for the horrible formatting):

+0b0000 0000 1111 1111 (255)
+0b0000 0000 0000 0001 (1)
=
+0b0000 0001 0000 0000 (256)
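Both situations can be sketched in Python. One caveat: Python integers never overflow on their own, so masking with & is used here to simulate an adder that can keep only 8 (or 16) result bits:

```python
a, b = 0b11111111, 0b00000001      # 255 + 1

total = a + b
result_8bit = total & 0xFF         # an 8-bit adder keeps only 8 result bits
carry_out = (total >> 8) & 1       # the ninth bit is the final carry-out

print(result_8bit)                 # 0 -- overflow: the "wrong" 8-bit answer
print(carry_out)                   # 1 -- the carry that didn't fit

result_16bit = total & 0xFFFF      # with 16 result bits, 256 fits fine
print(result_16bit)                # 256
```

Same addition, same bits; the only difference is how many result bits the adder has room for.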

In this day and age when we have 64-bit adders at our disposal it is very rare to add numbers together that are too big to fit into the result bits and cause overflow, unless you're doing something weird or wrong (or you're a scientist). Next time we'll take a break from adding binary numbers, and talk about computer memory. However, we're not totally done with addition yet, as we still need to cover subtraction, which is just a special form of addition with negative numbers.

Sunday, February 20, 2011

Carry out, carry in

We've covered addition for two single bit binary values, and how the outcome of that addition is stored in the result bit and the carry-out bit. In a real computer's binary adder, the addition of two (and only two) values only happens for the least significant bit of the binary number (or the "ones place"). For the rest of the bits, we have to add three different values together: the two bits being added like before, plus the carry-in bit, which is just the carry-out bit from performing addition on the bits we just added together (moving right-to-left, like the decimal addition we're used to).

The carry-out of addition is handed to the carry-in of the adder for the next higher bits, so that they can use it to perform their addition. The "ones place" carry-out becomes the carry-in for the "twos place," and the carry-out of the "twos place" becomes the carry-in for the "fours place," and so forth (remember, this is binary, so our places are: 1s, 2s, 4s, 8s, 16s, etc.). The "ones place" is the only place that doesn't use any carry-in at all, because there is nothing lower than the "ones place" that the carry-in could have come from.

When we consider the carry-in bit there are suddenly twice as many options for combinations of inputs to our single-bit adder. Here are all of the combinations:

0b0+0b0+0b0: result=0b0, carry-out=0b0
0b0+0b0+0b1: result=0b1, carry-out=0b0
0b0+0b1+0b0: result=0b1, carry-out=0b0
0b0+0b1+0b1: result=0b0, carry-out=0b1
0b1+0b0+0b0: result=0b1, carry-out=0b0
0b1+0b0+0b1: result=0b0, carry-out=0b1
0b1+0b1+0b0: result=0b0, carry-out=0b1
0b1+0b1+0b1: result=0b1, carry-out=0b1

Even though there are twice as many input combinations, there is only one new output combination (result=0b1, carry-out=0b1). This is the one combination that cannot be seen from adding together two (and only two) single-digit binary numbers. Notice also that whenever at least two inputs are 0b1, the carry-out will be 0b1 regardless of what the third input is. Also notice that the result bit is 0b1 only if there is an odd number of 0b1 values among the inputs. Interesting.

Multi-bit adders are created by stringing together many single-bit adders, feeding the carry-out of one directly into the carry-in of the next. When adding two binary numbers together, one number plugs each of its bits into one input of each single-bit adder, and the other number plugs its bits into the other input of each single-bit adder. After the electrical circuit that comprises the adder has a chance to stabilize, the answer of the addition will appear in the result bits of the single-bit adders, plus one final carry-out at the far end. You can make an 8-bit adder by stringing together 8 single-bit adders in this way, and the result will be 8 bits long, plus the carry-out (so really 9 bits long).
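Here's a sketch of that stringing-together, written in Python instead of circuitry. The single-bit adder below uses ordinary arithmetic to stand in for the electrical circuit, but the chaining of carry-out into carry-in works just like described:

```python
def single_bit_adder(a, b, carry_in):
    total = a + b + carry_in       # 0, 1, 2, or 3
    return total & 1, total >> 1   # (result bit, carry-out bit)

def eight_bit_adder(x, y):
    result, carry = 0, 0           # the "ones place" gets no carry-in
    for i in range(8):             # least significant bit first
        bit, carry = single_bit_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i         # put the result bit in its place
    return result, carry           # 8 result bits plus the final carry-out

print(eight_bit_adder(204, 51))    # (255, 0)
print(eight_bit_adder(255, 1))     # (0, 1) -- the final carry-out catches the overflow
```

In real hardware all eight adders settle at essentially the same time; the loop here just walks through them one place at a time.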

Next time we'll show examples of multi-bit binary addition in action, and I might even explain sometime soon what all this has to do with computer science.

Saturday, February 19, 2011

Adding single bits

Today we'll look at what it's like to add two single-digit binary numbers together. Each of the two single-digit binary numbers can either have the value 0b0 or 0b1. Considering both bits together, this means there are 4 different possible combinations of input for single-bit addition ((0b0+0b0), (0b0+0b1), (0b1+0b0), and (0b1+0b1)). Here are the results for all of those additions:

0b0+0b0 = 0b0 = 0
0b0+0b1 = 0b1 = 1
0b1+0b0 = 0b1 = 1
0b1+0b1 = 0b10 = 2

The result for each of those additions can be represented by a single result bit EXCEPT 0b1+0b1, which results in the 2-bit number, 0b10 (the decimal number 2). This leads to a very important observation. When adding two binary numbers, it is possible that the resulting number will require 1 more bit to store the result than either of the inputs. Possible, but not guaranteed to need it. It's exactly the same as in the decimal number system. Adding two single-digit decimal numbers might result in a single-digit answer, as in the case of 2+2=4, or it might result in a two-digit answer, as in the case of 7+8=15. You can never require 3 result digits when adding two single-digit numbers, as 9+9=18 (the result of 9+9 is the largest possible number you can get from adding two single-digit values).

So we know that the addition of two single-digit binary numbers may or may not produce a two-bit result, but what if it does? What does a real computer do with that? In a real computer binary adder, the second bit of the answer is called the carry-out bit. The first bit is called the result bit. When learning arithmetic in elementary school, we are taught about the carry-out bit, or rather the carry digit. When you are adding two numbers you might get "8+6=four carry the one, for a total of fourteen." It's the exact same thing with single-bit binary addition. Let's look again at previous additions in the context of result bits and carry bits.

0b0+0b0, result bit = 0, carry-out bit = 0
0b0+0b1, result bit = 1, carry-out bit = 0
0b1+0b0, result bit = 1, carry-out bit = 0
0b1+0b1, result bit = 0, carry-out bit = 1
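You can double-check that little table with a few lines of Python. The & and >> operations below just pull out the low and high bits of the two-bit sum:

```python
for a in (0, 1):
    for b in (0, 1):
        total = a + b            # 0, 1, or 2 (0b10)
        result = total & 1       # the result bit: the low bit of the sum
        carry_out = total >> 1   # the carry-out bit: only 1 for 1+1
        print(f"0b{a}+0b{b}: result bit = {result}, carry-out bit = {carry_out}")
```

Only the 0b1+0b1 case produces a carry-out, matching the table above.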

So if there's a carry-out bit, does that mean there's a carry-in bit? There is, actually, and it is instrumental to multi-bit addition, which we'll talk about next time.

Friday, February 18, 2011

More on binary

I guess before we can talk about how easy math is to perform on binary numbers, we need to first understand a little more about how binary numbers themselves work. We talked earlier about how a byte can range in value from 0b0000 0000 to 0b1111 1111, and how this corresponds to the range of values in decimal of 0 to 255. Let's break it down a little bit more thoroughly to see how binary numbers are constructed.

Decimal numbers are comprised of several digits, or "places" as you may remember them being called in elementary school. We have the "ones place," the "tens place," the "hundreds place," and so forth. So the number 453 means four "hundreds," five "tens," and three "ones." The number 307 means three "hundreds," zero "tens," and seven "ones." Binary numbers work the same way, except instead of having "ones," "tens" and "hundreds" (which you will notice are all numbers that can be described by 10^(x)), we have "ones," "twos," "fours" and so forth (all of those places can be described as 2^(x)). Another big difference between decimal and binary is the range that each digit can take on. In decimal, digits can range from 0-9 (ten different options for the base-ten, or decimal, number system). In binary, digits can only be in the range 0-1 (two different options for the base-two, or binary, number system).

Let's take a quick look at a few examples that might cement the differences between decimal and binary in our minds.

In decimal we have such numbers as:
1 = one
10 = ten
100 = one hundred
1000 = one thousand

In binary those same symbols mean something very different:
0b1 = one
0b10 = two
0b100 = four
0b1000 = eight

Let's look at one more concrete example, the difference between the decimal number 1101 and the binary number 0b1101. In decimal those symbols mean one "thousands" one "hundreds" zero "tens" and one "ones," for a total of one thousand, one hundred and one. In binary those symbols mean one "eights" one "fours" zero "twos" and one "ones," for a total of (8+4+0+1)=thirteen. Each successive place (moving left as you're reading a binary number) means another power of two. Here's a list of the powers of two leading up to 2^8: 1, 2, 4, 8, 16, 32, 64, 128, 256 (remember, those are their decimal representation). Those are all of the powers of two we'll need for our discussion next time on binary arithmetic.
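The 0b1101 example can be worked out place by place in a few lines of Python (int(..., 2) is Python's built-in way to do the same base-two conversion, included here as a cross-check):

```python
bits = "1101"                    # the binary number 0b1101
value = 0
for place, digit in enumerate(reversed(bits)):  # start from the "ones place"
    value += int(digit) * 2**place              # each place is a power of two

print(value)          # 13  (8 + 4 + 0 + 1)
print(int(bits, 2))   # 13 -- same answer from the built-in conversion
```

Each step of the loop is one "place": ones, twos, fours, eights.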

Thursday, February 17, 2011

Also, bytes

A single bit can represent two distinct "things," as we talked about in the previous post, but what about collections of bits? How many things can 2 bits represent? 8 bits? 1000 bits? 0 bits? The answer is given by the exponential formula 2^x, where x is the number of bits. The caret symbol denotes that what follows should be considered as superscript, so 2^x means "two raised to the x power." A single bit gives you 2^1=2 things you can represent. 2^10=2x2x2x2x2x2x2x2x2x2=1024 things you can represent. 2^0=1 thing you can represent (just that one thing, no options, no either/or). A byte is a collection of 8 bits, giving us 2^8=2x2x2x2x2x2x2x2=256 possible things you can represent with one byte.
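That formula is short enough to sketch in a couple of lines of Python, for a few sizes of bit collections:

```python
# 2**x is Python's notation for "two raised to the x power."
for x in (0, 1, 2, 8, 10):
    print(f"{x} bits can represent {2**x} things")
```

The x=8 line is the byte: 256 possible things.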

Unless I mention otherwise, I'm going to talk about bits and bytes from here on out in terms of binary numbers. One bit can represent the numbers 0b0 and 0b1. Whenever I talk about binary numbers I'll start it off with "0b" first (that's a zero followed by the letter 'b'). If I omit the "0b," then I'm talking about regular decimal numbers that we're all so used to. The 'b' in "0b" stands for "binary," so that's an easy way to remember that. I am always going to use binary numbers to represent bit strings. A bit string is just a series of bits of any length. A byte is a bit string of length 8.

Here are two example bit strings for two different bytes: 0b1010 1010 and 0b1111 1111 (I added a space every 4 digits just for readability's sake, and remember, the "0b" at the beginning just means it's in binary). We can consider these as binary numbers, with each bit representing a digit in the binary number system, and when we look at all the bits together we can determine the value of the number. Or, like I mentioned last time, we can give arbitrary meaning to each of those bits and treat them as separate entities (not part of a larger binary number) that each tell us something about the computer, like which physical switches are turned on or which lights are blinking. The human designing the system gets to decide how all of the bytes in the system will be used.

If a byte is a collection of 8 bits, and represents a binary number, then its low value would be 0b0000 0000, and its high value would be 0b1111 1111. 0b0000 0000 is the same as the decimal number 0, and 0b1111 1111 is the same as the decimal number 255. So a single byte can represent numbers in the range of 0-255, or in other words it can represent 256 distinct numbers (0 counts as a number).

Back in the day when we only had 8-bit computers, mathematical operations could only be done directly on numbers of single-byte size, so only on values ranging from 0-255. In order to represent larger numbers and do mathematical operations on larger numbers, programmers had to cleverly build those larger operations out of multiple uses of smaller operations. As 16-, 32-, and 64-bit computers were developed, we no longer had to use those tricks to perform math on large numbers, because the computer now has hardware to do those operations directly. A modern 64-bit computer can represent integers in the range of 0-18,446,744,073,709,551,615, so you only have to use those tricks from former years if you want to use numbers larger than that.

Next time we'll look at how to do basic arithmetic on binary numbers, how it's different from arithmetic on decimal numbers, and especially how it's exactly the same.

Of bits

People have heard the terms "bits" and "bytes." Many people even know that computers have 8 bits to a byte. But what is a bit, and why do bytes have 8 of them? "A bit stores either 0 or 1!" That's a pretty good answer, but I think that doesn't tell the whole story.

We'll consider later how bits are stored in a computer and how computers use them, but for now let's just talk about the mathiness of bits. A bit is a "thing" that can be in either of two states. It doesn't really matter what physical form the "thing" takes, and it doesn't matter what the two states are. For example, a bit could be an electrical switch, and when it is turned in one position that means "1," and when it's turned in the other position that means "0." That's probably an example you were expecting if you already knew about bits, but that's just one of many possibilities. How about a glass of water that represents "elephant" when it's full and "pocket knife" when it's empty? That's another example of a bit (assuming that the glass can only ever be full or empty, of course).

The point is that a bit is a "thing with two options." It can either be one way or it can be the other way, but it must be one of those two ways. It cannot be both at the same time, and it cannot be neither. A bit only has the meaning a person gives it. In computers, a bit is most often used in one of a few ways. It can denote a single digit in the binary number system (0 or 1), or it can represent "true" or "false" in logic formulas. Also, if the computer is connected to any physical switches, the way a switch is flipped will be represented in the computer as a single bit. In all of those cases, it's up to the human in charge of the computer to decide what the two states of a bit really mean, or in other words, how they're going to use the information they gain by examining the state of the bit, and what it would mean to change that state.

Next time we'll look at how sets of multiple bits can work together to represent a greater variety of things than a single bit can by itself.

The first one

Hi. My name is Seth and I study computers. I really like computers. Sometimes I claim to hate computers. Maybe there's a bit of both going on. Anyway, I wish more people knew about computers and how they work. Hopefully I can share a bit of what I know with you. I'm going to start with things that I consider basic, and move on from there. I don't want any one post to be too long or heavy, so I'm going to try to keep things concise and simple, and break important concepts into multiple, bite-sized posts if I need to. If I ever use any big words, let me know and I'll correct it. Big words should almost never be used by anyone at any time.

Also, if you think I'm wrong about something, let me know in the comments. If I agree that I'm wrong and I feel like fixing it, I will. However, I might be intentionally wrong. I'm trying to explain computer science with as little required background as possible, so I might simplify things and gloss over details that are a big deal in real life, but aren't important for these discussions.