In the previous chapters we examined the theoretical underpinnings of why computers work. In this chapter we will look at the architecture, or fundamental structure, of modern computers. By the end of this chapter, you will have a much better understanding of how computers actually work in practice.
We will begin by studying the micro-architecture of computing components. That covers the very low-level, fundamental questions of how information can be physically encoded and computed by a digital circuit. We’ll look at the binary and hexadecimal number systems and study how computational instructions are encoded. We will study logic gates, the basic building blocks of digital circuits, to understand how computational and storage units are constructed. Having covered that, we’ll expand our focus to the overall computer architecture. To understand exactly how computation happens, we’ll dive into the internal architecture and operation of the processor. We will also look at the other essential component, memory, and how it is arranged into a hierarchy of caches.
Modern computing components, in particular processors, are incredibly sophisticated. We’ll mostly focus on a very simplified architecture so that you can understand the overall design without getting bogged down in implementation details. Nevertheless, we’ll round off the chapter with a discussion of some sophisticated optimisations found in real-world systems.
At various points in the chapter we’ll use the example of a simple addition instruction to illustrate what’s going on.
The definition of a Turing machine is a bit vague about how information should be provided to the machine. It just states that “symbols” are written on the input tape. Which symbols should we use? Over the years, computer designers experimented with a number of approaches but settled fairly quickly, for reasons we’ll see, on binary encoding. Let’s look at what that means.
A single binary value is known as a bit. It can only be one of two values: 0 or 1. The equivalent in the decimal system is a digit, which can be one of ten values: 0-9. Having only two possible values doesn’t seem all that great. How might we express a value greater than 1 in binary? Well, when we want to express a value bigger than 9 in decimal, we just add more digits. Each new digit allows ten times more numbers:
| 100s | 10s | 1s | number of possible values |
|---|---|---|---|
|  |  | 9 | 10 |
|  | 9 | 9 | 100 |
| 9 | 9 | 9 | 1,000 |
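The same multiplication rule applies in any base: each additional place multiplies the number of representable values by the base. As a minimal sketch (the function name `representable_values` is my own, for illustration), here is that rule in Python:

```python
# Each additional place multiplies the number of representable
# values by the base: 10 for decimal digits, 2 for binary bits.
def representable_values(base: int, places: int) -> int:
    return base ** places

# Three decimal digits give 1,000 values (0 through 999)...
print(representable_values(10, 3))  # 1000

# ...while three bits give only 8 values (0 through 7).
print(representable_values(2, 3))   # 8

# Ten bits are enough to cover three decimal digits,
# since 2**10 = 1024 >= 1000.
print(representable_values(2, 10))  # 1024
```

So binary needs more places than decimal to cover the same range, but, as we’ll see, each place is far simpler to implement physically.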