
Concurrent programming

Introduction

Programming is undergoing a major transition right before our eyes. In the olden days, programs were executed sequentially by a relatively simple processor. That straightforward model has been upended by the computing industry’s pursuit of higher performance. Innovations in chip design require new programming paradigms.

For decades, the processor industry was in thrall to Moore’s law: an observation by Gordon Moore, co-founder of Intel, that the number of transistors on a processor doubled roughly every two years. Another observation, Dennard scaling, stated that a chip’s power consumption would remain constant as the transistors became smaller and denser. Taken together, the two observations correctly predicted that performance per watt would increase rapidly. Between 1998 and 2002, processor speeds increased from 300MHz in a Pentium II to 3GHz in a Pentium 4. That’s ten times faster in just four years. For programmers this was a good thing. Processor speeds increased so rapidly that upgrading hardware was often the easiest way to improve a program’s performance.

Such rapid performance increases could not go on forever. Components had to become smaller and smaller so that more could be packed into the same space. Increasingly tiny components made it more and more difficult to dissipate heat and keep the electrons flowing correctly. Clock speeds increased in smaller and smaller increments until finally they increased no more. Manufacturers desperately sought ways to deliver the performance improvements that their customers had come to expect. With no easy way to further increase clock speeds, they came up with the wheeze of bundling multiple processors, or cores, onto the same chip. And thus the multi-core revolution was born.

In theory, a processor with two cores can do twice as much work in the same time as a processor with a single core. In reality, things are not so simple. Doubling the number of cores does not directly improve performance in the same way that doubling the clock speed does. A single-threaded program on a multi-core processor will utilise a single core and leave the others idle. The CPU cannot automatically spread the computation over multiple cores. We must shift to a new programming paradigm in which tasks execute concurrently on multiple cores.

There is no single, correct concurrency model. Some programming languages are based on a particular model, while other languages support multiple models. In this chapter, we’ll begin by defining some important terms.

This is a preview

The full chapter continues in the book.
