The Internet for web developers: introduction

This post is part of a series:

  1. Introduction
  2. Protocols
  3. Internet Protocol
  4. User Datagram Protocol
  5. Transmission Control Protocol
  6. Domain Name System

This is the first post in a series on what web developers need to know about the Internet.

As I’ve mentioned elsewhere, I attended a web development bootcamp in London. Naturally, bootcamps focus on the practical stuff to get you started making rad websites. That left me in the slightly odd position of being a web developer who didn’t really know how the web worked, how the Internet worked and what was even the difference.

And that’s fine – at least to start with. You can get by as a web developer without really knowing too much about the Internet. A simple mental model of clients and servers magically exchanging messages will suffice for many situations. You can just take it for granted that a message you send will arrive safely and completely at the destination.

However, you’ll only be “getting by”. At some point, the underlying structure of the network is exposed to you. Perhaps clients are seeing weird behaviour because you haven’t accounted for latency, or your page loads really slowly or you need to connect your application to some other internal service. It’s at times like these that an understanding of what’s going on under the hood is essential.

This first post introduces the Internet as an overall concept. Later posts will cover its design and implementation before rounding things off with a discussion of the web. The target audience is web developers, but it should also be interesting to anyone who just wants to know what the hell the Internet is.

By the end of this series, you’ll have a great answer for the classic interview question: “what happens when I type a URL into my browser and hit enter?”.

The Internet is a global network of networks

Let’s begin by getting our terminology straight. Computers can be connected together to form a network. A host is any device connected to a network. Hosts on a network are able to exchange information. The network might involve physical connections using cables (so retro!) or, more likely these days, wireless connections using WiFi or Bluetooth. Either way, there will be a component in each host known as the network interface card (NIC) that is responsible for encoding and decoding messages to and from whatever physical medium the network uses.

When you want to call someone, you need to know their phone number. It uniquely identifies the phone you’re calling out of all of the phones connected to the phone network. It’s the same for computer networks. Each host on the network has a unique identifying address determined by its NIC. One host can talk to another by using its NIC to broadcast across the network a message addressed to the NIC of the receiving host. The receiving host’s NIC will observe all of the messages transiting the network. Whenever it detects a message addressed to it, the NIC will excitedly read the message into its host computer’s memory and notify the OS using the interrupt system.
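To make the addressing idea concrete, here’s a toy sketch of the filtering a NIC performs. The frame layout and the addresses are invented for illustration, and real NICs do this in hardware rather than application code:

```python
# Sketch of how a NIC decides whether a frame on the wire is "for me".
# Addresses and frame structure are simplified for illustration.

MY_ADDRESS = "aa:bb:cc:dd:ee:01"  # hypothetical address of our NIC

def should_accept(frame: dict) -> bool:
    """Return True if this host should read the frame into memory."""
    if frame["destination"] == MY_ADDRESS:
        # In a real NIC, this is the point where the frame is copied into
        # the host's memory and an interrupt notifies the OS.
        return True
    return False  # addressed to some other host; ignore it

frames_on_the_wire = [
    {"destination": "aa:bb:cc:dd:ee:02", "payload": b"not for us"},
    {"destination": "aa:bb:cc:dd:ee:01", "payload": b"hello!"},
]

accepted = [f["payload"] for f in frames_on_the_wire if should_accept(f)]
print(accepted)  # [b'hello!']
```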

If a network only allows machines to communicate with other machines on the same network, then it is called an intranet because all of the communication stays within (intra) the bounds of the network. Lots of businesses run their own private intranets, accessible only to computers within the office, to share access to storage, printers and so on.

Some networks include machines that are connected to more than one network. Such machines can be configured to function as routers by accepting messages from a host on one network and passing them on to a host on another network. This enables inter-network communication. A message from a sender in one network can reach a receiver in another network even though there is no direct connection between the sender and receiver.

The Internet, with a capital I, is one single, planet-wide inter-network of networks. Any two points on the Internet can communicate with each other by sending a message across their own little sub-network to a router, which sends it across another sub-network to another router and so on across successive sub-networks. Eventually, the message reaches the destination’s sub-network and can find the destination machine.

The Internet

In this diagram, we have an intranet on the left. All of the PCs can chat to each other without using the Internet. One host on the intranet, the router, also has a connection to the Internet, most likely provided by an internet service provider (ISP). On the other side of the diagram is the server. Between them sits the Internet. A message can go from a PC on the left to the server on the right by hopping from router to router across the Internet. At each hop, it crosses from one sub-network to another.

The “Internet” is really a bundle of technologies that enables communication across networks. Note that the Internet only concerns itself with cross-network communication. It doesn’t care what the actual messages are about. The (world wide) web is an “application” that we have deployed on that communication layer, along with email, BitTorrent and other useful services.

The Internet is planet scale

The Internet is on a vastly different scale to other computing systems. It reaches across the entire globe. Performance costs that are almost infinitesimally small on the scale of an individual computer suddenly matter very much when we’re working on the scale of a planet.

From the perspective of your computer’s processor, it takes an extremely long time for messages to traverse the Internet. Network requests are slower than system-internal operations by several orders of magnitude. Latency measures how long it takes a message to reach its destination. It is usually measured in milliseconds and lower is better (for comparison, internal operations are measured in nano- or microseconds).

A certain, unavoidable amount of latency is due to the time it takes the signal to physically travel across the Earth. As a rough estimate, it takes 150ms for a message to go from London to San Francisco. That is thousands of times slower than a local memory access. Additional latency is added by the performance of the machines along the message’s path through the network.
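We can sanity-check that figure with some back-of-the-envelope arithmetic. The numbers here are rough assumptions: light travels through optical fibre at roughly two-thirds of its speed in a vacuum, and the cable distance between London and San Francisco is taken to be around 8,600km:

```python
# Back-of-the-envelope propagation delay for London -> San Francisco.
# Both figures below are rough assumptions for illustration.

SPEED_OF_LIGHT_KM_S = 300_000
FIBRE_SPEED_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3  # ~200,000 km/s in fibre
DISTANCE_KM = 8_600                              # assumed cable distance

one_way_ms = DISTANCE_KM / FIBRE_SPEED_KM_S * 1000
print(f"{one_way_ms:.0f}ms")  # ~43ms
```

That ~43ms is a hard physical floor. The gap between it and the ~150ms we actually observe is exactly the extra latency described above: indirect cable routes and processing at each machine along the path.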

Latency has a huge impact on the perceived performance and responsiveness of web applications. It’s why front end developers have to do things like disable form submission buttons once they’ve been clicked. Otherwise, the user could click again and submit the form a second time before the first submission has been processed or even reached the server. The first submission is latent, or hidden, for the time it takes to reach the server. If we assume that it takes 100ms for a message to go between the client and server and 50ms for the server to process the submission, it will be 250ms before the client sees the effects of their action.
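The arithmetic above can be made explicit. These are just the paragraph’s illustrative figures added up, not measurements:

```python
# How long before the client sees any effect of submitting the form?
# Illustrative figures: 100ms each way on the network, 50ms on the server.

client_to_server_ms = 100   # request travels to the server
processing_ms = 50          # server handles the submission
server_to_client_ms = 100   # response travels back

time_to_feedback_ms = client_to_server_ms + processing_ms + server_to_client_ms
print(time_to_feedback_ms)  # 250 — the window in which a second click could fire
```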

The other key performance consideration is how much data can fit into the network’s piping at once. This is known as the bandwidth. If you can only transmit one bit per second, it will take eight seconds to transmit a byte. If you can transmit four bits per second, it will only take two seconds to transmit a byte.

Clearly, if we can transmit more bits of data at once then we can transmit a message more quickly. Bandwidth is measured in bits per second and higher is better. In my lifetime we’ve gone from a humble 56Kbps (kilobits per second) to megabits and now gigabits per second.
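The arithmetic is simple: transmission time is just the size of the payload in bits divided by the bandwidth in bits per second. A quick sketch, using the figures from the paragraphs above plus a dial-up comparison:

```python
# Time to push a payload through the network's piping.

def transmission_time_s(size_bits: int, bandwidth_bps: int) -> float:
    """Seconds needed to transmit size_bits at bandwidth_bps."""
    return size_bits / bandwidth_bps

print(transmission_time_s(8, 1))               # 8.0 — one byte at 1 bit/s
print(transmission_time_s(8, 4))               # 2.0 — one byte at 4 bits/s
print(transmission_time_s(8_000_000, 56_000))  # ~142.9 — 1 megabyte over 56Kbps
```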

Both latency and bandwidth are important for performance but in different ways. If you want to transfer a lot of content (e.g. streaming video), then you’re interested in bandwidth. It doesn’t really matter whether it takes 50ms or 500ms for the first bit of content to reach you, so long as the bandwidth is sufficient to keep the data buffers full and your video playing.

If the network activity involves lots of round trip journeys, when the sender transmits something and waits for a response from the receiver, even fairly small changes in latency can have a large impact on the total transmission time. On the other hand, low latency won’t do us much good if a lack of bandwidth means we can only send a minuscule amount of data per second.
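A crude model makes the difference between the two situations visible. Total time is the round trips multiplied by the latency, plus the payload divided by the bandwidth; all the numbers below are illustrative:

```python
# Crude model: total time = round trips * latency + payload / bandwidth.

def transfer_time_s(round_trips: int, payload_bits: int,
                    rtt_s: float, bandwidth_bps: float) -> float:
    return round_trips * rtt_s + payload_bits / bandwidth_bps

# A chatty exchange: 20 round trips, tiny payload. Latency dominates.
chatty_fast = transfer_time_s(20, 10_000, rtt_s=0.05, bandwidth_bps=10_000_000)
chatty_slow = transfer_time_s(20, 10_000, rtt_s=0.5, bandwidth_bps=10_000_000)
print(chatty_fast, chatty_slow)  # ~1s vs ~10s: 10x latency means ~10x slower

# A bulk download: 1 round trip, 100 MB payload. Bandwidth dominates.
bulk_fast = transfer_time_s(1, 800_000_000, rtt_s=0.05, bandwidth_bps=100_000_000)
bulk_slow = transfer_time_s(1, 800_000_000, rtt_s=0.5, bandwidth_bps=100_000_000)
print(bulk_fast, bulk_slow)  # ~8.05s vs ~8.5s: 10x latency barely matters
```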

The Internet is resilient

The Internet is designed to be resilient and self-healing. In fact, one of its antecedents, ARPANET, was designed by the U.S. Department of Defense, who were looking for a communication network that was robust and could handle large-scale breakages. There’s an interesting rumour that ARPANET was originally designed to survive a nuclear attack, but sadly the Internet Society pooh-poohed that rumour.

At the core of both ARPANET and later the Internet is the concept of packet switching. Messages are not sent in a single, continuous burst but are divided into chunks known as packets. A packet is made up of a header, which contains metadata telling the network how to handle the packet, and a data payload.

Each packet is sent independently of the others. At each hop of the journey, a given packet might be routed in a different direction depending on network congestion and availability. Imagine that a sender transmits 10 packets to a given host. The first five might follow the same path. Imagine then that one of the routers on the path is under heavy traffic load and can’t receive any more packets. The subsequent packets will be routed around this blockage along another path with sufficient capacity.
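The chunk-and-reassemble idea can be sketched in a few lines of Python. This toy version uses a bare sequence number as the entire header; real IP headers carry far more metadata (addresses, checksums, time-to-live and so on), and the payload size is made tiny just for illustration:

```python
# Toy packetisation: the "header" is just a sequence number so the
# receiver can reassemble the message even if packets arrive out of order.

import random

def packetise(message: bytes, payload_size: int) -> list[tuple[int, bytes]]:
    """Split a message into (sequence_number, payload) packets."""
    return [
        (seq, message[i:i + payload_size])
        for seq, i in enumerate(range(0, len(message), payload_size))
    ]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    """Sort by sequence number and stitch the payloads back together."""
    return b"".join(payload for _, payload in sorted(packets))

packets = packetise(b"the internet is a series of tubes", payload_size=4)
random.shuffle(packets)  # packets may take different routes and arrive out of order
message = reassemble(packets)
print(message)  # b'the internet is a series of tubes'
```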

Packet switching improves resilience. If a node on the network – one of these machines straddling multiple sub-networks – fails for whatever reason, the network will automatically find another route for traffic. It also improves the network’s overall utilisation by encouraging each packet to find the fastest route, thus spreading the traffic load more evenly.

Clearly, the machines on the Internet need to have some way to communicate with each other about the state of the network. Furthermore, whenever a packet arrives at a host, the host needs some way of knowing what it is expected to do with the packet.

The Internet is a stack of protocols

The solution is the protocol. The Internet is based on a stack of protocols, the most important of which are the Internet Protocol (IP) and the Transmission Control Protocol (TCP). Add in some clever routing algorithms and systems for managing and retrieving host addresses, and you have an Internet!

So, what exactly is a protocol? Humans use protocols all the time to coordinate behaviour without explicit instruction. Generally we call them something like “manners”, “politeness” or the “proper way to do things”.

Greetings are a great example. In many cultures, it’s common for two people to greet each other by cheek kissing. If everyone knows that you kiss the right cheek and then the left, two people who’ve never met can execute a successful cheek kiss:


That’s an example of coordinated behaviour without explicit instruction. Neither person said “I will approach you. I will then air kiss you on the right cheek. I will then kiss the left. We are then greeted”.

But wait! In some places it’s common to kiss three times. What happens when a three-kisser encounters a two-kisser in the wild? If one person is expecting two kisses and the other goes in for a third, we run the risk of awkward nose/lips/eye kissing and much embarrassment! The problem is that the kissing parties did not realise they were operating different protocols, because they did not communicate beforehand. However, the whole point of a protocol is to avoid pre-communication.

This presents a bigger problem: how can network hosts even agree which protocol they should be using? As we’ll see, the Internet uses the ingenious concept of a “stack” of protocols. Each layer of the stack has protocols that are responsible for particular tasks and provide the hosts with a common basis from which they can develop more sophisticated communication.

For example, the Internet uses IP to handle the delivery of individual packets across networks. On the next layer up, it uses TCP to build the abstraction of a reliable, two-way connection on top of IP.


As web developers, it’s very easy to take all that for granted. We just expect our requests and responses to be delivered correctly, with no sections missing or in the wrong order. It takes a tremendous amount of engineering to make things look so simple. In the next post, we’ll look at protocols in more depth.

Did you find this useful?

Sign up to the mailing list and get free content sent to your inbox every few weeks.
