

IEEE CTN
Written By:

Geoff Huston, Chief Scientist, APNIC

Published: 26 May 2023


CTN Issue: May 2023

A note from the editor:

I have always enjoyed reading about history, as I believe it provides the best way to understand the possibilities of the future, the usual disclaimers aside, like “past performance is no guarantee of future results”. That’s just it: there are no warranties in using history as a periscope to the future, but there is certainly a lot we can learn by understanding the past in order to have an idea of what the future might hold. When the CTN editorial board received this article from the author Geoff Huston we were excited to read it. Geoff takes us on a “good things come in small packages” ride around the history of the internet in order to paint a picture of what he believes the next 50 years can bring. Just think about it: 50 years ago the internet was nothing more than a handful of PDP-10/11 computing nodes with a few universities and government entities connected to it; today it is part of humanity’s functioning fabric, and some might say it is part of the genome of an ever-evolving society. What will the future internet look like? That’s the question that Geoff so eloquently tries to address. We hope you enjoy it as much as we did.

Miguel Dajer, CTN Editor-in-Chief

Fifty Years of the Internet

Geoff Huston


Chief Scientist

APNIC

Geoff Huston AM, B.Sc., M.Sc., is the Chief Scientist at APNIC, the Regional Internet Registry serving the Asia Pacific region.

When did the Internet begin?  It all gets a bit hazy after so many years, but by the early 1970s research work in packet-switched networks was well underway. A number of efforts in the late 1960s had shown the viability of packet switching for data networking, including an effort led by Donald Davies at the UK National Physical Laboratory, an ARPA project in the US led by Larry Roberts [1], and Louis Pouzin’s work in France with the CYCLADES network [2]. With the rapid evolution of computers from esoteric specialised devices to more ubiquitous roles in government, industry and research in the 1970s came an associated set of pressures to improve the capabilities of communications between these computers. That was some 50 years ago, and so much has happened since then. My objective here is not to retell this history as such, but to use it as a background to formulate some thoughts on the more challenging question of what the next 50 years have in store for us.

I’ll try to avoid the even more challenging set of questions about the future of computers and computing over this same period, as the time span is long enough to think well beyond silicon-based structures and muse in the rather diverse directions of quantum physics and biological substrates for computation. I will look specifically at the question of the evolution of computer communications in this period. In addressing this question, I’ve found myself wondering what I would have thought in response to it had it been posed fifty years ago, in 1973. When we look at the musings of the time, some of the predictions about communications technologies seem quite prescient with the benefit of hindsight, while other aspects are just way off the mark. It illustrates the constant issue with such musing about the future: predicting the future is easy. The tough bit is getting it right!

Fifty Years On: The Perspective from 1973

In 1973 the monolithic mainframe computing environment was being challenged by the introduction of so-called minicomputers. Large-scale shared computers had been justified by the high cost of these devices, the esoteric use cases, and the need to defray these costs over multiple users and uses. Minicomputing challenged this concept by bringing down the cost of computing and changing the model of access and use. These were smaller-scale devices that could be used for a single purpose or by a single user. It was no accident that Unix, a single-user operating system developed on a PDP-7 minicomputer, came out of Bell Labs at that time (equally, it was no accident that the name “Unix” was a deliberate play on the term “Multics”, a time-sharing operating system for multiple concurrent users). The industry would continue to push further into the market by building computers with ever smaller form factors, yet with sufficient capability to perform useful work. What would have been harder to predict at that time was the advent of computers as a mass consumer product. Over in the consumer space our collective fascination was still absorbed by pocket calculators. It was not obvious at the time that the pocket calculator market would enjoy only a fleeting moment in the mainstream.

Figure 1: ARPA network map in 1973. Public Domain

By 1973 it was also time to think about the needs of computer-based communication in a more focussed fashion, and when we did, we had a split vision. Local networks that connected peripheral devices to a common central mainframe computer were being deployed, and these networks used dedicated infrastructure. Connecting computers over longer distances was not seen as a market anywhere near the scale and value of telephony, and it therefore seemed likely that computers would continue to ride across existing telephone infrastructure for the foreseeable future. At best it was thought that these computers could interface to the telephone network at the point of the telephone network’s digital switching infrastructure. From this came an envisaged paradigm of computer communications that mimicked telephone transactions with dynamic virtual circuits, such as the X.25 packet-switched networks and the later promotion of ISDN as the telephone industry’s perverse vision of what consumer “broadband” was meant to be.  This created a bifurcated vision for computer communications: local networks advancing along an "always on, always connected" model using dedicated transmission infrastructure, and longer-distance networks straddling the capabilities of the telephone network with a model of discrete self-contained transactions as the driving paradigm.

What has happened since 1973 to shape the world of today? Firstly, Moore's Law has been truly prodigious over these 50 years. In the 1980s the network was merely the transmission fabric for computers, and the unit of this transmission was the packet. The network itself did very little, and most of the functionality was embedded in the attached computers. In the 1990s, however, the momentum behind computers as a consumer product not only gathered pace but overwhelmed the industry, and the personal computer became a mandatory piece of office equipment for every workstation and increasingly for every home as well. But these computers were not the multi-purpose, always-on, always-at-work, smaller-scale models of the mainframe computers that they were displacing; they were more in the line of smart peripheral devices. Computer networks started to adopt an architecture that made a fundamental distinction between "clients" and "servers". Networks started to amalgamate some of the essential services of a network, such as a common name service and a routing system, into this enlarged concept of the network, while "clients" were consumers of the services provided by the network. In a sense the 1990s saw the transformation of the computer network from the paradigm of telephony to its own unique paradigm.

However, this change in the model of networking to client/server systems also created a more fundamental set of challenges in the networking environment. In the deregulated world of the Internet of the 1990s the capacity requirements of the network were determined by the actions of the consumer market, and the coupling of consumer demand and network supply created unparalleled levels of demand. This environment created a feedback loop that amplified demand for service infrastructure without an accompanying amplification of the revenue stream. The technology models of service delivery were also under stress. Popular services hosted on a single platform were totally overwhelmed, as was the network infrastructure for these services. The solution was to change the technology of service infrastructure, and we started to make use of server farms and data centres, exchanges and gateways, and the hierarchical structuring of service providers into 'tiers'. We experimented once more with virtual circuits in the form of MPLS and VPNs and other related forms of network partitioning, and because these efforts to pace the capacity of the service realm tended to lag the demand from the client population, we experimented with various forms of "quality of service" to perform selective rationing of those network resources that were under contention.

Perhaps the most fundamental change by the 2000s was the emergence of content distribution network models. Rather than using the network to bring clients to a single large-scale service delivery point, we turned to the model of replicating the service closer to the service's clients. In this way the client demand was expressed only within the access networks, while the network's interior was used to feed the updates to the edge service centres. In effect the Internet had discovered edge-based distribution mechanisms that brought the service closer to the user, rather than the previous communications model that brought the user to the service.

Figure 2: Technical, Commercial and Regulatory Challenges of QoS, An Internet Service Model Perspective. XiPeng Xiao, 2008

The last 50 years have seen an evolution in networking infrastructure. We've taken packet switching for local area networks and pushed it into high-speed long-distance infrastructure. We haven't constructed SDH circuit fabric for decades, and these days the packet switches of the Internet connect directly to the transmission fabric. Yet through all these basic transitions we are still carrying these packets using the Internet Protocol. For me, the true genius of the Internet Protocol was to separate the application and content service environment from the characteristics of the underlying transmission fabric. Each time we invented a new transmission technology we could just map the Internet Protocol onto it, and then allow the entire installed base of IP-capable devices to use this new transmission technology seamlessly. From point-to-point serial lines to common-bus Ethernet systems to ring systems such as FDDI and DQDB, and on to radio systems, each time we've been able to quickly integrate these technologies at the IP level with no change to the application or service environment. This has not only preserved the value of the investment in the Internet across successive generations of communications technologies but increased that value in line with every expansion of the Internet’s use and users.

Fifty Years On: 2023

This now allows us, at last, to look at the next 50 years in communications technologies.  Fifty years is a long time in technology, as we’ve just observed. Perhaps it’s not all that useful to try to paint a detailed picture of the computer communications environment that will be prevalent 50 years hence. But if we brush over the details, then we can look at the factors that will shape that future, and select them based on the forces that have shaped our current world.

What's Driving Change Today?

Bigger

When we stopped operating vertically integrated providers (telephone companies) and used market forces to loosely couple supply and demand, we unleashed waves of dramatic escalation in demand for Internet services. We used to describe telephony communications in a language of multiples of kilobits per second. Today the units of the same conversations are measured not in megabits or gigabits per second, but in terabits per second. For example, the Google Echo cable, announced in March 2021 and linking the US with Singapore across the Pacific, will be constructed with 12 fibre pairs, each with a design capacity of 12Tb/s. Yes, that's an aggregate cable capacity of 144Tb/s. Google’s Dunant cable system delivers an aggregate capacity of 250Tb/s across the Atlantic, which will be complemented by the 352Tb/s Grace Hopper cable system. We are throwing everything we can at building ever-larger capacity transmission systems, with photonic amplifiers, wavelength multiplexing, phase/amplitude/polarisation modulation, and digital signal processing pushed to extreme levels to extract significant improvements in cable capacity.
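
To put those headline numbers in context, here is a minimal Python sketch of the capacity arithmetic quoted above. The fibre-pair count and per-pair design capacity are the figures reported for the Echo cable; the comparison against a 64 kb/s digital voice circuit is an illustrative assumption, not a figure from the cable announcements.

    # Aggregate capacity arithmetic for the Echo cable (figures as quoted above).
    echo_fibre_pairs = 12           # fibre pairs in the Echo cable
    echo_pair_capacity_tbps = 12.0  # design capacity per fibre pair, in Tb/s

    aggregate_tbps = echo_fibre_pairs * echo_pair_capacity_tbps
    print(f"Echo aggregate capacity: {aggregate_tbps:.0f} Tb/s")  # 144 Tb/s

    # For a sense of scale against the telephony-era unit of a 64 kb/s voice circuit
    # (an assumed point of comparison, not from the announcements):
    voice_circuit_bps = 64_000
    equivalent_circuits = (aggregate_tbps * 1e12) / voice_circuit_bps
    print(f"Equivalent 64 kb/s circuits: {equivalent_circuits:,.0f}")  # 2,250,000,000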

Moore's law may have been prodigious, but frankly the consumer device industry has scaled at a far more rapacious rate. We appear to have sold some 1.4 billion mobile Internet devices in 2020, and have sustained that volume, or higher, every year since 2015. Massive volumes and massive capability fuel more immersive content and services.

When we consider “bigger” it’s not just human use of the network that's a critical consideration. This packet network is a computer network, and the usage realms include the emerging world of the so-called Internet of Things. When we look at this world, we have two questions which appear to be unanswerable, at least with any precision. How many “things” are using the Internet today? How many will be using the Internet tomorrow?

There are various estimates as to the device population of the Internet today [3]. There is some consensus around a figure of between 20 and 50 billion devices, but these rely on various estimates rather than more robust analytical measurements. Production volumes for microprocessors run into billions of units per year, so the expectations of growth in the sector are all extremely uncertain but generally incredibly high. Five-year growth projections in this market segment start at around a total of 50B devices and just get higher and higher.

Behind this is the observation that in growing bigger the Internet is no longer tracking the population of humans and the level of human use. The growth of the Internet is no longer bounded by human population growth, nor the number of hours in the day when humans are awake. We’re changing this network to serve a collection of computer devices whose use is based on a model of abundance. Abundant processing capacity, abundant storage, and abundant network capacity. We really don't understand what “bigger” truly means in terms of demands. The best we can do is what we’ve been doing over the past couple of decades: deploy capital, expertise, and resources as fast as these inputs can be assembled. We still seem to be in the phase of trying to keep up with demand, and however big we build this network, the use model has proved more than capable of saturating it.

Faster

At the same time as we are building bigger networks, both in terms of the number of connected clients and in the volume of data moved by the network, we want this data to be pushed through the network at ever faster rates.

We have been deploying very high-capacity mobile edge networks. The industry is being pushed into deployment of 5G systems that can deliver data to an endpoint at a claimed peak speed of 20Gb/s. Now this may be a "downhill, wind at your back, no-one else around" measurement, but it underpins a reasonable consumer expectation that these mobile networks can now deliver hundreds of Mb/s to connected devices. In the wired world we are rewiring our environment with fibre, and here the language of a unit of wired service is moving away from megabits to gigabits.

But speed is not just the speed of the transmission system; it is also the speed of the transaction itself. Here the immutable laws of physics come into play, and there is an unavoidable signal propagation delay between sender and receiver. If "faster" is more than brute-force volume and also means "responsiveness" of the system to the client, then we want both. We want both low latency and high capacity, and the only way we can achieve this is to reduce the "packet miles" for every transaction. If we serve content and services from the edge, then the unavoidable latency between the two parties drops dramatically. The system becomes more "responsive" because the protocol conversation is faster.
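
As a rough sketch of why reducing packet miles matters so much for responsiveness, the following Python fragment computes the minimum round-trip time imposed by propagation alone over a long-haul fibre path and over a nearby edge path. The distances and the fibre velocity factor (light in fibre travels at roughly two-thirds of its vacuum speed) are illustrative assumptions, not measurements from the article.

    # Minimum round-trip time imposed by signal propagation alone.
    C_VACUUM_KM_PER_S = 299_792   # speed of light in vacuum, km/s
    FIBRE_VELOCITY_FACTOR = 0.67  # assumed: light in fibre travels at roughly 2/3 c

    def min_rtt_ms(distance_km: float) -> float:
        """Two-way propagation delay in milliseconds, ignoring all other delays."""
        one_way_s = distance_km / (C_VACUUM_KM_PER_S * FIBRE_VELOCITY_FACTOR)
        return 2.0 * one_way_s * 1000.0

    print(f"~12,000 km trans-Pacific path: {min_rtt_ms(12_000):.0f} ms minimum RTT")  # ~119 ms
    print(f"~50 km path to an edge node:   {min_rtt_ms(50):.1f} ms minimum RTT")      # ~0.5 ms

    # A transaction needing four round trips pays this propagation cost four times over:
    for distance_km in (12_000, 50):
        print(f"4 RTTs over {distance_km:,} km: {4 * min_rtt_ms(distance_km):.0f} ms")

This is the arithmetic behind the "packet miles" argument: no protocol improvement can recover the propagation delay of a long path, but moving the service to the edge makes that delay negligible.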

But it's not just moving services closer to clients that makes a faster network. We've been studying the at-times complex protocol dance between the client and the network that transforms a "click" into a visible response. We are working to increase the efficiency of the protocols so that a transaction outcome can be generated with a smaller number of exchanges between client and server. That translates to a more responsive network that feels faster to use. What we are trying to do is remove the long-haul transit element from network transactions. Also, by anticipating demands and pre-provisioning content in content delivery data centres, we can eliminate the inevitable capacity choke points associated with distance. In networking terms “closer” is essential for “faster”. It’s not all that “faster” needs, but without close proximity between sender and receiver “faster” is simply not possible.

Better

The use of encrypted and authenticated content sessions is close to ubiquitous in today's web service environment. We've been working on sealing up the last open peephole in the Transport Layer Security (TLS) protocol by encrypting the Server Name Indication carried in the TLS Client Hello message in TLS 1.3 [4]. We are even taking this a step further with the approaches proposed in Oblivious DNS [5] and Oblivious HTTP [6], where we can isolate any other party, even the service operator, from knowledge of the combination of the identity of the client and the transaction being performed. This implies that nobody other than the client has a priori knowledge of this coupling of identity and transaction.
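
As a small illustration of where the Server Name Indication sits, the following sketch uses Python's standard ssl module to open a TLS 1.3 session; the hostname is a placeholder. Note that this stock handshake still sends the SNI in the clear inside the Client Hello; the encrypted Client Hello/SNI work referenced above is what closes that peephole, and it is not exposed by the standard ssl module at the time of writing.

    # A plain TLS 1.3 client connection; server_hostname populates the (cleartext) SNI.
    import socket
    import ssl

    HOSTNAME = "example.com"  # placeholder endpoint, for illustration only

    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # insist on TLS 1.3

    with socket.create_connection((HOSTNAME, 443)) as raw_sock:
        with ctx.wrap_socket(raw_sock, server_hostname=HOSTNAME) as tls_sock:
            # Everything after the handshake is encrypted, but a network observer
            # has already seen HOSTNAME in the Client Hello's SNI extension.
            print("negotiated:", tls_sock.version())
            print("cipher:", tls_sock.cipher()[0])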

The content, application and platform sectors have all taken up selected aspects of the privacy and authenticity agenda with enthusiasm, and the extent to which networks are implicitly trustable really does not matter anymore. If the network cannot obtain privileged information in the first place, then the question of whether the network can be trusted with this information is no longer relevant. This question of trust covers the payload, the transaction metadata, such as DNS queries, and even the control parameters of the transport protocol. In today’s networks we deliver a “better” outcome to users and the services they choose to use by taking the stance that all network infrastructure is regarded as untrustable!

It is likely that this is an irrevocable step, and the previous levels of implicit trust between services, applications and content on one side, and the underlying platform and network frameworks on the other, are gone forever. Once it was demonstrated that this level of trust was being abused in all kinds of insidious ways, the application and service environment responded by taking all necessary steps to seal over every point of potential exposure and data leakage.

Cheaper

We appear to be transitioning into an environment of abundant communications and computing capability. At the same time, these systems show significant economies of scale. For example, improving the carriage capacity of a cable system by a millionfold has not resulted in a millionfold increase in the price of the cable system, and in some cases the capital and operating cost of the larger system has in fact declined over the years. The cost per bit per unit of distance has plummeted as a result.

At the same time, we’ve shrunk the network, so that service transactions are local. The rise of the CDN model has changed the Internet. By pre-provisioning content close to every edge, the subsequent on-demand transaction from server to client occurs over a small distance. Smaller distances are not only faster for service transactions, they are also cheaper to build and operate: shorter spans consume less power and have superior signal-to-noise characteristics.

It can be argued that much of the Internet’s service environment is funded through online advertising, by which service providers capitalise a collective asset that is infeasible to capitalise individually. The outcome is transformational in so far as a former luxury service, accessible only to a privileged few who could assemble a team of dedicated researchers, has been transformed into a mass-market commodity service that is available to all. In many cases it’s not just available at an affordable rate; it’s affordable as in free of any charges at all.

Bigger, Faster, Better and Cheaper

It was often said that it was impossible to meet all these objectives at once, yet somehow the digital service platform has been able to deliver across all of these parameters. How has it done this? The way in which we build service platforms to meet ever-larger load and ever-declining cost parameters is not just by building bigger networks, but by changing the way in which clients access these services. We’ve largely stopped pushing content and transactions all the way across a network and instead we serve from the edge. Serving from the edge slashes packet miles, which in turn slashes network costs and lifts responsiveness, which lifts speed. These seem to be the driving factors for the next few decades.

This is not a more ornate, more functional, more “intelligent” network. This is not a baroquely ornamented “New IP” [7] network, or anything remotely close. These factors represent the complete antithesis of the conventional attributes of a so-called ‘smarter’ network. By pushing functions out of the network and onto the computers that populate the edges of the network, we strip out common cost elements and move them to the connected devices, where the computing industry is clearly responding with more capable devices that can readily undertake such functions. By pushing services out to the edge of the network we further marginalise the role of a common shared network in providing digital services.

These appear to be the dominant factors that will drive the next 50 years of evolution in computer communications and digital services.

Longer Term Trends

What defines “the Internet” in all this?

We used to claim that “the Internet” was a common network, a common protocol, and a common address pool. Any connected device could send an IP packet to any other connected device. That was the Internet. If you used addresses from the Internet’s address pool, then you were a part of the Internet. This common address pool essentially defined what was the Internet.

These days that’s just not the case, and as we continue to fracture the network, fracture the protocol framework, fracture the address space, and even fracture the name space, what is left to define “the Internet”? Perhaps all that will remain of the Internet as a unifying concept is a somewhat amorphous characterisation of a disparate collection of services that share common referential mechanisms.

However, there is one thing I would like to see over the next 50 years that has been a feature of the past 50 years. It's been a wild ride. We've successfully challenged what we understood about the capabilities of this technology time and time again, and along the way performed some amazing technical feats. I would like to see us do no less than that over the coming 50 years!

References

  1. https://www.npl.co.uk/getattachment/about-us/History/Famous-faces/Donald-Davies/UK-role-in-Packet-Switching-(1).pdf.
  2. https://historyofcomputercommunications.info/section/8.3/cyclades-network-and-louis-pouzin-1971-1972/
  3. https://techjury.net/blog/how-many-iot-devices-are-there/#gref
  4. Rescorla, E., "The Transport Layer Security (TLS) Protocol Version 1.3", RFC 8446, DOI 10.17487/RFC8446, August 2018, <https://www.rfc-editor.org/info/rfc8446>.
  5. Schmitt, Paul, et al. "Oblivious DNS: Practical privacy for DNS queries." Proceedings on Privacy Enhancing Technologies 2019.2 (2019): 228-244.  (https://odns.cs.princeton.edu/pdf/pets.pdf)
  6. Thomson, Martin, “Oblivious HTTP”, work in progress, Internet Draft, February 2022. https://www.ietf.org/archive/id/draft-thomson-http-oblivious-01.html
  7. Internet Society, “Huawei’s “New IP” Proposal – Frequently Asked Questions”, February 2022. (web page)

Statements and opinions given in a work published by the IEEE or the IEEE Communications Society are the expressions of the author(s). Responsibility for the content of published articles rests upon the author(s), not IEEE nor the IEEE Communications Society.
