David Clark’s new book, Designing an Internet (MIT Press, 2018), is an important contribution to Internet governance studies. For the past 35 years, Clark has been one of the deepest thinkers about the Internet’s architecture and design principles. He is also one of the few computer scientists willing to stray into public policy research and attempt to bridge the divide between technical experts, social scientists, and policy makers.
Although the book has its roots in research done for the U.S. National Science Foundation’s “Future Internet Architecture” program, it does not, thank God, spend most of its effort attempting to answer the question of what a “new Internet” would look like, much less put forward a specific new protocol suite.
Instead, most of the book explores, in more abstract terms, the technical, operational, and management implications of different design decisions. He shows how, in a large-scale technical system, protocol and architectural design choices condition and set the parameters of performance and mediate the conflicting interests of different actors. These micro-level analyses, however, seem to be motivated by a fascination with a much bigger question: at what level of generality or specificity does one make these choices? Architecture is defined as “something on which we all need to agree.” Implementations or mechanisms are the details that fill in the spaces. How much should an architecture constrain, and how much should it leave empty? It’s as if he’s the Hayek of tech.
The social and the technical
The editors and promoters of this book speak as if Clark had synthesized social and technical analysis, but that’s not quite the case. Clark is a computer scientist, and 90% of this book is about very nitty-gritty technical features of networking: e.g., the systemic effects of increasing or decreasing the “expressive power” of a packet, the way various software processes strive to shape “per-hop behaviors” (PHBs, the mechanisms and barriers that act on packets as they move from one router to the next), or whether strict adherence to layering is consistent with trends in network management. If you’re into that – and this reviewer is – you will find this book fascinating.
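To give non-networking readers a flavor of what a per-hop behavior amounts to, here is a minimal sketch of my own (the header fields, policies, and names are illustrative assumptions, not taken from the book): a PHB can be thought of as a function a router applies to every packet it forwards, and the “expressive power” of the packet header determines how much that function can be made to vary.

```python
# Purely illustrative sketch (not from Clark's book): a "per-hop behavior"
# modeled as a function a router applies to each packet it forwards.
# The header fields stand in for whatever expressive power the packet has.

from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    ttl: int
    dscp: int = 0      # a field routers can read to choose a forwarding treatment

def drop(packet):
    return None        # the packet goes no further

def forward_best_effort(packet):
    packet.ttl -= 1
    return packet if packet.ttl > 0 else None

def forward_priority(packet):
    packet.ttl -= 1
    packet.dscp = 46   # mark for expedited treatment downstream
    return packet if packet.ttl > 0 else None

# A router's per-hop behavior: pick a treatment based on what the packet expresses.
def per_hop_behavior(packet: Packet):
    if packet.dst.startswith("10."):     # e.g., a policy barrier on certain destinations
        return drop(packet)
    if packet.dscp > 0:
        return forward_priority(packet)
    return forward_best_effort(packet)

pkt = Packet(src="198.51.100.2", dst="192.0.2.9", ttl=64)
print(per_hop_behavior(pkt))             # forwarded best-effort, ttl now 63
```

The more a packet can express, the richer the behaviors that routers and middleboxes can impose on it as it crosses the network – which is precisely where the conflicts Clark cares about tend to surface.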
Here’s an example. He talks about a protocol design that would go to great lengths to avoid having a setup packet make a round trip (origin to destination and back). He doesn’t like the idea, because “saving one round trip in the case of a setup packet …is not justified if the result forces a complex mechanism into the network architecture that could otherwise be separated and performed in the larger ecosystem.”
Clark is at his best – he creates a real bridge between social science and computer science – when he shows how these kinds of technical design decisions reflect or create what he calls “tussles.” A tussle is defined thus:
…networks and distributed systems … are composed of elements whose interests are not necessarily aligned. These actors may contend with each other to shape the system behavior to their advantage. My co-authors and I picked the word tussle to describe this process (Clark et al., 2005).
OK, so “tussle” is really just a cutesy term for what political economists would call distributional conflict, where actors contend over the shares they get from their interactions. Tussle is what political economy is all about and has been for a few centuries. The absence of any explicit recognition of this fact, in Clark’s work and in that of many others, testifies to the persistence of the gap between computer scientists and social scientists. Still, by recognizing the existence of aligned and misaligned interests and adversarial actions, and by linking this awareness to a very systematic way of thinking about network design decisions, Clark makes an important contribution to the understanding of Internet governance. Chapter 6 contains his basic model of socio-technical interaction, and it is very useful.
Can a new Internet solve our problems?
We pay special attention to this book because it bears directly on an ongoing debate over the extent to which standards and protocols constitute a form of governance that can work in favor of human rights advocates. We weighed in recently on this debate with a paper challenging the idea that human rights or values can be engineered into the Internet. Indeed, a lot of the funding that the NSF steered toward social scientists in the FIA program was predicated on the hope that a new Internet might be able to “design away” the social problems caused by the old Internet or, as one NSF grant recipient put it, “engineer values” into a new system. This is an amazing testimony to the prestige afforded to engineers and computer scientists, and to the idea that “code is law.”
A close and careful reading of Clark will all but dash those hopes. The thrust of his argument is that tussle is inevitable. Designers tilt the playing field in some ways, but tussles are always going to pop up somewhere. Design decisions shape the structure and situs of tussles but do not deterministically control their outcomes. Lots of functions migrate to the “larger ecosystem.” The impact of design decisions cannot always be foreseen. The impact of architectural and mechanism design on “values” and societal benefits is highly contingent and can only really be known ex post, not ex ante (as we have argued at length here).
Clark’s musings about cybersecurity are particularly interesting in this regard. We got an untrustworthy internet, he claims, not because the early designers did not pay any attention to security, but because their understanding of the security problem was too narrow. Being close to the military and intelligence communities, they were concerned exclusively with confidentiality, the secrecy of communications. They assumed that the parties communicating would be mutually trusting, and this “distracted us from the insight that most of the communication on the Internet would be between parties who were prepared to communicate but did not know whether to trust each other.” (p. 194) Clark also points out that a future internet that is engineered to be more trustworthy might backfire and create new forms of insecurity through surveillance and repression. You won’t really know until it has been implemented and various actors have moved to gain an advantage through its use. Granted, the design can tilt the playing field in certain ways, but we are never entirely sure what macro-social effect that bias will have.
Constructivism vs evolutionism
An interesting but possibly unintended finding in this book is that it’s not even so straightforward for the designers to “know” what they are doing. As I read his older papers, it hit me that explicit formulations of the architectural principles underlying the Internet came well after the protocols had been designed and operationalized – sometimes as long as ten or fifteen years after. It is probably true that these principles, or something like them, were implicit or tacit in the work that the original designers were doing, but the time gap serves as a reminder that doing things and coming up with explicit conceptualizations of what one is doing are two distinct processes.
For example, it was not until 1988, practically a decade after the Internet had become a functioning reality, that Clark wrote a paper articulating “The Design Philosophy of the DARPA Internet Protocols.” On p. 62 of this book, Clark notes that his own 1988 paper did not even mention the “end-to-end argument” that he (along with Saltzer and Reed) had articulated as a key architectural principle back in 1984. Clark’s 2017 explanation for this omission is that “perhaps in 1988 it was not yet clear that the end-to-end description…would survive as the accepted framing.” So even in 1988, explicit, agreed-upon principles had not fully emerged from the fog of creation.
It is evident that the principles and decision criteria that underpinned the design and implementation of the Internet from the mid-1970s through the 1980s constituted emergent and tacit knowledge, not clearly formulated “design principles” or an explicit, thoroughly articulated architectural philosophy. The point here is that the pioneers of internetworking were solving reasonably well-bounded data communication engineering problems; they were not designing “society.” While we can credit them with having a distinctive notion of the benefits of their innovative architecture and some very successful and future-oriented insights into the functional consequences of some of their design decisions, it’s ridiculous to think that they could have incorporated into this design process some notion of the overall societal impact of that architecture. Those who would place on these problem-solvers the enormous burden of anticipating societal impact, or expect them to engineer “values” or “human rights” into their design, would probably paralyze them. At any rate, such expectations are based on a very flawed notion of how technological systems and society are related. The results of human action are not always reducible to intentional human design, as someone said once, I think.
Designing an Internet is a tough but rewarding read. It came at the right time. Anyone who wants to address the relationship between the technical architecture of the Internet and “society,” economics, public policy and security should not utter any more words about the subject until they read and comprehend this book.
P.S. Clark articulates some heresies. On p. 100 he discusses “the evolutionary process in which the Internet mutated from having a single, global address space to a number of private address spaces connected using NAT devices. By and large, the Internet has survived the emergence of NAT and perhaps global addresses did not need to be such a central assumption of the presumed architecture.” If you say this out loud, a thousand IPv6 fairies die.
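For readers who want to see what that mutation actually amounts to, here is a minimal, purely illustrative sketch of the address-and-port rewriting a NAT device performs; the addresses, port numbers, and the dictionary-based table are my own simplification, not anything from the book.

```python
# Purely illustrative NAT sketch: hosts in a private address space share one
# public address, and the NAT box rewrites (address, port) pairs on the way out.
# All addresses and the table structure here are invented for illustration.

nat_table = {}            # (private_ip, private_port) -> public_port
next_public_port = 40000
PUBLIC_IP = "203.0.113.7"

def translate_outbound(private_ip, private_port):
    """Rewrite an outgoing packet's source so the rest of the Internet
    sees only the shared public address."""
    global next_public_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next_public_port
        next_public_port += 1
    return PUBLIC_IP, nat_table[key]

def translate_inbound(public_port):
    """Reverse lookup: deliver a reply back to the right private host.
    Unsolicited traffic with no mapping is simply dropped."""
    for (priv_ip, priv_port), pub_port in nat_table.items():
        if pub_port == public_port:
            return priv_ip, priv_port
    return None

# Two hosts in a private address space appear externally as the same address.
print(translate_outbound("192.168.1.10", 5000))   # ('203.0.113.7', 40000)
print(translate_outbound("192.168.1.11", 5000))   # ('203.0.113.7', 40001)
print(translate_inbound(40001))                    # ('192.168.1.11', 5000)
```

The private hosts are not globally addressable at all; they are reachable only through the NAT box’s mapping table. That is exactly the departure from a single, global address space that Clark suggests the architecture may never have needed to assume.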