I just returned from the “Internet of Things, Internet of the Future” conference sponsored by the French Presidency of the EU. I spoke on the governance issues; the program is here. “Internet of things” is the nom du jour for the marriage of electronic product codes, RFID and the domain name system. The idea is to use the DNS to map the product codes read from RFID tags to information about the tagged items, giving trading partners a much more detailed view of their supply chains. But as those tags and codes permeate consumers’ physical environment, a host of privacy issues arises.
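For the curious, here is a minimal sketch of how that resolution works under the EPCglobal Object Name Service (ONS) convention: the product code read off a tag is rewritten as a domain name and handed to ordinary DNS. The example EPC and the exact field handling follow my reading of the ONS 1.0 approach and are illustrative, not authoritative.

```python
# Sketch: resolving an RFID-read Electronic Product Code (EPC) via DNS
# under the EPCglobal Object Name Service (ONS) convention. Illustrative
# only; assumes the ONS 1.0 idea of dropping the serial number and
# reversing the remaining EPC fields under a fixed root (onsepc.com).

def epc_to_ons_domain(epc_urn: str, root: str = "onsepc.com") -> str:
    """Convert an EPC URN like 'urn:epc:id:sgtin:0614141.000024.400'
    into an ONS domain name. The item serial number (last field) is
    dropped; the remaining fields are reversed and appended under the
    ONS root, yielding '000024.0614141.sgtin.id.onsepc.com'.
    """
    parts = epc_urn.split(":")
    if parts[:3] != ["urn", "epc", "id"]:
        raise ValueError("not an EPC identity URN")
    scheme = parts[3]                  # e.g. 'sgtin'
    fields = parts[4].split(".")[:-1]  # drop the item serial number
    return ".".join(reversed(fields)) + f".{scheme}.id.{root}"

if __name__ == "__main__":
    print(epc_to_ons_domain("urn:epc:id:sgtin:0614141.000024.400"))
    # -> 000024.0614141.sgtin.id.onsepc.com
    # A resolver would then query DNS records at that name to locate
    # the network service holding data about the tagged product.
```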

At a panel on the privacy implications, I was struck by how many panelists – perhaps four or five out of seven – invoked “privacy by design” as the answer to RFID privacy threats. They all seemed to agree that it was somehow possible to pre-configure the technology in such a way that privacy is structurally protected. Appropriately enough, the next day one of the chief intellectual promoters of this myth, law professor Lawrence Lessig, keynoted the conference. Believe it or not, this was the first time I had actually seen one of Lessig’s now-generic speech/performances. I was surprised at the degree to which his original ideas regarding “code as law” have been dumbed down into statements that border on technological determinism. For example: “innovation [on the internet] is no accident, it’s a consequence of design.”

I think people who put too much stock in the ability of “design” or constructed “architectures” to solve or forestall social problems are wrong, and woefully ignorant of history. The evolution of large-scale technical systems is path-dependent and unpredictable; technical architecture plays a role, of course, but policy, law, politics – and especially economics – loom large. Technical designs reflect those constraints more than they overcome them. Socio-technical systems consist of multiple components (a computer, for example, comprises integrated circuits, an operating system, application software, networking software and hardware, and user interface hardware), each designed and produced by different actors and each with its own evolutionary trajectory.

At no point in this evolutionary process is anyone able to blow a whistle and say, “Halt! Everyone stop what you’re doing so we can design this thing to optimize specific social values.” And even if that were possible, the values “architected” by any comprehensive re-design would be the product of a complex set of political and economic bargains. Values like privacy would have to contend with dozens of other concerns.

The evolution of a socio-technical system is always incremental, and its design responds to problems as they are encountered. The most significant policy problems are usually the ones that people didn’t anticipate in the design phase – or, if anticipated, were ignored because they didn’t seem important at the time.

Lessig’s claim that the Internet was designed to facilitate competition and innovation needs to be strongly qualified. Amateur historians always fall prey to the fallacy of post hoc intentionality: because a design resulted in x, they surmise, the intent of its original designers must have been x. But that is rarely true. In this case, the computer scientists who developed the end-to-end principle were trying to come up with a network design that provided the most flexibility for diverse applications. But you don’t get the massive innovation of the 1990s from TCP/IP alone; you also had to have capital markets, telecommunications competition and liberalization, ISP liability limitations, basic civil liberties, and a willingness of government authorities to refrain from licensing and restricting the entry of internet service and content providers. The “design” of the Internet protocol does not make Burma into an innovation engine or a free society.

The simple fact is that privacy – like other policy problems – is discovered and dealt with as socio-technical systems grow and evolve. The extent to which we regulate, legislate, reject in the market, reconfigure or redesign things to solve privacy problems depends on how strongly people value privacy, how well they mobilize politically and economically, and whether they have the legal and regulatory tools to intervene at strategic points in the value chain. There is no magic shortcut that inserts a social objective into the “genes” of a technology at the outset and spares us all that laborious work. If privacy advocates think otherwise, someone needs to tell them that their theory sounds suspiciously like the “intelligent design” theology of certain conservative Christian groups. Let this technological Darwinist remind them that science is based on facts, not fairy tales.