Reports on the evolving digital political economy

Human Authorship and AI Images

A lot of bad policy about AI is being made because people are overstating its risks. These overstatements usually derive from overestimating the autonomous nature of the AI. Now we are getting some bad law for the same reason. Stephen Thaler applies to the U.S. Copyright Office to copyright a picture generated by his AI system. Thaler alleges that the image was generated “without human involvement.” The Copyright Office accepts this as fact and refuses to grant a copyright. Thaler sues the Copyright Office. Predictably, the Judge grants the Copyright Office’s motion for summary judgment, saying: “U.S. copyright law protects only works of human creation.”

The Judge treated this decision entirely as a matter of law. Procedurally correct, but substantively wrong. There are, in fact, factual issues here that have not been properly aired. Specifically, we question the plaintiff’s claim that the art was generated with no human involvement. Every AI-driven image generator we know of (e.g., Midjourney or DALL-E) requires humans to enter prompts to generate an image. Interesting, useful images coming out of these systems often involve multiple prompts as the human seeks an output more to their liking. Saying there is no human authorship in an AI-generated image is like saying there is no human authorship in a photograph. Yes, machines make the image, but humans tell them what and how to image. Since 1884, U.S. law has recognized photography as a form of original authorship. Other than the spurious claim of “no human involvement,” it’s hard to understand why AI image outputs are less humanly authored than photographs. In honor of Stephen Thaler’s legal blunder, however, we have appropriated the image he tried to copyright as our illustration for this week’s Narrative. So sue us.

Sauron called – He wants his Orb back!

If you haven’t heard of the controversial orbs popping up around the globe and scanning people’s eyeballs, that would be the work of the Worldcoin Foundation, based in the Cayman Islands. Together with World Assets Ltd., a British Virgin Islands limited company, it governs the World ID Protocol. The rapidly expanding subsidiary organization doing the work behind this technology is the cleverly named Tools for Humanity.
The Worldcoin Foundation has two primary products: the Worldcoin token itself and the World ID. The two are interlinked. World ID is an identity protocol that allows a user to prove they are uniquely human and (optionally) perform a specific action only once. The aim is to extend this “proof of personhood” (PoP) service for companies and governments to leverage through a software development kit. A unique ID is generated for each user only once, using a unique public/private key pair whose integrity derives from privacy-preserving zero-knowledge proof schemes. The “coin” part of “Worldcoin” appears for now to be only an incentive to encourage adoption by issuing tokens to early adopters; more on that later.
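The “perform an action only once” property rests on what Semaphore-style schemes call a nullifier: a one-way tag derived from the user’s secret and the action, which a verifier can deduplicate without learning who acted. Here is a deliberately simplified Python sketch of that idea; the names and the plain hashing are our own illustration, and the actual protocol wraps this in a zero-knowledge proof so the verifier never sees the secret at all:

```python
import hashlib

def nullifier(identity_secret: bytes, action_id: bytes) -> str:
    """One-way tag binding a hidden identity to a single action."""
    return hashlib.sha256(identity_secret + b"|" + action_id).hexdigest()

seen: set[str] = set()  # tags the verifier has already accepted

def claim_action(identity_secret: bytes, action_id: bytes) -> bool:
    """Allow each identity to perform a given action at most once.
    In the real protocol, a zero-knowledge proof shows the secret belongs
    to a registered identity without revealing which one."""
    tag = nullifier(identity_secret, action_id)
    if tag in seen:
        return False  # this identity already performed this action
    seen.add(tag)
    return True

alice = b"alice-secret-key-material"          # hypothetical user secret
print(claim_action(alice, b"airdrop-2023"))   # True: first claim accepted
print(claim_action(alice, b"airdrop-2023"))   # False: replay rejected
print(claim_action(alice, b"vote-round-1"))   # True: a different action
```

Because the tag is deterministic per secret-action pair, a second claim collides; because it is one-way, the verifier learns nothing about the secret itself.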

According to the Worldcoin whitepaper, iris scans are processed locally in the orb hardware to generate a unique “iris code” that “numerically represent[s] the texture of an iris.” The user can then choose to store or delete their biometric data. In the latter case, only the hash remains, and the user keeps their private key in the app.
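To make the “process locally, keep only a hash” flow concrete, here is a deliberately simplified Python sketch. The stand-in feature extractor and field names are our own; the orb derives the iris code with a learned imaging pipeline, not a generic hash:

```python
import hashlib

def extract_iris_code(raw_iris_image: bytes) -> bytes:
    """Stand-in for the orb's feature extraction: reduce the raw scan to
    a fixed-length code describing iris texture. (The real system uses a
    learned neural pipeline, not a generic hash.)"""
    return hashlib.blake2b(raw_iris_image, digest_size=64).digest()

def enroll(raw_iris_image: bytes, opt_in_custody: bool = False) -> dict:
    """Derive the iris code on-device; keep only a hash unless the user
    opts into a data custody arrangement."""
    iris_code = extract_iris_code(raw_iris_image)
    record = {"iris_hash": hashlib.sha256(iris_code).hexdigest()}
    if opt_in_custody:
        record["iris_code"] = iris_code.hex()  # stored only with consent
    # the raw image is discarded either way once the code is derived
    return record

print(enroll(b"\x00" * 1024))  # default path: only the hash survives
```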

The foundation claims it doesn’t collect any personal information (name, email, phone number, etc.), though other third parties leveraging the World ID might. The foundation does collect facial and iris scans, which are proactively deleted unless the user opts into a data custody arrangement. All of this is done in the name of proof of personhood: checking that you are a human and the individual you claim to be.

A potentially significant flaw in this type of single-party trusted system is that the foundation must be trusted to actually delete the sensitive parameter, the scans in this case. Failure to do so compromises the protocol. This trust is not easy to bootstrap in an open system, and yet as of today around 2.2 million people have signed up for a World ID. We know that it’s 2.2 million people and not random bots because, well, that’s what World ID is for.

The Worldcoin Foundation doesn’t want us to take its word for it. It commissioned two security audits to examine the soundness of its cryptographic constructs and how they’re implemented. Nothing stands out in particular about the auditing firms, one based in Germany and founded in 2011 called Least Authority, the other based in the UK and founded in 2017 called Nethermind; their claim to fame is now the examination of Worldcoin. Nethermind’s audit shows that some “critical” vulnerabilities were addressed, while one issue pertaining to ActionIDs (a unique proof of action for applications leveraging World ID) remains unresolved. Least Authority focused on the elliptic curves, finding insufficient security in the current implementation. Better implementations, with higher computational hardness to break the cryptographic scheme, would come at a cost to system performance.

While the foundation was transparent in acknowledging the significant security issues, it’s surprising that a project this sensitive from a privacy standpoint is able to launch a minimum viable product and fix critical security issues as it evolves. One could claim that the only way to detect software flaws in a complicated socio-technical system is by using it in the real world. Then again, recruiting low-income people in Kenya and other places, for whom a monetary incentive of $30-$60 goes a long way, is a bad look. We don’t know whether, broadly speaking, people are signing up to get their eyes scanned because they buy into the threat of AI, are in it for speculative gains, or just want the monetary rewards. El Salvador tried that approach as a way to bootstrap its Bitcoin standard and the associated “Chivo” wallet, and it has been failing miserably. It seems the lesson wasn’t learned.

There’s nothing wrong with signing up for incentives or speculative gains, as long as the conditions are well understood, but we doubt that is the case given the implementation strategy, or the fact that the Worldcoin Foundation doesn’t consider its token a security. Despite that, the Worldcoin General Purchasing Terms and the Tools for Humanity end user license agreement set out to resolve disputes by arbitration rather than in the courtroom, a move common in the securities industry when it comes to broker-dealer relationships.

While the technical protocol has gone through security audits, the orb itself presents potential security risks. We don’t know of any backdoors in the hardware component yet, but the probability of an attacker installing one is not zero. There have already been reports of orb operators’ credentials being leaked onto the black market, which suggests a potential for gaming the system. This is why the centers and orbs are limited in number.

All right, the Sauron reference may be a bit much. After all, it’s not the same, right? Frodo prevailed in the end, and we don’t know how this will turn out. The foundation and its advocates are pitching a “privacy-preserving solution” to “AI-jobs displacement” by providing “universal basic income.” This ambiguous marketing narrative is a contrived attempt to appeal to the broadest possible polity, with weariness of Big Tech as the common denominator. Sam Altman told Coindesk, “the hope is that as people want to buy this token, because they believe this is the future, there will be inflows into this economy. New token-buyers is how it gets paid for, effectively.” If you’re thinking that sounds a lot like a Ponzi scheme, you may be right, but it’s too early to tell.

We suspect that the lack of a well-specified problem-solution statement, and of engineering around one, will yield unexpected results in the real world. Are we solving for AI-generated disinformation, age verification, or universal basic income? It can’t be all of the above. These are thorny problems with different requirements, which we will explore in more depth in a forthcoming blog post. World ID is an impressive and promising technical solution in search of a specific problem that isn’t well-defined yet.

The Camp David Statement

The White House issued the Spirit of Camp David statement following the August 18th trilateral summit between the U.S., South Korea, and Japan. The statement reflects an agreement by the three countries to work together to facilitate decoupling from China in high-technology areas like semiconductors, batteries, technology security, clean energy, biotechnology, critical minerals, pharmaceuticals, artificial intelligence (AI), quantum computing, and scientific research. It is also supposed to divert supply chains away from China toward developing ASEAN countries. The trilateral collaboration is intended to lead to a greater distribution of semiconductor import sources away from Taiwan and across various ASEAN nations. As a follow-up, the United States, South Korea, and Japan agreed to explore ways to provide new World Bank Group concessional resources and support for the poorest countries in ASEAN.

Additionally, the U.S., South Korea, and Japan have agreed to enhance trilateral cooperation on export controls to prevent cutting-edge technologies from being misused for military or dual-use capabilities. In line with these technology protection efforts, the countries will enhance joint scientific and technological innovation. This involves establishing new trilateral collaborations through National Labs, expanding joint research and development, and facilitating personnel exchanges, particularly in STEM sectors.

These actions are intended to counter China’s potential use of its economic power as leverage over ASEAN nations. Enhancing safeguards for cutting-edge technologies and using National Labs can also be interpreted as part of the United States’ endeavors to manage China’s increasing technological impact on South Korea and Japan. Although the statement outlines general goals and agreements, the finer details of these commitments will be institutionalized during the upcoming meeting in October.

Will the isolation of China from the global supply-chain economy, and the reinforcement of export controls on specific technologies through these pledges, genuinely contribute to our national security? We might gain a more comprehensive understanding of these nations’ specific commitments in October, and at our annual IGP conference in November.

Openness Doesn’t Necessarily Work Against Bigness

A recent paper by David Widder from CMU, Meredith Whittaker from Signal, and Sarah West of AI Now, “Open (For Business): Big Tech, Concentrated Power, and the Political Economy of Open AI” finds that,

a handful of maximally open AI systems exist, which offer intentional and extensive transparency, reusability, and extensibility – [but] the resources needed to build AI from scratch, and to deploy large AI systems at scale, remain ‘closed’—available only to those with significant (almost always corporate) resources.

Those aren’t actually “resources” in the economic sense; rather, they are possible qualities of AI systems. The resources that are inputs to current generative AI systems are data, compute, and algorithms. Since the emergence of the transformer era, those resources have become incredibly important to producing value. So it’s logical that contention over those resources, and the creation of exclusivity around them, is now at the center of many AI debates, from art to semiconductors and software. What’s important to pay attention to are the parties involved and what they are advocating. In the authors’ opinion, current “AI systems don’t operate like traditional software – they require distinct development processes and rely on specialized and costly resources currently pooled in the hands of a few large tech companies.” Perhaps, but these firms are investing and risking millions on the resources needed to possibly create value. Likewise, those same firms rely on openness to help produce value too; some data used to train models, for example, is co-produced. Proprietary and open approaches co-exist, each creating value, just in different ways. Over recent months, advances in open models and fine-tuning techniques have significantly reduced the cost and simplified the process of experimenting with LLMs on domain-specific data sets.
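As a rough illustration of how cheap that experimentation has become, here is a minimal parameter-efficient fine-tuning setup using the open Hugging Face transformers and peft libraries. The model choice and hyperparameters are illustrative, not a recipe:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

# Load a small open model; any open causal LM would do.
base = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# LoRA trains small low-rank adapter matrices instead of the full
# weights, so fine-tuning fits on a single consumer GPU.
lora = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                        # adapter rank
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # typically well under 1% of weights
```

With only the adapters trainable, fine-tuning on a domain-specific data set is a commodity task rather than a corporate-scale one, which is exactly the kind of openness that co-exists with proprietary frontier models.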

In short, the paper does give the reader an idea of how the political economy perspective provides insight into AI and, more generally, digital issues, something IGP has advocated for some time. Its critical examination of the notion of “open” or “open source” AI touches on issues of property rights without saying so (because info-communists hate those, yuck!) and on how property rights relate to openness, exclusivity, and the production of value.

Amazon Takes Up The Digital Sovereignty Agenda 

Last year, Amazon Web Services (AWS) announced the Digital Sovereignty Pledge, a commitment to provide a spectrum of controls and functionalities within the cloud environment that allow customers to control the location and movement of their data. AWS has launched services like AWS Regions and AWS Local Zones that enable customers to deploy their data in a region of their choice to comply with data residency regulations. In line with its effort to transform itself into a “sovereign-by-design” platform, last week AWS launched a new cloud service called Dedicated Local Zones, a type of exclusive on-premises infrastructure that is fully managed by local AWS personnel and includes features such as “data access monitoring and audit programs, controls to limit infrastructure access to customer-selected AWS accounts, and options to enforce security clearance or other criteria on local AWS operating personnel.”
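For a sense of what the basic residency controls look like in practice, here is a minimal sketch using AWS’s Python SDK to pin storage to a single Region. The bucket name and Region are hypothetical; Dedicated Local Zones layer contractual and personnel controls on top of this kind of placement:

```python
import boto3

REGION = "eu-central-1"  # e.g., to satisfy a German data-residency rule

# Pin object storage to a single AWS Region; objects written to this
# bucket are stored at rest in that Region only (absent any replication
# the customer configures themselves).
s3 = boto3.client("s3", region_name=REGION)
s3.create_bucket(
    Bucket="example-resident-data-bucket",  # hypothetical bucket name
    CreateBucketConfiguration={"LocationConstraint": REGION},
)
```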

It is easy to see why AWS is rolling out these infrastructural controls. Data sovereignty regulations vary significantly from one country to another and may necessitate the establishment of local data centers and infrastructure. This environment has resulted in fragmented data storage and management practices and a complex web of requirements to navigate, making it challenging for global businesses to maintain a cohesive data strategy. Disagreements over data sharing and storage might also escalate political tensions between countries.

There are two confused notions of sovereignty here. One just means that the owner of data controls it. The other is the 500-year-old political notion of sovereignty, which refers to the exclusivity and supremacy of state power in a territory. While data sovereignty aims to protect a country’s digital interests, forcing data to be stored within national borders introduces a range of challenges, including economic burdens, potential hindrance to innovation, and exposure of data to threats. For cloud-based services, which often rely on distributed data storage, striking a balance between safeguarding data and facilitating global data flows is a complex task that requires careful consideration and adaptation to the evolving digital landscape. At the same time, the policies AWS is pursuing in response to digital sovereignty demands could end up providing governments with greater control over citizens’ data, potentially infringing on privacy rights and enabling surveillance.

1 thought on “The Narrative: September 1, 2023”

  1. You are right to complain that the “AI risk” is overhyped as to the “I” part of the problem. I, for one, will not be welcoming our new robot overlords, because they are going to remain firmly in the world of fiction. But the “risks” of AI lie not in the intelligence, but in the complete lack of discrimination in applying AI outcomes in the real world. This is not a new risk: we saw expert systems in the late 1980s enshrine systemic bias against women applicants to medical school, and we have seen “robodebt” formally applied to social security recipients in Australia, which wound up in a massive nationwide class action.

    The “A” part of “AI” probably means Awful. Awfully misunderstood, Awfully misapplied, Awfully convenient to enshrine outcomes you want to get distance from by claiming “the machine did it.”
