1. Introduction
Computers are doing to communication
What fences did to pastures
And cars did to streets.
Ivan Illich, Silence is a Commons (1983)Footnote 1

The first principle of civilization ought to have been, and ought still to be, that the condition of every person born into the world, after a state of civilization commences, ought not to be worse than if he had been born before that period.

Thomas Paine, Agrarian Justice (1797)Footnote 2

Products based on large language models (LLMs) – Artificial Intelligence (AI) products in this Article – are today embroiled in legal battles over intellectual property. In December 2023, the New York Times filed a lawsuit against OpenAI and Microsoft. It alleged that its own copyrighted material had been abused in the training of ChatGPT, and that the generative AI business model practised by these companies was based on mass copyright infringement.Footnote 3 Several visual artists have filed lawsuits against StabilityAI, Midjourney and DeviantArt, alleging that their AI services copy copyrighted images to create art in these artists’ style without their permission.Footnote 4 A similar lawsuit alleges that GitHub Copilot, Microsoft’s generative AI-enabled programming tool, reproduces open source code in contravention of that code’s licensing terms.Footnote 5 Some websites that have been mined for AI training data, including Reddit and Twitter, have restricted such data mining through technological means. In the EU, Article 53(1)(c) of Regulation 2024/1689 (the AI Act) requires providers of general-purpose models to put in place policies to comply with EU copyright law.Footnote 6
AI products work by ‘learning’ from vast troves of training data. There is considerable debate about whether, and the extent to which, LLMs can ‘reason’, debates that are unlikely to be resolved without agreement on precisely what it means to reason.
The economic facts and implications are clearer. LLM-based chatbots and other products retrieve data from third parties, including explicitly copyrighted work. When LLMs are queried, they return information based on this data. Sometimes, the information returned is identical to a work in the training data. Chatbots are generally unable to cite their sources appropriately. Even when a chatbot returns summarised or otherwise modified information, it allows end users to bypass the original works, or the platforms hosting them. One can argue that this causes a loss of revenue for the producers and distributors of such work.
There are strong indications that the utility of current AI technology has been overestimated. Some investment firms have recently assessed that AI cannot recoup investments already made, and that AI ‘performs’ functions at a cost that is several times higher than that of humans.Footnote 7 Other researchers and industry participants have been sceptical of claims about the capability of LLMs for a long time, especially because of the models’ lack of accuracy and reliability.Footnote 8
This is in sharp contrast to AI companies’ own assessments and proclamations. OpenAI’s Sam Altman wrote in a 2021 piece, ‘In the next five years, computer programs that can think will read legal documents and give medical advice. In the next decade, they will do assembly-line work and maybe even become companions. And in the decades after that, they will do almost everything, including making new scientific discoveries that will expand our concept of “everything”.’Footnote 9 Prior to release or deployment, many new LLMs are tested for ‘power-seeking’ behaviour and the ability to act on long-term plans.Footnote 10 Work from OpenAI and other researchers also claims that LLMs can affect 10 per cent of the tasks of 80 per cent of all US workers, and at least 50 per cent of the tasks of 19 per cent of all US workers.Footnote 11 They also claim that higher-income jobs potentially face greater replacement, and that the jobs most affected relate to intellectual rather than physical labour. A McKinsey report has claimed that half of today’s work activities could be automated between 2030 and 2060, with the highest and quickest impact being felt in customer operations, marketing and sales, software engineering, and research and development.Footnote 12
The aim of this Article is not to test capability claims, but rather to take them at face value and follow them to their logical conclusion in terms of policymaking. I believe this is a productive endeavour because capability claims are being taken seriously by policymakers in various countries, especially the US and the UK.Footnote 13 The EU is less encouraging of these claims, but it has acknowledged the usefulness of a report drafted at the UK AI Safety Summit in 2023 that takes an expansive view of AI capabilities.Footnote 14 On employment, the report states, ‘Unlike previous waves of automation, general-purpose AI has the potential to automate a very broad range of tasks, which could have a significant effect on the labour market’.Footnote 15
It is possible that capability claims are inflated to deter competition in AI markets, and to establish regulation that is friendly to current market leaders as well as to the surveillance and security interests of nation-states. Demonstrating that serious capabilities imply radical economic policies can reduce the incentive of market players and governments to inflate capabilities, if indeed they are doing so. In this situation, such clarifications can lead to more appropriate policy proposals tied to actual capabilities.
In addition, while AI technology may not approach true human intellectual capabilities, it can be and is still being used to displace labour. Evidence shows that despite performing worse at tasks than humans, AI technology is already displacing video game industry workers, artists, and others.Footnote 16 Thus, AI could replace labour at a mass scale without approaching human capabilities by degrading the quality of production. Degraded production implies a loss of consumption quality, but that is not the question I am concerned with here.
I argue that such potential and actual replacement constitutes an enclosure of economically relevant intellectual capacity. Further, this wanton enclosure shows that the AI industry benefits from legal defaults and vacuums, and utilises such defaults and vacuums to appropriate economic value. If legal defaults were different, the AI industry would not be as valuable as it is today, its economic impacts might be less destructive, and technological development in the field of AI might take different directions.
More specifically, I show that Thomas Paine’s arguments for compensation for the enclosure of land also apply to AI. Building on Paine, I argue that a data tax feeding into a universal basic income can be defended as a right based on the economics of the AI industry today.
I do not wish to show that Paine’s analysis of land enclosures and his solution to them were ideal. I instead propose that, among a bouquet of other solutions, Paine’s theory should be seriously explored as leading to a potential solution to AI enclosures. Perhaps this wave of AI does not take us close to overall automation; but if and when such technology does arrive, Paine’s work shows us that another world is possible. To build such a world we must imagine legal frameworks beyond the AI Act’s risk-based approach and beyond copyright; a bold system of taxing data is but a beginning.
Designing such a tax for data is no small task, not least because data is mobile and is produced socially. At the European level, the mobility of data intersects with multiple governance structures across jurisdictions. Creative policy solutions are needed, particularly those that exploit the materiality of data infrastructure – the entity that is taxed might be a data centre rather than abstract data flows. Additionally, taxing data may not be the most important AI policy lever available today. This Article only tries to provide a philosophical foundation for taxing data that derives from the role of AI in enclosing a type of intellectual commons.
2. The enclosure of economically relevant intellectual capacity
LLMs are a new technology. The way they use data has no exact parallel in earlier technology and, consequently, none in law. Their economic impact, too, has no exact parallel. In fact, we have seen in the previous section that if we take the claims of AI firms seriously, the economic impact of LLMs is likely to be pervasive and transformational.
The intellectual functions automatable by LLMs lie on a spectrum of economic usefulness. If capability claims are true, intellectual functions will be increasingly automated as AI technology improves. There is a good case to be made that AI automation reveals that most functions are intellectual functions.Footnote 17 It is also likely that this automation will not lead to commensurate job gains in other fields. Indeed, it is difficult to point to any significant new jobs created by AI.Footnote 18 AI companies themselves employ relatively few people. The workers who annotate data to train models are paid shockingly low wages, are relatively few in number, and will not be required in perpetuity.Footnote 19 And under market competition, no employer has the incentive to reduce work hours while keeping wages constant, let alone increasing them. The computational infrastructure industry is growing under AI, but its own jobs could be automatable, and, being highly concentrated, the industry is unlikely to present significant employment opportunities to a global ‘intellectual’ workforce that might be on its way to obsolescence.Footnote 20
Thus, AI may bring about a world in which most intellectual functions are automated and humans, requiring higher returns for their labour than LLMs do to run,Footnote 21 are shut out of the ability to sell intellectual labour-power entirely. In other words, the humans rendered unemployed by such automation will not have sufficient alternative opportunities for employment.
I propose that through automating intellectual tasks, AI is enclosing the commons of the ability to use our intellect and derive economic value from it. Let us call this ability economically relevant intellectual capacity.
Elinor Ostrom provides the most well-known description of the commons: it is a resource the consumption of which reduces its availability for others, and from which in principle some people can be excluded.Footnote 22 This description covers both the characteristics of the good and its management, including legal rights. It will be more productive for us to adopt an expansive definition of the commons so that we can examine legal rights and management separately. A more general form is provided by Agrawal et al (2023), who in their review of commons research define the commons as ‘resources used or governed by groups of heterogeneous users through agreed-upon institutional arrangements’.Footnote 23
I work with this general form and qualify it as follows, through two central claims:
First, economically relevant intellect is socially produced and governed. All intelligence is social and socially produced, but intelligence as a resource is not necessarily socially governed. Economically relevant intellect, however, is by definition governed socially, through the economy. This notion of intellect as socially produced and/or governed is developed elegantly as the concept of ‘general intellect’ in Marx, and explored in different ways in the fields of distributed cognition and cultural–historical psychology.Footnote 24
Second, an interest in enclosure is an interest in potential. We are interested in the potential of intellect rather than intellect itself. This is why the commons being enclosed is that of economically relevant intellectual capacity. Hess (2008) shows that one characteristic of a commons is its vulnerability to enclosure.Footnote 25 Economically relevant intellect has not been vulnerable to enclosure until now, and so it is through the very appearance of AI under capitalism that it becomes a commons.
The imperfect description I have arrived at has parallels with land commons. The enclosure of land commons represented the sundering of the land from the labourer; the enclosure of economically relevant intellectual capacity represents the sundering of intellectual ability from the worker. In an environment of general automation, such a worker no longer has even his labour-power to sell. The process is one of pauperisation rather than proletarianisation.
Indicative evidence for such enclosure can be seen in the literature on deskilling perpetuated by AI systems. Field workers, such as agricultural extension workers, community healthcare workers, and schoolteachers, are often recruited to create datasets for training AI models. These field workers are domain experts, but their reduction to data collectors for AI deskills them.Footnote 26 Scholars have warned about the consequences of deskilling through AI in the medical and educational fields.Footnote 27 The analysis in this Article pertains to deskilling, but in a context of general, privatised and centralised automation.
3. Vacuums and defaults
The development of new technology often creates a legal vacuum, especially if the technology is significant enough to modify economic or social relations. If society is unable to collectively deliberate and decide upon the appropriate legal framework for a new technology, its legal status emerges through practice into a kind of default. Such defaults are not necessarily the most efficient way of incorporating new technology into society, but they can be persistent or ‘sticky’. A default is simply the legal situation that ends up prevailing through a combination of market forces, lobbying efforts, and other circumstances. Such defaults, regardless of their merit, are upheld through state sanction, including with the use of violence.
The field of digital constitutionalism holds that the legal vacuums caused by digitalisation are all-encompassing, to the extent that they require new constitutive rules. A version of ‘legal defaults’, proposed by Katharina Pistor, is that the law grants assets priority, durability, convertibility, and universality, rather than these qualities being inherent to assets which the law then recognises.Footnote 28 A comprehensive analysis is also found in Marx’s description of England’s Factory Acts, which he shows arose from a mix of economic and political factors, rather than as the product of a neutral state.Footnote 29 The approach is one of historical materialism: the law does not necessarily represent the outcome of a rational and egalitarian Parliamentary process, but is developed through historical processes, incorporating unevenly as a result the interests of different groups and the ideals espoused by them. Economic power and class antagonisms are primary movers in the creation of the law, and the state cannot be situated outside of the mode of production as a disinterested actor.Footnote 30 The same approach is reformulated with variations in the field of legal political economy, which includes the works of Pistor.
Digital platforms – for commerce, apps, or social media – have entered a comfortable legal default that allows them to extract value from other actors in the economy. Antitrust action is one of the few challenges to this legal default. AI enclosures are similarly made possible through legal vacuums. If the legal vacuum is allowed to persist, it will morph into a legal default. The copyright cases mentioned earlier challenge a particular legal vacuum, as does new AI legislation like the EU’s AI Act. However, these only chip away at the larger process of AI enclosure. AI firms might be able to continue the process of enclosure by compensating copyright holders and complying with risk-based regulations. For instance, OpenAI has already signed content licensing agreements with the Associated Press and Shutterstock, and it is offering some news publications a mere USD one million to license their content.Footnote 31 These efforts may nevertheless lead to a legal default that allows the enclosure of economically relevant intellectual capacity. This is why a more expansive perspective is needed to tackle the economic impact of AI.
These facts further underscore the need for thinking in constitutional terms. Fundamental questions of economic governance can be answered through what we might call economic digital constitutionalism.Footnote 32 Thinking about such fundamental questions means questioning the legal defaults in which the generative AI industry conveniently finds itself.
We should be cognisant that any new constitutionalism maps on to existing constitutions, with fundamental differences in construction. These ‘analog’ constitutions reflect the priorities of the lawmakers or interest groups of their respective countries, and transposing these priorities to a digital constitution necessarily implies some upheaval. The analog characteristics that imagined digital constitutions retain will depend on the process that leads to the actual creation of such a constitution.
4. Questioning defaults: the case of land enclosures
Land enclosures are a case of a legal default being made permanent because it was allied with the interests of certain economic groups. Between 1793 and 1815, 8.9 per cent of all land in England was enclosed, that is, fenced in and converted into private property.Footnote 33 Pre-enclosure ‘commons’ can be understood as both a place and a set of access rights.Footnote 34 More specifically, much of what was enclosed was an open field system, where every family had rights over a piece of land in addition to rights over an ecosystem that was managed by the community, including grazing and other activities.Footnote 35 Some of these rights even differed based on the time of year.Footnote 36
Enclosures, along with other factors like war and hostile new laws, led to the violent creation of the English working class and played a part in colonisation worldwide.Footnote 37 The enclosure movement was brutal in its enforcement, helped along by ‘fair rules of property and law laid down by a Parliament of property-owners and lawyers’.Footnote 38
The ideology that drove enclosure rested on two broad claims: one, that enclosure would lead to more productivity and higher rent, and two, that the commons sustained a system of general indiscipline and barbarity.Footnote 39 Active management in the open field system avoided the ‘tragedy of the commons’, and led to sustained agricultural innovation, albeit at a slower pace than in the years after enclosure.Footnote 40
Forced enclosures worsened conditions for the poor. They also created abundant factory labour, which created the political conditions for allowing cheap imports of grain.Footnote 41 This then led to the sharp decline of England’s agricultural economy and a fall in the standard of living of landless labourers. Enclosures did lead to some productivity gains, but these were not distributed widely and occurred at the cost of newly landless labourers. An analysis of a large body of data on land values between 1500 and 1912 has found that the net efficiency gain from enclosure was at most 3 or 4 per cent.Footnote 42
We ought to be careful to not conflate the existence of a commons with perfect egalitarianism. While rural society suffered on average due to enclosure, the period before enclosure was also rife with abuse and exploitation.Footnote 43
We can now draw some parallels and point towards some differences in land and AI enclosures:
1. The enclosure of land in Europe led to some productivity gains. AI, per our assumptions, is likely to lead to many productivity gains.
2. Enclosures were the unabashed, violent, state-backed theft of even the legal rights of commoners over land. The widespread deployment of AI models represents the theft of people’s de facto rights over economically relevant intellectual capacity. AI models also steal data created by people in order to effect this enclosure.
3. Enclosures ended a system of mixed private-common rights. This mix, while different in its specificities, is similar to the rights we see over intellectual capabilities today – anyone can use intellectual capability, but its fruits are private, and the teaching of many skills is privatised.
4. Enclosures led to widespread poverty, resulting from dispossession and lack of compensation. AI models with serious capabilities are likely to dispossess intellectual labourers. AI companies do not provide compensation for the economically relevant intellectual capacity enclosed, or even the data taken.
5. Enclosures created a large pool of factory workers. AI enclosures do not seem to have any corollary to industrial jobs, likely increasing overall dispossession and poverty in the absence of intervention.
5. Questioning AI enclosures
There were some radical contemporaneous attempts to prevent enclosure in England. Commoners retaliated illegally when they assessed they had enough power to do so, and accepted enclosure when their opponents were too strong, given that Parliament was uninterested in legal appeals.Footnote 44 Resistance was both violent and non-violent, including petitions, protests, the burning of property, and the destruction of fences and crops. Some customary rights were recovered and the process of enclosure was slowed in some instances.Footnote 45 Ultimately, the backing of the state ensured that enclosures went ahead, and that the enclosure movement ended only when the middle classes were disturbed and ‘land improvement’ ceased to be an economic priority for the state.Footnote 46
In the realm of AI, one significant attempt to prevent enclosure has been the strike by the Writers’ Guild of America (WGA). A summary of the terms for the 2023 Minimum Basic Agreement (MBA) states, ‘AI can’t write or rewrite literary material, and AI-generated material will not be considered source material under the MBA, meaning that AI-generated material can’t be used to undermine a writer’s credit or separated rights,’ and ‘A writer can choose to use AI when performing writing services, if the company consents and provided that the writer follows applicable company policies, but the company can’t require the writer to use AI software (eg, ChatGPT) when performing writing services.’Footnote 47 Hollywood’s Screen Actors’ Guild (SAG-AFTRA) has made similar demands. While the WGA’s win is significant, news has already emerged that Meta recruited striking actors to train emotion recognition software. The contract for this work proscribed the use of the actors’ likenesses for commercial purposes, but it also involved actors signing away some broad rights in perpetuity.Footnote 48 It is also easy to argue that training an AI model is not a commercial purpose in itself, but the model can still be used for downstream commercial purposes.
The example above demonstrates that once technological capabilities are released, it is near impossible to turn back the clock and undo their effects. It would also be unwise to argue that significant productivity gains from new technology must not be realised at all.
Another way to respond to efforts to enclose the commons is to redouble efforts to closely manage the commons and incorporate new technology, such that productivity gains are realised even without enclosure. Various projects of land collectivisation and nationalisation fall under this category. An interesting example is that of India’s Scheduled Tribes and Other Traditional Forest Dwellers (Recognition of Rights) Act, 2006, which was a result of a large-scale movement by India’s forest dwellers.Footnote 49 The Act recognises common rights of forest dwellers over minor produce, grazing, fishing and access to water bodies. It contains stronger restrictions on land acquisition and excludes hunting rights.
While it is not impossible that we might build the global infrastructure necessary to manage AI commons, such infrastructure is not close to being built today. It might be extraordinarily difficult to build, and might take a long time, necessitating alternative approaches or at least stop-gap responses to enclosure.
One relatively under-explored response is that of Thomas Paine in Agrarian Justice.Footnote 50 Paine’s argument is republican, and not in a socialist tradition; we might call it a (radical) libertarian or even a liberal argument, since it does not seek to socialise land or even to return to a state before private property, and is principle-based.Footnote 51 Agrarian Justice played a central role in crystallising the opinions and actions of a radical part of the newly dispossessed peasants and workers.Footnote 52
The argument Paine lays out in Agrarian Justice is as follows: agriculture and manufacturing, otherwise called ‘civilisation’, create progress but also create poverty where previously there was none. They also increase inequality, leaving some people worse off than they were before. It is not possible to undo agriculture and manufacturing; naturally, we should ensure that these processes at least do not leave anyone worse off than they were before. Land is in the first instance the common property of all humanity, such that every human who is born has property rights over land. Cultivation increases productivity and also brings forth the idea of landed property. Because cultivation cannot be separated from the land, this idea bundles the right to the cultivated produce with the right over the land itself. But since it is all humanity that has a right over all land, every owner of cultivated land must pay humanity rent. This ground rent separates the rights over cultivated produce and land. The right over cultivated produce still belongs to those who have laboured over the land or those who have inherited or bought the produce. But ground rent is owed because a common resource has been commandeered, or rented, for private gain.
After laying out the argument, Paine proposes that this ground rent be paid into a National Fund, in the form of a kind of one-time universal basic income for all people over the age of 21, and an annual payment for all people over the age of 50. He proposes to fund this through a property tax, payable at the time of inheritance, set at 10 per cent of the value of the property.
Paine saw the scheme as a means to prevent, rather than alleviate, poverty; and as one based on justice rather than charity. He also saw it as a system that preserved private property and distributed the benefits of growth to every person.
It is easy to see how such an argument might be broadly applicable to AI. Economically relevant intellectual capacity is the common property of all humanity; AI increases productivity, but privatises intellectual capacity along with its product. It is not possible to put the AI genie back in the bottle once it has been released; and therefore, humanity is owed rent for the unilateral privatisation of its economically relevant intellectual capacity. This leads us to a proposal for a tax, tied to some notion of the value of AI, that would finance a universal basic income. Indeed, Agrarian Justice has been previously used as one of many justifications for a UBI.Footnote 53
6. What is to be done?
A ‘data rent’ modelled on Paine’s ground rent seems a natural corollary. Today’s AI systems rely on talent (algorithms), data, and computational power. Talent is provided by a few people and is comparable to the cultivation of land. The production of computational power relies on the labour of many more people in a concentrated supply chain. Data is nevertheless a better candidate for such a tax than computational power, because:
1. There is no strong direct link between all people, their rights and contributions, and computational power. The contributors of data are far more numerous and widespread than the contributors of computational power;
2. Since the supply chain for computational power is so concentrated and skewed in favour of a few countries and firms, there is less of a case to be made for equal returns for all people;
3. There is already a growing tradition of policy, practice, and scholarship on community and national data rights, and even on data value paying into a UBI;Footnote 54
4. This tax might function as a version of a Tobin tax, disincentivising the overuse or toxic use of data.
There is a case to be made that data is a good proxy for the equal economically relevant intellectual capacities of all people, even though the two are not the same concept:
1. It is practically impossible to separate out and determine the exact value of the data generated through the behaviour of any one person;
2. A data point is most economically valuable when seen together with other data points; and
3. Given similar computational power, AI models generally work better when they have more, and more diverse, data.Footnote 55
Data here is used merely as a proxy for taxation purposes because it has the characteristics described above. Data does not map onto economically relevant intellectual capacity exactly. If new AI training methods are developed that do not rely on large amounts of data, in particular, data might cease to function as a good proxy for a tax. So far, however, AI models appear set to rely on human-generated data – when models train on model-generated data, for instance, the quality of their outputs suffers.Footnote 56
The taxation of data is an endeavour riddled with conceptual and administrative problems. It is difficult to determine the ownership of data, the value of data, the source of this value, and the point at which to tax it. Many of these problems arise because data tax is treated as a proxy for an income tax.Footnote 57
Many of the specifics of such a scheme ought to be democratically determined, but we can already see that a tax on the amount of data used by a model could pay into a fund earmarked only for basic income payments. This proposal for a data rent is not a tax on income, capital, consumption or even property. It is more akin to a royalty than to a tax, because it is a concession on the extraction of (often) a public resource.Footnote 58 The administration of a data rent would likely be easier than that of many other data tax proposals because there is a logical, specific volume and point of taxation in the case of AI models: the training stage of foundation models, and potentially even the fine-tuning stage. For LLMs, it is possible to determine roughly how much data was used to train a model by examining how much computational power was used to do so. As described by Marian (2022), data rent can be operationalised as a direct, per-gigabyte tax on data used for training.
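The compute-to-data estimation can be sketched with a widely used rule of thumb for transformer training, under which total training compute is roughly six times the number of parameters multiplied by the number of training tokens. Everything else below – the bytes-per-token conversion, the tax rate, and the model figures – is a hypothetical assumption for illustration, not part of any actual proposal:

```python
# Illustrative sketch of a per-gigabyte data rent assessed at training time.
# The "6 * parameters * tokens" FLOPs heuristic is a common rule of thumb
# for transformer training; the bytes-per-token figure and the per-gigabyte
# rate are hypothetical assumptions chosen purely for illustration.

def estimate_training_tokens(total_flops: float, n_params: float) -> float:
    """Back out the approximate number of training tokens from reported compute."""
    return total_flops / (6 * n_params)

def data_rent(total_flops: float, n_params: float,
              bytes_per_token: float = 4.0,       # rough average for English text
              rate_per_gb: float = 100.0) -> float:  # hypothetical rate (USD)
    """Estimate training data volume from compute and apply a per-GB rate."""
    tokens = estimate_training_tokens(total_flops, n_params)
    gigabytes = tokens * bytes_per_token / 1e9
    return gigabytes * rate_per_gb

# A hypothetical 70-billion-parameter model trained with ~1.4e24 FLOPs
# implies roughly 3.3 trillion tokens, i.e. on the order of 13,000 GB of text.
print(f"Estimated data rent: USD {data_rent(1.4e24, 70e9):,.0f}")
```

The point of the sketch is only that the tax base can be assessed from figures that AI developers already report or that can be estimated externally, without requiring access to the training corpus itself.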
A data rent framework also avoids the pitfalls of many ‘data as property’ approaches, which tend to create inequality by compensating people for the amount and value of data they happen to create. Such a tax is also not tied to the profitability of AI development entities, which can be unprofitable in the short term.
The question of how much to tax presents more difficulties. Paine suggests a 10 per cent tax on the value of property in a fairly arbitrary way, seemingly chosen to arrive at a viable basic income. In this sense, it might be useful to determine the tax rate by working backwards from a desired or optimal basic income. This approach will run into problems because a rate derived in this manner might have unintended consequences for the supply of AI. We should note that other UBI-like schemes seem to set their rates in a similarly arbitrary way, for instance the tax on the gross revenue of oil companies charged by Alaska and paid into the Alaska Permanent Fund, which pays out some income to all Alaskan residents.Footnote 59
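Working backwards from a desired basic income, as described above, reduces to a simple division; the population, income, and tax-base figures below are purely hypothetical:

```python
# Illustrative sketch: deriving a tax rate by working backwards from a
# desired basic income. All figures are hypothetical.

def required_revenue(population: int, annual_ubi: float) -> float:
    """Total annual revenue needed to pay every resident the basic income."""
    return population * annual_ubi

def implied_rate(population: int, annual_ubi: float, tax_base: float) -> float:
    """Rate on the tax base (e.g. the assessed value of taxable data)."""
    return required_revenue(population, annual_ubi) / tax_base

# Hypothetical example: 10 million residents, EUR 1,000 per person per year,
# levied on a taxable base assessed at EUR 200 billion.
print(f"Implied rate: {implied_rate(10_000_000, 1_000.0, 200e9):.1%}")
```

The arbitrariness the text describes is visible here: the rate is entirely determined by the chosen income level and the assessed base, neither of which is fixed by the scheme itself.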
The fact that the data rent is a kind of rent might provide a useful way to determine the tax rate. The word rent has specific meanings in economics, but in a colloquial sense it refers to the price of temporary access to a property. Paine’s ground rent is compensation for temporary access to commonly used land, and data rent is compensation for temporary access to data. In a market, land rent is arrived at through supply and demand dynamics, but this mechanism may not transfer coherently to data rent. It is helpful to think of rent more fundamentally: in theory, land rent should be at least equal to the income the landowner would have earned by working the land themselves, minus the opportunity cost of working the land. In the case of data rent, the opportunity cost is the value of free time, because we have assumed that AI leads to a situation of general automation. Determining the exact values of these parameters is a task for a separate exercise. It is unclear whether such a calculation can provide a sufficient universal basic income, which means that the method of calculation should also be determined politically.
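This fundamental reading of rent can be stated schematically. The symbols below are illustrative labels introduced for exposition, not established notation:

```latex
% Schematic lower bound on rent (illustrative notation):
%   R : rent for temporary access to the resource (land, or data)
%   Y : income the owner would have earned by working the resource themselves
%   c : opportunity cost of working it (under general automation, the value
%       of forgone free time)
R \geq Y - c
```

For data rent, the argument in the text amounts to estimating $Y$ and $c$ for data under general automation, with the caveat that the resulting $R$ may still fall short of a sufficient basic income.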
An entry to UBI through data rent casts UBI as a right rather than as charity, akin to a right to benefit from scientific progress.Footnote 60 It is important that the liberatory idea that, with general automation, all persons can enjoy wealth is not transformed into charity by those who control the means of this automation. It is also important to avoid framing most humans as useless or superfluous in such a world. General automation unilaterally displaces the human ability to earn a living; Paine’s theory allows us to see this displacement and simultaneously to conceptualise compensation for it.
7. Limitations arising through the global nature of enclosures
The primary limitation of this Article is that it leaves the international question unanswered. While the ability to use intellect is the province of every person, ‘economic relevance’ is assigned through deeply unequal global value chains. Even in the sector under consideration, all leading AI firms are US firms; pieceworkers who annotate data for AI models for a pittance live in the Global South; and data generation has an uneven geographic spread.
Land enclosures have been different in every part of the world, depending upon arrangements prior to enclosure, and the forces driving enclosure. However, land enclosures in England served as templates for colonialism.Footnote 61 Other scholars have shown that the same ‘civilisational’ impetus that drove colonialism also drove enclosures.Footnote 62 Enclosures emanated outwards from England, and we can similarly understand AI enclosures as emanating outwards from the USA, but it is not yet clear what form this emanation will take.
We should note here that Thomas Paine’s own conception of rights over land excluded Indigenous people in the Americas; Indigenous people were not part of the public, and their claims over the land did not comprise the public good in Paine’s formulation.Footnote 63 Paine’s views on the Indigenous people of the Americas and on settler colonialism were complicated and changed through time, but in terms of rights over land, this is a notable exception.Footnote 64
Because the proxy we have selected is data, the international question and the distribution of value are even more pertinent. Data generation differs from country to country, and a global system of data rent could incentivise excessive data collection.
8. Concluding remarks
Neoliberal theory would prefer that new technology not be regulated ‘too early’, before the market has had a chance to help it mature. However, legal vacuums emerge when new technology is developed, and such vacuums can harden into legal defaults that are often suboptimal and even harmful. When a technology that takes advantage of such defaults is embedded widely into society (particularly the economy), undoing the default can be quite costly, and is made costlier still by vested interests.
This is not to say that using existing law to minimise the harms of AI is impossible. Indeed, some scholars showed over a decade ago that copyright law as it exists can apply to what is now called generative AI.Footnote 65 However, even the strictures of neoliberalism enshrined in current law, such as intellectual property protections, are now being questioned by AI firms.Footnote 66 Such questioning demonstrates that the ‘free market’ is defined quite arbitrarily: legal defaults play a large role in this definition, and what is a default today is not necessarily a default tomorrow, its role in the narrow, constructed concept of exchange efficiency notwithstanding.
In the context of AI, other scholars have already pointed out that ‘there is flexibility not only in the design of AI technology but also in its reception.’Footnote 67 In other words, the existence of a certain technology does not entirely determine its effects on society; society is capable of resisting harmful effects, and changing the manner in which technology is produced and deployed to do so. Analysing the legal defaults of which the AI industry takes advantage, and imagining different defaults, shows us what social movements and the law can aspire to in their resistance. A coordinated social response in the form of law can prevent the enclosure of economically relevant intellectual capacity and therefore prevent mass immiseration.
A data rent is, then, a second-best solution. In the face of a small number of corporations seeking to create new and comprehensive enclosures, a more fundamental overhaul of our socio-economic structure is likely the best response; in its absence, technological development cannot be emancipatory on its own. Solutions like a data rent can nonetheless challenge the unlimited accumulation that drives these new developments in technology.
Acknowledgements
The author is grateful for valuable comments and suggestions from Angelo Jr. Golia, Ioannis Kampourakis, and two anonymous reviewers.
Competing interests
The author has no conflicts of interest to declare.