2.1 The Emergence of (New) Digital Human Rights
2.1.1 Introduction
Several recent initiatives have proposed new human rights for the digital sphere, in response to new challenges to human rights in an increasingly digitalised world. Recent developments to strengthen the governance of artificial intelligence (AI) have contributed to this debate. However, is there really a need for genuinely new digital human rights, or would it suffice to adjust or extend existing rights by interpretation to deal with the new threats from cyberspace? Given that the proposals claim the emergence of new principles and rights, what stage has the development of new digital human rights reached? How have European institutions reacted to the proposals, and which regulatory efforts have they undertaken? And if the new rights are to be protected at the European level, what about the universal level of a global cyberspace, which is threatened by increasing fragmentation?
Specific attention will be given to the regulation of AI from a human and fundamental rights perspective. In a dramatic move, a large number of renowned experts and scientists, among them developers of AI, issued an open letter in March 2023 calling for a pause, a moratorium on the training of AI systems more powerful than GPT-4, in order to provide time to address the new technology's shortcomings in terms of reliability as well as other neglected aspects. The letter also calls for quicker regulation by governments to provide a proper framework for the development and use of AI. This was complemented by concerns expressed by leading AI developers and even companies. The erratic Elon Musk welcomed the suggested pause and, in view of the fact that ChatGPT cannot always distinguish between truth and falsehood, announced the development of a ‘TruthGPT’ as an alternative. Prompted by data protection concerns, Italy even temporarily prohibited the use of ChatGPT in order to obtain certain assurances. The White House and the US Senate have called on the leaders of major companies to report on their activities. The Council of Europe (CoE) and the European Union (EU) have presented new regulatory approaches to AI, which will be analysed in a comparative way regarding their contribution to the protection of human and fundamental rights.
2.1.2 Protection of Human Rights on the Internet
The question of how to protect human rights online was first raised by civil society at the World Summit on the Information Society of the United Nations (UN), which took place in Geneva in 2003 and in Tunis in 2005. While the final documents of those conferences made only a few references to human rights,Footnote 1 the topic became a major concern in their follow-up in the form of the annual Internet Governance Forum (IGF).Footnote 2 For example, on the suggestion of the Association for Progressive Communications, which in 2006 produced an Internet Rights Charter, the Dynamic Coalition on Internet Rights and Principles established at the IGF in Rio in 2007 began elaborating the Charter of Human Rights and Principles for the Internet. The Charter was drafted in a broad process mainly by civil society and academia; a draft version was presented at the IGF in Vilnius in 2010, and the final version at the IGF in Belgrade in 2011. The methodology was oriented towards the Universal Declaration of Human Rights, interpreting it and other key UN human rights documents (the International Covenants on Civil and Political Rights and on Economic, Social and Cultural Rights, and the UN Conventions on the Rights of the Child and on the Rights of Persons with Disabilities) for the purposes of the internet. The only new right identified was the right to access to the internet, which is formulated in Article 1 of the Charter.Footnote 3 The Charter contributed to the general debate on internet rights, which produced numerous proposals.Footnote 4
The UN human rights bodies had thus far been largely absent from the topic. Only in 2011 did the Special Rapporteur on the Promotion and Protection of the Right of Freedom of Opinion and Expression, Frank LaRue, produce a report with a focus on the internet.Footnote 5 In 2012, the UN Human Rights Council adopted its first resolution on human rights and the internet, in which it used the famous formula that ‘the same rights people have offline must also be protected online’.Footnote 6 This clarified that all human rights, in principle, are also applicable online, but it does not exclude taking the particularities of the internet into account. Since that time, the UN has taken up internet-related issues in several fora and recently prepared the Global Digital Compact, which was agreed upon at its Summit of the Future in September 2024. The Compact deals with digital cooperation, including the application of human rights online.Footnote 7
Even before the UN started to deal with the issue, the CoE engaged with the question of how to apply human rights, in particular the European Convention on Human Rights (ECHR), to the internet, and has since taken the lead among international organisations in the study and regulation of the new challenges to human rights brought by the internet.Footnote 8 Inspired by the Charter on Human Rights and Principles for the Internet, the CoE elaborated a guide on human rights for internet users, which contains a catalogue of the main digital human rights.Footnote 9 The European Court of Human Rights (ECtHR) has developed its case law on the issue, related mainly to access to the internet, freedom of expression,Footnote 10 and the right to privacy and data protection with regard to the new technologies.Footnote 11 The tech community has also shown awareness of the need to give adequate attention to human rights in technological development by coining the concept of ‘digital humanism’.Footnote 12
2.1.3 Proposals for New Digital Human Rights
In recent years, several proposals for new digital human rights have been launched, such as the Charter of Digital Fundamental Rights of the EU, elaborated by a group of mainly German experts and launched with the help of the Zeit Foundation in 2016, then updated in 2018.Footnote 13 The idea was to complement the EU Charter on Fundamental Rights, drafted by a Convention in 2000, which became binding as part of the Treaty of Lisbon in 2009. The proposal contains, for example, a right to net neutrality, which must be provided in a non-discriminatory way (Article 11). Of particular interest are the rights related to automated systems and decisions: for example, the criteria for automated decisions, such as those used in digital profiling, must be transparent, and such decisions must be taken by natural or legal persons. Every person must have a right to an independent review of such decisions by a human being (Article 5). The rights are intended to apply to the EU, state, and non-state actors, including internet platforms.
Another initiative came in early 2021 from the author and lawyer Ferdinand von Schirach, who suggested six new fundamental rights to complement the EU Charter on Fundamental Rights, among them two new digital human rights: ‘Everyone has the right to know that any algorithms imposed on them are transparent, verifiable and fair. Major decisions must be taken by a human being.’ The other proposed fundamental right was the right to digital self-determination, according to which ‘excessive profiling or the manipulation of people is forbidden’. The proposal was supported by the WeMoveEurope Foundation, which has collected more than 270,000 supporters.Footnote 14 While these definitions lack the precision necessary to be directly legally applicable, they may serve as principles or be concretised depending on the respective context. The proposal included the idea of convening a new Convention to discuss enlarging the EU Charter on Fundamental Rights on the basis of these rights; however, this proved unrealistic. A right to informational self-determination with a focus on data protection is already part of German law.
The proposals are addressed in the first place to the bodies of the EU, such as the European Parliament and the European Commission, and to the governments of European states. However, no direct response to the proposals is known; instead, the European Commission and Parliament have developed their own proposals in the field of digital human rights, which will be presented later in this chapter.
2.1.4 Methodology for ‘Creating’ New Digital Human Rights
The various initiatives for the progressive development of digital human rights and principles raise the question of how such new rights are being created. There has been an abundance of declarations and recommendations by various institutions and by non-governmental and inter-governmental organisations, sometimes developed with a multi-stakeholder approach.Footnote 15 With few exceptions, they are all of a soft law nature. Today, this is the norm in the progressive development of international legal obligations. If the authority of the proposed rules is high, they might be respected even without being legally binding. Therefore, the process of creation, whether by state initiatives or in a multi-stakeholder approach, whether by non-governmental initiatives or by regulatory bodies at the regional or universal level, makes a major difference. Regulation may come not only from public institutions but also from private platforms; for example, by way of self-regulation, which might follow recommendations from the public sphere, thus providing horizontal protection of human rights – that is, of individuals against private companies. For example, in the case of the Oversight Board of Facebook/Meta, individuals can launch appeals against company decisions that limit their freedom of expression.Footnote 16
There is an obvious danger of inflation in the use of the concept of ‘rights’, which calls for appropriate standards for their recognition. Today, we are faced with many claims for new human rights in various fields, but whether they achieve general recognition and finally find their way into regulatory norm-setting is a process with many factors. Already in 1984, Alston proposed criteria for quality control in the creation of new human rights, such as adding social value, not repeating existing rights, being capable of achieving high international consensus, and being sufficiently precise to produce identifiable rights and obligations.Footnote 17 Rights might also be recognised only at the national or regional level, which raises the question of whether human rights need universal standing to be recognised as human rights.Footnote 18 For example, the prohibition of the death penalty, a key human right in Europe, applies only among the members of the CoE and a number of other states globally, and is therefore not a universal human right. However, this does not affect its human rights character. The Spanish government has drafted a Charter on Digital Rights, which contains a comprehensive set of rights for the digital environment, including the right to digital identity, the right to public participation by digital means, and the right of access to digital environments for older persons. In several cases, the details of the rights identified are to be specified by law.Footnote 19
Much depends on the protection needs identified by relevant actors. For example, while the focus in the past was on data protection, leading to various regulatory efforts such as the modernised Convention 108+ of the CoE and the General Data Protection Regulation (GDPR) of the EU, more recently the main concern has been with illicit content such as hate speech and disinformation on the internet, while today the protection of human rights in the development and use of AI takes centre stage. AI is also used for the Internet of Things, which produces non-personal data; these are not covered by the GDPR but may still raise protection issues. Generally, rights are meant to address protection needs, which in the digital environment can be grouped into three categories: issues of identity and access, such as digital self-determination and protection from blocking, filtering, and internet shutdowns; protection against illicit digital content, such as hate speech, disinformation, defamation, and online violence; and protection against technological threats, such as surveillance, the use of biometric data for facial recognition, the use of AI to interfere with human rights, and the harassment of bloggers and digital human rights defenders.
Not all of the digital rights proposed qualify as human rights. For example, the advocacy non-governmental organisation (NGO) European Digital Rights (EDRi) has established a network of NGOs from different countries to protect and promote digital rights.Footnote 20 In 2014, EDRi also produced a ten-point Charter on Digital Rights aimed mainly at members of the European Parliament preparing for (re-)election. In practice, those rights were rather commitments in the form of principles covering a wide range of issues, such as the promotion of encryption or of free software.Footnote 21
Various questions related to the emergence of new human rights have been studied in detail in a recent handbook on new human rights, which also looks at some examples from the digital world.Footnote 22 The development of new human rights is a process starting with the identification of new protection needs, as mentioned, followed by their articulation in the form of rights, and finally their recognition and implementation. Mart Susi identifies several elements of inadequate protection on the basis of existing human rights as a reason for developing new human rights. For rights derived from other rights, he observes a decrease in abstractness.Footnote 23 This could also be said of rights derived from principles, such as the right to informational self-determination, which goes back to a decision of the German Constitutional Court in 1983.Footnote 24 In most cases, an evolutionary interpretation as applied by the ECtHR will suffice. For example, the secrecy of correspondence by letter can be extended to personal communication using the internet. But this has also required additional provisions on data protection, as in CoE Convention 108+, to address new privacy needs.Footnote 25
In view of the particularities of the online domain, Susi developed the non-coherence theory of digital human rights, which claims a change of meaning and scope when human rights are transposed from the physical to the digital world. Accordingly, digital human rights are different by nature from the respective offline human rights.Footnote 26
New human rights may be derived from parent rights (‘implied rights’) or be stand-alone rights. For example, the right to access to the internet has been derived by some from the freedom of expression and information, but others claim that, because the relevance of meaningful access to the internet goes far beyond freedom of expression, it should be considered a stand-alone right.Footnote 27 This is also supported by a report of the Office of the UN High Commissioner for Human Rights on internet shutdowns from a human rights perspective, which shows that many more rights are affected than just freedom of expression.Footnote 28 However, while the report refers to international commitments ensuring universal internet access, it does not speak of internet access as a human right, which shows that the right is not yet fully recognised as such at the UN level.Footnote 29
Furthermore, the ‘right to be forgotten’ has been claimed as a stand-alone right, although its formation is related to the rights to privacy and data protection.Footnote 30 Based on a judgment of the European Court of Justice, it has been introduced as part of the GDPR rules of the EU.Footnote 31 In response, Google has received thousands of requests for the deletion of links to personal information, and how to deal with these was largely left to the private company. However, whether a new digital human right has really been created is also questioned.Footnote 32 So far, the right is legally codified only in Article 17 of the GDPR and in a few national constitutions, and is thus not generally recognised. Nevertheless, in Biancardi v. Italy, the ECtHR upheld the right to be forgotten, without referring to it as such, against the freedom of expression of a journalist: the delayed de-indexing of an online article about criminal proceedings was found to have damaged a person’s reputation.Footnote 33
Human rights can be individual or collective. Most experts will agree that a human right should be individually enforceable; otherwise it is better to speak of principles. But there can also be collective enforcement, as in the context of social rights. Some universally recognised human rights, such as the right to self-determination and the solidarity rights to peace, development, and the environment, can only be realised collectively; they are therefore sometimes called ‘peoples’ rights’. The African Charter on Human and Peoples’ Rights contains several such rights. It might therefore be worth also distinguishing between individual and collective digital human rights. Some of these, such as the right to cybersecurity, are quite abstract and need concretisation to be applied to individuals. However, this is nothing unusual if we consider, for example, the right to water and sanitation or the right to a clean, healthy, and sustainable environment, recognised by the UN General Assembly in 2010 and 2022 respectively.Footnote 34 Most human or fundamental rights also need concretisation. This is why one function of the various human rights treaty bodies is to elaborate interpretations, mostly in the form of general comments, while the human rights courts can give binding interpretations in individual cases that may provide general direction.
2.2 Governance of AI and Human Rights
2.2.1 Introduction
The impact of the development and application of AI (systems) on human and fundamental rights has become a priority concern for many actors, in particular since the Microsoft-backed company OpenAI made ChatGPT freely available.Footnote 35 The new technology opens new opportunities, but also new threats, such as the possibility of further facilitating the assessment of people according to their social behaviour (social scoring), as is already in use in China. While it produces astonishing results, which have generated heightened attention and expectations, there have also been warnings about its disruptive potential for democracy and society at large if it is used for disinformation. This culminated in the open letter of 22 March 2023, signed by key AI developers and thousands of concerned scientists, asking for a six-month moratorium on the training of AI systems more powerful than GPT-4. The call is motivated by what the letter describes as an out-of-control race, with AI labs ‘developing and deploying ever more powerful digital minds that no one – not even their creators – can understand, predict and control’. One example given is the ‘flooding of information channels with propaganda and untruth’. The uncontrolled race may lead to a ‘loss of control of our civilization’. It thus creates new challenges to democracy and human rights. Time should be taken to introduce much-needed regulation and to address planning and management needs, including the development of shared safety protocols for advanced AI design to be overseen by independent outside experts. AI developers should work with policymakers ‘to dramatically accelerate the development of robust AI governance systems’, including oversight of highly capable AI systems, auditing and certification systems, and liability for harm caused.Footnote 36
The Human Rights Council of the UN also reacted by indicating the potential and risks of the emerging technologies and recommending certain measures to protect the human rights of individuals throughout the life cycle of AI systems. It also recommended strengthening the capacities of the Office of the High Commissioner for Human Rights (OHCHR) to advance human rights in the context of new and emerging technologies, and asked it to prepare a mapping report identifying challenges and gaps in this respect.Footnote 37 The report, presented in 2024, identified several gaps, including the advice to be provided by the OHCHR to Member States and all stakeholders to support them in integrating human rights from the design through to the regulation of digital technologies.Footnote 38
This development shows the limitations of the old approach whereby industry leads on new technological developments and regulation comes later, in cases where economic considerations lead to the neglect of societal concerns. The corresponding approaches of self-regulation and declarations of ethical principles, while important, are insufficient in circumstances where the largest tech giants compete with each other over economic opportunities, while states pursue only their national interests. The Asilomar Principles on Beneficial AI, elaborated as early as 2017 with the purpose of ensuring the compatibility of the development of AI with human dignity, rights, and freedoms, were already concerned with keeping AI systems under human control.Footnote 39 The Organisation for Economic Co-operation and Development (OECD) adopted a set of principles on AI in 2019,Footnote 40 while UNESCO adopted a consensus-based Recommendation on the Ethics of Artificial Intelligence in 2021.Footnote 41 OpenAI on its website expresses its commitment to ensuring that artificial general intelligence benefits all humanity, and also commits to the principle of transparency.Footnote 42 However, the training of AI systems presently takes place behind closed doors, without state or societal control. The danger of serious harm created by these developments led one of the AI pioneers at Google to resign from his position in order to be free to speak out.Footnote 43
The call for the regulation of the development of AI and its application has accelerated ongoing efforts in Europe and beyond.Footnote 44 Besides the EU and the CoE, the US responded with a non-binding AI Bill of Rights and a White House executive order. The AI Bill of Rights focuses on addressing challenges to democracy and to certain rights, such as privacy and non-discrimination, with the aim of serving the American people. It also foresees a right to opt out and to have access to a human being who can consider and remedy problems.Footnote 45 In reaction to the appeals for regulation, the heads of the main companies were called to the White House and the Senate, where even the chief executive of OpenAI called for regulation.Footnote 46 The issue is also on the agenda of the G7.
China has also adopted ‘Interim Measures for the Management of Generative AI Services’, which complement existing laws on network security and data security. The regulation requires that content reflect the basic values of socialism, while the law is also to prevent discrimination, hatred, violence, fakes, and other content that could interfere with the economic and social order.Footnote 47
These different and partly competing approaches also carry ideological baggage: American and European values on the one hand and Chinese socialism on the other create the danger of further fragmentation of the internet. Universal rules for the global problems created by the use of generative AI would accordingly be necessary to counter the trend towards fragmentation and polarisation. Digital constitutionalism, aiming at a normative framework for the protection of human rights and the balancing of powers, is one approach to addressing these concerns.Footnote 48 Strengthening international cooperation and closing digital divides are the main aims of the UN Global Digital Compact. It contains several principles and objectives for international cooperation and also aims at enhancing governance in the field of digital technologies, for which purpose new institutional proposals have been made, such as the establishment of an International Scientific Panel on AI to conduct independent risk assessments and produce annual reports, a Global Dialogue on AI Governance, and a Global Fund for AI for Sustainable Development. In support of stakeholders, the Office of the High Commissioner for Human Rights in Geneva is to provide advice on ensuring human rights.Footnote 49
As non-binding recommendations and commitments to the self-regulation of AI were considered insufficient, and some of the developers themselves were calling for binding regulation, European regional organisations took the lead in elaborating new rules. The efforts of the CoE and the EU will therefore be studied here in greater detail using a comparative approach. Both started long before the hype around ChatGPT.
2.2.2 The CoE on AI and Human Rights
From a human rights perspective, major work on the regulation of AI has been undertaken by the CoE, which has traditionally taken the lead in the field of the internet and human rights. The CoE’s method is to establish, by decision of the Committee of Ministers (CoM), a committee on a particular topic, which produces a report with a recommendation for adoption by the CoM. The committee is supported by the competent staff of the CoE and traditionally follows a multi-stakeholder approach, bringing together experts from Member States, civil society, and academia, while the business community is also consulted. In the case of AI, the CoE has engaged in numerous activities since 2017 to study pertinent human rights issues and produce recommendations aimed at guidance and regulation in this field.
In particular, the Parliamentary Assembly of the CoE adopted a recommendation in 2017 on technological convergence, AI, and human rights,Footnote 50 calling for the drafting of guidelines. In addition, the CoE Commissioner for Human Rights issued a recommendation in 2019 entitled ‘Unboxing Artificial Intelligence: 10 Steps to Protect Human Rights’.Footnote 51 It called for human rights impact assessments, information and transparency, independent oversight, non-discrimination, data protection, and remedies, to mention just the main steps. Also in 2019, the CoM established the Ad Hoc Committee on Artificial Intelligence (CAHAI), which was succeeded by the Committee on Artificial Intelligence (CAI) in 2021. Having adopted a declaration in 2019 on the manipulative capabilities of algorithmic processes,Footnote 52 the CoM in 2020 adopted one of the first recommendations on the human rights impacts of algorithmic systems, with a set of guidelines attached.Footnote 53 Based on a multi-stakeholder consultation, CAHAI, with the help of three sub-groups, produced a comprehensive feasibility study on a legal framework for the development, design, and application of AI.Footnote 54 It came to the conclusion that while several applicable instruments existed, there were a number of substantive and procedural legal gaps that could best be addressed by a new legal framework, for which key elements were identified.Footnote 55
In December 2021, after broad consultation, several studies, and conferences,Footnote 56 CAHAI adopted a report entitled ‘Possible Elements of a Legal Framework for Artificial Intelligence, Based on CoE’s Standards on Human Rights, Democracy and the Rule of Law’,Footnote 57 which dealt with issues related to the development, design, and application of AI based on CoE’s standards. For this purpose, the regulatory work of other international organisations such as UNESCO, the OECD, and the EU was taken into account.Footnote 58 It set out the main elements for a transversal convention creating a framework and possible additional legal or soft law instruments to apply generally or to specific sectors, and the public sector in particular. For example, a model for a human rights, democracy, and rule of law impact assessment on a soft law basis was proposed.Footnote 59 Mantelero, who has developed his own Human Rights, Ethical and Social Impact Assessment, critically noted that the CoE approach might be too broad.Footnote 60
Between 2022 and 2024, the CAI negotiated the text of a common legal instrument on AI. The drafts were provided by the secretariat and then discussed in the committee.Footnote 61 The resulting Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law was designed as an open convention with a global vocation. Fifty-seven states participated in the negotiations, among them established observer states of the CoE, including the USA, Canada, Mexico, and Japan, as well as the EU. The Framework Convention was adopted by the CoM in May 2024 and opened for signature in September 2024 at the meeting of ministers of justice in Vilnius. It needs only five ratifications to enter into force. All states (including the EU) that participated in its negotiation, as well as other states invited by the CoM of the CoE, can then become parties to the Convention.
The Framework Convention mainly contains principles for respecting existing human rights; it does not formulate new digital human rights. It also covers obligations to protect the integrity of democratic processes and respect for the rule of law (Article 5).Footnote 62 The Convention contains relevant guidance for its parties, such as the obligation to adopt and apply measures to protect human dignity and individual autonomy throughout the lifecycle of AI systems.Footnote 63 Its framework character implies that additional instruments may follow to address specific issues of AI governance. There are provisions on transparency and oversight, on accountability and responsibility, and on non-discrimination in the implementation of the Convention.Footnote 64 There is also a right to be informed when one is interacting with AI systems.Footnote 65 The Convention further provides for a risk and impact management framework, which includes an obligation on parties to ensure that adverse impacts of AI systems on human rights, democracy, and the rule of law are adequately addressed.Footnote 66 There is no obligatory human rights, democracy, and rule of law impact assessment, but the CoE plans to elaborate a pertinent methodology. Regarding effective remedies, measures are foreseen to inform relevant bodies and, where appropriate, affected persons about AI systems with the potential to significantly affect human rights, allowing them to contest decisions made or to lodge a complaint with the competent authorities.Footnote 67 Measures are also to be adopted to ensure that AI systems are not used to undermine the integrity of the democratic process and the rule of law.Footnote 68
Parties should also establish effective oversight mechanisms. However, several general legal safeguards proposed in the elements for a convention by CAHAI, the predecessor of the CAI, such as the right to interaction with a human,Footnote 69 did not make it into the final version of the Convention, although they were partly taken up in the EU’s Artificial Intelligence Act (AIA).Footnote 70 With regard to the scope of the Convention, national security interests are excepted. The focus is on regulating public bodies. In view of resistance to including the private sector, the final version provides that parties will declare, at the time of signature or ratification, whether they will apply the principles and obligations to private actors as well.Footnote 71 It seems that a watering down of the Convention was the cost of including the USA and other non-European states in the negotiations. States are free to use the legal tools they consider appropriate in implementing the Convention (Article 3). This includes the possibility of private self-regulation, which has been criticised by civil society as potentially undermining the binding character of the Convention. The general exception for national security has also been criticised.Footnote 72
Like the Data Protection Convention (Convention 108+),Footnote 73 the Biomedicine (Oviedo) Convention and the Cybercrime (Budapest) Convention of the CoE, the Convention will also be open by invitation to non-European states. For this purpose, the interests of future parties from outside the European region, but also those of the EU, which was finalising its Artificial Intelligence Act (AIA) in parallel, had to be taken into account in the negotiations.
The CoE therefore moved ahead with a legally binding approach, whereas the field of AI and human rights has so far been primarily the subject of soft law recommendations and guidelines. This approach also goes beyond the ethical dimension covered by UNESCO guidelines,Footnote 74 and aims at legally binding obligations after soft law and self-regulation have been shown to be insufficient. There are obvious advantages and disadvantages to this approach. The advantage of a legally binding obligation over a mere soft law commitment is obvious if we look at the difficulties of having the multitude of recommendations and guidelines in the field respected in practice. The disadvantage is that a convention is negotiated mainly by states, which is also reflected in the composition of the CAI; this includes CoE member and observer states, and representatives of international organisations and private business, while selected members of civil society and academia can only participate if invited as observers. The outcome of the negotiation process needs to be ratified by national parliaments. As in the case of the Cybercrime Convention, only limited membership from non-European states can be expected. In order to achieve greater participation, standards had to be lowered. Whether the Convention can therefore be expected to produce a valuable global response to the issues at stake remains to be seen. It might remain a mainly European response to a global challenge, as have the other open conventions of the CoE. However, the effort to open up to the world needs to be recognised.
2.2.3 The EU Act on Artificial Intelligence and Fundamental Rights
The efforts of the EU in the field of regulating AI also aim to have an effect beyond the EU, as the EU, as in the case of the GDPR, aims at a ‘Brussels effect’. In a broad process, the EU has worked on different aspects of AI including its definition and ethical principles, on which the Independent High-Level Group of Experts on AI has produced relevant proposals including guidelines on ethics.Footnote 75 In January 2022, the European Commission presented a European Declaration on Digital Rights and Principles for the Digital Decade, which was jointly adopted with the European Parliament and the Council in December 2022 and also contains rights related to the use of AI.Footnote 76 It focuses on principles and claims that people should be at the centre of the digital transformation of Europe, but avoids saying clearly whether people should have enforceable fundamental rights. The declaration was supposed to serve as a reference work for politics and business.Footnote 77 Under freedom of choice, the rather short declaration also contains principles on AI, such as ensuring transparency about the use of AI, avoiding discrimination, and ensuring that algorithms are not used to predetermine people’s choices, which is reminiscent of some elements of the rights proposed by the WeMoveEurope campaign. This paved the way for the EU AIA to regulate AI.
The proposed EU AIA,Footnote 78 presented in April 2021 by the European Commission after wide public consultation with the involvement of all interested actors, was a first of its kind. The explanatory memorandum only saw advantages for fundamental rights,Footnote 79 and indeed, compared with the present situation, improvements can be expected. Certain AI practices, such as the evaluation of the trustworthiness of a person leading to a social score, are to be prohibited. For AI systems that interact with humans or use biometric data, certain transparency obligations should apply.Footnote 80 For non-high-risk AI, providers are encouraged to develop codes of conduct for which the AIA sets a framework.
The Artificial Intelligence Strategy of the EU of 2018 was mainly concerned with making the EU a world-class hub for AI.Footnote 81 While committed to a ‘human-centric approach’, it also includes rules for product safety and civil liability. The resolution adopted by the European Parliament in May 2022, on ‘AI in a digital age’,Footnote 82 focused on the competitive situation of the EU in the global context and addressed various sectors including AI and the future of democracy. It mentioned many challenges for the protection of fundamental rights, which were to be fully respected in the digital transition and the development of AI, and called for ex ante risk self-assessments, data protection impact assessments, and conformity assessments.Footnote 83 It was clear that the risk-oriented approach, for example regarding the use of face recognition, surveillance, or transparency requirements, raised significant fundamental rights issues internally and human rights concerns externally. In addition, the participation of AI users and their rights were considered to be key concerns. In reaction to the debate on ChatGPT, the European Parliament requested stricter rules for ‘general purpose models’ of AI, which are trained on broad data at scale and can be adapted to a wide range of distinct tasks. This includes generative AI such as ChatGPT, which can produce new content.
With regard to fundamental rights, the EU Agency on Fundamental Rights provided an overview in 2020 of the main issues related to AI.Footnote 84 There were suggestions for a more specific inclusion of fundamental rights concerns in the AIA.Footnote 85 A total of 115 European civil society organisations, including EDRi and Algorithm Watch, called for a number of amendments to the AIA to strengthen its impact on fundamental rights.Footnote 86 Besides a better mechanism to deal with new risks, the authors called for meaningful rights and redress for people impacted by AI systems, such as the right for the explanation of decisions taken, the right to a judicial remedy, and the requirement of accessibility to all AI systems. NGOs such as Algorithm Watch produced studies on how to protect worker rights when AI is used in the workplace or how to provide access to data for public interest research.Footnote 87
After key committees of the European Parliament such as the Civil Liberties Committee had adopted a position on the AIA in which most of the concerns of civil society, such as the prohibition of predictive policing systems, emotion recognition, and real-time biometric identification in public spaces, were taken into account, the EU Parliament adopted its position in June 2023.Footnote 88 The trilogue (an informal inter-institutional negotiation bringing together representatives of the European Parliament, the Council of the European Union, and the European Commission) ended with an agreement in December 2023, followed by the adoption of the revised draft by the European Parliament in March 2024. After a very tough negotiation process, the AIA was finally adopted on 21 May 2024.Footnote 89 The final text of the AIA was published in the EU Official Journal in July 2024 to enter into force the following month.Footnote 90
The AIA provides for a risk-management system distinguishing between unacceptable risks, high risks, and lower risks in or from AI systems. With regard to high-risk AI systems, which are characterised by posing significant threats to health, safety, or fundamental rights,Footnote 91 providers should, before placing such systems on the market, meet several requirements such as testing and mitigating foreseeable risks to health, safety, fundamental rights, the environment, the rule of law, and democracy, with the involvement of independent experts.Footnote 92 In Article 5, the AIA defines several prohibited AI practices, such as manipulative techniques impairing the ability of persons to make an informed decision, social scoring and profiling, certain emotion recognition systems, and certain biometric identification systems, although there are numerous exceptions. Still, these prohibitions create barriers against the violation of fundamental rights.
The AIA has binding force only in Member States of the EU, but there is the expectation that, similar to the GDPR, it will set a global standard for companies, and thus also have extra-territorial impact.Footnote 93 In this context, it is worth noting that the GDPR already sets some applicable standards of relevance for AI applications, such as the prohibition of automated decisions related to individuals except if they give their consent (Article 22), thus applying the principle of human intervention (human in the loop) regarding data controllers. The Digital Services Act of 2022, which came into force in 2024, regulates online platforms with provisions relating to liability for the deletion of illicit and fake content and to remedies. In line with the competences of the EU, the objectives of the AIA are not focused on fundamental rights issues, but more broadly on risks related to the use of AI and their impact on economic and commercial as well as consumer concerns. It deals with the use of trustworthy AI in products and services, and thus also adopts a harmonising and conformity-oriented approach. Accordingly, its first objective is described as ensuring the safety and conformity of AI systems with fundamental rights during the whole AI lifecycle. For this purpose, it distinguishes different risk categories. Unacceptable risks, such as social scoring or the use of biometric data in public spaces, are prohibited in general; high risks, such as profiling and predictive analysis, are subject to risk management via assessments and transparency obligations, such as the right to be informed of the use of AI or the right to human oversight. However, there are also large exceptions for law enforcement, and the security sector is fully exempted. In addition, there are safety and liability rules as well as a right to effective remedies. Fewer obligations apply to limited-risk and lower or minimal-risk AI systems.
The AIA became applicable according to a phased approach in which the prohibition of AI of unacceptable risk started to apply six months after the entry into force of the AIA (i.e., February 2025), and the obligations of general purpose AI model providers, as well as the appointment of competent authorities in member states, started to apply after twelve months. Certain obligations related to high-risk systems listed in Annex III, such as AI systems in biometrics, will become enforceable only after twenty-four months and others only after thirty-six months. A clarification of the rules for the implementation will be provided by delegated acts and guidance from the Commission, as well as codes of practice by the EU AI Office, established in May 2024.Footnote 94 For example, the EU Commission guidelines on the classification of high-risk AI systems are due to be issued only by early 2026. Consequently, owing to the complexities of the Act, its obligations will be phased in over several years, which raises the question of whether this might come too late given the ongoing race for ever more powerful AI systems. In view of this fact, however, the European Commission has set up the AI Pact as a framework to assist companies in preparing for the AIA on a voluntary basis.Footnote 95
2.2.4 Possible Complementarity between the Two Regulatory Approaches
While the CoE Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law is fully focused on human rights and democracy standards, fundamental rights play only a limited role in the EU AIA, such as safeguarding against the undermining of existing standards as contained in the EU Charter on Fundamental Rights. While references to fundamental rights are included throughout the entire text of the AIA, they are limited to the rights contained in the Charter. This confirms the relevance of the EU Charter on Fundamental Rights for addressing AI. The EU, being represented in the drafting process of the AI convention of the CoE, also contributed to its provisions from an EU perspective. In this way, possible differences were avoided. For example, the Convention provides that EU parties to the Convention shall in their mutual relations apply EU rules.Footnote 96
Accordingly, while an alignment can be observed, the CoE Convention covers much broader ground in terms of human and fundamental rights. For example, it also directly covers democracy, while the AIA is only of indirect relevance here, although this has also been one of the important concerns in the EU process. In this regard, the two texts can also be seen as complementary. If, as can be expected, the EU becomes a party to the Convention, this complementarity will be particularly relevant. The EU might, however, want to wait until a significant number of its member states have ratified the Convention. Yet, as the European Court of Justice clarified in its opinion of 2021, issued at the request of the European Parliament on the accession of the EU to the Istanbul Convention, there is no need for a ‘common accord’ of all EU Member States before the Council decides to join the Convention.Footnote 97
One major issue highlighted by civil society is the use of invasive applications of AI, such as biometrics for mass surveillance or in the migration context.Footnote 98 The AIA sees the remote use of biometrics in publicly accessible places for law enforcement as a red line, but has exceptions for asylum and migration.Footnote 99
One key problem is how to operationalise the right to digital self-determination in view of self-learning algorithms, which are black boxes even to their programmers. Accordingly, full transparency is technically not possible. Benefits and risks cannot always be fully determined in advance. Ensuring that the persons affected remain in control has already proven difficult in the case of data protection rules. Therefore, the transparency obligation in Article 50 of the AIA contains the right to be informed if one is interacting with AI systems or if content is artificially created. The principle of not predetermining people’s choices, as in the draft EU Declaration on Digital Rights and Principles, can hardly be applied in practice where decisions in penal systems or the migration context are often guided by algorithms proposing decisions based on a large number of cases. However, rules on consent, as in data protection, are missing in this context. This makes the principle that safeguards or remedies should be provided all the more important, but the EU principles lack precision in terms of for whom those safeguards should be provided, let alone what form they should take. The CoE, in its preliminary reflections and drafts based on Article 13 of the ECHR on the right to an effective remedy or safeguards, was more precise,Footnote 100 while the EU Declaration does not even refer to Article 47 of the EU Charter on Fundamental Rights on the same right. However, the AIA contains certain mechanisms, such as a right to an appeal to be provided at the national level, but still needs progress on the issue of enforcement.
Regarding the regulation of AI by the CoE Framework Convention and the AIA, both regulation projects, having been finalised in parallel, did influence each other. For example, the definition of AI chosen for the AIA also appears in Article 2 of the Framework Convention. There was clearly an intention to ensure that where the two processes overlapped the negotiations should achieve complementary results. However, in cases of conflict for EU Member States, the AIA prevails, as the Framework Convention allows them to give preference to their obligations under the AIA. The open letter calling for a moratorium has contributed to an acceleration of the legal regulations urgently needed to provide legal security and establish oversight institutions.
2.3 General Conclusions
The need for the protection of human rights in the regulation of the internet today is beyond doubt. The online dimension of human rights is receiving increasing attention, as is the emergence of digital human rights that may go beyond existing human and fundamental rights. Such rights may be claimed and recognised at different levels; for example, at the national, European, or universal level. Accordingly, their human rights nature does not depend on recognition at the universal level, although this might be the claim and ambition, in particular when new threats are global in nature. As the example of the right of access to the internet shows, it is widely but not as yet generally recognised as a new human right. In the case of the right to be forgotten, this new right has mainly been established in the EU. The human rights related to the use of AI are presently in a process of concretisation, starting from ethical principles and aiming at concrete obligations, at least in some respects, in the form of the Framework Convention adopted by the CoE or the AIA of the EU. The focus is on the progressive development and concretisation of general principles and rights, not on the creation of new digital human rights, although the latter may also occur in response to new challenges. For example, the AIA now contains the right to know whether one is interacting with an AI system. One could also identify a right to be protected against detrimental uses of AI leading to manipulation or biometric categorisation of natural persons by prohibitions or risk-management systems. In the process of identifying or designing digital human rights in line with a multi-stakeholder approach, civil and public actors may be involved. In the case of most new and emerging threats there is no need for new digital human rights; rather, existing rights can be extended by interpretation to cover the new challenges.
Altogether, the emergence of (new) digital human rights is a highly dynamic part of the progressive development of international law in all its emanations, from soft law to hard law. The regulation of the development and application of AI has accelerated the process of meeting new threats on the basis of extended and new human and fundamental rights in order to close gaps identified in human rights protection in the digital environment.