18.1 Introduction: Children’s Rights in the Digital Environment
Digital technologies have a significant impact on the lives of children and the rights that are specifically attributed to them by the United Nations Convention on the Rights of the Child (UNCRC), Article 24 of the European Union (EU) Charter of Fundamental Rights (CFEU) and many national constitutions. The Council of Europe Committee of Ministers’ 2018 Recommendation on Guidelines to respect, protect, and fulfil the rights of the child in the digital environment recognises that the digital environment is ‘reshaping children’s lives in many ways, resulting in opportunities for and risks to their well-being and enjoyment of human rights’.Footnote 1 This has also been acknowledged by the United Nations Committee on the Rights of the Child (CRC Committee), which adopted a General Comment on the rights of the child in relation to the digital environment in 2021. In General Comment no. 25, the Committee affirms that ‘[i]nnovations in digital technologies affect children’s lives and their rights in ways that are wide-ranging and interdependent […]’.Footnote 2 Over the years, the EU has been relying on the guidance of the UNCRC when adopting and interpreting fundamental human rights instruments.Footnote 3 This is demonstrated for instance in Article 24 of the CFEU,Footnote 4 which contains language that is very similar to that of the UNCRC. The EU’s commitment to the UNCRC was again confirmed in the 2021 EU Strategy on the Rights of the Child, built on six key pillars of which ‘the digital and information society’ is one.Footnote 5 In a recent case, the Court of Justice of the EU has confirmed that CFEU Article 24 represents the integration into EU law of the principal rights of the child referred to in the UNCRC.Footnote 6 Hence, the UNCRC functions as a comprehensive framework that must duly be taken into account when legislative proposals that directly or indirectly affect children are proposed and adopted.
In the past decade, regulatory action by the European Commission (EC) has increasingly targeted the digital environment, leading to the adoption of influential legislative instruments such as the General Data Protection Regulation (GDPR),Footnote 7 and the Network and Information Security Directive. Other instruments, such as the Audiovisual Media Services Directive (AVMSD), were amended to extend their scope to also cover video-sharing platforms. Two recent legislative initiatives at the level of the EU, the Digital Services Act (DSA) and the Artificial Intelligence Act (AIA), touch upon platforms and technologies that have a significant impact on children and their rights.Footnote 8 Digital services, often delivered through platforms, offer children immense opportunities to communicate, learn, and play, and artificial intelligence (AI)-based applications may offer children personalised learning or medical treatments.Footnote 9 At the same time, the use of tech platforms and AI may also pose risks to children’s rights. Rights that might be negatively affected are, for example, the right to privacy and data protection, freedom of thought, the right to freedom of expression, and the right to protection from violence and exploitation. The AIA acknowledges, for instance, that this technology ‘can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices’.Footnote 10 The question arises as to what extent the protection and fulfilment of children’s rights are addressed in these most recent legislative acts, the DSA and the AIA. In order to answer this question, the proposals are analysed, the legislative process is scrutinised, and an assessment is made of how each contributes to the effective realisation of children’s rights in the digital realm. The chapter also suggests some strategies for moving forward, drawing on recent recommendations from the UN children’s rights framework. We propose that EU policymakers adopt a children’s rights approach in their attempts to regulate platform services and AI, so that children’s rights and interests can be a strong regulatory focus rather than a regulatory afterthought.
18.2 Opportunities for and Risks to Children’s Rights from Platform Services and AI Systems
Before analysing the legislative acts, this section zooms in on existing evidence about children’s experiences with platform services and AI systems. The aim is to provide a better understanding of both the potential opportunities for and risks to children’s rights that these services and applications present. Platform services and other AI-based applications have become an integral part of children’s lives. AI-enabled toys and voice assistants have infiltrated children’s homes and schools, and AI-powered tutors, learning assistants, and personalised educational programmes are becoming more commonplace.Footnote 11 Children are also avid users of (commercial) AI-enabled platform services, such as social media, video-sharing, and interactive gaming platforms. For instance, platforms such as Instagram, Snapchat, and TikTok use advanced machine learning to deliver content and to personalise (or, in their own words, ‘improve’Footnote 12) the user experience, provide filters that rely on augmented reality technology, and employ natural language processing tools to monitor and remove hate speech and other harmful content. The specific features of these platforms and systems make them particularly appealing to children, but also carry risks.Footnote 13 The lack of transparency and insight into how exactly AI systems generate certain output makes it very difficult for end users to anticipate the potential risks, harms, or violations of their rights.Footnote 14
Research capturing the opinions of children and youth themselves about the opportunities and risks of platform services and AI shows that they have a balanced perspective.Footnote 15 On the one hand, they realise that these services offer many opportunities for entertainment, socialising, and learning, even though they are never completely safe. On the other hand, children express a great deal of concern, citing confrontation with harmful and illegal content, cyberbullying and hate speech, and violations of their privacy and data protection rights as the main risks.Footnote 16 In relation to this, questions arise about the long-term impact of platform services and AI on children’s well-being and development. For instance, it has been reported that researchers within Instagram, owned by Meta, who studied user experiences, found that ‘thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse’.Footnote 17 The introduction of AI-based applications into children’s lives could also have side effects at the societal level.Footnote 18 More specifically, it could lead to the normalisation of surveillance, datafication, and commercialisation. Many of these applications are driven by commercial interests and deliberately designed and used to ensure maximum engagement of children, and even to establish behavioural patterns and habits for the future.Footnote 19 Furthermore, children from disadvantaged backgrounds might only have access to lower quality AI-based technologies, with a greater focus on entertainment and pacification rather than education and learning, compared with children from privileged backgrounds.Footnote 20
Because of the impact of platform services and AI on society at large, policymakers and legislators around the world are debating and developing instruments to counteract these risks. However, scholars have identified a disconnect between the potential negative impact of AI on children and the regulatory means available to address it, as well as a lack of adequate redress.Footnote 21 UNICEF also underlines that most of the recent initiatives targeting AI only refer superficially to children and their rights and interests.Footnote 22 Considering the EU’s commitment to safeguarding children’s rights in the digital environment,Footnote 23 the following section will analyse two of these recent initiatives through the lens of children’s rights and principles. It is important to note that both the DSA and the AIA are likely to have a standard-setting impact around the world, given what scholars and policymakers call the Brussels Effect.Footnote 24 In this sense, these initiatives also present an important opportunity to shape global norms and standards for the design and deployment of digital technologies that are used by and impact children.
18.3 Children’s Rights in the DSA
18.3.1 The Commission Proposal for a DSA
The proposal for the DSA aimed to regulate intermediary services and to ‘set out uniform rules for a safe, predictable and trusted online environment, where fundamental rights enshrined in the Charter are effectively protected’.Footnote 25 This proposal reflects a shift at the EU level from relying on self- or co-regulatory efforts from tech companies to imposing strong legislative obligations on those companies that offer services used by a vast number of EU citizens and affect individuals and society at the same time.Footnote 26 Throughout the proposal by the European Commission, children or minors and their rights are referred to a few times. The preamble to the proposal, for instance, states that it will ‘contribute to the protection of the rights of the child and the right to human dignity online’. Recital 34 clarifies that the proposal intends to impose a clear and balanced set of harmonised due diligence obligations on providers of intermediary services, aiming in particular ‘to guarantee different public policy objectives such as the safety and trust of the recipients of the service, including minors and vulnerable users’.Footnote 27
The most important provision for children (Article 26, Recital 57) in the proposal relates to the supervised risk assessment approach towards ‘very large online platforms’ (VLOPs). VLOPs are platforms ‘where the most serious risks often occur’ and which ‘have the capacity to absorb the additional burden’. A platform is considered a VLOP when the ‘number of recipients exceeds an operational threshold set at 45 million; that is, a number equivalent to 10% of the [EU] population’.Footnote 28 This includes many large platforms that are popular with children, such as YouTube, TikTok, or Instagram. According to the proposal, VLOPs should identify, analyse, and assess any significant systemic risks stemming from the functioning and use made of their services in the Union. All three categories of systemic risks that are listed are especially relevant to children. The first category refers to ‘the dissemination of illegal content through their services’, with a mention in Recital 57 of child sexual abuse material as a type of illegal content. The second category relates to ‘any negative effects for the exercise of the fundamental rights to respect for private and family life, freedom of expression and information, the prohibition of discrimination and the rights of the child’.Footnote 29 The third category refers to the ‘intentional manipulation of their service, including by means of inauthentic use or automated exploitation of the service, with an actual or foreseeable negative effect on the protection of public health, minors, civic discourse, or actual or foreseeable effects related to electoral processes and public security’.Footnote 30 In order to mitigate the risks, the VLOPs must put in place reasonable, proportionate, and effective measures, tailored to the specific systemic risks (Article 27). Another mechanism that can be used to tackle different types of illegal content and systemic risks is the adoption of codes of conduct (Article 35). According to the proposal, the creation of an EU-wide code of conduct will be encouraged by the Commission and the new Board for Digital Services to contribute to the proper application of the DSA. Recital 68 refers to the appropriateness of drafting codes of conduct regarding disinformation or other manipulative and abusive activities that might be particularly harmful for vulnerable recipients of the service, such as children.
The explicit references to children in the proposal for the DSA were welcomed by children’s rights organisations but were considered by some to be too weak.Footnote 31
18.3.2 Amendments Proposed by the LIBE Committee
During the legislative process, and specifically in the context of the activities of the Committee on Civil Liberties, Justice and Home Affairs (LIBE), a number of child-centric amendments were proposed by LIBE committee members.Footnote 32 Amendment no. 129 introduced a specific reference to Article 24 of the Charter, the UNCRC, and General Comment no. 25 to Recital 3. Amendment no. 412 suggested adding a new Article 12a, requiring a detailed child impact assessment to be carried out. Amendment no. 414 put forward a specific article on the mitigation of risks to children that aims to address many of the existing concerns regarding children’s rights in the context of VLOPs. The amendment includes, for instance, a reference to taking into account children’s best interests when implementing mitigation measures in general and adapting content moderation or recommender systems in particular; adapting or removing ‘system design features that expose children to content, contact, conduct, and contract risks, as identified in the process of conducting child impact assessments’; ‘proportionate and privacy preserving age assurance’; ensuring ‘the highest levels of privacy, safety, and security by design and default for users under the age of 18’; the prevention of profiling, including for commercial purposes such as targeted advertising; age appropriate terms that uphold children’s rights; and ‘child-friendly mechanisms for remedy and redress, including easy access to expert advice and support’. Amendment no. 427 concerned the publication of child impact assessments and reports about the mitigation measures taken. Finally, amendment no. 772 included a requirement for the Commission to ‘support and promote the development and implementation of industry standards set by relevant European and international standardisation bodies for the protection and promotion of the rights of the child’.
In its Opinion, the LIBE committee only included the amendments regarding Recital 3,Footnote 33 leaving the more substantial amendments out.
18.3.3 Amendments by the Council and European Parliament
Both the general approach of the Council of November 2021,Footnote 34 and the amendments adopted by the European Parliament (EP) on 20 January 2022, contain considerably more references to children and minors than the Commission proposal.Footnote 35
The general approach by the Council adds that when assessing risks to the rights of the child, ‘providers should consider how easily understandable to minors the design and functioning of the service is, as well as how minors can be exposed through their service to content that may impair minors’ health, physical, mental and moral development’. Risks may arise, for example, ‘in relation to the design of online interfaces which intentionally or unintentionally exploit the weaknesses and inexperience of minors or which may cause addictive behaviour’ (Recital 57). Recital 58 builds on this by requiring that the design and online interface of services primarily aimed at minors or predominantly used by them should consider their best interests and ensure that their services are organised in a way that minors are easily able to access mechanisms within the DSA, including notice and action and complaint mechanisms. Moreover, VLOPs that provide access to content that may impair the physical, mental, or moral development of minors should take appropriate measures and provide tools that enable conditional access to the content. Article 12 refers to explaining the conditions and restrictions for the use of the service in a way that minors can understand, where an intermediary service is primarily aimed at minors or is predominantly used by them. Finally, in Article 27, a reference was added to taking targeted measures to protect the rights of the child, including age verification and parental control tools, or tools aimed at helping minors signal abuse or obtain support.
The EP suggested adding an explicit reference to the UNCRC and General Comment no. 25 in Recital 3. Unlike in the Commission proposal, but reminiscent of guidelines from the Article 29 Working Party,Footnote 36 the EP put forward a prohibition of ‘[t]argeting or amplification techniques that process, reveal or infer personal data of minors or sensitive categories of personal data for the purpose of displaying advertisements’.Footnote 37 A second amendment relates to ensuring that conditions for and restrictions on the use of a service are explained in a way that minors can understand.Footnote 38 This, however, would only be required for intermediary services that are ‘primarily directed at minors or predominantly used by them’. Along the same lines, other amendments aim to ensure that the internal complaint-handling systems are easy to access and user-friendly, including for minors, and that online interfaces and features are adapted to protect minors as a measure to mitigate risks.Footnote 39 A more general obligation is proposed to adapt design features to ensure a high level of privacy, safety, and security by design for minors.Footnote 40 Also potentially important for children’s rights was the suggestion to change the wording of ‘any negative effects’ in Article 26 to ‘any actual and foreseeable negative effects’ on the rights of the child. Arguably, this could trigger the application of the precautionary principle.Footnote 41 Regarding the mitigation measures, the EP also proposed to add ‘targeted measures aimed at adapting online interfaces and features to protect minors’ to Article 27. Finally, the suggested Recital 69 encourages the development of codes of conduct to facilitate compliance with obligations regarding the protection of minors, and the proposed Article 34(1a) refers to the support and promotion by the Commission of the development and implementation of voluntary standards set by the relevant European and international standardisation bodies aimed at the protection of minors.
18.3.4 The Final Text of the DSA
The DSA, or, to give its formal title, ‘Regulation (EU) 2022/2065 of the European Parliament and of the Council on a Single Market for Digital Services and amending Directive 2000/31/EC’, was adopted on 19 October 2022 and published in the Official Journal on 27 October 2022.Footnote 42 Not only are the references to child, children, and minors in the adopted text vastly more numerous than in the Commission proposal, but the substance of these provisions is also much more extensive and arguably promising, depending on actual implementation and enforcement.
The recitals and articles that refer to children and minors can be classified into five broad categories: (a) provisions that are related to child sexual abuse material and the measures in place to tackle this type of material,Footnote 43 (b) transparency obligations regarding terms and conditions, (c) obligations for all online platforms to protect minors,Footnote 44 (d) risk assessment and mitigation obligations towards children for VLOPs and very large online search engines (VLOSEs),Footnote 45 and (e) references to implementation tools such as standards and codes of conduct.
In what follows, the final three categories are examined in depth.
18.3.4.1 Obligations for All Online Platforms to Protect Minors
Article 28 (an article not included in the Commission proposal) formulates extensive obligations for online platforms ‘accessible to minors’. First, such platforms must put in place ‘appropriate and proportionate measures to ensure a high level of privacy, safety, and security of minors, on their service’. This is an obligation that has the potential to safeguard a number of children’s rights (i.e., the right to privacy and the right to protection). One point of contention is the interpretation of which platforms are considered ‘accessible to minors’. This is in part clarified in Recital 71, which states that this includes (a) platforms whose terms and conditions permit minors to use the service, (b) platforms offering services that are directed at or predominantly used by minors, or (c) platforms that are aware that some of the recipients of their service are minors, ‘for example, because it already processes personal data of the recipients of its service revealing their age for other purposes’. In reality, research shows that many children (including very young children) use platforms that are not directed at them and that explicitly state in their terms and conditions that their service is not to be used by children under a certain age (most often set at thirteen years).Footnote 46 It may be expected that certain platforms will try to argue that their platforms should not be considered to be ‘accessible to minors’. Independent research into children’s online experiences and use of platforms might be helpful in this regard, both for the platforms and for oversight bodies. Aside from this issue in relation to which platforms fall within the scope of Article 28, it might also be a challenge for platforms to decide what are ‘appropriate and proportionate measures to ensure a high level of privacy, safety, and security of minors’, particularly when considering that different age groups of children have different privacy and safety needs. In this regard, Recital 71 refers to standards, codes of conduct, and best practices, and Article 28.4 indicates that the Commission (after consulting the European Board for Digital Services, which is established by the DSA in Article 61) may formulate guidelines to support providers of online platforms. Work on such guidelines started in 2024.Footnote 47
Second, Article 28.2 prohibits targeting advertisements based on profiling ‘when they are aware with reasonable certainty that the recipient of the service is a minor’. Such advertisements have long been a concern for scholars,Footnote 48 and for organisations such as the Article 29 Working Party, which stated in one of its Guidelines that ‘organisations should, in general, refrain from profiling [children] for marketing purposes’,Footnote 49 even though this was not explicitly prohibited by the GDPR. In this light, it can only be commended that this is now also explicitly prohibited in the DSA. Yet the question could be raised whether it would not have made sense to include a broader prohibition of profiling children for commercial purposes rather than just targeted advertising. This would have been more in line with the CRC Committee’s call to ‘prohibit by law the profiling or targeting of children of any age for commercial purposes’ in its General Comment no. 25.Footnote 50 Moreover, profiling may also be used to target harmful types of content (e.g., relating to self-harm or eating disorders). Arguably, this could be covered under the risk assessment provisions, but their scope is limited to VLOPs and VLOSEs (see Section 3.5.2). Another doubt that may be raised is how the notion ‘reasonable certainty’ will be interpreted and whether this would require age verification. It would seem from the text of the DSA that this is not necessarily the case, as Article 28.3 states that compliance with the obligations of Article 28 ‘shall not oblige providers of online platforms to process additional personal data in order to assess whether the recipient of the service is a minor’, and Recital 71 adds that this obligation ‘should not incentivise providers of online platforms to collect the age of the recipient of the service prior to their use’. While this may be in line with the principle of data minimisation laid down in the GDPR,Footnote 51 it is uncertain whether the protective aim of the prohibition as well as taking measures to ensure a high level of privacy, safety, and security of minors will be effectively realised if platforms are not incentivised to know which users are actually minors. In any case, the desirability and effectiveness of age verification or age assurance have been the subject of heated debates since the emergence of the internet, and until now, this debate has not been settled.
18.3.4.2 Risk Assessment and Mitigation Obligations towards Children for the VLOPs and VLOSEs
A third category of obligations is aimed at VLOPs and VLOSEs. The first VLOPs and VLOSEs were designated by the EC on 25 April 2023. They include platforms and search engines that are hugely popular with children, such as TikTok, Snapchat, Instagram, Wikipedia, Google Play, and Google Search.Footnote 52 Article 34 requires VLOPs and VLOSEs to undertake an assessment of the systemic risks in the EU ‘stemming from the design or functioning of their service and its related systems, including algorithmic systems, or from the use made of their services’. There are four categories of risks that are listed, and as mentioned earlier, at least three of these types of risks are directly relevant for children: ‘(a) the dissemination of illegal content through their services’; ‘(b) any actual or foreseeable negative effects for the exercise of fundamental rights, including rights of the child’, and ‘(d) any actual or foreseeable negative effects in relation to gender-based violence, the protection of public health and minors and serious negative consequences to the person’s physical and mental well-being’. From a children’s rights perspective, it is particularly interesting to observe that in the course of the legislative process, the word foreseeable was added, as this could potentially trigger the precautionary principle. There is still much uncertainty about the occurrence and extent of actual harm when it comes to digital technologies.Footnote 53 There are indications that certain platform services might have an impact on the mental health of children in the short and long term,Footnote 54 and that certain features affect day-to-day life and, for instance, sleep. Yet there is still little hard evidence about the impact on children and their rights in the long run. There is simply not enough research, there are ethical questions that surround research with children, and certain technologies have not existed long enough to draw firm conclusions. We have argued before that with respect to delicate issues such as the well-being of children, the precautionary principle might thus come into play.Footnote 55 Simply put, this concept embraces a ‘better safe than sorry’ approach and compels society to act cautiously if there are certain – not necessarily absolute – scientific indications of a potential danger and not acting upon these indications could inflict harm.Footnote 56 It could of course be up for discussion whether the threshold for triggering the precautionary principle and the threshold for an effect to be foreseeable in the sense of Article 34 are in alignment. From a linguistic point of view, foreseeable does not equate to potential. A foreseeable event is, according to the Cambridge Dictionary, ‘one that can be known about or guessed before it happens’. Whether an effect can be known about will to a large extent depend on research and expert advice from a variety of disciplines. From a children’s rights perspective, however, this notion would need to be interpreted broadly in the best interests of the child, in line with UNCRC Article 3 and CFEU Article 24.
As to the methodology for the assessment of risks and their effect on the rights of the child, inspiration could be drawn from existing methodologies for Children’s Rights Impact Assessments (CRIAs).Footnote 57 Best practices for such assessments are available and could be useful in this regard. From a children’s rights perspective, in any case, it is crucial that the impact on the full range of children’s rights is assessed, and that rights are not looked at in isolation but as interacting with each other.
Following the assessment of the risks, Article 35 requires VLOPs and VLOSEs to take ‘reasonable, proportionate and effective mitigation measures, tailored to the specific systemic risks’. One type of such mitigation measure that is proposed is ‘targeted measures to protect the rights of the child, including age verification and parental control tools, tools aimed at helping minors signal abuse or obtain support’. The explicit reference to targeted measures is helpful. Recital 89 seems to indicate that such targeted measures might for instance be needed to protect minors from content that may impair their physical, mental, or moral development. The examples that are given could be helpful as well, although both age verification and parental control tools are not without their difficulties. Regarding age verification, the lack of consensus on desirability and effectiveness has already been pointed to; regarding parental control tools, it has been argued before that these types of empowerment tools should not be used to solely shift the burden for safeguarding the interests and rights of children from platforms to parents.Footnote 58 In addition, other non-child specific risk mitigation measures that are proposed, such as adapting the design, features, or functioning of services, including online interfaces, may also have a positive impact on children’s rights. Recital 89 explains in that regard that VLOPs and VLOSEs must take the best interests of the child into account when taking such measures, particularly when their services are aimed at minors or predominantly used by them.
18.3.4.3 Standards and Codes of Conduct
Finally, as the formulation of the obligations that are imposed on VLOPs and VLOSEs still remains rather abstract, the actual implementation of them will be of the utmost importance. The tools that can support platforms in that regard are standards and codes of conduct.
Article 44 states that the Commission, after consultation with the Board, shall support and promote the development and implementation of voluntary standards set by relevant European and international standardisation bodies, including in respect of ‘targeted measures to protect minors online’. In this regard, it is relevant to observe the efforts that are currently being undertaken by the Institute of Electrical and Electronics Engineers (IEEE) regarding the drafting of a standard for Age Appropriate Design for Children’s Digital Services.Footnote 59
Recital 104 explains that an area for consideration for which codes of conduct could be drafted (Article 45) is ‘the possible negative impacts of systemic risks on society and democracy, such as disinformation or manipulative and abusive activities or any adverse effects on minors’. In the Commission’s 2022 BIK+ Strategy, it was announced that the Commission will ‘facilitate a comprehensive EU code of conduct on age-appropriate design, building on the new rules in the DSA and in line with the AVMSD and GDPR. The code aims to ensure the privacy, safety and security of children when using digital products and services. This process will involve industry, policymakers, civil society and children.’Footnote 60 It continues:
[u]nder the DSA, the Commission may invite providers of very large online platforms to participate in codes of conduct and ask them to commit themselves to take specific risk mitigation measures, to address specific risks or harms identified, via adherence to a particular code of conduct. Although participation in such codes of conduct remains voluntary, any commitments undertaken by the providers of very large online platform shall be subject to independent audits.
At the end of 2022, a call was published by the Commission for members for a Special Group on the EU Code of Conduct on Age-Appropriate Design.Footnote 61 However, work on the Code of Conduct seems to have halted in favour of the drafting of guidelines by the European Commission (supra).
18.4 Children’s Rights in the Proposal for the AIA
18.4.1 The EU Policy Agenda on AI and (Children’s) Fundamental Rights
A second legislative initiative at the EU level that has the potential to significantly impact children’s rights in the digital environment is the AIA. Developing a regulatory framework for AI has been high on the EU policy agenda for some time. Initially, the EC adopted a soft-law approach consisting of non-binding recommendations and ethical guidelines. In June 2018, an independent High-Level Expert Group on Artificial Intelligence (AI HLEG) was established, which was tasked with drafting ethics guidelines for AI practitioners, as well as offering advice concerning the adoption of policy measures.Footnote 62 However, this approach changed in 2021, when the EC explicitly recognised that certain characteristics of AI, such as the opacity of algorithms and the difficulties in establishing causality in algorithmic decision-making, pose specific and potentially high risks to fundamental rights.Footnote 63 As existing legislation failed to address these risks, both the EP and the Council called for legislative action in this area.Footnote 64 Echoing these calls, the AI HLEG also stated the need to explore binding regulation to tackle some of the critical issues raised by AI. In particular, the expert group stressed the need for mandatory traceability, auditability, and ex ante oversight obligations for AI systems that have the potential to significantly impact human lives. According to the AI HLEG coordinator, AI is nothing more than an application, system, or tool developed by humans that can be used in different ways: (a) ways that cause harm, (b) ways that cause unintended harm, (c) ways that counteract harm, and (d) ways that cause good.Footnote 65 Therefore, ‘if we are intelligent enough to create AI systems, we must be intelligent enough to ensure appropriate governance frameworks that harness the good use of those systems, and avoid those that lead to (un)intentional harm’.Footnote 66 In its framework for achieving Trustworthy AI, the AI HLEG also pays (limited) attention to children’s rights. More specifically, as part of the key ethics guidelines, the AI HLEG advises paying particular attention to situations involving more vulnerable groups,Footnote 67 such as children, and refers specifically to CFEU Article 24.Footnote 68
The fact that the regulation of AI is a hot topic is evidenced by the responses to the EC’s open consultation on AI in February 2020, which attracted far more feedback submissions than consultations on any other technology-related act.Footnote 69 In these submissions, concerns about AI and children were raised by a range of stakeholders, including companies, academic institutions, and non-governmental organisations (NGOs), mostly in relation to education. For instance, the submissions mentioned that the use of AI in education may have serious consequences for the child’s life course and should therefore be considered as high risk, as it may lead to discrimination, have a serious negative impact on children’s learning, and their consent might not be properly secured.Footnote 70 More generally, children’s digital rights and AI,Footnote 71 and the use of AI in connection with the evaluation, monitoring, and tracking of children were also mentioned as areas of concern.Footnote 72
18.4.2 The Commission Proposal for an AI Act
In response to these calls for legislative action, in April 2021 the EC unveiled its proposal for the AIA, the first Act of its kind in the world.Footnote 73 It aimed to lay down harmonised rules on the development, deployment, and use of AI systems in the EU,Footnote 74 based on the values and fundamental rights of the EU.Footnote 75 Through a risk-based approach and by imposing proportionate obligations on all participants in the value chain, the proposal aimed to ensure a high level of protection of fundamental rights in general and a positive impact on the rights of special groups, including children.Footnote 76 More specifically, a risk-based categorisation of AI systems was introduced, where different levels of risk correspond to a different set of requirements. The intensity of risks determines the applicability of the requirements, and as such, a lighter regime applies to AI systems with minimal risks, while practices posing unacceptable risks are prohibited. The idea is that (groups of) individuals who are at risk of, and vulnerable to, infringements of their health, safety, and rights by new AI developments need a higher level of protection.Footnote 77
One category of prohibited practices as proposed by the Commission that is relevant for children is ‘practices that exploit vulnerabilities of specific vulnerable groups such as children or persons with disabilities in order to materially distort their behaviour in a manner that is likely to cause them or another person psychological or physical harm’, because such systems contradict Union values, for instance, by violating fundamental rights (emphasis added).Footnote 78 This is confirmed in Recital 16 and Article 5.1(b) of the proposal, although the latter does not explicitly refer to children but to ‘a specific group of persons due to their age’. As an example of such an exploitative AI system, the EC referred to a doll with an integrated voice assistant, which, under the guise of a fun or cool game, encourages a minor to engage in progressively dangerous behaviour or challenges.Footnote 79 While this is an extreme example, children’s rights advocates argued that a number of persuasive design features often found in AI systems used by children could fall under this prohibition.Footnote 80 They cited, for example, the autoplay features of social media companies that aim to increase user engagement, and which could be said to affect children’s sleep and education, and ultimately their health and well-being.Footnote 81 Moreover, when such recommender systems promote harmful content, they might even lead to sexual exploitation and abuse.Footnote 82 Nevertheless, the prohibition was criticised by various stakeholders for its limitations, in particular the limitation to physical and psychological harm,Footnote 83 the requirement of malicious intent,Footnote 84 and the lack of references to fundamental rights.Footnote 85
The Commission proposal also mentions children and their rights in the context of the classification of AI systems as high risk and the related requirements for the provision or use of such systems. According to Recital 28 of the proposal, the extent of the adverse impact on fundamental rights caused by AI systems is of particular relevance when classifying them as high risk.Footnote 86 In this regard, Recital 28 contains an explicit reference to CFEU Article 24, which grants children the right to such protection and care as is necessary for their well-being. Moreover, it mentions the UNCRC and the recently adopted General Comment no. 25 on children’s rights in the digital environment,Footnote 87 which sets out why and how State parties should act to realise children’s rights in a digital world. On reflection, however, this does not mean that the proposal considers AI systems that are likely to be accessed by children or to impact upon them to be high risk by default. The Commission proposal also does not impose any real obligation on providers or users of high-risk AI systems to carry out and publish an assessment of the potential impact on children’s rights. Instead, providers of high-risk AI systems will have to conduct a conformity assessment,Footnote 88 to demonstrate compliance with a list of essential requirements, before placing the system on the market or putting it into service.Footnote 89 These requirements include setting up a risk management system;Footnote 90 ensuring that the data sets used comply with data quality criteria; guaranteeing accuracy, robustness, and data security; preparing technical documentation; logging; and building in human oversight to minimise risks to health, safety, and fundamental rights.Footnote 91 Regarding the latter, human–machine interface tools and measures should be integrated into the design of the AI system.Footnote 92 Users of high-risk AI systems must use the systems in accordance with the provider’s instructions for use. While this seems like a solid set of requirements, it could still be questioned how the full implementation of the risk management system described in Article 9 of the AIA proposal can be ensured without a real obligation first to identify and evaluate risks to children. In addition, the Commission proposal lacks individual rights and remedies against infringements by the provider or user, in contrast to, for instance, data subject rights under the GDPR.Footnote 93
Finally, the Commission proposal also contains transparency requirements that apply to some specific limited-risk AI systems. This category essentially covers systems that can mislead people into thinking they are dealing with a human (e.g., automated chatbots such as the ‘My AI’ tool used by Snapchat).Footnote 94 First, the proposal requires AI providers to design their systems in such a manner that individuals interacting with these systems are informed that they are interacting with a bot (i.e., ‘bot disclosure’) unless it is contextually obvious. Second, the proposal requires users of emotion recognition systems to inform exposed persons that such a system is being used, and users of AI systems that generate deep fakes to disclose the AI nature of the resulting content. These transparency requirements raised a number of questions (e.g., does this mean that there is a right to an explanation?),Footnote 95 including about the standard of transparency when children are involved. More specifically, do providers have an obligation to offer information in a child-friendly manner – similar to the GDPR transparency obligations – when their AI systems are likely to be accessed by a child? This remained unclear in the Commission proposal.
18.4.3 Amendments Proposed by the IMCO and LIBE Committees
The discussions in the EP were led by the Committee on Internal Market and Consumer Protection (IMCO) and the LIBE under a joint committee procedure.Footnote 96 Additional references to children were added in the IMCO-LIBE draft report.Footnote 97 The first was amendment 208, which proposed a requirement for the future EU advisory body on AI, the so-called AI Board, to provide guidance on children’s rights, in order to ‘meet the objectives of this Regulation that pertain to children’. Second, and perhaps more interesting, was amendment 289, which added to the list of high-risk AI systems ‘AI systems intended to be used by children in ways that have a significant impact on their personal development, including through personalised education or their cognitive or emotional development’. Amendment 23 specified in this context that children constitute ‘an especially vulnerable group of people that require additional safeguards’. Depending on how broadly this category is interpreted (e.g., does it go beyond an educational context?), this could lead to stronger protection. However, like the Commission proposal, the draft report did not contain a general obligation of specific protection for children in the context of AI.
Furthermore, at a public event following the publication of the Commission proposal, one of the shadow rapporteurs of the IMCO Committee on the proposal for an AIA openly criticised the fact that the Commission proposal contained no obligation to carry out fundamental rights impact assessments.Footnote 98 In this regard, amendment 90 of the draft report specified that the obligation of risk identification and analysis for providers of high-risk AI systems should also include the known and reasonably foreseeable risks to the fundamental rights of natural persons.Footnote 99 In addition, the shadow rapporteur argued that the Commission proposal overlooked the fact that AI systems that are transparent and meet the conformity requirements – and can thus move freely on the market – could still be used in violation of fundamental rights. This criticism was reflected in the draft report, which underlined that ‘users of high-risk AI systems also play a role in protecting the health, safety and fundamental rights of EU citizens and EU values’,Footnote 100 and placed more responsibilities on the shoulders of said users.Footnote 101
18.4.4 Amendments by the Council and the EP
Both the general approach of the Council and the amendments adopted by the EP introduced a number of noteworthy changes.
The Council adopted its common position (General Approach) on 6 December 2022, which includes several noteworthy elements concerning children and their rights.Footnote 102 First, as Malgieri and Tiani argue, it adopted a wider and more commercially relevant definition of vulnerability.Footnote 103 More specifically, the Council proposed to prohibit the exploitation not only of vulnerabilities based on age, but also of those based on disability or on the social or economic situation of the individual. This was an improvement in light of children’s rights concerning accessibility and protection from economic exploitation. The Council also deleted the malicious intent requirement, and included the possibility that harms may be accumulated over time (Recital 16), thereby resolving some of the criticisms mentioned earlier. In addition, more attention was given to fundamental rights more generally in the context of the requirements for providers of high-risk AI systems. Regarding classification, the Council proposed that AI systems that are unlikely to cause serious fundamental rights violations or other significant risks should not be classified as high risk. Regarding the requirements for providers of high-risk systems, the Council text included a requirement for the ‘identification and analysis of the known and foreseeable risks most likely to occur to health, safety, and fundamental rights in view of the intended purpose of the high-risk AI system.’Footnote 104
Following lengthy discussions on the more than 3,000 amendments tabled in response to the draft report by the IMCO-LIBE committees, the EP plenary session adopted its negotiating position (Compromise Text) on 14 June 2023.Footnote 105 However, despite numerous amendments being tabled with the potential to directly impact children and their rights, none of these child-specific amendments were included in the Compromise Text of the EP. Consequently, these amendments were not part of the trilogue negotiations.Footnote 106 In relation to this, children’s rights organisations raised concerns about the level of consideration given to children’s rights during the legislative process.Footnote 107 Nevertheless, the Compromise Text does include several notable amendments that could impact children and their rights. First, it included a ban on AI systems inferring the emotions of a natural person in education institutions, which has implications for school children.Footnote 108 Second, the EP proposed to include as part of the risk management system for providers of high-risk AI systems a requirement to identify, estimate, and evaluate known and reasonably foreseeable risks to fundamental rights (including children’s rights). Third, the introduction of general principles applicable to all AI systems under the proposed Article 4a is noteworthy. This article requires operators of AI systems to make their best efforts to develop and use these systems in accordance with these principles. The principles encompassed various aspects, including the preservation of human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity, non-discrimination and fairness, as well as social and environmental well-being. To foster the voluntary application of these principles to AI systems other than high-risk AI systems, the EP proposed the establishment of codes of conduct. These codes should particularly ‘assess to what extent their AI systems may affect vulnerable persons or groups of persons, including children, the elderly, migrants and persons with disabilities or whether measures could be put in place in order to increase accessibility, or otherwise support such persons or groups of persons’.Footnote 109 Finally, a new Article 4d outlined requirements for the EU, its Member States, as well as providers and deployers of AI systems to promote measures fostering AI literacy, which could be beneficial for children. This included teaching basic notions and skills regarding AI systems and their functioning.
18.4.5 The Final Text of the AIA
The final text of the AIA was adopted by the EP on 13 March 2024 and endorsed by the Council on 21 May 2024.Footnote 110 The specific references to children and their rights have remained, with noteworthy changes.
First, with regard to the prohibited practices, Article 5.1(b) now states:
the placing on the market, the putting into service or the use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm [emphasis added].
Thus, the final text does not contain a malicious intent requirement (i.e., ‘with the objective, or the effect’), and adopts the broader concept of vulnerability (‘disability’, ‘social or economic situation’). Furthermore, Article 5 now covers ‘significant harm’ to ‘that person or another person’, extending beyond physical or psychological harm (infra).Footnote 111 However, a lack of clarity regarding the actual scope of the prohibition remains. A crucial point that needs clarification concerns the threshold for significant harm. For instance, would a violation of children’s rights meet this threshold? According to Recital 29, this may include harms accumulated over time. In addition, this provision is rather broad regarding who suffers harm and seems to cover third-party effects as well.Footnote 112
Second, the references to children’s rights in the context of the classification (AIA Recital 48) and requirements for (AIA Article 9.9) high-risk AI systems have also remained, with some subtle changes. At first glance, these references give the impression that the EU considers that children’s rights and their best interests play an important role in the regulation of high-risk AI systems. Article 9.9 of the AIA, for example, states that ‘when implementing the risk management system as provided for in paragraphs 1 to 7, providers shall give consideration to whether in view of its intended purpose the high-risk AI system is likely to have an adverse impact on persons under the age of 18’ (emphasis added). However, as mentioned, this does not mean that AI systems that are likely to be accessed by children or impact them are considered high risk by default. Notably, the word specific was omitted from the final text, arguably reducing the emphasis compared with the EC proposal. The AIA classifies all systems used within a list of predetermined domains as high risk.Footnote 113 A distinction is made between two sub-categories of AI systems: (a) AI systems that are products or safety components of products that are already covered by EU health and safety legislation, and (b) standalone AI systems used in specific (fixed) areas.Footnote 114 Regarding the latter, one of the areas included that is particularly relevant for children is educational and vocational training – both in terms of determining access to such training and evaluating individuals.Footnote 115 This could include, for example, AI-enabled online tracking, monitoring, and filtering software on educational devices, which could have a chilling effect on children’s right to freedom of expression or violate their right to privacy. This is reminiscent of the Ofqual algorithm debacle in the United Kingdom, where an automated decision-making system was employed to calculate exam grades, leading to discriminatory outcomes for children from lower socio-economic backgrounds.Footnote 116 Such systems can clearly violate children’s right to education, as well as their right not to be discriminated against and perpetuate historical patterns.Footnote 117 Another area where AI systems are likely to present high risks to children (as well as adults) is in the allocation of public assistance benefits and services.Footnote 118 Recital 37 specifies that, owing to the vulnerable position of persons depending on such benefits and services, AI systems used in this context may have a significant impact on the livelihoods and rights of the persons concerned – including their right to non-discrimination and human dignity. A concrete example is the so-called benefits scandal in the Netherlands, where an AI-enabled decision-making system withdrew and reclaimed child benefits from thousands of families, disproportionally affecting children from ethnic minority groups.Footnote 119 Aside from these two areas, Annex III lists biometric identification, categorisation, and emotion recognition; management and operation of critical infrastructure; employment; law enforcement; migration; and administration of justice and democratic processes as high-risk areas. 
The EC can also add sub-areas within these areas (subject to a veto from the EP or Council).Footnote 120 However, other domains where AI systems and automated decision-making are employed would not be considered high risk, even if they are likely to be accessed by children or impact them or their fundamental rights. This leaves out a whole range of AI systems that could affect the daily lives of children, such as online profiling and personalisation algorithms, connected toys, and content recommender systems on social media.Footnote 121
The final text includes only voluntary commitments for low-risk AI systems. Given the rapid development of AI technologies and how difficult it is at this stage to imagine the future impact on children and their rights, this feels like a very light regulatory regime. A more cautious approach – based on the precautionary principle – could have been to include a binding set of general principles (supra), including the best interests of the child (similar to recital 89 of the DSA), fairness, and non-discrimination for all AI systems in the AIA.Footnote 122
With regard to the transparency requirements for certain AI systems, the final text includes a specific reference to children – or at least to ‘vulnerable groups due to their age’. Article 50 of the AIA states that AI systems should be designed so that individuals are informed that they are interacting with an AI system, unless it is contextually obvious. In relation to this, Recital 132 of the AIA specifies that when implementing this obligation, the characteristics of vulnerable groups owing to their age or disability should be taken into account, if these systems are intended to interact with those groups as well.Footnote 123
Finally, the final text also grants rights to individuals (including children) affected by the output of AI systems, including the right to lodge a complaint before a supervisory authority,Footnote 124 and the right to an explanation in certain instances.Footnote 125 However, children are not specifically mentioned in these provisions.
18.5 Discussion: A Children’s Rights Perspective on the DSA and the AIA
It can only be welcomed that both instruments refer to children and their rights. The question is, however, whether the instruments have the potential to ensure that children’s rights will be effectively implemented. For the DSA, the answer is quite clear: it holds great promise for advancing children’s rights, depending on the actual implementation and enforcement. Where references to children were few and far between in the Commission proposal, the final text appears to take children and their interests (finally) seriously by imposing obligations on VLOPs that could make a difference. Moreover, in addition to the provisions that directly refer to children that have been discussed earlier, there are of course other provisions that will indirectly have an impact on children as a subgroup of individuals in general. Examples are the provisions regarding recommender systems (Articles 27 and 38) and the design of online interfaces (Article 25). While the text of the law indeed provides many opportunities, the actual realisation of children’s rights will depend on the implementation and enforcement. The DSA does create enforcement mechanisms and oversight bodies that are responsible for ensuring this.Footnote 126 In 2024, the Commission already launched formal proceedings against, among others, TikTok and Meta (Facebook and Instagram) related to the protection of minors.Footnote 127 The Commission expresses concerns about, for example, age verification, default privacy settings, and behavioural addictions that may have an impact on the rights of the child. It thus seems that the enforcement of the DSA will move forward faster than, for instance, the enforcement of the GDPR.
For the AIA, it remains to be seen whether it will succeed in guaranteeing children’s rights. Children’s rights are mentioned in the Preamble and provisions of the Act – which is laudable – and it clearly acknowledges the potential negative impact of AI systems on their rights. However, whether these acknowledgements are sufficient for protecting and promoting children’s rights in an increasingly AI-driven world is questionable. First, the prohibition of AI systems that exploit vulnerable groups leaves questions about the threshold for significant harm and its interplay with other instruments. Second, while the final text mentions that the impact of AI systems on children’s rights is considered ‘of particular relevance’ when classifying them as high risk,Footnote 128 AI systems that are likely to be accessed by children or to impact them are not considered high risk by default. Finally, the AIA does not explicitly introduce a general obligation of ‘specific protection’ for children when AI systems and automated decision-making infiltrate their lives – in contrast to, for instance, the GDPR when it comes to the processing of their personal data (recital 38) or – arguably – the DSA requirement to ensure a high level of privacy, safety, and security for minors. Introducing an obligation to ensure the best interests of the child for all AI systems that are likely to impact children could have led to more effective rights implementation in practice.
From a children’s rights perspective, a few questions remain. First, the adoption of any legislative initiative that affects children should be accompanied by a thorough Children’s Rights Impact Assessment (CRIA).Footnote 129 Although both proposals were preceded by an impact assessment, it can hardly be said that these assessments would qualify as CRIAs. A true CRIA would assess the impact of the proposed measures on the full range of children’s rights and would balance conflicting rights (both where children’s rights would conflict with each other and where children’s rights would conflict with the rights of adults, businesses, or other actors). A CRIA is also the way to evaluate whether a proposed measure takes the best interests of the child as a primary consideration. The best interests of the child is a key children’s rights principle, which is laid down in Article 3 of the UNCRC and Article 24 of the CFEU. This principle should guide all actions and decisions that concern children. This is also very closely linked to another children’s rights principle, laid down in Article 12 of the UNCRC, which is the right to be heard. Children’s views must be taken on board and must be given due weight. Although in the preparatory steps leading towards the adoption of the proposals, children’s rights organisations have had opportunities to share their views and suggestions, this does not replace the actual engagement of children in that process. This is – again – a lesson to be learnt. Moreover, CRIAs might also be helpful for the addressees of the legislative acts, when assessing risks that their services or systems pose for children and their rights.
Second, experiences with other legislative instruments, such as the GDPR, have shown that vague wording in legislative instruments often leaves addressees at a loss as to how to implement their obligations (notwithstanding their often well-meant intentions). Hence, fast and concrete guidance,Footnote 130 for instance, by means of Commission Guidelines (Article 28.4 DSA), codes of conduct, or guidelines by the newly established European Board for Digital Services or the European Artificial Intelligence Board, will be essential. In addition, whereas enforcement of the GDPR by Data Protection Authorities has been argued to be slow, lacking, or not prioritising children, it will be up to Member States to ensure that the DSA and AIA oversight bodies are well resourced, and it will be up to the Commission to take up its supervisory role when it comes to the VLOPs and VLOSEs.
Finally, both instruments adopt an approach that predominantly focuses on safety and risks. There are few to no obligations for platforms to take measures to support children, to enable them to use the services in ways that fully benefit them, and to explore the opportunities that such services offer. Although the BIK+ Strategy, for instance, pays more attention to this, it is perhaps a missed opportunity to put into practice some of the more positive measures that General Comment no. 25 requires States to take.
18.6 Conclusion
The EU legislator is determined to minimise risks that are posed by platforms and AI-based systems by imposing various obligations on a range of actors. While many of those obligations are not child-specific, some of them are. Children who grow up today in a world where platforms and AI-based systems are pervasive might be particularly vulnerable to certain risks. The EU legislator is aware of this and pays increasing attention to the risks to children and their rights, although this is not necessarily the case to the same extent across different legislative initiatives. While the DSA emphasises the protection of minors quite heavily, the AIA is less outspoken. Both during the legislative process, and in the stages of implementation and enforcement, the rights and principles contained in the UNCRC should be duly taken into account in order to effectively realise children’s rights in the digital environment.
