The fourth and final part of this book looks at vulnerable groups in the digital context. The digital revolution has reshaped societies, economies, and human interactions. However, its benefits have not been equitably distributed. Vulnerable groups – those marginalised owing to socio-economic status, age, gender, ethnicity, language, geographic location, or other circumstances – often find themselves on the periphery of these advancements.Footnote 1 In many cases, the rapid adoption of digital technologies has exacerbated pre-existing inequalities and created new barriers to the enjoyment of fundamental rights. The United Nations Secretary-General has drawn attention to the fact that ‘technologies can be, and increasingly are, used to violate and erode human rights, deepen inequalities and exacerbate existing discrimination, especially of people who are already vulnerable or left behind’.Footnote 2 This part explores some of these human rights challenges for vulnerable groups, aiming to provide different perspectives in relation to the question: What additional challenges do vulnerable groups face in the digital realm? The focus areas – the digital divide, children’s rights, linguistic inclusivity in education, and the realities of vulnerable workers in the context of digital transformation – were selected because they address very different but interconnected dimensions of vulnerability in the digital environment and illustrate the broad range of issues that can arise in relation to vulnerable groups. Together, the chapters examine how digitalisation interacts with broader social, cultural, and economic systems, often reinforcing existing disparities; at the same time, they draw attention to the transformative potential of inclusive, rights-based digital policies and practices.
Chapter 17. The Digital Divide: Reinforcing Vulnerabilities
Chapter 17 explores the systemic inequalities exacerbated by unequal access to digital technologies, known as the digital divide. This divide disproportionately affects vulnerable populations such as women, older people, disabled people, rural communities, and low-income individuals, limiting their access to services, education, and other opportunities. The chapter looks at the root causes of the digital divide, including socio-economic disparities, geographic isolation, cultural and language barriers, technological gaps, and insufficient policy interventions. It emphasises the human rights implications of this divide, such as restricted access to education, healthcare, and democratic participation, and examines international and regional policy responses, noting their shortcomings in addressing the issue comprehensively.
In response to the core question of this part – What additional challenges do vulnerable groups face in the digital realm? – the chapter identifies several unique challenges faced by vulnerable groups in the digital realm: (a) greater difficulty in gaining access to digital devices, services and the internet; (b) limited digital skills preventing effective use of digital technologies (fewer educational and training opportunities tailored to these groups’ needs); (c) amplification of the digital divide by existing inequalities, such as gender discrimination, generational isolation, and poverty, perpetuating cycles of disadvantage (e.g., women face online harassment and cultural restrictions, while older adults may be excluded from digital healthcare and public services); and (d) frequent overlooking of the needs of vulnerable groups by market-driven digital initiatives or inadequate regulatory frameworks.
Chapter 18. How the EU Safeguards Children’s Rights in the Digital Environment: An Exploratory Analysis of the EU Digital Services Act and the Artificial Intelligence Act
Following the broad examination of the digital divide, Chapter 18 narrows the focus to a specific vulnerable group – children – and assesses whether and how legislative efforts seek to protect them in digital environments. Eva Lievens and Valerie Verdoodt scrutinise the EU’s Digital Services Act (DSA) and Artificial Intelligence Act (AIA) from the perspective of children’s rights in the digital environment. They explore the potential of these legislative frameworks to safeguard children from harm while enabling their safe engagement with digital platforms and artificial intelligence (AI) systems. The authors emphasise the unique vulnerabilities of children in the digital realm and assess how effectively the acts consider these vulnerabilities. They answer this part’s core question by noting the following additional challenges faced by children in the digital realm: (a) exploitation of their vulnerability (AI and platform design utilise manipulative practices such as addictive design, autoplay features, and the promotion of harmful content); (b) difficulty in navigating platforms and AI systems safely (as usually there are no child-specific interfaces or adequate transparency about their operation); and (c) being less likely to benefit meaningfully from digital engagement (as there is an over-emphasis on risk and a neglect of the positive potential).
Chapter 19. Right to Education in Regional or Minority Languages: Invasions, COVID-19 Pandemic, and Other Developments
Chapter 19 explores the intersection of linguistic rights, digital access, and education during the COVID-19 pandemic. It focuses on how the rapid digitalisation of education disproportionately affected students from regional or minority language communities. Vesna Crnić-Grotić demonstrates how national and regional policies frequently failed to account for linguistic diversity when implementing emergency online education measures. This lack of linguistic inclusivity in digital education threatened the right to education for these communities as their academic progress was hindered. In relation to this part’s overarching question, the chapter notes the following additional challenges that students from minority language communities face in the digital realm: (a) less access to the internet and the digital tools necessary for participating in online education; (b) linguistic marginalisation (the dominance of major languages in digital content and platforms excluding regional and minority language speakers, denying them equitable access to education and information online); and (c) systemic neglect of linguistic diversity (and thus the unique needs of minority language groups) in educational policy and planning.
Chapter 20. Technological Acceleration and the Precarisation of Work: Reflections on Social Justice, the Right to Life, and Environmental Education
Chapter 20 examines how digital transformations have reshaped labour dynamics in Brazil, exacerbating inequalities for vulnerable workers. Raizza da Costa Lopes, Samuel Lopes Pinheiro, and Florent Pasquier show how the ‘gig economy’ and the ‘uberisation’ of work are driving a shift towards precarious working conditions, including financial instability, lack of legal protections, and opaque algorithmic control. Additionally, the chapter connects labour issues with environmental education, emphasising its role in addressing socio-economic and ecological inequalities. It advocates for integrating human rights principles, particularly the right to life and social justice, into labour and environmental policies. The chapter answers the core question of this part of the volume by drawing attention to the following additional challenges for vulnerable workers in the digital realm: (a) opaque algorithmic management, which controls their tasks and incomes while offering little transparency or recourse (leading to insecurity and potential exploitation); (b) unpredictable earnings owing to the gig economy model (often resulting in problems covering basic living expenses); (c) lack of labour protection (platform workers may lack health insurance, pensions, or even adequate safety standards); and (d) stress owing to the demand for constant availability and productivity (which undermines their well-being and work–life balance).
Shared Themes and Interconnections
The four chapters collectively highlight the complex challenges vulnerable groups face in the digital realm, with each chapter addressing a specific issue regarding inequality and vulnerability. Despite the different focus areas, there are commonalities in the problems faced by vulnerable groups in those specific contexts. The main parallels follow, framed around the overarching themes that connect the chapters.
A. Structural Inequalities in Digital Access
Each chapter emphasises how structural inequalities in access and design deepen exclusion, reinforcing systemic barriers for vulnerable populations in different contexts.
Chapter 17 on the digital divide explores how foundational barriers, such as a lack of internet connectivity, devices, and digital literacy, disproportionately exclude vulnerable groups.
Chapter 18 on children’s rights shows how children are uniquely affected by systemic inequalities, such as inadequate privacy safeguards, harmful content, and a lack of user-friendly design on digital platforms.
Chapter 19 on minority language education demonstrates how digital education platforms (mostly designed for dominant languages) marginalise minority language speakers, especially during crises such as the COVID-19 pandemic.
Chapter 20 on the precarisation of work shows how gig workers, particularly in the Global South, are excluded from equitable digital participation owing to platform-centric labour models that prioritise profits over worker well-being.
B. Insufficient Policy and Governance Frameworks
The absence or inadequacy of regulatory frameworks is noted across all chapters. Gaps in regulation and policymaking can have the effect of perpetuating inequalities or leaving vulnerable groups at the mercy of exploitative systems (which is why the authors advocate for improved lawmaking and policymaking in the area).
Chapter 17 critiques the lack of comprehensive policies regarding digital inclusion, leaving the needs of vulnerable populations insufficiently addressed.
Chapter 18 notes how regulatory frameworks, such as the DSA and AIA, while promising, are often limited in scope and lack robust enforcement mechanisms for safeguarding children’s rights.
Chapter 19 draws attention to how policy has failed to ensure linguistic inclusivity during the COVID-19 pandemic, pointing out the lack of foresight and planning for marginalised communities.
Chapter 20 discusses resistance from digital platforms to regulatory efforts, illustrating how weak governance leaves gig workers without basic labour protections.
C. Amplification of Vulnerabilities during Crises
Most chapters in this part also look at the impact of the COVID-19 pandemic on vulnerable groups. The authors show how crises can act as accelerants, amplifying pre-existing vulnerabilities and exposing gaps in infrastructure, policy, and protection mechanisms across sectors.
Chapter 17 notes how the pandemic deepened existing digital disparities, citing the example of telemedicine services during COVID-19 that excluded older people lacking digital skills.
Chapter 19 focuses on the pandemic’s impact on educational access, with minority language speakers disproportionately left behind owing to digital unpreparedness.
Chapter 20 shows how global crises, such as COVID-19, exacerbate the precarisation of platform workers, particularly in the Global South.
D. The Transformative Potential of Inclusive, Rights-Based Interventions
As a fourth common theme, the authors emphasise the transformative potential of inclusive, rights-based interventions to counter digital exclusion and exploitation.
By progressing from structural issues to specific case studies and concluding with a global perspective, this part of the book seeks to respond to the question ‘What additional challenges do vulnerable groups face in the digital realm?’ and to provide a nuanced understanding of how the digital domain interacts with the rights of vulnerable groups. Digitalisation, while offering opportunities, can disproportionately harm vulnerable groups by deepening systemic inequalities (or even exposing them to exploitation – see Chapter 20) and failing to adequately safeguard their rights. The authors draw attention to different aspects and contexts of this problem and emphasise the need for inclusive policies and practices that prioritise equity and human dignity in the digital age.
17.1 Introduction
Digital technologies have permeated almost every aspect of modern life. The potential for such technologies to enhance the enjoyment of human rights is coupled with risks of exclusion, surveillance, and growing inequality, particularly for vulnerable populations. We need to ensure that everyone benefits from digitalisation, even those who currently lack the skills or the means necessary for it. As the European Parliament has highlighted, digital technologies ‘can either help create a more inclusive society and reduce inequities, or they can amplify existing inequalities and create new forms of discrimination’.Footnote 1 It is often the most vulnerable sectors of society that are not benefiting from digitalisation (as they tend to have fewer resources and more obstacles to access). Accordingly, their needs and human rights require special attention in this process.
The gap between demographics and regions that have access to digital technology and those that do not is called the ‘digital divide’. There is nothing new about the digital divide; it has been receiving attention since the mid-1990s.Footnote 2 Despite long-standing awareness of the problem, it persists. As the United Nations (UN) General Assembly noted in 2016: ‘Despite the previous decade’s achievements in information and communications technology connectivity, […] many forms of digital divides remain, both between and within countries and between women and men. […] [D]ivides are often closely linked to education levels and existing inequalities, and we recognize that further divides can emerge in the future, slowing sustainable development’.Footnote 3
Such divides are problematic as they show that a significant number of individuals lack access to the plethora of benefits that digitalisation has brought (such as faster administrative processes, round-the-clock access to information, and new ways to express oneself). The European Parliament has recognised that digital divides ‘may accentuate social differences by reducing some workers’ opportunities to obtain quality employment’, and acknowledged the especially problematic position of vulnerable groups in relation to the digital divide by noting the potential ‘negative impact of the digitalisation of public and private services on workers and people such as older people and persons with disabilities, low-income, socially disadvantaged or unemployed citizens, migrants and refugees or people in rural and remote areas’.Footnote 4 As M. N. Cooper has highlighted, those on the right side of the digital divide ‘find themselves better trained, better informed, and better able to participate in democracy’, whereas the ‘disconnected become disadvantaged and disenfranchised’, with exclusion manifesting in all aspects of society.Footnote 5
Vulnerable groups are disproportionately impacted by the digital divide, making it both a symptom and a driver of systemic inequities.Footnote 6 In the words of the UN Secretary-General, ‘[d]igital divides reflect and amplify existing social, cultural and economic inequalities’.Footnote 7 The divide not only restricts access to critical services such as education, healthcare, and employment, but also undermines fundamental human rights, including the rights to equality, dignity, and participation in societal decision-making, thereby perpetuating a cycle of disadvantage for vulnerable groups. Bridging the digital divide and ensuring equal access to digital technology is therefore crucial for promoting equity and social inclusion in our increasingly digital world. Some of the human rights implications of the digital divide are examined in this chapter to illustrate that the divide is not just a practical problem but also a legal one.
This chapter focuses on the digital divide in relation to women and older people as sample groups because they are uniquely positioned at the intersection of systemic exclusion and under-representation, making them illustrative of how the digital divide magnifies inequalities and contributes to human rights violations. By examining these groups, the chapter seeks to illustrate the barriers that vulnerable groups are up against, draw attention to the human rights issues that they face, and demonstrate the need for tailored solutions within broader efforts to address digital inequities. The chapter also examines international action regarding the digital divide and whether additional steps need to be taken to adequately respond to the multitude of challenges that the digital divide presents.
17.2 The Digital Divide and Its Contributing Factors
There is a plethora of different definitions of the digital divide. The European Union (EU) has referenced the Organisation for Economic Co-operation and Development (OECD) definition of the digital divide,Footnote 8 which refers to ‘the gap between individuals, households, businesses and geographic areas at different socio-economic levels with regard both to their opportunities to access information and communication technologies (ICTs) and to their use of the internet for a wide variety of activities’.Footnote 9 When the term ‘digital divide’ first emerged in the late twentieth century, it was used to describe the gap between people who had access to mobile phones and those who did not. Over time, its meaning has broadened to encompass the technical and financial ability to use technology and access the internet. As technology evolves, the concept of the digital divide continues to change.Footnote 10
The digital divide is influenced by a range of interconnected factors that determine access to and use of technology for individuals and communities. Such factors include socio-economic disparities, geographic isolation, cultural and language differences, technological barriers, and gaps in law and policy. Each of these elements plays a role in determining who can benefit from the opportunities provided by digital technologies and who remains excluded. Understanding these underlying causes helps design effective strategies to bridge the divide and promote equitable digital inclusion.
First, socio-economic inequalities, such as education, employment status, and income levels, directly influence access to digital technologies and the internet. Low-income households often cannot afford devices or reliable internet connections. And individuals with a limited educational background may lack the skills to use digital tools effectively.Footnote 11 This disparity is evident across various demographics and regions. For instance, in India, the digital divide is heavily influenced by income and educational attainment, particularly among disadvantaged caste groups.Footnote 12 Second, remote or rural regions often suffer from a lack of investment in broadband and mobile networks (owing to higher costs and logistical challenges). Geographic isolation hinders digital accessibility, creating a stark gap compared with urbanised areas that have robust technological infrastructure. A similar tendency persists at the international level, with developing countries lagging behind developed nations in technology uptake. Third, cultural norms and language differences often limit the inclusivity of online spaces. Many websites and digital tools are predominantly available in a small number of global languages, creating obstacles for non-native speakers or members of linguistic minorities.Footnote 13 Cultural attitudes towards technology, such as mistrust or unfamiliarity, can further deepen this impact. Fourth, technological and infrastructure barriers are among the more obvious causes of the digital divide. The lack of broadband networks and high device costs clearly restrict access to digital technologies. The quality and speed of available internet also vary, affecting users’ ability to engage fully with digital services. And fifth, regulatory frameworks and government policies can play a critical role in determining the availability and affordability of digital infrastructure. Moreover, inadequate support for public digital initiatives or over-reliance on market-driven models can exclude marginalised populations. It has also been argued that sometimes groups or entities can use ‘political institutions to enact policies that block the spread of the Internet’.Footnote 14
Summing up, these challenges perpetuate unequal access to technology and its benefits. Recognising the interconnected nature of these factors is essential for fostering digital equity and ensuring that the benefits of technology are accessible to all.
17.3 Gender Gap
The digital divide manifests differently across various groups, highlighting distinct patterns of exclusion. Among these, the gender gap and the age gap are particularly significant, as they reflect systemic barriers rooted in social, cultural, and economic inequalities. These gaps not only reveal the unique challenges faced by specific populations but also illustrate the broader structural issues that perpetuate digital inequities worldwide.
One of the most widely recognised digital divides is the gender digital divide (gender gap). According to the International Telecommunication Union (ITU), 70 per cent of men worldwide were using the internet in 2023, compared with 65 per cent of women, meaning that globally there were 244 million more men than women using the internet that year.Footnote 15 In low-income countries, only 20 per cent of women have access to the internet (compared with 35 per cent of men).Footnote 16 Yet, as UN Women has emphasised, ‘digital inclusion and literacy are critical to the well-being and success of women and girls in society, including their ability to take an informed part in electoral processes and exercise their right to vote and to stand for election’.Footnote 17
17.3.1 Specific Issues Faced by Women and Their Human Rights Implications
The gender digital divide highlights the problems women and girls can encounter in accessing and using digital technologies, particularly in developing countries.Footnote 18 While digital tools offer opportunities for education, economic empowerment, and social engagement, systemic barriers rooted in cultural norms, economic inequalities, and safety concerns disproportionately hinder the digital inclusion of women. This section explores the distinct obstacles women face in their digital journey and looks at the impact of the gender gap on women in relation to various aspects of their lives, including employment opportunities, education, and social inclusion. The effects are placed in the context of human rights to pinpoint the potential human rights infringements arising from the gender gap.
Women in many regions face significant barriers to accessing digital technologies owing to affordability issues and limited infrastructure, particularly in low-income and rural areas. The lack of affordable devices and reliable internet disproportionately affects women, as they are more likely to have lower incomes and fewer economic opportunities.Footnote 19
A particularly difficult aspect to grapple with is the existence of entrenched gender norms and societal expectations, which may discourage women from using digital technologies or pursuing education in digital skills. Cultural biases can restrict women’s access to public spaces such as internet cafés or limit their ownership of devices.Footnote 20 For example, in Jordan, societal attitudes, reinforced by cultural mores and educational institutions, even result in university-educated men being uneasy about allowing women equal access to the internet and computers.Footnote 21
Moreover, in many (especially developing) countries, women tend to have fewer opportunities for formal education and training, which results in them lacking digital literacy and skills. This limits their ability to use technology effectively and benefit from its advantages. For example, in India, women’s digital competencies are significantly lower than men’s, influenced by household dynamics, caste, and limited digital exposure.Footnote 22 And when women do access the internet, they often have a more negative experience than men owing to online harassment and abuse, which deters them from engaging with digital platforms. Online abuse infringes upon their right to privacy and security, as laid down in human rights instruments such as the Universal Declaration of Human Rights (UDHR): ‘[n]o one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation’.Footnote 23 Women are often targeted with gender-based violence online, such as cyberstalking, threats, and harassment, which creates a hostile environment that limits their digital participation. Many studies have concluded that women are significantly more likely to experience cyberstalking and gender-based abuse than men.Footnote 24
The gender gap brings with it many detrimental effects on the everyday lives and opportunities of women and girls. The digital divide negatively affects women’s educational prospects, impacting women’s and girls’ right to education and exacerbating existing gender disparities in learning opportunities. Digital tools provide critical access to educational resources, online courses, and skills-building programmes, yet many girls, particularly in low-income and rural areas, are excluded owing to economic, infrastructural, and cultural barriers.Footnote 25 This exclusion restricts their ability to gain necessary competencies for academic and professional success. This educational gap further aggravates the employment divide, as women are less prepared for the digital economy.Footnote 26 Women are less likely to work in technology-related fields,Footnote 27 and are often excluded from higher-paying jobs that require technological proficiency, perpetuating economic disparities between genders and impacting the human right to work. As proclaimed in the UDHR, ‘[e]veryone has the right to work, to free choice of employment, to just and favourable conditions of work and to protection against unemployment’.Footnote 28 This free choice of employment is restricted if women are not given the opportunity to develop the skills necessary to have a choice to work in ICT or in higher-paying jobs that require technological proficiency. Moreover, the lack of digital literacy can hinder women’s ability to participate in lifelong learning opportunities, which are crucial for adapting to the rapidly changing job market.
The gender digital divide extends beyond individual impacts to affect women’s roles in their communities. Women with limited access to ICTs are less able to engage in social, community, and civic activities that are increasingly mediated through digital platforms. This exclusion can lead to a diminished voice in community decision-making processes and reduced social capital. This, in turn, impacts their human right to participate in public affairs.Footnote 29 In contrast, women who do have access to ICTs can leverage these tools for community building and advocacy, underscoring the stark contrast in opportunities based on digital access. Women with limited or no digital access often also lack confidence in their ability to learn ICT skills and have a perception that technology is not meant for them, which further limits their ability to engage with digital tools, thereby reinforcing the gender gap.Footnote 30
Consequently, the gender gap also significantly restricts women’s freedom of expression, limiting their ability to participate in public discourse, advocate for their rights, and engage in conversations at different levels (local, regional, global). Digital platforms offer spaces for women to voice opinions, share experiences, and connect with wider communities. When women cannot access such platforms, their marginalisation is perpetuated and traditional power dynamics are reinforced. Restricted access to technology leaves their perspectives under-represented in both local and global dialogues. Overall, the gender digital divide significantly undermines the right to non-discrimination and equality,Footnote 31 as it perpetuates and exacerbates systemic gender disparities in access to opportunities and resources.Footnote 32
To avoid such overarching negative impacts on women and ensure the protection of their core human rights, it is essential to find ways to bridge the gender digital divide, in order to foster equality, empower women, and ensure their full participation in the digital society and economy.
17.3.2 International Action in Relation to the Gender Gap
As the gender digital divide remains a critical barrier to achieving gender equality in the digital age, several international organisations have adopted declarations, policies, or programmes to address this issue. These organisations have aimed to bridge gaps in digital access, skills, and representation, but the approach has been haphazard and inconsistent. There has not been systematic engagement with the gender gap in high-level policy documents. This section outlines the most significant efforts in some of the international organisations that have addressed the problem at least to a certain extent.
The UN has been one of the organisations drawing attention to the gender digital divide. Already in 1995, the Fourth World Conference on Women in Beijing recognised the transformative potential of ICTs for women’s empowerment. The resulting declaration identified ‘Women and the Media’ as a critical area of concern, calling for equitable access ‘to expression and decision-making in and through the media and new technologies of communication’ and the promotion of a ‘balanced and non-stereotyped portrayal of women in media’.Footnote 33 A prominent step, twenty years later, was the inclusion of target 5.b, ‘[e]nhanc[ing] the use of enabling technology, in particular information and communications technology, to promote the empowerment of women’, in the Sustainable Development Goals.Footnote 34 Unfortunately, the only indicator chosen for assessing the achievement of this target was the ‘[p]roportion of individuals who own a mobile telephone, by sex’, which has limited follow-up activity and analysis to aspects connected to this narrow indicator.
A year later, the General Assembly called for ‘immediate measures to achieve gender equality in Internet users by 2020, especially by significantly enhancing women’s and girls’ education and participation in information and communications technologies, as users, content creators, employees, entrepreneurs, innovators and leaders’, and reaffirmed its ‘commitment to ensure women’s full participation in decision-making processes related to information and communications technologies’.Footnote 35 Clearly, the goal was not reached as no concrete large-scale action followed that document.
An important development in the digital sphere was the UN Secretary-General’s 2020 Roadmap for Digital Cooperation.Footnote 36 Its thematic areas include digital human rights, achieving universal connectivity, and digital inclusion. The implementation of the roadmap is managed and coordinated by the Office of the Secretary-General’s Envoy on Technology, established at the beginning of 2021. For women’s rights, the most important aspects of the roadmap are digital inclusion (as it emphasises the need to address the gender digital divide) and its attention to cyber-violence.
The Commission on the Status of Women (CSW; a functional commission of the UN Economic and Social Council) is the main global inter-governmental body exclusively dedicated to the ‘promotion of gender equality, the rights and the empowerment of women’.Footnote 37 The CSW’s annual sessions regularly include discussions on ICTs and digital equity. The 2023 priority theme was ‘innovation and technological change, and education in the digital age for achieving gender equality and the empowerment of all women and girls’. The agreed conclusions of that session urge governments at all levels to ‘[p]rioritiz[e] digital equity to close the gender digital divide’ and to ‘[l]everag[e] financing for inclusive digital transformation and innovation towards achieving gender equality and the empowerment of all women and girls’.Footnote 38 The document includes many well-founded recommendations and declarations of the importance of the issues, but fails to set specific, measurable targets that would help ensure implementation. As the document contains no binding commitments, it is unlikely to lead to tangible action in the short term, but it could serve as guidance to states genuinely invested in tackling this issue.
The UN also supports gender equality in ICT through the ITU, one of its specialised agencies. Since 1998, the ITU has adopted several resolutions to promote gender equality and gender mainstreaming. The first was a resolution on gender and telecommunications policy in developing countries.Footnote 39 In 2018, the ITU adopted a resolution on gender mainstreaming in the ITU and the promotion of gender equality and the empowerment of women through telecommunications/ICT.Footnote 40 A year earlier, in 2017, the ITU Working Group on the Digital Gender Divide had adopted ‘Recommendations for action: bridging the gender gap in Internet and broadband access and use’, but follow-up activities have been very limited (just two progress reports, from 2017 and 2018).Footnote 41
The EU’s efforts in relation to the gender gap are mostly limited to the last ten years. The main relevant strategy is the EU’s Women in Digital policy, which has the aim of ensuring that ‘everyone, regardless of gender, gets a fair chance to benefit from and contribute to the digital age’.Footnote 42 In 2019, twenty-six EU countries, along with Norway and the UK, signed the Women in Digital Declaration to achieve equality in tech.Footnote 43 The signatories of the declaration agreed to take action to create a national strategy to encourage women’s participation in digitalisation, stimulate companies to combat gender discrimination at work, and advance a gender-balanced composition of boards, committees, and bodies dealing with digital matters.Footnote 44
The 2022 European Declaration on Digital Rights and Principles addresses the gender digital divide by emphasising inclusivity and gender balance as necessary elements of the digital transformation. The Declaration has the ambitious aim of ‘promot[ing] a European way for the digital transformation, putting people at the centre, built on European values and EU fundamental rights, reaffirming universal human rights, and benefiting all individuals, businesses, and society as a whole’.Footnote 45 Chapter 2, on ‘Solidarity and inclusion’, proclaims that ‘technology should be used to unite, and not divide, people’ and that the ‘digital transformation should contribute to a fair and inclusive society and economy in the EU’. The EU committed to ‘a digital transformation that leaves nobody behind’, one that ‘should benefit everyone, achieve gender balance […]’. In Chapter 4, the EU further committed to ‘promoting high-quality digital education and training, including with a view to bridging the digital gender divide’. However, this broad language (e.g., ‘achieve gender balance’ and ‘promoting […] digital education’) lacks measurable targets and enforcement mechanisms to ensure accountability.
The EU’s 2022 Digital Compass and Digital Decade Policy Programme 2030 (DDPP) is unique in that it sets concrete targets for 2030 in areas such as digital skills, digital infrastructure, and the digitalisation of public services.Footnote 46 It also emphasises the importance of women having equal opportunities in the ICT sector and sets an ambitious target to increase the number of female ICT professionals, which involves increasing the number of girls and women studying ICT, both at school and at university. Importantly, EU Member States have to submit national strategic roadmaps setting out their actions to achieve all DDPP targets, which are published online, and to report to the Commission on progress, which should add pressure on states to take action to meet the targets. This type of approach should also be adopted in relation to other aspects of the gender gap.
Other regional organisations are also addressing some facets of the gender gap in their policy. The Digital Transformation Strategy for Africa (2020–30) recommends promoting ‘gender-inclusive education frameworks and policies and boosting relevant education opportunities and digital skills development for women and girls in STEAM-subjects to narrow the gender digital divide’.Footnote 47 And at the fifteenth session of the Regional Conference on Women in Latin America and the Caribbean, the member states of the Economic Commission for Latin America and the Caribbean (ECLAC) signed the Buenos Aires Commitment, underscoring the need to support women’s participation in Science, Technology, Engineering, and Mathematics and to eliminate occupational segregation.Footnote 48 While these regional initiatives recognise the importance of addressing the gender digital divide through education and occupational inclusion, they fall short of creating systemic change. The policies lack implementation plans, mechanisms, and funds, and do not tackle deeply rooted socio-economic and cultural barriers.
In addition to policy documents, there have been several global initiatives targeted at closing the gender digital divide, including, among others, International Girls in ICT Day (ITU), the Global Partnership for Gender Equality in the Digital Age (the EQUALS initiative), the EQUALS in Tech Awards (ITU, the UN Entity for Gender Equality and the Empowerment of Women, and the UN Conference on Trade and Development), Gender-Sensitive Indicators for Media (UNESCO), Women on the Homepage (UNESCO), the Global Survey on Gender and Media (UNESCO), the Broadband Commission Working Group on Broadband and Gender, and the Best Practice Forum on Gender and Access of the Internet Governance Forum.Footnote 49
Despite various regional and international policy commitments and global initiatives, the gender digital divide persists (albeit slowly decreasing). While efforts by organisations such as the UN highlight the importance of integrating gender equality into the digital agenda, the lack of binding commitments and systematic implementation frameworks limits progress. There is a need for cohesive, measurable, and actionable strategies to ensure that the digital transformation benefits everyone, regardless of gender, and that the human rights of women and girls are not negatively impacted. The gender gap undermines their ability to fully exercise their rights to education, work, freedom of expression, and access to information. This not only limits individual potential but also hampers progress towards gender equality more broadly. The gender digital divide exacerbates existing vulnerabilities by reinforcing systemic inequalities that disproportionately affect women, particularly those in marginalised communities. Limited access to digital tools and skills excludes women from opportunities in education, employment, and civic participation, deepening poverty and social exclusion. The lack of representation and participation in the digital economy and technology design also preserves biases, further entrenching gender inequality. To avoid perpetuating such issues, promises on paper need to be translated into concrete action.
17.4 Age Gap
The digital divide disproportionately affects older populations. According to the ITU, younger generations are significantly more likely to use the internet than older populations. Globally, internet usage rates are highest among individuals aged fifteen to twenty-four, reaching over 75 per cent, while fewer than 55 per cent of people aged sixty-five and older are online.Footnote 50 And only around one-third of people aged fifty-five to seventy-four, of the retired, and of the inactive have at least basic digital skills.Footnote 51 This age-based digital divide (grey digital divide, age gap) limits older adults’ access to vital services, social connections, and opportunities for lifelong learning. As societies digitise, the inability to engage with technology not only marginalises older individuals but also raises human rights concerns. As the EU Agency for Fundamental Rights (FRA) has noted, ‘[o]lder persons, a heterogeneous group with diverse socio-economic backgrounds, are among those whose enjoyment of fundamental rights might be at risk from digitalisation’.Footnote 52 Their right to participate in civic and public life, the right to work, to health, and to education, can all be impacted by digital exclusion. The age gap exacerbates existing inequalities, as those excluded from digital connectivity face challenges in accessing services, healthcare, and opportunities for social inclusion.
17.4.1 Specific Issues Faced by Older People and Their Human Rights Implications
The age-based digital divide highlights the significant barriers older generations face in accessing and effectively using digital technologies.Footnote 53 The rapid digitalisation of services and social interaction is leaving many older people behind, owing to obstacles such as lack of digital literacy, limited access to devices or internet connectivity, and design biases in technology that cater predominantly to younger users.Footnote 54 Moreover, technophobia and cyberphobia can pose significant self-imposed barriers to engaging with ICT.Footnote 55 This section examines the specific issues stemming from the age gap in digital inclusion and looks at their human rights implications.
The FRA emphasises that only one in four people aged sixty-five to seventy-four in the EU-27 have at least basic digital skills, which, along with up-to-date technological tools, are essential for participation in public life.Footnote 56 The right of access to public services is part of the right to good administration protected, for example, under Article 41 of the EU Charter of Fundamental Rights.Footnote 57 This includes equal access to public services that are in the process of being digitalised.Footnote 58 As governments and businesses shift services online, older individuals without digital access often struggle to apply for benefits, schedule government appointments, or use banking services. This creates a dependency on others or exclusion from essential services.
The lack of digital skills can also prevent older individuals from accessing the information necessary for informed decision-making. Many voting resources and election updates are primarily available online. Older adults without digital skills or internet access struggle to find essential information about candidates, polling locations, or registration deadlines. This limits their ability to make informed decisions or participate fully in democratic processes. Voter registration, government consultations, and even voting are increasingly moving online, which reduces the ability of older people without digital skills or access to participate in such processes, and may end up infringing their right to participate in civic and public life. And as much of today’s political mobilisation and discussion occurs in digital spaces from which older individuals with limited digital access are often excluded, the perspectives of older people end up under-represented.Footnote 59 This exclusion not only diminishes their influence but also perpetuates generational divides in political representation and policymaking.
Older persons may also struggle with accessing digital healthcare services and information. People without internet access miss out on crucial health information, such as vaccination updates and preventive care guidance, exacerbating health inequities, especially in underserved areas.Footnote 60 Telemedicine, vital for remote care and during emergencies such as the COVID-19 pandemic, often excludes older individuals lacking digital skills, leading to delayed diagnoses and untreated conditions. Being unable to use digital healthcare systems, including electronic health records and online appointment platforms, creates further barriers and impacts the ability to manage one’s healthcare effectively. Such problems may end up impacting older persons’ right to health.
The age gap can fuel social isolation by limiting the ability of older adults to connect in an increasingly digital world. Without internet access or digital skills, many miss out on video calls, social media, and online communities that sustain relationships and combat loneliness. Human rights instruments, such as the EU Charter of Fundamental Rights, recognise the ‘rights of the elderly to lead a life of dignity and independence and to participate in social and cultural life’.Footnote 61 But if older people lack digital literacy or access to social media and messaging platforms, they are at a higher risk of social exclusion and loneliness, as family and friends increasingly rely on digital communication to stay connected. This disconnect is especially impactful for those with mobility challenges or in rural areas, where digital tools often replace in-person interactions.Footnote 62 This exclusion from (digital) social life negatively affects mental health, increasing the risk of depression and cognitive decline.Footnote 63
The age-related digital divide can also impact older adults’ rights to education and work. As (adult) education shifts online, those without sufficient digital skills face barriers to lifelong learning and skill development, limiting their ability to adapt in a changing job market. Many older individuals looking to stay in or re-enter the workforce struggle with the technological skills required in many jobs, widening economic inequality and reducing their employability. Similarly, as job applications and interviews are increasingly digital, older adults struggle to access employment opportunities.Footnote 64 Being excluded from online platforms for networking, remote work, and training can deepen economic and social inequalities.
Summing up, the age gap exacerbates age-based discrimination, undermining older people’s human right to participate in civic and social life, the rights to education and work, the right to vote, and the right to health, among others. Without intervention, this digital exclusion deepens systemic inequalities, further marginalising older individuals.
17.4.2 International Action in Relation to the Age Gap
In order to achieve the equitable inclusion of older adults in the digital realm, some international organisations have introduced policies to address this disparity. Yet efforts remain limited and fragmented. High-level policy documents have yet to systematically engage with the unique challenges faced by older individuals because of the digital divide. This section highlights some (sporadic) policy initiatives by international organisations to tackle the age gap specifically; it does not look at broader instruments that address the (human) rights of older persons not limited to the context of digitalisation.
One of the main UN instruments in this area is the Madrid International Plan of Action on Ageing.Footnote 65 The plan emphasises the need to enhance the quality of life of older persons by ensuring their full participation in society, which includes access to ICTs. It encourages the development of programmes to reduce the digital divide and promote digital literacy among older persons.Footnote 66 In 2010, the UN’s Open-Ended Working Group on Ageing was established.Footnote 67 It has advanced the promotion of a rights-based approach towards ageing, but has not paid much attention to addressing the age gap.
In 2022, ministers from the member states of the UN Economic Commission for Europe committed to ‘promoting user-friendly digitalisation, enhancing digital skills and literacy to enable older persons to participate in an increasingly digital world, while also ensuring the right to access to information, participation, and services through access to digital devices and the Internet, and to suitable offline or other secure alternatives in user-friendly and accessible formats’.Footnote 68 However, this is a regional commission, which includes fifty-six member states, so the declaration does not reflect a global consensus. In 2013, the UN Human Rights Council established the mandate of the independent expert on the enjoyment of all human rights by older persons.Footnote 69 One of its annual thematic reports addressed the impact of automation on the human rights of older persons,Footnote 70 but the independent expert has not engaged with the age gap in detail.
The 2020 UN Roadmap for Digital Cooperation addresses the age-related digital divide through its broader focus on inclusivity and equitable digital access.Footnote 71 The roadmap highlights the importance of leaving no one behind, emphasising the need to close gaps in digital access and skills for vulnerable groups (including older people). The roadmap calls for partnerships across governments, private sectors, and civil society to address barriers, such as those faced by older populations in adopting digital technologies. The ITU as a UN specialised agency also has relevant policy goals. Its Connect 2030 Agenda has the ambitious target to bridge all digital gaps, including the age gap.Footnote 72 Other relevant targets include broadband services being affordable to all, broadband access to every household, universal access to the internet by all individuals, the majority of individuals having digital skills, and the majority of individuals accessing government services online. If successful, this would be a significant step towards eliminating the age gap. Yet the targets are very broad and do not have any specific actions or binding commitments attached to them.
The EU’s approach is focused on inclusion in general and does not have many instruments specifically targeting older people (e.g., Europe’s Digital Decade policy programme sets targets such as achieving basic digital skills for 80 per cent of adults by 2030, but does not specify how high this percentage should be among older people).Footnote 73 The main document with a distinct focus on older people and the digital divide is the 2020 Council of the EU conclusions on the human rights, participation, and well-being of older persons in the era of digitalisation.Footnote 74 The conclusions advocate for tailored strategies to enhance digital literacy among older people, improve their access to digital infrastructure, and foster their active engagement in the digital society. But the document is phrased in a very soft manner, with the Council inviting member states and the European Commission to consider, promote, and enable different steps that would improve the situation of older persons.
There are no EU directives or regulations dedicated specifically to protecting the fundamental rights of older persons or addressing the age gap. Two directives that do have a somewhat positive impact on accessibility are the Web Accessibility Directive and the European Accessibility Act.Footnote 75 The former obliges states to ensure that public sector websites and mobile apps meet specific technical accessibility standards, making them accessible to everybody, including persons with disabilities. The latter aims to improve cross-border trade in accessible products and services between EU Member States. The FRA has drawn attention to the fundamental rights implications of digital exclusion among older adults, particularly in the context of access to public services. Its 2023 report underscores the risk of marginalisation in accessing essential services, including healthcare and social benefits, and advocates for inclusive digital policy frameworks.Footnote 76 Despite this acknowledgement of the issue in the EU, action is lagging.
In general, regional and specialised organisations have not focused on the age-related digital divide. Although many organisations have general policies in relation to the digital divide, the specific issues that older people face have not received much attention. Yet the age-related digital divide continues to exacerbate existing vulnerabilities among older adults by amplifying their risk of exclusion across multiple domains. It can further marginalise those already disadvantaged by factors such as low income, poor health, or geographic isolation, particularly in rural areas where digital infrastructure is often less developed. Older individuals who lack digital skills or access to technology may struggle to book medical appointments, access telehealth services, or manage financial transactions, leaving them more vulnerable to unmet needs and financial instability. Social vulnerabilities are also intensified as digital technologies have become central to communication and community engagement. Older people without digital literacy are at greater risk of loneliness and social isolation, as family, friends, and community networks have become reliant on digital platforms for connection. This isolation can contribute to mental health issues, such as depression and anxiety, which are already prevalent among older populations.
An aspect to bear in mind when addressing the age gap (and the digital divide in general) is intersectionality. Intersectionality highlights how overlapping vulnerabilities, such as age, gender, socio-economic status, and geographic isolation, compound the impacts of the digital divide.Footnote 77 Older women in rural areas exemplify this, facing barriers stemming from age-based exclusion, entrenched gender norms, and limited infrastructure. These intersecting disadvantages amplify the risk of marginalisation. Addressing the age-related digital divide thus requires policies that account for the complex, intersecting needs of marginalised groups to ensure that digital inclusion efforts are equitable and effective.
In sum, the age-related digital divide magnifies the disparities older adults face, reinforcing cycles of exclusion that intersect with economic, social, and health vulnerabilities. Bridging this divide is not merely a matter of technological advancement but a fundamental requirement for ensuring dignity, autonomy, and inclusion for older individuals in contemporary society. Addressing this issue holistically is essential to mitigating its broader societal impacts and safeguarding the human rights of an ageing population.
17.5 Conclusions
The digital divide represents a critical fault line in the global move towards digitalisation. Despite decades of attention, the problem persists owing to the interplay of systemic factors, including socio-economic disparities, geographic isolation, cultural norms, insufficient policy interventions, and inadequate resources. This chapter has examined the gender and age dimensions of the digital divide, illustrating how these gaps perpetuate and exacerbate exclusion and vulnerability among women and older populations. Bridging this divide requires a more cohesive, enforceable, and inclusive approach that prioritises the voices and needs of marginalised groups. This has been acknowledged by the UN Secretary-General, who has noted that ‘[r]isk factors that affect the ability of vulnerable and marginalized groups to have access to connectivity should be specifically identified and addressed’.Footnote 78 The same has been recognised by the European Parliament, which ‘call[ed] for careful examination of people’s needs when it comes to digital developments and innovation, especially the needs of vulnerable groups, in order to assess how they can benefit from these new technologies’ as ‘the digital transition must take place in a way that benefits everyone’.Footnote 79
Both dimensions of the digital divide covered in this chapter reflect broader systemic failures to address structural inequalities. While international and regional bodies have adopted policies to address these gaps, their efforts are often inconsistent, fragmented, and lacking in enforceable commitments. For instance, international instruments such as the UN’s Sustainable Development Goals include digital inclusion targets, but fail to address the problem in a comprehensive manner. Similarly, the EU’s Digital Decade policy programme and other regional initiatives advocate for inclusivity but provide limited mechanisms to enforce digital equity for women and older individuals. The organisations themselves are also calling for more action. Some of the concrete aspects that have been noted as key to bridging the digital divide are ‘better metrics, data collection, and coordination of initiatives’ (UN Secretary-General),Footnote 80 and ‘strengthened enabling policy environments and international cooperation to improve affordability, access, education, capacity-building, multilingualism, cultural preservation, investment and appropriate financing’ (UN Economic and Social Council).Footnote 81 The European Parliament has emphasised the need to design ‘online services in a comprehensible way so that they can be accessed and used by people of all ages and levels of educational attainment’,Footnote 82 and the importance of promoting ‘basic and specialised skills with a specific focus on the most vulnerable groups of people, and the development of education and training systems including lifelong learning, re-skilling and up-skilling’.Footnote 83 Such calls for action have yet to lead to significant results.
Addressing the digital divide is not merely a matter of technological advancement but a profound human rights imperative. Civil society groups such as AGE Platform Europe have emphasised that human rights need to be used as a compass for digitalisation more broadly.Footnote 84 International and regional frameworks must go beyond aspirational targets and implement binding commitments and concrete initiatives that address the specific barriers faced by vulnerable groups. Achieving digital equity is essential not only for fostering individual empowerment but also for advancing broader societal goals of inclusivity, fairness, and human rights in an increasingly digital world.
18.1 Introduction: Children’s Rights in the Digital Environment
Digital technologies have a significant impact on the lives of children and the rights that are specifically attributed to them by the United Nations Convention on the Rights of the Child (UNCRC), Article 24 of the European Union (EU) Charter of Fundamental Rights (CFEU) and many national constitutions. The Council of Europe Committee of Ministers’ 2018 Recommendation on Guidelines to respect, protect, and fulfil the rights of the child in the digital environment recognises that the digital environment is ‘reshaping children’s lives in many ways, resulting in opportunities for and risks to their well-being and enjoyment of human rights’.Footnote 1 This has also been acknowledged by the United Nations Committee on the Rights of the Child (CRC Committee), which adopted a General Comment on the rights of the child in relation to the digital environment in 2021. In General Comment no. 25, the Committee affirms that ‘[i]nnovations in digital technologies affect children’s lives and their rights in ways that are wide-ranging and interdependent […]’.Footnote 2 Over the years, the EU has been relying on the guidance of the UNCRC when adopting and interpreting fundamental human rights instruments.Footnote 3 This is demonstrated for instance in Article 24 of the CFEU,Footnote 4 which contains language that is very similar to that of the UNCRC. The EU’s commitment to the UNCRC was again confirmed in the 2021 EU Strategy on the Rights of the Child, built on six key pillars of which ‘the digital and information society’ is one.Footnote 5 In a recent case, the Court of Justice of the EU has confirmed that CFEU Article 24 represents the integration into EU law of the principal rights of the child referred to in the UNCRC.Footnote 6 Hence, the UNCRC functions as a comprehensive framework that must duly be taken into account when legislation that directly or indirectly affects children is proposed and adopted.
In the past decade, regulatory action by the European Commission (EC) has increasingly been targeted at the digital environment, leading to the adoption of influential legislative instruments such as the General Data Protection Regulation (GDPR),Footnote 7 and the Network and Information Security Directive. Other instruments, such as the Audiovisual Media Services Directive (AVMSD), were amended to extend their scope to also cover video-sharing platforms. Two recent legislative initiatives at the level of the EU, the Digital Services Act (DSA) and the Artificial Intelligence Act (AIA), touch upon platforms and technologies that have a significant impact on children and their rights.Footnote 8 Digital services, often delivered through platforms, provide children with immense opportunities to communicate, learn, and play, and artificial intelligence (AI)-based applications may offer children personalised learning or medical treatments.Footnote 9 At the same time, the use of tech platforms and AI may also pose risks to children’s rights. Rights that might be negatively affected are, for example, the right to privacy and data protection, freedom of thought, the right to freedom of expression, and the right to protection from violence and exploitation. The AIA acknowledges, for instance, that this technology ‘can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices’.Footnote 10 The question arises as to what extent the protection and fulfilment of children’s rights is addressed in these most recent legislative acts, the DSA and the AIA. In order to answer this question, the proposals are analysed, the legislative process is scrutinised, and an assessment is made of how each contributes to the effective realisation of children’s rights in the digital realm. The chapter also suggests some strategies for moving forward, drawing on recent recommendations from the UN children’s rights framework. We propose that EU policymakers adopt a children’s rights approach in their attempts to regulate platform services and AI, so that children’s rights and interests can be a strong regulatory focus rather than a regulatory afterthought.
18.2 Opportunities for and Risks to Children’s Rights from Platform Services and AI Systems
Before analysing the legislative acts, this section zooms in on existing evidence about children’s experiences with platform services and AI systems. The aim is to provide a better understanding of both the potential opportunities for and risks to children’s rights presented by these services and applications. Platform services and other AI-based applications have become an integral part of children’s lives. While AI-enabled toys and voice assistants have infiltrated children’s homes and schools, AI-powered tutors, learning assistants, and personalised educational programmes are becoming more commonplace.Footnote 11 Children are also avid users of (commercial) AI-enabled platform services, such as social media, video-sharing, and interactive gaming platforms. For instance, platforms such as Instagram, Snapchat, and TikTok use advanced machine learning to deliver content and personalise – or, to use their own word, ‘improve’Footnote 12 – the user experience, provide filters that rely on augmented reality technology, and employ natural language processing tools to monitor and remove hate speech and other harmful content. The specific features of these platforms and systems make them particularly appealing to children, but also carry risks.Footnote 13 The lack of transparency and insight into how exactly AI systems generate certain output makes it very difficult for end users to anticipate the potential risks, harms, or violations of their rights.Footnote 14
Research capturing the opinions of children and youth themselves about the opportunities and risks of platform services and AI shows that they have a balanced perspective.Footnote 15 On the one hand, they realise that these services offer many opportunities for entertainment, socialising, and learning, but are never completely safe. On the other hand, children express a great deal of concern, citing exposure to harmful and illegal content, cyberbullying and hate speech, and violations of their privacy and data protection rights as the main risks.Footnote 16 In relation to this, questions arise about the long-term impact of platform services and AI on children’s well-being and development. For instance, it has been reported that researchers within Instagram, owned by Meta, who studied user experiences, found that ‘thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse’.Footnote 17 The introduction of AI-based applications into children’s lives could also have side effects at the societal level.Footnote 18 More specifically, it could lead to the normalisation of surveillance, datafication, and commercialisation. Many of these applications are driven by commercial interests and deliberately designed and used to ensure maximum engagement of children, and even to establish behavioural patterns and habits for the future.Footnote 19 Furthermore, lower quality AI-based technologies with a greater focus on entertainment and pacification rather than education and learning might be available for children from disadvantaged backgrounds compared with those from privileged backgrounds.Footnote 20
Because of the impact of platform services and AI on society at large, policymakers and legislators around the world are debating and developing instruments to counteract these risks. However, scholars have identified a disconnect between the potential negative impacts of AI on children and the regulatory means to address them, as well as a lack of adequate redress.Footnote 21 UNICEF also underlines that most of the recent initiatives targeting AI only refer superficially to children and their rights and interests.Footnote 22 Considering the EU’s commitment to safeguarding children’s rights in the digital environment,Footnote 23 the following section will analyse two of these recent initiatives through the lens of children’s rights and principles. It is important to note that both the DSA and the AIA are likely to have a standard-setting impact around the world, given what scholars and policymakers call the Brussels Effect.Footnote 24 In this sense, these initiatives also present an important opportunity to shape global norms and standards for the design and deployment of digital technologies that are used by and impact children.
18.3 Children’s Rights in the DSA
18.3.1 The Commission Proposal for a DSA
The proposal for the DSA aimed to regulate intermediary services and to ‘set out uniform rules for a safe, predictable and trusted online environment, where fundamental rights enshrined in the Charter are effectively protected’.Footnote 25 This proposal reflects a shift at the EU level from relying on self- or co-regulatory efforts from tech companies to imposing strong legislative obligations on those companies that offer services used by a vast number of EU citizens and affect individuals and society at the same time.Footnote 26 Throughout the proposal by the European Commission, children or minors and their rights are referred to a few times. The preamble to the proposal, for instance, states that it will ‘contribute to the protection of the rights of the child and the right to human dignity online’. Recital 34 clarifies that the proposal intends to impose a clear and balanced set of harmonised due diligence obligations on providers of intermediary services, aiming in particular ‘to guarantee different public policy objectives such as the safety and trust of the recipients of the service, including minors and vulnerable users’.Footnote 27
The most important provision for children (Article 26, Recital 57) in the proposal relates to the supervised risk assessment approach towards ‘very large online platforms’ (VLOPs). VLOPs are platforms ‘where the most serious risks often occur’ and which ‘have the capacity to absorb the additional burden’. A platform is considered a VLOP when the ‘number of recipients exceeds an operational threshold set at 45 million; that is, a number equivalent to 10% of the Union population’.Footnote 28 This includes many large platforms that are popular with children, such as YouTube, TikTok, or Instagram. According to the proposal, VLOPs should identify, analyse, and assess any significant systemic risks stemming from the functioning and use made of their services in the Union. All three categories of systemic risks that are listed are especially relevant to children. The first category refers to ‘the dissemination of illegal content through their services’, with a mention in Recital 57 of child sexual abuse material as a type of illegal content. The second category relates to ‘any negative effects for the exercise of the fundamental rights to respect for private and family life, freedom of expression and information, the prohibition of discrimination and the rights of the child’.Footnote 29 The third category refers to the ‘intentional manipulation of their service, including by means of inauthentic use or automated exploitation of the service, with an actual or foreseeable negative effect on the protection of public health, minors, civic discourse, or actual or foreseeable effects related to electoral processes and public security’.Footnote 30 In order to mitigate the risks, the VLOPs must put in place reasonable, proportionate, and effective measures, tailored to the specific systemic risks (Article 27). Codes of conduct (Article 35) are another mechanism that can be used to tackle different types of illegal content and systemic risks. According to the proposal, the creation of an EU-wide code of conduct will be encouraged by the Commission and the new Board for Digital Services to contribute to the proper application of the DSA. Recital 68 refers to the appropriateness of drafting codes of conduct regarding disinformation or other manipulative and abusive activities that might be particularly harmful for vulnerable recipients of the service, such as children.
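Purely by way of illustration, the numerical side of the VLOP designation can be sketched in a few lines of code. This is a sketch under stated assumptions, not the legal test itself: the helper names and the population figure are ours, the 45 million threshold is the figure cited in the proposal, and actual designation is a formal Commission decision based on usage data reported by platforms.

```python
# Illustrative sketch only: the DSA proposal's numerical VLOP threshold of
# 45 million average monthly active recipients in the EU, roughly 10% of the
# Union population. All names here are hypothetical.

EU_POPULATION = 450_000_000           # approximate figure behind the 10% benchmark
VLOP_THRESHOLD = EU_POPULATION // 10  # 45 million recipients

def meets_vlop_threshold(avg_monthly_active_recipients_eu: int) -> bool:
    """Return True if a platform's EU reach meets the 45 million threshold."""
    return avg_monthly_active_recipients_eu >= VLOP_THRESHOLD

for name, recipients in [("PlatformA", 120_000_000), ("PlatformB", 8_000_000)]:
    status = "meets" if meets_vlop_threshold(recipients) else "does not meet"
    print(f"{name} {status} the VLOP threshold")
```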
The explicit references to children in the proposal for the DSA were welcomed by children’s rights organisations, although some considered them to be too weak.Footnote 31
18.3.2 Amendments Proposed by the LIBE Committee
During the legislative process, and specifically in the context of the activities of the Committee on Civil Liberties, Justice and Home Affairs (LIBE), several child-centric amendments were proposed by LIBE committee members.Footnote 32 Amendment no. 129 introduced a specific reference to Article 24 of the Charter, the UNCRC, and General Comment no. 25 to Recital 3. Amendment no. 412 suggested adding a new Article 12a, requiring the carrying out of a detailed child impact assessment. Amendment no. 414 put forward a specific article on the mitigation of risks to children that aims to address many of the existing concerns regarding children’s rights in the context of VLOPs. The amendment includes, for instance, a reference to taking into account children’s best interests when implementing mitigation measures in general and adapting content moderation or recommender systems in particular; adapting or removing ‘system design features that expose children to content, contact, conduct, and contract risks, as identified in the process of conducting child impact assessments’; ‘proportionate and privacy preserving age assurance’; ensuring ‘the highest levels of privacy, safety, and security by design and default for users under the age of 18’; the prevention of profiling, including for commercial purposes such as targeted advertising; age appropriate terms that uphold children’s rights; and ‘child-friendly mechanisms for remedy and redress, including easy access to expert advice and support’. Amendment no. 427 concerned the publication of child impact assessments and reports about the mitigation measures taken. Finally, amendment no. 772 included a requirement for the Commission to ‘support and promote the development and implementation of industry standards set by relevant European and international standardisation bodies for the protection and promotion of the rights of the child’.
In its Opinion, the LIBE committee only included the amendments regarding Recital 3,Footnote 33 leaving the more substantial amendments out.
18.3.3 Amendments by the Council and European Parliament
Both the general approach of the Council of November 2021,Footnote 34 and the amendments adopted by the European Parliament (EP) on 20 January 2022 contain considerably more references to children and minors than the Commission proposal.Footnote 35
The general approach by the Council adds that when assessing risks to the rights of the child, ‘providers should consider how easily understandable to minors the design and functioning of the service is, as well as how minors can be exposed through their service to content that may impair minors’ health, physical, mental and moral development’. Risks may arise, for example, ‘in relation to the design of online interfaces which intentionally or unintentionally exploit the weaknesses and inexperience of minors or which may cause addictive behaviour’ (Recital 57). Recital 58 builds on this by requiring that the design and online interface of services primarily aimed at minors or predominantly used by them should consider their best interests and ensure that their services are organised in a way that minors are easily able to access mechanisms within the DSA, including notice and action and complaint mechanisms. Moreover, VLOPs that provide access to content that may impair the physical, mental, or moral development of minors should take appropriate measures and provide tools that enable conditional access to the content. Article 12 refers to explaining the conditions and restrictions for the use of the service in a way that minors can understand, where an intermediary service is primarily aimed at minors or is predominantly used by them. Finally, in Article 27, a reference was added to taking targeted measures to protect the rights of the child, including age verification and parental control tools, or tools aimed at helping minors signal abuse or obtain support.
The EP suggested adding an explicit reference to the UNCRC and General Comment no. 25 in Recital 3. Unlike in the Commission proposal, but reminiscent of guidelines from the Article 29 Working Party,Footnote 36 the EP put forward a prohibition of ‘[t]argeting or amplification techniques that process, reveal or infer personal data of minors or sensitive categories of personal data for the purpose of displaying advertisements’.Footnote 37 A second amendment relates to ensuring that conditions for and restrictions on the use of a service are explained in a way that minors can understand.Footnote 38 This, however, would only be required for intermediary services that are ‘primarily directed at minors or predominantly used by them’. Along the same lines, other amendments aim to ensure that the internal complaint-handling systems are easy to access and user-friendly, including for minors, and that online interfaces and features are adapted to protect minors as a measure to mitigate risks.Footnote 39 A more general obligation is proposed to adapt design features to ensure a high level of privacy, safety, and security by design for minors.Footnote 40 Also potentially important for children’s rights was the suggestion to change the wording of ‘any negative effects’ in Article 26 to ‘any actual and foreseeable negative effects’ on the rights of the child. Arguably, this could trigger the application of the precautionary principle.Footnote 41 Regarding the mitigation measures, the EP also proposed to add ‘targeted measures aimed at adapting online interfaces and features to protect minors’ to Article 27. Finally, the suggested Recital 69 encourages the development of codes of conduct to facilitate compliance with obligations regarding the protection of minors, and the proposed Article 34(1a) refers to the support and promotion by the Commission of the development and implementation of voluntary standards set by the relevant European and international standardisation bodies aimed at the protection of minors.
18.3.4 The Final Text of the DSA
The DSA – formally, Regulation (EU) 2022/2065 of the European Parliament and of the Council on a Single Market for Digital Services and amending Directive 2000/31/EC – was adopted on 19 October 2022 and published in the Official Journal on 27 October 2022.Footnote 42 Not only are the references to child, children, and minors in the adopted text vastly more numerous than in the Commission proposal, but the substance of what is proposed is also much more extensive and arguably promising, depending on actual implementation and enforcement.
The recitals and articles that refer to children and minors can be classified into five broad categories: (a) provisions related to child sexual abuse material and the measures in place to tackle this type of material,Footnote 43 (b) transparency obligations regarding terms and conditions, (c) obligations for all online platforms to protect minors,Footnote 44 (d) risk assessment and mitigation obligations towards children for VLOPs and very large online search engines (VLOSEs),Footnote 45 and (e) references to implementation tools such as standards and codes of conduct.
In what follows, the final three categories are examined in depth.
18.3.4.1 Obligations for All Online Platforms to Protect Minors
Article 28 (an article not included in the Commission proposal) formulates extensive obligations for online platforms ‘accessible to minors’. First, such platforms must put in place ‘appropriate and proportionate measures to ensure a high level of privacy, safety, and security of minors, on their service’. This is an obligation that has the potential to safeguard a number of children’s rights (e.g., the right to privacy and the right to protection). One point of contention is the interpretation of which platforms are considered ‘accessible to minors’. This is in part clarified in Recital 71, which states that this includes (a) platforms whose terms and conditions permit minors to use the service, (b) platforms offering services that are directed at or predominantly used by minors, or (c) platforms that are aware that some of the recipients of their service are minors, ‘for example, because it already processes personal data of the recipients of its service revealing their age for other purposes’. In reality, research shows that many children (including very young children) use platforms that are not directed at them and that explicitly state in their terms and conditions that their service is not to be used by children under a certain age (most often set at thirteen years).Footnote 46 It may be expected that certain platforms will try to argue that their services should not be considered ‘accessible to minors’. Independent research into children’s online experiences and use of platforms might be helpful in this regard, both for the platforms and for oversight bodies. Aside from this issue in relation to which platforms fall within the scope of Article 28, it might also be a challenge for platforms to decide what are ‘appropriate and proportionate measures to ensure a high level of privacy, safety, and security of minors’, particularly considering that different age groups of children have different privacy and safety needs. In this regard, Recital 71 refers to standards, codes of conduct, and best practices, and Article 28.4 indicates that the Commission (after consulting the European Board for Digital Services, which is established by the DSA in Article 61) may formulate guidelines to support providers of online platforms. Work on such guidelines started in 2024.Footnote 47
Second, Article 28.2 prohibits targeting advertisements based on profiling ‘when they are aware with reasonable certainty that the recipient of the service is a minor’. Such types of advertisements have long been a concern for scholars,Footnote 48 and for organisations such as the Article 29 Working Party, which had stated in one of its Guidelines that ‘organisations should, in general, refrain from profiling [children] for marketing purposes’,Footnote 49 even though this was not explicitly prohibited by the GDPR. In this light, it can only be commended that this is now also explicitly prohibited in the DSA. Yet the question could be raised whether it would not have made sense to include a broader prohibition of profiling children for commercial purposes rather than just targeted advertising. This would have been more in line with the CRC Committee’s call to ‘prohibit by law the profiling or targeting of children of any age for commercial purposes’ in its General Comment no. 25.Footnote 50 Moreover, profiling may also be used to target harmful types of content (e.g., relating to self-harm or eating disorders). Arguably, this could be covered under the risk assessment provisions, but their scope is limited to VLOPs and VLOSEs (see Section 3.5.2). Another doubt that may be raised is how the notion of ‘reasonable certainty’ will be interpreted and whether this would require age verification. It would seem from the text of the DSA that this is not necessarily the case, as Article 28.3 states that compliance with the obligations of Article 28 ‘shall not oblige providers of online platforms to process additional personal data in order to assess whether the recipient of the service is a minor’, and Recital 71 adds that this obligation ‘should not incentivise providers of online platforms to collect the age of the recipient of the service prior to their use’. While this may be in line with the principle of data minimisation laid down in the GDPR,Footnote 51 it is uncertain whether the protective aim of the prohibition, as well as of the measures to ensure a high level of privacy, safety, and security of minors, will be effectively realised if platforms are not incentivised to know which users are actually minors. In any case, the desirability and effectiveness of age verification or age assurance has been the subject of heated debate since the emergence of the internet, and this debate has yet to be settled.
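To make the structure of these overlapping tests easier to follow, the sketch below encodes the three alternative Recital 71 conditions and the Article 28.2 advertising guard as simple boolean checks. It is an illustration under stated assumptions, not a compliance tool: all attribute names are hypothetical, and how ‘reasonable certainty’ would be established in practice is precisely the open question discussed above.

```python
from dataclasses import dataclass

@dataclass
class Platform:
    # Hypothetical attributes standing in for facts an oversight body would assess.
    terms_permit_minors: bool                   # Recital 71, condition (a)
    directed_at_or_mainly_used_by_minors: bool  # Recital 71, condition (b)
    aware_some_recipients_are_minors: bool      # Recital 71, condition (c)

def accessible_to_minors(p: Platform) -> bool:
    """Recital 71: meeting any one of the three conditions suffices."""
    return (p.terms_permit_minors
            or p.directed_at_or_mainly_used_by_minors
            or p.aware_some_recipients_are_minors)

def may_show_profiling_based_ad(reasonably_certain_recipient_is_minor: bool) -> bool:
    """Article 28.2: no advertisements based on profiling where the platform is
    aware with reasonable certainty that the recipient is a minor."""
    return not reasonably_certain_recipient_is_minor

# Example: terms ban minors, not directed at them, but the platform knows some
# recipients are minors - condition (c) alone brings it within Article 28.
p = Platform(False, False, True)
assert accessible_to_minors(p)
assert not may_show_profiling_based_ad(True)
```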
18.3.4.2 Risk Assessment and Mitigation Obligations towards Children for the VLOPs and VLOSEs
A third category of obligations is aimed at VLOPs and VLOSEs. The first VLOPs and VLOSEs were designated by the EC on 25 April 2023. They include platforms and search engines that are hugely popular with children, such as TikTok, Snapchat, Instagram, Wikipedia, Google Play, and Google Search.Footnote 52 Article 34 requires VLOPs and VLOSEs to undertake an assessment of the systemic risks in the EU ‘stemming from the design or functioning of their service and its related systems, including algorithmic systems, or from the use made of their services’. Four categories of risks are listed, and as mentioned earlier, at least three of these are directly relevant for children: ‘(a) the dissemination of illegal content through their services’; ‘(b) any actual or foreseeable negative effects for the exercise of fundamental rights, including rights of the child’; and ‘(d) any actual or foreseeable negative effects in relation to gender-based violence, the protection of public health and minors and serious negative consequences to the person’s physical and mental well-being’. From a children’s rights perspective, it is particularly interesting to observe that in the course of the legislative process the word foreseeable was added, as this could potentially trigger the precautionary principle. There is still much uncertainty about the occurrence and extent of actual harm when it comes to digital technologies.Footnote 53 There are indications that certain platform services might have an impact on the mental health of children in the short and long term,Footnote 54 and that certain features have an impact on day-to-day life and, for instance, sleep. Yet there is still little hard evidence about the impacts on children and their rights in the long run. There is simply not enough research, there are ethical questions surrounding research with children, and certain technologies have not existed long enough to draw conclusive results. We have argued before that with respect to delicate issues such as the well-being of children, the precautionary principle might thus come into play.Footnote 55 Simply put, this concept embraces a better-safe-than-sorry approach and compels society to act cautiously if there are certain – not necessarily absolute – scientific indications of a potential danger and not acting upon these indications could inflict harm.Footnote 56 It could of course be up for discussion whether the threshold for triggering the precautionary principle and the threshold for an effect to be foreseeable in the sense of Article 34 are in alignment. From a linguistic point of view, foreseeable does not equate to potential. A foreseeable event is, according to the Cambridge Dictionary, ‘one that can be known about or guessed before it happens’. Whether an effect can be known about will to a large extent depend on research and expert advice from a variety of disciplines. From a children’s rights perspective, however, this notion would need to be interpreted broadly in the best interests of the child, in line with UNCRC Article 3 and CFEU Article 24.
As to the methodology for the assessment of risks and their effect on the rights of the child, inspiration could be drawn from existing methodologies for Children’s Rights Impact Assessments (CRIAs),Footnote 57 for which best practices are available. From a children’s rights perspective, in any case, it is crucial that the impact on the full range of children’s rights is assessed, and that rights are not looked at in isolation but as interacting with each other.
Following the assessment of the risks, Article 35 requires VLOPs and VLOSEs to take ‘reasonable, proportionate and effective mitigation measures, tailored to the specific systemic risks’. One type of mitigation measure that is proposed is ‘targeted measures to protect the rights of the child, including age verification and parental control tools, tools aimed at helping minors signal abuse or obtain support’. The explicit reference to targeted measures is helpful. Recital 89 seems to indicate that such targeted measures might, for instance, be needed to protect minors from content that may impair their physical, mental, or moral development. The examples that are given could be helpful as well, although neither age verification nor parental control tools are without their difficulties. Regarding age verification, the lack of consensus on its desirability and effectiveness has already been pointed to; regarding parental control tools, it has been argued before that these types of empowerment tools should not be used solely to shift the burden of safeguarding the interests and rights of children from platforms to parents.Footnote 58 In addition, other non-child-specific risk mitigation measures that are proposed, such as adapting the design, features, or functioning of services, including online interfaces, may also have a positive impact on children’s rights. Recital 89 explains in that regard that VLOPs and VLOSEs must take the best interests of the child into account when taking such measures, particularly when their services are aimed at minors or predominantly used by them.
18.3.4.3 Standards and Codes of Conduct
Finally, as the formulation of the obligations imposed on VLOPs and VLOSEs remains rather abstract, their actual implementation will be of the utmost importance. The tools that can support platforms in that regard are standards and codes of conduct.
Article 44 states that the Commission, after consultation with the Board, shall support and promote the development and implementation of voluntary standards set by relevant European and international standardisation bodies, including in respect of ‘targeted measures to protect minors online’. In this regard, it is relevant to observe the efforts that are currently being undertaken by the Institute of Electrical and Electronics Engineers (IEEE) regarding the drafting of a standard for Age Appropriate Design for Children’s Digital Services.Footnote 59
Recital 104 explains that an area for consideration for which codes of conduct could be drafted (Article 45) is ‘the possible negative impacts of systemic risks on society and democracy, such as disinformation or manipulative and abusive activities or any adverse effects on minors’. In the Commission’s 2022 BIK+ Strategy, it was announced that the Commission will ‘facilitate a comprehensive EU code of conduct on age-appropriate design, building on the new rules in the DSA and in line with the AVMSD and GDPR. The code aims to ensure the privacy, safety and security of children when using digital products and services. This process will involve industry, policymakers, civil society and children.’Footnote 60 It continues:
[u]nder the DSA, the Commission may invite providers of very large online platforms to participate in codes of conduct and ask them to commit themselves to take specific risk mitigation measures, to address specific risks or harms identified, via adherence to a particular code of conduct. Although participation in such codes of conduct remains voluntary, any commitments undertaken by the providers of very large online platforms shall be subject to independent audits.
At the end of 2022, a call was published by the Commission for members for a Special Group on the EU Code of Conduct on Age-Appropriate Design.Footnote 61 However, work on the Code of Conduct seems to have halted in favour of the drafting of guidelines by the European Commission (supra).
18.4 Children’s Rights in the Proposal for the AIA
18.4.1 The EU Policy Agenda on AI (and Children’s) Fundamental Rights
A second legislative initiative at the EU level that has the potential to significantly impact children’s rights in the digital environment is the AIA. Developing a regulatory framework for AI has been high on the EU policy agenda for some time. Initially, the EC adopted a soft-law approach consisting of non-binding recommendations and ethical guidelines. In June 2018, an independent High-Level Expert Group on Artificial Intelligence (AI HLEG) was established, which was tasked with drafting ethics guidelines for AI practitioners, as well as offering advice concerning the adoption of policy measures.Footnote 62 However, this approach changed in 2021, when the EC explicitly recognised that certain characteristics of AI, such as the opacity of algorithms and the difficulties in establishing causality in algorithmic decision-making, pose specific and potentially high risks to fundamental rights.Footnote 63 As existing legislation failed to address these risks, both the EP and the Council called for legislative action in this area.Footnote 64 Echoing these calls, the AI HLEG also pointed to the need to explore binding regulation to tackle some of the critical issues raised by AI. In particular, the expert group stressed the need for mandatory traceability, auditability, and ex ante oversight obligations for AI systems that have the potential to significantly impact human lives. According to the AI HLEG coordinator, AI is nothing more than an application, system, or tool developed by humans that can be used in different ways: (a) ways that cause harm, (b) ways that cause unintended harm, (c) ways that counteract harm, and (d) ways that cause good.Footnote 65 Therefore, ‘if we are intelligent enough to create AI systems, we must be intelligent enough to ensure appropriate governance frameworks that harness the good use of those systems, and avoid those that lead to (un)intentional harm’.Footnote 66 In its framework for achieving Trustworthy AI, the AI HLEG also pays (limited) attention to children’s rights. More specifically, as part of the key ethics guidelines, the AI HLEG advises paying particular attention to situations involving more vulnerable groups,Footnote 67 such as children, and refers specifically to CFEU Article 24.Footnote 68
The fact that the regulation of AI is a hot topic is evidenced by the responses to the EC’s open consultation on AI in February 2020, which attracted far more feedback submissions than any other consultation on proposed technology legislation.Footnote 69 In these submissions, concerns about AI and children were raised by a range of stakeholders, including companies, academic institutions, and non-governmental organisations (NGOs), mostly in relation to education. For instance, the submissions mentioned that the use of AI in education may have serious consequences for the child’s life course and should therefore be considered high risk, as it may lead to discrimination, have a serious negative impact on children’s learning, and proceed without their consent being properly secured.Footnote 70 More generally, children’s digital rights and AI,Footnote 71 and the use of AI in connection with the evaluation, monitoring, and tracking of children were also mentioned as areas of concern.Footnote 72
18.4.2 The Commission Proposal for an AI Act
In response to these calls for legislative action, in April 2021 the EC unveiled its proposal for the AIA, the first Act of its kind in the world.Footnote 73 It aimed to lay down harmonised rules on the development, deployment, and use of AI systems in the EU,Footnote 74 based on the values and fundamental rights of the EU.Footnote 75 Through a risk-based approach and by imposing proportionate obligations on all participants in the value chain, the proposal aimed to ensure a high level of protection of fundamental rights in general and a positive impact on the rights of special groups, including children.Footnote 76 More specifically, a risk-based categorisation of AI systems was introduced, where different levels of risk correspond to different sets of requirements. The intensity of the risks determines the applicability of the requirements: a lighter regime applies to AI systems with minimal risks, while practices posing unacceptable risks are prohibited. The idea is that (groups of) individuals whose health, safety, and rights are at risk of infringement by new AI developments need a higher level of protection.Footnote 77
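As a rough, non-authoritative illustration of this tiered logic, the sketch below maps the proposal’s risk levels to their regulatory consequence. The labels are shorthand for the categories described in the surrounding text; real classification turns on the prohibited-practice criteria and the high-risk lists discussed below.

```python
from enum import Enum, auto

class RiskLevel(Enum):
    UNACCEPTABLE = auto()  # prohibited practices (e.g., exploiting vulnerabilities)
    HIGH = auto()          # conformity assessment plus essential requirements
    LIMITED = auto()       # transparency obligations (e.g., bot disclosure)
    MINIMAL = auto()       # lighter regime; voluntary codes of conduct

def regulatory_consequence(level: RiskLevel) -> str:
    """Map a risk tier to its (simplified) regulatory consequence."""
    return {
        RiskLevel.UNACCEPTABLE: "prohibited: may not be placed on the EU market",
        RiskLevel.HIGH: "permitted subject to conformity assessment and requirements",
        RiskLevel.LIMITED: "permitted subject to transparency obligations",
        RiskLevel.MINIMAL: "permitted; voluntary commitments only",
    }[level]

print(regulatory_consequence(RiskLevel.LIMITED))
```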
One category of prohibited practices proposed by the Commission that is relevant for children is ‘practices that exploit vulnerabilities of specific vulnerable groups such as children or persons with disabilities in order to materially distort their behaviour in a manner that is likely to cause them or another person psychological or physical harm’, because such systems contradict Union values, for instance, by violating fundamental rights (emphasis added).Footnote 78 This is confirmed in Recital 16 and Article 5.1 (b) of the proposal, although the latter does not explicitly refer to children but to ‘a specific group of persons due to their age’. As an example of such an exploitative AI system, the EC referred to a doll with an integrated voice assistant, which, under the guise of a fun or cool game, encourages a minor to engage in progressively dangerous behaviour or challenges.Footnote 79 While this is an extreme example, children’s rights advocates argued that a number of persuasive design features often found in AI systems used by children could fall under this prohibition.Footnote 80 They cited, for example, the autoplay features of social media companies that aim to increase user engagement, and which could be said to affect children’s sleep and education, and ultimately their health and well-being.Footnote 81 Moreover, when such recommender systems promote harmful content, they might even lead to sexual exploitation and abuse.Footnote 82 Nevertheless, the prohibition was criticised by various stakeholders for its limitations, in particular the limitation to physical and psychological harm,Footnote 83 the requirement of malicious intent,Footnote 84 and the lack of references to fundamental rights.Footnote 85
The Commission proposal also mentions children and their rights in the context of the classification of AI systems as high risk and the related requirements for the provision or use of such systems. According to Recital 28 of the proposal, the extent of the adverse impact on fundamental rights caused by AI systems is of particular relevance when classifying them as high risk.Footnote 86 In this regard, Recital 28 contains an explicit reference to CFEU Article 24, which grants children the right to such protection and care as is necessary for their well-being. Moreover, it mentions the UNCRC and the recently adopted General Comment no. 25 on children’s rights in the digital environment,Footnote 87 which sets out why and how State parties should act to realise children’s rights in a digital world. On reflection, however, this does not mean that the proposal considers AI systems that are likely to be accessed by children, or to impact upon them, to be high risk by default. The Commission proposal also does not impose any real obligation on providers or users of high-risk AI systems to carry out and publish an assessment of the potential impact on children’s rights. Instead, providers of high-risk AI systems will have to conduct a conformity assessment,Footnote 88 to demonstrate compliance with a list of essential requirements, before placing the system on the market or putting it into service.Footnote 89 These requirements include setting up a risk management system;Footnote 90 ensuring that the data sets used comply with data quality criteria; obliging providers to guarantee accuracy, robustness, and data security; preparing technical documentation; logging; and building in human oversight to minimise risks to health, safety, and fundamental rights.Footnote 91 Regarding the latter, human–machine interface tools and measures should be integrated into the design of the AI system.Footnote 92 Users of high-risk AI systems must use the systems in accordance with the provider’s instructions for use. While this seems like a solid set of requirements, it may still be questioned how the full implementation of the risk management system described in Article 9 of the AIA proposal can be ensured without a real obligation first to identify and evaluate risks to children. In addition, the Commission proposal lacks individual rights and remedies against infringements by the provider or user, in contrast to, for instance, data subject rights under the GDPR.Footnote 93
Finally, the Commission proposal also contains transparency requirements that apply to some specific limited-risk AI systems. This category essentially covers systems that can mislead people into thinking they are dealing with a human (e.g., automated chatbots such as the ‘My AI’ tool used by Snapchat).Footnote 94 First, the proposal requires AI providers to design their systems in such a manner that individuals interacting with these systems are informed that they are interacting with a bot (i.e., ‘bot disclosure’), unless it is contextually obvious. Second, the proposal requires users of emotion recognition systems to inform exposed persons of this, and users of AI systems that generate deep fakes to disclose the AI nature of the resulting content. These transparency requirements raised a number of questions (e.g., does this mean that there is a right to an explanation?),Footnote 95 including about the standard of transparency when children are involved. More specifically, do providers have an obligation to offer information in a child-friendly manner – similar to the GDPR transparency obligations – when their AI systems are likely to be accessed by a child? This remained unclear in the Commission proposal.
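A minimal sketch of the ‘bot disclosure’ idea, under stated assumptions: the wrapper function and the child_friendly flag are hypothetical, since whether providers must word the disclosure for children is precisely what the proposal left open.

```python
def respond(user_message: str, reply_fn, contextually_obvious: bool = False,
            child_friendly: bool = False) -> str:
    """Prefix a reply with an AI disclosure unless it is contextually obvious.

    The child_friendly flag is a hypothetical illustration of the open
    question raised in the text: the proposal did not say whether the
    disclosure must be adapted for child users.
    """
    reply = reply_fn(user_message)
    if contextually_obvious:
        return reply
    notice = ("Hi! I'm a computer program, not a person."
              if child_friendly else
              "You are interacting with an AI system.")
    return f"{notice}\n{reply}"

# Example usage with a trivial stand-in for a real model:
print(respond("hello", lambda m: f"Echo: {m}", child_friendly=True))
```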
18.4.3 Amendments Proposed by the IMCO and LIBE Committees
The discussions in the EP were led by the Committee on Internal Market and Consumer Protection (IMCO) and the LIBE under a joint committee procedure.Footnote 96 Additional references to children were added in the IMCO-LIBE draft report.Footnote 97 The first was amendment 208, which proposed a requirement for the future EU advisory body on AI, the so-called AI Board, to provide guidance on children’s rights, in order to ‘meet the objectives of this Regulation that pertain to children’. Second, and perhaps more interesting, was amendment 289, which added to the list of high-risk AI systems ‘AI systems intended to be used by children in ways that have a significant impact on their personal development, including through personalised education or their cognitive or emotional development’. Amendment 23 specified in this context that children constitute ‘an especially vulnerable group of people that require additional safeguards’. Depending on how broadly this category is interpreted (e.g., does it go beyond an educational context?), this could lead to stronger protection. However, the draft report did not contain a general obligation of specific protection for children in the context of AI.
Furthermore, at a public event following the publication of the Commission proposal, one of the shadow rapporteurs of the IMCO Committee on the proposal for an AIA openly criticised the fact that the Commission proposal contained no obligation to carry out fundamental rights impact assessments.Footnote 98 In this regard, amendment 90 of the draft report specified that the obligation of risk identification and analysis for providers of high-risk AI systems should also include the known and reasonably foreseeable risks to the fundamental rights of natural persons.Footnote 99 In addition, the shadow rapporteur argued that the Commission proposal overlooked the fact that AI systems that are transparent and meet the conformity requirements – and can thus move freely on the market – could still be used in violation of fundamental rights. This criticism was reflected in the draft report, which underlined that ‘users of high-risk AI systems also play a role in protecting the health, safety and fundamental rights of EU citizens and EU values’,Footnote 100 and placed more responsibilities on the shoulders of those users.Footnote 101
18.4.4 Amendments by the Council and the EP
Both the general approach of the Council and the amendments adopted by the EP introduced a number of noteworthy changes.
The Council adopted its common position (General Approach) on 6 December 2022, which includes several noteworthy elements concerning children and their rights.Footnote 102 First, as Malgieri and Tiani argue, it adopted a wider and more commercially relevant definition of vulnerability.Footnote 103 More specifically, the Council proposed to prohibit the exploitation not only of age, but also of vulnerabilities based on disability and on the social or economic situation of the individual. This was an improvement in light of children’s rights concerning accessibility and protection from economic exploitation. The Council also deleted the malicious intent requirement and included the possibility that harms may be accumulated over time (Recital 16), thereby resolving some of the criticisms mentioned earlier. In addition, more attention was given to fundamental rights more generally in the context of the requirements for providers of high-risk AI systems. Regarding classification, the Council proposed that AI systems that are unlikely to cause serious fundamental rights violations or other significant risks should not be classified as high risk. Regarding the requirements for providers of high-risk systems, the Council text included a requirement for the ‘identification and analysis of the known and foreseeable risks most likely to occur to health, safety, and fundamental rights in view of the intended purpose of the high-risk AI system.’Footnote 104
Following lengthy discussions on the more than 3,000 amendments tabled in response to the draft report by the IMCO-LIBE committees, the EP plenary session adopted its negotiating position (Compromise Text) on 14 June 2023.Footnote 105 However, despite numerous amendments being tabled with the potential to directly impact children and their rights, none of these child-specific amendments were included in the Compromise Text of the EP. Consequently, these amendments were not part of the trilogue negotiations.Footnote 106 In relation to this, children’s rights organisations raised concerns about the level of consideration given to children’s rights during the legislative process.Footnote 107 Nevertheless, the Compromise Text does include several notable amendments that could impact children and their rights. First, it included a ban on AI systems inferring the emotions of a natural person in education institutions, which has implications for school children.Footnote 108 Second, the EP proposed to include as part of the risk management system for providers of high-risk AI systems a requirement to identify, estimate, and evaluate known and reasonably foreseeable risks to fundamental rights (including children’s rights). Third, the introduction of general principles applicable to all AI systems under the proposed Article 4a is noteworthy. This article requires operators of AI systems to make their best efforts to develop and use these systems in accordance with these principles. The principles encompassed various aspects, including the preservation of human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity, non-discrimination and fairness, as well as social and environmental well-being. To foster the voluntary application of these principles to AI systems other than high-risk AI systems, the EP proposed the establishment of codes of conduct. These codes should particularly ‘assess to what extent their AI systems may affect vulnerable persons or groups of persons, including children, the elderly, migrants and persons with disabilities or whether measures could be put in place in order to increase accessibility, or otherwise support such persons or groups of persons’.Footnote 109 Finally, a new Article 4d outlined requirements for the EU, its Member States, as well as providers and deployers of AI systems to promote measures fostering AI literacy, which could be beneficial for children. This included teaching basic notions and skills regarding AI systems and their functioning.
18.4.5 The Final Text of the AIA
The final text of the AIA was adopted by the EP on 13 March 2024 and endorsed by the Council on 21 May 2024.Footnote 110 The specific references to children and their rights have remained, with noteworthy changes.
First, with regard to the prohibited practices, Article 5b now states:
the placing on the market, the putting into service or the use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm [emphasis added].
Thus, the final text does not contain a malicious intent requirement (i.e., ‘with the objective, or the effect’), and adopts the broader concept of vulnerability (‘disability’, ‘social or economic situation’). Furthermore, Article 5 now covers ‘significant harm’ to ‘that person or another person’, extending beyond physical or psychological harm (infra).Footnote 111 However, a lack of clarity regarding the actual scope of the prohibition remains. A crucial point that needs clarification concerns the threshold for significant harm. For instance, would a violation of children’s rights meet this threshold? According to Recital 29, this may include harms accumulated over time. In addition, this provision is rather broad regarding who suffers harm and seems to cover third-party effects as well.Footnote 112
Second, the references to children’s rights in the context of the classification (AIA Recital 48) and requirements for (AIA Article 9.9) high-risk AI systems have also remained, with some subtle changes. At first glance, these references give the impression that the EU considers that children’s rights and their best interests play an important role in the regulation of high-risk AI systems. Article 9.9 of the AIA, for example, states that ‘when implementing the risk management system as provided for in paragraphs 1 to 7, providers shall give consideration to whether in view of its intended purpose the high-risk AI system is likely to have an adverse impact on persons under the age of 18’ (emphasis added). However, as mentioned, this does not mean that AI systems that are likely to be accessed by children or impact them are considered high risk by default. Notably, the word specific was omitted from the final text, arguably reducing the emphasis compared with the EC proposal. The AIA classifies all systems used within a list of predetermined domains as high risk.Footnote 113 A distinction is made between two sub-categories of AI systems: (a) AI systems that are products or safety components of products that are already covered by EU health and safety legislation, and (b) standalone AI systems used in specific (fixed) areas.Footnote 114 Regarding the latter, one of the areas included that is particularly relevant for children is educational and vocational training – both in terms of determining access to such training and evaluating individuals.Footnote 115 This could include, for example, AI-enabled online tracking, monitoring, and filtering software on educational devices, which could have a chilling effect on children’s right to freedom of expression or violate their right to privacy. This is reminiscent of the Ofqual algorithm debacle in the United Kingdom, where an automated decision-making system was employed to calculate exam grades, leading to discriminatory outcomes for children from lower socio-economic backgrounds.Footnote 116 Such systems can clearly violate children’s right to education, as well as their right not to be discriminated against, and can perpetuate historical patterns.Footnote 117 Another area where AI systems are likely to present high risks to children (as well as adults) is in the allocation of public assistance benefits and services.Footnote 118 Recital 37 specifies that, owing to the vulnerable position of persons depending on such benefits and services, AI systems used in this context may have a significant impact on the livelihoods and rights of the persons concerned – including their right to non-discrimination and human dignity. A concrete example is the so-called benefits scandal in the Netherlands, where an AI-enabled decision-making system withdrew and reclaimed child benefits from thousands of families, disproportionately affecting children from ethnic minority groups.Footnote 119 Aside from these two areas, Annex III lists biometric identification, categorisation, and emotion recognition; management and operation of critical infrastructure; employment; law enforcement; migration; and administration of justice and democratic processes as high-risk areas.
The EC can also add sub-areas within these areas (subject to a veto from the EP or Council).Footnote 120 However, other domains where AI systems and automated decision-making are employed would not be considered high risk, even if they are likely to be accessed by children or impact them or their fundamental rights. This leaves out a whole range of AI systems that could affect the daily lives of children, such as online profiling and personalisation algorithms, connected toys, and content recommender systems on social media.Footnote 121
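To illustrate the fixed-list structure of this classification, here is a sketch using an abbreviated, non-exhaustive subset of the Annex III areas named above. The strings and the lookup are simplifications for illustration; a real determination depends on the full Annex wording and the Commission’s power to amend it.

```python
# Abbreviated, non-exhaustive subset of the Annex III areas named in the text.
ANNEX_III_AREAS = {
    "education and vocational training",
    "allocation of public assistance benefits and services",
    "biometric identification, categorisation and emotion recognition",
    "management and operation of critical infrastructure",
    "employment",
    "law enforcement",
    "migration",
    "administration of justice and democratic processes",
}

def is_high_risk(intended_area: str) -> bool:
    """Crude domain lookup: because the AIA classifies systems by predetermined
    area, a system used outside these areas is not high risk even if children
    use it heavily - the gap noted above for toys and recommender systems."""
    return intended_area.lower() in ANNEX_III_AREAS

print(is_high_risk("education and vocational training"))  # True
print(is_high_risk("content recommender system"))         # False
```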
The final text includes only voluntary commitments for low-risk AI systems. Given the rapid development of AI technologies and how difficult it is at this stage to imagine the future impact on children and their rights, this feels like a very light regulatory regime. A more cautious approach – based on the precautionary principle – could have been to include a binding set of general principles (supra), including the best interests of the child (similar to recital 89 of the DSA), fairness, and non-discrimination for all AI systems in the AIA.Footnote 122
With regard to the transparency requirements for certain AI systems, the final text includes a specific reference to children – or at least to ‘vulnerable groups due to their age’. Article 50 of the AIA states that AI systems should be designed so that individuals are informed that they are interacting with an AI system, unless this is contextually obvious. In relation to this, Recital 132 of the AIA specifies that when implementing this obligation, the characteristics of vulnerable groups owing to their age or disability should be taken into account, if these systems are intended to interact with those groups as well.Footnote 123
Finally, the final text also grants rights to individuals (including children) affected by the output of AI systems, including the right to lodge a complaint before a supervisory authority,Footnote 124 and the right to an explanation in certain instances.Footnote 125 However, children are not specifically mentioned in these provisions.
18.5 Discussion: A Children’s Rights Perspective on the DSA and the AIA
It can only be welcomed that both instruments refer to children and their rights. The question is, however, whether they have the potential to ensure that children’s rights will be effectively implemented. For the DSA, the answer is quite clear: it holds great promise for advancing children’s rights, depending on the actual implementation and enforcement. Where references to children were few and far between in the Commission proposal, the final text appears to take children and their interests (finally) seriously by imposing obligations on VLOPs that could make a difference. Moreover, in addition to the provisions that directly refer to children discussed earlier, there are of course other provisions that will indirectly have an impact on children as a subgroup of individuals in general. Examples are the provisions regarding recommender systems (Articles 27 and 38) and the design of online interfaces (Article 25). While the text of the law indeed provides many opportunities, the actual realisation of children’s rights will depend on implementation and enforcement. The DSA does create enforcement mechanisms and oversight bodies that are responsible for ensuring this.Footnote 126 In 2024, the Commission launched formal proceedings against, among others, TikTok and Meta (Facebook and Instagram) in relation to the protection of minors.Footnote 127 The Commission expresses concerns about, for example, age verification, default privacy settings, and behavioural addictions that may have an impact on the rights of the child. It thus seems that the enforcement of the DSA will move forward faster than, for instance, the enforcement of the GDPR.
For the AIA, it remains to be seen whether it will succeed in guaranteeing children’s rights. Children’s rights are mentioned in the Preamble and provisions of the Act – which is laudable – and it clearly acknowledges the potential negative impact of AI systems on their rights. However, whether these acknowledgements are sufficient for protecting and promoting children’s rights in an increasingly AI-driven world is questionable. First, the prohibition of AI systems that exploit vulnerable groups leaves questions about the threshold for significant harm and its interplay with other instruments. Second, while the final text mentions that the impact of AI systems on children’s rights is considered ‘of particular relevance’ when classifying them as high risk,Footnote 128 systems deployed outside the listed high-risk areas escape that classification altogether, even where children are likely to be affected (supra). As a final consideration, the AIA does not explicitly introduce a general obligation of ‘specific protection’ for children when AI systems and automated decision-making infiltrate their lives – in contrast to, for instance, the GDPR when it comes to the processing of their personal data (recital 38) or – arguably – the DSA requirement to ensure a high level of privacy, safety, and security for minors. Introducing an obligation to ensure the best interests of the child for all AI systems that are likely to impact children could have led to more effective rights implementation in practice.
From a children’s rights perspective, a few questions remain. First, the adoption of any legislative initiative that affects children should be accompanied by a thorough Children’s Rights Impact Assessment (CRIA).Footnote 129 Although both proposals were preceded by an impact assessment, it can hardly be said that these assessments would qualify as CRIAs. A true CRIA would assess the impact of the proposed measures on the full range of children’s rights and would balance conflicting rights (both where children’s rights conflict with each other and where they conflict with the rights of adults, businesses, or other actors). A CRIA is also the way to evaluate whether a proposed measure takes the best interests of the child as a primary consideration. The best interests of the child is a key children’s rights principle, laid down in Article 3 of the UNCRC and Article 24 of the CFEU. This principle should guide all actions and decisions that concern children. It is also very closely linked to another children’s rights principle, laid down in Article 12 of the UNCRC: the right to be heard. Children’s views must be taken on board and given due weight. Although children’s rights organisations had opportunities to share their views and suggestions in the preparatory steps leading towards the adoption of the proposals, this does not replace the actual engagement of children in that process. This is – again – a lesson to be learnt. Moreover, CRIAs might also be helpful for the addressees of the legislative acts when assessing the risks that their services or systems pose for children and their rights.
Second, experience with other legislative instruments, such as the GDPR, has shown that vague wording often leaves addressees at a loss as to how to implement their obligations (notwithstanding their often well-meant intentions). Hence, fast and concrete guidance,Footnote 130 for instance by means of Commission Guidelines (Article 28.4 DSA), codes of conduct, or guidelines by the newly established European Board for Digital Services or the European Artificial Intelligence Board, will be essential. In addition, whereas enforcement of the GDPR by Data Protection Authorities has been argued to be slow, lacking, or not prioritising children, it will be up to Member States to ensure that the DSA and AIA oversight bodies are well resourced, and up to the Commission to take up its supervisory role when it comes to the VLOPs and VLOSEs.
Finally, both instruments adopt an approach that predominantly focuses on safety and risks. There are few to no obligations for platforms to take measures to support children, to enable them to use the services in ways that fully benefit them, and to explore the opportunities that such services offer. Although the BIK+ Strategy, for instance, pays more attention to this, it is perhaps a missed opportunity to put into practice some of the more positive measures that General Comment No. 25 requires States to take.
18.6 Conclusion
The EU legislator is determined to minimise the risks posed by platforms and AI-based systems by imposing various obligations on a range of actors. While many of those obligations are not child-specific, some of them are. Children who grow up today in a world where platforms and AI-based systems are pervasive might be particularly vulnerable to certain risks. The EU legislator is aware of this and pays increasing attention to the risks to children and their rights, although not to the same extent across different legislative initiatives. While the DSA emphasises the protection of minors quite heavily, the AIA is less outspoken. Both during the legislative process and in the stages of implementation and enforcement, the rights and principles contained in the UNCRC should be duly taken into account in order to effectively realise children’s rights in the digital environment.
19.1 Introduction
How can we define linguistic rights and their role in human rights protection? According to the United Nations (UN) Special Rapporteur on minority issues: ‘Linguistic rights can be described as a series of obligations on state authorities to either use certain languages in a number of contexts, or not interfere with the linguistic choices and expressions of private parties.’Footnote 1 Linguistic rights include the right to use one’s language in private and in public and not to be discriminated against for doing so. With respect to education, it is usually the competence of the state to organise it and to set rules for public as well as private schools. School curricula prescribe the basic values and goals pursued by education, as well as the language of instruction. Could it be claimed that states have a duty to provide education in the minority languages used by portions of their population?Footnote 2
The protection of the linguistic rights of minorities, including in education, varies. We shall compare this protection at the level of the UN and the Council of Europe and look into some more recent developments in international law, including during the COVID-19 pandemic.
19.2 UN: A House Much Divided
Linguistic rights are endorsed in a number of international human rights treaties and standards, especially regarding the prohibition of discrimination, the right to freedom of expression, the right to a private life, the right to education, and the right of linguistic minorities to use their own language with others in their group. These treaties and standards are both universal and regional. The most significant contributor at the universal level is, of course, the UN with its many declarations and opinions. In that respect, the most important is the Declaration on the Rights of Persons Belonging to National or Ethnic, Religious or Linguistic Minorities, adopted in 1992.Footnote 3 Even though the General Assembly’s declarations are not legally binding, they still carry a certain legal authority. According to Article 1 of the Declaration, ‘States shall protect the existence and the national or ethnic, cultural, religious and linguistic identity of minorities within their respective territories.’ Consequently, linguistic identity is recognised as an integral part of the identity of minorities and must be protected. Nevertheless, in Article 4, which deals with education, the provisions on learning and instruction in the mother tongue (para. 3) ‘are qualified and ambiguous’ when it comes to the teaching of or in the minority language.Footnote 4
However, perhaps not so surprisingly, there is still no binding treaty at the UN level on the rights of minorities – apart from the famous Article 27 of the 1966 International Covenant on Civil and Political Rights (ICCPR)Footnote 5 – nor is there a universally accepted definition of minorities. Article 27 refers to the duty of state parties ‘not to deny’ minorities some fundamental (collective and individual) human rights. The monitoring body under this treaty – the Human Rights Committee – has interpreted this article as also requiring positive measures ‘to ensure that the existence and the exercise of this right are protected against their denial or violation. Positive measures of protection are, therefore, required….’Footnote 6 Positive measures by state parties may also be necessary to protect the identity of a minority and the rights of its members to enjoy and develop their culture and language.Footnote 7 The ICCPR does not deal with the right to education but leaves this to the complementary International Covenant on Economic, Social and Cultural Rights, also adopted by the UN General Assembly in 1966.Footnote 8 This treaty recognises the right of everyone to education. Its Article 13 paragraph 4 reserves ‘the liberty of individuals and bodies to establish and direct educational institutions’, which may be construed (!) as including the right of minorities to run their own schools and teach (in) their own language, especially when read in conjunction with Article 2.2 (the non-discrimination clause).Footnote 9 State parties are obliged to establish ‘minimum educational standards’ to which all educational institutions established in accordance with Article 13 (3) and (4) are required to conform. UNESCO – the specialised UN agency for education, science and culture – is more specific about the right to education in one’s own language. The Convention against Discrimination in Education provides for the ‘rights of members of national minorities’ in Article 5.1.c, including the use or the teaching of their own language.Footnote 10 This right is, nevertheless, subject to certain requirements:
(i) That this right is not exercised in a manner which prevents the members of these minorities from understanding the culture and language of the community as a whole and from participating in its activities, or which prejudices national sovereignty; (ii) That the standard of education is not lower than the general standard laid down or approved by the competent authorities; and (iii) That attendance at such schools is optional.Footnote 11
In this context, it is also necessary to look into the UN Convention on the Rights of the Child (CRC), adopted in 1989 and in force for an extraordinary 196 states.Footnote 12 Article 30 of the CRC, coming after Articles 28 and 29 on education, mirrors the formulation of Article 27 of the ICCPR: children belonging to a minority ‘shall not be denied the right, in community with other members of his or her group, to enjoy his or her own culture, to profess and practise his or her own religion, or to use his or her own language’.
Finally, among other relevant UN instruments, we have to mention the 1966 International Convention on the Elimination of All Forms of Racial Discrimination (CERD).Footnote 13 This refers to ‘the right to education and training’ as among the rights to be protected against discrimination (Art. 5(e)(v)).Footnote 14
CERD provides for the jurisdiction of the International Court of Justice (ICJ), and it was used by Ukraine to bring a case against the Russian Federation concerning, among other issues, education in the Ukrainian and Crimean Tatar languages in the occupied territory of Crimea.Footnote 15 Ukraine alleged that the measures taken by the Russian Federation in education since the invasion in 2014 had led to a dramatic decline in the accessibility and quality of education in these two languages. The Court started from the premise that:
even if Article 5 (e) (v) of CERD does not include a general right to school education in a minority language, the prohibition of racial discrimination under Article 2, paragraph 1 (a), of CERD and the right to education under Article 5 (e) (v), may, under certain circumstances, set limits to changes in the provision of school education in the language of a national or ethnic minority. (para. 354)
The Court then considered that:
Language is often an essential social bond among the members of an ethnic group. Restrictive measures taken by a State party with respect to the use of language may therefore in certain situations manifest a ‘distinction, exclusion, restriction or preference based on … descent, or national or ethnic origin’ within the meaning of Article 1, paragraph 1, of CERD. (para. 355)
Recognising the right of every state to decide on the primary language of instruction in its public schools, the Court warns against a discriminatory adverse effect of such decisions so ‘as to make it unreasonably difficult for members of a national or ethnic group to ensure that their children, as part of their general right to education, do not suffer from unduly burdensome discontinuities in their primary language of instruction’ (para. 357). Finally, the Court established that there was an 80 per cent decline in the accessibility of education in Ukrainian during the first year after 2014 and a further decline of 50 per cent by the following year.Footnote 16 The Russian Federation was not able to provide convincing reasons for such a decline, so the Court concluded that it violated the relevant articles of CERD.Footnote 17
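To make the cumulative effect of these two findings explicit – assuming, as the Court’s reference to a ‘further’ decline suggests, that the second figure applies to what remained after the first year – the arithmetic, with \(A_0\) standing illustratively for the pre-2014 level of accessibility of education in Ukrainian, is:

\[
A_1 = (1 - 0.8)\,A_0 = 0.2\,A_0, \qquad A_2 = (1 - 0.5)\,A_1 = 0.1\,A_0 .
\]

On this reading, only about a tenth of the original provision remained after two years – a cumulative decline of roughly 90 per cent.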
In conclusion, it seems that there has been certain progress at the level of the UN in recognising education for members of linguistic minorities. However, most of the analysed instruments either limit their scope to the non-discrimination obligation or impose certain limitations and requirements on such education.Footnote 18
19.3 Council of Europe: General Human Rights versus Special Treaties
The protection of national minorities and their languages is among the core activities of the Council of Europe as part of the protection and promotion of human rights. Today, the Council of Europe has forty-six Member States, after expelling the Russian Federation in March 2022 owing to its aggression against and invasion of Ukraine.Footnote 19 All forty-six Member States are bound by the European Convention for the Protection of Human Rights and Fundamental Freedoms (ECHR), the Council’s flagship treaty, and the jurisdiction of the European Court of Human Rights (ECtHR).Footnote 20 However, the Convention has a somewhat limited scope with regard to minorities’ rights and freedoms.Footnote 21
Post-Second World War Europe has been a champion in many aspects of the protection and promotion of human rights. Case in point: the ECHR was adopted as early as 1950, only two years after the adoption of the Universal Declaration of Human Rights by the UN General Assembly.Footnote 22 Nevertheless, the ECHR, and consequently the case law of the ECtHR, has limited application in the field of the protection of minority rights and education in minority languages.
The right to education is protected by Article 2 of Protocol No. 1 to the Convention. In 1968, the ECtHR ruled in the Belgian Linguistic case that states have no obligation to ensure education in one’s own language.Footnote 23 The right to education implied only the right to be educated in the national language (in public schools).
This firm position was seemingly challenged in the interstate case between Cyprus and Turkey before the Grand Chamber in 2001.Footnote 24 The case concerned Greek-language education in the occupied part of Cyprus, where the Turkish Republic of Northern Cyprus (TRNC) was in power. Primary education was available, but if children wanted ‘to pursue a secondary education through the medium of the Greek language’ they were obliged to transfer to schools in the south or to continue their education at a Turkish- or English-language school in the north. The Court admitted that:
In the strict sense, accordingly, there is no denial of the right to education, which is the primary obligation devolving on a Contracting Party under the first sentence of Article 2 of Protocol No. 1. Moreover, this provision does not specify the language in which education must be conducted in order that the right to education be respected (see the above-mentioned Belgian linguistic judgment, pp. 30–31, § 3).Footnote 25
However, and here comes the twist:
the option available to Greek-Cypriot parents to continue their children’s education in the north is unrealistic in view of the fact that the children in question have already received their primary education in a Greek-Cypriot school there. The authorities must no doubt be aware that it is the wish of Greek-Cypriot parents that the schooling of their children be completed through the medium of the Greek language… [T]he failure of the “TRNC” authorities to make continuing provision for it at the secondary-school level must be considered in effect to be a denial of the substance of the right at issue. (para. 278)
Finally, the Court concluded ‘that there has been a violation of Article 2 of Protocol No. 1 in respect of Greek Cypriots living in Northern Cyprus in so far as no appropriate secondary-school facilities were available to them’ (para. 280).
More recently, the ECtHR adopted two judgments concerning schools in a minority language (Russian) in Latvia. Both cases concerned legislative amendments from 2018 that increased the proportion of subjects to be taught in the state language (Latvian) in public and private schools, so that the use of Russian as the language of instruction was correspondingly reduced.
The first case, Valiullina and Others v. Latvia,Footnote 26 concerned the use of Latvian in public schools. The Court returned to the conclusions it had reached in the Belgian Linguistic case – that Article 2 of Protocol No. 1 does not cover the linguistic preferences of parents – and explained that that judgment, like the one in Cyprus v. Turkey and some others, dealt with education in (one of) the national language(s).Footnote 27 Accordingly, ‘the right enshrined by Article 2 of Protocol No. 1 does not include the right to access education in a particular language; it guarantees the right to education in one of the national languages or, in other words, official languages of the country concerned’ (para. 135).
In Džibuti and Others v. Latvia,Footnote 28 the Court examined the complaints by Russian speakers concerning the effect of the same 2018 legislation reform with respect to private schools. Prior to the reform, education in the state language had been mandatory only in public schools, but the authorities claimed that private schools form part of the state education system and should therefore respect the need to have a certain proportion of classes in the national language.Footnote 29 The Court would not accept that Article 2 of Protocol No. 1 covers education in minority languages and concluded that the claim must be rejected as incompatible ratione materiae with the provisions of the Convention.
In both Latvian cases, the Court also dismissed allegations of discrimination and concluded that there was no violation of Article 14 taken together with Article 2 of Protocol No. 1.Footnote 30 Furthermore, the applicants tried to rely on the Framework Convention for the Protection of National Minorities (FCNM).Footnote 31 This convention refers to education in several articles, but the Court ‘emphasized that the principle of instruction in one’s mother tongue, which the applicants referred to, is far from being the rule among the Member States of the Council of Europe’.Footnote 32 Ironically, this conclusion was accompanied by praise for Latvia for ensuring the protection of minority rights in its Constitution.Footnote 33
Almost simultaneously, in October 2023, the Advisory Committee (AC) of the FCNM adopted its fourth monitoring report on Latvia. The AC issued a very stark warning about the second education reform, adopted in 2022 as a follow-up to the 2018 legislation, stating that it:
will result in the phasing out of teaching in minority languages in most public and private preschools, schools and universities by 2025. [T]he termination of teaching in Belarusian and Russian will affect about 20% of all children of schooling age. With plans also underway to discontinue the teaching of Russian as a foreign language, the offer will be reduced to extracurricular courses of language and culture. Should all these measures be implemented as planned, Latvia’s system of minority education will no longer comply with the Framework Convention’s provisions regarding equal access to education, the right to set up private minority educational establishments, and the right to being taught the minority language or for receiving instruction in this language.Footnote 34
With these two judgments, the Court further confirmed that the ECHR has very limited application concerning minority rights and that their protection is better ensured through the treaties specifically dedicated to the protection of minorities, namely the FCNM and the European Charter for Regional or Minority Languages (ECRML).Footnote 35 However, these treaties apply only to their state parties. So far, the Court has refused to apply some of their principles as customary law or general principles of international law, which would extend their application beyond state parties.Footnote 36
The ECRML was not invoked in these two cases, since Latvia is not a state party to it. In general, the ECRML is hardly ever discussed by the Court. In one of the more recent cases, the Court simply noted that the ECRML expressly recognises that the protection and encouragement of minority languages should not be to the detriment of official languages and the need to learn them.Footnote 37
Nevertheless, the two treaties of the Council of Europe dealing specifically with national minorities (the FCNM and the ECRML), both in force since 1998, have their own peculiarities. The former has thirty-eight state parties,Footnote 38 the latter only twenty-five. Both have specific monitoring mechanisms that include the Committee of Ministers of the Council of Europe: the Framework Convention’s monitoring body is the AC, while the Charter has the Committee of Experts.Footnote 39 The system is based on recommendations by all three bodies. Both special instruments provide for the need, and indeed the obligation, to offer education in minority languages. The state parties to the Framework Convention ‘undertake to promote equal opportunities for access to education at all levels for persons belonging to national minorities’ (Art. 12.3). The Charter is even more precise, as one of its objectives includes ‘the provision of appropriate forms and means for the teaching and study of regional or minority languages at all appropriate levels’ (Art. 7.1.f). In addition, state parties may choose to provide education in the selected languages at all levels, or for a substantial part of it. The minimum option is to provide for the teaching of the relevant regional or minority languages (Art. 8).Footnote 40
19.4 COVID-19 Pandemic and Education
During the COVID-19 pandemic, both monitoring mechanisms issued statements and recommendations to state parties that were also valid for a wider circle of states. The statements identified and focused on specific problems that minorities faced during the COVID-19 pandemic and that the majority population did not. Furthermore, as part of their monitoring processes, both bodies now routinely ask questions about the measures taken during the pandemic that affected national minorities and the use of their languages.
In addition, a newly established intergovernmental body within the Council of Europe, the Committee on Anti-discrimination, Diversity and Inclusion (CDADI), carried out an investigation specifically to shed light on this worrying problem.Footnote 41 It seems that education in minority languages was most affected.
From the start of the pandemic, the Committee of Experts, as the Charter’s monitoring body, included questions about the measures taken during COVID-19 in its monitoring work. The pandemic had a terrible impact on all aspects of our lives – social, mental, financial, political, educational, and, of course, health-related. It was often heard that the virus does not discriminate; that we are all in it together. Nevertheless, research shows that this is not necessarily true and that some social groups were more affected than others.
These social groups may be called vulnerable, and they represent different kinds of minority populations – mostly ethnic, racial, national, or sexual minorities. Evidence shows that these groups suffered the consequences of the pandemic disproportionately to their share of the population. The consequences ranged from a lack of access to sanitary information and vaccines to exposure to hate speech and physical violence. The latter especially affected LGBTQI people, but also, at the beginning of the pandemic, people of Asian appearance, who were perceived as responsible for the virus. Roma and Travellers also suffered from hate speech, and antigypsyism was exacerbated during this period.
These trends were confirmed, for example, by the European Network of Equality Bodies (EQUINET) and ombudsmen across Europe. They noted increases in xenophobia and racism against virtually every minority group. Those in positions of leadership – politicians at different levels – were identified as frequent instigators of hate speech and discrimination.Footnote 42
Where such trends are concerned, the usual targets, at least in Europe, are the Roma, a community that was already marginalised and discriminated against even before the start of the pandemic. According to a study carried out by the Council of Europe for the CDADI,Footnote 43 in addition to the general measures that affected the population at large, Roma were targeted by some measures that directly discriminated against them. For example, in Bulgaria and Slovakia, militarised quarantines were imposed on Roma settlements on the allegation that their residents would not respect the sanitary restrictions. In Bulgaria, the authorities even used drones with thermal sensors to take the temperature of the residents of these settlements and to monitor their movements.
Restrictions on free movement during the pandemic also affected the Roma in a more indirect way. Their way of life and of making a living often involves travelling and collecting materials for sale. That was obviously not possible during the lockdowns, yet they were not entitled to financial assistance for losing their jobs because, technically, they did not lose their jobs: they had no jobs to lose. A real Catch-22.
Finally, data from many states show that minorities, including the Roma, suffered more infections and COVID-19-related illnesses. In other words, belonging to a minority ethnic group was an additional health risk factor. As far as the Roma were concerned, most of them had no health insurance and were not entitled to free health services, so many stayed away from hospitals for as long as they could.
Let us think back to those first few weeks of the pandemic. We were receiving hourly and daily information and instructions on how to behave and what measures we had to take to stay safe. How much of it was given in minority languages, in real time? According to research carried out by the Committee of Experts for regional or minority languages, of which I am a member, many states failed to provide early information in regional or minority languages. Put that in the context of the lack of sanitary facilities in Roma settlements, add the shortage of sanitary materials such as disinfectant or face masks, together with large families living in small houses, and the result was quite devastating for these communities. In the early days of the pandemic, in March 2020, the Committee of Experts warned about the need to use regional or minority languages in providing information to the population about the sanitary measures required to prevent the spread of the disease.Footnote 44 In those early days, people were confused and scared, and the lack of proper information made it even worse. The Committee of Experts invited states to take language-related issues into account when developing further policies and instructions to address this exceptional medical crisis. In its view, the authorities should not forget that national minorities are an integral part of their societies and that, for the measures adopted to have full effect, they should be made available and easily accessible to the whole population.
It is very common to say – well, they all speak the official language. However, this is not always true. There are parts of societies in some countries where a minority language is so dominant that many people do not master the official language of the state. This is especially the case with the Roma, whose social isolation also results in poor language skills in the official language.
A positive example in this context comes from Croatia, where the Institute for Public Health published sanitary instructions in Romani on their webpage in April 2020 along with other minority languages.Footnote 45 There are, however, at least two problems with this commendable example: (a) How many Roma had access to the internet? (b) Romani is not the only language spoken by Roma in Croatia, as the majority speaks Boyash Romanian, a completely different language. However, the Red Cross organisation in one of the counties where the Roma community in Croatia lives distributed sanitary packages with instructions in Croatian and Boyash.Footnote 46
As far as education is concerned, the first measure across the world at the start of the pandemic was the lockdown affecting all aspects of public life, including schools at all levels of education. According to UNESCO data, 1.6 billion learners were affected by school closures in more than 190 countries.Footnote 47 Realising the importance of education, one of the first measures undertaken by public authorities to counter the lockdown and the closing of schools was therefore the shift to online education and/or TV classes.Footnote 48 This was a quick fix for an unprecedented situation.Footnote 49 However, the Committee of Experts recognised the downsides, for regional or minority languages, of online education, which quickly became the dominant model. Many states catered only for education in the majority official language, sometimes neglecting the need of speakers of regional or minority languages also to access quality education. The Committee of Experts made a public statement in July 2020 highlighting this potentially discriminatory treatment.Footnote 50 In addition, bearing in mind the insufficient availability of teaching materials in regional or minority languages noticed during several monitoring cycles in many states, the Committee of Experts encouraged public financing of the development of quality open-access textbooks in all languages protected under the Charter. Such textbooks, registered under open licences, should be made accessible online for the use of pupils, students, teachers, and the larger public. This was done for the majority languages, so the same standard was necessary for regional or minority languages as well.
In the view of the AC under the Framework Convention, the suspension of classes in schools and pre-school education during the COVID-19 pandemic regrettably often resulted in unequal access to education and discrimination against children belonging to national minorities, particularly those who were not proficient enough in the official languages to be provided with appropriate educational content. As a result, children of national minorities were at risk of learning delays and dropping out.Footnote 51
Although some of these inequalities can be explained by the sudden onset of the pandemic and the need to deal with an unforeseen public health situation, it is still not acceptable to treat members of ethnic or national and linguistic minorities as second-class citizens. Furthermore, it is also clear that some aspects of COVID-19-style education will remain, and it is therefore important to be aware of the needs of the members of national and linguistic minorities.
Let us pause here for another best-practice example from Croatia. The Ministry of Science and Education recognised that Roma children were unable to follow classes online or on television. The classes were in Croatian, since Croatia did not provide for education in Romani or Boyash. In fact, the school authorities are obliged to give Roma children enhanced teaching of Croatian to improve their language skills so that they can follow classes in Croatian.Footnote 52 Living in settlements meant that the vast majority of children had no access to the internet; television sets, if available, were not at the disposal of the children or, even more often, had to be shared by many children in the household. The Ministry therefore provided tablets and internet vouchers to all children who could not otherwise afford them, so that they could follow the online classes. Most of them were Roma. However, some comments suggested that there was no real control of or assistance for these families, so the vouchers were often misused. Almost 60 per cent of Roma pupils barely participated in online education. Among the reasons given, lack of support from the family and inadequate conditions at home were predominant, but the lack of tablets or internet access was also high on the list.Footnote 53 It is clear that the consequences will be difficult to remedy, since Roma children have a very high drop-out rate from school even at the best of times.Footnote 54
In Slovakia, on 28 April 2020, the Ministry of Education issued the first guidelines on the content and organisation of education for primary school pupils during that period and released material by the State Pedagogical Institute entitled ‘Content of education at primary schools during the extraordinary interruption of teaching at schools’, covering also the minority languages German, Hungarian, Romani, Russian, Ruthenian, and Ukrainian. However, the Committee of Experts noted that, as in other state parties, significantly more alternative audiovisual education materials were available in the official language from official and unofficial sources.Footnote 55
In the UK, the authorities undertook a number of measures taking into account the needs of regional or minority languages, in particular where education falls within the remit of the devolved administrations.Footnote 56 The Welsh Government took measures in education by ensuring that children and parents/carers received support during school closures and their reopening. The Scottish Government, through Bòrd na Gàidhlig, provided funding for resources to help students with distance learning in Scottish Gaelic during the closure of schools, and many such resources were produced. Additionally, the BBC published resources for ‘lockdown learning’ in Irish, Scottish Gaelic, and Welsh.
In Austria, a study was carried out on the perceived decrease in knowledge of the languages of instruction (Slovene and German) in three minority secondary schools in Carinthia. After the overnight shift to online teaching, one favourable fact was that the majority of pupils were from economically secure backgrounds and were able to follow online lessons. Nevertheless, there seemed to be a decline in knowledge of both languages among monolingual speakers, owing to a lack of contact with the languages at school.Footnote 57
19.5 Conclusions
The right to education in minority languages has recently been challenged in proceedings before several international courts, with unsatisfying results for minority-language speakers. The arguments accepted by the courts were limited to non-discrimination. However, wider arguments should have been advanced in favour of the protection and promotion of minority languages in education. While not every language can be catered for in the school system, states should be persuaded that they have more extensive positive obligations to preserve and promote the languages spoken by parts of their population.
The COVID-19 pandemic has shown just how easy it is to fall into discriminatory patterns in the treatment of minorities and how easy it is to find excuses for this. Instead, the authorities should always keep in mind their responsibility to manage social trends and correct those with negative potential.
Council of Europe treaties specifically dedicated to the protection of minorities and their languages, in particular the ECRML, seem to be better suited to promoting a different approach to education at least for their state parties. It makes sense, therefore, to promote their ratification even more.
In the words of the former UN Special Rapporteur Mr de Varennes:
Minority language education is more of an asset than it is a liability when understood and applied properly. It is not a right which is applicable in every situation where one individual simply demands it. It is rather a result which is consistent with the values of respect for diversity, tolerance and accommodation – rather than rejection – of differences which have become cornerstones of most democratic societies.Footnote 58
20.1 Introduction
This chapter aims to provide the reader with a comprehensive overview of contemporary discussions surrounding labour and human rights in the context of technological acceleration, with a focus on Brazil. In times of profound digital transformation, labour relations have been shaped by growing platformisation, precarisation, and inequality, demanding a critical analysis of new forms of exploitation and of the social and political responses to these challenges.
The chapter is structured around a bibliographical review of authors who address the intersection of labour, technology, and human rights, including Rafael Grohmann, Ricardo Antunes, Ludmila Costhek Abílio, Trebor Scholz, Paul Singer, and Renato Dagnino. These scholars provide a solid foundation for understanding how platformisation processes and algorithmic management have redefined labour dynamics, exacerbating social and economic inequalities. In particular, the chapter discusses how the gig economy and the ‘uberisation’ of work have transformed labour relations, increasing worker vulnerability while reducing workers’ protections and rights.
Throughout the chapter, we investigate how the regulation of digital work and the recognition of labour rights, especially on digital platforms, have become topics of debate in Brazil, particularly in a scenario of increasing automation and social exclusion. We analyse initiatives such as Fairwork, which has emerged from the fight to promote decent working conditions in the context of digital platforms. Additionally, we highlight the difficulties faced by workers involved in this digital economy, such as the lack of security, income instability, and the opaque control exercised by companies through algorithmic management.
Finally, we bring environmental education to these discussions, showing how it can significantly contribute to addressing the socio-economic and environmental inequalities associated with platform capitalism. Environmental education, especially from the perspective of environmental justice, offers a critical lens that not only addresses ecological issues but also reflects on the social and political conditions that affect the most vulnerable workers. The connection between education, human rights, and technology thus becomes an important avenue for building more just socio-environmental relations.
20.2 Contextualising the Discussion: Digital Transformations and Labour Inequalities in Brazil
In recent years, the rapid advancement of digital technologies has profoundly transformed labour markets worldwide. As Brazil navigates this wave of digitalisation, a critical issue arises: that of the inequalities faced by vulnerable workers in this evolving landscape. Although digitalisation offers remarkable opportunities for efficiency and innovation, it also introduces new forms of disparity and exclusion. Vulnerable workers – often those in low-paying jobs, informal sectors, or with limited access to technology – are particularly susceptible to the adverse effects of this transformation, which directly impacts their well-being and economic stability.
Globally, and especially in Brazil, the intensification of labour exploitation has been used as a measure to revitalise and stabilise capitalist accumulation.Footnote 1 According to Neves, processes of precarisation, outsourcing, and informal labour are essential for the expansion of capitalism. The shift in the labour organisation model, which makes it increasingly flexible, is strongly marked by the platformisation of work.Footnote 2 The accelerated advancement of digital technologies and the growing automation of production processes have generated profound transformations in the world of work, while simultaneously accentuating socio-economic inequalities.
In this context, workers face the risk of alienation and exclusion as technological development advances at an ever-increasing pace without adequate social protection mechanisms and adaptation to the new labour realities. According to the Economic Commission for Latin America and the Caribbean, technological progress has been accompanied by a series of socially negative outcomes, such as the exclusion of a significant portion of the population from the benefits of digitalisation, mainly owing to insufficient incomes that limit access to quality connectivity, suitable devices, and reliable domestic connections. Additional problems include the proliferation of fake news, the increase in cyberattacks, growing privacy risks, and the issue of electronic waste.Footnote 3
The COVID-19 pandemic exacerbated these issues, bringing negative impacts on jobs, wages, and efforts to combat poverty and inequality, especially in countries such as Brazil, where structural constraints, such as connectivity problems and social inequalities, further limit the benefits of digital technologies.Footnote 4 While common elements can be identified in the digitalisation process across different countries, it is crucial to recognise the particularities of each location and region. The social dynamics and inequalities that characterise each context are accentuated by digitalisation, which often does not allow collective struggles to take shape or rights to be strengthened.
Rafael Grohmann, in his book Os Laboratórios do Trabalho Digital (The Digital Labour Laboratories), argues that contextualising the geopolitics of platform labour also means understanding the different meanings of work in the economies of each country, distinguishing experiences between the global North and South.Footnote 5 Grohmann and Abílio et al. highlight that while the term ‘gig economy’ emerged in the global North to describe the platform work landscape, in Brazil this nomenclature does not apply in the same way, as the Brazilian economy has always been characterised by a management of survival for the working class, now intensified by the transition to the digital under a liberal rationality.Footnote 6 Thus, platform-mediated subordinate labour is embedded in contemporary dilemmas related to mapping and recognising worker exploitation and its centrality in current forms of capitalist accumulation.Footnote 7
Ricardo Antunes, one of Brazil’s most prominent labour sociologists, notes that before 2020 more than 40 per cent of the Brazilian working class was in informal employment, a situation that worsened further with the COVID-19 pandemic.Footnote 8 According to him, ‘we are living in a new level of real subordination of labour to capital under algorithmic governance, with the working class living between the disastrous and the unpredictable’.Footnote 9 In our view, this scenario reinforces Grohmann’s analysis of the gig economy, in which workers, placed in precarious and unstable conditions, manage to secure only the bare minimum for their survival.Footnote 10 Constantly pressured by low wages and volatile conditions, these workers cannot pass the subsistence barrier: their efforts cover only basic expenses, with no possibility of reaching an income that provides any kind of stability or economic progress.
Despite the over-exploitation of labour being a constant in Brazil and Latin America, it is evident that technological advances are transforming the ways in which the working class faces precarisation and exploitation.Footnote 11 In this context, Fairwork emerges as a relevant initiative both in Brazil and globally.Footnote 12 This project, based at the Oxford Internet Institute and the WZB Berlin Social Science Centre, works closely with workers, platforms, lawyers, and legislators in various parts of the world to think about and develop a fairer future for work. The Brazilian team is co-ordinated by professors Rafael Grohmann, Julice Salvagni, Roseli Figaro, and Rodrigo Carelli. Additionally, we highlight the efforts of researchers, such as Ludmila Costhek Abílio, Abílio et al., Grohmann, Rebechi et al., and many others who are fighting for the recognition and defence of the labour rights of workers within the context of digital platforms.Footnote 13
In 2023, the second Fairwork Brazil report was published, continuing to examine how the major digital labour platforms in the country align with Fairwork’s decent work principles amid intense disputes and debates about platform labour regulation.Footnote 14 The document highlights that, after the election of Luiz Inácio Lula da Silva to his third term as president of Brazil, a working group was established to discuss the regulation of platform labour in the country, involving the participation of companies, workers, and government representatives. Another important finding concerns how digital platforms use lobbying practices to influence legislation and public policies, often employing subtle tactics and data manipulation to distort and contest notions of decent work.
In this regard, the phenomenon of uberisation exemplifies the adverse conditions faced by digital platform workers in Brazil, as described by Abílio and Santiago in the ‘Dossiê das violações dos Direitos Humanos no Trabalho Uberizado’ (‘Dossier on Human Rights Violations in Uberised Work’).Footnote 15 Uberisation, as the dossier defines it, refers to the growing precarisation of labour relations promoted by digital platforms such as Uber and iFood. In Brazil, workers face structural problems, including the lack of basic labour rights, unsafe working conditions, and the absence of the formal recognition of their activities as regular employment. Platform-mediated work has given rise to various terms worldwide and in Brazil that attempt to describe the specific forms of precarisation associated with this reality.
Besides the previously mentioned terms ‘gig economy’ and ‘uberisation’, we highlight new vocabulary that has been incorporated into research on the world of work. Among these are the concepts of the ‘just-in-time worker’,Footnote 16 ‘crowdwork’, and ‘work on demand’.Footnote 17 These terms reflect the different dimensions of the gig economy, which encompasses both crowdwork (work performed through online platforms) and on-demand work managed by apps (‘work on demand via apps’).
Based on De Stefano’s contributions, we understand that crowdwork involves the performance of tasks through online platforms that connect clients and workers globally, ranging from simple microtasks to more complex jobs.Footnote 18 On-demand work via apps includes traditional activities, such as transportation and cleaning, managed by apps that set quality standards. While crowdwork has global characteristics and on-demand work responds to local circumstances, the two share features such as payment and management methods. We know these terms are more complex than presented here, but the goal is not to exhaust the concepts; rather, it is to highlight how these practices play out in the world of work. Additionally, differences between platforms can have legal implications, for instance for the validity of contracts and the applicable legislation.
In sum, the problems faced by platform workers include income instability, with earnings that vary significantly and often do not cover living costs. Additionally, these workers have no access to benefits such as health insurance, pensions, or protection against work-related accidents, and they bear the full cost of their work tools, such as vehicles and smartphones, exacerbating their financial vulnerability.Footnote 19 Another critical aspect is algorithmic management, which subjects workers to opaque control, meaning there is no transparency in this relationship.Footnote 20 Abílio et al. add that algorithmic management is based on automated instructions that process large volumes of data, influencing both the workers’ daily actions and consumer dynamics.Footnote 21 This model of work organisation generates instability and a lack of clarity in the rules.
20.3 Forms of Resisting the Deepening of Platform Capitalism Inequalities
The book Platform Cooperativism: Challenging the Corporate Sharing Economy by Trebor Scholz, translated into Portuguese and annotated by Rafael Zanatta, emphasises that platform capitalism deepens labour precarisation, offering unstable and rights-deprived conditions while concentrating wealth and power in the hands of a few, thereby intensifying economic inequality.Footnote 22 The lack of regulation allows these companies to operate without social responsibility, exploiting workers under the false promise of autonomy and flexibility. Furthermore, this model contributes to the erosion of traditional labour rights, such as paid vacation and health insurance, aggravating deregulation and worker vulnerability – a point already addressed by other authors throughout this publication. However, Scholz’s main contribution, in our view, lies in his proposal for platform cooperativism, which offers a fairer and more democratic alternative to this exploitation.Footnote 23
Scholz’s proposal opposes the logic of the sharing economy, advocating the creation of worker-controlled labour platforms as a more equitable and democratic alternative to the exploitation inherent in current corporate models.Footnote 24 In the same book, Rafael Zanatta discusses the origins of cooperativism in Brazil, which are linked to the early days of the Republic and to the immigration process aimed at replacing slave labour and adapting to urbanisation and changes in productive structures. Cooperativism in Brazil, however, long followed a business logic, only giving way to more solidarity-based proposals during the Lula administration (2003–10), with the creation of the National Secretariat for Solidarity Economy within the Ministry of Labour and Employment.
In this context, it is essential to highlight the importance of the solidarity economy and of social technologies, which have been gaining strength in Brazil since the 1980s and 1990s. These initiatives, associated with the names of Paul Singer and Renato Dagnino, aim to promote solidarity-based and democratic forms of labour organisation, serving as resistance to capitalist methods of exploitation. As previously mentioned, Brazil has faced a legacy of social inequality and exploitation of the working class since its origins. We therefore believe that platform cooperativism shares the same goals as social technology and the solidarity economy.
Social technology emerges as a tool that promotes collaboration, inclusion, and social transformation, designed to meet the specific needs of communities. By characterising technology as ‘social’, we recognise that it is not neutral and that its applications can have varying impacts.Footnote 25 This understanding challenges the traditional view of technology, which often prioritises profit over social and environmental well-being. It is within this context that the movement for social technology arises, which, according to Dagnino, constitutes a rejection of conventional technology, seeking alternatives that prioritise the collective and sustainability.Footnote 26
Paul Singer, in turn, argues that the solidarity economy presents itself as an alternative to the neoliberal model, seeking fairer forms of production and trade.Footnote 27 Singer discusses the solidarity economy as a response to the context of inequality, highlighting its capacity to generate income and empower communities.Footnote 28 Singer argues that ‘there is no way to ignore that the solidarity economy is an integral part of the capitalist social formation, in which capital concentration incorporates technical progress and thus determines the conditions of competitiveness in each market’.Footnote 29
Singer adds that the formation of cooperatives or cooperative complexes reveals an organisational strategy aimed at strengthening cooperativism in the face of capitalist dynamics.Footnote 30 In a scenario where capital is concentrated in the hands of a few, technological advances tend to favour this concentration, resulting in growing inequalities. Therefore, while the solidarity economy tries to mitigate the negative effects of capital concentration, it is also influenced by these conditions, revealing the interdependence between the two systems.
In this context, platform cooperativism emerges as an alternative that enhances this logic of collaboration and coordination among cooperatives. Social technology aligns with this perspective of platform cooperativism by proposing technological alternatives to facilitate collaboration between cooperatives and their members, promoting an organisational model that values community participation and autonomy. These platforms not only offer tools for management and commercialisation but also foster the exchange of knowledge and experiences, essential for strengthening cooperativism and the solidarity economy. Alvear et al. argue that:
Among the numerous difficulties faced by cooperatives and solidarity economy enterprises, one of them is access to technology, particularly technologies that are suited to their organizational structures and values. Authors such as Dagnino (2004; 2019) and Varanda and Bocayuva (2009) emphasize how conventional technologies reinforce capitalist values and organizational forms, and thus, Social Technology would be the appropriate technology for solidarity enterprises.Footnote 31
Social technology and the solidarity economy, when integrated into platform cooperativism, can help build more robust networks where cooperatives from different sectors can come together and develop solutions adapted to their local realities. This is especially relevant in contexts of vulnerability, where communities need support to overcome economic and social challenges. All these theoretical and methodological efforts share a common denominator – building foundations to achieve decent work.
20.4 Human Rights and Worker Protection in the Era of Technological Acceleration
The concept of fair and decent work has its roots in the labour struggles of the twentieth century and was formally defined in 1999 by the International Labour Organization (ILO). In the twenty-first century, it remains a central demand within the United Nations (UN) 2030 Agenda, as part of the Sustainable Development Goals (SDGs).Footnote 32 The UN recognises that decent work is essential for eradicating poverty and promoting prosperity. The concept encompasses working conditions that ensure fundamental rights, social protection, and equal opportunities. SDG 8 emphasises the importance of decent work and sustainable economic growth, acknowledging that promoting proper working conditions is essential not only for eradicating poverty but also for fostering prosperity and social well-being.
Achieving this goal involves ensuring labour rights, combating unemployment, promoting job security, and encouraging social dialogue. In an increasingly globalised and constantly changing world, the challenge of ensuring fair and dignified working conditions becomes even more relevant, requiring collective efforts from governments, businesses, and civil society. Amid these changes, human rights emerge as a crucial anchor for safeguarding the dignity and working conditions of millions of people around the world.
The right to work, enshrined in Article 23 of the Universal Declaration of Human Rights (UDHR), is a central principle for ensuring that, even in times of intense technological transformation, everyone has access to dignified employment opportunities. The growing automation of jobs – especially in industry and the service sector – puts millions of jobs at risk, producing a paradox between technological progress and the increasing precarisation of work.
Hartmut Rosa, in Alienation and Acceleration: Towards a Critical Theory of Late-Modern Temporality, describes ‘time compression’ and ‘technical expansion’ as central features of a world driven by the imperatives of growth and speed. As the economy accelerates, technology not only transforms production dynamics but also redefines social relations and the experience of time and space.Footnote 33 Rosa argues that we are living in a ‘late modernity’, marked by a process of social acceleration in three dimensions: ‘technological acceleration, acceleration of social changes, and acceleration of the pace of life’.Footnote 34
According to Rosa, technological acceleration ‘constantly displaces the spaces of security’ that were once guaranteed by stable jobs and continuous careers, creating new forms of insecurity and alienation.Footnote 35 This acceleration intensifies the pressures on workers, who face both the insecurity of losing their jobs to machines and algorithms and the difficulty of adapting their skills to new contexts.Footnote 36 In this sense, it is crucial to ensure that workers have not only access to jobs but also fair and equitable working conditions, along with opportunities for reskilling and professional development.
The ILO advocates that the protection of fundamental labour rights must include job security and access to decent working conditions. Technological acceleration, while bringing innovation, also exacerbates social inequalities. As Rosa points out, the dynamics of acceleration tend to benefit those already in privileged positions, widening the gap between the rich and the poor.
This issue is reflected in the ‘Life Stories’ section of the Fairwork Report, which presents workers such as João.Footnote 37 João’s story illustrates how, in scenarios of labour precarisation such as the one he faces, the principles of Article 23 of the UDHR, which establishes the right to decent work, are violated. This, in turn, impedes the fulfilment of what is guaranteed by Article 25, which ensures an adequate standard of living. In light of this scenario, human rights such as the right to a safe working environment, fair wages, and protection against unemployment must be reaffirmed in the context of digitalised labour. The regulation of platforms and the inclusion of informal workers in social security networks are necessary measures to combat exploitation and to ensure that technology is used to promote social well-being rather than to deepen inequalities.
20.5 Environmental Education as a Response to Inequalities and the Defence of Human Rights
Environmental education, according to Reigota, emerges as a response to the need to address the environmental problems generated by the capitalist economic model, which is predatory and unsustainable.Footnote 38 The starting point for the environmental debate was the United Nations Conference on the Human Environment, held in Stockholm, Sweden, in 1972. This meeting resulted in agreements among the signatory countries, emphasising the importance of educating people to solve environmental problems. From this conference onwards, global environmental concern gained prominence, followed by other significant events – the Belgrade Conference (1975), Tbilisi (1977), Moscow (1987), Rio (1992), and Rio+10 in Johannesburg (2002) – all of which contributed to the implementation of public policies on environmental education at the international level.
The concerns that gave rise to environmental education were primarily conservationist in nature and often resembled a ‘manual of etiquette’,Footnote 39 with proposals that were more behavioural than critical of the capitalist system. Initially, environmental education was a concern of ecological movements; however, as the debate deepened, works by authors such as Layrargues and Carvalho became essential in expanding the field’s discourse.Footnote 40
In this regard, it is worth noting that environmental education is a constantly evolving field, shaped by socio-environmental issues: ‘Refounding the historical, anthropological, philosophical, sociological, ethical, and epistemological foundations of Environmental Education means providing new representations to the signs that these sciences come to symbolise within the horizon of a plurality of knowledge within a unity of meanings.’Footnote 41 We live in a time when crises seem to overlap at a frenetic pace – something we can call a polycrisis, as described by Pinheiro and Pasquier. Against this background, the necessary debate in the field of environmental education is one that seeks to comprehend the conditions surrounding the emergence of epidemics and pandemics, particularly COVID-19, climate catastrophes, and the wars spreading across different parts of the planet.Footnote 42 In every case, a segment of society is disproportionately affected by the damage and negative consequences of these processes.
From this perspective, Isabelle Stengers, in her book In Catastrophic Times, discusses the relationship between the economic crisis in the US and the devastation caused by Hurricane Katrina.Footnote 43 She asserts that economic and climate crises share a common denominator. Similarly, Henri Acselrad argues that the COVID-19 pandemic, which erupted in 2020, cannot be understood in isolation but rather as an intrinsic product of neoliberal capitalism.Footnote 44 The health crisis emerged in a context already marked by an impending financial crisis, resulting in a general collapse of economic activity. For Acselrad, the notions of environmental crisis and disaster must be analysed in light of the processes of capitalist reproduction and crisis.
Carvalho and Ortega support Stengers and Acselrad by pointing to the intertwining of the pandemic, environmental issues, and the climate crisis.Footnote 45 In the same discussion, the authors also reflect on the war between Russia and Ukraine. The pandemic, environmental degradation, and geopolitical tensions are thus deeply intertwined, revealing an interconnected global system in which crises not only accumulate but mutually intensify.
Based on Abílio et al., Grohmann, and the Fairwork Report, we add that platform capitalism is an emergent factor within these crises, part of a complex global system in which each crisis amplifies the others.Footnote 46 This reality, described as a polycrisis, demands a relational approach – one already present in the debate and to which we aim to contribute through the field of environmental education. From this perspective, environmental education must address not only environmental degradation but also the social and political conditions that give rise to these emergencies. It must therefore include a critical analysis – grounded in critical thinking – of epidemics, climate disasters, and conflicts, recognising that their consequences disproportionately affect vulnerable groups.Footnote 47 This is an environmental education for environmental justice.Footnote 48
Environmental education provides a conducive space for strengthening the fight against the socio-economic and environmental inequalities faced by workers, especially in the context of environmental crises and technological acceleration. By expanding its boundaries beyond environmental preservation, environmental education becomes a field of study aimed at promoting educational projects in which individuals are engaged in the fight for life in its entirety.
In this sense, Carlos Frederico Loureiro, in his book Environmental Education: Questions of Life, places life at the centre of the debate, highlighting the urgency of a utopia that allows the overcoming of the limiting situations imposed by an exclusionary, oppressive system that destroys nature, including humans.Footnote 49 Loureiro’s view of environmental education is anchored in a broad understanding of the struggle for life. For him, this struggle is not limited to the environmental field but involves transforming the social structures that perpetuate inequalities and the exploitation of workers. He emphasises the need for hope and the imagination of other possible worlds, resisting the logic of a system that generates human misery and environmental destruction.Footnote 50
This conception resonates with discussions on the centrality of life as a fundamental right within the framework of human rights. Rossane Bigliardi and Ricardo Cruz, in their article, reinforce this view by stating that the right to life, beyond mere biological existence, includes access to the minimum conditions necessary for the constitution of a healthy and dignified subjectivity, allowing human beings to fully and equitably develop their potential.Footnote 51 In this sense, environmental education aligns with human rights education, promoting a praxis oriented towards justice, equality, solidarity, and freedom.
Amorim et al. provide another key reading for environmental education by discussing the need for environmental educators to reflect on the temporal dynamics of contemporary society.Footnote 52 In their article ‘A resonance of time: the contemporary challenges of environmental education’, the authors point out that contemporary time is marked by an acceleration imposed by neoliberal dynamics, which creates challenges for the full development of humanity. According to the authors, environmental education should engage in a resonance of time, rescuing the importance of educational practices that consider the multiple and complex temporalities of human existence and life on the planet, challenging the utilitarian view of time promoted by a society focused on consumption and productivity.
Moreover, they suggest that environmental education needs to reformulate its foundations, taking into account temporal dynamics and how they affect human and environmental relationships. They propose a critical analysis of social temporalities, articulating individual and collective time, and point to the need for new ‘synchronisers’ that enable formative practices more suited to the complexity of life. This critical reflection on time is also directly connected to the inequalities faced by Brazilian workers in a context of technological hyper-acceleration.Footnote 53
New technologies, by accelerating production rhythms, impose increasing demands on workers, deepen labour precarisation, and exacerbate social inequalities. To address this challenge, environmental education must adopt a perspective that problematises the impact of temporal dynamics on labour relations and society as a whole, promoting a critique of neoliberal models that alienate individuals and fragment their temporal experiences.
Articulating environmental education with human rights is not only possible but also necessary given that both fields share fundamental principles such as human dignity, social justice, and the right to life. Bigliardi and Cruz argue that environmental education, when oriented towards human rights, fosters a civic education that promotes solidarity and cooperation, essential elements for building a more just and sustainable society.Footnote 54 Moreover, this education provides workers with a critical understanding of structural inequalities and prepares them to face the challenges imposed by a system that commodifies life and destroys nature.
By incorporating human rights principles and placing life at the centre of the debate, environmental education becomes a field of reflection capable of building, together with workers, the tools necessary to confront socio-environmental inequalities and to participate in constructing a more just society. As Loureiro, Bigliardi and Cruz, and Amorim et al. argue, this education goes beyond environmental preservation and involves transforming the social structures that perpetuate exploitation and the destruction of life in all its forms.Footnote 55 Environmental education is therefore an education for life and human rights, preparing us for the struggle for a more just and equitable world.
20.6 Conclusion
This chapter has sought to provide a critical analysis of technological transformations and their implications for labour in Brazil, particularly for the most vulnerable workers. Throughout the chapter, we have discussed how platformisation, precarisation, and algorithmic management – topics emphasised by authors such as Rafael Grohmann, Ricardo Antunes, and Ludmila Costhek Abílio – are reconfiguring labour dynamics, exacerbating inequalities, and excluding millions of people from basic conditions of dignity at work. The gig economy and the uberisation of work have emerged as symbols of this new landscape, in which workers face financial instability, a lack of legal protections, and the invisible control of digital platforms.
The chapter has also highlighted the importance of initiatives such as Fairwork, which aim to regulate platform labour and promote fairer working conditions. Regulation and the recognition of digital workers’ rights are essential in a context where technological acceleration has deepened exploitation, necessitating new forms of protection and worker participation in decisions that affect their lives.
However, the discussion is not limited to formal labour rights. By connecting labour issues with environmental education, this text broadens the reflection to encompass the right to life in its entirety, integrating social, economic, and ecological dimensions. The struggle for environmental and social justice is intimately connected to the fight for decent working conditions, as both involve the right to a full and sustainable life. Environmental education, when grounded in principles such as justice and solidarity, proposes a critical perspective that goes beyond environmental preservation, addressing the roots of the inequalities that perpetuate the exploitation of workers and the destruction of the environment.
In addition, one of the central issues discussed throughout the chapter has been the acceleration of time, a theme explored by Hartmut Rosa. Late modernity is characterised by acceleration in technological, social, and life dimensions, transforming not only labour relations but also workers’ experience of time. This acceleration, driven by the neoliberal logic of productivity, fragments temporal experiences and intensifies pressures on individuals, calling for a response that embraces more human and balanced rhythms. In this sense, environmental education can also be seen as a proposal to reclaim a different rhythm of life, one that considers the complexity of natural, social, and individual temporalities, in contrast to the alienation promoted by technological acceleration.Footnote 56
Consequently, this chapter has aimed to contribute to discussions about technological transformations in the workplace by proposing an integrated approach that links the right to decent work with the right to life and environmental justice. By recognising that the accelerated pace of contemporary life affects not only work but also human and ecological relations, it suggests that any response to the challenges of digitalisation must include a critical analysis of the foundations of environmental education. Only through a broad perspective that comprehends life in its fullness will it be possible to build alternatives that promote a more just, equitable, and sustainable future.