Almost forty Brazilian cities have begun to deploy facial recognition technology (FRT) in a bid to automate the public safety, transportation, and border control sectors. Such initiatives are frequently introduced in the context of ‘Smart City’ programmes, which exist in a sort of legislative vacuum. Despite the numerous bills recently discussed in the Brazilian Parliament, there is still no legislation that addresses artificial intelligence in general or FRT use specifically. Only minimal and incomplete guidance can be found in general frameworks and sectoral legislation, such as the Brazilian General Data Protection Law (LGPD), the Brazilian Civil Rights Framework for the Internet, the Civil Code, and even the Federal Constitution. This chapter provides an overview of the current status of FRT regulation in Brazil, highlighting the existing deficiencies and risks. It discusses whether LGPD rules allowing the use of FRT for public safety, national defence, state security, investigative activities, and the repression of criminal activities are reasonable and justified.
Scholarly treatment of facial recognition technology (FRT) has focussed on human rights impacts, with frequent calls for the prohibition of the technology. While acknowledging the potentially detrimental and discriminatory uses of FRT by the state, this chapter seeks to advance discussion on what principled regulation of FRT might look like. It should be possible to prohibit or regulate unacceptable uses while retaining less hazardous ones. In this chapter, we reflect on the principled use and regulation of FRT in the public sector, with a focus on Australia and Aotearoa New Zealand. The authors draw on their experiences as researchers in this area and on their professional involvement in oversight and regulatory mechanisms in these jurisdictions and elsewhere. Both countries have seen significant growth in the use of FRT, but regulation remains patchwork. In comparison with other jurisdictions, human rights protections and avenues for individual citizens to complain and seek redress remain insufficient in Australia and New Zealand.
Protest movements are gaining momentum across the world, with Extinction Rebellion, Black Lives Matter, and strong pro-democracy protests in Chile and Hong Kong taking centre stage. At the same time, many governments are increasing their surveillance capacities in the name of protecting the public and addressing emergencies. Whether these events and political strategies relate to the war on terror or to pro-democracy or anti-racism protests, states' resort to technology and increased surveillance as tools of population control has been similar. This chapter focusses on the chilling effect that the use of facial recognition technology (FRT) in public spaces has on the right to peaceful assembly and political protest. Pointing to the absence of oversight and accountability mechanisms for government use of FRT, the chapter demonstrates that FRT has significantly strengthened state power. Attention is drawn to the crucial role of tech companies in assisting governments in public space surveillance and curtailing protests, and it is argued that hard human rights obligations should bind these companies and governments, to ensure that political movements and protests can flourish in the post-COVID-19 world.
This chapter discusses the current state of laws regulating facial recognition technology (FRT) in the United States. The stage is set for the discussion with a presentation of some of the unique aspects of regulation in the United States and of the relevant technology. The current status of FRT regulation in the United States is then discussed, including general laws (such as those that regulate the use of biometrics) and those that more specifically target FRT (such as those that prohibit the use of such technologies by law enforcement and state governments). Particular attention is given to the different regulatory institutions in the United States, including the federal and state governments and federal regulatory agencies, as well as the different treatment of governmental and private users of FRT. The chapter concludes by considering likely future developments, including potential limits of or challenges to the regulation of FRT.
State actors in Europe, in particular security authorities, are increasingly deploying biometric methods such as facial recognition for different purposes, especially in law enforcement, despite a lack of independent validation of the promised benefits to public safety and security. Although some rules such as the General Data Protection Regulation and the Law Enforcement Directive are in force, a concrete legal framework addressing the use of facial recognition technology (FRT) in Europe does not exist so far. Given the fact that FRT is processing extremely sensitive personal data, does not always work reliably, and is associated with risks of unfair discrimination, a general ban on any use of artificial intelligence for automated recognition of human features at least in publicly accessible spaces has been demanded. Against this background, the chapter adopts a fundamental rights perspective, and examines whether and to what extent a government use of FRT can be accepted under European law.
In 2015, the US Senate passed a resolution recommending the adoption of a national strategy for IoT development (IoT Resolution).1 Currently, the proposed Developing Innovation and Growing the Internet of Things Act (DIGIT) would establish a federal working group and a steering committee within the Department of Commerce.2 If the act is adopted, the working group, under the guidance of the steering committee, would be charged with evaluating and providing a report containing recommendations to Congress on multiple IoT aspects.3 These areas include identifying federal statutes and regulations that could inhibit IoT growth and impact consumer privacy and security.4
For over a century, the field of forensic science has been applying contemporary technology to the investigation of crime. The imperative to identify offenders, particularly in relation to serious offences, has meant that governments are willing to invest in new technologies to achieve this objective. Fingerprinting, first developed in the late nineteenth century to identify individuals from the unique patterns on the fingertips, led the way as one of the earliest means of identification, and is still used today in a digitised format.