In recent years, the use of AI has skyrocketed. The introduction of widely available generative AI, such as ChatGPT, has reinvigorated concerns about harms to users. Yet so far, government bodies and the scholarly literature have failed to settle on a governance structure that minimizes the risks associated with AI and big data. Despite the recent consensus among tech companies and governments that AI needs to be regulated, there has been no agreement on what a framework for functional AI governance should look like. This volume assesses the role of law in governing AI applications in society. While exploring the intersection of law and technology, it argues that getting the mix of AI governance structures right, both inside and outside the law, while balancing the importance of innovation against risks to human dignity and democratic values, is one of the most important legal-social determinations of our time.