
Chapter 9 - Practice Makes Human: Why We Can’t Understand Black-Box Artificial Intelligence

Published online by Cambridge University Press: 14 June 2025

Brian Ball (Northeastern University - London)
Alice C. Helliwell (Northeastern University - London)
Alessandro Rossi (Northeastern University - London)

Summary

Introduction to the Black-Box Problem in Artificial Intelligence

The word ‘hallucinate’ was chosen as the Cambridge Dictionary's Word of the Year for 2023, reflecting the word's recent extension as a label for AI models that produce false information. This is telling not only of AI's increasing importance in our lives, but also of how often its behaviour is unexpected. That unexpected behaviour is related to the fact that many AI models are now designed in such a way that humans cannot know what algorithm is being implemented in the machine. This is often called Black-Box AI (BBAI hereafter). While the function telling the machine what to do may still be traceable in the early stages of training, the purpose of machine learning is for the machine itself to adapt and change its parameters so as to perform its task more efficiently – while the programmers become ‘largely ignorant of quite what is going on inside’ (1950, 458), to put it in Turing's own words. Because the values of the potentially billions of parameters making up the functions that describe how such machines work are interconnected, modifying a single value in a complex black-box function – say, when it ‘learns’ how to improve its outputs during training – can affect the values of millions of others. This means that after a few iterations beyond the initial training model, the actual function being implemented in the machine – and any variant thereof – is essentially unknowable to humans.
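To make the point about parameter entanglement concrete, here is a minimal sketch in PyTorch (ours, not the chapter's): a single gradient step on a tiny network already updates every weight at once, and the network's behaviour depends on all of them jointly. The architecture, data and learning rate are arbitrary stand-ins for the billions of parameters mentioned above.

```python
# Minimal illustration (an assumption-laden toy, not the authors' example):
# one training step on a tiny network changes every parameter at once.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(4, 8),   # stand-in for layers with billions of weights
    torch.nn.ReLU(),
    torch.nn.Linear(8, 1),
)
x, y = torch.randn(16, 4), torch.randn(16, 1)  # arbitrary training data

before = [p.detach().clone() for p in net.parameters()]
loss = torch.nn.functional.mse_loss(net(x), y)
loss.backward()                       # gradients flow through *all* parameters
with torch.no_grad():
    for p in net.parameters():
        p -= 0.1 * p.grad             # a single update touches every weight

changed = sum((b != p).sum().item() for b, p in zip(before, net.parameters()))
print(f"{changed} parameter values changed in one training step")
```

Scaled up to billions of weights over many iterations, inspecting any one value in isolation tells us essentially nothing about the algorithm the machine is implementing.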

This is different from, say, a very large decision tree, whose size makes it unsurveyable at a single glance, but where each step is easily inspected. It is also different from cases where the technology is black-boxed in practice, but not in principle, such as proprietary BBAI, where the corporations that own the machines conceal how they work in order to prevent others from stealing their technology, even though the corporations themselves may know what is going on inside.
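The contrast can also be made concrete. The following sketch (again ours, using scikit-learn on an arbitrary dataset) shows that a learned decision tree, however large it might grow, decomposes into readable threshold tests, each inspectable on its own – precisely what a trained BBAI model lacks.

```python
# Hedged illustration of the decision-tree contrast drawn above:
# every step of the learned procedure can be printed and inspected.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)                 # arbitrary example data
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Each node is a human-readable rule of the form "feature <= threshold".
print(export_text(tree, feature_names=load_iris().feature_names))
```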

Importantly, such BBAI models are not outliers: most deep-learning models rely on BBAI. This means that, in many cases, we end up relying on machines that implement an algorithm completely unknown to us – in some cases, for high-stakes decisions. Given the large-scale deployment of machines relying on BBAI in many critical domains, our lack of epistemic access to BBAI has many real-life consequences, both practical and ethical.

Type: Chapter
Information: Wittgenstein and Artificial Intelligence: Values and Governance, pp. 183–202
Publisher: Anthem Press
Print publication year: 2024
