1. Introduction
I write this contribution from a computer science department, where my title is “Professor of Interdisciplinary Design.” For 12 years before that, I worked as a software developer in engineering companies, where my title was “Artificial Intelligence [AI] Engineer.” Since becoming a professional academic, my work has in many ways been a campaign to attain and establish an interdisciplinary perspective on professional work. Interdisciplinary Design research, and Artificial Intelligence engineering, are terms that I am happy to explain and defend on intellectual grounds, and often do. But in this more reflective piece, I draw on what I have learned while exploring AI in other cultures. Importantly, I start by addressing the question of whakapapa – a Māori language term from my native Aotearoa New Zealand. Whakapapa is central to Māori epistemology and ontology. It asks: Where did this entity come from? Who are its ancestors? What land does it belong to? I speak of AI as a way of making, a kind of craft, with specific practices developed in specific places. This story draws on the cultures of two cities called Cambridge – in England and in Massachusetts – but I start on the other side of the world.
My early education and professional experiences were typical of 20th-century New Zealand, a late, and remote, frontier of global colonialism, where practicality was encouraged, and diversity of belief tolerated. This was a “farthest promised land” (Arnold, 1981) largely settled by political outcasts and religious non-conformists, in which the rifle and steam-era economies from whaling to “slash and burn” farming had reached an uneasy 19th century accommodation with the indigenous Māori whose entitlements and practices were constitutionally recognised in the Treaty of Waitangi. Two cultures were always at hand, in an ecology of syncretism and sectarianism. In my own youth, following a narrow escape from a Taiwanese cult after leaving home to start my engineering studies, reflection on those experiences led to a second undergraduate major in philosophy and comparative religion, and then to a research degree in AI, once I discovered a field of enquiry where engineering and philosophy might co-exist.
2. AI engineering 40 years ago
While late 20th-century Aotearoa unambiguously carried the legacies of a British colony, AI might be better described as an American colonial project. My research supervisor Peter Andreae had returned to New Zealand after 12 years at the MIT AI Lab, carrying a suitcase of ARPA-funded technical memos and a magnetic tape of Richard Stallman’s GNU EMACS free software distribution. Although Peter’s father John was a provincial pioneer in machine learning and robotics (Andreae, 1977), this was a time when the global business of advanced computing research was driven largely by Cold War investments in American universities (McCorduck, 2004, p. 131). I understood AI as reflecting American ideals, with the Japanese 5th Generation, British Alvey, and European ESPRIT computing programmes all hoping to regain the initiative from the USA, and with the American libertarians inspired by Stallman offering technical freedom underpinned by military-industrial investment.
For my part, fascinated on one hand by the engineering opportunities arising from the economic boom of the microprocessor era (Evans, 1979), and on the other by perennial but less marketable questions of meaning and faith, I had inadvertently become the first AI engineer in a country that had no clear use for one. My wife’s ancestry visa offered opportunities in the UK, where a small number of British companies did carry out AI contracts, although often as local branches of foreign corporations or (in my case) working on defence projects in collaboration with teams in the USA. I was employed as an AI Engineer, first at Cambridge Consultants Limited (CCL – a branch of the American technology consultancy Arthur D Little (Dale, 1981)), and then at the Hitachi Europe Advanced Software Centre, but the medieval university that happened also to be located in Cambridge had little impact on the work we did. It was engineering pragmatism, rather than pure curiosity, that led me to a PhD in cognitive neuroscience, after it became clear that the main obstacle to the success of our AI products was lack of understanding of our users and customers (Blackwell, 1996), rather than any deficiency in technical innovation.
There is a distinctive and recognisable Kiwi pragmatism that motivates an approach to disciplinary traditions as tools in a box, or perhaps ingredients from the stalls of a market bazaar (Raymond, 1999). The technician/engineer addresses theoretical problems via the practical arts, as seen in the contributions of eminent New Zealand scientists: Rutherford’s experiments (Kapitza, 1966), Phillips’ hydraulic computer for economic modelling (Bissell, 2007), or Wilkins’ work on the decoding of DNA structure (Friedberg, 2004). Although fascinated by both science and philosophy, my own more mundane work as an AI Engineer was likewise pragmatic. CCL applied the latest technology trends, but was essentially an engineering consultancy, where my day-to-day responsibilities were little different from my earlier work at New Zealand’s first software company, or that of my father in the Wellington engineering consultancy he managed. The job of a consultant engineer is to make things work, bringing together specialist teams as necessary. At CCL, these included designers, ergonomists, mathematicians and marketers, each with their own box of theoretical and methodological tools, delivering solutions such as the real-time AI expert system that I designed 30 years ago (I believe still in use) for responding to emergencies on London Underground.
What does it mean to be an “artificial intelligence engineer,” drawing so broadly on theoretical concepts and methodological pragmatism to construct useful machinery? The purpose of this paper is to draw a contrast between the making of AI, as a practical craft drawing on diverse intellectual tools, and two other ways of framing two cultures – one originating from Cambridge, England, and the other from Cambridge, Massachusetts.
3. Two cultures in Cambridge
The first of these frames, most often associated with the phrase Two Cultures (James, 2016), is the invention of C.P. Snow, whose valuable government service during the Second World War involved maintaining a directory of scientists, in Cambridge and elsewhere, to be called on for specialist advice on military matters (Cole, 2016). After the war, Snow’s status not only as a gatekeeper of science, but as a novelist who wrote entertainingly of political intrigue in Cambridge colleges, made him a celebrated public intellectual. During the post-war period when the USA and the Soviet Union were establishing their technical leadership in the space race, nuclear armaments and computing, Snow’s Rede Lecture on the Two Cultures argued that the problem with British public life was insufficient appreciation of the value of science. Snow was more successful as a novelist than he had been as a scientist, but the main problem with his argument was that it conflated natural science with the sciences of the artificial, implicitly claiming that numerical aptitude and familiarity with laboratory apparatus would unproblematically deliver the benefits of practical engineering (Gosling, 2016). In reality, there is a huge gulf between the skills needed to achieve a laboratory demonstration, and the challenges of applying, replicating, deploying and integrating any conceptual advance into useful products or infrastructure.
A celebrated satire of the Two Cultures (although more popular as an historical fantasy thriller than as a commentary on research funding and science policy) is Susanna Clarke’s award-winning novel Jonathan Strange & Mr Norrell, which she wrote while working in Cambridge, following her own education at Oxford. Stuffed with fictional footnotes and citations, the novel reports the struggle between the respectable academic work of theoretical magicians and the upstart practical magician Mr Norrell, who (as with ARPA-funded AI) secures government funding to defeat Napoleon by creating magical weapons (Byrne, 2009). Clarke’s plot may have been more influenced by the Manhattan Project than the then-nascent world of AI, but today’s AI-boosters are not shy to use the word “magic” when claiming the potential of Artificial General Intelligence. An example was when broligarch Elon Musk told British Prime Minister Rishi Sunak, with a straight face, that AI was a “magic genie” that will end the need for work (Patrick, 2023). (Coincidentally, the letters AGI are literally at the centre of the word magic, and could perhaps form a useful acronym when combined with “manipulative” or “charlatan”).
4. Two cultures in AI
A little later than Snow’s lecture, and around the same time that I was exercising my technical curiosity in the style of MIT, an alternative framing of two cultures was being developed by MIT AI Lab student Philip Agre. As later recounted in his Lessons Learned in Trying to Reform AI (Agre, 1997), Agre had become concerned that much of the day-to-day work of AI researchers consisted of writing computer code in which the names of functions and variables were used to support unjustifiable philosophical claims in the resulting research publications. In compiled computer programs, the identifier names used in the source code make no difference to the behaviour of the software, meaning that AI researchers could freely attach labels to define their program as “thinking” or “learning” or “reasoning,” no matter how far from human experience the construction might be. Whereas C.P. Snow had complained that British intellectual elites had too little respect for scientists, Agre’s concern was that the more technologically oriented founders of AI – anticipating the governing elites of our 21st century – had not properly characterised questions from the humanities that they claimed to address. He proposed a critical technical practice, in which those building AI systems should be better educated in relation to those questions (Agre, 1997).
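To make Agre’s point concrete, consider a minimal sketch of my own devising (not an example from his paper, and with invented names): two functions that behave identically, where only the identifiers differ, yet the second invites a far grander description of the system.

```python
# Illustrative sketch only: the identifiers are invented for this example.
# Both functions do exactly the same thing; naming one of them "think"
# changes nothing about the program's behaviour, only the story told about it.

def lookup(table: dict, key: str):
    return table.get(key)

def think(knowledge_base: dict, concept: str):
    return knowledge_base.get(concept)

beliefs = {"sky": "blue"}
assert lookup(beliefs, "sky") == think(beliefs, "sky")  # identical behaviour
```

A compiler or interpreter treats the two definitions identically; any claim that the second “reasons about concepts” rests entirely on the label.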
With one foot in each of these centuries, having been an AI engineer at the same time Agre was working at MIT, and having followed his journey toward the social sciences and critical analysis since then, where do I stand? Uncomfortably. As described in the coming sections, I find myself straddling, not only two centuries and two hemispheres, but also two cultures. Having campaigned for greater recognition of technology as art, and also of the arts, humanities and social sciences as essential in technical design, I have found that the bridge between these continents offers a vantage point from which to watch the development in Cambridge of digital humanities, technology ethics, and the adoption of many other international trends. While ancient institutions may not be the most agile innovators, they are still able to creak toward investor demand. When not distracted by the creaking and groaning, what lessons might we hear for new cultural pioneers of AI?
5. Descriptive specification versus reactive imitation
The first dichotomy I draw from this long perspective is between two technical cultures within AI itself. Described in the 1980s as “symbolic” versus “connectionist,” in the 2020s this distinction is now seen as teleologically inevitable, a discipline supposedly evolving from the quaintly-named “good old-fashioned AI” to the success of so-called “neural” statistical machine learning. At the time of writing, the pendulum is still swinging, with many computer scientists excited by recent developments in “neurosymbolic,” lately called “reasoning,” models. After 40 years in the industry, I am relatively unimpressed by claims on one side or the other that the eternal grail is about to be discovered. A more interesting question is to contrast the dynamics of an engineering strategy that is primarily descriptive, versus one that is primarily reactive. (In computer science, these “cultures of programming” are sometimes called neat vs scruffy (Green, 1990), while in psychology the descriptive vs reactive orientation might be described as Freudian vs Skinnerian). The descriptive strategy emphasises critical analysis, formal notation, and organisational discipline, for example, in the 1980s boom in expert systems, and the central focus of AI research on representation as classically described by Bobrow and Seely Brown (1975). The reactive strategy engages directly with the buzzing, blooming bazaar of data stream inputs, arguing that intelligence can emerge naturally through observation and imitation, not requiring explicit symbolic representation, as argued in an early formulation by Brooks (1991), and directly advocated as a methodological agenda by Leo Breiman in his essay “Statistical modeling: The two cultures” (2001).
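The contrast can be made concrete with a toy example (a sketch of my own, with invented names and data, not drawn from any of the systems cited above): a descriptive spam filter whose rules are written down in advance by an analyst, beside a reactive one that simply counts the words in whatever labelled examples it happens to have observed.

```python
from collections import Counter

# Descriptive strategy: the knowledge is specified explicitly, in advance.
def rule_based_label(message: str) -> str:
    suspicious_terms = {"prize", "winner", "urgent"}
    return "spam" if set(message.lower().split()) & suspicious_terms else "ok"

# Reactive strategy: behaviour is induced from observed, labelled examples.
def train_word_counts(examples):
    counts = {"spam": Counter(), "ok": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def learned_label(message: str, counts) -> str:
    # Score each label by how often its training texts used these words.
    scores = {label: sum(c[w] for w in message.lower().split())
              for label, c in counts.items()}
    return max(scores, key=scores.get)

examples = [("you are a winner claim your prize", "spam"),
            ("meeting moved to friday", "ok")]
counts = train_word_counts(examples)
print(rule_based_label("urgent prize waiting"))        # the rule says: spam
print(learned_label("urgent prize waiting", counts))   # the data says: spam
```

The first encodes its author’s description of the world, and fails only where that description did not anticipate; the second imitates whatever its examples happen to contain, including their biases and gaps.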
Critical currents in AI have swung between these disciplinary poles, with Brooks becoming a mentor for Philip Agre’s investigations of situated action in videogames at MIT (Agre and Chapman, 1987), and Seely Brown sponsoring ethnomethodological critique at Xerox PARC (Blackwell et al., 2017; Suchman, 2013). We might ask whose interests are served by the adoption of one or another of these views. While purely reactive machine learning systems may appear democratically objective (perhaps economic instruments of Adam Smith’s invisible hand), reflecting a free speech market of social media rationality that liberates the masses from the symbolic descriptions constructed by unaccountable elites, it is becoming apparent that the new statistical business models being created are in fact extractive, concentrating authority and wealth in the hands of those who own the data infrastructure (Couldry and Mejias, 2019; Zuboff, 2019). Simultaneously, standardised systems of description can result in epistemic injustice (Bidwell, 2021; Fricker, 2007), including metaphorical deforestation of the diversity of indigenous and other knowledges, spreading a global monoculture that appears to be fragile, rather than resilient, when threatened by populism and oligarchy (e.g. Kerr, 2025).
Just as Philip Agre concluded from his experiences decades ago, it seems that serious reflection, and engagement between these two cultures, is essential. While the extractive business model of machine learning AI currently has the upper hand, the technocratic impulse to hegemonic specification remains, for example in the perennial attempts to realise Umberto Eco’s perfect language (1997) – whether the knowledge representation languages of the 20th century, or the “Wikilambda” project recently sponsored under the umbrella of the Wikimedia Foundation (Falk, 2025). My own attempts to mitigate those harms have involved advocating more egalitarian programmes of commonsense reasoning (Blackwell, 1989), proposals for indigenous and decolonial approaches to AI (Bidwell et al., 2022; Lythberg et al., 2025; Blackwell et al., 2021), and accessible interactive notations for end-user development as an alternative to AI (Blackwell, 2024). Perhaps one of the more dangerous recent trends is that large language models such as ChatGPT, whose architecture is essentially reactive and at a technical level purely imitative, are presented as offering descriptive specification through their imitation of human language in the form of apparently symbolic text. Buzzwords such as “agentic AI,” in which LLM prompts are interpreted as program instructions, could easily cede even greater power to totalitarians, while hiding behind a façade of statistical consensus.
6. The interdisciplinary logics of making and critique
The second dichotomy that I draw regarding the interdisciplinary cultures of AI comes from a more personal journey of lessons learned – also while trying, in my own way, to reform AI. Agre’s prescription for a critical technical practice has been widely adopted, especially in the large computer science sub-field of human–computer interaction (HCI) that provides the academic foundations for user-interface innovation and user experience design (Dourish et al., 2004). While the field is long-established, and closely engaged with the decades of critique I have already described, HCI researchers are perennially nervous that, with no single theory of quantifiable human behaviour, they will be unable to prevail against the onslaughts of STEM-based technocentrism. As in other areas of public life in the 2020s, the diversity and liberalism of HCI is sometimes taken to be a weakness. Successive “waves” of HCI (Bødker, 2015) have developed from human factors and ergonomics, through a turn to the social, cultural and critical theories of the digital, and many forms of reflexivity, from contextualised understanding of design to the sociology of scientific knowledge, postcolonialism, conceptual art, Marxist critique, feminist new materialism and many others (Bardzell et al., 2018). Critics newly arrived in the field find it difficult to make contributions not already anticipated, while positivist technocentrists struggle to acknowledge any validity in the core tenets of the field. My own counsel, that teachers of HCI should stand their ground as resident critics and irritants to the discipline (Blackwell, 2015), is hardly comforting at a time when being on the wrong side of the culture wars might easily lose you your job.
I find Agre’s manifesto, aspiring to a critical technical practice, an especially helpful formulation, perhaps in part because he and I come from the same generation, undoubtedly with many shared experiences, ambitions and inclinations. We may not have reformed AI research, but the world today is being reformed by the extractive business models now marketed as “AI” technology. While every invocation of the phrase “two cultures” has been associated with shifting sands of status, power and influence, the stakes appear higher than ever. Agre rightly charged dilettantish engineers (probably including myself at that time) with ignorance of fundamental philosophical and social theories. He also noted that philosophers could make little contribution to AI unless they were actually building it. In principle, both deficiencies can be addressed through collaborative engagement, although the “service-subordination” mode identified by Barry et al. (2008) as one of the “logics” of interdisciplinarity could easily be unreflective or even extractive (Thrift, 2006).
For me, the cultures underlying these alternative understandings of knowledge technology represent the most fundamental challenge to universities of the future, perhaps a challenge that has always been inherent in these institutions. But rather than arts versus sciences, my experience has been that there is a far more fundamental divide between the two cultures of makers and critics. Every working academic has some craft and practice in association with their theory. Is yours a practice of making, or a practice of critique? Is it feasible that one person could be both the maker and the critic, especially in a recently emergent and rapidly evolving field such as AI? Many aspire to that status (perhaps including myself), and the term “transdisciplinary” (Nicolescu, 2002) is often invoked by those who hope, perhaps unrealistically, to be simultaneously both maker and critic (Pohl, 2011).
This is the challenge invoked by Agre’s use of the term practice, and one that Snow’s proto-culture war scarcely acknowledged. While the 19th and early 20th century arts and crafts movement sought a reconciliation of the head and the hand, as a corrective to the uniformity of industrial manufacture, 20th century modernism followed by digitisation paved the way for a globalised knowledge economy, with its attendant epistemological injustices on even larger scales. Abstracted knowledge is more easily accumulated and traded, encouraging economic policies that (for example) convert skill-based institutions like the British polytechnics into more conventionally text-generating universities, ranked by evaluation of their research outputs, and leaving “practice-led” research to ask for special treatment in order to be recognised as valid enquiry (Rust et al., 2007).
7. The craft practice of AI
Technology – tékhnē plus lógos – has always had this tension at its heart. Herbert Simon’s special pleading for the “sciences of the artificial” (1969) hardly resolved the tension between making and critique, but at least recognised the challenges of trying to draw the two together, for example in his own post at Carnegie Mellon University, where Robert Doherty’s Carnegie Plan had merged two institutions in an attempt to bridge the gulf between the cultures of engineering and arts (Laughlin and Garrett, 2005). Simon was writing during (and closely associated with) the birth of AI, when a certain idealism still prevailed with regard to the benefits of computational giant brains (Bagrit, 1965; Berkeley, 1949; Bowden, 1953). But despite the analytic clarity promised by the idealised abstractions of software (Dijkstra, 1982), experiences of making this stuff inevitably suffer from Pickering’s (2010) “mangle of practice,” laboratory craft prevailing over theoretical elegance (Blackwell, 2018), and generations of AI developers celebrated more as artisanal “hackers” than philosophers (Levy, 1984).
The current surge of enthusiasm and anxiety associated with generative AI seems to demand a return to some of these fundamental questions. Is statistically generated text fundamentally descriptive, prescriptive, imitative, or reactive? Is there a fundamental problem in assuming that the future can be specified by statistically transforming observations of the past? When students with supercharged word processors can extrude the text of an essay in 10 minutes, we might ask to what extent the craft of writing will be transformed. What kind of craft involves carving with a chainsaw? What thoughts can be shaped amid the flying chips and sawdust? When should one pause for reflection or critique? Elon Musk’s recent performance brandishing a chainsaw-as-critic (BBC, 21 February 2025) seems as frightening as it is crude. Powerful tools are of course attractive to gardeners, who enjoy making rapid progress in tidying up the weeds of recalcitrant diversity. But when nurturing intellectual diversity, we should ask what else might be tidied away, during the large-scale statistical modelling, summarisation and imitation of human text. Furthermore, if text generation is to become both the tool and the product, does this mean that large language models must become self-critiquing works of art? These artefacts are representations of what it means to be human, just as much as painting or sculpture. All of these works can be made self-referential, perhaps ironic or allusive, but no painting or sculpture has written its own criticism.
Whatever the disciplinary tools being used, whatever data are collated and plans notated, surely the two interdisciplinary cultures of making and critique will remain? The professional academy, although taking a variety of institutional forms across the centuries and within different cultures, has been humankind’s organ for reflection. As each generation sets out to describe and construct their own new world, the abstract vocabulary and tools of critique shape and guide in relation to the past. As Polynesian navigators say, “The first and most important lesson of celestial navigation is that you must always know where your island is. In other words, to know where you are going, you have to know where you have come from.” (Chitham et al., 2019, p. 32).
Funding statement
None declared.
Competing interests
None declared.
 
 