1. “Now is the time of monsters”
Monsters continue to fascinate us. They exert an ambivalent attraction, sometimes frightening and often confusing, but always tempting us to find out what is behind the uncanny mixture of familiarity and utter strangeness. One century ago, Antonio Gramsci observed that “the old world is dying, and the new world struggles to be born; now is the time of monsters.” Today, we find ourselves again amidst the struggles of a new world to be born, from the increase of geopolitical tensions to the dramatic impact of climate change, from a worldwide shift toward more authoritarian regimes to enthusiasm about the capabilities of digital technologies and worries about their potential harms. Monsters take different forms. Some are big and others small, some horrible and others just weird. They all behave in strange and unfamiliar ways that we do not yet understand. They are outside the norm and defy our expectations.
AI fits the description. Most of us have no clear idea what is subsumed under the label of AI, and some historians have recast it as the history of a brand (Haigh, 2024). The dazzling rate of progress unleashes a bewildering amount and variety of applications, which rapidly diffuse into our daily lives and heighten the pressure of being caught up in the latest wave of technological change. Even the monster’s shape and content are hard to delineate and to pin down. It consists of hardware components, foremost the accelerator chips that are used to train and run AI models, connected to huge infrastructures where the chips are networked together in layers imitating the neural networks of the brain, housed in vast data centers that consume an inordinate amount of energy and water. On top of hardware and infrastructures come the design and the proliferation of AI models. Under fancy new names and with many promises, they seep into our laptops and smartphones. They populate numerous digital applications and devices, expected to yield profits for the firms that continue to pour enormous financial investments into their development. In all these computational layers and between them, fierce competition rages among a few big tech corporations, which makes it difficult to keep up with the latest developments. The most visible embodiments of the enormous concentration of economic power in their hands are the digital platforms, which exert an outsized influence on the strategic agendas of business and governments alike. Big Tech has reached the point of rivaling the power of the state, claiming that it alone knows how to keep the monster AI in check, while touting the numerous benefits it will bring to society.
As the omnipresence of the monster AI increases, so does its astuteness in engaging with us in seemingly ever more human-like ways. This leaves few places where we can hide from it and few ways to ignore it. We must learn to live with it, and the first step in doing so consists of getting to know it better. This means overcoming fear and leaving ignorance behind. Nor should we be impressed by the hype about the superpowers it allegedly has. Some of the powerful Big Tech firms warn that the next AI models might carry an “existential risk” – meaning that some future AI might wipe out humanity – even as they continue to work on them. This ought to be interpreted as what it is: a willful distraction from the real concerns and issues that need to be addressed now. Humanity is unlikely to be wiped out by AI. Alas, it is still humans who continue to kill each other and to cause the unbearable suffering we witness around the world today. AI is not an omniscient superpower, but a powerful tool created by human ingenuity, one that follows the trajectories of previous radical innovations. The crucial difference is that it deals with human cognitive faculties and skills and thus affects us deeply.
Previous technological surges also induced systemic change in the economy and society, but none of them were implicated in imitating, improving upon and attempting to surpass human cognitive abilities and skills. Generative AI, which introduced itself to the world as ChatGPT, has deliberately been designed to get to know us better, mimicking our ways of communicating with each other, even in the most intimate ways (Elliott, 2022, 2024). Cleverly exploiting our innate anthropomorphic tendencies, which make us attribute human features and agency to things or phenomena that do not possess them, the monster AI has learned to pretend to empathize with us while it manipulates and deceives. During evolution, animals as well as humans developed ingenious ways of deceiving each other as a means of survival, and one should not be surprised that AI can do the same. Getting to know it better implies realizing that we are open to ever greater manipulation and deliberate deception by agents empowered by Generative AI (Park et al., 2023).
Another well-known feature of the monster is that it “hallucinates.” It makes things up rather than admitting that it does not know. The reasons for doing so are intrinsic to its operations. It needs to be fed an immense amount of data taken from the Internet and other sources, stealing from authors (and from researchers who publish Open Access) who are neither informed nor compensated. The sheer amount of data makes it inevitable that errors creep into the processes of inference through which responses to prompts are generated. This makes GenAI models unreliable and untrustworthy, despite all efforts undertaken to remedy this inherent flaw.
In getting to know the monster better, one is soon confronted with the avowed goal of the enthusiastic promoters, owners and designers of AI in Silicon Valley. They are adamant that Artificial General Intelligence, AGI, will soon be reached, even if its definition remains unclear. Some claim to have already spotted “sparks” of AGI – the point at which AI equals, if not surpasses, human cognitive performance across the board. The pursuit of AGI has become a veritable obsession, tied in obscure, even mystical ways to the fantasy of a “transhumanism,” which promises to leave behind the body and human mortality.
In practice, the dream of an AI superintelligence pushes the digital technology sector to ignore the real potential of AI as an information technology that can assist workers in upgrading their skills and raise productivity by integrating AI in a complementary way. Instead, what lies ahead is the substitution of workers through relentless AI automation. Daron Acemoglu, the 2024 Nobel laureate in economics, criticizes Big Tech’s current business model for systematically undervaluing human talent. It exaggerates human limitations by taking as the only yardstick the measurable performance of AI, against which the whole richness of human experience and depth of knowledge is pitted and found wanting. Instead of celebrating the superiority of AI by continuously shifting the goalposts of performance, another business model is needed, based on the true potential of AI to augment and expand human capacities (Acemoglu, 2024). Alas, such a change is nowhere in sight. Huge investments continue to be poured into the digital tech sector, which, undeterred, continues with its focus on achieving AGI and on digital applications that will substitute and manipulate human beings rather than augment their abilities.
Not surprisingly, despite the amazing opportunities that AI offers, anxieties about a digital future abound. The potential is huge, but worries and uneasiness persist about the disruptive social harm AI may cause, which threatens to undermine liberal democracies further. One of the persistent concerns about the future is whether it will be dominated by the predictive algorithms of AI. We leverage AI to increase our control over the future and its uncertainty, while at the same time the performativity of AI, the power it has to make us act in ways it predicts, reduces our agency over the future (Nowotny, 2021). Current levels of complexity are beyond our understanding. Reluctantly, we must admit that the intended and unintended consequences of human action defy our intellectual, let alone our action-oriented, grasp. This leads to the legitimate question of whether we are still in control of the digital technology we have created. The idea that AI will enable us to control our lives and the challenges we face is an illusion – and a particularly potent one, nourished by all the talk about super-intelligence (Nowotny, 2024a, 2024b).
The monster AI is with us and will not go away. Humanity, it seems, has embarked on an open-ended co-evolutionary trajectory with the digital entities and machines it has created. AI will expand further, occupying more space and functions in our societies and in our lives. It will continue to disrupt and transform, while we need to learn to live and work with it. This implies putting restraints on it beyond the existing regulatory initiatives, ethical guidelines, attempts at value alignment and similar efforts. We need to find solutions to avert, contain and compensate for the social harms it causes, which include the loss of jobs and social cohesion, the exposure of the younger generation to digital addiction, the vast water and energy requirements of data centers, the massive appropriation of data without the consent or remuneration of those who own it, and much more. We need to learn how to rein in the monster by letting it grow up in a culture that embodies humanistic values.
2. “We only truly understand what we make”
The insight of Giambattista Vico, the 18th-century Italian thinker, that “one truly understands only what one can make” (verum – factum) opened the path toward a humanistic Enlightenment. Seemingly, it was vindicated by the triumphant progress of modernity that created a world made by humans in ways we call “artificial.” The origins of AI and its ongoing evolution in science and research reveal how scientific ideas and processes of inquiry are transformed by technological innovation and the impact they have on society. In the late 1960s, Herbert Simon, one of the leading figures in computer science with an impressive interdisciplinary background in economics, decision-making, organization theory, problem-solving and information processing, published a book entitled “The Sciences of the Artificial.” Simon analyzed the “architecture of complexity” behind the design of “artificial things,” juxtaposing it with the sciences that explore “natural things and phenomena.” The former were “not about how things are, but how they might be.” By this, he meant the design of things according to functions, goals and adaptations, taking place at the interface with the environment in which they were intended to act (Simon, 1969).
Despite his optimistic outlook, the prospects for AI were soon overshadowed by the first “AI winter,” a period in the US and UK when official criticism of the unfulfilled promises of AI research had a dampening effect on the enthusiasm for its potential and led to a reduction of funding. This did not stop the rise of what eventually came to be seen as the epitome of the Sciences of the Artificial: Artificial Intelligence. Unanimous agreement exists today that the term artificial “intelligence” is unfortunate, as it juxtaposes an un- or ill-defined “natural” or “human” intelligence with the moving target of something called “artificial.” The term was used for the first time in a proposal to the Rockefeller Foundation to fund a “Summer Research Project on Artificial Intelligence” in 1955, and was accepted by the participants of the famous Dartmouth conference one year later. It stuck, and AI has been with us ever since, filled with much hype and unfulfilled promises, disappointments as well as uncontested and amazing achievements.
A major breakthrough occurred in the early decades of this century, when AI research shifted from a largely logical-symbolic approach to artificial neural networks (ANNs), consisting of many layers connected through nodes that mimic, in a simplified way, the working of the brain. The approach used in Machine Learning (ML), and based on the transformer architecture in Deep Learning (DL), feeds an enormous mass of data (“tokens”) into algorithms, enabling them to learn even without supervision. Neural network models turned out to be surprisingly efficient, although the detailed steps and mechanisms by which the output is obtained are not completely known. But we still grapple with the notion of “the artificial” and are far from having reached a fruitful symbiosis with it. The vision of the Sciences of the Artificial left out essential questions, the answers to which determine the direction of where to go in the future. Who decides which functions and goals are to be designed into an AI agent? Who monitors and controls the interface when artificial things interact with their environment and the impact they create? And given the undiminished quest for AGI, who will be in control once it is reached?
Although AI research remains firmly grounded in the scientific foundations of physics, mathematics and the power of abstract symbols, mainly laid in universities, many of the most spectacular advances today take place in industrial corporate labs. The need to build and maintain the vast, costly and largely invisible infrastructure that underpins the Sciences of the Artificial led to this decisive shift in location. Hardware, depending on ever smaller electronic chips, and increasingly also software with its complex algorithmic architecture, form the infrastructure for the extraction and processing of data. They require huge computational power, which Big Tech readily supplies and companies are eager to invest in. It is from the corporate labs that “artificial things” are pushed onto markets and transform the world as we know it, from healthcare to warfare. When ChatGPT was released by its parent company, OpenAI, in November 2022, a huge experiment was conducted with millions of users within a short period of time. It was an experiment conducted without asking anyone’s consent. Generative AI landed in a society that was unprepared and continues to react with a mixture of astonishment and concern.
Huge investments, overwhelmingly from the private sector, continue to be poured into AI research, with public funding lagging far behind. The marked gap between public and private investments disadvantages public universities and research entities. With only about 10% of overall funding coming from governmental sources, the public AI research sector is far behind in access to the computational power the private sector enjoys, and the private sector can also keep secret the most advanced algorithms it develops. This has serious consequences for developing AI as a public good whose benefits should be available to all.
In science, AI continues to be deployed with great success. The impact it has on scientific research affects the “what” (the object of research and research questions) and the “how” (the conduct and organization of research). AI is used along the entire gamut of research, from hypothesis generation to the design of experiments, from automated labs to assistance in searching the scientific literature and drafting scientific publications. Early advances in diagnostics, owing to its high accuracy compared to human experts, were followed by advances in drug discovery and in the search for new materials with specific properties. Progress is achieved through the discovery of patterns that the human eye does not detect, based on the huge amounts of data on which predictive algorithms are trained. As a tool, AI lends itself well to very specific purposes in science while being part of a larger, systemic transformation that changes the entire organization of research.
3. The acceleration of scientific research: the case of AlphaFold2
The transfer of much of AI research to corporate labs highlights the difference in working conditions offered by industry compared to universities. One of the most visible effects of the use of AI in science so far is the undisputed acceleration of research it enables. The increase in speed toward obtaining results is eagerly taken up everywhere and especially coveted in industrial corporate settings. But it took more than the acceleration of many of the processes that underpin research to enable one of the most spectacular and impressive scientific-technological feats recently accomplished with the help of AI: the prediction of protein folding structures. It is worth taking a closer look at the example of AlphaFold2 and the conditions for its success in the corporate lab of Google DeepMind.
It must have been a coincidence that, independently of each other, the physics and the chemistry committees of the Royal Swedish Academy of Sciences awarded the 2024 Nobel Prizes for breakthrough discoveries in the field of AI. The Nobel Prize in physics went to John Hopfield and Geoffrey Hinton “for foundational discoveries and inventions that enable machine learning with artificial neural networks.” Half of the 2024 Nobel Prize in chemistry was awarded to David Baker “for computational protein design,” while the other half went to Demis Hassabis and John Jumper, both from Google DeepMind, “for protein structure prediction.”
The decades-long quest to solve the well-known problem of protein folding had come to a successful end at Google’s DeepMind lab, largely solved by its AlphaFold2. The story behind this feat offers an interesting glimpse into the research conditions in an industrial lab. An interdisciplinary research team took inspiration from the previous success achieved by DeepMind in beating the world’s Go champion, Lee Sedol. If algorithms could mimic the intuition of a Go master, then they could also mimic the intuition of thousands of citizens who had been involved for years in playing FoldIt, an online game where citizen players did not need to know anything about biology but could fold proteins. This previous experience was brought to bear in designing the algorithms of AlphaFold2, which profoundly changed how biologists study proteins. While it could take a Ph.D. student the entire time of his or her thesis to study a single protein, AlphaFold2 has, in principle, unveiled the structures of approximately 200 million proteins. It is difficult to imagine that something similar in scale and scope could have been carried out in academia in such a short time.
AI is not an omniscient agent, but it turned out to be a powerful predictive tool that succeeded in solving what seemed an intractable scientific problem. Since then, it has inspired the design of new proteins that cannot be found in Nature, although, like all algorithmic predictions, its output needs careful validation through biological experiments. Among the many favorable conditions present at the corporate lab that contributed significantly to this unparalleled feat was the interdisciplinary nature of the AlphaFold2 research team. It included researchers from statistics, structural biology, computational chemistry, software engineering and other fields, working closely together on solving the protein folding problem. They had access to unlimited computing power and to the training data from the Protein Data Bank, a public database to which thousands of scientists have uploaded their findings as Open Access data over many years. In addition, there was the vision of a leader, John Jumper, who was willing to take risks. Upon receiving the Nobel Prize, he modestly reflected that it might just have been the right time and place to solve a long-standing scientific problem.
Maybe the success of AlphaFold2 was just a lucky hit, but some of the conditions of working with AI in science stand out in an exemplary way. The focus is on a well-defined problem, which is accorded top priority, however elusive its solution may initially seem. Routine computation tasks that would take months can now be performed in a few days or hours. The team, consisting of dedicated researchers with different disciplinary backgrounds and skills working together, significantly accelerates the process. They do not hesitate to work with a black box, which tells you, for instance, the folded state of a protein, but not how it was arrived at. In fact, the lack of specific disciplinary expertise, in this case in protein science, may free the team members creatively (Saplakoglu, 2024). Seen from such a perspective, AlphaFold2 is the vindication of a technocratic dream: it shows how, given sufficient resources and other favorable conditions, a preset and clearly defined goal can be reached through AI-based research strategies.
This raises the question whether the case of AlphaFold2 exemplifies a different way of doing Sciences of the Artificial, where research is not primarily about understanding how Nature works but about finding solutions to problems. Or is this simply an example of good engineering? Does it foreshadow how science will be done in the future? In the age of the techno-sciences, in which science and technology became closely intertwined long ago, these questions are beside the point. There has never been one way of doing science. The “scientific method” has always been practiced in the plural and keeps evolving by inventing and incorporating new instruments and methods. Fundamental research will continue to be the basis for exploring phenomena and processes in the natural and social world, while at the same time, AI rapidly diffuses into scientific practices.
So far, the main result of using AI in science is a marked acceleration of research. This follows in the footsteps of industry and the automation of work. Automation uses technology to achieve output with minimal human input, resulting in gains in speed and efficiency, and it may also minimize human errors. In addition, it requires a reorganization of work in larger and mostly interdisciplinary teams, which are held together by a specialization of tasks. The use of AI greatly reinforces all these components. In industrial research laboratories, this mode of working is symptomatic of an ongoing “industrialization of research,” which is already spreading to academic research labs. AI acts as an accelerating factor, altering the organization and practices of research. AI-assisted technologies not only eliminate tedious routine tasks in automated labs, they also permit fast leaps forward. More importantly, however, they change the nature of interdisciplinary research cooperation.
4. Bringing back friction: two kinds of interdisciplinarity
One of the effects of using AI in science is its disregard for existing disciplinary boundaries, which gives rise to the emergence of a specific AI interdisciplinarity. Offering possible solutions for scientific problems often requires input from different disciplines. AI encourages a specific AI-based interdisciplinarity, as subtasks for certain purposes can be rapidly combined and reconfigured. The flexibility of these artificially generated combinations allows interdisciplinary connections and cooperation to be designed according to different functions, goals and adaptations. It ensures that scientific domains can be easily crossed in order to achieve what Herbert Simon meant by “how things might be.” As the fields of application expand, so does the usefulness of AI in linking different domains of knowledge and disciplinary expertise. The resulting AI interdisciplinarity supersedes the necessity for experts to patiently learn each other’s disciplinary language or the meaning of certain concepts or methods. Machine learning based on artificial neural networks, and the enormous amount of data with which these models are trained, generates a subtle affinity with complex systems that are constituted by the dynamics of continuously changing networks. It matters far less what each agent “knows” than with which other agents it is connected and how.
Being unencumbered by academic disciplinary boundaries opens the space for this AI interdisciplinarity to unfold. Notwithstanding the hype about an “AI scientist,” human researchers will not be replaced by AI in the foreseeable future. Instead, AI is vigorously pushed to be included in the research team as a “co-working” artificial member. For instance, LLMs can be given specific tasks to act as “AI scientific agents” that collaborate with each other with a view to achieving a pre-set goal. Each of them is trained in a particular discipline and programmed to engage with the others in “team meetings,” leading to a “decision” about which steps to take next and how to implement the chosen approach, all under the guidance of a human researcher (Kudiabor, 2024).
Thus, AI acts as a catalyst in the ongoing industrialization of research, leading to its acceleration, a process in which an AI-based “artificial” interdisciplinarity plays an important part. The combination of knowledge from different domains and disciplines is achieved in goal-directed ways, seemingly obliterating the arduous and often failed attempts by humans to contribute their disciplinary knowledge to a shared goal, whose definition as a problem to be solved often takes a frustratingly long time. These are the outlines of a new culture of research in the making, an industrial culture of AI interdisciplinarity. However, this does not signal “the end of research.” On the contrary, it opens many new opportunities, especially in view of the exciting new research questions that arise from a deeper engagement with AI.
Changes in the organization of science and how to do research will not spare the social sciences and humanities. They are part of the system, from competitive funding to evaluation practices. AI also offers numerous new opportunities for them, beginning with the spectacular ways in which AI assists scholars in deciphering and reconstructing ancient texts found on crumbling tablets, material otherwise beyond reach. For the social sciences, novel ways of collecting high-quality data in far greater numbers and at unprecedented scale become possible with the help of AI and advanced AI methods for processing and analysis, as the Socioscope project demonstrates. Exploring the links between local initiatives that strive for greater sustainability in the domain of food and the meso-level, which offers needed resources but also imposes regulatory constraints, the project aims to understand the mechanisms leading to transition. Although not deploying AI interdisciplinarity as such, working at a larger scale requires major changes in the organization and management of the research process (Lahlou et al., 2024).
Problems arise when artificial interdisciplinarity leaves the controlled space of a laboratory or another research site and enters the messiness of the social world. The success of AI achieved in the lab inspires the transfer of the methods and assumptions that underlie its applications to the outside world. The attraction is understandable: a technocratic dream has come true – a world without friction. Why not carry forward the high level of efficiency that has been achieved in research into the many contexts of application in society that could benefit from it? An efficiency which sometimes seems “unreasonable,” as it is based on processes and operations that are not fully understood, yet produce amazing results for everyone to see. Based on the conviction that the processes and products of digitalization are unstoppable in changing the world we live in, what could possibly go wrong by wanting to shape it in its digital image? If one believes that AGI is just around the corner, ready to take over all human tasks and replace workers (except those who perform menial physical tasks or work that is emotionally too demanding), will society not be better off when its smooth and efficient functioning can be guaranteed by a much desired interdisciplinary approach?
Much of this sounds familiar and echoes similar claims that have been at the core of technocratic imaginaries before. This time, however, their realization appears more within reach. The artificial interdisciplinarity of AI already operates in multiple apps and digital systems and forms the basis for AI-based support for decision-making in business and public institutions. Their efficiency is driven by the vision of overcoming disagreements and the slowness of human deliberation in trying to solve them. It offers a technology that effectively replaces these cumbersome and often annoying human features. AI interdisciplinarity brings together all available expert knowledge by calibrating and smoothing out existing differences. It offers attractive shortcuts and greatly enhances the speed of decision-making and action. Behind it is the assumption that the machine knows better, as its information base transcends what a single human or a social group may know. Since each of them has a somewhat different perspective, AI excels in summarizing and synthesizing and becomes the ultimate and only objective arbiter. In fact, it bears an uncanny resemblance to the old Leviathan. We transfer human agency to it, and its enhanced, powerful version invites us to trust it.
The imagination of a world run efficiently based on an artificial interdisciplinarity assumes a world without friction. This is fully in line with how computing operates. It is dedicated to reducing friction, eliminating latency and introducing ubiquity as a universal trait. Reducing friction in computing is easy, Moshe Vardi argues, but having the right amount of friction, for the right application, in the right context, remains a major challenge (Vardi, 2013). But a world without friction cannot function. Friction is a necessary feature of the physical and the social world alike. It is a powerful metaphor for one of the most basic human conditions, a Janus-faced concept which offers a fresh perspective to go beyond post-modernity in trying to make sense of the dodecaphonic voices around us (Nowotny, 1998).
Friction is one of the crucial junctions where an artificial interdisciplinarity encounters the messiness of the social world, which is full of tensions, ambivalence, conflicts and contradictions. In his Introduction to an edited volume on the necessity of friction, Nordal Åkerman cites the example of war: the cool planning is suddenly tested against a fast-moving reality where only one thing is certain – nothing will proceed precisely as decided beforehand. Friction is that which resists, is inert and recalcitrant. It keeps one from realizing one’s goals. Friction and inertia act as an impasse through which one must go to achieve what one has set out to do, but they also stop one from careening past the tangent points, as they attach one to the ground and help to sort things out. Without friction, there is no movement, as nothing gets going if it cannot push off something else. Friction provides essential contact with the world and, together with the energy of motion, it defends life (Åkerman, 1998).
Yet, the acceptance of friction and a readiness to engage with it are not strong. Åkerman, writing at the end of the last century, is critical of the digital “machine culture.” In line with a long-standing tradition of cultural critique, he notes that from the perspective of machine logic, innumerable human attributes are irrelevant and indeed disturbing. Putting a premium on the quantifiable at the expense of the intuitive, the computer acquires the force of everyday existence, while we are building systems that make us abdicate our own intentionality or turn it into a tool as part of the machines. Since then, the advances of AI have pushed us considerably further along the frictionless slope. We are much closer now to leaping from actual friction to the fiction of a frictionless world.
But it is not too late. The reason is that plenty of friction still exists in society, in undiminished form and force. Geopolitical tensions are mounting and getting dangerously close to the point of unleashing yet another war. Many societies are bitterly fragmented. With growing awareness that different groups in society face different futures, the readiness to compromise further declines. The fantasy of a frictionless world may soon be confronted with more friction than we are prepared and equipped to handle. Clearly, a change in culture is needed to recapture common ground, which is indispensable for a common future.
In the messiness of the social world, in contrast to an industrial lab, many problems are ill-defined or fall into the category of “wicked” problems. In highly fragmented societies, even reaching an agreement on the existence of a problem faces obstacles in the form of vested interests that show no willingness to become involved in any of the proposed solutions. Standpoints, interests and resources of stakeholders may clash. Compromises must be found through protracted negotiations, and trade-offs are part of the game. Unless imposed from the top down, which carries its own costs, reaching consensus takes time and remains brittle.
This is manifest even in some of the goals for AI, which most of us want to be fair, trustworthy and in alignment with human values. We largely agree that bias and discrimination should be shunned, but the hurdles to implementing such goals remain. Partly, they are of a technical nature, as the design of an algorithm calls for precise instructions, which the specific contexts in which such goals arise tend to defy. Ethics is not a checklist and remains highly context dependent. Even if we agree on a definition of “fairness” and find ways to navigate the ambivalence of language, which allows for flexibility and subtle ways of conflict management, one way to guard against errors and misjudgment is to install the possibility of appealing to human judgment. But the more automated the system becomes, the greater the likelihood that appeals will also be automated, further shrinking the space for human intervention.
Among the challenges that every new technology brings with it is how to regulate it. Overregulation stifles innovation, but if regulation is lacking, social harm is likely to occur. Despite the obvious need to put normative and practical guardrails into place and to implement them, an overarching framework for the governance of AI is lacking. The contempt shown by Big Tech for regulation and its refusal of content control, which it decries as curtailing “freedom of expression,” allow AI to run wild as long as it generates profit for the digital platforms and their addiction-generating strategies. All this renders the solution space for societal problems more complex. Torn between the seemingly simple solutions offered by populism and the effective solutions promised by AI interdisciplinarity, society drifts into a process akin to its own automation.
Yet, the complexity of the social world and its problems refuses to go away. It is at the core of the repeated calls for more interdisciplinarity within the sciences, which have been met with mixed success so far. Now, as AI unexpectedly generates an interdisciplinary space for linking otherwise separated knowledge and expertise, new challenges arise, along with new opportunities. In many ways, AI holds up a mirror to us, urging us to redefine what it means to be human. It should be equally clear that how AI will be shaped, constrained and further deployed cannot be left solely to the pressures of commercialization that emanate from the economic power of a few large corporations. We need to invent a future for AI to make it more relevant for a human-centered evolution.
5. Toward a humanistic culture of AI interdisciplinarity
The uncanny ability of AI to cross borders – be they academic disciplines, institutions or geographies – has led to the emergence of an artificial interdisciplinarity, which enables research to advance at breakneck pace and brings forth new configurations of knowledge from different scientific domains. They converge in a seemingly effortless way, flattening differences in perspectives and approaches, standardizing how results are obtained and proposing new syntheses. These advances enable big leaps forward in the acceleration of research. An AI-based interdisciplinarity offers a frictionless way to cooperate, rendering obsolete the lengthy deliberations between researchers and their efforts to learn to understand each other, as there are more efficient and faster ways to reach consensus and a tangible output of a common endeavor.
Once the algorithms enabling AI-based interdisciplinarity leave the research lab and operate in digital applications designed for the social world, they will do as instructed. They will seek to reduce friction, downplay conflicts and eliminate them through absorption in the automation of AI-based decision-making, which promises speed, efficiency and lower costs. The dream of the old scientific quest for interdisciplinarity seems to have come true, although in a different guise than intended. The old lament that “the world has problems, while the university has departments” is replaced by the upbeat slogan: “the world has problems, and AI will solve them,” quickly and efficiently. It no longer takes time and patience, nor becoming familiar with what others know and how they see the world. No new language needs to be learned. Instantaneously, the articulation of the world’s problems is fed into an AI model through all available data, making deliberation and the struggle for agreement a thing of the past. Given the optimal mixture of the best AI agents connected in the most efficient way – what can go wrong?
Not much for those who will profit from it – except that the social world does not function like a virtual automated laboratory. The power relationship between governments and digital platforms has already tilted dangerously in favor of the latter. Huge sums continue to be poured as investments into digital infrastructures and their hardware by Big Tech, and 90% of funding for AI research comes from private sources. This leaves universities and publicly funded research at a big disadvantage, constraining their access to high computing power and to the most advanced algorithms, which can be kept secret. Ensuring that the potential benefits of AI will accrue not only to those who own, control and market them, but also to citizens and society at large, becomes more remote than ever before.
While the industrial culture of AI-based interdisciplinarity strives toward a world without friction, we need to acknowledge that friction is part of what makes us human. Friction is part of the physical world, explored and exploited for practical applications by a subfield of physics and engineering called tribology. It deals with the wear and tear of interacting surfaces in relative motion to each other and is itself highly interdisciplinary. In the social world, we experience friction amongst ourselves and at different levels of societal organization. Only the virtual world seems to have excluded friction from its operations, at least according to its promises of efficiency and effectiveness. Yet, even virtuality remains tethered to a material infrastructure, which depends on the extraction of raw materials for its components, on enormous amounts of energy for its functioning and on innumerable other techno-material and organizational connections. Despite the supercilious claims of being able to surpass humans, it still needs them. Any autonomous system can only be autonomous in a relative and restricted sense. The monster AI, in its multiple manifestations, can change its appearance, but it cannot exist without us.
If friction is an essential feature of the human condition and of living together, it is an essential part of cooperation and – in contrast to an AI-based interdisciplinarity – enables a humanistic interdisciplinarity. Humanistic, in the sense that it will slow us down and force us to reconsider what we wish to accomplish together. It may ignite conflicts but also open ways for resolving them, setting free creative energy. Acknowledging the necessity of friction may save us from epistemic digital monocultures and guard against the fallacy that an automated life is a life worth living.
It may render us vigilant when observing the seemingly frictionless merger between AI platforms and governments or when a non-elected CEO undermines the constitutional separation of powers of a government that once was proud to be a liberal democracy.
Thus, bringing back friction is a good starting point. If properly harnessed, it is indispensable for creativity, right at the birthplace of digital technologies in science and its humanistic core. A humanistic culture of AI interdisciplinarity can bridge the gap between the natural and social sciences and the humanities. It was only after the Second World War that the Anglo-American term “science” became restricted to the natural sciences, thereby shattering the previous unity of Wissenschaft (wetenschap, vetenskap, les sciences, le scienze). To exclude the humanities and social sciences from Science hollows out its humanistic core. It denies that Science itself is based on humanistic values and on the value accorded to free inquiry, and that it remains deeply intertwined with the fabric of the society of which it is a part.
The hallmark of the cultural evolution of humanity that has brought us to where we are now rests upon a deeply humanistic impulse that pervades everything that scientists do in the name of Science. It begins with the ingrained curiosity to systematically explore the world and to generate new knowledge with passion and persistence. These are the driving forces that go beyond the application of existing knowledge and the wish to find “solutions” for well-defined problems. Exploring the yet unknown is the creative spark that ignites multiple and diverse trajectories of discovery and embraces the uncertainty that is inherent in basic research. Fundamental research means not knowing where it will lead and what will be found. These humanistic tenets must be rendered visible, treasured and safeguarded, especially at this critical juncture of our chaotic present.
Delegating the formation of collaborative links and interactions to an AI speeds up automation and the industrial culture of AI interdisciplinarity, but this cannot be the only goal nor the only way in which science is done. A culture of humanistic interdisciplinarity is needed to keep the creative spark alive. This spark is at the origin of scientific discovery and exploration and makes us human. It is the core of the scientific enterprise and aligns with the curiosity, passion and persistence without which science becomes a hollow and superficial enterprise, deprived of meaning. Without such humanistic and epistemic values, the pursuit of science would degenerate into a mere instrument for economic growth and would quickly exhaust itself. If efficiency and acceleration dominate everything else, no time is left for ideas to mature, nor is there space to reflect and to consider alternatives. Evolution has demonstrated the value of diversity as the prerequisite for the adaptability and robustness needed to cope with environmental changes and to avoid getting stuck in sub-optimal solutions. Future AI research as a scientific field must maintain epistemic diversity.
Every human interaction with AI is part of an ongoing co-evolutionary process, which the human species has entered with the digital machines it created. Evolution has no telos, and the outcome remains open. This does not prevent humans from striving for goals or trying to make sense of whatever they do. Digital machines follow the instructions and functions that have been designed into them, even if they can learn without human supervision and come up with rules, imaginings and patterns that humans never thought of or saw. Whether one day they will be capable of becoming “autonomous,” in the sense of no longer needing human support, design, monitoring and maintenance, remains speculation.
Deeper questions arise about the purpose and impact AI has on science and on society. Even a “responsible science” that equips AI with positive values and societal benefits cannot guarantee that the desired outcome will follow, as every technology has unintended and unexpected consequences. Science is the human activity where spaces for experimentation exist, and by taking up the challenge that AI poses, we will articulate anew what constitutes genuine human creativity and imagination. Our uniquely human abilities enable us to create inner representations of the world, based on our experience and our imagination. The future is uncertain, but we can imagine how it might be. We can put ourselves into the place of others – the famous theory of mind – and imagine what others might see or feel.
Nothing like this is – yet – possible for an AI to achieve, and most probably never will be. AI’s potential for dividing us is in full, painful view. It continues to contribute to and deepen the fissures that run through our societies, leading to polarization and fragmentation. The time has come to more systematically exploit its equally real potential to bring us together. It allows us to build bridges and create links between knowledge from different cultural and social backgrounds, and to bring to light the often invisible interfaces that crisscross different parts of society. It combines and reconfigures information that pulsates through the many connecting networks and continuously changes what is known or not yet known. Instead of the shallow and mediocre feed we get from social media today, other combinations and an intellectually nourishing diet are possible. AI is already a collaborative tool, but it is heavily tilted in favor of prioritizing efficiency and frictionless operations, as this is what AI-based interdisciplinarity is designed to excel in right now. As with every tool, there are functions and places where it should operate exactly as intended. But if AI is to become a “companion” or “co-worker,” it needs to meet certain, explicitly humanistic criteria and standards, pointing the way toward a genuine culture of AI interdisciplinarity.
The world we live in can no longer be neatly separated into distinct knowledge domains. It remains tightly interconnected with the “complexity of the whole.” We can use AI to pierce into this complexity, knowing that complex systems have emergent properties that are unpredictable. We can use predictive algorithms to see further ahead, but we should never forget that these predictions are based on data from the past and on probabilities. Ultimately, a humanistic culture of AI interdisciplinarity will enable us to better cope with uncertainty (Nowotny, 2015). It will strengthen confidence in a future that remains open, as long as we hold on to what makes us human. This entails defining it anew in the space created by AI interdisciplinarity. Learning to live with the monster AI means “cultivating” it, so that it becomes part of our culture, a humanistic one. Instead of pitting an industrial interdisciplinarity against a humanistic interdisciplinarity, a humanistic AI culture must emerge, with humanistic values guiding their fruitful and pluralistic exchange. The monster AI opens a vast space of possibilities. It is up to us to explore it. Bringing back friction may be the flashlight to help us find the way.
Funding statement
The author received no financial support for the research, authorship, and/or publication of this article.
Competing interests
The author declares none.
Helga Nowotny is Professor emerita of STS, Science and Technology Studies, at ETH Zurich, Switzerland, and founding member and former President of the European Research Council. She has held teaching and research positions at universities and research institutions in several European countries and continues to be actively engaged in research and innovation policy. Among others, she is a member of the Board of Trustees of the Falling Walls Foundation, Berlin; member of the Council IEA de Paris; member of the Austrian Council for Sciences, Technology, and Innovation and chair of the Scientific Advisory Board of the Complexity Science Hub Vienna. Currently, she is co-principal investigator of the research project “The Socioscope: A Pioneering Methodology for Understanding Societal Transitions,” funded by the NOMIS Foundation. She has received numerous awards and honorary doctorates, among others from the University of Oxford and the Weizmann Institute of Science, Rehovot, Israel.