
The materiality of AI’s immateriality: Can unsustainable technologies promote sustainable management?

Published online by Cambridge University Press:  30 May 2025

Sérgio Nunes
Affiliation:
CIAEGT – Centro de Investigação Aplicada em Economia e Gestão do Território, IPT, Tomar, Portugal; DINÂMIA'CET – Instituto Universitário de Lisboa (IUL), Portugal; CIRIUS-ISEG, Universidade de Lisboa, Portugal
Vanessa Ratten*
Affiliation:
Department of Management and Marketing, La Trobe Business School, La Trobe University, Melbourne, VIC, Australia
Corresponding author: Vanessa Ratten; Email: v.ratten@latrobe.edu.au

Abstract

Life in society is a function of the tension between the tangible and the intangible, although human beings have a natural tendency to give more attention and meaning to what their senses directly perceive: if there is smoke, we believe there is fire. AI, understood in a broad sense, is becoming the new electricity or even the new oxygen, a technology deified in such a disruptive, omnipotent, and omnipresent way that it is expected to revolutionize all dimensions of society, from work, mobility, teaching, health, and business to the very nature of life. However, the deep and structurally unsustainable material dimension of this technology has received far less attention; without smoke, no one looks for the origin of the fire, and it spreads at a speed never seen before. The objective of this article is to identify the main layers of AI's materiality, questioning its apparently benevolent relationship with management.

Type
Editorial
Copyright
© The Author(s), 2025. Published by Cambridge University Press in association with Australian and New Zealand Academy of Management.

The transformative potential of AI: our sky is no longer the limit

In today's techno-digital capitalism, more is always better, and the faster it is possible, the better. This is the behaviour we observe around us: more production, more profits, more mobility, more energy, more data, more … of everything. Innovations need new innovations to work. Climate change seems to be solved with more economic growth, and now the pinnacle of the ongoing technological revolution, digitalization and AI (which is neither artificial nor intelligent; Crawford, 2021; Pemberton, 2024), presents a set of problems that can only be solved with more AI, powered by nuclear energy (META, 2024), because there is no time to wait for renewables. This is the current mantra: we must increase the power of AI because its power will help us solve the problems that existed before its development (poverty, inequalities, economic growth, and climate change) and that have been deepened by it.

Recently, the latest developments in quantum computing have shown the 'quantum face' of these economies of scale. Google's quantum computing department has unveiled its new 'Willow' chip, which took just minutes to complete a task that would have taken Frontier, one of the fastest supercomputers in the world, 10 quadrillion years to complete. 'Researchers have shown that adding more "qubits" to a quantum computer can make it more resilient. (…) They transformed a group of physical qubits into a single logical qubit, and then showed that as they added more physical qubits to the group, the logical qubit's error rate dropped sharply' (Brubaker, 2024). The corollary is honestly put by the pioneers themselves: 'Even so, many challenges remain ahead of us. Although we might in principle achieve low logical error rates by scaling up our current processors, it would be resource intensive in practice' (Google Quantum AI and Collaborators, 2024: 5).
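As a purely illustrative aside, the standard textbook approximation for surface-code error correction (a generic formula, not one taken from the cited paper) captures the trade-off described above: the logical error rate $\varepsilon_L$ of a code of distance $d$ falls exponentially once the physical error rate $p$ sits below a threshold $p_{\mathrm{th}}$,

$$\varepsilon_L \;\approx\; A\left(\frac{p}{p_{\mathrm{th}}}\right)^{\lfloor (d+1)/2 \rfloor},$$

so each step up in code distance multiplies the logical error rate by a roughly constant suppression factor, while the number of physical qubits required grows roughly quadratically (a distance-$d$ surface code needs on the order of $2d^{2}$ of them). This is precisely the 'resource intensive in practice' trade-off the authors acknowledge.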

The world's leading politicians and 'tech-gurus' are no longer thinking about the sustainability of our planet, regardless of the concrete criteria used. We have quickly gone from being 'Prisoners of Geography' to 'The Power of Geography' and, inevitably, to 'The Future of Geography' (Marshall, 2015, 2021, 2023). The thought can be summarized as follows: planet Earth is exploited, exhausted and lost … the next planet, please.

What started out as just another available technology is becoming the structural driver of the expansion of current economic, social, and institutional boundaries. Toffler (1970) posed the question clearly more than 50 years ago: society (today, certainly, the planet) has a limit to the quantity (and quality) of technological acceleration that it can incorporate without compromising its sanity. The possibilities of AI as outlined in the literature are too numerous to describe here, but its impacts cut across the way we will live, work and see ourselves, and the relationships we will establish with others (human or otherwise) (Brockenbrough, 2024; Sigman & Bilinkis, 2023). However, one area that has not yet been explored, or at least announced with the same mastery, is its material impact on the planet (Crawford, 2021). Let us recall some facts that a large part of the developed world is often able to forget, or never even notice, but which are part of the daily lives of billions of people who continue to have to survive in the material world.

The material nature of the world we live in

The evolution of life on Earth has interesting constraints: the distortion of space-time, the movement of particles and, without exception, a model of extraction, production and consumption of energy (EPCE) (Nunes & Cooke, 2021) on a finite planet with apparently infinite needs, constantly fed by 'trends' and 'influences'. The result of these three constraints is humanity, our contradictions and their consequences. René Girard's 'mimetic desires' explain our aversion to 'stopping in time'. Human beings, once their basic needs have been satisfied, do not know what they want. They respond to stimuli and learn by imitation, of which trial and error is a good example of a particular case. On the other hand, complex systems, of which life is an example, generally go through three phases, with an increasing tendency towards depth and diversity of options: minimum thresholds (existence and viability); cumulative effects (specialization and sustainability); and phase transition (competitiveness and innovation).

The territorial integration of these principles has materialized daily in the concept, socially transformed into an objective, of quality of life, which directly depends on proximity to, access to, and enjoyment of goods and services provided by companies and public administrations. From this perspective, territory is the overlapping of multiple spatial dimensions: a physical space (geographical scale), an interaction space (players, networks, and interaction dynamics), and a political-institutional space (Nunes & Sousa, 2019). We must then consider the dynamics related to the evolution of the integration of these three dimensions in each territory, and the related tensions. In this sense, territory is the result of complex interdependencies between the size of the market, the dynamics of interactions and a political-institutional framework favourable to economic and social achievements.

It is easy to derive logically from this analysis that there is no quality of life detached from a specific territory. It is not possible to argue that improvements in quality of life can be achieved outside of the interdependencies between housing, employment, education, health, justice, leisure and mobility, as well as the infrastructures associated with them. The quality of a human life is, in terms of existence and survival, materially and territorially determined. In this sense, it is also not possible to undervalue the territorial dimension that supports the current EPCE model: the techno-digital capitalism that supports the evolution of today's quality of life.

Additionally, the EPCE model that supports today's quality of life is bringing the planet closer to all its sustainability frontiers like no other in the past. For a quick, cold and raw view: according to Conway (2023), life in society as we know it is directly made up of six materials (lithium, iron, salt, copper, sand, and oil), without which it would not be possible to build the entire material world in which we live, supported by the four pillars of modern civilization: cement, steel, plastics, and ammonia (Smil, 2022). Of course, it would also not be possible for us to type on a computer or a smartphone, or to use any dimension of AI, without bauxite, cobalt, silver, gold, lead, platinum, nickel, uranium, zinc and, of course, the rare earths (lanthanum, neodymium, promethium, europium, dysprosium, yttrium and scandium, among others), which are produced, essentially, by the decomposition of uranium (Zeihan, 2022). Making them usable requires the dangerous process of dissolving them in vast quantities of sulfuric and nitric acid, which produces toxic waste deposits that the world prefers to pretend do not exist (Hird, 2013). The AI lifecycle does not begin with algorithms, deep learning, or neural networks, but with natural resources, energy, human labour, public and private infrastructure, logistics, data, and classifications (Crawford, 2021); and all these dimensions, without exception, depend on and produce material/territorial impacts. Only then can we ask ChatGPT, or any of its clones, what the computational and planetary costs of its 'existence' are.

The (hidden) materiality of AI: how building AI is deepening the unsustainability of the material world

When ICT made it possible to begin to dissociate individuals from their actions in terms of territorial embeddedness, the narrative quickly began to develop that territory was losing its explanatory value for human actions. Dematerialization, ICT, digitalization, and AI would lead to the immateriality of life in society. Yet, just like the untimely announced death of Samuel Langhorne Clemens, reports of the death of the territory's relevance, and of its materiality, are clearly exaggerated.

The procession that announced the immateriality of the territory was supported by several intellectual pillars. Some were personal, such as the 'end of history' (Fukuyama, 1992) or the 'death of distance' (Cairncross, 1997). Others were of a collective nature, institutionally assuming and propagating a myth that has never had empirical support, such as the Washington Consensus or the Treaty of Lisbon. However, blindness has never been so clear and so dangerous as it is today with AI. Most of the world's population, alienated by the virtualities of AI, does not realize that the most immaterial (digital, dematerialized) technology humanity has ever created has the greatest material impacts ever produced by the human species on itself and on the territories on which we depend for the survival of our lives. Let us briefly identify the main material pillars of AI. To get to the materiality of AI (and its consequences), we must lift the immaterial veil (algorithms, mathematical models, and the metaphors we use every day: wireless, apps, cloud, digital transition …) and peek into AI's burrow. We must have the courage of Alice.

The first layer is mineralogical. There is no advanced, large-scale computing without a set of minerals that took millions of years to accumulate on the planet. There is no AI without semiconductors, computers, smartphones, the Internet, servers, data centres, rechargeable batteries, digital assistants, large language models (LLMs), and adjacent technologies with short useful lives and no good ideas about how to treat the waste they accumulate. This entire computational infrastructure is based, first, on the minerals that allow its construction and constant replacement. On a scale of rarity, of value for maintaining current computational capacity, and of worsening global conflicts and environmental and social problems, we have lithium, nickel, copper, and tin, as well as the rare-earth minerals already mentioned above. These materials are irreplaceable for electronic, optical, and magnetic uses and are essential for various systems, such as smartphone sound systems, electric vehicle motors, military infrared devices, drones, batteries, and so on. However, mining, smelting, processing, exporting, assembly, and transport have strong impacts on the environment, as well as on communities throughout the value chain of extracting and exploiting these resources (Crawford, 2021).

The second layer is logistical, whether associated with the mineralogical layer, with the needs of the production chains of large technology companies or, finally, with the entire public infrastructure that supports the existence of AI. This also demystifies the fable that the magic of AI is the result of a group of entrepreneurial geniuses in a garage who depend essentially on the private sector and venture capital.

If we think of the companies without which AI could not exist, Intel quickly comes to mind. Intel is a company at the heart, brain and circulatory system of the ongoing technological revolution. As Chvatal and Varhadkar (2017) note, Intel's supply chain reflects the company's global operations: Intel does business in more than 100 countries, with over 450 supplier factories and 16,000 suppliers. In addition, Intel fulfils over 1 million orders a year from several factories and 30 warehouses, and processes over a terabyte of supply chain and manufacturing data every day.

But other companies (Amazon, Anthropic, Anduril, Google, IBM, META, Microsoft, NVIDIA, or OpenAI) immediately come to mind, with supply chains that are equally complex to manage and whose production processes are equally hard to render transparent. The complexity of these organizations' supply chains is of such magnitude that no one dares suggest that their profits and dividends can be free from conflicts (wars, human exploitation, environmental damage, and the carbon footprint).

The third layer is human. Human beings are still material creatures embedded in a specific territorial context: they seek housing, education, health care, security, and work. We are territorially and culturally determined beings. It is certainly not by chance that the history and self-determination of peoples has always been (and continues to be) the result of the possession of territory and its resources. The human dimension has two fundamental roles in the existence and future of AI.

On the one hand, human activities can be encoded into data which, organized in large databases (big data), allows AI to learn; the question is how much data can be collected. Data, and its rapid and massive accumulation, will be the decisive element in the evolution of AI. Nowadays the bottleneck, the really difficult and very expensive element to build, is model training. The first GPT model, launched in 2018, integrated 120 million parameters and was trained with 4 GB of text. GPT-4, launched in 2023, is estimated to have used 100 billion parameters, and the volume of data used in its training is estimated at one petabyte (Sigman & Bilinkis, 2023: 49). Obtaining this data has been anything but uncontroversial, as have the results of training these learning models: the problems range from privacy violations and the use of images without authorization to the use of xenophobic and racist content, producing highly biased results that discriminate against genders and ethnicities (Ashwini, 2024; Cameron, 2020; Raya & Herrera-Navarro, 2020; Yang, Qinami, Fei-Fei, Deng, & Russakovsky, 2020).
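A rough back-of-envelope comparison, using only the (estimated) figures quoted above, conveys the pace of this scaling:

$$\frac{100\times10^{9}\ \text{parameters}}{120\times10^{6}\ \text{parameters}}\approx 833, \qquad \frac{1\ \text{PB}}{4\ \text{GB}}=\frac{10^{15}\ \text{bytes}}{4\times10^{9}\ \text{bytes}}=250{,}000.$$

In about five years, the estimated parameter count grew by a factor of nearly a thousand and the estimated training corpus by a factor of a quarter of a million, and every such jump drags a correspondingly larger load of computation, energy, water, and hardware behind it.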

On the other hand, human work is crucial in organizing these huge databases so that they can be digested by LLMs and other natural language models. The central operating logic of AI, as sold to the consumer, is that of clear, fast, and efficient intelligence, with coherence, consistency, and effectiveness as its efficiency criteria. The truth, however, is a little more uncomfortable. AI is structurally dependent on low-paid crowd-workers performing repetitive, psychologically disturbing tasks remotely, far from the apparent purity of AI (Taylor, 2018). The data collected requires thousands of hours of labelling, classifying, checking and moderating content, reviewing and editing text and images, with the work of error checking often transferred to the consumer (Irani, 2015). These are the Ghost Workers (Gray & Suri, 2019). As Crawford (2021) summarizes the issue, large-scale computing is deeply rooted in the exploitation of human bodies, and it is thanks to this exploitation that it works, whether or not in a way that is consented to and perceived by its users (Heaven, 2023).

Cutting across the previous layers, and supporting the existence of all current and future computational capacity, is the energy layer, and its vital and inevitable force continues to be electricity. Servers are far from the public eye, hidden in anonymous data centres, and their polluting features are much less visible than the smokestacks of coal plants (Crawford, 2021; Tung-Hui, 2016). Despite the industry's efforts to increase its energy efficiency, the truth is that both what is known and what is expected is truly worrying (Belkhir & Elmeligi, 2018; de Vries, 2023; Strubell, Ganesh & McCallum, 2019). 'Data centres use an estimated 200 terawatt hours (TWh) each year. That is more than the national energy consumption of some countries (…) That puts ICT's carbon footprint on a par with the aviation industry's emissions from fuel' (Jones, 2018: 163–164). Other authors, such as Ligozat and de Vries (2024), explicitly note that the increase in electricity consumption by data centres, cryptocurrencies and AI between 2022 and 2026 could be equivalent to the electricity consumption of Sweden or Germany. AI's carbon footprint is far from negligible, with scientists estimating that training the BLOOM AI model emitted 10 times more greenhouse gases than a French person emits in a year.
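To make the 200 TWh figure quoted above more tangible, a simple conversion (our own illustration, not taken from the cited sources) expresses it as continuous electrical power:

$$\frac{200\ \text{TWh}}{8{,}760\ \text{h/year}}\approx 22.8\ \text{GW},$$

that is, the equivalent of more than twenty large (roughly 1 GW) power plants running around the clock, and this before the post-2022 surge in generative AI workloads is counted.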

Finally, another structuring dimension of a material nature without which AI could not function is water. Water is needed, for example, for server cooling, for electricity generation and, along the supply chain, for server manufacturing. The magnitude of consumption of this scarce and life-essential resource is simply overwhelming: 'The combined global water withdrawal of Google, Microsoft, and Meta reached an estimate of 2.2 billion cubic meters in 2022, equivalent to the total annual water withdrawal (including municipal, industrial, and agricultural usage) of two Denmarks' (Li, Yang, Islam & Ren, 2023: 2). Although the relevance of the public sector and public goods for the creation of wealth and economic and social well-being has long been demonstrated (Mazzucato, 2013), the role of public goods and public infrastructure does not receive sufficient prominence in the literature. Despite the immaterial good intentions expressed in their ethics and sustainability reports, the truth is that these companies all need public infrastructure to carry out their activities. Nothing that is understood and practiced as AI today would be possible without bridges, antennas, dams, high-voltage lines, and submarine cables (Crawford, 2021; Mao, Wei & Wang, 2024).
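A trivial arithmetic check of the figure quoted above (our own illustration, assuming the commonly used approximation of about 2,500 m³ for an Olympic swimming pool) helps convey its scale:

$$\frac{2.2\times10^{9}\ \text{m}^{3}}{365\ \text{days}}\approx 6.0\times10^{6}\ \text{m}^{3}\ \text{per day}\;\approx\; 2{,}400\ \text{Olympic pools per day}.$$

In other words, the combined withdrawal of just three companies corresponds to thousands of Olympic pools of water every single day.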

AI is a technology and, as such, is not manna from heaven: it has multiple material dimensions, although society remains unaware of this 'invisible weight'. Transparent recognition of this fact is a first step towards incorporating different rationalities into the production and use of this powerful tool.

Is it rational for management to destroy the basis of its existence?

How do companies and public administrations, as well as those responsible for managing them, face this reality? It is not a scenario; it is a reality. What is the economic and business rationality of promoting the viability and sustainability of management through factually unsustainable technologies?

It now seems clear that it is not technically possible to achieve immaterial efficiency gains from AI without a more than proportional increase in material costs, in all their complexity. This is a good way to assess, in a first analysis, the degree of 'ecological maturity' (Odum, 1969) of any technology. This fact makes the defence of the idea that AI is a clean, green technology, capable of solving the truly important problems faced by societies today, even more incoherent and incomprehensible. It seems more coherent and consistent to understand the efforts to deepen AI models as driven by objectives defined by a dozen companies and 'techno-entrepreneurs', guided by a combination of concentration of power and a vision of short-term financial profitability. From this point of view, by treating AI as just another technology in the natural evolution of the technological package available to consumer society, it becomes easier to evaluate it and to make rational decisions about its evolution.

These questions are quite hard to answer due to the complexity of both management and artificial intelligence practices (Ratten, 2024). Initially, in 2019, before the COVID-19 pandemic, there was an emphasis on sustainability and, with it, on green management practices. This coincided with the increased importance managers placed on the United Nations Sustainable Development Goals. Whilst these goals are still important, the rapid health and economic crisis resulting from the COVID-19 pandemic shifted managers' attention to more urgent and pressing issues, including how to deal with multiple concurrent global crises that were unparalleled in their impact on global society and the economy. This meant that innovation and entrepreneurship were again placed at the centre of a manager's existence and of the reason for businesses to exist (Jayamohan, Moss, McKelvie & Hyman, 2024). After the COVID-19 pandemic subsided, the resulting effect was a digital transformation of most business practices and a rapid upsurge of interest in artificial intelligence. In the past, the rate of change from artificial intelligence was negligible: it was mostly evident in movies and considered more science fiction than an actual business reality. With the advent of Tesla cars and with robots becoming more ubiquitous, the reality of artificial intelligence is now evident in society. Moreover, generative artificial intelligence such as ChatGPT has altered how and why people communicate, and voice-assisted technology and chatbots are common in daily business activities. So we are now in a place where sustainability is revered but, at the same time, artificial intelligence is emphasised. This is somewhat of a contradiction, but it does not have to be, as both can coexist.

Conclusion

The interrelationship between sustainability and artificial intelligence is complex but interesting. Despite the economic and social advantages arising from sustainable management practices, the increased interest in artificial intelligence has meant a greater need for energy sources. This creates a dilemma: can we be both sustainable and technologically advanced? The answer lies in the use of entrepreneurial and innovative management practices that harness the advantages of both.

References

Ashwini, K. (2024). Contemporary forms of racism, racial discrimination, xenophobia and related intolerance. United Nations Human Rights Council, Fifty-sixth session, 8 June–14 July. https://documents.un.org/doc/undoc/gen/g24/084/20/pdf/g2408420.pdf
Belkhir, L., & Elmeligi, A. (2018). Assessing ICT global emissions footprint: Trends to 2040 & recommendations. Journal of Cleaner Production, 177, 448–463. doi:10.1016/j.jclepro.2017.12.239
Brockenbrough, M. (2024). Future Tense: How We Made Artificial Intelligence – And How It Will Change Everything. New York: Feiwel & Friends.
Brubaker, B. (2024). Quantum computers cross critical error threshold. Quanta Magazine. Retrieved December 12, 2024, from https://www.quantamagazine.org/quantum-computers-cross-critical-error-threshold-20241209/
Cairncross, F. (1997). The Death of Distance: How the Communications Revolution Is Changing Our Lives. Boston, MA: Harvard Business Review Press.
Cameron, D. (2020). Detroit Police Chief Admits Face Recognition Doesn't Work '95-97% of the Time'. Gizmodo, June 20. https://gizmodo.com/detroit-police-chief-admits-face-recognition-doesnt-wor-1844209113
Chvatal, C., & Varhadkar, A. (2017). Transforming Intel's supply chain with real-time analytics. IT@Intel White Paper.
Conway, E. (2023). Material World: A Substantial Story of Our Past and Future. London: Ebury Publishing.
Crawford, K. (2021). Atlas of AI: Power, Politics, and Planetary Costs of Artificial Intelligence. New Haven: Yale University Press.
de Vries, A. (2023). The growing energy footprint of artificial intelligence. Joule, 7(10), 2191–2194.
Fukuyama, F. (1992). The End of History and the Last Man. New York: Free Press.
Google Quantum AI and Collaborators. (2024). Quantum error correction below the surface code threshold. Nature. doi:10.1038/s41586-024-08449-y
Gray, M., & Suri, S. (2019). Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Boston: Houghton Mifflin Harcourt.
Heaven, D. (2023). Google just launched Bard, its answer to ChatGPT—and it wants you to make it better. MIT Technology Review, Cambridge, MA, March 21, 2023.
Hird, M. (2013). Waste, landfills, and an environmental ethics of vulnerability. Ethics and the Environment, 18(1), 105–124. doi:10.2979/ethicsenviro.18.1.105
Irani, L. (2015). Difference and dependence among digital workers: The case of Amazon Mechanical Turk. South Atlantic Quarterly, 114(1), 225–234. doi:10.1215/00382876-2831665
Jayamohan, P., Moss, T., McKelvie, A., & Hyman, M. (2024). The influence of managerial attributions on corporate entrepreneurship. Journal of Management & Organization, 30(1), 18–39.
Jones, N. (2018). How to stop data centres from gobbling up the world's electricity. Nature, 561(7722), 163–166. https://www.nature.com/articles/d41586-018-06610-y
Li, P., Yang, J., Islam, M., & Ren, S. (2023). Making AI less "thirsty": Uncovering and addressing the secret water footprint of AI models. arXiv preprint arXiv:2304.03271 [cs.LG].
Ligozat, A.-L., & de Vries, A. (2024). Generative AI: Energy consumption soars. Polytechnique Insights. Paris: Institut Polytechnique de Paris. https://www.polytechnique-insights.com/en/columns/energy/generative-ai-energy-consumption-soars/
Mao, F., Wei, Y., & Wang, Y. (2024). Impact of computing infrastructure on carbon emissions in China. Scientific Reports, 14(1). doi:10.1038/s41598-024-81677-4
Marshall, T. (2015). Prisoners of Geography: Ten Maps That Tell You Everything You Need to Know About Global Politics. UK: Elliott & Thompson Limited.
Marshall, T. (2021). The Power of Geography: Ten Maps That Reveal the Future of Our World. UK: Elliott & Thompson Limited.
Marshall, T. (2023). The Future of Geography: How Power and Politics in Space Will Change Our World. UK: Elliott & Thompson Limited.
Mazzucato, M. (2013). The Entrepreneurial State: Debunking Public vs. Private Sector Myths. London: Anthem Press.
META. (2024). Accelerating the Next Wave of Nuclear to Power AI Innovation. December 3. https://sustainability.atmeta.com/blog/2024/12/03/accelerating-the-next-wave-of-nuclear-to-power-ai-innovation/
Nunes, S., & Cooke, P. (2021). New global tourism innovation in a post-coronavirus era. European Planning Studies, 29(1), 1–19. doi:10.1080/09654313.2020.1852534
Nunes, S., & Sousa, V. (2019). Scientific tourism and territorial singularities: Some theoretical and methodological contributions. In V. Ratten, J. Álvarez-Garcia & M. Rio-Rama (Eds.), Entrepreneurship, Innovation and Inequality: Exploring Territorial Dynamics and Development (pp. 28–51). New York: Routledge (Routledge Frontiers of Business Management).
Odum, E. (1969). The strategy of ecosystem development: An understanding of ecological succession provides a basis for resolving man's conflict with nature. Science, 164(3877), 723–731. doi:10.1126/science.164.3877.262
Pemberton, S. (2024). There's no I in AI. Future Technologies Conference (FTC), 14–15 November, London, UK.
Ratten, V. (2024). Management trends: Artificial intelligence, Q-day, soft skills, work patterns, diversity, and sustainability initiatives. Journal of Management & Organization, 30(2), 219–222.
Raya, C., & Herrera-Navarro, A. (2020). Mitigating gender bias in knowledge-based graphs using data augmentation: WordNet case study. Research in Computing Science, 149(10), 71–81.
Sigman, M., & Bilinkis, S. (2023). La nueva inteligencia y el contorno de lo humano. Madrid: Penguin Random House Grupo Editorial España.
Smil, V. (2022). How the World Really Works: A Scientist's Guide to Our Past, Present and Future. London: Penguin Books.
Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and policy considerations for deep learning in NLP. arXiv preprint arXiv:1906.02243.
Taylor, A. (2018). The Automation Charade. Logic Magazine, August 1. https://logicmag.io/failure/the-automation-charade/
Tung-Hui, H. (2016). A Prehistory of the Cloud. Cambridge, MA: MIT Press.
Yang, K., Qinami, K., Fei-Fei, L., Deng, J., & Russakovsky, O. (2020). Towards fairer datasets: Filtering and balancing the distribution of the people subtree in the ImageNet hierarchy. FAT* '20: Proceedings of the 2020 ACM Conference on Fairness, Accountability, and Transparency, 547–558. doi:10.1145/3351095.3375709
Zeihan, P. (2022). The End of the World is Just the Beginning: Mapping the Collapse of Globalization. New York: Harper Business.