1. Introduction
Artificial Intelligence (AI) has always been more than a purely technical endeavor. From its earliest days, it was deeply connected to questions about human intelligence, perception and reasoning (McCarthy et al., 2006; Turing, 2009). Researchers drew inspiration from fields like psychology, philosophy and neuroscience, aiming to understand and replicate the ways humans think (Newell & Simon, 1972). Over time, as computational power increased and data became more abundant, AI evolved from rule-based logic systems to complex statistical and machine learning models (Bishop & Nasrabadi, 2006; LeCun et al., 2015). What began as an exploration of cognition has grown into a vast, interdisciplinary field touching every aspect of modern life.
Today, AI drives innovation across scientific research, business processes and public services (Agrawal et al., 2022; Eubanks, 2018; Van Ooijen et al., 2019). With this growing presence comes growing responsibility. Institutions around the world now face ethical questions about fairness, transparency and accountability (Dignum, 2019; Jobin et al., 2019; Mittelstadt et al., 2016). The question is no longer whether AI will shape the future, but whether it will do so in the right way, and whether we are guiding its development in the right direction.
It is within this context that our research lab, the Civic AI Lab, was established in 2020 as a partnership between universities and public institutions designed to address these urgent concerns. The lab was founded on a simple but powerful principle: AI should be developed not just for people, but also with and by people. This draws from the ethos of transdisciplinary research, which argues that knowledge production must occur through iterative engagement with affected communities, blurring the boundaries between science and society (Gibbons & Nowotny, 2001). Rather than building systems in isolation, the lab aims to connect technological development with the lived experiences of the communities it affects. Our work is grounded in the belief that civic engagement and interdisciplinarity are essential for meaningful AI innovation.
One of the lab’s earliest and most revealing projects focused on a seemingly local issue: school choice in Amsterdam (Tasnim et al., 2024). Each year, thousands of primary school students are matched to secondary schools using a matching algorithm. The city uses the Deferred Acceptance algorithm (Abdulkadiroğlu et al., 2005; Gale & Shapley, 1962), adapted to use random lottery numbers because schools do not rank students under Amsterdam’s free school choice policy. This adaptation created unintended consequences: students with unfavorable lottery numbers found themselves placed in schools far down their preference list, sometimes even as low as their 18th choice.
This problem is not unique to Amsterdam. It illustrates a broader tension at the heart of AI and algorithm design: the gap between theory and reality. The Deferred Acceptance algorithm is celebrated for its theoretical properties, but when adapted to the real-world setting of an open school choice policy, its effectiveness was compromised.
Real-world deployment requires contextual awareness, stakeholder involvement, and a deep understanding of the social systems into which algorithms are introduced. Interdisciplinary collaboration, particularly between technical and social domains, has increasingly been positioned as necessary to bridge this divide (Barry & Born, 2013). However, interdisciplinarity itself is not without problems. It has often been critiqued for remaining superficial, poorly supported by institutional incentives, or blocked by entrenched disciplinary norms (Barry & Born, 2013; Jacobs & Frickel, 2009). This underscores the need to examine how interdisciplinary efforts can be integrative and impactful.
In this paper, we reflect on five years of research at our research lab, drawing lessons from our work on school choice and other civic challenges. We found that trying to solve a concrete, socially impactful problem inevitably pushed us to cross disciplinary boundaries and engage directly with communities. Through this process, we came to see how community-informed, interdisciplinary approaches enrich scientific understanding, generating insights that feed back into existing theory. We observe that it is through the transdisciplinarity of civic engagement that interdisciplinary research can truly take shape and overcome many of the barriers that typically limit it. We argue that impactful AI must operate across three critical dimensions: between science and society, between fundamental and applied research, and between quantitative and qualitative methodologies. At the center of all three is the human, whose life, choices and dignity must remain at the core of our work.
2. Our research principles
Our research work has been shaped by five guiding principles that define not just what we build, but why and how we build it. These principles challenge conventional AI development, which often prioritizes efficiency and profit over ethics and equity. Instead, we center human values (often deemed elusive or hard to define), diverse perspectives, and participatory research as core elements of responsible AI.
The first principle, “plan and purpose,” asks the fundamental question: why do we develop AI? Too often, AI is designed to maximize efficiency or profit, sidelining human well-being. Scholars have long highlighted how top–down technology deployments often fail to account for local needs, reproduce global power imbalances or reinforce existing social hierarchies (Escobar, 2011; Heeks, 2002; Toyama, 2011). Scholars of postcolonial and feminist epistemologies have similarly examined how technological interventions can perpetuate colonial or developmentalist agendas, treating communities as passive sites of experimentation (Harding, 1998; Mohanty, 1988). We aimed to take a different approach, embedding compassion, equity and social good into the very design of our models. This goes beyond adjusting the objective function of an AI algorithm to balance efficiency and fairness; it means working with communities from the start, setting research questions and priorities together, and designing systems alongside the people who will be most affected.
The second principle, “people and perspective,” considers who develops AI and who is affected by it. AI is never neutral; it reflects the perspectives of those who create it. To ensure inclusive, ethical technology, we must build diverse teams that bring a range of lived experiences to the table. Without this balance, AI risks reinforcing exclusion rather than mitigating it.
The third principle, “parts and parcels,” focuses on what goes into AI – its data, methodologies and computational structures. AI systems are only as good as the data that train them. However, too often data are drawn from narrow populations, specific historical moments or biased sources, leading to models that are neither fair nor representative. Ethical AI demands a critical examination of data quality, algorithmic bias and the sustainability of computational infrastructures.
The fourth principle, “processes and procedures,” asks how AI is developed. Transparency and accountability are often cited as priorities in AI governance but remain largely aspirational. True accountability means more than just publishing reports; it requires open processes, clear explanations of decision-making, and mechanisms for redress when AI systems cause harm.
Finally, the fifth principle, “participation and practice,” ensures that AI is not just built for people, but with and by people. Those who experience the consequences of AI first-hand must have a role in shaping its design. Inclusive, participatory research fosters more robust, trustworthy AI systems that reflect real-world needs and mitigate harms such as bias and discrimination.
These five principles have not only guided our broader work at our research lab but have also been central to our research on school choice and allocation algorithms. By embedding ethics, participation, and interdisciplinary collaboration into AI development, we strive to create technology that serves the public good, rather than just the bottom line.
3. The case of Amsterdam school choice
Our research began with a simple question: how can we use responsible AI to reduce inequality? Our education project centered on the city’s secondary school admissions system. At first glance, this appears to be a technical optimization problem: matching students to schools based on their preferences. But beneath the surface lies a complex socio-technical challenge involving students, parents, policy-makers, schools and municipal authorities. It is a problem where technical solutions meet real-world stakes, where fairness and organizational trust matter just as much as efficiency.
Amsterdam’s school choice policy is unique. Since 2015, the city has used an open school choice system, allowing students to apply to any secondary school in the city regardless of their residential location. This policy was implemented to promote equal access and parental choice, reflecting a broader commitment to educational equity (Gautier et al., 2016; De Haan et al., 2015). However, it also created significant challenges: with only a few schools being highly sought-after, demand became extremely skewed, making allocation both politically sensitive and socially impactful (Ruijs & Oosterbeek, 2019).
The current allocation mechanism is based on the Deferred Acceptance (DA) algorithm, a well-established method in the field of market design and school choice (Roth, 2008). DA has been lauded for being strategyproof and fair under certain assumptions. However, in Amsterdam’s case, the DA algorithm had to be adapted to fit the open-choice setting. Because schools do not rank students, whether due to policy constraints or institutional indifference, ties between students are resolved through random lottery numbers. This modification, while necessary, significantly undermines some of DA’s most desirable theoretical properties.
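To make the mechanism concrete, the sketch below shows a minimal student-proposing Deferred Acceptance procedure in which a single random lottery number per student stands in for school priorities, as in the adapted Amsterdam setting. The function name, data structures and toy data are illustrative assumptions made for exposition, not the municipality’s actual implementation.

```python
import random
from collections import deque

def deferred_acceptance(preferences, capacities, seed=None):
    """Student-proposing Deferred Acceptance where a single random lottery
    number per student stands in for school priorities.

    preferences: dict student -> list of schools, most preferred first
    capacities:  dict school  -> number of available seats
    Returns a dict school -> set of admitted students.
    """
    rng = random.Random(seed)
    lottery = {s: rng.random() for s in preferences}   # lower = higher priority

    next_choice = {s: 0 for s in preferences}          # pointer into each preference list
    held = {school: set() for school in capacities}    # tentative acceptances
    unassigned = deque(preferences)

    while unassigned:
        student = unassigned.popleft()
        prefs = preferences[student]
        if next_choice[student] >= len(prefs):
            continue                                   # preference list exhausted
        school = prefs[next_choice[student]]
        next_choice[student] += 1
        held[school].add(student)
        if len(held[school]) > capacities[school]:
            # Reject the held student with the worst (highest) lottery number.
            rejected = max(held[school], key=lottery.__getitem__)
            held[school].remove(rejected)
            unassigned.append(rejected)
    return held

# Toy example: three students, three single-seat schools.
prefs = {"ana": ["A", "B", "C"], "bo": ["A", "B", "C"], "cem": ["A", "C", "B"]}
caps = {"A": 1, "B": 1, "C": 1}
print(deferred_acceptance(prefs, caps, seed=42))
```

The key point for the Amsterdam case is visible in the rejection step: with schools indifferent between applicants, everything hinges on the lottery numbers, so a student who draws poorly can be rejected down the length of their list.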
Most students do receive one of their top school choices. However, each year, a small but significant group of students is placed in schools they ranked extremely low, sometimes as far down as their 18th choice. These students and their families often express frustration and disillusionment with the system, sometimes even filing court cases (Couzy, 2019). For parents, the school choice process is not just a logistical hurdle; it is an emotionally taxing, high-stakes decision that shapes their child’s future. Being placed far down the list feels like a personal loss and an institutional failure.
In the first phase of the school choice project, our work focused on analyzing this problem and testing alternatives. Our first study examined whether a different algorithm could better meet the goals of the system (Tasnim et al., 2024). We proposed a method called Rank Minimization, which uses the Hungarian algorithm to minimize the average rank of school assignments. Unlike DA, this method is not strategyproof, meaning that some students could benefit from misreporting their preferences and might try to game the system. However, our hypothesis was that improving average outcomes might justify the potential for manipulation.
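As an illustration of the underlying idea, the following sketch computes a rank-minimizing assignment by expanding each school into individual seats and solving a linear sum assignment problem with SciPy, whose solver plays the role of the classical Hungarian algorithm. The function name, data layout and unranked-school penalty are assumptions made for this example rather than the exact setup used in our study.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def rank_minimizing_assignment(preferences, capacities, unranked_penalty=None):
    """Assign students to school seats so that the total (hence average)
    preference rank is minimized, cast as a linear sum assignment problem.

    preferences: dict student -> list of schools, most preferred first
    capacities:  dict school  -> number of available seats
    unranked_penalty: cost of a school the student did not rank
                      (defaults to one worse than the longest list)
    """
    students = list(preferences)
    # Expand each school into one column per seat so capacities are respected.
    seats = [school for school, cap in capacities.items() for _ in range(cap)]
    if unranked_penalty is None:
        unranked_penalty = max(len(p) for p in preferences.values()) + 1

    cost = np.full((len(students), len(seats)), float(unranked_penalty))
    for i, student in enumerate(students):
        rank = {school: r for r, school in enumerate(preferences[student], start=1)}
        for j, school in enumerate(seats):
            if school in rank:
                cost[i, j] = rank[school]

    # SciPy solves the linear sum assignment problem (the task classically
    # handled by the Hungarian algorithm).
    rows, cols = linear_sum_assignment(cost)
    return {students[i]: seats[j] for i, j in zip(rows, cols)}

# Toy example: three students competing for the same first-choice school.
prefs = {"ana": ["A", "B"], "bo": ["A", "C"], "cem": ["A", "B"]}
caps = {"A": 1, "B": 1, "C": 1}
print(rank_minimizing_assignment(prefs, caps))
```

Because the objective depends on the reported ranks themselves, a student who shades their true preferences can in principle change their own cost row to their advantage, which is exactly the loss of strategyproofness discussed above.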
To test this, we analytically derived a best-response strategy for rational agents and conducted simulations under different scenarios. The results were surprising. In 14 out of 16 scenarios, the Rank Minimization method performed better than or comparably to DA, even in the presence of strategic behavior. Only in two scenarios did it result in worse outcomes. This suggested that, under real-world constraints, relaxing the strategyproofness requirement might be a trade-off worth considering.
Yet, despite these findings, publishing this work proved challenging. Reviewers questioned its novelty and criticized its specificity to Amsterdam. Some valued its practical relevance, while others dismissed it as too applied or lacking theoretical innovation. Because our work also had social and political implications, we had to manage complexities on the political and governance fronts as well. Our work advocated the exploration of non-strategyproof school choice methods, while the dominant view among both senior academics and policymakers (often advised by these same academics) was that this was too risky to attempt. On the other hand, when communicating our research to parents affected by the school choice system, we received feedback that alternative algorithms are worth at least a public discussion.
This experience raised broader questions about the role of applied research in AI, the value of studying local, context-dependent problems, and the challenge of navigating complex organizational landscapes to create lasting policy change. It became clear to us that designing fair and effective algorithms required not only theoretical insight but also an understanding of how those algorithms are used, perceived, and experienced. We also came to appreciate the importance of disseminating such work among academics, policymakers and citizens. This realization marked a shift in our approach: from designing AI for people to designing AI with people. In the second phase of the school choice project, we turned our attention to understanding how parents engage with the school choice system. We designed a behavioral experiment to study the conditions under which parents might act strategically. The study was embedded in a survey and distributed to parents in Amsterdam who were preparing for their child’s school allocation process.
In the experiment, we presented three scenarios. In the first, parents were asked to submit their preferences without any additional information. In the second, we provided the probabilities of success for different manipulative strategies, where manipulation carried high risk and low reward. In the third, the risk remained high but the potential reward was greater. We also collected background information such as income, education and aspiration levels, with the hypothesis that strategic behavior might correlate with these variables.
Conducting this study required sustained engagement with the local community. We worked closely with OCO (Onderwijs Consumenten Organisatie), a non-profit organization that supports parents navigating Amsterdam’s education system. Our collaboration began during the early stages of our first project, when OCO reached out to learn more about our research on the city’s school choice system. Given that much of OCO’s advising work centers on the transition from primary to secondary education, they were deeply interested in how the school choice algorithm influenced matching outcomes. In these initial conversations, OCO shared a range of concerns: the lack of transparency in the decision-making processes of school boards, and the broader social inequalities that persist irrespective of any particular matching algorithm. They highlighted how parents from lower socioeconomic backgrounds often face systemic disadvantages, including challenges in compiling complete preference lists or effectively navigating the school choice process. Throughout the first project, we maintained an ongoing dialogue with OCO, incorporating their perspectives into how we framed our research questions. This engagement was instrumental in shaping the second phase of our work. Together with OCO, we recognized the importance of capturing diverse parental perspectives and agreed to collaborate on a survey that OCO would help disseminate. Partnering with OCO not only enriched the design of our study but also helped us navigate a common barrier in academic research: public mistrust toward researchers, especially on sensitive topics like strategic school choice. Through OCO’s established relationships and legitimacy within the community, we were able to reach parents more effectively and conduct the study with greater sensitivity and trust.
Our survey design had to balance academic rigor with real-world constraints: parents were often overwhelmed by the information and unsure about the implications of their choices. One key insight from this work was the uneven cognitive and emotional burden placed on parents. Those from higher-income backgrounds were more confident and engaged, while parents from marginalized communities expressed anxiety, distrust and confusion. These reactions reflected broader systemic inequalities that any AI-based solution must acknowledge and address. Observing and acknowledging these concerns not only deepened our understanding of the problem as a whole, but also earned us the trust of parents and their support for our AI research.
Reflecting on both phases of this project, we found that the technical problem of school allocation cannot be solved through algorithms alone. The first phase taught us that improving technical performance is important, but insufficient if the solution does not account for human behavior and lived experience. The second phase reinforced the idea that research done together with communities improves trust and relevance and surfaces new insights that no algorithm can reveal alone. Building on this work, we are preparing a follow-up study that uses unstructured narrative interviews (the Biographical Narrative Interpretive Method, BNIM) (Wengraf, 2001) to engage more deeply with parents’ experiences.
Together, these projects illustrate the need for a more expansive approach to AI research that embraces theoretical development, interdisciplinary knowledge and civic engagement as complementary rather than competing forces.
4. Reflecting on our experience building civic-centered AI
In this section, we reflect on what our work on school choice taught us about doing responsible, civic-centered AI research, and how these lessons may be relevant to other domains. At our research lab, we began our work – by design – with a diverse group of researchers and a strong objective: to develop AI that is responsible, inclusive and driven by civic values.
From the outset, we were thus engaged with the fundamental questions of who builds AI and why: questions that lie at the core of our guiding principles of people and perspective and plan and purpose. As shown in Figure 1, these foundational questions sit at the center of the conceptual space where all dimensions intersect: the human.

Figure 1. A spherical conceptual model representing the three dimensions of the practice of interdisciplinarity in civic-centered AI. Each axis spans a spectrum (quantitative–qualitative, science–society, fundamental–applied) tied to the following core questions: how, what and for whom. The who and why reside at the center: the human.
Particularly for the school choice project, our journey began with a technical question: could we improve an inefficient algorithm used in school allocation? It quickly became apparent that a technical lens alone was insufficient. What counts as a “good” allocation when systems operate under imperfect social and organizational conditions? The insights gained from our first project informed our second: we designed a behavioral experiment with parents to understand how they engage with the school choice system. Here, the lens widened again. The algorithm was not just a function to be optimized; it was embedded in a complex socio-technical system and needed to be designed accordingly.
We now understand our work as operating across three core dimensions, although we did not name them as such at the time. In our other projects and use cases – covering various social domains such as mobility (Michailidis et al., 2024), environment (Alpherts et al., 2025), governance (Dankloff et al., 2024), law (Skoric et al., 2024), and co-creation (Kalender, 2024) – these three dimensions of AI design also emerged organically: answering questions on what, how and for whom we design AI. In Figure 1, these dimensions offer a framework through which to understand our interdisciplinary journey.
The first is the relationship between science and society, which relates to the question: for whom do we design AI? Traditional views often hold that science must remain neutral and objective, observing and analyzing society from a deliberate distance. AI, as a tool born out of scientific innovation, is likewise often seen as separate and distinct from society. However, our experience reinforced that science and AI are both inevitably embedded in social contexts. Algorithms are not neutral artifacts; they reflect the assumptions and priorities of their creators. Civic-centered AI recognizes this and embraces the responsibility that comes with it. This reflection resonates with the principle of participation and practice, which urges that those impacted by AI systems must be part of the design process from the start. Research should not be done merely on communities, but with them.
The second dimension is the interplay between fundamental and applied research, which connects to the question: what are we designing? We often think of this as a one-way street: fundamental research lays the groundwork, and applied research implements it. But in our case, applied work raised new questions for theory, challenged core assumptions, and suggested new ways of framing the problem. While this bidirectional relationship between applied work and theoretical development emerged from our specific case, we believe it carries a broader methodological lesson for AI research. Applied research enriches existing theories by revealing overlooked complexities, challenging core assumptions, and stimulating new lines of foundational inquiry. We believe that iterative movement between theory and application is therefore essential for advancing AI development.
The third dimension is the integration of qualitative and quantitative methods, which informs the question: how should we design AI? AI has historically favored quantitative approaches: statistical models, optimization algorithms and predictive accuracy. But societal problems are rarely reducible to numbers alone. In our second project, we learned that qualitative insights were not secondary or supplemental; they were essential to understanding how people actually interact with algorithmic systems. Combining both lenses offered a more holistic understanding, capturing not just what people do, but why they do it.
At the center of all three dimensions is the human. Whether we are speaking about disciplines, methodologies, or research paradigms, all paths ultimately converge on human experiences, values and relationships. The challenge of building civic-centered AI is, at its core, a human challenge. It requires us to engage across disciplines, collaborate with communities and reflect on our own positions as researchers.
And yet, despite the increasing urgency of these issues, academia remains largely organized around disciplinary silos. Computer science, economics, psychology and sociology each have their own languages, norms and reward structures. Interdisciplinary collaboration is often encouraged in principle but undersupported in practice. In our own experience, we have seen how difficult it can be to publish interdisciplinary work or to gain recognition for research that straddles the line between fundamental and applied. But if we are to respond meaningfully to the societal challenges posed by AI, we must break down these silos. Interdisciplinarity has been widely critiqued for staying superficial, lacking proper incentives, or being blocked by disciplinary boundaries (Barry & Born, 2013; Jacobs & Frickel, 2009). In our case, it was not interdisciplinarity on its own that made our work effective, but rather the transdisciplinary nature of civic engagement. When we grounded our research in the real concerns of communities, through partnerships and ongoing dialogue, disciplinary boundaries became flexible out of necessity. The drive to address human needs brought together technical, behavioral and ethical perspectives in a way that individual disciplines could not achieve alone. This leads us to believe that community and societal engagement will be the driving force behind the interdisciplinary creation of AI and the breaking of disciplinary silos as we know them today.
In this spirit, we must also reflect on the principle of people and perspective, which asks: who builds AI? This question goes beyond academic discipline and touches on identity, representation, and power. During our school choice work, when we publicly questioned the limitations of the existing algorithm, we were met with institutional skepticism: not on technical grounds, but based on who we were perceived to be. This moment made clear that credibility in AI is not just about evidence or argument, but about whose voice is valued. Our AI research must actively challenge these hierarchies by uplifting the knowledge and authority of those who have been historically excluded from technological development, especially those embedded in the very communities AI aims to serve.
It is also worth noting that our reflections do not map neatly onto a single stage of the AI lifecycle. Many of the most important insights emerged in the ideation phase, and subsequently carried over into the design, deployment and impact assessment phases.
The three proposed dimensions cut across the typical technical pipeline, emphasizing that centering human needs and civic perspectives can fundamentally reshape whether, how, and why AI systems are pursued in the first place.
Our reflections go beyond the Amsterdam school choice case. They are a call to researchers, policymakers and communities to embrace a broader vision of AI research that is interdisciplinary, context-aware and grounded in civic responsibility. We also underline the vital importance of civic engagement: inviting citizens’ involvement not as data points, but as co-designers and co-owners of the innovations. This, combined with more transparency in AI research and policy decisions, will help build trust between academia, businesses, government and citizens. This can be a very uncomfortable space for researchers; indeed, it requires skills many scientists are not trained for in their careers. But we argue that true growth and progress can only come when we allow ourselves to be uncomfortable in our working space.
When AI research becomes flexible across all these dimensions, centered on people and engaged with society, it will become not just innovative but transformative.
5. Conclusion
Our experience working on school choice in Amsterdam has been more than a case study in algorithm design – it has been a broader lesson in what it means to build responsible, civic-centered AI. We began with a technical question and ended with a new appreciation for the role of interdisciplinarity, applied relevance and community engagement in shaping impactful research. Along the way, we learned firsthand that designing algorithms is never just a technical task – it is also a social, political and ethical one.
As we reflect on this journey, we recognize that we are not alone in pursuing this vision. Around the world – in the West, the Global South and Asia – research communities are beginning to grapple with similar questions about how to center civic values in AI. These communities, like ours, are pushing against the boundaries of disciplinary silos and advocating for research that is not only technically rigorous but also socially grounded. Importantly, they recognize that interdisciplinarity alone does not drive social impact; rather, it is the pursuit of social impact that fosters interdisciplinarity and generates new scientific knowledge.
This convergence of thinking signals a paradigm shift in the field. AI is no longer seen solely as a tool to optimize performance or automate decision-making; it is increasingly understood as a co-actor in human systems – one that must be accountable, inclusive and aligned with civic values. Building such systems requires collaboration across disciplines and sectors, and genuine partnerships with the communities affected by AI.
Our hope is that this reflection contributes to that growing movement. We invite fellow researchers, practitioners, and institutions to reimagine the goals and methods of AI research. Let us move beyond narrow definitions of impact and success. Let us invest in relationships, learn from those with lived experience, and pursue questions that matter in the world. Only then can AI fulfill its promise – not just as a technology, but as a force for equitable and collective progress.
Acknowledgements
We are deeply grateful to Floor Kaspers and Lidewij Koren from OCO for their support and advice during the survey project. We also express our sincere appreciation to the parents who participated in the survey for their invaluable insights and time.
Funding statement
This research was supported by the Innovation Center for AI (ICAI, The Netherlands), the City of Amsterdam, and the Ministry of the Interior and Kingdom Relations of the Netherlands.
Competing interests
The authors declare no competing interests.
Mayesha Tasnim is a PhD researcher at the University of Amsterdam’s Civic AI Lab. Her research focuses on Responsible AI and citizen-centric approaches to algorithm design for public services.
Sennay Ghebreab is Professor of Socially-Intelligent AI at the University of Amsterdam, where he leads the Socially Intelligent Artificial Systems (SIAS) group. He is founder and scientific director of the Civic AI Lab, with a research focus on bridging science and society to develop AI that promotes equal opportunities in domains such as education, welfare, environment, mobility and health.