1. Introduction
It is largely acknowledged that exploring the interstices between several different scientific and technological domains is promising for the development of innovative technologies (Severinghaus, 2019; Shinn, 2001; Rashid & Kausik, 2024). Interstitial explorations can notably lead to the development of instrumental devices or facilities known as technological platforms: these platforms are specific configurations of (i) technological components (hardware or software) spanning several scientific and technological domains, (ii) individuals (engineers, researchers, etc.) and (iii) programmes (whether industrial or scientific) (Le Masson et al., 2010; Keating & Cambrosio, 2002). They aim to be of value to two types of future users (Le Masson et al., 2010; Merz & Biniok, 2010):
- Users (often academics) who see the technological components of the platform as research objects on which to conduct experiments: for them, the behaviors of the technological components are scientific phenomena to be studied. Following Le Masson et al. (2010), we will say that these academic users of the platform see it as an experimental device (ED).
- Users (often from industry, but possibly also from academia) who see the platform components as interesting future commercial tools with which they could carry out their operations, provided that these tools meet their specific requirements in terms of standards, reliability, control, etc. Following Le Masson et al. (2010), we will say that these users of the platform see it as an analytical device (AD).
Technology platforms have been described in the life sciences (Le Masson et al., 2010), in micro- and nanotechnology (Merz and Biniok, 2010), in the biomedical world (Keating and Cambrosio, 2003), etc.
This paper focuses on a French RTO (Research and Technology Organization) created in 2012, SystemX, which conducts collaborative R&D projects (involving academic actors, industrial actors and SystemX researcher-engineers) to develop state-of-the-art and application-independent (generic) digital devices, e.g., for evaluating models combining AI and physical simulation (https://lips.irt-systemx.fr/) or for visualizing indicators on data and model performance (https://debiai.irt-systemx.fr/). We model the devices resulting from SystemX projects in the following way, inspired by Suh's (1995) matrices:
At the end of a SystemX project that has produced such a device, neither the value of the device as an AD for future users (industrial or academic) nor the value of the device as an ED for future newcomer researchers is clearly identified and given. Indeed, this device is not the result of classic user-oriented DevOps, but of a research project: thus, as illustrated by the question marks in Figure 1, some of its features are still the subject of development or research. Even the partners who participated in the initial collaborative project that produced it are not systematically ready (in terms of skills but also in terms of funding) to commit to becoming users. And some actors who could be users (be it of the AD or the ED) are not yet identified or are not aware of the existence of the device: these devices are associated with an unknown value space that needs to be explored. This is in line with situations described in the literature where, at the end of a research project, newly born technological platforms are cost centres, consuming resources (e.g. computational capacity, dedicated engineering teams) but not yet generating revenues (in particular commercial revenues) (ESFRI, 2018). If they are not rapidly made the subject of further exploration ((re)design work on the technological bricks, aimed at further research or commercialisation), they run the risk of under-utilisation and obsolescence (Le Masson et al., 2010), especially in contexts of rapid technological change. Faced with this risk, the classic lever is to seek funding (from public actors, venture capitalists, etc.) to launch subsequent exploration activities. But as funding resources are not infinite, obtaining funding will only secure future activities for a limited number of candidate platforms: the selection process for funding (economic evaluation, etc.) (Silva et al., 2020) might also reject interesting platforms.

Figure 1. Model inspired by Suh’s matrices to describe the devices resulting from SystemX collaborative research projects
SystemX is approaching the question of how to support the future life of its technological platforms in a way that does not just consist of seeking new funding to carry out further design activities on the technological components of its platforms: it is also developing training programmes linked to its platforms. From a financial point of view, the development of training courses is more affordable than (re)design activities. This means that many more candidate technological platforms can be involved, particularly those that are promising but struggling to find investors. But at the same time, an approach based on training is surprising: indeed, while it is common practice to complement technologies whose value(s) has (have) already been identified with a training programme (e.g., Dwyer, 1994), the idea that developing training can support the evolution dynamics of partially known, unfinished devices, many of whose dimensions remain to be explored and whose value has yet to be made explicit, is not very intuitive. More generally, it is not obvious what training for a partially unknown and unfinished object would even consist of. And even if we can envisage appropriate forms of training for unfinished objects, what role can such training be expected to play in terms of enhancing the value of the platform? The aim of this paper is therefore to determine whether we can find at least one empirical case that would confirm that a training approach, at first sight deviant (if not absurd), is in fact a possible alternative for supporting and stimulating the evolution dynamics of emerging technological platforms: in other words, we are looking for a "talking pig" in the sense of Siggelkow (2007) that would confirm the existence of the anomaly. We seek to identify such a "talking pig" in SystemX's training initiatives.
This paper is structured as follows. Section 2 reviews the literature in order to determine what forms of training can be associated with a partially unknown object such as a technological platform. Section 3 introduces our method: the study of a single case in which training played a very powerful role in enhancing the value of a SystemX technological platform, which suggests that this case is promising for confirming the possibility of the anomaly. Section 4 presents our findings: it characterizes the training that was added to the studied platform and highlights that this training generated exchanges of independent knowledge. Finally, Section 5 is dedicated to the discussion opened by this research, as well as its limitations.
2. Literature review
The aim of this section is to shed some light on what training related to a partially unknown object such as a technological platform can be. To do this, we will first distinguish between the forms of training for designers of technological bricks (in subsection 2.1) and the forms of training for users of technological bricks (in subsection 2.2). In subsection 2.2, we will consider AD users and ED users separately. Indeed, previous research has highlighted the importance of treating separately the stakeholders interested in EDs and those interested in ADs, notably in micro- and nano-electronics (Merz & Biniok, 2010). In the domain of life sciences, Le Masson et al. (2010) emphasize that undifferentiated platforms (mixing technological bricks with ED potential and technological bricks with AD potential) tend to become ADs belatedly, without having produced the expected scientific results. As shown in Figure 2, these works highlight the challenge of developing a family of new versions/variants, differentiated according to an objective of analytical device (AD) or experimental device (ED) (Le Masson et al., 2010).

Figure 2. Insights from the dynamics of platform evolution studied in the life sciences (Le Masson et al., 2010): the importance of developing families of platforms, each of which is either entirely of AD-type or entirely of ED-type
Finally, in subsection 2.3, we will look at forms of training that create new ties between designers, AD users and ED users.
2.1. Forms of training for AD or ED designers: scientific and professional training
One of the main forms of training for AD or ED designers is courses on the scientific and technological fields related to the platform. These courses can take place and be delivered independently of the platform in question. Their formats can be varied, ranging from course sessions, manuals, handbooks and online tutorials to self-guided training that helps learners reproduce examples (Artuso et al., 2022; Chang et al., 1993; Hsu & Chen, 2003). The target audience for these courses is the designers/engineers who are (likely to be) responsible for developing the technological bricks of the platform. The aim is to enable them to develop the skills to understand the behaviour of the platform's technological bricks, and to be able to keep the platform operational.
There are reports of such training courses for ED designers: Artuso et al. (2022) focus on the case of ASIC technologies for scientific experiments in high-energy physics. Here, increasing the number of highly qualified workers designing ASIC-based EDs is crucial to maintain the pace of progress in high-energy physics. Therefore, "multi-faceted" training initiatives have been developed (specialised courses in instrumentation (related to photon detectors, calorimeters, etc.), courses in design tools (computer-aided design programmes), etc.). There are also reports of such training courses aimed at the designers of ADs: still in the case of integrated circuits and ASICs (Chang et al., 1993; Hsu & Chen, 2003), training initiatives developed by the research institute ITRI in Taiwan played an important role in structuring the design workforce of the leading national semiconductor firms (notably TSMC). Again, these professional courses focused on basic concepts related to ASICs (courses, publication of integrated circuit design manuals, etc.) and on design tools (e.g., computer-aided design programs). These same courses were also designed for students in initial training.
In summary, we find here the very classic form of training: a transfer of theoretical and practical knowledge from knowledgeable actors to learners. The training knowledge Ktraining enhances the skills and capabilities of the designers (we write this: Ktraining -> Kdesigners).
2.2. Forms of training for AD and ED users
2.2.1. Adding documents to the platform (tutorials, user guide, etc.)
Whether it is an ED or an AD, it is common practice to attach up-to-date written documentation to a new device.
In the case of EDs, there is written documentation describing good practice in the use of the ED. In addition to good scientific practice, this may also formalise the rules for collective use of a device, when it is a large shared device (e.g. beam allocation rules in a synchrotron (Hallonsten, 2009)). Furthermore, since EDs are research objects (the design of scientific instrumentation is a scientific activity in its own right (Artuso et al., 2022)), the design of a new ED often involves the publication of a scientific paper describing it: such a paper explains how the ED is new compared to the state of the art. This paper may also serve as a user's guide, i.e. it may give guidelines for ensuring controlled experiments, for minimising error(s), etc. (e.g., Chieco et al., 1994). In this way, knowledge Kdocumentation enhances the knowledge of the scientific user (Kdocumentation -> KED_user).
Given an AD variant of a platform whose technological bricks enable a known list of functions (F1, ..., Fm) to be implemented, the aim of attached documentation is to help an AD user contextualise the platform in their own application sector. Having followed the instructions, the user should be able to use the technological bricks in their own context (Marcovich and Shinn, 2011). The user learns the rules for using the bricks that implement the known functions (F1, ..., Fm), and becomes able to take advantage of the platform's known values. In this way, knowledge Kdocumentation enhances the knowledge of the AD user (Kdocumentation -> KAD_user).
So far, we have looked at forms of training in which Ktraining is added to the knowledge base of the stakeholders for whom the training is intended (Ktraining -> Kdesigners; Kdocumentation -> KED_user; Kdocumentation -> KAD_user). These are forms of ‘training in the known’ that do not trigger the exploration of the initially unknown characteristics of the platform (i.e. the exploration of the question marks in Figure 1).
For example, as far as we know, the purpose of written user guides attached to EDs is not to deal with 'anomalous' situations in which, despite following the recommendations of the guide, the scientist's experiment gives unexpected results compared to his or her initial hypotheses. This type of situation is interesting because it generates learning for both the scientific user and the ED designer: for the ED designer, it is the sign of unknown control parameters that have not been taken into account so far; for the scientific user, it is a new exploration path. Similarly, to the best of our knowledge, the purpose of written user guides accompanying ADs is not to help the user invent new, initially unknown functions Fm+1, Fm+2, … which would represent learning in the unknown for both the AD designer and the AD user. In the following subsections, we discuss forms of training that seem to allow such 'learning in the unknown'.
2.2.2. Developing new variants of the device that are educational (knowledge-oriented application kits)
In the literature, we find forms of training aimed at AD users which do not seek to describe the AD by documenting it, but which consist of a reworking of the device architecture by the designers: the designers explore new platform architectures called kits (or even families of kits).
For example, Mody (2006) describes kits made of minimal standard technological components from the initial platform. These components are supplied to the user as individual parts: the user therefore learns how to reproduce an architecture designed by the designers, by assembling the individual components himself. Then, on this basis, he can add his own technological components, which will perform new functions specific to his application environment(s). In this way, the user designs his own device using the kit: part of his design is guided by the kit; another part is an autonomous exploration, specific to his context. Marcovich and Shinn (2011) describe another type of kit: here, the user is not included as a designer. The designers of the original device restructure the architecture to generate a generic core that is accessible to non-specialists. And this core can be extended with additional components that can be adapted to the context of the user, with a system of "translators" that manages the variety of needs of user environments. These kits are interesting because the Kkit knowledge they provide (particularly in Mody's case) is not just added to the user's knowledge base: Kkit makes it possible for the user and/or the designer to explore new, initially unknown technological bricks Bn+1, Bn+2, … and their associated functions.
2.3. Forms of training that gather designers, AD users and ED users
Research work on instrument research and instrumental communities (Mody, 2006; Marcovich and Shinn, 2011) has shown that the successful development of subsequent derivatives of an emerging platform (especially variants with a commercial AD orientation) depends on the ability to weave new interactions and new exchanges at an intense pace, especially exchanges that cross disciplinary boundaries: involving newcomers, new institutions, etc. as users and designers. Such exchanges (a kind of companionship across institutional boundaries, in particular science-industry boundaries) are a form of "training", where all actors are simultaneously knowledge providers and learners. These exchanges can take various forms: short informal visits to laboratories or companies, collaborative projects involving heterogeneous actors, professional moves to other institutions (Marcovich and Shinn, 2006; Roqué, 2001). Marcovich and Shinn (2006) explain that such forms of knowledge sharing can open new exploration paths: problem reformulation, design of new derivatives of the device, reaching new industrial or academic spheres, etc. Similarly, there are reports of seminars called 'training' where the exchange of knowledge is closer to bi-directional cross-industry exchanges aimed at opening up new avenues of exploration than to unidirectional transfers of knowledge from a knowledgeable actor to a learner: for example, at the beginning of the structuring of the spatial oceanography ecosystem, at a time when it was not entirely clear what value spatial technologies could bring to the disciplines and industrial sectors based on oceanography, training sessions gathering scientists and engineers from several academic disciplines and industrial sectors played an important role (Le Pellec-Dairon, 2013).
2.4. Synthesis and research questions
This literature review reveals a variety of forms of training that can be associated with a technology platform some features of which are still unknown: courses in scientific and technological fields; documentation and user guides; kits; knowledge exchanges across disciplinary boundaries. In a number of situations, training appears to be an essential condition for the future of the AD or ED. In other words, an absence of training can be a problem: for example, a lack of skilled workforce in ASIC design, or an inability to get to grips with a device if there is no user guide at all. Among the forms of training that we have highlighted, we have distinguished between those that simply increase an actor's knowledge base in the known, and those that open up new exploration paths in the unknown, for the receiver of the training, but also possibly for the provider of the training. In this respect, kits and cross-disciplinary knowledge exchanges are particularly interesting. In this context, where training appears to be one lever among others, indispensable but not necessarily sufficient to stimulate the further life of a device, this article aims to shed light on the following two research questions: RQ1. Is it possible to find an example where training itself has played a critical and major role in the evolution of a newly designed technology platform? If the answer to the first question is yes: RQ2. What are the characteristics of this training?
3. Material and method
To explore these issues, we used the case of the SystemX research institute, which since 2020 has been developing training programmes that promote in academic and industrial spheres the results of its collaborative research projects (notably its technological devices). SystemX has launched a number of initiatives: co-construction of thematic training courses that join the training catalogues of the executive education departments of universities or French grandes écoles; internal training related to its scientific and technological domains for its researcher-engineers; a support programme for startups developing AI-based products, etc. This paper focuses on a particular initiative: the development of pedagogical materials designed to help participants in a scientific competition (a data challenge organized by SystemX) get to grips with and effectively use a specific SystemX technology platform. The platform in question, known as Learning Industrial Physical Problems (LIPS), provides a comprehensive benchmarking framework for hybrid AI-physics models through four categories of evaluation criteria: accuracy, physics compliance, industrial readiness and generalization capability (Leyli Abadi et al., 2022). Developed as part of a SystemX project called HSA, LIPS was initially used by the ten or so members of the project team. The competition led to a significant increase in its number of uses: it was used several times by several hundred teams around the world in parallel. This put it to the test and allowed bugs to be corrected. It also increased its visibility and the trust placed in it within the related industrial and scientific spheres. Finally, it opened up significant scientific opportunities for the future life of the platform - all this without any re-injection of investment beyond the already available budget of the HSA project.
Does this striking case represent a 'talking pig' as defined by Siggelkow (2007), suggesting that the a priori deviant approach of using training to promote unfinished objects such as technological platforms is actually a possible, sensible and powerful alternative to the search for new funding? Does this case bring a positive answer to our first research question? To shed light on this, we conducted an in-depth case study (Table 1 summarizes the collected data) in order to understand the mechanism through which the pedagogical material accelerated the development of the platform and opened new opportunities for it. We carried out our investigations by trying to understand the evolution of the architecture of the platform, from the viewpoint of the designers. And we interviewed challenge participants. We analysed the training content (Ktraining) and the effects of its transfer to the challenge participants with C-K theory (Hatchuel and Weil, 2003). Indeed, the distinction between the Concept space (which represents the unknowns, the new avenues of exploration) and the Knowledge space enables us to distinguish between situations where Ktraining simply increases a participant's knowledge base, and situations where Ktraining opens up new avenues of exploration initially unknown to a training recipient. It also enables us to determine whether and how the training has a "feedback effect", in the known and in the unknown, for SystemX and its platform.
Table 1. Summary of the data sources.

4. Results
4.1. Presentation of the case
In 2021, SystemX launched a collaborative research project called HSA (Hybridation Simulation Learning). The project lies at the intersection of two established disciplines: (1) scientific computing, i.e. the discipline that uses physical modelling (thermal and fluid behaviour, etc.) to understand how systems work, particularly complex industrial systems, in order to perform numerical simulations over large ranges of validity, where accuracy and computational time are at stake; and (2) artificial intelligence. In 2021, when the HSA project was launched, the hybridisation between these two established disciplines was a nascent field that had yet to prove itself: within certain historical scientific computing and numerical simulation organizations, some experts were sceptical that AI could improve numerical simulators. In this context, the HSA project, launched in collaboration with industrial and academic partners, aimed to explore two concepts (shown in Figure 3) related to hybrid AI: (C0.1) developing digital simulations using AI that would show good performance in terms of accuracy and computation time; to this end, the project explored a variety of algorithms in industrial use cases (aeronautics, energy, etc.). In addition, with the aim of creating synergies between the different sectoral applications, the project aimed (C0.2) to develop a platform (a benchmarking suite) for comparing and evaluating the algorithms developed. The first version of this platform, called LIPS (Learning Industrial Physical Problems), was developed during the first year of the project (Leyli Abadi et al., 2022). It was initially used by the ten or so members of the project team, as an Experimental Device, in a context where the issue of evaluating AI algorithms in general was underpinned by numerous research questions. In this context, a benchmarking platform like LIPS was new to the state of the art.
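To make the four evaluation categories concrete, the sketch below shows how a benchmarking suite in the spirit of LIPS might score a surrogate model along accuracy, physics compliance, industrial readiness and generalization. It is a hypothetical illustration only: the function name, the toy physics check and the 10x speed-up saturation threshold are our assumptions, not the actual LIPS implementation.

```python
# Hypothetical sketch of a four-category benchmark evaluation, in the spirit of
# LIPS (Leyli Abadi et al., 2022). All names and criteria are illustrative.

def evaluate_surrogate(predictions, ground_truth, inference_time, reference_time):
    """Score a surrogate model along four LIPS-style categories, each in [0, 1]."""
    n = len(predictions)
    # Accuracy: mean absolute error against the physical solver, mapped to [0, 1].
    mae = sum(abs(p - g) for p, g in zip(predictions, ground_truth)) / n
    accuracy = max(0.0, 1.0 - mae)
    # Physics compliance: a toy constraint check (here: outputs must be non-negative).
    physics = sum(1 for p in predictions if p >= 0) / n
    # Industrial readiness: speed-up over the reference solver, saturating at 10x.
    speedup = reference_time / inference_time
    readiness = min(1.0, speedup / 10.0)
    # Generalization: would be measured on out-of-distribution cases; proxied here.
    generalization = accuracy
    return {"accuracy": accuracy, "physics": physics,
            "industrial_readiness": readiness, "generalization": generalization}
```

A participant's algorithm would then be characterised not by a single number but by this vector of category scores, which is what allows a benchmark to reveal, for instance, a fast but physics-violating model.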
In 2023, the HSA project team submitted a proposal to the Datasets and Benchmarks track of the NeurIPS conference (Conference on Neural Information Processing Systems), a prestigious conference in the field of AI, to organise a scientific challenge in which the candidate algorithms would be evaluated by the LIPS platform (note that data challenges are a common feature of NeurIPS: in 2024, 16 data challenges were organized). The SystemX team prepared a proposal for a challenge involving three different use cases and planned to evaluate the candidates' proposals in terms of their potential for generalisation across the three use cases and beyond. This proposal was rejected by the conference, with a set of reviews explaining that it was too sophisticated and difficult to access. On the basis of the reviews, the SystemX team decided to rework the project and launch the challenge on its own. The competition was reduced to a single use case (optimization of the profile of an airfoil). And a starter kit, which can be seen as training material associated with LIPS, was developed. The challenge took place between November 2023 and February 2024: several hundred teams used the 'pedagogical' version of LIPS as an AD, to regularly evaluate the algorithms they were training. The effect was spectacular in terms of the intensity of use of the platform: 126 participants (including academic and industrial actors, some representing teams (possibly joint industry-academia teams), others participating as individuals) took part in the challenge, and 1165 submissions were received and evaluated by the platform to determine the winners - not counting all the evaluations of intermediate solutions that were not submitted. This increase in the intensity of use of the platform meant that it was put under stress: bugs were identified and corrected.
In addition, this helped to enhance the reputation of the platform, increase confidence in it and strengthen its legitimacy in the world of AI, as well as opening up new opportunities. Moreover, the analysis of the winning solutions from the challenge raised new research questions about the hybridisation between digital simulators and AI, and about their evaluation. This justified the interest in launching follow-up challenges mobilizing LIPS: a challenge on the original airfoil profile use case, but with more demanding evaluation conditions under which the score of the winning algorithm from the previous competition was severely downgraded (this challenge was submitted to the NeurIPS 2024 conference and accepted (Yagoubi et al., 2024)); and a challenge focused on a power-grid management use case provided by the French electricity transmission system operator, RTE. These challenges ran from July to December 2024 and from April to September 2024 respectively. The winners of both competitions are now collaborating on collective papers, aiming to formalize and consolidate the scientific findings revealed by the sum of their solutions.

Figure 3. Structure of the training material from SystemX viewpoint, modelled with C-K theory
4.2. Analysis of the exchanged knowledge in the known and in the unknown
4.2.1. Training material Ktraining: an AD version of LIPS and guidelines
As Figure 3 below illustrates, the initially developed LIPS platform (instantiated in the Suh matrix-based model introduced in Figure 1) has a modular architecture. This architecture was only slightly modified for the needs of the challenge (the version included in Ktraining adds a function for calculating an aggregated score for the evaluated algorithm). To allow participants to use this platform as an AD, the SystemX design team did a significant amount of work developing a getting-started kit: preparing the data, and developing the Jupyter notebooks that help participants create their working environment and guide them in rapidly reproducing the results of the baseline on a subpart of the data set. Webinars were organized to explain the use case, showcase concrete examples of method implementation, and guide participants on how to submit their solutions. And a discussion channel was set up to maintain communication between LIPS designers and participants during the challenge, so that problems could be debugged. Although the effort of developing such training material Ktraining is smaller than investing in a new project centred around the platform and aimed at (re)designing new features for it, our interviews showed that this effort is significant in terms of FTE (full-time equivalents). On the other hand, having done it once greatly reduces the effort for subsequent challenges.
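The aggregated-score function mentioned above can be sketched as follows. The weighting scheme and category names are illustrative assumptions on our part, since the actual LIPS aggregation formula is not detailed here; the point is simply that per-category scores are collapsed into a single leaderboard number for the challenge.

```python
# Hypothetical sketch of the aggregated-score function added to LIPS for the
# challenge. Category names and weights are illustrative assumptions.

def aggregate_score(category_scores, weights=None):
    """Weighted average of per-category scores (each assumed to lie in [0, 1])."""
    if weights is None:
        weights = {cat: 1.0 for cat in category_scores}  # equal weighting by default
    total_weight = sum(weights[cat] for cat in category_scores)
    return sum(category_scores[cat] * weights[cat]
               for cat in category_scores) / total_weight

# Example: a submission strong on readiness but weaker on generalization.
leaderboard_score = aggregate_score({"accuracy": 0.9, "physics": 0.8,
                                     "industrial_readiness": 1.0,
                                     "generalization": 0.7})
```

Such a single number is what makes a leaderboard (and hence a competition) possible, at the cost of hiding the per-category trade-offs that the full benchmark exposes.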
4.2.2. First knowledge flow: reception of Ktraining by the participants
Participants explain that the starter kit was essential for creating their work environment; yet a non-negligible part of self-exploration and self-learning remained. From a scientific point of view, for actors who were already exploring hybrid AI models in their organisation, receiving Ktraining was mainly learning in the known: the platform neither opened up major surprises nor particularly new exploration paths for them (i.e., Ktraining -> Kparticipants). Using LIPS did not generate any major change in participants' scientific exploration strategy: they continued to explore models in which they were already interested and which they were trying to promote in their organisation. Yet, as illustrated in Figure 4 below, the score of the baseline solution gave participants a design objective, and it enabled some participants to detect the shortcomings of their method and improve it significantly. Such improvements contributed to amplifying the new perspectives that LIPS opened up in terms of legitimacy within participants' respective internal organisations. From this second viewpoint, the Ktraining relating to LIPS has a game-changing effect, in the unknown, for participants: as mentioned above, the explorations conducted by actors seeking to augment physical simulators with AI are not always seen as legitimate, particularly by certain experts in scientific computing; or else, in a context where many AI models exist, some participants are personally convinced of the relevance of models different from those that their organisation's strategy promotes. So, as the C-K scheme below illustrates, the opportunity for a participant to have the model they are working on evaluated by an independent 'body' (LIPS), which evaluates and confronts a wide range of international competitors on a 'large scale', is game-changing, in particular for the laureates. This is all the more powerful when the algorithm's performance has been improved during the challenge.
This mechanism took place precisely for one of the laureates, who demonstrated within his firm the value of a model he had long believed to be relevant. In this respect, Ktraining on LIPS represents independent knowledge for each participant, which cannot be deduced from their own knowledge (the participants did not have the means to evaluate their models themselves and compare them externally).

Figure 4. Modelling the reception of Ktraining by participants with C-K theory
4.2.3. Second knowledge flow: reception of the participants’ solutions by SystemX
The reception of the algorithms submitted by the participants had a very important learning impact for the LIPS designers. Firstly, the scores of the algorithms submitted and objectively evaluated by LIPS demonstrated the relevance of continuing to explore the field of hybrid AI (which was not proven at the start of the HSA project in 2021). Secondly, the designers of the platform were surprised to find that many of the currently explored, ‘trendy’ AI models were outperformed by a more traditional algorithm that relied heavily on physical models. This represented ‘independent knowledge’ for SystemX and raised new research questions in the AI community, justifying the organization of follow-up challenges (mentioned in Section 3). Thirdly, the LIPS designers discovered the highly ‘use-case-specific’ nature of the solutions proposed by the participants: none of the submitted algorithms proved generic enough to be applicable to other use cases. Thus, developing new expertise for making hybrid AI models adaptable to several use cases emerged as indispensable for further structuring the field of hybrid AI in the future.
Interestingly, the ‘learning in the unknown’ for SystemX is not so much related to LIPS itself: this learning did not contribute to the emergence of a deeply restructuring feature in LIPS (no major question mark in Figure 1 was designed into a known feature, only minor ones; there was no significant architecture rework). But the training has helped to significantly increase the value of the initially known features of LIPS: it is now recognized that the LIPS features helped improve and legitimize hybrid AI algorithms. They also highlighted key research questions and important future development directions for hybrid AI. And in the now more legitimate field of hybrid AI, LIPS holds a significant place.
5. Conclusion
This case illustrates that training can play a major role in the evolution of a newly designed technology platform (R1): the construction of Ktraining stimulated the further life of LIPS without injecting new funds. This confirms that training can be an alternative to funding, which is particularly interesting when funding is unavailable. Future research could explore additional cases of training programs linked to technological platforms, beyond SystemX, to gather further empirical evidence and refine our understanding of the phenomenon. Regarding how training played a role in stimulating the evolution of LIPS (R2): our analysis shows that the training on the LIPS platform was a component of a broader knowledge exchange system between the SystemX designers, who wanted to structure the nascent field of hybrid AI, and the individual participants, who wanted to improve and promote their respective AI algorithms of interest within their organizations. The Ktraining related to LIPS initiated an exchange of independent knowledge between SystemX and the participants. This type of knowledge exchange has been highlighted and modelled in research dedicated to “double impact science-industry collaborations” (Reference Plantec, Le Masson and WeilPlantec et al., 2024): it is the formal condition for the simultaneous generation of an industrial impact and an academic impact (i.e., double impact) in a collaboration between science and industry. Following the avenues opened up by research on “double impact”, we can call the training related to LIPS a “double impact training”, which opened up new exploration paths in the unknown simultaneously for SystemX and the challenge participants. As exchanging independent knowledge is a sophisticated and demanding condition to fulfil, further research will be necessary to characterise this ‘double impact training’ model in more detail and to explore methods for constructing such training programs.