1. Introduction
1.1. Motivation
Systems Engineering (SE), as an interdisciplinary design discipline at the system level, is strongly affected by the development of AI technologies across industries. The use of AI in engineering processes and its utilization as a component of AI-based systems creates added value; especially for Generative AI (GenAI), research and development for new products is seen as one major field for value creation (Chui et al., 2023). Therefore, many companies are currently focusing on the use of GenAI. In this publication, GenAI is considered a subset of Deep Learning (DL) and Machine Learning (ML) that enables the generation of new and realistic content from unstructured input such as text, images or audio (Figure 1).

Figure 1. Simplified Venn diagram to distinguish considered AI-domains
General-purpose examples of GenAI are ChatGPT and DALL-E, while specialized commercial applications might assist in customer communication to answer queries (Garnitz and Schaller, 2023) or support sales and marketing activities (Kohne et al., 2020). As part of AI-based systems, data analysis by ML can lead to decision-making (Höck et al., 2024; Kläs and Jöckel, 2020) - e.g. in autonomous vehicles, AI-powered drones or industrial robots. However, this entails corresponding risks, and companies are therefore forced to ensure compliance with regulations, future legislation and applicable data security rules.
Where AI components are part of products, companies face serious uncertainties, as many laws have not yet been finalized or remain ambiguous in certain details. In particular, the EU AI Act (European Parliament, 2024) affects companies bringing products into the European market. If design teams are, for example, developing AI-based systems with facial recognition algorithms, they must consider whether it is permissible at all to develop such systems for non-security authorities, and they must clarify which data is available for training and testing. In a classical design team and with existing role models in SE, these questions cannot be answered easily and result in high economic risks. Even irrespective of laws and regulations, the use of ML/DL methods poses a challenge in safety-critical systems (Awadid et al., 2024). Additionally, ethical issues might need careful consideration in the development phase. Human factors also play a decisive role: to achieve a high level of user acceptance, a focus must be placed on User Experience (UX) to avoid users’ rejection (Heinrich and Bleisinger, 2023).
AI-based systems require greater attention to data security and user acceptance than classical data-driven systems due to the non-deterministic nature of most ML algorithms, the lack of transparency and the potential societal impact, especially in high-stakes applications like healthcare. For these reasons, it can be assumed that interdisciplinarity will become more critical for development and market success due to the high degree of uncertainty as well as technical and economic risks (Awadid et al., 2024). Especially for SE, an adaptation of the typical design team and the corresponding roles is necessary. Traditional roles at system level such as Requirements Engineer, Systems Engineer and Systems Architect need to adapt by integrating AI-based competencies and basic knowledge of AI. In projects developing AI-based systems, new roles such as Data Scientists, ML Engineers, Data Officers, Acceptance Managers as well as Legal and Ethical Advisors will be necessary.
1.2. Research scope
This paper presents challenges and perspectives on the setup of design teams in SE when developing AI-based systems as defined in section 2.1. The research approach is summarized in Figure 2. A first classical attempt to conduct a structured literature review using the keywords Role Model, Systems Engineering and/or Product Development together with Artificial Intelligence identified no useful results regarding Systems Engineering role models for AI-based systems.

Figure 2. Research approach for Systems Engineering of AI-based systems, taking activities of the Cross-Industry Standard Process for Data Mining (CRISP-DM) and the PAISE procedure model into account
For this reason, two databases were searched with a reduced subset of the above-mentioned keywords (1st literature review), and sources cited in relevant publications were also investigated. Mainly role models focused on classical SE (without AI) were found. Based on these unexpected results regarding SE for AI-based systems, additional interviews with practitioners from the European SE community were performed. This led to the identification of 12 role models for SE. The informal interviews were performed with around 20 representatives of industry and academia using the following key questions:
- Which role models are known to you for Systems Engineering?
- If applicable: Which role model does your organisation follow for Systems Engineering?
- Are there any role models you consider suitable for the development of AI-based systems?
At the time, there were no specific criteria for exclusion or inclusion of the responses in the analyses, and all regularly mentioned role models (more than 3 responses) were investigated. For the sake of clarity and brevity, only the most important findings are presented in this paper. Therefore, the 4 most frequently mentioned role models as well as 1 role model related to SE for AI were chosen for presentation as the result of the 2nd literature review (see also Figure 2). The overall research approach for this paper additionally includes comparative analyses of role models, gap identification through reflective practices on past projects, structuring schemes as a creativity technique and, finally, conclusions in the form of theses. For the reflective practices, the practical experience of the authors from multiple Software and Systems Engineering projects as well as the published work of further external authors are used as reference. Supplementary results from field observation and reflective practices of 24 student design teams (∼200 students), who have written reflective reports as part of their engineering education at a German university, are integrated, although it must be stated that these have only been analysed in an unstructured manner at the time of this publication. The final contribution of the presented research is the set of theses for the future collaboration and team setup for system design of AI-based systems. The theses as well as the role model described by Figure 4 in this paper are still considered work in progress at the time of this publication. The following research questions are targeted by the presented research results:
1. How do role models for Systems Engineering support engineering of AI-based systems?
2. Which competencies for future design teams for AI-based systems are highly relevant?
3. Which roles and responsibilities for a suitable team setup for design of AI-based systems exist?
4. How is the collaboration within the design team impacted when developing AI-based systems?
2. State of the art
2.1. Fundamentals
To clarify the relevance of related work and the research scope, the terms used in this publication should be briefly defined. An AI-based system is a technical system composed of an assembly of multiple hardware and software components, defined by a mission and separated by a system boundary from its system environment (Sillitto et al., 2019). The system is considered to have emergent behaviour according to the theory of complex systems (Stefan et al., 2018), whereby the overall system behaviour (functionality and performance) is highly dependent on software components using algorithms from the AI disciplines ML, DL or GenAI (Awadid et al., 2024) according to Figure 1. Systems Engineering is an interdisciplinary and integrating approach for the realization, operation and decommissioning of technical systems by utilization of system principles and concepts as well as scientific, technological and management methods (Sillitto et al., 2019; Walden et al., 2023). In the context of this paper, we focus on the early stages of product development of AI-based systems, namely the system design before the development of electronics, software and mechanical components (Figure 3). A design team in this paper is considered a cross-disciplinary group of professional individuals who are focused on the engineering of products and/or systems. In addition, the definition of a team from leadership theory is applied, e.g. according to Yukl (2010) or Kotter. This implies that a design team has a shared goal, namely the development of a technical or socio-technical solution for specific users. Therefore, a design team can follow a role model defining roles and responsibilities. For the presented research, a condensed definition of role models in terms of the RACI-matrix (Responsibility, Accountability, Consultation, Information) is preferred, since this is a practical approach often utilized in industrial engineering practice and used for the student design teams mentioned in section 1.2.
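To illustrate the RACI notion used here, the following minimal sketch shows how such an assignment could be captured and checked in code. The tasks, roles and letters are hypothetical examples, not the role model proposed in this paper; the consistency check only verifies that each task has exactly one accountable role.

```python
# Minimal sketch of a RACI assignment for a design team (hypothetical tasks and
# roles, not the model proposed in this paper). Each task maps roles to one of
# the letters R, A, C or I; every task should have exactly one accountable role.

RACI = {
    "Define data quality requirements": {
        "Requirements Engineer": "A",
        "Data Engineer": "R",
        "Legal Risk Officer": "C",
        "Systems Architect": "I",
    },
    "Assess legal risks of chosen ML approach": {
        "Legal Risk Officer": "A",
        "ML Expert": "C",
        "Project Manager": "I",
    },
}

def check_raci(matrix: dict) -> None:
    """Warn if a task has no or more than one accountable (A) role."""
    for task, assignments in matrix.items():
        accountable = [role for role, letter in assignments.items() if letter == "A"]
        if len(accountable) != 1:
            print(f"Task '{task}' has {len(accountable)} accountable roles (expected 1).")

check_raci(RACI)
```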

Figure 3. Scoping of early stages of design for AI-based systems along the V-model (in red)
2.2. Related work
With the basic terms defined, it should be noted that research and existing standards concerning role models for design teams in the field of Systems Engineering of AI-based systems are considered related work. Because there is barely any work specifically focused on the design of AI-based systems, the current state for non-AI-based systems is also considered related work. For the sake of clarity and brevity, only the most important findings are presented in the following, and a comparative analysis is shown in Table 1.
The “Twelve Roles in Systems Engineering” concept (Sheard, 1996) serves as a critical foundation for this work, as all these roles are involved in the development process of AI-based systems. However, the scope of some roles must be expanded and further roles added. This concept identifies the collaboration of the following roles as essential for the development of complex systems:
Requirements Owner, System Designer, Systems Analyst, Customer Interface, Technical Manager, Information Manager, Validation/Verification Engineer, Logistics/Ops Engineer, Process Engineer, Glue Among Subsystems, Coordinator, Classified Ads Systems Engineer. These roles must work closely together to understand and implement the requirements, ensure that the system is stable, scalable, and functional, and minimize risks. However, this is not sufficient for the development of systems involving AI, making it necessary to adapt and expand this role model, despite its solid foundational structure.
The model described in (Walden et al., 2023) aligns with the “Twelve Roles in Systems Engineering” concept and addresses the development of complex systems. However, its focus is more on stakeholders and system boundaries, whereas the “Twelve Roles in Systems Engineering” concept has a more technical emphasis, placing SE at its core. In general, this concept is less suitable for adaptation to AI involvement. However, since it covers the entire lifecycle - an essential aspect for AI-based systems - and not just development and integration, a combination of both concepts is necessary to define and specify teamwork in the development process of AI-based systems.
Furthermore, there is the role model proposed in (Gräßler and Oleff, 2022). This model is also based on the assumption that complex systems can only be developed if the process is divided into several specialized roles. These roles take on different, but interconnected, responsibilities and tasks. The latter aspect is crucial for the interdisciplinary work required for AI-based systems. The key difference from the other role models is that Gräßler and Oleff expand them by adding interdisciplinarity, a focus on the lifecycle, and emergence, which, as already mentioned in the context of the “Theory of Complex Systems” (Stefan et al., 2018), is a fundamental factor for the development of AI-based systems.
The “Advanced Systems Engineering” (ASE) role model (Dumitrescu et al., 2021) also describes necessary roles in system development. It is mainly applied in the development of complex technical systems and is relevant in the context of interdisciplinary teams, which is why it also serves as a basis for the considerations presented in this paper. It emphasizes the necessity of an interdisciplinary approach and a clear role distribution to optimize communication and collaboration within a team. Its main advantage is its adaptability, which is important due to the rapid changes in AI technologies and legislation. Furthermore, the model already takes some requirements for AI-based systems into account, since it focuses on the design of so-called advanced systems.
Finally, the System Engineering Framework (SAF) model (Andres et al., 2019) should be mentioned. It was developed to define and structure roles and responsibilities in the development process of complex systems. SAF focuses on teamwork and the various disciplines necessary to successfully develop and integrate a system. It is closely linked to the principles of SE and ensures that all relevant aspects of a project are covered, while maintaining the possibility of tailoring to the specific needs of an organisation. It is frequently used in SE projects and has been adapted by multiple companies in the German automotive industry. However, its roles and definitions are not adapted to include AI involvement. The distinguishing feature of this model, and the reason it is also considered here, is that it already integrates risk management, which is essential for the development of AI-based systems. Table 1 summarizes the differences between these 5 models, whereby roles in the different models were mapped for the sake of clarity.
Table 1. Comparative analysis of role models for Systems Engineering including mappings

When applying a mapping of role names in the different models and taking a closer look at Table 1, it is apparent that there is basically little difference between the role models, since all of the mentioned models are focused on SE. Therefore, some of the main roles will be taken into account for the derivation of a suitable role model for the setup of design teams for the engineering of AI-based systems. One major takeaway from the comparative analysis is the lack of specific roles from disciplines tackling the demands mentioned in section 1, e.g. explicit competencies with AI algorithms or legal issues. Section 3 will address this issue by discussing challenges specific to SE of AI-based systems.
3. Assumptions for the design team
3.1. Challenges for AI-based systems
As motivated in section 1 of this contribution, the development of AI-based systems is assumed to require extensive interdisciplinary collaboration, involving a wide range of diverse fields and expertise. The entire development process can thus be viewed as a complex socio-technical system in its own right. In this context, reference is made to the “Theory of Complex Systems” (Stefan et al., 2018). This theory examines systems composed of many interacting components, which lead to emergent properties that cannot be derived directly from the characteristics of the individual parts. For the design team this means that each role in the development process of AI-based systems interacts with at least one other role. Due to the strong interdependencies among the various roles, changes in the workflow of one role can significantly impact the outcomes of other areas. Therefore, this perspective on the design team is applied in this work. Taking these assumptions into account, technical, legal and organizational challenges impact the dynamics and collaboration of the team. In the following, challenges for the design of AI-based systems are described before requirements for collaborative development in the design team are derived in section 3.2.
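One way to make these interdependencies tangible is to model them as a directed graph of roles and to trace which roles are transitively affected by a change. The sketch below is purely illustrative; the roles and dependency edges are hypothetical assumptions, not part of the cited theory or of the role model proposed later in this paper.

```python
# Illustrative sketch: roles as nodes of a directed graph, edges meaning
# "a change in this role's work products affects that role". A simple
# breadth-first traversal shows how a change can propagate through the team.

from collections import deque

# Hypothetical dependency edges between roles
IMPACTS = {
    "Legal Risk Officer": ["Data Engineer", "Requirements Engineer"],
    "Data Engineer": ["Data Scientist", "Systems Architect"],
    "Data Scientist": ["ML Expert"],
    "Requirements Engineer": ["Systems Architect"],
    "Systems Architect": ["Test Engineer"],
    "ML Expert": [],
    "Test Engineer": [],
}

def affected_roles(start: str) -> set:
    """Return all roles transitively affected by a change originating at `start`."""
    seen, queue = set(), deque([start])
    while queue:
        role = queue.popleft()
        for downstream in IMPACTS.get(role, []):
            if downstream not in seen:
                seen.add(downstream)
                queue.append(downstream)
    return seen

print(affected_roles("Legal Risk Officer"))
# e.g. {'Data Engineer', 'Requirements Engineer', 'Data Scientist', ...}
```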
At this point it should be noted that there is quite a lot of related work regarding development procedures for ML algorithms or applications, which does not take hardware, or rather an overall view of AI-based systems, into account (as defined in section 2.1). Still, it is rewarding to closely investigate some significant standards, which provide insight into challenges in the development of the software components of the system and thus into the impact on the overall system design. In line with research questions 2 and 3 of section 1.2, the following standards and procedures/publications were screened and investigated regarding challenges in the development of AI/ML algorithms: ISO 5338, ISO 15288, ARP-6983, the SE Handbook, the Safety Lifecycle for Neural Networks (Kurd and Kelly, 2003), AI Systems Engineering (Pfrommer et al., 2022), PAISE (Hasterok et al., 2022) and CRISP-DM (Wirth et al., 2000).
Since these works do not focus on role models, they are not considered related work according to section 2.2, but they should be briefly summarized as follows. It is often highlighted that there is a need to adapt existing processes and standards to integrate AI components in systems development. CRISP-DM (Cross-Industry Standard Process for Data Mining) as well as classical Software Engineering and SE procedures are not suitable to achieve this goal. Emerging standards are not yet finalized, and existing standards are not adapted to new topics such as GenAI or advanced ML algorithms in AI-based systems. SE and AI/ML engineering remain separate disciplines with minimal collaboration. From the further findings, especially from the CRISP-DM model, the following challenges for AI-based systems can be summarized:
Data quality is a driver for the success of ML components, since low accuracy of AI predictions might lead to unintended behaviour in complex systems. Thus, not only the quality of the data but also data understanding is a crucial concern for the design team, as is data preparation, since incorrect preparation can have the same effect as insufficient data quality. Predictions or even decisions by ML components have a high chance of leading to ethical issues. Thus, AI-based systems must be engineered with ethical questions in mind, and regular ethical advice regarding the impact of the engineered system in operation is crucial. The same is true for the consideration of user acceptance. As user-centric design is becoming common for commercial products, the management of acceptance should be considered an important development goal besides performance and functionality. To estimate future acceptance in the operational phase of AI-based systems, a special focus should be put on the user experience (UX). One distinct factor for UX is data protection, which poses another challenge for AI-based systems and needs to be assured while considering legal risks which might impact the technical solution.
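As a concrete illustration of how such data-quality and data-understanding concerns could be operationalized early in a project, the following sketch shows a simple data-quality gate before model training. The column names, thresholds and checks are hypothetical assumptions; in a real project they would be derived from the system requirements.

```python
# Minimal sketch of a data-quality gate before model training (hypothetical
# thresholds and column names). Uses pandas only for basic profiling.

import pandas as pd

def quality_gate(df: pd.DataFrame, label_col: str,
                 max_missing: float = 0.05, max_imbalance: float = 0.9) -> list:
    """Return a list of human-readable findings; an empty list means the gate passes."""
    findings = []
    missing_ratio = df.isna().mean().max()          # worst per-column missing ratio
    if missing_ratio > max_missing:
        findings.append(f"Missing values up to {missing_ratio:.1%} exceed {max_missing:.0%}.")
    if df.duplicated().any():
        findings.append(f"{df.duplicated().sum()} duplicate rows found.")
    majority_share = df[label_col].value_counts(normalize=True).max()
    if majority_share > max_imbalance:
        findings.append(f"Majority class covers {majority_share:.1%} of samples.")
    return findings

# Example usage with a toy dataset
data = pd.DataFrame({"feature": [1.0, 2.0, None, 4.0], "label": [0, 0, 0, 1]})
print(quality_gate(data, label_col="label"))
```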
3.2. Requirements for Systems Engineering
To tackle some of the challenges mentioned above, it should once again be remembered that the design team is itself considered a complex system, and the roles of the team members will influence each other. For instance, a new legal framework defined in the team would not only potentially influence the consideration of possible ethical and data protection issues but might also have a direct impact on data preparation. Changing the data preparation approach might lead to a technical solution in which the dataset would need to be collected and prepared differently than originally planned by the design team. An isolated view of the roles would fail to explain these behavioural dynamics.
For SE and the existing roles mentioned in section 2.2, this means the Requirements Engineer will need basic knowledge of data quality needs and might require a basic data understanding to resolve conflicting requirements while performing requirements inspection. Additionally, a Requirements Engineer who is not aware of legal risks will not be able to assess whether the customer or system requirements specification is complete. It might not be the responsibility of the Requirements Engineer to provide legal requirements, but especially for AI-based systems it is a threat for the design team to fail a development project because a non-compliant technical solution was selected. Since the EU AI Act, for example, by default forbids the use of real-time face recognition by AI in consumer products, this can lead to the failure of the development project. Knowing this as a Requirements Engineer is a matter of awareness, leading to an understanding of the need for legal advice at a specific maturity stage of the project.
Complementary to the change of specific roles, new roles in design teams will become relevant. For example, the Legal Advisor is crucial in larger or riskier projects involving AI-based systems, since it would take too long to have an external project partner (e.g. a lawyer) assess legal risks while development continues in the meantime. The same holds true for the consideration of the user experience and the management of acceptance of the future product. In summary, the following competencies are considered relevant for design teams, based on the literature review as well as the practical experience of the authors from projects and from the education of undergraduate engineering students:
- basic knowledge on data processing, cleaning, storage as well as data quality issues
- advanced knowledge and proficiency in the implementation of relevant AI procedures
- advanced competencies to (objectively) assess/predict user acceptance in the operational phase
- basic experience with data security and/or privacy techniques and methods
- basic legal education in risks concerning the application of relevant AI procedures
- basic ability to address ethical issues in a structured way and awareness of current ethical discussions
4. Future team setup
Based on the requirements and challenges in section 3 and using the role models in section 2 as a basis, a vision for a team setup is proposed. Since the proposed design team model is still subject to evaluation in engineering projects, the suggestions are visualized as a RACI-matrix in favour of brevity. In Figure 4, only tasks are considered that are rather uncommon and not yet addressed in existing SE role models. Additionally, it should be noted that, as is true for all role models, not every one of the 14 roles must be filled by a distinct person. For the RACI-matrix, some exemplary insights related to uncommon roles in design teams should be mentioned.
First, it might not be clear how the role descriptions of the Data Engineer and the Data Scientist differ. As per common definitions, a Data Scientist is focused on the analysis of data and the development of models to support the decisions of algorithms. Typical tools are the languages R and Python as well as ML libraries and frameworks for visualization. In contrast, a Data Engineer is considered a professional who focuses on the acquisition and provision of data by means of pipelines or the software and hardware components needed to gather data, e.g. sensor systems and infrastructure such as data storage. In the design team she/he will be more concerned with database management and Software Engineering techniques, e.g. to provide architectural solutions that ensure data privacy or security. These two roles tackle the challenges in section 3.1 regarding data quality, data understanding and data preparation and address basic demands mentioned in section 3.2, e.g. the need for basic knowledge on data processing, cleaning, storage and data quality.
In the same manner, the roles of Legal Risk Officer and Acceptance Manager address legal risks as well as issues regarding the UX and low acceptance of the AI-based system, which is a major economic threat to the success of the development. Here, the Legal Risk Officer is considered to have a strong background and education in law while not neglecting basic knowledge of information systems or technology. This profile is becoming more common, since dedicated university courses focus on closing the gap between law and IT and provide an attractive path to close involvement in complex development projects as part of the design team. It is important to note that the direct integration of legal specialists may seem unusual compared to classical design teams, but it has about the same impact on development costs as the Requirements Engineer. It is common design knowledge that erroneous development and wrong decisions in early design phases lead to highly multiplied costs in later design phases (Ehrlenspiel et al., 2007). This is not only true for erroneous requirements, but also for the selection of algorithms and techniques that cause legal risks and must be abandoned in later development phases if they are detected too late. The same argument holds for neglecting the acceptance of AI-based systems. Therefore, the Acceptance Manager role is mostly filled by computer scientists specialised in human-machine interaction or product designers with a strong proficiency in user-centred design. The main goal of the Acceptance Manager is to objectively describe the estimated user acceptance of the final product in early design stages using best practices from research and development, including scoring models/metrics.

Figure 4. RACI-Matrix excerpt (R=Responsibility, A=Accountability, C=Consultation, I=Information)
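To make the idea of such acceptance scoring models concrete, the following sketch shows a simple weighted acceptance score over UX-related criteria. The criteria, weights and rating scale are hypothetical assumptions for illustration; the paper does not prescribe a specific metric.

```python
# Hedged sketch of a weighted acceptance scoring model (hypothetical criteria
# and weights). The Acceptance Manager could aggregate expert ratings per
# UX-related criterion into a single early-stage acceptance estimate.

ACCEPTANCE_CRITERIA = {            # weights should sum to 1.0
    "transparency of AI decisions": 0.3,
    "perceived usefulness": 0.3,
    "ease of use": 0.2,
    "trust in data handling": 0.2,
}

def acceptance_score(ratings: dict) -> float:
    """Weighted mean of ratings on a 1..5 scale; higher means better expected acceptance."""
    return sum(ACCEPTANCE_CRITERIA[c] * ratings[c] for c in ACCEPTANCE_CRITERIA)

# Example: ratings collected in an early design review
ratings = {
    "transparency of AI decisions": 3,
    "perceived usefulness": 4,
    "ease of use": 4,
    "trust in data handling": 2,
}
print(f"Estimated acceptance score: {acceptance_score(ratings):.2f} / 5")
```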
5. Theses for team design
Explaining and understanding the interdependencies within the role model proposed in section 4 of this publication is difficult without working experience with the rather new roles proposed as part of the design team. For this reason, the main impacts on the setup of the design team are condensed into some major theses to provide a more practical view on the collaboration in early system design (according to Figure 3) of AI-based systems. The theses are not exhaustive but are intended to illustrate the importance of considering previously unusual roles as part of the design team. By integrating new roles and extending classical ones, the issues mentioned in section 3 for SE can be overcome. Therefore, the theses describe the impact of the new/extended roles on the team dynamics. The following theses are considered results of the research procedure depicted in Figure 2:
- Integration of legal knowledge in design teams reduces economic risks and narrows the available design space in early project phases for the Systems Architect, Data Engineer and Maintenance Engineer through the definition of legal constraints/risks
- Although the details of the utilized data engineering concepts and applied ML algorithms are defined by the Data Scientist and ML Expert, these decisions impact the tasks of the Systems Architect, who must integrate and harmonize the overall architecture with AI-based components
- While the Legal Risk Officer is responsible for current and future legal risks and accountable for data security compliance, close collaboration with the Risk Manager and Data Engineer is critical to ensure that major design goals to reduce economic risks are achieved
- The code of conduct to tackle ethical issues is defined by the Ethical Advisor, but the Project Manager is accountable and must advocate for compliance with the code of conduct while closely discussing ethical risks with the Risk Manager and Legal Risk Officer
- The Acceptance Manager is accountable for the acceptance scoring/assessment but needs close collaboration with the Requirements Engineer, who performs the acceptance scoring based on requirements relevant to the UX in the operational phase of the AI-based system
- Although data security measures as well as data acquisition and management tasks are the responsibility and accountability of other roles, classical roles such as the Maintenance Engineer and Test Engineer must be involved to ensure long-term maintenance and testability of the solutions
6. Conclusions
6.1. Summary
In summary, this contribution outlined the importance of a cross-disciplinary design team which includes rather unusual roles to tackle the challenges in the development of AI-based systems. Regarding the 1st research question, it was shown that currently established role models do not deliver sufficient support to tackle the challenges and demands mentioned in section 3 regarding the design of AI-based systems. All investigated role models for SE define quite similar roles, which are classically sufficient for conventional system design but do not embed competencies related to AI. Regarding the 2nd research question, competencies for the future design of AI-based systems were derived based on insights from different sources such as ongoing research, standards and expert opinions. By comparing the demands on competencies to design AI-based systems with current role models, it became apparent that classical roles in Systems Engineering are not suitable to fulfil the needs of future design teams. Based on the results of these investigations, for research question 3 a suitable role model was presented in the form of an excerpt of the RACI-matrix. The newly introduced roles were discussed regarding their relevance in future design teams. To show the impact of the new and adapted roles within the design team, theses were finally presented regarding the collaboration and dependencies between the roles in the new model. The theses summarize insights regarding the collaboration between the roles in the development of AI-based systems and thereby provide an answer to research question 4. Overall, this paper presented the results of the research procedure defined in section 1.2 (Figure 2), which includes a literature review, practitioner interviews, a comparative analysis of role models, the derivation of requirements for the development of AI-based systems from different standard procedures, reflective practices on past projects and field observations of engineering student teams developing AI-based systems in university education.
6.2. Limitations & future work
This paper discussed a current research gap regarding the research questions addressing existing role models for Systems Engineering and future challenges in the engineering of AI-based systems. For scientific correctness it should be clearly mentioned that a couple of limitations exist for the results of the discussed work. On the one hand, the data that was analysed and taken into account is limited mainly to European countries. Especially the interviews within parts of the European SE community, based on the main interview key questions mentioned in section 1.2, are quite selective due to the availability and willingness of participants. In future work this must be broadened to include, and explicitly state, the number of participants from different countries. Furthermore, one drawback is that the role model presented in this paper, while derived from valid assumptions and demands for the development of AI-based systems, has not yet been clearly evaluated. Current work focuses on gathering a larger quantity of, and especially more precise, reflections on experience with the usage of the role model in design projects. Therefore, further student engineering projects are planned in lecturing, and the reflective reports of the students should be analysed in a more structured way - e.g. by quantitative analysis of the challenges mentioned by the students instead of the qualitative analysis already integrated in the results of the presented paper. Finally, it should be stated that the mapping of the roles in the different role models investigated and mentioned in section 2.2 was done purely by comparison of keywords in the role descriptions of the source documents. For the sake of brevity, the original descriptions were shortened in a first step, which might lead to over-simplifications and might have caused an incorrect mapping of roles.