
From theory to practice: derivation of the application domain card for supporting AI system development

Published online by Cambridge University Press:  27 August 2025

Benedikt Müller*
Affiliation:
University of Stuttgart, Germany
Daniel Roth
Affiliation:
University of Stuttgart, Germany
Matthias Kreimeyer
Affiliation:
University of Stuttgart, Germany

Abstract:

The contribution introduces the Application Domain Card (ADC) as a structured, problem-oriented method for documenting the status quo and challenges within application domains, addressing a gap in existing AI development methodologies. Derived through a literature review, the ADC emphasizes flexibility, modularity, and accessibility, thereby enabling domain experts to identify AI use cases independently while fostering collaboration with AI experts. The practical applicability of the ADC was confirmed by a support evaluation involving technical drawing assessments in the context of design theory exercises. Future research will focus on refining the ADC to meet specific demands of industrial product development. This includes developing a software-supported application with automated tools for information collection and creating a library of practical examples for the method’s modules.

Information

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Copyright
© The Author(s) 2025

1. Introduction

Artificial intelligence (AI) has emerged as a transformative technology with substantial potential to enhance product development (PD) processes and outcomes. Its application can drive both incremental improvements and radical changes by enabling the development of intelligent products or by optimizing the processes involved in their creation (Brakemeier et al., 2023). This contribution focuses on the latter - utilizing AI to support and enhance PD processes. AI holds the promise of increased efficiency and quality through capabilities such as data-driven design (Wang et al., 2021) and generative AI methods, which excel in processing vast amounts of data (Kretzschmar et al., 2024). However, realizing these benefits requires the successful implementation of AI systems within existing PD workflows, necessitating a transformation toward AI-enabled product development. This often involves overcoming significant challenges, including a lack of in-house AI expertise, resource constraints, and limited access to reproducible best practices (Müller et al., 2023). Consequently, many companies turn to external partners for support in implementing AI systems, leveraging benefits such as reduced internal expertise requirements, accelerated learning curves, and cost efficiencies from pre-developed solutions (Dzhusupova et al., 2022). Regardless of the chosen strategy - whether in-house development, hybrid approaches, or external procurement (Hartmann et al., 2023) - the integration of AI follows structured development processes.
Existing models for AI system development typically commence with a well-defined use case (Müller et al., 2024). Yet, in industrial contexts, such specific use cases often need to be identified and developed as part of the project's early stages (Brakemeier et al., 2023). In this context, activities can be carried out from different perspectives. While the problem- and data-oriented perspectives focus on internal company aspects, the technology-oriented perspective deals with the opportunities offered by AI (Feike, 2022). Crucially, the success of AI implementation depends on both AI expertise and domain expertise. While AI expertise is essential for system development (Rädler and Rigger, 2022), domain expertise is critical during the early stages, where specific, problem-oriented use cases are identified (Müller et al., 2024). However, this interplay between AI and domain expertise is often hindered by challenges such as high costs, communication gaps, and unrealistic expectations of AI capabilities (John, 2021). Effective collaboration is therefore a key success factor (Uren and Edwards, 2023), requiring structured documentation to bridge the AI and application domain spheres (Luley et al., 2023; Müller et al., 2024). In this context, methods such as AI canvases (e.g., Schuller et al., 2024), model cards (e.g., Mitchell et al., 2019), and data cards (Pushkarna et al., 2022) have emerged as valuable assets.
These methods facilitate communication by summarizing and contextualizing AI system capabilities and limitations within application domains (Sevilla et al., 2022). Despite their utility, these cards predominantly focus on technology- and data-oriented aspects, leaving a gap in systematic problem-oriented documentation that domain experts can easily leverage without heavy dependence on external AI expertise.

1.1. Contributions of this paper

To address this gap, this contribution proposes the Application Domain Card (ADC), a structured method aimed at empowering domain experts to systematically document specific challenges and the context of their application domain. Unlike existing approaches, the ADC emphasizes problem-oriented perspectives, enabling companies to independently identify and document potential AI use cases. It supports initial exploratory steps, helping companies communicate their status quo effectively to external AI experts and fostering a foundation for successful collaboration. The following question is addressed: What elements does the ADC method need to include to document the status quo and challenges in application domains, and to what extent is the method applicable in a practical context?

The elaboration of the method is based on a literature review of AI system development processes, focusing on the early stages and on related methods for documenting content (Chapter 2). AI experts' methods for the technical documentation of AI systems, models, and code, e.g. in git repositories, are not considered; the focus is on collaborative methods in the early stages of AI adoption. Building on these findings, Chapter 3 examines the fundamental principles of existing methods that pertain to the early stages and the problem-oriented approach. Furthermore, aspects for the ADC are derived, augmented by product development-specific features and criteria. The outcome is tested in a support evaluation using a product development-related process in the context of design theory exercises (Chapter 4). Concluding remarks discuss the results and identify research needs (Chapters 5 and 6).

2. State of science

AI adoption takes place within a complex interplay of technological, organizational, and environmental factors. Pumplun et al. (2019) propose a holistic framework, adapted from the Technology-Organization-Environment (TOE) model, providing a structured perspective on the factors influencing successful AI adoption. This framework highlights technological factors such as compatibility with business processes and cases, organizational elements including data availability, quality, and protection, and environmental aspects such as regulatory requirements (e.g., GDPR compliance), industry standards, and customer readiness. Additionally, the study identifies critical enablers such as dedicated AI budgets and access to skilled data scientists and developers, while emphasizing barriers such as departmental silos and resistance to change (Pumplun et al., 2019). Building on these perspectives, Radhakrishnan et al. (2022) identify additional factors that extend beyond the TOE dimensions, including reproducibility, effort and performance expectancy, cost reduction, human-AI interaction, and perceived ease of use. The TOE perspective can be applied to the elicitation of requirements for AI use cases by serving as a basis for contextualization, whereby use cases are described and evaluated in context (Hofmann et al., 2020). These insights collectively underscore the multifaceted nature of AI adoption and the importance of addressing both technical and organizational challenges for successful implementation. In the following, relevant aspects in the context of this contribution are examined.

2.1. AI ecosystem and stakeholders

AI systems rely on a mix of technologies and resources, including compute, storage, and networking. It is therefore important to consider the entire AI ecosystem when implementing AI. The AI ecosystem, as depicted in ISO/IEC 22989, is structured in functional layers, with each layer utilizing resources from the lower layers to implement its functions. The vertical sector (application domain), with its processes and activities as well as specific requirements for the AI system, plays a central role, as each individual use case is anchored within it. The AI system itself follows a functional global path, in which information is acquired either by hardcoding (through engineering) or by machine learning to model the domain (ISO/IEC 22989:2022). Once the model has been built, the AI function's task is to compute a result that helps achieve the AI system's goal. In principle, different AI technologies can be used to enable the function. Two paradigms emerge: machine learning identifies patterns in data to make predictions, while traditional engineering relies on hardcoded knowledge based on developers' domain expertise (ISO/IEC 22989:2022). These fundamentally different paradigms highlight the shift from static, rule-based systems to dynamic, data-driven models (ISO/IEC 22989:2022). In practice, there is a variety of approaches that can be categorized as behavioral (e.g. intelligent robotics), perceptual (e.g. computer vision), and cognitive and learning (e.g. ML, generative AI) (Wang et al., 2021). The implementation of these approaches depends on fundamental factors such as data, its processing, and its sources. Big data has gained increased importance as organizations have expanded the collection of data in breadth and depth, necessitating specialized techniques to gain insights (ISO/IEC 22989:2022). Equally relevant are resource pools, their management, and their provision (ISO/IEC 22989:2022).
The ecosystem encompasses various technologies which are used simultaneously in large AI systems. Therefore, companies need to consider the diverse functional layers and resource requirements when implementing AI to ensure the efficient development and deployment of AI systems. (ISO/IEC 22989:2022)

The ADC addresses the vertical sector, i.e. the application domain of the AI system, and is intended to support its systematic description by domain experts, so that implications for the other functional layers of the AI ecosystem can be identified in collaboration with AI experts. PD processes can be organized into task clusters, each defined by specific goals, inputs, and outputs, such as report generation, approvals, data acquisition, and knowledge management (Gerschütz et al., 2023). AI systems can support these by transforming inputs into desired outputs. In this contribution, the documentation of processes with clear input-output relationships is essential and is a core aspect of the ADC method.
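As an illustration, a task cluster with explicit input-output relationships could be captured as a simple data record; the field names and the example values below are hypothetical, not part of the ADC specification or of Gerschütz et al. (2023):

```python
from dataclasses import dataclass, field

@dataclass
class TaskCluster:
    """Illustrative record for a PD task cluster with explicit
    input-output relationships (field names are assumptions)."""
    name: str
    goal: str
    inputs: list = field(default_factory=list)   # e.g. CAD models, requirement lists
    outputs: list = field(default_factory=list)  # e.g. reports, approval decisions
    tools: list = field(default_factory=list)    # digital applications involved

# Hypothetical example: a report-generation cluster
report_generation = TaskCluster(
    name="Report generation",
    goal="Summarize simulation results for design review",
    inputs=["simulation result files", "report template"],
    outputs=["design review report"],
    tools=["simulation postprocessor", "office suite"],
)
```

A record of this kind makes the input-output relationship machine-readable, which is one route toward the software-supported ADC application mentioned in the abstract.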

In addition, different stakeholder groups can be considered regarding ISO/IEC 22989. The AI provider supplies platforms, products or services, while the AI producer is responsible for the design, development and deployment of the systems. AI customers integrate or use the systems directly, while the subset of AI users apply them in a specific operational context. AI partners are involved in tasks such as data delivery, system integration or auditing. AI subjects are central to how the systems operate, including the data used. Authorities regulate and monitor the systems. A company can simultaneously hold several stakeholder roles. (ISO/IEC 22989:2022)

This contribution focuses on the group of AI customers and their subgroup of AI users (domain experts), who are the target group for the ADC method, regardless of the implementation approach (e.g. in-house development, external purchase). Various practice-oriented roles related to AI development projects in industry can be assigned to the different stakeholder groups. Domain experts possess deep knowledge of their application domain, including processes, tools, and heuristics, alongside moderate AI competence for applying and analyzing AI solutions. Examples include development engineers, simulation engineers, and department heads (Müller et al., 2024). In contrast, AI experts excel in AI development and project execution but have limited expertise in the application domain. Typical roles include data scientists, ML engineers, and AI product owners (Müller et al., 2024).

2.2. AI system development process models

A basic approach to AI system development can be described by the AI system life cycle according to ISO/IEC 22989, which extends from inception, through design and development and deployment, to retirement (cf. Figure 1). The scientific literature offers a variety of other similar process models, predominantly from IT-specific disciplines, some of which focus on specific technologies such as ML (Müller et al., 2024). The implementation of AI system development projects requires interdisciplinary teams, iterative working methods, and a high level of AI expertise as key success factors (Müller et al., 2024; ISO/IEC 22989:2022), where cooperation with and early involvement of domain experts and stakeholders is also crucial (Müller et al., 2024; Uren and Edwards, 2023). While the technical development of the AI system is carried out by AI experts, the involvement of domain experts is particularly relevant in the early stages, in validation and evaluation activities throughout the life cycle, and in the operational phase. Particularly in the early stages, information requirements from the domain experts need to be gathered in order to determine the status quo, objectives, and requirements. These relate to the areas of process and activity description, the IT applications and infrastructure involved, data, existing roles and competencies, management and governance, transition conditions, and requirements for the AI system to be developed (Müller et al., 2024).

Figure 1. AI system life cycle stages and support methods

The models from the literature, described in a generic and system-independent way, typically start with a specific project idea, a defined AI use case (Müller et al., 2024), but from an industrial perspective, these are often not initially available. A more detailed, industry-oriented examination of the early stages of AI adoption is provided by Brakemeier et al. (2023), comprising the stages of preparation, ideation, assessment, prioritization and execution (cf. Figure 1).

The Preparation stage is critical for embedding the use and value of AI within an organization. Key activities include fostering positive attitudes towards AI through awareness and training initiatives, promoting collaboration, and establishing a shared understanding of AI's potential and limitations (Hofmann et al., 2020; Feike, 2022; Brakemeier et al., 2023). Developing a strategic AI vision aligned with business units, products, or services is essential, as is assessing organizational maturity in areas such as data and expertise (Feike, 2022; Brakemeier et al., 2023; Kutzias et al., 2023). Important outcomes of this stage are competent AI-trained employees, relevant case studies, search areas for AI application (e.g. development departments or specific activities), and the AI maturity results, on the basis of which transformation actions can be derived (Brakemeier et al., 2023). The Ideation stage aims to identify potential AI use cases that address specific product development challenges or open up new technological opportunities. A distinction can be made between problem-, data- and technology-oriented approaches (Feike, 2022). The problem-oriented approach looks at development processes and activities (e.g. through detailed process analysis) with the aim of identifying current challenges from the perspective of AI customers, which can later serve as a starting base for the use of AI (Feike, 2022). The data-oriented approach also looks at the status quo regarding data and aims to provide an overview of its availability and description (Hofmann et al., 2020; Feike, 2022; Brakemeier et al., 2024). An outside-in perspective is applied through the technology orientation, in which opportunities through AI are identified (Hofmann et al., 2020; Feike, 2022; Brakemeier et al., 2024).
The stage typically results in systematic documentation that includes both problem and solution spaces (Brakemeier et al., 2024). Concrete examples are the data map (data-oriented) according to Pushkarna et al. (2022) and the method map (technology-oriented) according to Adkins et al. (2022). Problem-, data- and technology-oriented approaches exist in AI canvas methods, e.g. according to Schuller et al. (2024), which, however, are kept very generic and lack the necessary level of detail for accompanying application in later stages of technical implementation. This contribution focuses on the ADC (problem-oriented) as a complement to existing approaches. The Assessment stage builds on this output to evaluate the use cases for value and feasibility. The aim is to apply objective criteria for informed decision making. Criteria often relate to economic benefits (e.g. cost reduction, time savings), strategic alignment, and implementation requirements such as data quality, required algorithms, and technical and organizational expertise (Feike, 2022; Brakemeier et al., 2024). Make-or-buy decisions are also part of this phase, considering factors such as IP protection, risks, contract design, and maintenance requirements (Hofmann et al., 2020; Brakemeier et al., 2024). The Prioritization stage deepens the assessments to derive specific implementation roadmaps. This is an iterative process in which initial prioritizations are supplemented by detailed assessments and reviews (Brakemeier et al., 2024). In addition, holistic coordination of data and platform decisions is recommended to provide a coherent basis for more advanced systems (Kutzias et al., 2023).
The Execution stage aims to successfully transfer prioritized AI use cases from the conceptual design phase to practical application (Brakemeier et al., 2024). Realization is based on previously defined roadmaps and is typically aligned with AI development procedures, such as the AI system life cycle (cf. Figure 1).

2.3. Methodological AI-related documentation support

In the context of AI system development, methodological support exists for the documentation of data and AI systems or models. In dynamic and uncertain AI environments, documentation is essential to ensure transparency and traceability. Publications relevant to this contribution are outlined hereafter and assigned to the relevant stages in Figure 1.

Micheli et al. (2023) conducted a literature review of data and AI documentation approaches, identifying methods such as questionnaires, information sheets, composable widgets, and checklists. They analyzed 36 approaches, examining their focus (data, AI models, or both), documentation methods, scope (domain-specific or horizontal), level of automation, and alignment with ISO/IEC 22989 stakeholder groups. In addition, they highlighted key concerns associated with each approach, providing valuable insights into their applicability and limitations (Micheli et al., 2023). These findings underline key requirements for the development of the ADC method. Existing documentation approaches primarily focus on either data (18 of 36), AI models (4 of 36), or both (14 of 36) (Micheli et al., 2023), with no emphasis placed on problem-oriented methodologies - thus validating the need for problem-focused solutions. Most approaches use information sheets or questionnaires (15 of 36 each) (Micheli et al., 2023), which are easy to implement, expandable, and customizable, guiding the ADC's structure. While many approaches are horizontal (15 of 36), no domain-specific methods for product development exist (Micheli et al., 2023), confirming the value of a tailored solution. Limited automation (6 of 36) (Micheli et al., 2023) highlights the importance of applicability for domain experts rather than reliance on technical expertise.
Furthermore, the dominant focus on AI-related personas, with AI users being mentioned only 13 times (Micheli et al., 2023), underscores the necessity of empowering domain experts. Existing approaches to documenting AI use cases can be categorized into five perspectives: stakeholder-oriented, technology-oriented, data-oriented, multi-oriented, and problem-oriented, each addressing specific aspects of AI system development.

Stakeholder-oriented approaches, such as the method proposed by Humpert et al. (2023), focus on identifying and analyzing stakeholders, systematically describing problems, and mapping potential use cases. Their concept utilizes UML methods such as an activity diagram, an environment model, and a stakeholder map to create functional networks for visualizing relationships between functions, machines, and requirements in a product development context. Primarily technology-oriented approaches emphasize AI model transparency and performance. Model Cards (Mitchell et al., 2019) document model details (e.g., version, training data, evaluation metrics) along with ethical considerations. Method Cards (Adkins et al., 2022) outline machine learning processes, including safety, data preparation, benchmarking, and robustness. AI Usage Cards (Wahle et al., 2023) focus on reporting AI-generated content, addressing project details, methodology, and ethical concerns. Solution Space Cards (Brakemeier et al., 2024) describe the AI system's desired output, the problem to be solved, and the required training data. Data-oriented approaches focus on documenting datasets comprehensively. Data Cards provide a structured template with 31 themes, including dataset purpose, publisher information, and domain-specific knowledge, enabling informed and transparent dataset usage (Pushkarna et al., 2022). Multi-oriented approaches combine perspectives on technology, data, and problems to create a holistic view of AI use cases. The Industrial AI Canvas (Renumics, 2023) includes information on data curation, AI methods, and workflows, as well as problem-oriented elements like value propositions and stakeholder processes.
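To make the card idea concrete, a model card of the kind proposed by Mitchell et al. (2019) can be approximated as structured key-value documentation rendered into shareable text. The fields and values below are a simplified, illustrative subset chosen for this sketch, not the authors' full template:

```python
# Illustrative subset of model-card-style fields (an assumption for this
# sketch, not the full template from Mitchell et al., 2019).
model_card = {
    "model_details": {"name": "drawing-checker", "version": "0.1"},
    "intended_use": "Assist assessment of technical drawings (hypothetical)",
    "training_data": "Annotated example drawings (hypothetical)",
    "evaluation_metrics": {"accuracy": None, "f1": None},  # filled after evaluation
    "ethical_considerations": ["human review of all automated assessments"],
}

def render_card(card: dict) -> str:
    """Render the card as plain text for sharing with domain experts."""
    lines = []
    for section, content in card.items():
        lines.append(section.replace("_", " ").title())  # section heading
        lines.append(f"  {content}")                     # section body
    return "\n".join(lines)

print(render_card(model_card))
```

The same key-value pattern underlies data cards and canvases; what distinguishes the ADC is that its sections would describe the application domain rather than the model or dataset.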
Similarly, the AI-Canvas (Schuller et al., 2024) integrates problem descriptions, goals, and challenges with technological and data considerations, while the AI Use Case Canvas (Brakemeier et al., 2024) provides short use case descriptions, evaluation metrics, and implementation details. Multi-oriented approaches often use several forms of documentation. Hupont et al. (2023) introduce the use case cards method, which includes a structured template comprising a use case table and a visual modeling canvas based on the Unified Modeling Language (UML). The table captures essential elements such as the intended purpose (e.g. context of use, scope), type of product or system, application areas, primary actors and stakeholders, as well as detailed information on the intended use, including success criteria, failure protection, triggers, and key action steps (Hupont et al., 2023). The approach of using UML to document use cases is also considered in the contributions of Almendros-Jiménez and Iribarne (2004), as well as Hupont and Gómez (2022). Problem-oriented approaches, while often integrated into multi-oriented frameworks, remain less developed as standalone methods. Brakemeier et al. (2024) propose a Problem Space Card, focusing on describing the issue to be solved, the processes affected by the use case, and user stories outlining expected interactions with the AI system. These diverse approaches highlight the need for tailored methods that address specific documentation requirements, particularly in problem-oriented perspectives, which are crucial for many industry contexts lacking in-house AI expertise.

3. Content analysis and ADC conceptualization

The review of existing methods for documenting AI-related information indicates that the application domain is addressed to some extent, predominantly in multi-oriented approaches (cf. Chapter 2.3). To develop the Application Domain Card (ADC) concept, these approaches are analyzed with a focus on the problem-oriented perspective. This format is designed to facilitate the capture and documentation of the status quo and challenges of the application domain specifically by domain experts (AI customers and AI users). Furthermore, the information requirements of domain experts based on Müller et al. (2024) are considered, reflecting the needs of AI experts during subsequent AI development stages. In accordance with the observations presented in Chapter 2, the format of an information sheet comprising specific guiding questions has been selected for the realization of the ADC method. In order to provide an accurate description of the focus processes and activities, it is essential to contextualize them within the findings of Müller et al. (2023), specifying the relationship between the application domain and the AI system. For a comprehensive and targeted information base, it is necessary to consider the activities and their interfaces in as detailed a context as possible (Müller et al., 2024). Subsequently, a comprehensive account of the existing situation is provided, employing both visual and textual representations. This necessitates an exact delineation of the activities in terms of their input-output relationships within the context of digital applications (Gerschütz et al., 2023). The ISO 9001:2015 standard provides a comprehensive structure that serves as a guiding framework in this context. A visual frame for process and activity description is provided, which allows the possibility of applying UML (cf. Hupont et al., 2023), although other types of visual process description can also be integrated. Additionally, mapping via the knowledge staircase (cf. Figure 2) is incorporated, following the approach of Müller et al. (2023), to provide a standardized and structured representation. With this foundation, the process without the use of AI can be described. This perspective can be integrated with complementary technology-oriented methods, such as method cards, to facilitate the description of human-AI interaction in a comprehensible manner. In the context of AI adoption, it is also essential to conduct a precise mapping of the stakeholders involved, their respective roles, and their competencies. These factors are considered in the context of the status quo through the creation of personas and the assignment of competency levels in accordance with Krathwohl (2002).

In designing the format, particular attention was paid to the principles of flexibility, modularity, extensibility, accessibility, and content-agnosticism, as exemplified by the approach outlined by Pushkarna et al. (2022). Figure 2 provides an overview of the categories for documenting the application domain in current approaches, accompanied by product development-specific extensions based on further literature, thereby depicting the contents of the ADC. The disparate field sizes preclude any conclusions regarding their relative significance. Due to space limitations, the display is presented here in a condensed format. As part of the evaluation process and the intended application, the fields will be enlarged to provide users with sufficient space to complete the requisite information. Furthermore, the guiding questions are integrated into the format, and instructions on how to utilize the method are provided. For instance, pertinent secondary information, such as a list of competence levels based on Krathwohl (2002) with their respective explanations or a concise introduction to UML, is incorporated.
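To make the intended structure tangible, the ADC's modules and guiding questions could be represented as a fillable template along the following lines. The module names and question wordings are abridged paraphrases of the categories discussed above, assumed for illustration, not the published card:

```python
# Illustrative, abridged ADC-style template: modules with guiding
# questions to be completed by domain experts (wording is a paraphrase
# assumed for this sketch, not the published card).
adc_template = {
    "process_and_activities": [
        "Which activities are in focus, and what are their inputs and outputs?",
        "Which digital applications are involved at each step?",
    ],
    "stakeholders_and_competencies": [
        "Which roles participate, and at which competence level (cf. Krathwohl, 2002)?",
    ],
    "status_quo_and_challenges": [
        "How is the process performed today, without AI?",
        "Where do errors, delays, or media breaks occur?",
    ],
}

def open_card(template: dict) -> dict:
    """Create an empty card instance: one answer slot per guiding question."""
    return {module: {question: "" for question in questions}
            for module, questions in template.items()}

card = open_card(adc_template)
```

A template-plus-instance split of this kind would also support the modularity and extensibility principles named above: company-specific modules can be added without changing existing ones.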

Figure 2. Application Domain Card (ADC) concept

4. Support evaluation

The purpose of the evaluation was to assess the usability of the ADC method in documenting the status quo and challenges within an application domain, focusing on its clarity, practicality, and overall usefulness. A secondary objective was to identify potential improvements based on participant feedback. Twelve participants with expertise in technical drawing assessment within engineering design were selected to ensure familiarity with the process but no prior knowledge of the ADC methodology. The study was conducted using a digital representation of the ADC on an online collaborative whiteboard and included guided instruction on UML and competence levels. Conducted in three sessions of four participants each, the study began with a brief introduction to the study objectives, context, and ADC methodology (10 minutes). This was followed by a practical, group-based application to document and analyze the assessment of engineering drawings in engineering design exercises (25 minutes). Feedback was collected through an evaluation questionnaire. The results are provided in Figure 3.

In addition to the results of the evaluation questionnaire, two key observations were noted. Iteration loops occurred in all groups as initial problem analyses revealed issues rooted in contextual processes, leading to a redefinition of boundaries. Challenges also arose in applying UML, mapping processes to the knowledge staircase, and classifying competence levels. To address these, a structured process is proposed for industrial application: (1) introduction - building method competence through training in terminology, UML, and classification along the knowledge staircase; (2) co-design - initial method application and content completion, conducted in workshops or via asynchronous collaboration; and (3) validation - review by qualified internal stakeholders (e.g., department heads). The validated ADC can then serve as a basis for internal AI development, collaboration, or engagement with external AI experts.

Figure 3. Results of the support evaluation

5. Discussion

The ADC method is designed as an information sheet with guiding questions, tailored to AI customers and users such as domain experts in product development environments. The results of the support evaluation indicate that the method is considered principally applicable and effective for documenting the status quo and identifying challenges within an application domain. The modular structure of the ADC lets users flexibly map the current status of their projects while enabling adaptability and the incorporation of company- and project-specific expansions. The paper-based design improves accessibility: it can be printed or used digitally, for example on collaborative online whiteboards. However, implementation in a software application offers significant advantages, including support for versioning, storage and archiving, and metadata integration, for example in RAG (Retrieval-Augmented Generation) systems.

Although the evaluation presented here demonstrates the utility of the ADC in the context of product development, the approach reflects a simplified setting with limited stakeholder, data, and application complexity. Given the greater complexity and diverse requirements of industrial processes, the ADC method must be re-evaluated for broader industrial applications. A key limitation is the lack of a common understanding of the method's terminology and concepts, such as accurately describing activities along the knowledge staircase, the initial modeling with UML, and the assessment of the competence levels of the involved roles. Furthermore, it remains to be examined which alternative methods for process and activity representation, beyond UML, are suitable for industrial application. Both the presentation of the individual modules and their accompanying explanations and descriptions need improvement. Participants noted that it would be beneficial to include completed examples, a step-by-step sequence for working through the modules, and demonstrations of their relevance to AI system development. Further industry-oriented analyses are essential to refine and adapt the ADC method to the real-world requirements of domain experts.
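The software-supported direction discussed above can be sketched minimally: an ADC record serialized with version and checksum metadata would support storage, archiving, and change detection between versions, and could be indexed as documents for retrieval (e.g., in a RAG system). All field names here are illustrative assumptions, not a proposed schema:

```python
import hashlib
import json
from datetime import datetime, timezone

def serialize_adc(domain, modules, version=1):
    """Build a JSON-serializable ADC record with hypothetical
    versioning metadata (version number, timestamp, content checksum)."""
    record = {
        "domain": domain,
        "modules": modules,  # e.g. [{"category": "...", "content": "..."}]
        "version": version,
        "created": datetime.now(timezone.utc).isoformat(),
    }
    # A checksum over the content (not the timestamp) lets two versions
    # be compared for substantive changes during archiving.
    payload = json.dumps({"domain": domain, "modules": modules}, sort_keys=True)
    record["checksum"] = hashlib.sha256(payload.encode("utf-8")).hexdigest()
    return record
```

In such a setup, unchanged cards re-serialized later would keep the same checksum, while any edit to a module's content would produce a new one, supporting the versioning and archiving use cases named above.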

6. Conclusion and future work

This contribution introduces a problem-oriented support method for the initial stages of AI system development projects. Based on a literature review and an analysis of existing support methods, the ADC was developed and presented. The method includes key components such as a contextual, detailed description of domain activities in both textual and visual form. In this context, an understanding of the relationships between activities, the roles involved, and the data and applications used is essential to provide a foundation for AI system development projects. This understanding enables the targeted identification of problems and challenges, allowing them to be described both qualitatively and quantitatively, supplemented by process metrics. The elaboration thus answers the question of which elements the method requires to document the status quo and challenges in application domains. The applicability of the method was assessed in a support evaluation within a product development-related process, specifically the assessment of technical drawings in the context of design theory. The results demonstrate the practical applicability of the method but also indicate potential for improvement regarding a guided, software-based implementation and the provision of worked examples for the various tasks involved in its completion. Further research is required to fully assess its efficacy and to enhance the ADC and its practical applications. Key areas for further research include the automated extraction and capture of ADCs, as well as the development of domain-specific card sets offering pre-filled best practices to assist organizations in developing their competencies and defining actionable steps. Furthermore, the consolidation of interfaces between problem-, data-, and technology-oriented approaches is crucial to ensure consistent methodological support throughout the AI lifecycle.
The utilization of platforms and services that employ standardized documentation could help resolve industry-wide challenges by facilitating the reproducibility of best practices, the identification of similarities between problems, and a comprehensive comparison of solutions.

References

Adkins, D., Alsallakh, B., Cheema, A., Kokhlikyan, N., McReynolds, E., Mishra, P., Procope, C., Sawruk, J., Wang, E., & Zvyagina, P. (2022). Method Cards for Prescriptive Machine-Learning Transparency. 2022 IEEE/ACM 1st International Conference on AI Engineering – Software Engineering for AI (CAIN), 90–100. https://doi.org/10.1145/3522664.3528600
Almendros-Jiménez, J. M., & Iribarne, L. (2004). Describing Use Cases with Activity Charts. Lecture Notes in Computer Science (LNCS), 3511, 141–159. https://doi.org/10.1007/11518358_12
Brakemeier, H., Gebert, P., Hartmann, P., Schamberger, M., & Waldmann, A. (2023). Applying AI: How to find and prioritize AI use cases. appliedAI Initiative GmbH. https://www.appliedai.de/en/insights/how-to-find-and-prioritize-ai-use-cases
Dzhusupova, R., Bosch, J., & Olsson, H. H. (2022). The Goldilocks Framework: Towards Selecting the Optimal Approach to Conducting AI Projects. 2022 IEEE/ACM 1st International Conference on AI Engineering – Software Engineering for AI (CAIN), 124–135. https://doi.org/10.1145/3522664.3528595
Feike, M. (2022, July 26). In 4 Schritten zum ersten KI Use Case. Fraunhofer IAO Blog. https://blog.iao.fraunhofer.de/in-4-schritten-zum-ersten-ki-use-case/#
Gerschütz, B., Goetz, S., & Wartzack, S. (2023). AI4PD—Towards a Standardized Interconnection of Artificial Intelligence Methods with Product Development Processes. Applied Sciences, 13(5), 3002. https://doi.org/10.3390/app13053002
Hartmann, P., Liebl, A., & Schamberger, M. (2023). Applying AI: Enterprise Guide for Make-or-Buy Decisions. appliedAI Initiative GmbH. https://www.appliedai.de/en/insights/make-or-buy-decisions
Hofmann, P., Jöhnk, J., Protschky, D., & Urbach, N. (2021). Developing Purposeful AI Use Cases – A Structured Method and Its Application in Project Management. https://library.gito.de/2021/07/wi2020-zentrale-tracks-2/
Humpert, L., Wäschle, M., Horstmeyer, S., Anacker, H., Dumitrescu, R., & Albers, A. (2023). Stakeholder-oriented Elaboration of Artificial Intelligence use cases using the example of Special-Purpose engineering. Procedia CIRP, 119, 693–698. https://doi.org/10.1016/j.procir.2023.02.160
Hupont, I., Fernández-Llorca, D., Baldassarri, S., & Gómez, E. (2023). Use case cards: a use case reporting framework inspired by the European AI Act. arXiv. https://doi.org/10.48550/arXiv.2306.13701
Hupont, I., & Gomez, E. (2022). Documenting use cases in the affective computing domain using Unified Modeling Language. 2022 10th International Conference on Affective Computing and Intelligent Interaction (ACII), 1–8. https://doi.org/10.1109/ACII55700.2022.9953809
International Organization for Standardization. (2015). Quality management systems — Requirements (ISO 9001:2015). ISO. https://www.iso.org/standard/62085.html
International Organization for Standardization. (2022). Information technology — Artificial intelligence — Artificial intelligence concepts and terminology (ISO/IEC 22989:2022). ISO. https://www.iso.org/standard/74296.html
John, M. M. (2021). Design Methods and Processes for ML/DL models [Licentiate thesis, Malmö University]. Holmbergs. https://doi.org/10.24834/isbn.9789178771998
Krathwohl, D. R. (2002). A Revision of Bloom's Taxonomy: An Overview. Theory into Practice, 41(4), 212–218.
Kretzschmar, M., Dammann, M. P., Schwoch, S., Braun, F., Saske, B., & Paetzold-Byhain, K. (2024). Evaluating the Current Role of Generative AI in Engineering Development and Design – A Systematic Review. In Malmqvist, J., Candi, M., Saemundsson, R. J., Bystrom, F., & Isaksson, O. (Eds.), DS 130: Proceedings of NordDesign 2024 (pp. 21–30). https://doi.org/10.35199/NORDDESIGN2024.3
Kutzias, D., Dukino, C., & Leuteritz, J.-P. (2023). Leitfaden zur Durchführung von KI-Projekten. https://doi.org/10.24406/publica-1637
Luley, P.-P., Deriu, J. M., Yan, P., Schatte, G. A., & Stadelmann, T. (2023). From Concept to Implementation: The Data-Centric Development Process for AI in Industry. 10th IEEE Swiss Conference on Data Science (SDS), Zurich, Switzerland, 73–76. https://doi.org/10.1109/SDS57534.2023.00017
Micheli, M., Hupont, I., Delipetrev, B., & Soler-Garrido, J. (2023). The landscape of data and AI documentation approaches in the European policy context. Ethics and Information Technology, 25(4), 56. https://doi.org/10.1007/s10676-023-09725-7
Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., & Gebru, T. (2019). Model Cards for Model Reporting. Proceedings of the Conference on Fairness, Accountability, and Transparency, 220–229. https://doi.org/10.1145/3287560.3287596
Müller, B., Roth, D., & Kreimeyer, M. (2023). Barriers to the Use of Artificial Intelligence in the Product Development – A Survey of Dimensions Involved. Proceedings of the Design Society, Volume 3: ICED23 (pp. 757–766). https://doi.org/10.1017/pds.2023.76
Müller, B., Roth, D., & Kreimeyer, M. (2024). Survey of the Role of Domain Experts in Recent AI System Life Cycle Models. In Malmqvist, J., Candi, M., Saemundsson, R. J., Bystrom, F., & Isaksson, O. (Eds.), DS 130: Proceedings of NordDesign 2024 (pp. 256–265). https://doi.org/10.35199/NORDDESIGN2024.28
Pushkarna, M., Zaldivar, A., & Kjartansson, O. (2022). Data Cards: Purposeful and Transparent Dataset Documentation for Responsible AI. Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 1776–1826. https://doi.org/10.1145/3531146.3533231
Pumplun, L., Tauchert, C., & Heidt, M. (2019). A New Organizational Chassis for Artificial Intelligence – Exploring Organizational Readiness Factors. In Proceedings of the 27th European Conference on Information Systems (ECIS), Volume 27 (pp. 1–15). AIS Electronic Library (AISeL). ISBN 978-1-7336325-0-8
Radhakrishnan, J., Gupta, S., & Prashar, S. (2022). Understanding Organizations' Artificial Intelligence Journey: A Qualitative Approach. Pacific Asia Journal of the Association for Information Systems, 14(6), 43–77. https://doi.org/10.17705/1pais.14602
Rädler, S., & Rigger, E. (2022). A Survey on the Challenges Hindering the Application of Data Science, Digital Twins and Design Automation in Engineering Practice. 17th International Design Conference (pp. 1699–1708). Cambridge University Press. https://doi.org/10.1017/pds.2022.172
Renumics GmbH. (2023). Industrial AI Canvas. Renumics GmbH. https://renumics.com/blog/the-industrial-ai-canvas
Schuller, A., Peissner, M., & Bauer, W. (2024). Die KI-Roadmap für Ihr Unternehmen – Ein Vorgehensmodell für erfolgreiche KI-Anwendungen. In Groß, M., & Staff, J. (Eds.), KI-Revolution der Arbeitswelt (1st ed., pp. 290–304). Haufe-Lexware.
Sevilla, J., Heim, L., Ho, A., Besiroglu, T., Hobbhahn, M., & Villalobos, P. (2022). Compute Trends Across Three Eras of Machine Learning. International Joint Conference on Neural Networks (IJCNN), 1–8. https://doi.org/10.1109/IJCNN55064.2022.9891914
Uren, V., & Edwards, J. S. (2023). Technology readiness and the organizational journey towards AI adoption: An empirical study. International Journal of Information Management, 68, 102588. https://doi.org/10.1016/j.ijinfomgt.2022.102588
Wahle, J. P., Ruas, T., Mohammad, S. M., Meuschke, N., & Gipp, B. (n.d.). AI Usage Cards: Responsibly Reporting AI-generated Content. arXiv. https://doi.org/10.48550/arXiv.2303.03886
Wang, L., Liu, Z., Liu, A., & Tao, F. (2021). Artificial intelligence in product lifecycle management. The International Journal of Advanced Manufacturing Technology, 114(3), 771–796. https://doi.org/10.1007/s00170-021-06882-1
Figure 1. AI system life cycle stages and support methods