
Enhancing knowledge transfer through LLM-based applications: a preliminary study

Published online by Cambridge University Press:  27 August 2025

Alexander Patrick Schlegel*
Affiliation:
University of the Bundeswehr Munich, Germany
Alexander Koch
Affiliation:
University of the Bundeswehr Munich, Germany

Abstract:

Large Language Models offer a novel, low-barrier approach to potentially improving knowledge transfer in product development. After identifying knowledge barriers from the literature that could be addressed through LLM-based applications, we analyze two GDPR-compliant LLM applications - ChatGPT Enterprise and Langdock - examining their key features: assistants and chatbots for both, and prompt libraries and LLM-based file search for Langdock. We then evaluate each feature's potential to mitigate each barrier. Our findings show that assistants and chatbots provide wide-ranging support across many barriers, whereas prompt libraries and file search deliver targeted solutions for a narrower set of specific challenges. Given the numerous influencing factors and the rapidly evolving field of LLMs, the study concludes with a research agenda to validate the theoretical findings.

Information

Type
Article
Creative Commons
Creative Commons License - CC BY-NC-ND
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Copyright
© The Author(s) 2025

1. Introduction

Inefficient knowledge transfer (KT) poses a significant challenge for businesses, leading to substantial financial losses: a 2018 study found that an average US business with 1,000 employees loses about 2.4 million USD annually due to such inefficiencies (Panopto, 2018). Over the years, various technological interventions aimed at improving knowledge transfer have been explored, progressing through distinct phases that encompassed Information and Communication Technologies (ICT) (Grieb, 2008; Vartiainen & Jahkola, 2013; Walter et al., 2016) and more recent innovations such as cloud-based file storage systems (Kern & Müller, 2019) or data-driven design (Cantamessa et al., 2020; Liu et al., 2022), which facilitates knowledge transfer and collaboration by systematically integrating data as a strategic resource in the product design process. Whilst these approaches show great potential for enhancing productivity and informing decision making, their complexity should not be underestimated, as they introduce additional difficulties, e.g. communication barriers between data scientists and engineers (Liu et al., 2022).

In this context, the widespread availability of ready-to-use Large Language Model (LLM)-based applications offers a new opportunity to reassess the challenges identified in previous research with comparably low barriers to entry. These applications have the potential to realize significant performance gains in knowledge transfer and to address longstanding barriers that previous solutions could not fully overcome. While recent research has extensively explored the potential of LLMs in enhancing product-related tasks (Gärtner & Göhlich, 2024; Göpfert et al., 2024; Paetzold-Byhain et al., 2024), the application of these models to overcoming knowledge barriers remains largely unexplored. This paper conducts an initial exploration of the potential of LLM-based applications in mitigating knowledge barriers in product development, based on two ready-to-use, GDPR-compliant LLM platforms available at the time of research. Drawing on relevant literature, the authors derive the mitigation potential of the respective features of these applications for knowledge barriers, providing theories that lay the foundation for future research.

To achieve this goal, the Double Diamond is utilized as the methodological foundation for our research. We first identify a comprehensive source of knowledge barriers to open up the problem space and then select the knowledge barriers that can realistically be mitigated by LLM-based features. We then gather a list of LLM-based features and strengths, answering our first research question:

  • RQ1: What features do currently available and GDPR-compliant LLM-based applications offer that may help mitigate knowledge barriers in product development?

We then evaluate each feature’s potential on a three-point scale, calculating a total score to identify the most useful features overall and for specific barriers, thus addressing our second research question:

  • RQ2: What LLM-based features offer the highest potential to mitigate particular knowledge barriers in product development?

Finally, we critically reflect on the limitations of our study and provide an outlook on the next steps of this research: a more comprehensive review of platforms, validation through quantitative laboratory testing, and a real-world case study to ensure the practical applicability of the theoretical findings.

2. Methodology

The Double Diamond methodology was selected as the research framework due to its established efficacy in facilitating both divergent exploration and convergent analysis, providing a structure for the systematic identification of knowledge barriers and the subsequent evaluation of LLM-based mitigation strategies. Figure 1 illustrates the specific implementation, showing the connection between chapters, phases of the Double Diamond, and the proposed research questions. After Chapters 1 (Introduction and Motivation) and 2 (Methodology), we begin our study by compiling a list of knowledge barriers at a similar level of abstraction (see 3. in Figure 1). We then choose which barriers can potentially be mitigated through the use of LLM-based features, excluding those that require measures on an organizational scale, such as a shortage of resources. This leaves us with a selection of barriers as our problem definition, allowing us to enter the solution space. In the solution space, we provide an insight into LLMs and analyze LLM-based features, which represent the answer to RQ1. We then switch to the convergent phase of the solution space: the selection of the solutions with the most potential to tackle the selected barriers. Finally, we critically reflect upon our study in the conclusion and give an outlook for future research.

Figure 1. Methodology: Double Diamond (based on Design Council (2005))

3. Overview of knowledge barriers in technical product development

A well-designed information flow can be characterized by the information reaching the right addressee (1) at the right time (2) (Kersten & Kern, 2003). In reality, information flows do not always meet these criteria - and can, if they prevent the generation of new knowledge, hinder the representation of knowledge (e.g. in documents or databases), or prevent the exchange of knowledge between employees, be classified as "knowledge barriers" (Kern et al., 2009). Kern et al. (2009) state that although these barriers were developed in the context of concept development and experimentation projects, the catalogue is very generic - and the extent to which each barrier is prevalent is highly project-specific.

3.1. Selection process of knowledge barriers

To gain a clear understanding of the causes of inefficiencies in knowledge sharing, the authors scanned relevant literature for a list of knowledge barriers at a comparable level of abstraction. Kern et al. (2009) identified 46 knowledge barriers using two different methods suggested by Bick et al. (2003) and also made suggestions on how to mitigate them. The paper by Kern et al. (2009) therefore serves as the foundation for our analysis, from which we pre-select the barriers that could possibly be mitigated through the implementation of LLM-based applications. To be included in the preselection, a barrier must show the following characteristics:

  1. Employees are actively trying to improve or ensure knowledge transfer but lack the knowledge / ability to provide the necessary information in sufficient quality at the given time.

  2. The employee has the possibility to improve KT at an operational level, excluding large-scale measures and structural changes within the company.

For barriers that do not fulfil these criteria, mitigation through LLM-based tools remains highly unlikely, if not impossible - the provision of tools does not seem to do justice to the actual causes and effects of these barriers. The barriers in Kern et al. (2009) are categorized by origin, allowing us, per criterion 1, to remove the category "Willingness to share and absorb [knowledge]" as a whole, as these barriers all stem from employees actively hindering knowledge transfer, which needs to be addressed on a personal level. This category consists of eight knowledge barriers, e.g. lack of motivation, resistance to change, and aversion towards foreign knowledge. Another category that can be removed from the scope of consideration is "Technical equipment", which consists of three barriers (e.g. insufficient IT infrastructure). Further barriers that were removed from consideration are listed in Table 1:

Table 1. Barriers removed from consideration, from Kern et al. (2009) (excluding those classified as "Willingness to share & absorb" and "Technical equipment")

3.2. Description of relevant knowledge barriers

From this selection emerges the set of knowledge barriers displayed in Table 2, which are described in further detail below. Kern et al. (2009) also categorized the barriers according to the HTO approach (Human, Technology, Organization), dividing the category "Human" into individual and interpersonal barriers for improved differentiation. The following descriptions are essentially based on the descriptions by Kern et al. (2009), unless other sources are explicitly mentioned.

Table 2. Result of the selection process of knowledge barriers, from Kern et al. (2009)

3.2.1. Individual barriers

The first knowledge barrier to be investigated in this paper is "Communication problems" (No. 1). In the original source, it is classified as an individual barrier in the subcategory "ability to share and absorb knowledge" - a collection of barriers that result from a missing understanding of, or know-how in, knowledge transfer. This barrier arises when the sender or the recipient of the information lacks the ability to communicate effectively, leaving the necessary provision of information unclear and therefore leading to an insufficient information flow. Barrier No. 2, in contrast to No. 1, refers to the knowledge itself: it is clear to sender and recipient what information is needed and when, but the ability to provide or to process that information is missing (Wang & Noe, 2010).

3.2.2. Interpersonal barriers

The knowledge barriers No. 15 to 18 are classified as interpersonal barriers that focus on conflicts arising from human interaction and communication. Barrier No. 15 describes the formation of subgroups, a phenomenon emerging from varying frequencies of communication between team members, which leads to an uneven development of trust and cohesion. These subgroups may isolate certain employees from others and reduce communication to or from outside the subgroup (Cramton, 2001). Language barriers (No. 16) encompass the loss of efficiency in communication resulting from difficulties in finding appropriate words and adequately expressing feelings and ideas. This barrier is especially prevalent in globally distributed teams, as industry studies have shown (Meyer-Eschenbach et al., 2008; Meyer-Eschenbach & Blessing, 2005). Barrier No. 18, termed "lack of dialogue culture", results in diminished spontaneous communication among team members. This deficiency impedes employees from maintaining current knowledge of their colleagues' work progress and overall project status. The absence of a robust dialogue culture thus potentially compromises the efficacy of information dissemination within the team structure.

3.2.3. Organizational barriers

Organizational barriers arise from workplace culture, structure, and processes. While these factors are external, the resulting knowledge barriers often stem from human reactions to these conditions. These barriers can potentially be mitigated through LLM-based features. For instance, Barrier No. 29, "Uncertainty about knowledge in the team", results from a lack of awareness about knowledge management. This leads to poor distribution and documentation of knowledge within the organization. Consequently, knowledge becomes fragmented and consolidated by different individuals in various locations, causing efficiency losses. Barrier No. 36, in contrast to barrier No. 29, does not arise from organizational culture but from a knowledge process that has been implemented yet remains incomprehensible to the employees who are supposed to use it. Insufficient involvement of relevant knowledge holders (No. 37) is the counterpart to barrier No. 29: it is clear who holds the knowledge, but they are not sufficiently included in the project. Inefficient access to knowledge (No. 39) manifests itself in time-consuming information searches. An exemplary root cause of this barrier is the formation of functional silos (potentially with different data storages), making knowledge from one functional area difficult to access for stakeholders from other functional areas (Riemer et al., 2019). Barrier No. 40, data quality, refers to poor documentation (formal or content-wise) of already existing knowledge, making it incomprehensible for other team members.

3.2.4. Technology-related barriers

Only one barrier from the category of technology-related barriers can potentially be addressed using LLM-based features: qualification deficits (No. 46). A user's inability to use the tools and software provided to them may hinder them from sharing knowledge as intended by the organization.

4. Characteristics and features of LLMs

To provide a foundational understanding of LLMs, this chapter examines their underlying mechanisms, contextualizes them in the field of artificial intelligence (AI), and clarifies their application in this research. We first review the definition of artificial intelligence and then move on to the features and functions of LLMs, addressing Research Question 1 in the "develop" phase of the Double Diamond.

4.1. Introduction to AI and LLMs

The term artificial intelligence in general describes a field of computer science that focuses on creating agents with human-like capabilities such as reasoning, learning, and acting autonomously (Auffahrt, 2023). The foundation of AI is machine learning, a field of computer science that develops algorithms that can learn from known data and then be applied to unknown data (Cornelius, 2019). These algorithms do not correspond to the widespread conviction that computers only do what they are programmed to do: by applying self-learning, these programs can, in effect, program themselves based on existing data (Cornelius, 2019). While the field of AI comprises a large variety of algorithms and applications, this paper only investigates so-called large language models, which Yang et al. (2023, p. 2) define as:

“LLMs are huge language models pretrained on large amounts of datasets without tuning on data for specific tasks.”

LLM-based systems consist of four distinct layers: a) the hardware needed to run the models, b) the model layer, which contains the models themselves, c) the infrastructure layer that hosts and trains the models, and d) the application layer or front end, which contains the user interface and the features the end user can use (e.g. chatbots, assistants) (Adebowale Adeleke, 2024).

4.1.1. Context window in LLMs

To gain a better understanding of the strengths and limitations of LLMs, the authors first provide a brief overview of the model layer. The field of LLMs is characterized by rapid innovation and a variety of approaches, with key players such as OpenAI with models like GPT-4o and OpenAI o1, Meta with Llama 3, Google with Gemini 1.5, and Anthropic with models like Claude 3.5 Sonnet (Naveed et al., 2024; Yang et al., 2023). From the end user's perspective, these models differ, apart from overall performance and individual strengths in specific tasks, in the length of their context window: the interface that transmits all of the user's input to the model for processing. The context window has a maximum length of text it can accept, which represents the maximum amount of context information an LLM can use at once to generate an answer. It is measured in tokens, which represent words or subwords that the model uses as its dictionary (Auffahrt, 2023). As a numerical example, Google's Gemini 1.5 Pro supports up to approximately 2 million tokens as input, with 100 tokens corresponding to about 60-80 English words (Langdock GmbH, 2024c). A larger context window increases the amount of information that can be added to further define the context of the prompt, therefore increasing the likelihood of getting useful answers: research has found that adding the right information to prompts, so-called in-context learning, can reduce biases and lead to significant performance gains (Yang et al., 2023).
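To make the notion of tokens and context limits concrete, the following minimal Python sketch estimates whether a prompt plus an attached document fits into a given context window. The open-source tiktoken tokenizer is used only as a proxy for the models mentioned above; the 2-million-token limit, the reserved answer budget, and the dummy document are illustrative assumptions, not values prescribed by any of the platforms discussed.

```python
# Sketch: estimating whether a document fits into an assumed context window.
import tiktoken

CONTEXT_LIMIT_TOKENS = 2_000_000   # assumed limit, taken from the Gemini 1.5 Pro figure quoted above
RESERVED_FOR_ANSWER = 8_000        # hypothetical budget kept free for the model's reply


def fits_in_context(document_text: str, prompt_text: str) -> bool:
    """Return True if prompt plus document stay within the assumed context window."""
    encoding = tiktoken.get_encoding("cl100k_base")  # GPT-4-style tokenizer used as a proxy
    used = len(encoding.encode(prompt_text)) + len(encoding.encode(document_text))
    return used + RESERVED_FOR_ANSWER <= CONTEXT_LIMIT_TOKENS


if __name__ == "__main__":
    # Dummy document standing in for an uploaded specification.
    doc = "Requirement R-12: The housing shall withstand 10 g shock loads. " * 500
    prompt = "Summarize the attached specification for a design review."
    print("Document fits:", fits_in_context(doc, prompt))
```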

4.1.2. Strengths and limitations of LLMs

Having explained the basic concepts of LLMs, we now elaborate on the strengths and limitations of LLMs relevant to this work. Beginning with the strengths, the first specific strength is the capability of modern LLMs to process large amounts of information at once (see Chapter 4.1.1: Context window in LLMs). This capability is supplemented by the ability to give individual feedback to users in real time (Moon et al., 2024). In addition to these real-time capabilities, AI-powered systems are available for use 24/7 (Khan, 2024). Finally, LLMs are able to process and generate input and output in different languages effortlessly, translating difficult texts in a context-aware manner while also discerning emotional tones (Khan, 2024). To facilitate a well-grounded critical reflection on the use of Large Language Models in mitigating knowledge barriers, it is also crucial to understand their limitations: Hadi et al. (2023) identified reliance on surface-level patterns, poor reasoning ability, difficulty with rare words, limited domain-specific knowledge, and challenges in handling noisy input data as main limitations. Improvements can be made through in-context learning, through chain-of-thought prompting, where the model is asked to output its reasoning (Naveed et al., 2024), or through Retrieval-Augmented Generation (RAG), where the implicit (parametric) knowledge of a model can be expanded with external documents (Lewis et al., 2021). Even with these techniques, all LLM responses should be fact-checked due to the risk of hallucinations, which derive from the model's insufficient knowledge of certain topics and lead to the generation of incorrect content (Auffahrt, 2023).
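As an illustration of the RAG principle described above, the following sketch retrieves the most relevant internal documents for a question and prepends them to the prompt, so that the model's parametric knowledge is supplemented with external context. TF-IDF retrieval via scikit-learn is used purely to keep the example self-contained; the document snippets, the function name build_rag_prompt, and the prompt wording are hypothetical, and production systems typically rely on embedding-based vector search instead.

```python
# Sketch of the RAG idea: retrieve relevant documents, then ground the prompt in them.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [  # hypothetical internal knowledge snippets
    "The housing is machined from AlSi10Mg and anodized after milling.",
    "Design reviews are held every second Tuesday; minutes go to Confluence.",
    "Tolerance stack-ups are documented in the drawing set DS-104.",
]


def build_rag_prompt(question: str, k: int = 2) -> str:
    """Return a prompt containing the k most relevant documents as context."""
    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(documents)       # index the knowledge snippets
    query_vec = vectorizer.transform([question])
    scores = cosine_similarity(query_vec, doc_matrix)[0]   # relevance of each document
    top_docs = [documents[i] for i in scores.argsort()[::-1][:k]]
    context = "\n".join(f"- {d}" for d in top_docs)
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\n\nQuestion: {question}")


print(build_rag_prompt("Which material is the housing made of?"))
```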

4.2. Features of the investigated LLM-based applications

We have now gained an understanding of the model layer itself (b in 4.1) and will in this chapter focus on the application layer (d in 4.1), determining features of interest for our research. The features under investigation must enable knowledge transfer between employees directly or indirectly; product-focused features like code generation and data analytics are not the aim of this paper and are therefore excluded from our review. Our analysis includes the platform ChatGPT Enterprise, used in over 80% of Fortune 500 companies (OpenAI, 2022), and Langdock, a German GDPR-compliant platform that provides access to several popular LLMs while also developing its own features and is used by over 150 companies (Langdock GmbH, 2024b). Together, these platforms encompass over 16 LLMs, including notable models like GPT-4o and Gemini 1.5 Pro (Langdock GmbH, 2024c; OpenAI, 2022). Both platforms offer chatbots and assistants with comparable fundamental capabilities, while Langdock extends this foundation with proprietary features, the prompt library and file search, that go beyond ChatGPT Enterprise's capabilities. Key features are highlighted in the text for clarity.

The first feature under investigation is the chatbot, in which the user chats with the LLM, which generates answers that resemble a conversation with another human (Cornelius, 2019). As mentioned in the limitations of LLMs, the responses usually rely on surface-level patterns, which is not enough to make structured decisions on company-specific data. Therefore, most models offer the opportunity to upload files in the form of .pdf, .docx, or other formats to the context window that is passed to the model, potentially improving the quality of answers (Naveed et al., 2024). When it is foreseeable that one or more users will use the chatbot for the same situation repeatedly, or will repeatedly require the context of a specific document, the user can also create an assistant (called CustomGPT by OpenAI) (Langdock GmbH, 2024a; OpenAI, 2022). The basic situation or persona is described to the LLM in the "instruction window" of the assistant, which is also passed to the LLM as part of the context window. In addition to the upload of single or multiple documents available in the chatbot function, the assistant on some platforms offers the possibility to connect file storages to the LLMs (e.g. Google Drive, Confluence, SharePoint), so that the user can select documents to be uploaded directly from their file storage, which makes the use of additional files easier. After completing the creation of an assistant, it can also be shared with other people in the workspace (Langdock GmbH, 2024b, 2024a). If similar tasks need to be carried out by the same or multiple persons repeatedly but in different contexts, Langdock offers the possibility to add prompts to a prompt library that can be shared with specific people, specific groups, or the whole workspace (Langdock GmbH, 2024d). An example of a useful application of this feature is a prompt that asks the LLM to summarize uploaded documents in a specific format that serves as the input for a later process requiring this specific format. This feature complements the chatbot or assistant by enhancing knowledge transfer capabilities.

The last feature relevant to our research is the file search. This AI-based search capability allows users to find documents based on content and topic rather than just filenames, including author information (Langdock GmbH, 2024b). Unlike assistants and chatbots, which only access specifically uploaded documents, file search can scan entire connected storage systems (e.g. SharePoint, Confluence, Google Drive), enabling cross-silo document discovery while returning only document links rather than content summaries (Langdock GmbH, 2024e). These four features (chatbot, assistant, prompt library, and file search), in combination with the strengths and limitations mentioned above, present the answer to research question 1.
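To make the difference between an assistant and a prompt-library entry more tangible, the following sketch models both as simple data structures: an assistant bundles persona instructions with attached files and a sharing list, while a prompt template preserves a reusable instruction with a placeholder for the changing part. The class names, fields, and example content are illustrative assumptions and do not reflect the actual APIs of ChatGPT Enterprise or Langdock.

```python
# Sketch: assistant and prompt-library entry as minimal data structures.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Assistant:
    name: str
    instructions: str                                          # persona / task description for the context window
    attached_files: List[str] = field(default_factory=list)    # documents always added as context
    shared_with: List[str] = field(default_factory=list)       # colleagues or groups in the workspace


@dataclass
class PromptTemplate:
    title: str
    template: str                                              # reusable instruction with a placeholder
    shared_with: List[str] = field(default_factory=list)

    def render(self, **kwargs: str) -> str:
        return self.template.format(**kwargs)


# Hypothetical assistant for onboarding new team members.
onboarding_assistant = Assistant(
    name="Onboarding helper",
    instructions="You answer questions of new team members about our design process.",
    attached_files=["design_process_handbook.pdf"],            # hypothetical document
    shared_with=["whole-workspace"],
)

# Hypothetical shared summarization prompt feeding a downstream process with a fixed format.
summary_prompt = PromptTemplate(
    title="Design-review summary",
    template="Summarize {document} as bullet points: decision, owner, due date.",
    shared_with=["design-team"],
)
print(summary_prompt.render(document="the attached meeting minutes"))
```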

5. Tackling knowledge barriers with LLMs

Following the comprehensive analysis of selected knowledge barriers in product development and of the basics and features of LLMs, this chapter evaluates the potential of each LLM feature to help overcome the identified barriers, addressing our second research question. To do this, we evaluate the described LLM-based features and their theoretical potential in daily work environments and then derive the knowledge barriers that are mitigated in the process. To quantify the potential of each feature in mitigating knowledge barriers, the potential of each feature is rated on a three-point scale in ascending order (a minimal scoring sketch follows the scale below):

  • o : Barrier mitigation unlikely

  • + : Potential to slightly mitigate barrier

  • ++ : Potential to greatly mitigate barrier
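The total scores mentioned in the introduction can be derived from this scale by mapping the symbols to numeric values and summing them per feature. The short sketch below shows one way to do this; the ratings it contains are an illustrative excerpt chosen for the example, not the complete evaluation reported in Table 3.

```python
# Sketch: turning the three-point scale into per-feature total scores.
SCALE = {"o": 0, "+": 1, "++": 2}

ratings = {  # feature -> {barrier number: rating symbol}; illustrative excerpt only
    "chatbot":        {2: "+", 16: "++", 40: "+", 46: "+"},
    "assistant":      {2: "++", 16: "++", 29: "+", 36: "++", 40: "++", 46: "+"},
    "prompt_library": {1: "+", 2: "+", 16: "+", 40: "+", 46: "+"},
    "file_search":    {29: "++", 39: "++", 46: "+"},
}

totals = {feature: sum(SCALE[symbol] for symbol in barrier_ratings.values())
          for feature, barrier_ratings in ratings.items()}

for feature, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{feature:15s} total score: {total}")
```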

5.1. Chatbots

The chatbot, as the most basic LLM feature, offers natural-language communication, but with severe limitations. Without external integrations, the LLM relies solely on its training data, which excludes the provision of recent real-world information (Auffahrt, 2023). The chatbot also lacks the capability to share information of any kind with other people or to access knowledge from other departments or teams, making it inapplicable to any barrier that requires this possibility: barriers No. 3, No. 15, No. 18, and No. 29 to 39. For the remaining barriers, which require explanation, altering, restructuring, summarizing, or translation capabilities, the chatbot, in combination with file upload, can give more specific replies through in-context learning and help with several document-related tasks. Its strengths can, for example, be used for content creation, translation, summarization, and text-editing tasks (Auffahrt, 2023), or to perform consistency checks (Gärtner & Göhlich, 2024), making it slightly useful for improving communication (e.g. through the formulation of e-mails) or mitigating barriers in knowledge transfer (No. 2). Also, given the general strength of LLMs in language-related tasks, the authors see high potential for context-aware translation, greatly mitigating language barriers (No. 16). The use cases mentioned above (such as text editing and summarization) can be leveraged to decrease the number of spelling and grammatical errors while finding duplicates and contradictions, thereby improving data quality (No. 40). This leaves us with slight potential, as the LLM cannot complete documents if data is missing but can potentially improve their layout and form. By providing individualized, real-time feedback to users, the chatbot has the potential to help people overcome their qualification deficits (No. 46) by acting as 24/7 IT support for common problems, slightly mitigating this barrier.

5.2. Assistants

The assistant, as a more sophisticated feature, mainly offering the advantages of being shareable with colleagues and of being given a persona with information about the expected outcome, can help with several more knowledge barriers. Like the chatbot, its knowledge comprises only the documents and instructions added directly by the creator or editor. This still makes it slightly useful for overcoming communication problems (Barrier No. 1), as it can help with the formulation and presumably with the scheduling of information flows, but only with manual input and when directly asked. Knowledge transfer (No. 2), if in text form, can be greatly improved, as information transferred via the assistant allows the receiver to ask questions and receive individual answers, decreasing the occurrence of misunderstandings without the need to organize a meeting with potentially interested colleagues. Subgrouping (No. 15) and lack of dialogue culture (No. 18) cannot be completely overcome through the utilization of assistants, but the authors suggest that this feature might at least slightly mitigate these barriers: through a widely shared assistant, even people who have no direct contact through ICT or face-to-face (F2F) can gain access to information from the other party, a basic requirement being that the assistant is used and kept up to date with current information. As with the chatbot, the potential of this feature to mitigate language barriers (No. 16) is rated as great, for the reasons given previously. The broad accessibility of assistants makes most of the organizational barriers likely to be slightly mitigated: an assistant containing the curricula vitae of the employees (given their consent) may slightly improve the certainty about knowledge in the team (No. 29), as well as help with finding the right competencies and involving relevant knowledge holders (No. 37). A centralized supporting assistant could be used to make the knowledge processes more transparent (No. 36) and give employees individual instructions on where to preserve knowledge, offering great potential. The same assistant could then be used to gather information about where to find certain knowledge, making access slightly more efficient (No. 39); however, since it has neither direct access to the knowledge nor the possibility of altering it, the core problem (e.g. unorganized file storage) remains. Through the possibility of receiving instructions from multiple people, or of multiple people adding files to the assistant, the assistant can be provided with knowledge from different sources, which, in combination with the above-mentioned abilities to perform consistency checks, check for duplicates, and combine knowledge, poses great potential to improve data quality (No. 40). Possible aspects of data quality drawn from Liu et al. (2022) include validity, consistency, comprehensiveness, and accuracy. Through their broadcasting-like ability, assistants can also serve as educators (No. 46).

5.3. Prompt library

The prompt library, which can be combined with either of the previously explained features, is of a different kind: it cannot generate responses but preserves instructions to the model and makes them shareable. It therefore helps slightly with all barriers that could be mitigated through a well-designed, preservable instruction on how to transform a given text or document: Barriers No. 1, No. 2, No. 16, No. 40, and No. 46. All barriers that directly require the transfer of project-specific knowledge to other people have no mitigation potential, as direct information exchange through the prompt library is not possible.

5.4. File search

The final feature to be examined is the file search. While file search can neither modify text or documents nor directly facilitate knowledge transfer, it excels at locating documents and content even when, for example, filenames or storage locations are unclear. This capability therefore holds significant potential for reducing uncertainty about knowledge within the team (Barrier No. 29). Additionally, by providing direct hyperlinks to the files located, it can greatly enhance the efficiency of knowledge access (No. 39). There is also slight potential for mitigating qualification deficits (No. 46) by facilitating the finding of necessary instructions.
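A simplified sketch of this behaviour, under the assumptions that the connected storage is a local directory of plain-text files and that relevance can be approximated by keyword overlap, is shown below; it returns only file paths (the analogue of document links), not summaries. The directory path, function name search_files, and scoring logic are illustrative and do not represent Langdock's actual implementation, which relies on semantic, LLM-based matching.

```python
# Sketch: content-based file search that returns links (paths) rather than summaries.
from pathlib import Path


def search_files(root: str, query: str, top_k: int = 5) -> list[str]:
    """Rank plain-text files under root by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    scored = []
    for path in Path(root).rglob("*.txt"):                     # hypothetical: plain-text files only
        words = set(path.read_text(encoding="utf-8", errors="ignore").lower().split())
        overlap = len(query_terms & words)                     # content match, not filename match
        if overlap:
            scored.append((overlap, str(path)))
    return [p for _, p in sorted(scored, reverse=True)[:top_k]]


# Hypothetical connected storage location and query.
print(search_files("/mnt/shared_drive", "tolerance stack-up drawing DS-104"))
```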

5.5. Summary

The analysis suggests that the different LLM-based features have significant potential to mitigate knowledge barriers when implemented appropriately. Table 3 summarizes the respective mitigation potential of each feature for each barrier. Chatbots provide immediate, context-specific responses that support text structuring, consistency verification, and error detection, which contributes to improved communication. Assistants effectively facilitate direct knowledge sharing and address multiple barriers, particularly those related to knowledge transfer and organizational processes. Their capacity for cross-team sharing and integration of multiple knowledge sources makes them valuable tools for collaborative environments. While the prompt library and file search have more focused applications, they address critical barriers that other features cannot effectively resolve. The prompt library offers standardized approaches to recurring tasks, making it particularly effective for addressing qualification deficits and establishing consistent documentation practices. File search provides capabilities for reducing uncertainty about available knowledge and improving access efficiency across organizational silos.

Table 3. Potential of LLM-based features for the mitigation of knowledge barriers

6. Conclusion and research agenda

The authors have proposed that great potential lies in the abilities of LLMs as tools to improve knowledge transfer, provided the features are matched to the knowledge barriers they are most likely to mitigate. As the extent to which each barrier is prevalent is, as mentioned before, highly project-specific, our findings can serve as a selection guide for the application of LLMs.

6.1. Critical evaluation

This work proposed that different features of LLM-based applications may have different potential to overcome knowledge barriers in product development. However, this preliminary study has several key limitations: (1) the examination of only two GDPR-compliant platforms, (2) reliance on a single source for knowledge barrier identification, (3) theoretical assessment without empirical validation, and (4) the currency of the examined features in the rapidly evolving field of LLMs. These limitations are why next steps have been defined in the research agenda. Importantly, this study was conducted independently, with no financial or personal ties to the platforms analyzed. Selection was based solely on relevance and availability, with no external influence on the results.

6.2. Research agenda

The findings of this preliminary study therefore need to be validated in the next step in both theoretical and real-life environments in order to overcome the above-mentioned limitations. We propose a four-step research agenda to address these limitations:

  • Expand the analysis to include several more LLM platforms to provide a more representative assessment of available technologies and their capabilities, tackling limitations (1) and (4)

  • Expand the analysis of knowledge barriers in order to ensure the completeness of the barrier catalogue, mitigating limitation (2)

  • Barrier-specific validation in a controlled environment: measuring the effectiveness of LLM features against each knowledge barrier (e.g. spelling mistake count, translation accuracy) to overcome limitation (3)

  • Multiple case studies across different organizational contexts that utilize these LLM-based applications, assessing real-world applicability and identifying contextual factors through questionnaires and interviews, and measuring both occurrence rates and effectiveness across different barrier categories, to overcome limitation (3)

By implementing this research agenda, we will establish a more robust evidence base for the practical application of LLM-based solutions in product development. This will contribute to both theoretical understanding and practical implementation guidance for organizations seeking to leverage these technologies to enhance knowledge transfer and management.

References

Adebowale Adeleke. (2024). Generative ai 4 layers. AWS re:Post. https://repost.aws/de/articles/AR0SHZ2HNsQ1mrhdOrZuCo0w/generative-ai-4-layers
Auffahrt, B. (2023). Generative AI with LangChain: Build large language model (LLM) apps with Python, ChatGPT and other LLMs. Packt Publishing Ltd. https://portal.igpublish.com/iglibrary/obj/PACKT0006980
Bick, M., Hanke, T., & Adelsberger, H. H. (2003). Prozessorientierte Analyse der Barrieren der Wissens(ver)teilung. Industrie Management, 19(3), 37-40.
Cantamessa, M., Montagna, F., Altavilla, S., & Casagrande-Seretti, A. (2020). Data-driven design: The new challenges of digitalization on product design and development. Design Science, 6, e27. https://doi.org/10.1017/dsj.2020.25
Cornelius, A. (2019). Künstliche Intelligenz. Haufe-Lexware GmbH & Co. KG. https://www.wiso-net.de/document/HAUF__9783648132043127
Cramton, C. D. (2001). The mutual knowledge problem and its consequences for dispersed collaboration. Organization Science, 12(3), 346-371.
Design Council. (2005). The Double Diamond. Design Council. https://www.designcouncil.org.uk/our-resources/the-double-diamond/
Gärtner, A. E., & Göhlich, D. (2024). Automated requirement contradiction detection through formal logic and LLMs. Automated Software Engineering, 31(2). https://doi.org/10.1007/s10515-024-00452-x
Göpfert, J., Weinand, J. M., Kuckertz, P., & Stolten, D. (2024). Opportunities for large language models and discourse in engineering design. Energy and AI, 17, 100383. https://doi.org/10.1016/j.egyai.2024.100383
Grieb, J. C. (2008). Auswahl von Werkzeugen und Methoden für verteilte Produktentwicklungsprozesse (1st ed.) [Doctoral thesis, Technical University of Munich]. Dr. Hut.
Hadi, M. U., Al-Tashi, Q., Qureshi, R., Shah, A., Muneer, A., Irfan, M., Zafar, A., Shaikh, M., Akhtar, N., Wu, J., & Mirjalili, S. (2023). Large language models: A comprehensive survey of its applications, challenges, limitations, and future prospects. TechRxiv. https://doi.org/10.36227/techrxiv.23589741.v1
Kern, E.-M., & Müller, J. C. (2019). Digitales Wissensmanagement oder die Frage: Kann Wissen online gemanagt werden? In T. Kollmann (Ed.), Handbuch Digitale Wirtschaft (pp. 1-21). Springer Fachmedien. https://doi.org/10.1007/978-3-658-17345-6_65-1
Kern, E.-M., Sackmann, S., & Koch, M. (2009). Wissensmanagement in Projektorganisationen – Instrumentarium zur Überwindung von Wissensbarrieren. In F. Keuper & F. Neumann (Eds.), Wissens- und Informationsmanagement (pp. 53-69). Gabler Verlag. https://doi.org/10.1007/978-3-8349-6509-7_3
Kersten, W., & Kern, E.-M. (2003). Distributed product development as a core element of supply chain management. EFTA 2003. 2003 IEEE Conference on Emerging Technologies and Factory Automation. Proceedings (Cat. No.03TH8696), 2, 35-39. https://doi.org/10.1109/ETFA.2003.1248667
Khan, A. (2024). Artificial Intelligence: A Guide for Everyone. Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-56713-1
Langdock GmbH. (2024a). Creating an Assistant. Resources: Assistant Creation. https://docs.langdock.com/resources/assistant-creation
Langdock GmbH. (2024b). The all-in-one platform. Langdock. https://www.langdock.com/de
Langdock GmbH. (2024c). Model Guide. Resources: Model Guide. https://docs.langdock.com/resources/models
Langdock GmbH. (2024d). Prompt Library. Resources: Prompt Library. https://docs.langdock.com/product/chat/prompt-library
Langdock GmbH. (2024e). Search. Langdock. https://www.langdock.com/de/products/search
Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Küttler, H., Lewis, M., Yih, W., Rocktäschel, T., Riedel, S., & Kiela, D. (2021). Retrieval-Augmented Generation for knowledge-intensive NLP tasks. NIPS'20: Proceedings of the 34th International Conference on Neural Information Processing Systems (pp. 9459-9474). Curran Associates Inc. https://doi.org/10.48550/arXiv.2005.11401
Liu, A., Wang, Y., & Wang, X. (2022). Data-Driven Engineering Design. Springer International Publishing. https://doi.org/10.1007/978-3-030-88181-8
Meyer-Eschenbach, A., & Blessing, L. (2005). Experience with distributed development of household appliances. DS 35: Proceedings ICED 05, the 15th International Conference on Engineering Design, Melbourne, Australia, 15.-18.08.2005 (pp. 160-161). DESIGN.
Meyer-Eschenbach, A., Gautam, V., Wildung, W., & Schüler, P. (2008). Experience with cultural influences in distributed German-Chinese development project cooperations. DS 48: Proceedings DESIGN 2008, the 10th International Design Conference, Dubrovnik, Croatia (pp. 1049-1056). DESIGN.
Moon, J. T., Lima, N. J., Froula, E., Li, H., Newsome, J., Trivedi, H., Bercu, Z., & Wawira Gichoya, J. (2024). Towards inclusive biodesign and innovation: Lowering barriers to entry in medical device development through large language model tools. BMJ Health and Care Informatics, 31(1). https://doi.org/10.1136/bmjhci-2023-100952
Naveed, H., Khan, A. U., Qiu, S., Saqib, M., Anwar, S., Usman, M., Akhtar, N., Barnes, N., & Mian, A. (2024). A Comprehensive Overview of Large Language Models (arXiv:2307.06435). arXiv. https://doi.org/10.48550/arXiv.2307.06435
OpenAI. (2022). ChatGPT for enterprise. OpenAI. https://openai.com/chatgpt/enterprise/
Paetzold-Byhain, K., Saske, B., Berger, E., Schwoch, S., Dammann, M. P., & Kretzschmar, M. (2024). Key concepts, potentials and obstacles for the implementation of large language models in product development. Entwerfen Entwickeln Erleben 2024, 381-393.
Panopto. (2018). Valuing Workplace Knowledge. Panopto. https://www.panopto.com/resource/ebook/valuing-workplace-knowledge/
Riemer, K., Schellhammer, S., & Meinert, M. (Eds.). (2019). Collaboration in the Digital Age: How Technology Enables Individuals, Teams and Businesses. Springer International Publishing. https://doi.org/10.1007/978-3-319-94487-6
Vartiainen, M., & Jahkola, O. (2013). Pros and cons of various ICT tools in global collaboration – a cross-case study. In S. Yamamoto (Ed.), Human Interface and the Management of Information. Information and Interaction for Learning, Culture, Collaboration and Business (pp. 391-400). Springer. https://doi.org/10.1007/978-3-642-39226-9_43
Walter, B., Rapp, S., & Albers, A. (2016). Selecting appropriate tools for synchronous communication and collaboration in locally distributed product development. DS 85-2: Proceedings of NordDesign 2016, Volume 2, Trondheim, Norway, 10th-12th August 2016 (pp. 258-267). DESIGN.
Wang, S., & Noe, R. A. (2010). Knowledge sharing: A review and directions for future research. Human Resource Management Review, 20(2), 115-131. https://doi.org/10.1016/j.hrmr.2009.10.001
Yang, J., Jin, H., Tang, R., Han, X., Feng, Q., Jiang, H., Yin, B., & Hu, X. (2023). Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond. ACM Transactions on Knowledge Discovery from Data, 18(6), 1-32.