1. Introduction
Facing increasingly dynamic market environments and global challenges such as climate change and resource scarcity, companies are under constant pressure to innovate and remain competitive. A critical enabler of innovation is technological advancement. However, identifying and understanding the drivers of technological change has become increasingly complex. In particular, shorter technology cycles, the rapid emergence of new technologies, and the ever-growing volume of data overwhelm manual foresight approaches. As a result, making informed business and engineering design decisions for the future is becoming more complicated. Companies and designers must adapt quickly to disruptive change, making it crucial to gather relevant data and insights about evolving markets and emerging technologies.
This is where Technology Foresight comes in, systematically identifying and analysing emerging technologies and the uncertainties they bring. By gathering actionable insights into technological developments and their potential impact on companies or entire industries, Technology Foresight enables engineering teams to decide which design options to pursue and which features to develop for future products. However, the growing volume of potentially relevant data for Technology Foresight poses a significant challenge, often exceeding the limits of manual processing capabilities. Without robust filtering and data analysis tools, companies risk overlooking critical technology developments and opportunities. Furthermore, unstructured data is often underutilized in the process, limiting informed and data-driven decision making.
Previous research has emphasised the potential of data-driven approaches, including machine learning and text mining, to process large data sets, recognize patterns and thus identify emerging technologies. While these methods are significant steps forward, they typically provide only partial automation and often lack the ability to synthesize insights into actionable intelligence. At the same time, the recent emergence of Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs) opens new opportunities to automate processes more thoroughly. However, the integration of these novel technologies into end-to-end workflows has been insufficiently explored in research.
Against this background, leveraging GenAI in Technology Foresight offers the potential to bridge information gaps, enable data-driven decisions, strengthen competitive advantages and ultimately accelerate the innovation and design process. This paper analyses the relevance of AI automation in Technology Foresight and bridges the gap to systematically integrate GenAI into TF processes. First, we provide a detailed problem analysis, followed by an overview of existing solution approaches. Second, we present a novel framework for AI-based Technology Foresight that uses Retrieval Augmented Generation (RAG) to combine data collection with the powerful capabilities of LLMs for enhanced efficiency. Finally, the key results are summarized and a discussion on future developments is given, focusing on how RAG-based systems can make TF more efficient and actionable.
2. Research design
The research presented in this paper follows the Design Research Methodology (DRM) proposed by Blessing and Chakrabarti (Blessing & Chakrabarti, 2009). DRM comprises four distinct phases: research clarification, descriptive study I, prescriptive study, and descriptive study II.
This paper presents the phases research clarification and descriptive study I. In the first phase, we identified the primary challenges in Technology Foresight and outlined the research needs. This included extensive literature reviews on foresight methodologies, the concept of Data-Driven Foresight, and the role of LLMs in supporting Technology Foresight.
Building on these insights, we entered descriptive study I, where we systematically analysed existing approaches. This included a thorough examination of traditional TF methods (e.g., patent analysis, statistical trend detection), as well as more advanced AI- and data-driven solutions (e.g., text mining, topic modeling frameworks). In addition, expert interviews were conducted with practitioners from different industries to narrow down the relevant challenges such as rapid technological change, growing data volumes, and the inefficiency of manual foresight activities. The main findings of the descriptive study I are presented in the Problem Analysis and State of the Art sections. By synthesizing these findings, we derived requirements for a holistic, automated TF system.
In this paper, we can only give a brief insight into the prescriptive study phase, as it is still work in progress. We propose our GenAI-based solution approach using RAG. This concept, discussed in the Solution Approach section, aims to address the requirements identified in the previous phase. However, this paper does not include final evaluations. Phases three and four will therefore be completed in the future and presented in a subsequent paper. In descriptive study II, the final phase of our DRM cycle, we plan to evaluate the effectiveness of our AI-based TF system in real-world contexts.
3. Problem analysis
To provide a comprehensive foundation, we structure the problem analysis along four key aspects. Section 3.1 offers an introduction to TF, outlining its central role in identifying trends and its limitations. Section 3.2 examines the implications of increasing data volumes and the need for AI-driven analysis. In Section 3.3, we emphasise the need for automated TF, while Section 3.4 derives requirements for a solution approach, which address the shortcomings of current processes. This structure provides a substantial basis and clear rationale for developing an AI-enabled approach to improving efficiency and decision making in technology foresight.
3.1. Technology foresight
Technology Foresight (TF) refers to a systematic long-term study to identify future key technologies with significant economic and social benefits. Originally conceived by Ben R. Martin in 1995, the approach seeks to analyse and forecast technological developments that could have a significant future impact. As a strategic tool, technology foresight focuses on identifying areas of research and emerging technologies that are likely to create economic value, with particular attention to detecting weak signals - subtle early indicators of potential future trends (Martin, 1995; Minghui et al., 2022; Mühlroth and Grottke, 2018). In a corporate context, TF supports the early identification of technology trends and enables proactive action in strategic decision making (Mühlroth and Grottke, 2018). Figure 1 illustrates the TF process. According to Reger, it can be divided into six phases: (1) formulating information needs or selecting the search area, (2) selecting information sources, methods and instruments, (3) collecting data, (4) filtering, analysing and interpreting the information, (5) preparing decisions and (6) evaluating proposals and decision-making on the start or financing of a project or program (Reger, 2001).

Figure 1. Technology Foresight process
The growing volume of unstructured data requires automated methods and systems to efficiently identify weak signals and trends (Mühlroth and Grottke, 2018). Furthermore, with the expansion of foresight initiatives, increased automation becomes essential to reduce human bias and limited processing capability in data analysis and to ensure objective, data-driven decision-making processes (Mühlroth and Grottke, 2018).
In our research, we identified three core challenges in TF: first, defining a precise search strategy that identifies which technologies are relevant in the given context; second, streamlining the collection of vast, diverse data sources to capture comprehensive insights; and third, efficiently analysing and communicating technological knowledge through structured technology profiles.
3.2. Data and artificial intelligence (AI)
Artificial Intelligence (AI) refers to the simulation of human cognitive functions, such as learning, reasoning and decision-making, by machines. AI includes various approaches, such as rule-based systems, machine learning and deep learning, which enable computers to analyse data, recognise patterns and make predictions or automate tasks without explicit programming. Generative AI (GenAI) includes AI solutions that aim to generate new data or content, such as text, images, video, audio or code. GenAI models use advanced machine learning techniques, such as transformer models, to process large amounts of data and generate new data like that on which they were trained. This involves using models that have been trained on large datasets to generate creative and novel content. Unlike discriminative models, which sort data according to differences, generative models can generate entirely new content (Hutschek et al., 2023).
It is becoming increasingly clear that data plays a key role in the innovation process (Trabucchi and Buganza, 2019; Thönnessen, 2019). Beyond the utilization of individual data sources, the combination of diverse data sources opens up numerous new potentials. These can be internal data as well as external data generated by the customer (Hutchinson, 2021). A challenge lies in effectively analysing and evaluating this data to gain relevant insights for decision-making (Thönnessen, 2019). Another issue is data availability. While some data sources are publicly available, much relevant information is held in proprietary databases, journals, behind paywalls or in patents, making access difficult. In addition, there are legal concerns about the use and processing of data, particularly regarding data protection regulations and intellectual property rights. Compliance with regulations such as the General Data Protection Regulation (GDPR) presents additional challenges for companies and can severely limit the amount of usable data.
At the same time, human cognitive capacities are reaching their limits, resulting in a discrepancy between the amount of information available and the ability to process it (Hutchinson, 2021). Handling the ever-growing flood of information and the rapid development and number of new technologies represent an ever-increasing challenge for companies worldwide. In recent years, the volume of data generated worldwide has increased exponentially. Between 2010 and 2015, the amount of digital data generated each year rose from 2 to 15.5 zettabytes. In 2022, this figure had already reached 103.6 zettabytes and is forecast to grow to 284.3 zettabytes by 2027 (IDC, 2023). These enormous amounts of data serve as the basis for the rapid identification and evaluation of new technologies in various industries such as IT and manufacturing. Although the information is theoretically available, manual scouting processes are often unable to process all the available data within a reasonable time. The enormous amount of data overwhelms the human capacity to select and interpret relevant information and leads to inefficiencies in innovation management. Data synthesis is also a particularly central challenge. The entirety of the data is made up of a wide variety of perspectives, such as publications, news, technology-relevant information, and even patents. Depending on the context of the project, information must be combined, used, and interpreted differently. The often manual and time-consuming nature of the tasks and the limited information processing capability currently make it difficult to scale technology foresight to the desired performance level (Ellermann et al., 2023; Rohrbeck et al., 2006; Wagner et al., 2020; Bullinger, 2012; Korell and Schloen, 2012).
As a result of this overload, decision-making processes are delayed as technological trends and opportunities are not recognized and exploited in good time. This not only leads to financial losses due to missed opportunities but also to a technological gap compared to competitors who can react more quickly to trends.
To meet these challenges, new approaches for information collection and processing are required. The use of tools and AI is a promising approach (Savioz, 2004; Wagner et al., 2020; Bullinger, 2012), as they have the potential to efficiently analyse the growing volumes of data and automatically identify emerging trends. Unlike traditional methods that merely sort, group and deliver structured data, GenAI enables further automation of the process. While traditional approaches leave the combination and interpretation of information to the user, GenAI is able to go a step further by not only analysing large amounts of data, but also processing it directly into structured, usable results and creating specific content. This makes the entire process, from data collection and analysis to content creation, more efficient, faster and more scalable. Furthermore, it is possible to add company-specific context to the task for better results without any need to retrain or change the AI model. This approach is known as Retrieval Augmented Generation (RAG) and yields high-quality results without the need for costly and time-intensive task- and company-specific fine-tuning.
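As an illustration of this idea, the following is a deliberately simplified, library-free sketch of retrieval-augmented prompting; the keyword-overlap retriever and the example documents are hypothetical stand-ins for a real embedding model and vector store:

```python
# Minimal RAG-style prompt assembly: retrieve company-specific context
# and prepend it to the user's question, so the language model can use
# it without any retraining. Keyword overlap stands in for a real
# embedding-based retriever.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(documents, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Inject retrieved snippets as context ahead of the question."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Solid-state batteries promise higher energy density for vehicles.",
    "Our plant in Munich produces lithium-ion battery cells.",
    "Quarterly revenue grew by five percent.",
]
prompt = build_prompt("What battery technology is emerging?", docs)
print(prompt)
```

The point of the pattern is that the model's task-specific knowledge lives in the retrieved context, not in the model weights, which is why no fine-tuning is needed.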
To summarize, manual technology foresight processes are inefficient and too slow to keep up with the growing amount of information and the rapid development of technologies. Companies are faced with the challenge of identifying, reviewing, and evaluating technologies in a time-consuming process. This harbours the risk that important technologies are overlooked.
3.3. Need for automated technology foresight
As shown, the identified challenges have an impact on the market position of whole industries. If technological trends are not identified in time or are even missed, this inevitably leads to a loss of competitive advantage. This challenge affects not only large companies with extensive R&D activities, but also small and medium-sized enterprises. Moreover, the manual and often repetitive nature of technology foresight binds considerable resources. As a result, a lever for efficiency is missing.
In addition, the manual identification and assessment of technology trends will become increasingly difficult in the coming years due to the technology's inherent complexity and flood of information. Consequently, there is an urgent need for scalable and automatable solutions. Based on the needs identified during the interviews and an analysis of the technological potential of LLM and RAG, the following aspects should be explored:
1. Search strategy: Determining the search strategy requires a clear definition of the information needs as well as relevant search terms and their aggregation into technological search fields to obtain impulses for TF. This ensures that companies focus on precisely those areas that match their strategic needs.
2. Data collection: Real-time data is particularly valuable for TF. By systematically collecting and structuring these data streams, companies can establish a comprehensive information base.
3. Technology profiles: Analysing the collected data and synthesising the information into technology profiles requires a lot of manual resources and cognition. These reports must provide concise and comprehensive information about the respective technology. The structured presentation and communication of the information enables companies to discuss new technologies and derive actionable insights for the future.
3.4. Deriving requirements to solve the identified challenges
To connect the results of phase one of the DRM with the next step, the descriptive study I, we transfer the challenges into requirements for a solution. This enables the efficient comparison of existing solutions and the development of our own solution. A structured approach to deriving system requirements is essential to ensure the effective implementation of AI-driven technology foresight. The identification of relevant stakeholders and sources of requirements serves as a fundamental basis. Key stakeholders include research departments, innovation managers and product developers within companies, as well as industry research associations. In addition, compliance with relevant standards and regulations is crucial, particularly in the areas of data protection (e.g. GDPR), IT security and industry standards for innovation management, such as ISO 56000. The solution must meet specific functional requirements:
- Automated generation and customisation of search strategies
- Integration with real-time data sources for continuous updates
- Automated generation of technology profiles
Beyond functional aspects, technical requirements must ensure a scalable and adaptable system architecture. The use of a RAG approach enables context-aware information retrieval, ensuring that technology scouting is supported by real and relevant data. To accommodate diverse data formats and sources, the system must support file types such as .csv, .pdf, .txt, and .docx, and provide API interfaces for integration with external databases. In addition, modularity and scalability are critical to enable the system to adapt to evolving technological and business requirements.

The proposed approach is expected to deliver significant benefits, such as increased efficiency by reducing manual effort and faster identification of relevant technology trends. It also aims to improve the quality of decision making through structured data integration and improved traceability through metadata and source documentation. Scalability will be ensured by a modular system structure that allows flexible expansion with additional data sources and AI models.

Ethical considerations must also be addressed to ensure the responsible use of AI. Transparency in data processing and decision making should be a core principle, supported by clear documentation of data sources and algorithmic processes. Compliance with data protection regulations, including pseudonymisation, is essential to mitigate security risks. By integrating these requirements, the proposed solution aims to provide a robust, scalable and ethically responsible approach to AI-enabled technology foresight.
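To illustrate the file-type requirement, a minimal loader dispatch could look as follows; the loader functions are hypothetical placeholders (a real system would plug in PDF and DOCX parsers), and only the routing pattern itself is the point:

```python
from pathlib import Path

# Hypothetical loader registry: route each supported file type to a
# dedicated parser so heterogeneous sources end up as plain text.
def load_txt(path: str) -> str:
    return Path(path).read_text(encoding="utf-8")

def load_csv(path: str) -> str:
    return Path(path).read_text(encoding="utf-8")  # placeholder: real code would parse rows

def load_pdf(path: str) -> str:
    raise NotImplementedError("plug in a PDF parser here")

def load_docx(path: str) -> str:
    raise NotImplementedError("plug in a DOCX parser here")

LOADERS = {".txt": load_txt, ".csv": load_csv, ".pdf": load_pdf, ".docx": load_docx}

def load_document(path: str) -> str:
    """Dispatch to the right loader based on the file extension."""
    suffix = Path(path).suffix.lower()
    if suffix not in LOADERS:
        raise ValueError(f"Unsupported file type: {suffix}")
    return LOADERS[suffix](path)
```

Keeping the registry as a plain dictionary also supports the modularity requirement: new formats can be added without touching the dispatch logic.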
4. State of the art
At the time of writing, there were no approaches specifically using GenAI for technology foresight. However, we expect research to be published in the near future. Nevertheless, we have widened the scope of the search to include similar topics in the state of the art that we hope to build upon.
4.1. Data-driven foresight
Numerous data-driven approaches already exist, which are based on mathematical-statistical methods, for example, and are used in TF (Walde, 2011). A particular potential of these methods lies in the combination and linking of different data sources and types to gain technological knowledge (Keller and Gracht, 2014; Zeller, 2003). Unstructured text data is also integrated into these approaches and analysed using methods such as data mining to gain further insights. Large amounts of data are essential for meaningful results (Weigand et al., 2021; Zeller, 2003). In the context of TF, such data sources can be publications, patents, news, and social media content. Increasingly, both the processing of information and the subsequent evaluation are being automated as much as possible to achieve more objective results (Weigand et al., 2021; Antons et al., 2020; Zeller, 2003). The aim of all approaches is to expand and optimize the analyses to achieve greater efficiency and accuracy and thus increase the plausibility of the results (Keller and Gracht, 2014; Geurts et al., 2022; Lee, 2021).
Several research papers have already dealt intensively with pattern recognition methods in the context of technology foresight and scouting. In the field of emerging technology identification, for example, Xu et al. (2021) developed a topic modelling framework that identifies emerging technologies by analysing patents and scientific publications. They use the Topical N-Gram (TNG) model to extract topics from text corpora that could indicate potential technology trends. In this context, Lee et al. (2018) use patent-related indicators in conjunction with machine learning algorithms to identify emerging technologies at an early stage.
In the area of weak signal detection, Griol-Barres et al. (2020) use a system that applies text mining and natural language processing (NLP) to detect weak signals in text data. They analyse large amounts of text in news and social media to detect early indications of technological changes. Thorleuchter et al. (2014) use semantic methods to detect weak signals and analyse text data to identify unexpected trends in specific technology areas. Their work focuses on detecting signals that could indicate emerging trends using semantic tracing methods. Park and Kim (2021) apply Structural Topic Modelling (STM) to analyse weak signals in the field of renewable energy and combine the concept of weak signals with STM to identify early changes in academic topics. In the field of trend analysis, Yoon and Kim (2011) have developed a method for the automated identification of TRIZ evolution trends by applying text mining to patents to identify trends in patent documents that could indicate the development of a particular technology field. In the research area of technology road mapping, Kim and Geum (2021) combine topic modelling and link prediction to create a data-driven roadmap. They extract topics from the analysed data and link them using link prediction to visualize possible future developments in a roadmap. Choi et al. (2013) use an SAO-based text mining method to create a technology roadmap. They analyse patent information and identify SAO (Subject-Action-Object) patterns to depict technological developments. For Technology Opportunity Analysis, Yoon et al. (2015) developed a Technology Opportunity Discovery (TOD) framework that analyses existing technologies and products.
They use a function-based TOD structure to assess market potential and technological feasibility. Li et al. (2023) use SAO semantic mining and outlier detection to identify technology opportunities by detecting anomalies in existing technological developments that could indicate new opportunities. In the field of social impact analysis, Kwon et al. (2017) apply LSA text mining techniques to assess the social impact of new technologies such as drone technology. They analyse social structures and behaviours that could be influenced by such technologies.
In addition to these existing approaches, the use of AI methods in the field of Technology Foresight is increasingly being discussed (Geurts et al., 2022; Mühlroth and Grottke, 2020). The emergence of GenAI, information retrieval, and LLMs such as ChatGPT has greatly stimulated this area of Technology Foresight (Dumitrescu and Hölzle, 2023; John et al., 2023). Particularly noteworthy here is the advantage of increased efficiency in the automation of manual and time-consuming activities (Lee, 2021). The comprehensive use of so-called metadata provides further information about the origin of the documents and sources (Walde, 2011).
While these approaches demonstrate the potential of data-driven TF, most solutions focus on pattern recognition rather than guiding users in synthesizing or interpreting results. As a result, manual processes are still frequently required. This underscores the need for future TF approaches that offer more comprehensive decision support and enhanced capabilities for interpreting technology information. GenAI and LLMs hold significant promise for bridging this gap by facilitating advanced knowledge synthesis and actionable insight generation.
4.2. Large Language Models (LLMs)
The development of LLMs has made enormous progress in recent years. While they were initially based on simple statistical methods, more recent models such as GPT-4 are based on the use of neural networks (transformer architecture) and large amounts of data. This has significantly improved the quality and relevance of the generated content.
LLMs show impressive capabilities, but despite these advances, important problems remain: timeliness of information, hallucinations, and non-transparent generation processes (Gao et al., 2024). In their basic form, LLMs rely on a fixed, static body of knowledge. This is not continuously updated, which, depending on the information required, can lead to a lack of current and context-relevant information. This is where the RAG method comes in, combining the generative capabilities of an LLM with external and dynamic data sources. This enables the integration of domain-specific knowledge from in-house databases and increases the accuracy and credibility of the generated output, especially for knowledge-intensive tasks. The combination of intrinsic LLM knowledge with large, dynamic datasets from databases forms a broad knowledge base for a variety of complex applications (Gao et al., 2024). This enables companies to use relevant and up-to-date information efficiently. There are numerous variants of Retrieval Augmented Generation, for example:
Naive RAG comprises the basic steps of indexing, retrieval, and generation and is often referred to as a "retrieve-read framework". It consists of the processing and preparation of documents in a standardized text form and their embedding in vector databases so that the information relevant to queries can be retrieved quickly. The advantage of Naive RAG lies in its simplicity and efficiency, but it poses challenges in terms of the accuracy and relevance of the information retrieved. Advanced RAG builds on Naive RAG and implements optimizations in the pre- and post-query phases. This includes techniques for refining indices and queries as well as re-ranking the retrieved information. These methods aim to improve the accuracy and relevance of the results, thereby increasing the relevance of the information for specific queries. Modular RAG, on the other hand, offers more flexibility and allows the replacement or customization of specific modules within the individual RAG pipeline. This enables an adaptive and iterative structure in which new modules such as storage and search modules can be added. The advantage lies in the improved adaptability to different tasks and scenarios (Gao et al., 2024).
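The indexing-retrieval-generation loop of Naive RAG can be sketched as follows; the bag-of-words `embed` function is a deliberately crude stand-in for a neural embedding model, and the example chunks are invented for illustration:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real systems use neural embeddings."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# 1. Indexing: embed every document chunk once and keep the vectors.
chunks = [
    "perovskite solar cells reach record efficiency",
    "new battery chemistry extends vehicle range",
    "football season opens next week",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

# 2. Retrieval: embed the query and rank chunks by cosine similarity.
query = "efficiency of perovskite solar cells"
ranked = sorted(index, key=lambda pair: -cosine(embed(query), pair[1]))

# 3. Generation: hand the best chunks to the LLM as context (stubbed here).
top_chunk = ranked[0][0]
print(top_chunk)
```

Advanced RAG would add re-ranking and query refinement around step 2, and Modular RAG would make each of the three steps a replaceable module.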
5. Solution approach
The goal of our solution approach is to develop an automated TF system that handles large data volumes efficiently and meets the need for more scalable, AI-driven processes. By integrating Generative AI and RAG, we aim to automate manual foresight activities - focusing on the creation of search strategies, data collection, and technology profile generation. While many machine learning studies focus on clustering or topic modelling, our research explores how substantial textual content can be analysed and newly generated with Generative AI.
The proposed system is built on several key components to ensure modularity, scalability, and efficiency. First, the RAG architecture enables the combination of internal and external data sources with Generative AI models (see Figure 2). By retrieving current, real-time data and integrating it into the generation process, RAG significantly enhances analysis depth. Furthermore, the use of metadata - such as data provenance and timestamps - improves traceability, allowing users to verify sources and thereby increase the credibility of results.

Figure 2. RAG architecture
Building upon the RAG framework, Llama Index processes and vectorizes large data sets, storing them locally in a vector database for downstream analysis. LangChain handles prompt construction, ensuring that queries submitted via the LLM API are dynamically tailored to each section of a search strategy or technology profile. By using custom prompt templates, the system produces context-specific responses, boosting both efficiency and precision. This combination creates a robust framework for data processing and data retrieval, which forms the core of the solution (see Figure 3).
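The template-based prompt construction can be illustrated without the actual LangChain classes; the section names and template wording below are hypothetical, showing only the pattern of pairing reusable templates with runtime variables:

```python
# Section-specific prompt templates, mimicking the LangChain pattern of
# combining a reusable template with runtime variables. The section
# names and wording are illustrative, not the system's actual prompts.
TEMPLATES = {
    "brief_description": (
        "Using only the context below, write a brief description of {technology}.\n"
        "Context:\n{context}"
    ),
    "challenges": (
        "Using only the context below, list the main challenges of {technology}.\n"
        "Context:\n{context}"
    ),
}

def build_section_prompt(section: str, technology: str, context: str) -> str:
    """Fill the template for one profile section with runtime values."""
    return TEMPLATES[section].format(technology=technology, context=context)

prompt = build_section_prompt(
    "brief_description",
    technology="solid-state batteries",
    context="Retrieved snippet: solid electrolytes replace liquid ones.",
)
print(prompt)
```

In the real system, the `{context}` slot would be filled with the chunks retrieved from the Llama Index vector store rather than a hand-written snippet.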
Another feature is the automation of search strategies. Based on user input, the system generates a search strategy, defining distinct search fields and associated keywords. These queries are then executed via the Google Search API and OpenAlex, allowing real-time retrieval of relevant publications, reports, and blog articles. To harness these data points, the most promising search results are scraped and stored in a .csv file. After that, they are integrated into the local vector database - complete with metadata - making them readily available for subsequent analysis and profile creation. The metadata enables the system to verify the original sources and the credibility of the generated information. As a result, the generated content is based on reliable and real information.
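For the OpenAlex part of this data collection step, a minimal sketch might look as follows. It only builds a works-search URL (OpenAlex exposes `https://api.openalex.org/works` with a `search` parameter) and writes a placeholder result row with metadata in CSV form; no request is sent, and the example row is invented:

```python
import csv
import io
from urllib.parse import urlencode

def openalex_search_url(keyword: str, per_page: int = 5) -> str:
    """Build an OpenAlex works-search URL for one keyword of a search field."""
    base = "https://api.openalex.org/works"
    return f"{base}?{urlencode({'search': keyword, 'per-page': per_page})}"

url = openalex_search_url("solid-state battery")

# Store scraped results together with metadata (source, date, URL) so the
# vector database can later trace every statement back to its origin.
rows = [  # placeholder result; a real run would parse the API response
    {"title": "Advances in solid electrolytes", "source": "OpenAlex",
     "date": "2024-01-15", "url": "https://example.org/1"},
]
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["title", "source", "date", "url"])
writer.writeheader()
writer.writerows(rows)
print(url)
print(buffer.getvalue())
```

The same row-plus-metadata shape is what gets loaded into the local vector database, so provenance travels with each document.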
The automatic creation of technology profiles is a structured and iterative process that is divided into different sections. Each profile is created piece by piece, starting with a brief description and the technology readiness level (TRL) and ending with identified challenges and key facts. It is important to note that each section has different requirements and best practices that must be strictly adhered to in order to achieve an optimal result. The integration of LLMs makes it possible to generate the outputs defined in the prompts. The OpenAI API is used for this purpose. Each LLM has different strengths and weaknesses, which is why a later comparison of the outputs is highly relevant. On this basis, suitable models can be defined for specific subtasks within the technology foresight process.
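This section-by-section assembly can be sketched as a loop over section definitions; the section names, prompt wording, and the `call_llm` stub (standing in for the OpenAI API call) are illustrative assumptions rather than the system's actual implementation:

```python
# Each profile section carries its own prompt and best-practice rules;
# the profile is assembled piece by piece, one section per LLM call.
SECTIONS = [
    ("brief_description", "Summarise {tech} in three sentences."),
    ("trl", "Estimate the technology readiness level (TRL) of {tech}."),
    ("challenges", "List key challenges of {tech}."),
    ("key_facts", "Give key facts about {tech}."),
]

def call_llm(prompt: str) -> str:
    """Stub for the OpenAI chat-completion call used in the real system."""
    return f"[generated answer for: {prompt}]"

def create_profile(tech: str) -> dict[str, str]:
    """Build a technology profile section by section."""
    profile = {}
    for name, template in SECTIONS:
        profile[name] = call_llm(template.format(tech=tech))
    return profile

profile = create_profile("perovskite solar cells")
print(profile["trl"])
```

Because each section is an independent call, different LLMs could later be assigned to different subtasks, as proposed above.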

Figure 3. left: front- and backend communication, right: data architecture
The implementation of the above solution is expected to significantly reduce manual and time-consuming processes in Technology Foresight such as determining a search strategy, collecting data and creating technology profiles. The automation is expected to not only accelerate data gathering but also improve the consistency and detail of the resulting profiles. Overall, our approach stands to enhance AI-based methods for data-intensive processes such as Technology Foresight, improving both efficiency and accessibility. By leveraging LLMs and intuitive interfaces, we aim to lower the entry barrier for non-technical users and further expand the adoption of advanced foresight solutions in diverse organizational contexts.
6. Discussion
Global challenges and accelerated technological change are exerting pressure on companies, and thus on design engineering teams, to continuously innovate. Conventional, manual Technology Foresight processes for gaining insights into technological developments are reaching their limits, as they struggle to process the existing volume of data efficiently. As a result, there is a strong need to improve the efficiency and effectiveness of Technology Foresight.
To address these challenges, the objective of this paper was to explore how GenAI and RAG could enhance Technology Foresight. In this paper, we presented a detailed problem analysis with derived requirements for a solution, an overview of the state of the art, as well as a prototypical solution approach that combines the capabilities of LLMs and RAG to efficiently process large amounts of data and support the user in synthesizing relevant data for sensemaking. Core elements of the solution approach are Llama Index for data processing, LangChain for dynamic prompting, and RAG for the integration of (real-time) data using API queries. The GenAI-based creation of search strategies and technology profiles within Technology Foresight has the potential for significant time savings by reducing manual effort while improving the quality and possibly even the accuracy of the results.
Several important insights emerged during the first prototypical implementation of the system. The step-by-step and fine-grained approach to process automation offered greater stability and accuracy. In particular, the creation of the search strategy and the profiles could be significantly optimized through customized sub-processes. An important finding was the definition of a solution space instead of a fixed number of results by the LLM. Last, the use of the RAG architecture proved to be very advantageous, as it enables the linking of different data sources. However, these observations stem primarily from initial testing rather than comprehensive industrial application.
Several limitations of our current approach highlight the need for further research. First, updating the vector index with new data is highly time-consuming, which currently restricts real-time use. Second, while RAG is expected to mitigate hallucinations, completely avoiding them remains a challenge. Finally, domain-specific fine-tuning of the models is not yet planned, but might be necessary for better results. Future developments should also focus on the modular integration of additional data sources, such as patents.
Building upon the current framework, the next steps include extensive testing and empirical validation to ensure that the proposed approach is practicable. Engaging practitioners will allow for a deeper evaluation of how AI-driven technology foresight can realistically perform in different contexts. Overall, our preliminary findings indicate that Generative AI can significantly enhance Technology Foresight by automating data-intensive tasks and providing efficient and more accurate insights. With further refinements, this approach could evolve into a robust, adaptable tool, ultimately helping firms make faster, better-informed decisions in a rapidly changing technological landscape.