1. Introduction
Design specification plays a crucial role in product development by bridging user needs with engineering solutions and defining clear project targets (Ulrich & Eppinger, 2016). It transforms design concepts into concrete engineering specifications. This ensures alignment and requires collaborative decision-making among design, engineering, and UX research disciplines. While collaborative efforts are essential for addressing complex problems and promoting innovation, interdisciplinary teams often encounter challenges such as communication barriers (Maier et al., 2021) and difficulties in translating qualitative UX insights into technical parameters (Fedosov et al., 2021). Although UX insights provide valuable contextual and emotional dimensions, they are often overlooked in traditional engineering specifications (Michailidou et al., 2013).
UX has become central to modern product design, emphasizing user needs, preferences, and behaviors to shape functionality and interaction. Rooted in human-centered design, UX methodologies use empathy-driven, iterative processes to enhance usability and market success (Hassenzahl, 2010). Scenario-based design, a key UX methodology, employs narratives to envision user interactions, identify usability challenges, and generate actionable insights. Scenarios facilitate innovation and critical reflection, support design flexibility, and improve interdisciplinary communication (Carroll, 1999). By focusing on concrete use stories over abstract requirements, scenario-based design helps teams to understand user needs better, envision new activities and technologies, and create effective systems and innovative products (Garcia & Calantone, 2002).
However, translating user scenarios into concrete engineering specifications presents several challenges. A key difficulty is bridging the gap between “real use” and “normal use” in product design. Additionally, integrating scenario-based design with engineering practices, covering diverse use cases, and reconciling contradictory evidence is difficult (Vincent & Blandford, 2015). Researchers have proposed structured frameworks like “Action Steps” to convert scenarios into functional requirements and engineering specifications (Ismatullaev & Kim, 2024). Nevertheless, this process is labor-intensive and prone to misinterpretation if teams lack a shared understanding of user needs and technical feasibility. Technological support along with structured frameworks is essential to overcome these challenges.
While computer-based tools have long supported engineering design, they primarily aid documentation, concept exploration, modeling, and visualization (Li et al., 2005; Simpson et al., 2012), rather than guiding design reasoning or decision-making. Though computer-aided design and modeling tools improve technical feasibility, they do not proactively assist in problem-solving or interdisciplinary collaboration (Wood et al., 2015). Comparative studies suggest that digital ideation tools promote convergent thinking but do not significantly impact divergent thinking or overall creativity (Frich et al., 2021).
Recent advancements in artificial intelligence (AI) extend beyond traditional tools by supporting reasoning and decision-making (Fei et al., 2023). AI acts as a cognitive collaborator, enhancing decision-making, communication, and interdisciplinary coordination. In particular, GPT-based systems facilitate requirements documentation (Dalpiaz & Niu, 2020), user story generation (Kulkarni & Bansal, 2023), and user-centered explanations (Lee et al., 2024).
Existing studies highlight AI’s broader role in ideation, concept generation, and modeling but rarely examine its effectiveness in understanding user needs, translating them into structured requirements, and facilitating team communication (Khanolkar et al., 2023). Questions remain regarding AI’s impact on usability, team dynamics, design quality, and its role in specific tasks such as translating user scenarios into engineering specifications. Research also emphasizes the need to determine when AI should be leveraged (i.e., where AI-assisted performance surpasses that of unassisted humans) and how to optimize its output timing and quantity for effective human-AI collaboration and decision-making support (Steyvers & Kumar, 2024). To address this, the study compares AI-assisted and non-AI workflows within the “Action Steps” framework (Ismatullaev & Kim, 2024). Through comparative workshops, it investigates:
1. How effective is the framework for supporting interdisciplinary teams in translating user scenarios into engineering specifications?
2. How does AI impact process efficiency and team collaboration?
3. How can AI tools be further developed to optimize their roles in design specification processes?
The first question assesses the framework’s effectiveness in developing engineering specifications. The latter two explore AI’s potential to enhance process efficiency and team collaboration, identifying opportunities for integrating AI into design specification processes. By addressing these questions, this study provides practical insights into AI integration in collaborative design workflows and proposes scalable methodologies for interdisciplinary teams to create user-centered engineering specifications.
2. Method
2.1. Workshop setup
This study employed a workshop-based methodology. We conducted two separate workshop sessions, each with a different team. One team utilized AI assistance (Team B), whereas the other (Team A) did not. Both teams followed the same six-step procedure (see the Procedure section) to develop the engineering specifications under the guidance of a moderator.
Figure 1 illustrates the workshop settings. The sessions were conducted in a collaborative space equipped with digital displays, laptops, sketching tools, various materials, and recording equipment. Miro, an online innovation workspace, was chosen to help teams develop engineering specifications collaboratively. Each team received preorganized templates to follow the framework instructions and ensure an efficient workflow. We obtained informed consent from all the participants before starting the workshop.

Figure 1. Workshop Settings and Process
We provided both teams with the same user scenario, product concept, materials, overall time limit, setting, and instructions from the moderator. The only difference was that Team B incorporated AI into their design process. Each session included six participants (see the Participants section) from diverse disciplines to foster an interdisciplinary environment. A moderator facilitated each workshop, providing guidance as needed.
2.2. Participants
This study included two interdisciplinary teams with diverse backgrounds. Team A (non-AI) comprised experts in product design, interaction design, mechanical engineering, and electrical engineering. Team B (AI-assisted) comprised professionals in product design, interaction design, electrical engineering, and software engineering. Table 1 summarizes the participants’ experience; their familiarity with user scenarios, design specifications, and AI tools; and their involvement in interdisciplinary projects.
Table 1. Participants Overview

2.3. AI integration
To evaluate AI’s role in engineering specification development, we integrated OpenAI’s o1-preview ChatGPT model, selected for its advanced reasoning and problem-solving capabilities (OpenAI, 2024). The AI served as a collaborative assistant, providing structured responses to guide teams through the framework. We provided identical resources to both teams (e.g., the Miro board and pre-organized templates) to minimize bias from extra tools, ensuring that AI was the sole experimental variable influencing decision-making and collaboration. Before the workshop, we presented the framework with detailed instructions, guidelines, and definitions to maintain methodological consistency. The AI model was preconditioned with a structured prompt to ensure uniform responses:
“Please review the following instructions and background information carefully. Then, use these details to understand the intended framework for this conversation. Once you have a firm grasp of the framework, configure your responses so that the chat will consistently operate according to these guidelines: …”
Additionally, we provided AI with six reference cases demonstrating how user scenarios could be systematically translated into engineering specifications. This setup ensured that AI recommendations remained consistent, structured, and relevant to the framework.
During the workshop, AI-assisted participants interacted with the model through structured prompts on their laptops, with facilitators monitoring interactions to maintain focus. Each framework step had three predefined AI prompts to guide participants through the process. For example, during Step 1: Extracting User and Product Actions, participants used the following prompt:
“Given the following user scenario, identify all User Actions (UAs) and Product Actions (PAs). Organize your findings into a clear table with coded references for each element.”
Participants were encouraged to explore additional AI queries where necessary but were required to document all AI-generated responses to maintain transparency in the decision-making process. This structured approach helped isolate AI’s impact while minimizing confounding variables from other digital tools.
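The preconditioning and per-step prompting described above can be thought of as a small prompt-management layer. The following is a minimal, hypothetical Python sketch of that setup: the structure, function names, and message layout are our assumptions rather than the authors’ actual tooling, and the prompt texts are abridged from the quotes above.

```python
# Hypothetical sketch of the workshop's AI prompt setup, not the authors' tooling.
# Prompt texts are abridged from the paper; structure and names are assumptions.

FRAMEWORK_BRIEF = (
    "Please review the following instructions and background information "
    "carefully. Then, use these details to understand the intended framework "
    "for this conversation. ..."
)

# Each framework step had three predefined prompts; only one Step 1 prompt shown.
STEP_PROMPTS = {
    1: [
        "Given the following user scenario, identify all User Actions (UAs) "
        "and Product Actions (PAs). Organize your findings into a clear table "
        "with coded references for each element.",
    ],
}

def build_messages(step: int, prompt_index: int, scenario: str) -> list:
    """Compose a chat message list: the framework brief first, then the query.

    The brief is sent as an opening user turn, since early o1 models did not
    accept a separate system role.
    """
    query = f"{STEP_PROMPTS[step][prompt_index]}\n\nScenario:\n{scenario}"
    return [
        {"role": "user", "content": FRAMEWORK_BRIEF},
        {"role": "user", "content": query},
    ]

messages = build_messages(step=1, prompt_index=0, scenario="<user scenario text>")
# With the official OpenAI SDK, such a list could then be passed to
# client.chat.completions.create(model="o1-preview", messages=messages).
```

Keeping the brief and the per-step prompts as fixed data, rather than free-form typing, is what makes the responses comparable across participants and steps.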
2.4. Procedure
The workshops adopted a three-stage framework to translate user scenarios into engineering specifications, as outlined in the “Action Steps” framework (Ismatullaev & Kim, 2024). These three stages were further divided into six sub-steps, as follows:
- Stage I: Translation of user scenarios into action steps
  - Step 1 (10 minutes): Participants analyzed a user scenario (Figure 2) to identify user and product actions.
  - Step 2 (20 minutes): Participants paired user and product actions to build comprehensive action steps. Action steps provided a detailed mapping of the interactions between a user and a product.
- Stage II: Extracting functional requirements
  - Step 3 (15 minutes): Teams developed functional requirements based on the categorized user and product actions. This step involved defining what the system must achieve to fulfill the user-product interactions.
- Stage III: Developing engineering specifications
  - Step 4 (15 minutes): Participants translated the functional requirements into the parts and technologies required to meet them.
  - Step 5 (15 minutes): Teams detailed the engineering specifications by adding measurable parameters and constraints.
  - Step 6 (15 minutes): Finally, participants integrated all specifications into a technical drawing blueprint with the product concept to arrange parts logically in the product.

Figure 2. User Scenario given for workshop participants
To understand collaboration and the roles of designers and engineers at each step, participants were not assigned specific leadership roles; instead, they made decisions together on each task to promote interdisciplinary teamwork. Each team received the same user scenario and was tasked with developing engineering specifications using the structured approach and moderator instructions. Before starting, the moderator explained the purpose of the workshop and introduced the scenario-based design specification framework. Although each step had a set time, teams could adjust the time spent on each step as needed and were allowed to move to the next step early to maintain workflow efficiency. Each workshop session lasted 90 minutes, including a 10-minute break halfway through for participants to rest and regroup.
2.5. Data collection and analysis
We collected data through pre-and post-surveys, focus group interviews (FGIs), observations, and session recordings to capture various aspects of participants’ experiences and the framework’s effectiveness:
- Pre-Surveys: Collected participants’ backgrounds, interdisciplinary project experience, collaboration challenges, and familiarity with scenario-based methods and design specifications.
- Post-Surveys: Assessed perceptions of the framework’s usability, collaboration effectiveness, efficiency, and design quality using 5-point Likert scales and open-ended questions.
- Focus Group Interviews (FGIs): Provided qualitative insights into the framework’s strengths, areas for improvement, and the impact of AI on work processes and collaboration dynamics. The interviews were recorded and transcribed for analysis.
- Observations and session recordings: Observers noted team interactions, decision-making, and tool usage (e.g., Miro). Recordings captured workshop dynamics for post-session analysis.
- Team outputs and AI interaction logs: Collected and analyzed team outputs, including Miro boards and engineering specifications. The AI interaction logs tracked how AI assistance influenced specification development.
Table 2. Data Collection and Analysis Methods

3. Results
3.1. Framework usability and design quality
The framework exhibited moderate to high usability and effectively translated user scenarios into engineering specifications. However, usability and effectiveness varied by discipline, team, and step, highlighting both the strengths and areas for improvement.
3.1.1. Usability
Participants found the framework intuitive, with Team A averaging 4.0 and Team B averaging 4.1. Early steps (1–3), which involved extracting user and product actions and building action steps, were manageable and suited to the participants’ expertise. Team B noted that these steps were easy even without AI. Designers excelled by leveraging their familiarity with user scenarios, whereas engineers appreciated the structured process for extracting functional requirements.
Later steps (4–5), especially Step 5 (establishing engineering specifications), were more challenging due to technical complexity. Both teams found that functional details (e.g., detection ranges and target speeds) were insufficient, which hindered feasibility assessments. Engineers handled these steps better, whereas designers struggled without adequate guidance. Team B found Step 5 slightly easier with AI assistance, as it provided baseline specification suggestions, although these often lacked the necessary context. The following is a summary of usability trends:
- Steps 1–3: Designers found these steps intuitive, with an average rating of 3.7/5 across teams. Participants suggested more explicit instructions and pre-filled templates to boost creativity and streamline the action steps. Both teams noted that minimal AI involvement was needed, as designers found these steps straightforward.
- Steps 4–5: These stages were more challenging, with Team A scoring 3.3/5 and Team B 3.8/5 with AI assistance in identifying engineering specifications. Participants suggested adding a collaborative mini-step to refine functional requirements and define key targets. Step 5, in particular, was a bottleneck. Designers favored abstract functional requirements to encourage innovation, while engineers needed precise inputs to select components effectively, highlighting the need for iterative collaboration. Engineers performed well in these steps. AI played a supportive role, particularly in technical phases, by providing benchmarks, parameter ranges, and component suggestions. However, AI outputs often lacked context and required refinement to align with project goals. In Step 5, participants emphasized that integrating AI seamlessly into the framework with tailored, user-centered recommendations could enhance usability.
- Step 6 (Placement/Layout): Both teams rated this step similarly, with Team A scoring 3.5 and Team B 3.7, indicating comparable effort. Interdisciplinary collaboration was evident, with minimal AI support.
3.1.2. Effectiveness
Both teams produced moderately user-centered specifications, with Team A scoring 3.3 and Team B scoring 4.0. Team B, especially its engineers, found the framework to be effective in maintaining user focus while supporting feasible solutions. The early steps helped engineers link technical decisions to user needs. An electrical engineer stated:
“Engineering specifications were developed by focusing on the user’s key requirements. It’s (this approach) easy for designers and engineers to be involved in every step of the process, leading to consensus on the final product. By identifying the user requirements (user needs) and thinking about engineering (constraints), we were able to come up with a realistic (feasible) idea.”
Team A’s lower user-centeredness score was due to time constraints, which limited the team’s ability to verify whether all user actions were addressed in the specifications. Team A’s designers struggled with the technical details in Step 5, reducing output clarity. In Step 6, both teams worked together on part placement and layout specifications for the potential product concept. This collaborative input aligned user needs with technical constraints. During this process, teams discussed where parts should be located and how the product shape should change based on user needs, part locations, and constraints on part selection and placement (Figure 3).

Figure 3. Step-6 Representative outcomes from Team A (non-AI) and Team B (AI-assisted)
Innovativeness scored lowest for both teams (2.7), as feasibility constraints in later steps (4 and 5) limited the exploration of novel approaches. The participants noted that earlier steps fostered creativity, particularly in inferring product actions to meet user actions. However, feasibility dominated in the latter stages as teams worked within time limits. They tended to rely on familiar solutions rather than exploring novel approaches, particularly in Steps 5 and 6. Participants suggested that with extended timeframes and iterative discussions between design and engineering teams, the framework could promote enhanced innovation. Team B noted that AI provided practical suggestions but required validation; for example, an AI-suggested GPS-based solution was deemed impractical for indoor use owing to signal issues. Despite these challenges, the framework showed the potential to balance creativity and feasibility with better interdisciplinary collaboration and refined AI integration.
3.2. Process efficiency and team collaboration
3.2.1. Efficiency
The time allocated to each step varied slightly between the teams, but the overall durations were similar. Team A (non-AI) took 83 minutes in total, while Team B (AI-assisted) took 81 minutes. However, Team A produced a more detailed output. In Step 1, Team A identified 14 User Actions (UAs) and 7 Product Actions (PAs) in 9 minutes, compared to Team B’s 10 UAs and 8 PAs in 12 minutes, demonstrating deeper scenario engagement by Team A.
In Step 2, both teams developed eight action steps. Team A spent 25 minutes, while Team B took 15 minutes. Although the outputs were similar, Team A’s greater collaboration and iterative refinement led to longer time spent. In Step 3, Team A extracted 12 functional requirements in 9 minutes, whereas Team B identified 11 requirements in 17 minutes. This reflects Team A’s structured, human-guided approach versus Team B’s AI-reliant, less-active teamwork.
Steps 4–6 highlighted key differences. Team A identified 24 parts and technologies (11 minutes), 17 specifications (14 minutes), and 19 layout placements (15 minutes), spending more time refining outputs through collaboration. Team B, aided by AI, completed the steps faster but produced fewer outputs: 18 parts and technologies (12 minutes), 8 specifications (10 minutes), and 16 layout placements (15 minutes).
AI reduced the task time for Team B but did not enhance output comprehensiveness. Interaction logs showed that the AI-assisted participants explored more parts and technologies than were reflected in their final outputs. This discrepancy was due to information overload and the need for extensive crosschecking, which hindered the effective inclusion of AI-suggested elements.
AI provided baseline suggestions but lacked proactive, context-aware guidance to stimulate exploration. Engineers in Team B prioritized technical feasibility and often chose familiar parts, limiting innovation. By contrast, Team A relied on human-led discussions, effectively balancing user needs and technical constraints.
3.2.2. Team collaboration
Collaboration dynamics differed between the teams. Team A demonstrated stronger interdisciplinary integration through iterative discussions and shared decision-making. Designers and engineers worked together to balance feasibility and user needs, especially in Step 6, where the component placement and layout were discussed in detail.
Team B experienced a more fragmented collaboration. Although AI supported their tasks, it did not foster a shared understanding or interdisciplinary alignment. Unlike Team A’s iterative discussions, Team B lacked structured moments for a joint review and refinement. They relied on AI outputs to define engineering specifications, which provided numerous options and overwhelmed the participants with information. This led to individual decision-making rather than collective validation, as team members struggled to manage and contextualize AI suggestions within a limited timeframe. This siloed approach limited joint exploration and validation of outputs.
Designers ensured user-centeredness in the early steps by leading the process, whereas engineers led the latter stages to define engineering specifications. Designers often saw themselves as user advocates and concept guardians. They were comfortable letting engineers handle detailed specifications but stressed the importance of understanding why certain technical decisions were made. Engineers indicated that knowing the design rationale from the early steps helped them select the appropriate parts. If designers communicate their intentions, user priorities, or why a specific function matters, engineers can propose solutions that align with the user experience goals. Without this context, engineers may default to standard or over-specified components that do not necessarily reflect user-centric decision-making.
4. Discussion
The study confirms that the proposed framework effectively guided teams in translating user scenarios into engineering specifications. AI showed promise, particularly in defining target specifications and improving efficiency. However, it did not inherently foster creativity or interdisciplinary collaboration.
Compared to traditional computer-based tools, AI acted as an adaptive collaborator, providing real-time recommendations, contextual insights, and reasoning support. However, our findings indicate that AI alone did not enhance team collaboration, exploratory reasoning, or interdisciplinary engagement, reinforcing prior research that AI must be intentionally designed to complement, rather than replace, human decision-making (Steyvers & Kumar, 2024). Based on this study’s findings, the following sections outline AI’s roles, challenges, and solutions for effective integration.
4.1. Promises for AI integration in design specification
AI as a technical consultant: AI provided benchmarks, parameter ranges, and component specifications, enabling AI-assisted teams (Team B) to develop technical specifications faster than non-AI teams (Team A). However, AI lacked contextual awareness, requiring manual adjustments to align outputs with project constraints. This underscores AI’s potential for engineering precision when paired with structured validation mechanisms.
AI as a bridge between disciplines: AI contributed to interdisciplinary alignment by standardizing terminology and clarifying engineering constraints. Team B participants reported that AI reduced misinterpretation. However, it did not inherently improve collaboration, as designers and engineers continued working in silos. This suggests that AI should be structured to actively promote interdisciplinary exchange rather than passively provide information.
AI for iterative refinement: AI-assisted teams conducted faster feasibility assessments but engaged in less exploratory iteration. While AI identified technical conflicts (e.g., power consumption vs. resolution), it often prioritized feasibility over creative problem-solving. This indicates that AI workflows should present multiple alternative specifications to ensure active human engagement in decision-making.
AI as a knowledge repository: AI reduced manual research time by retrieving technical data and component options but often overwhelmed participants with excessive, unfiltered information. Implementing AI interaction models with contextual filtering and relevance ranking would help teams efficiently select components that align with user needs and technical feasibility.
AI as a proactive collaborator: AI showed the potential to streamline decision-making. However, workshop findings suggest that AI in its current state did not proactively facilitate team discussions. Participants relied on AI for data retrieval rather than reasoning support. This indicates that AI must be designed to engage users in structured collaboration rather than merely generate outputs.
4.2. Addressing pitfalls of AI for effective integration
Ensuring contextually relevant AI outputs: AI-generated specifications sometimes exceeded practical constraints, requiring manual refinement. Lacking situational awareness, AI occasionally recommended impractical solutions (e.g., sensors with excessive power demands). This aligns with prior research emphasizing the need for domain-specific adaptation (Pathirannehelage et al., 2024), such as scenario-driven AI training and engineering-specific datasets, to enhance AI’s context awareness and output relevance.
Balancing AI assistance with human creativity and decision-making: AI reduced manual workload but limited exploratory reasoning, with AI-assisted teams prioritizing validation over alternative exploration. Overreliance on AI led Team B to engage in less ideation than Team A, suggesting AI may reinforce convergent thinking rather than creative divergence. This aligns with prior research emphasizing that AI should propose multiple alternatives with trade-offs to keep teams actively involved in decision-making and mitigate the “bandwagon effect,” where individuals readily adopt AI recommendations when others do so (Yin, 2024).
Managing information overload: AI-generated specifications overwhelmed participants with excessive technical details, such as listing multiple motor options with varying torque specifications. These findings support prior research on the importance of managing information volume to prevent cognitive overload and reduced efficiency in AI-assisted decision-making (Steyvers & Kumar, 2024). A tiered information delivery system, where AI first provides general recommendations (e.g., “motor size and torque should match load requirements”) and refines details upon request, can improve the usability of AI support.
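The tiered delivery idea can be made concrete with a toy sketch: the first tier returns only a general recommendation, and component-level detail is filtered and ranked only on request. The motor data, thresholds, and ranking rule below are invented for illustration.

```python
# Toy illustration of tiered information delivery; all data and criteria
# are invented, not drawn from the workshop materials.

motor_options = [
    {"name": "Motor A", "torque_nm": 0.6, "power_w": 5},
    {"name": "Motor B", "torque_nm": 1.2, "power_w": 12},
    {"name": "Motor C", "torque_nm": 2.0, "power_w": 25},
]

def tier1_summary() -> str:
    """First response: a short, general recommendation only."""
    return "Motor size and torque should match load requirements."

def tier2_details(required_torque_nm: float, max_power_w: float) -> list:
    """On request: filter to feasible options and rank by power draw,
    so the team sees a handful of relevant candidates, not a raw dump."""
    feasible = [
        m for m in motor_options
        if m["torque_nm"] >= required_torque_nm and m["power_w"] <= max_power_w
    ]
    return sorted(feasible, key=lambda m: m["power_w"])

print(tier1_summary())
print(tier2_details(required_torque_nm=1.0, max_power_w=20))
# Only Motor B satisfies both constraints in this toy example.
```

The point of the design is that filtering and ranking happen before the detail reaches the team, shifting the crosschecking burden from participants to the tool.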
Maintaining a user-centric focus in AI workflows: AI prioritized technical feasibility over user experience, occasionally neglecting usability considerations. This aligns with findings that AI-driven outcomes are influenced by data availability and may not inherently support UX design unless explicitly trained on human-centered principles (Yang et al., 2020). Future AI tools should therefore incorporate usability constraints and UX data to balance technical and experiential design goals.
Optimizing human-AI collaboration: Overreliance on AI reduced interdisciplinary discussions in Team B. To mitigate this, AI must be designed to facilitate structured interactions to ensure AI-assisted workflows retain active human engagement. Based on our findings, proposed solutions include:
- Transparent and explainable outputs: AI should provide a rationale for its recommendations, for example, “This battery supports 8 hours of operation, meeting your requirement for extended use.”
- Interactive AI workflows: During steps like defining engineering specifications, AI can prompt targeted questions to help teams refine requirements, such as, “What is the required detection range for the sensor based on the user’s context?” These queries assist teams in clarifying scenario uncertainties and defining precise parameters collaboratively, aligning with prior studies emphasizing AI’s role in structured decision support (Song et al., 2024).
- AI integration into collaborative platforms: Embedding AI in tools like Miro or Figma can streamline workflows by directly linking functional requirements to user scenarios and defined specifications. For instance, if a designer specifies “sensor range,” AI could auto-fill units (e.g., meters) and suggest feasible values. However, these specifications must be contextually aligned with user scenarios to ensure relevance. This integration reduces inefficiencies caused by switching between tools (e.g., Miro and ChatGPT), as observed in Team B, and facilitates real-time collaboration and seamless design solution sharing.
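The auto-fill behavior described above can be sketched as a simple lookup-and-check: when a designer names a specification field, the tool supplies a default unit and a plausible range, and flags proposed values for team review. The lookup table, units, and feasible ranges below are invented for illustration; real values would come from the user scenario and component data.

```python
# Toy sketch of spec-field auto-fill in a collaborative tool.
# The table and ranges are illustrative assumptions only.

SPEC_DEFAULTS = {
    "sensor range": {"unit": "m", "feasible": (0.5, 10.0)},
    "battery life": {"unit": "h", "feasible": (4.0, 24.0)},
}

def autofill(field: str, proposed: float) -> dict:
    """Attach a default unit and a feasibility flag to a proposed value.

    The flag does not decide anything by itself; it marks the value for
    joint review against the user scenario, keeping humans in the loop."""
    default = SPEC_DEFAULTS[field.lower()]
    lo, hi = default["feasible"]
    return {
        "field": field,
        "unit": default["unit"],
        "value": proposed,
        "within_feasible_range": lo <= proposed <= hi,
    }

print(autofill("sensor range", 3.0))
```

Note that the sketch deliberately surfaces the range check rather than silently correcting values, matching the paper’s point that AI suggestions must remain open to contextual validation by the team.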
5. Conclusion
We applied the structured framework to compare AI-assisted and non-AI-assisted workflows for translating user scenarios into engineering specifications. The framework effectively aligned user needs with engineering specifications but presented challenges in later technical stages. AI improved efficiency and supported technical decision-making but led to information overload and reduced collaborative innovation. Non-AI teams demonstrated stronger interdisciplinary integration, balancing creativity and feasibility through human-led discussions. AI’s limitations in contextual specificity and interactive guidance hindered its ability to foster innovation and full collaboration.
Key findings highlight AI’s potential to bridge knowledge gaps, streamline workflows, and support user-centered design. However, successful integration requires enhanced domain-specific training, contextual understanding, and interactive capabilities to balance automation with human creativity.
Future research should increase the sample size, participant diversity, and number of experimental repetitions to improve generalizability, as this study serves as a pilot investigation. Refining AI tools with domain-specific data and developing structured AI-human interaction protocols can enhance automation while preserving human-driven design iteration. Further exploration of AI-assisted workflows in detailed design and validation is needed to optimize AI’s role beyond translating UX outcomes into engineering specifications. AI integration in these stages can provide data-driven insights, support decision-making, and streamline the design process. Addressing these challenges will advance scalable, user-centered methodologies and reinforce the role of structured frameworks and AI in engineering design.
Acknowledgment
This research was supported by a research grant from the Ministry of Trade, Industry and Energy and the Korea Evaluation Institute of Industrial Technology (KEIT) in 2021 (20015056).