1. Introduction
Design decision-making under competition is pivotal in real-world engineering design and product development, where teams must navigate uncertainties, limited resources, and competitive pressures to optimize solutions. Competitive environments compel designers to balance the exploration of novel design alternatives with the exploitation of known solutions, often under stringent deadlines and market constraints. These challenges are further amplified by bounded rationality, where cognitive limitations and imperfect information influence decision-making outcomes (Simon, 1957; Panchal et al., 2017). Understanding how competition shapes the decision-making process is crucial for advancing design theories and creating tools that can support designers in such dynamic contexts.
Team-based design is a common paradigm in engineering, particularly in industries such as aerospace, automotive, and consumer electronics, where complex, large-scale systems require multidisciplinary collaboration. In aerospace, team-based design facilitates the integration of aerodynamics, propulsion, and structural engineering to ensure safety and efficiency. In the automotive industry, it enables seamless coordination between mechanical, electrical, and software engineers to enhance vehicle performance, sustainability, and user experience. Similarly, in consumer electronics, cross-functional teams drive rapid innovation, balancing hardware, software, and user-centered design to meet evolving market demands. These industries rely heavily on team-based design due to the intricate interplay of diverse engineering disciplines and the need for optimized, high-performance products. Therefore, coordinating team members' decisions in these scenarios is a fundamental challenge that can significantly influence the effectiveness of design teams (Gyory et al., 2021). For instance, research has shown that team interactions and mechanisms have profound impacts on design outcomes (Chen et al., 2025; Gyory et al., 2021; McComb et al., 2015). Specifically, McComb et al. (2017) found that the number of interactions and the division of labor within teams play a critical role in optimizing team performance. However, the dynamics of design teams become more intricate when competition is introduced, raising questions about how strategic behaviors and resource allocation evolve in competitive environments.
Despite the importance of understanding the decision-making of design teams in competition, existing studies have focused mainly on individual decision-making or isolated aspects of collaboration. For example, Panchal et al. (2017) developed game-theoretic models to study individual decision-making in competitive environments. Yet, these works do not address the strategic interactions that arise between competing teams. Furthermore, while games and experiments have been widely employed in behavioral research, there are currently no dedicated research platforms that systematically collect data on team decision-making under competition, nor are there studies that develop and test multiple information-exchange settings reflecting the diverse real-world scenarios faced by design teams under competition. This represents a critical gap for empirical studies and model validation.
To address the significant research gap in studying team-based design decision-making under competition, we propose a novel game-theoretic platform. Built on the oTree framework (Chen et al., 2016), this platform is designed to abstract and simulate real-world competitive design scenarios through controlled experiments. On such a platform, individual designers collaborate in a team to optimize a design characterized by an unknown function while competing against other teams, allowing researchers to systematically collect data on their decision strategies, resource allocation, and team dynamics under competitive pressures. In addition, the platform enables multiple configurations of information exchange, allowing researchers to create and analyze various information exchange scenarios that mimic real-world problems, such as varying levels of communication and visibility of information within and between teams. Based on this simulation game, we aim to answer two research questions: In team-based design under competition, how does information exchange influence individual performance? What is the impact of cost on participants' efforts? Overall, this platform seeks to serve as a foundational tool for exploring the interplay between collaboration and competition in design, enabling both the study of strategic behaviors and the development of competition-aware AI-assisted design tools and methodologies.
The contributions of this paper are as follows: First, we developed a game-theoretic experimental platform to enable systematic data collection and facilitate further analysis of design team decision-making under competition. Second, we developed five experimental settings to replicate real-world complexities and examine the impact of information exchange on decision-making. These settings include a baseline control session and four treatment sessions, each progressively introducing varying levels of information-sharing mechanisms and team configurations. Third, we conducted a pilot study to validate the platform and the experimental settings, demonstrating the platform's ability to replicate realistic competitive design scenarios and to systematically capture meaningful insights into team behaviors and strategies. The results highlight the potential of the platform as an effective tool for advancing research on competitive team-based design.
2. The Function Optimization Game
2.1. The Objective of the Game
We have developed a team-based multi-player function optimization game platform using oTree to enable experiments that replicate the decision-making processes involved in design under competitive conditions. In this game, teams are tasked with optimizing a design represented by a single variable x ∊ [a,b], with its performance evaluated through a function f(x) that is unknown to the participants (but known to the experimenters), as illustrated in Figure 1. This setup reflects many real-world design challenges in which designers do not fully understand the functional behavior of a system. Members of each team can query the function by selecting a specific value of x, with each query revealing the corresponding value of f(x) at a cost of c. Teams collaborate internally to explore the design space efficiently while competing against an opposing team. At the end of the game, the team whose design achieves the highest function value is declared the winner and receives the payment described in Section 2.2.

Figure 1. The function optimization game
2.2. Performance-based Payment
The game consists of five distinct sessions, each representing a different mechanism of information exchange. Each session comprises ten rounds, and each round is limited to a maximum duration of 2.5 minutes, simulating the time constraints of design. At the end of each team-based competition, the winning team is determined based on its performance. Within the winning team, the participant who achieves the value closest to the optimal solution receives R tokens as a reward, while their teammates are awarded r tokens. To recognize the varying levels of individual contribution within the winning team, the reward is structured such that R > r. Although this may also introduce intra-team competition (as often occurs in the real world), this distinction incentivizes participants to actively contribute to the success of their team while maintaining a collaborative dynamic.
The final payoff for a participant in round i is calculated using the following formula:
$P_i = \Pi - n \cdot c + R$ (1)
where $P_i$ represents the participant's payoff for round i, $\Pi$ is the initial number of tokens provided at the start of each round, n is the number of trials or queries made by the participant, c is the cost per trial, and R is the reward for their contribution within the winning team. We define the reward R using two rules. The first is straightforward: participants in the winning group receive a fixed prize, with the prize value varying. The second is more complex: participants in the winning group earn rewards from the non-winning group's remaining tokens. A specific portion of these tokens is distributed to the winners, while the rest is retained by the non-winners. This rule mirrors the competitive dynamics of product design in market-share competition. The reward structure, as defined by these two rules, directly influences the participants' decision-making process in each round. By balancing the cost of trials with the potential rewards, participants must strategize their actions to maximize their final payoff. This trade-off becomes especially critical when considering the competitive dynamics of these two reward systems. For simplicity, we use the first rule in our pilot study in the following section.
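As a concrete illustration of the payoff rule, the sketch below evaluates Equation (1); the endowment, cost, and reward values are hypothetical placeholders rather than the exact parameters of any session.

```python
def round_payoff(initial_tokens, n_trials, cost_per_trial, reward):
    """Per-round payoff (Equation 1): unspent tokens plus any reward earned."""
    return initial_tokens - n_trials * cost_per_trial + reward

# Hypothetical values: a winning-team best performer (reward R) vs. a losing-team member (no reward)
print(round_payoff(initial_tokens=200, n_trials=3, cost_per_trial=10, reward=50))  # 220
print(round_payoff(initial_tokens=200, n_trials=5, cost_per_trial=10, reward=0))   # 150
```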
At the end of the game, when all rounds are completed, the mean payoff across all rounds is converted into monetary rewards. The conversion is governed by the following formula:
$\text{Monetary reward} = \max\left(\bar{P} - \Pi,\ 0\right) / e$ (2)

where $\bar{P}$ denotes the average payoff across all rounds (i.e., $\bar{P} = \frac{1}{10}\sum_{i} P_i$), and e is the exchange rate between game tokens and US dollars. The use of a maximum function ensures that no participant receives a negative monetary reward, so that their participation is not penalized. This token-dollar conversion mechanism reflects the principle of real-world rewards for effective decision-making and resource allocation during competition.
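A minimal sketch of this conversion, assuming the pilot-study values used in Section 4 (an initial endowment of 200 tokens and an exchange rate of 10 tokens per dollar):

```python
def dollar_reward(mean_payoff, initial_tokens=200, exchange_rate=10):
    """Token-to-dollar conversion (Equation 2); max() keeps the reward non-negative."""
    return max(mean_payoff - initial_tokens, 0) / exchange_rate

print(dollar_reward(219))  # 1.9 -> $1.90
print(dollar_reward(164))  # 0.0 -> a payoff below the endowment incurs no monetary loss
```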
3. Experimental Framework
3.1. Experimental Settings
In this game platform, we design six game sessions—referred to as the Trial Game, Game 1, Game 2, Game 3, Game 4, and Game 5—to systematically investigate the effects of varying levels of information exchange on design decision-making in competitive team settings. These experimental settings are crafted to mimic real-world scenarios where the visibility of information and uncertainties play a critical role in shaping team strategies and performance. Each session features a one-team-versus-one-team competition, allowing us to explore the strategic interactions and dynamics between competing teams under different information-sharing conditions. Table 1 summarizes the configurations of the six sessions, which are designed to examine distinct information-sharing scenarios.
Table 1. Game configurations and their details. The columns “Teammates” and “Opponents” indicate whether information is shared or not

These scenarios range from no information exchange to information shared within teams, between competing teams, or both.
- Trial Game: This preliminary session is designed to familiarize participants with the interface and mechanisms of the game. It incorporates information obtained from both teammates and opponents, but is limited to a single round.
- Game 1: This session establishes the baseline for the experimental study, where no information is shared or exchanged among the participants. Consisting of 10 rounds, it emphasizes independent decision-making under conditions of greater uncertainty, as participants lack external information to guide their choices. Despite the absence of information exchange, participants must still engage in strategic decision-making to optimize their outcomes. Game 1 provides a reference point for evaluating the impact of introducing information-sharing mechanisms in subsequent games.
- Game 2: Participants are provided with information about their teammates' sampled points and the corresponding function values. This session emphasizes collaborative dynamics, allowing participants to gain insights into the decisions and outcomes of their teammates, and therefore a better understanding of the design space. Researchers can use this session to examine how collaboration within teams affects decision-making strategies and outcomes in competition.
- Game 3: In this session, participants are provided with information only about their opponents' sampled points and the corresponding function values, not their teammates'. Game 3 highlights competitive dynamics, allowing participants to gain competitive insight and adjust their strategies based on the actions and outcomes of their opponents. Note that while no explicit information about teammates is displayed, participants can still partially infer their teammates' decisions because the platform restricts sampling of points already chosen by teammates.
- Game 4: This session allows for the exchange of information from both teammates and opponents, creating a comprehensive information-sharing scenario. Participants gain access to collaborative and competitive insights, allowing researchers to analyze the interplay between teamwork and competition. Game 4 represents a complex decision-making environment, mimicking scenarios in which individuals must balance collaboration within teams and competition between teams.
- Game 5: Building on the complexity of Game 4, this session increases the team size from n to N participants per group, where N > n. Game 5 explores the dynamics and challenges of larger teams operating under similar competitive conditions, allowing researchers to investigate how scalability affects team performance, resource allocation, and decision-making strategies. A compact summary of these information-sharing configurations is sketched below.
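For quick reference, the information-sharing settings of the five formal sessions can be restated as a small dictionary; this is a shorthand summary of Table 1, not the platform's actual oTree configuration code.

```python
# Shorthand restatement of the session configurations (not the platform's actual oTree code)
SESSIONS = {
    "Game 1": {"teammates": False, "opponents": False, "team_size": "n"},      # baseline, no sharing
    "Game 2": {"teammates": True,  "opponents": False, "team_size": "n"},      # within-team sharing
    "Game 3": {"teammates": False, "opponents": True,  "team_size": "n"},      # between-team sharing
    "Game 4": {"teammates": True,  "opponents": True,  "team_size": "n"},      # full sharing
    "Game 5": {"teammates": True,  "opponents": True,  "team_size": "N > n"},  # full sharing, larger teams
}
```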
Figure 2 illustrates a snapshot of a player's interface in Game 4, which represents one of the more complex information exchange scenarios. Viewed from top to bottom, the yellow region at the top of the interface displays a countdown timer indicating how much time remains in the current round. Below the countdown, a brief introduction provides participants with a summary of the basic setup and the goal of the game. Following this, the number of tokens the participant has remaining in the current round, the serial number of the round, the group ID, and the participant's ID within the group are presented sequentially.

Figure 2. The graphic user interface (GUI) of the developed game-theoretic research platform for team-based design under competition. This particular screenshot shows a snapshot of a player in Game 4
After reviewing these elements, participants are prompted with the question: "What value will you choose for x from −10 to 10?" They can enter their chosen value in the input box provided below. By clicking the blue button labeled "Calc F(x)," participants can view the corresponding result in the x–y coordinate system in the lower-left corner, with sampled points shown in real time. For instance, as shown in Figure 2, when −2 is entered into the input box and the "Calc F(x)" button is clicked, the system immediately displays the result F(x) = 1.17, and the point (−2, 1.17) appears on the coordinate system in real time.
To the right of the scatter plot, two key tables are displayed, corresponding to different levels of information exchange as discussed earlier in this section. These tables enable participants to compare their performance with that of their teammates and opponents. For clarity, we refer to these as Tables I and II, and their detailed descriptions are provided below.
- Table I (Top Right): Displays the best three and worst three values of f(x), along with the corresponding x values, for the participant and their teammate(s).
- Table II (Bottom Right): Displays the best three values of f(x) among the participant and their opponents.
We also use different colors in the tables to distinguish sources of information. Specifically, red indicates values chosen by the participants themselves, while black represents values obtained from others, including opponents or teammates.
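For illustration, one simple way to derive the contents of such tables from a sampling history is sketched below; the data structure and values are hypothetical and used only for exposition.

```python
def best_and_worst(samples, k=3):
    """samples: list of (x, f_x) pairs; return the k best and k worst points by f(x)."""
    ranked = sorted(samples, key=lambda p: p[1], reverse=True)
    return ranked[:k], ranked[-k:]

# Hypothetical sampling history for one participant and their teammate(s)
team_samples = [(-2, 1.17), (0.5, 3.4), (4, -0.8), (7, 2.1), (-6, 0.3), (9, -2.5)]
best3, worst3 = best_and_worst(team_samples)  # contents analogous to Table I
```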
3.2. Workflow of an Experiment
The experimental workflow is designed to ensure consistency and reliability while capturing meaningful data and insights into the team-based decision-making process under competitive conditions. It involves the following steps, as shown in Figure 3.

Figure 3. Workflow of the experimental game
This workflow represents a structured process for conducting a study involving participants and games. The process begins with the recruitment of participants who are willing to take part in this experimental game. Once the participants are convened, they are introduced to the settings of the game. During this stage, the rules, objectives, and other necessary details are clearly explained to ensure that the participants have a clear understanding of what is expected. In this stage, consent forms are also sent to participants to sign, in accordance with the approved Institutional Review Board (IRB) protocol, ensuring that participants are fully informed about the study before proceeding.
Following this, a web page containing the link to each participant's individual game session is sent to everyone, and a trial game is then conducted. The purpose of the trial game is to familiarize the participants with the game's user interface and settings, providing them with an opportunity to practice within the experimental framework. This preparation ensures that participants are comfortable and ready for the main phase of the study and minimizes bias arising from differences in individual learning curves.
Once the trial game is completed, the formal games are conducted in order. Each formal game consists of multiple rounds. At the beginning of each round, participants are randomly re-matched to form new teams, which then compete against other newly formed teams. These games are the central part of the experiment, designed to gather data under both the control condition and the treatment conditions. After the formal games, the final step involves collecting and analyzing the data generated during these game sessions. Such data serve as the foundation for further analysis and conclusions related to the study's objectives. The workflow follows a logical progression, ensuring that each stage builds upon the previous one in a systematic manner. An example of the game implementation is demonstrated in Section 4.
3.3. Data Collection Process
The data collection process in the experiment is carefully designed to capture both qualitative and quantitative aspects of the participants' behavior and their decision-making strategies under competitive conditions. It begins by gathering personal background information through a pre-game survey, which collects participant demographics, such as major, education level, gender, and prior familiarity with optimization-related topics. This information is used to contextualize the experiment's results and to understand how individual characteristics might influence group and individual performance, as well as strategic behavior.
During each game session, the experimental platform, implemented in oTree, systematically records participants' behaviors and decisions. These include the list of values of the decision variable x selected by the participants, the best value of x chosen, the corresponding best function value, and the use of tokens. Additional information includes the payoff calculations based on token consumption and rewards, and the win-or-lose result for each team. Taking the administration interface of Game 1 as an example, a screenshot is shown in Figure 4.

Figure 4. Data collection interface of Game 1
Here, the Links Page includes all test links that we need to send to the participants, while the New option on the left provides a function to refresh the links. Continuing from left to right, the Monitor option shows experimenters the current status of each participant during the game in real time, such as which round they are in and what they are doing (sampling, waiting, or viewing results). The Data and Payments options, as their names suggest, provide the research data and are therefore the most important parts of this process. The option on the far right, named Description, is not useful for research purposes and can be ignored.
After each session ends, post-experiment feedback is collected through a survey to gain qualitative insights into the participants’ sampling strategies during the game. It also captures the personal evaluations of the participants and their perceptions of the experimental setup. Such qualitative data complement the quantitative results and also contribute to refining the platform for future studies, ensuring a broader understanding of participant behavior and strategies in competitive scenarios.
4. Illustrative Example: A Pilot Study
In this section, we introduce a pilot study demonstrating the proposed platform, the workflow of experiments, and our preliminary results based on the data collected from this pilot study.
4.1. Experimental Settings
The subjects participated in a competition where one group faced off against another, with each group consisting of two or four members, to identify the maximum of a randomly generated function of the form given in Equation (3),

where the parameters a, b ∊ [−10,10] are specified to two decimal places. Under these settings, the function has several local maxima and a unique global maximum, but this information was never revealed to any participant. At the start of each round, the function is randomly initialized with a new set of a and b values. This ensures that participants cannot learn the pattern and must make deliberate decisions in every round to win the game.
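The per-round re-initialization can be sketched as follows. Only the random draw of a and b to two decimal places follows the description above; the function body is an illustrative multi-modal surrogate, not the study's actual Equation (3).

```python
import math
import random

def init_round():
    """Draw fresh parameters at the start of a round (two decimal places, as described above)."""
    a = round(random.uniform(-10, 10), 2)
    b = round(random.uniform(-10, 10), 2)

    def f(x):
        # Illustrative multi-modal surrogate only; the study's actual Equation (3) differs.
        return math.sin(a * x) + 0.1 * b * x - 0.05 * x ** 2

    return a, b, f

a, b, f = init_round()
print(a, b, f(-2.0))
```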
The pilot study was carried out with 8 participants, 5 from Lehigh University and 3 from the University of Texas at Austin. In each game session, participants completed two treatments: 5 rounds under a low-cost treatment (c = 10 tokens per sample) and 5 rounds under a high-cost treatment (c = 20 tokens per sample). The winning team and the winner within it (i.e., the participant whose sampled point is the closest to the true optimum) are determined in every round of play. Within a round, each participant repeatedly makes two decisions in sequence: 1) whether to choose a new value of x and 2) whether to stop.
When all 10 rounds are finished, we pay the participants actual dollars according to Equation (2). The exchange rate (e) is specified before the experiment starts. In this pilot study, we define e = 10, i.e., 10 tokens = 1 dollar, and only the portion that exceeds the initial 200 tokens can be converted into dollar rewards. Taking two data points from the pilot study as examples: in Game 1, the mean final payoff of Participant 1 (P1) is 219 tokens, so the dollar reward of P1 is
$\max(219 - 200,\ 0)/10 = \$1.90$
At the same time, the mean final payoff of Participant 2 (P2) is 164 tokens, so the dollar reward of P2 is
$\max(164 - 200,\ 0)/10 = \$0$
Participants can observe their win-or-lose status as well as their final payoff on the Result Page when a round ends, which provides feedback on their performance.
4.2. Data Collection
This pilot study was conducted virtually via Zoom on November 25, 2024, with a total experimental duration of one and a half hours. Due to time constraints, we only tested the Trial Game, Game 2, Game 3 and Game 5. Since our research platform is web-based and powered by oTree, all experimental data are recorded in real time on the Data Page and can be exported as CSV files. On this page, data are documented for each round, including the group ID that a participant belongs to, the participant's ID within the group, their final payoff, their best x and F(x) values, all x values sampled, time spent, the number of trials, their opponents' group IDs, the randomly generated parameters a and b, the true optimum of the function, and the group's win/loss status. In addition, on the Payments page, the sum of payments from all rounds is listed for each participant in the current game session.
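Because the records are exported as CSV, they can be loaded directly for downstream analysis. The sketch below assumes hypothetical file and column names; the actual export headers depend on the oTree app's field definitions.

```python
import pandas as pd

# Hypothetical file name and column headers; the actual export depends on the oTree app's fields
df = pd.read_csv("game1_rounds.csv")
summary = (
    df.groupby("participant_id")
      .agg(mean_payoff=("final_payoff", "mean"),
           mean_trials=("num_trials", "mean"),
           win_rate=("group_won", "mean"))
)
print(summary)
```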
4.3. Preliminary Results
Since Game 4 (i.e., the control for Game 5) was not conducted, we present results based on the data collected from Games 1, 2, and 3. This allows us to address two example research questions.
Research Question I: In team-based design under competition, how does information exchange influence individual performance?
Here, we use payoff to quantify the performance of an individual. The mean final payoff (MFP) of all participants in each session is shown in Table 2. All values are reported in tokens.
Table 2. Mean Final Payoff (MFP) related information for Game 1, 2, and 3

More rigorously, we test the following hypothesis:
- $H_0$: Participants' performance in Game 1 does not differ from that in Games 2 or 3.
- $H_a$: Participants' performance in Game 1 is significantly different from that in Games 2 or 3.
We performed paired t-tests and a one-way ANOVA. The results of these tests are summarized in Table 3:
Table 3. Paired t-test and ANOVA results summary

Since the p-values for the two paired t-tests are greater than 0.05, we conclude that there is no statistically significant difference in participants' performance between Game 1 and Game 2, or between Game 1 and Game 3, at a significance level of 0.05. In addition, the one-way analysis of variance (ANOVA) also confirms the absence of statistically significant differences among the three games. Although some indicators in Table 2 suggest that Games 2 and 3 could lead to better payoff performance, these differences were not large enough to reach statistical significance under the current sample size. Therefore, the results for RQ1 remain inconclusive until a sufficient amount of data is collected.
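This comparison can be reproduced with standard SciPy routines; in the sketch below, the input arrays are placeholders for the per-participant MFP values from each game, ordered so that entries of the same participant are paired.

```python
from scipy import stats

def compare_performance(mfp_g1, mfp_g2, mfp_g3):
    """mfp_g*: per-participant mean final payoffs (tokens) in Games 1-3, in matching order."""
    t12 = stats.ttest_rel(mfp_g1, mfp_g2)            # paired t-test, Game 1 vs. Game 2
    t13 = stats.ttest_rel(mfp_g1, mfp_g3)            # paired t-test, Game 1 vs. Game 3
    anova = stats.f_oneway(mfp_g1, mfp_g2, mfp_g3)   # one-way ANOVA across the three games
    return t12, t13, anova
```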
Research Question II: What is the impact of cost on participants’ efforts in team-based design under competition?
In the game, a participant's effort in each round can be quantified by their number of trials (i.e., the number of sampled points). Table 4 shows the mean and standard deviation of the number of trials in the different games under the low- and high-cost settings. In our previous study on individual designers' decisions under competition (Sha et al., 2015), it was observed that, as the unit cost increases, participants tend to spend less effort exploring the design space. To test whether the same phenomenon is observed in the team-based scenario, we test the following hypotheses for each game:
Table 4. Mean (μ) and standard deviation (σ) of the number of trials in different games with different cost settings

- $H_0$: The average number of trials under the high- and low-cost settings is the same.
- $H_a$: The average number of trials under the high-cost setting is significantly smaller than that under the low-cost setting.
By conducting three paired t-tests, we obtain $p_1 = 0.0488$, $p_2 = 0.0087$, and $p_3 = 0.1623$, where $p_i$ denotes the p-value for Game i (i = 1, 2, 3). These results indicate that, in the team-based scenario, the same conclusion holds for Games 1 and 2 at the 0.05 significance level: in these two games, regardless of their different levels of information visibility, the number of trials under high cost is significantly lower than under the low-cost setting. However, in Game 3, where information is shared by opponents rather than teammates, no significant effect of the cost setting on the number of trials was observed. One explanation is that sharing opponents' information makes the game more competitive and thus motivates participants to keep sampling in order to win. Alternatively, partial observation of opponents' design strategies may reduce uncertainty, leaving the two competing teams roughly on par with each other, so that individuals must sample more in order to win.
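The corresponding one-sided comparison can be expressed in the same way, assuming a SciPy version recent enough to support the `alternative` argument of `ttest_rel`; the input arrays are placeholders for each participant's paired trial counts.

```python
from scipy import stats

def cost_effect(trials_high, trials_low):
    """Paired one-sided t-test: is the number of trials lower under the high-cost setting?"""
    return stats.ttest_rel(trials_high, trials_low, alternative="less")
```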
5. Conclusions
This study presents a game-theoretic research platform designed to investigate team-based design decision-making under competition. By abstracting and simulating real-world design scenarios through controlled experiments, the platform enables systematic analysis of collaborative and competitive design behaviors, resource allocation strategies, and decision dynamics across various information-sharing settings. The research platform, along with the experimental framework, is demonstrated through a pilot study, and the preliminary results show its potential to capture critical aspects of competitive design environments, providing insights into group performance under competition.
The key findings of the pilot study indicate that the exchange of information within groups and between competing groups influences the decision-making strategies of the participants. Due to the small sample size in the pilot study, no statistically significant differences in participants' performance were observed across varying levels of information visibility. However, the study reaffirms that a higher unit cost in design exploration leads to a significant reduction in the number of trials, a conclusion that is consistent with our prior research on individual design competition games. This highlights that a similar trade-off exists between resource constraints and decision efficiency in team-based competitive environments.
This proposed platform not only bridges existing gaps in empirical studies of team-based design competition, but also sets the groundwork for the development of AI-assisted design tools and methodologies that are aware of competitive environments. Our future research will leverage this platform to explore additional complexities, such as larger team sizes, dynamic information flows, and multi-objective design optimization, fostering a deeper understanding of decision-making processes in competitive design settings.
Acknowledgements
This material is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-24-1-0265. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the United States Air Force. The authors gratefully acknowledge the partial support from the National Science Foundation through the grants CMMI-2321463 and CMMI-2419423.