1. Introduction
Parametric modeling and generative design methods have significantly advanced fields such as mechanical engineering and product development, where performance criteria—such as weight, structural integrity, or efficiency—are precisely defined and quantifiable (Cross, Reference Cross1982). The deterministic nature of these fields enables systematic, rapid explorations of large design spaces with measurable parameters (Turrin et al., Reference Turrin, Von Buelow and Stouffs2011).
Architecture, however, presents a distinctly different context. Buildings must respond to diverse and sometimes intangible factors: cultural context, experiential quality, occupant well-being, and site-specific environmental constraints (Fitch, Reference Fitch2017). Unlike manufactured products, buildings are seldom mass-produced with identical specifications; predetermined standardized parameters are thus atypical. By the time parametric schemas are established in architectural design, foundational decisions have generally been settled upon, diminishing the perceived value of further generative exploration (Gürsel Dino, Reference Gürsel2012). Moreover, stakeholders or clients may view the exploration of dozens of parametric variations as an impractical expense unless the outcome offers functional or experiential benefits (American Institute of Architects, 2014). While certain building types—like modular housing or multi-family projects—may favor repeated design modules (Levin, Reference Levin2015), these cases remain relatively specialized within the broader architectural industry.
Paradoxically, the true promise of generative methods in architecture lies in engaging them at a point where key ideas and fundamental decisions remain fluid (American Institute of Architects, 2014). If parametric modeling and generative experimentation were introduced at the earliest conceptual phases, one could envision a more expansive and meaningful search through the design space. However, the inherent complexity of scripting and managing parametric workflows frequently deters architects from employing generative techniques in these crucial early phases of uncertainty and ideation.
Recent advances in generative artificial intelligence, particularly large language models (LLMs) exemplified by ChatGPT, suggest new possibilities for overcoming these barriers (Ko & Key, Reference Ko and Key2023). Conversational AI offers an intuitive method of interaction that allows designers to dynamically identify, define, and iteratively refine design parameters through natural-language exchanges rather than complex coding processes (Makatura et al., Reference Makatura, Foshey, Wang, Lein, Ma, Deng, Tjandrasuwita, Spielberg, Owens and Chen2023).
Building on these insights, this paper proposes a conversational design framework specifically tailored for architectural parametric modeling, demonstrated through two complementary scenarios. The first scenario centers on a user-driven process: ChatGPT is integrated with Dynamo in Autodesk Revit (Yori et al., Reference Yori, Kim and Kirby2019), enabling architects to incrementally define, combine, and modify design parameters through natural-language exchanges. Once a desired parametric model is established, AI-based visualization tools such as Veras (VERAS, n.d.) provide rapid feedback on materiality and aesthetics within the BIM environment. In the second scenario, we explore a more AI-driven approach. Here, the ChatGPT API is embedded directly into Grasshopper for Rhinoceros (Saremi et al., Reference Saremi, Mirjalili and Lewis2017), allowing the system to autonomously generate key parameters—informed by the user’s high-level design intent—and produce the resulting 3D geometry. Visualization again uses Veras, but in this workflow, the designer's role shifts to guiding and evaluating the AI-suggested shapes rather than directly specifying each parameter.
2. Background and literature review
2.1. Architectural visualization
Architectural visualization has traditionally posed significant challenges for designers. Before the advent of digital platforms, drawings and physical models were the only means of conveying spatial ideas—both labor-intensive and inflexible for exploring multiple options (Yildirim & Yavuz, Reference Yildirim and Yavuz2012). Rendering tools such as V-Ray, Maxwell, and Lumion revolutionized this process by enabling high-fidelity images with sophisticated lighting and material effects (Peddie & Peddie, Reference Peddie and Peddie2019). Yet these programs often carry steep hardware requirements and can consume days of computation to produce final outputs, making rapid iteration difficult. Additionally, mastering their numerous parameters—textures, reflections, and other fine-grained details—requires substantial training (Miller et al., Reference Miller, Przybyla and Pegah2004). Even with more user-friendly systems like Enscape, high-quality results still depend on the architect’s careful tuning of numerous variables (Leitão et al., Reference Leitão, Castelo-Branco and Santos2019).
Recent progress in AI has transformed visualization further, reducing both the expertise and effort previously required. Tools like Veras or Midjourney employ generative AI (Ghimire et al., Reference Ghimire, Kim and Acharya2023) to produce conceptual images rapidly, allowing architects to iterate aesthetic decisions in tandem with spatial modeling (Borji, Reference Borji2022). Rather than remaining a purely downstream step in the conventional design process, visualization can become a dynamic partner in exploring form and appearance from the earliest phases (Jo et al., Reference Jo, Lee, Lee and Choo2024). By generating multiple styles, moods, or material impressions on demand, AI-driven solutions promise not only to streamline rendering workflows but also to redefine how architects conceive and communicate design concepts.
2.2. Parametric environments and AI-enhanced applications
Over the past two decades, parametric modeling platforms (e.g., Rhino+Grasshopper, Autodesk Revit+Dynamo, Blender, Houdini, SketchUp, and AutoCAD) have become indispensable in architectural practice. They offer node-based or scripted methodologies to generate complex geometries and automate repetitive tasks, all while maintaining design flexibility.
Recent advancements in computational design have further fueled this ecosystem with AI-enhanced applications aimed at reducing the technical overhead often associated with parametric workflows (Gürsel Dino, Reference Gürsel2012). Examples include Finch3D (Optimizing Architecture, 2020), which expedites early space planning by generating rectilinear layouts under user-defined constraints; Hypar (Hypar, n.d.), which employs cloud-based “functions” to automate building layouts or repetitive tasks; and TestFit (TestFit, n.d.), known for configuring multifamily housing or parking arrangements.
However, such solutions, despite being intended to augment architects’ autonomy and creativity, risk a paradox: the AI meant to liberate designers can also limit them. While these tools may be useful for automating discrete tasks, they frequently rely on predefined design logics that can inadvertently constrain a design (Bolojan, Reference Bolojan2022; Ko & Key, Reference Ko and Key2023). Moreover, with technology advancing at such a rapid pace, a static catalog of AI tools risks quickly becoming outdated. New domain-specific plugins are constantly emerging to address particular goals (e.g., architecture-ai (community builder, n.d.)), and next-generation models like Gemini (Team et al., Reference Team, Anil, Borgeaud, Alayrac, Yu, Soricut, Schalkwyk, Dai, Hauth and Millican2023) promise ever more advanced generative capabilities. However, it should not be architects’ and designers’ responsibility to track every new release or adapt to fleeting trends. A more fundamental question arises: how can these tools truly empower architects as decision-makers, freeing them from repetitive tasks such as modeling or rendering, and supporting creative, context-sensitive judgments instead? Focusing on how AI can enhance designers’ capacity to conceive, evaluate, and refine architecture—rather than merely automate it—may thus be key to sustaining genuine innovation in this evolving landscape.
2.3. Reconceptualizing parametric workflows with AI
Historically, effective utilization of parametric modeling tools has required significant proficiency in programming languages such as Python or C#, resulting in a steep learning curve that often discourages architects and designers. Moreover, the complexity inherent in these tools, coupled with the frequent necessity of code adjustments and debugging, can impede the spontaneous creative exploration that is essential during early design stages (Ko et al., Reference Ko, Ennemoser, Yoo, Yan and Clayton2023). Updating parameters and models to meet evolving design needs is often time-intensive.
Future tools must prioritize intuitive interfaces, reducing reliance on coding and fostering seamless integration with design workflows (Schnabel, Reference Schnabel2007). Recent developments in LLMs, exemplified by ChatGPT, suggest a potential paradigm shift in addressing long-standing challenges associated with parametric modeling and generative design workflows. For instance, Rane et al. present a comprehensive overview of integrating ChatGPT, Bard, and other cutting-edge AI models in diverse design and engineering workflows—ranging from early conceptual studies and structural simulations to project management and construction scheduling (Rane et al., Reference Rane, Choudhary and Rane2023). In a related effort, Lamaakal et al. showed how LLMs can translate plain-language prompts into functional Grasshopper scripts for parametric modeling (Lamaakal et al., Reference Lamaakal, Maleh, Makkaoui, Ouahbi, Pławiak, Alfarraj, Almousa and El-Latif2025).
However, architectural design seldom unfolds as a single, discrete task but rather as a confluence of interrelated activities: collaboration, iterative problem-solving, documentation, code generation, and more. Accordingly, it is not sufficient to consider how AI might optimize a single aspect of workflow in isolation. Instead, a broader perspective is required—one that situates generative tools within the full spectrum of architectural design, where multiple parameters, stakeholders, and objectives intersect (Ko et al., Reference Ko, Ennemoser, Yoo, Yan and Clayton2023). Hence, rather than viewing generative AI solely as a tool for scripting or code automation, this paper contends that successful integration must embrace the inherently interconnected nature of architectural work.
3. Methodology
This study embeds conversational AI within widely adopted parametric modeling tools to expand architects’ autonomy, allowing them to control building forms and components via natural language. ChatGPT was chosen for its stable API, broad user community, and robust developer ecosystem, reducing integration hurdles and supporting iterative, text-based design exploration. For the parametric modeling, Dynamo in Autodesk Revit and Grasshopper in Rhinoceros 3D were selected because they are long-established standards in architectural practice. Lastly, Veras was employed for near-instant AI-based visualization, rapidly converting geometry into conceptual or stylized imagery.
Two complementary scenarios have been developed (Fig. 1). The first scenario, User-Driven Parameters (ChatGPT + Dynamo), shows the approach in Autodesk Revit for Building Information Modeling (BIM). When the user (designer) provides natural-language prompts, ChatGPT translates these into Dynamo-compatible scripts, enabling an iterative process of prompt generation, debugging, parameter refinement, and, finally, visualization—all within the same architectural design task.
The second scenario, AI-Driven Parameters (ChatGPT API + Grasshopper), places ChatGPT in a more proactive, generative role. Here, Rhinoceros 3D and Grasshopper serve as the parametric environment, and ChatGPT integrates directly via an API. Rather than manually specifying parameters or scripting logic, the designer conveys broad design intentions—leading ChatGPT to infer core geometric principles and generate or adapt Grasshopper definitions with minimal human intervention.
In both scenarios, Veras provides rapid rendering or visualization, ensuring that parameter changes translate swiftly into tangible modifications of the built form.

Figure 1. Flowchart for conversational AI in parametric design
4. Results and discussion
4.1. User-driven parameters (ChatGPT + Dynamo)
This section explores the use of ChatGPT (version 4.0) and Veras in the Dynamo-based BIM design process. ChatGPT enables an interactive design workflow, while Veras provides feedback for refining design solutions.
The process begins with defining the design task. ChatGPT generates Python code for Dynamo based on user prompts. The code is tested and refined as needed, ensuring the design process functions as intended. Once the desired outcome is achieved, the interaction with ChatGPT concludes. The framework highlights how ChatGPT interprets architectural intentions, converts them into actionable instructions, and produces a BIM model.

Figure 2. Initial query (Step 1, left) and troubleshooting (Step 2, right)
• Step 1. Initial Query and User Input
Users start by entering design prompts, ranging from general ideas to specific requirements. The task begins with creating a wall and a window in Dynamo (Step 1, Fig. 2).
• Step 2. User Feedback and Troubleshooting
Verifying and refining the Python code is critical. Errors encountered during execution in Dynamo prompted iterative debugging. The user copied the error messages from Dynamo into ChatGPT, which provided revised code. This process continued until all issues were resolved (Steps 2-1, 2-2, Fig. 2). ChatGPT generated the complete Python code and resolved all debugging issues, except for the manual input of error messages into the chat. Future integration of ChatGPT via an API could automate this process, enabling direct interaction between Dynamo and ChatGPT for seamless troubleshooting (a sketch of such a loop is shown after Step 4).
• Step 3. Chat for Creating BIM of a Simple Example Design
The process progressed incrementally, starting with a wall and window, then creating a room with four connected walls, and finally adding a roof and door. Each step built upon the previous outcomes, using ChatGPT-generated Python nodes. Step 3 in Fig. 3 illustrates the creation of the room following the initial wall and window task.
• Step 4. Interactive Developments
ChatGPT demonstrated flexibility in adding new elements to the model. For instance, a request to add a door and roof to the room yielded updated Python code integrating these features. This adaptability underscores ChatGPT’s capability to respond dynamically to evolving design needs, supporting an iterative and user-driven design process (Step 4, Fig. 3).
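To illustrate the automation suggested in Step 2, the sketch below outlines one way such a debugging loop could be wired against the ChatGPT API. It is purely indicative and was not part of the implemented workflow: the model name, the prompt wording, and the run_in_dynamo() function—a hypothetical stand-in for whatever mechanism executes the generated script and captures its error output—are all assumptions.

```python
# Hypothetical sketch of an automated debugging loop between Dynamo and the
# ChatGPT API: a script's error message is fed back to the model and revised
# code is requested until execution succeeds (or a retry limit is reached).
import requests

API_URL = "https://api.openai.com/v1/chat/completions"
HEADERS = {"Authorization": "Bearer <YOUR_API_KEY>"}

def ask_chatgpt(messages):
    payload = {"model": "gpt-4", "messages": messages}
    response = requests.post(API_URL, headers=HEADERS, json=payload, timeout=60)
    return response.json()["choices"][0]["message"]["content"]

def debug_until_it_runs(task_prompt, run_in_dynamo, max_rounds=5):
    messages = [{"role": "user", "content": task_prompt}]
    code = ask_chatgpt(messages)
    for _ in range(max_rounds):
        error = run_in_dynamo(code)  # returns None on success, error text otherwise
        if error is None:
            return code
        # Relay the Dynamo error back to the model and request corrected code
        messages.append({"role": "assistant", "content": code})
        messages.append({"role": "user",
                         "content": "Dynamo raised this error:\n" + error +
                                    "\nPlease return the corrected Python code only."})
        code = ask_chatgpt(messages)
    return code
```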

Figure 3. Initial design process (Step 3, left) and design refinement (Step 4, right)
This responsiveness demonstrates ChatGPT's adaptability to the evolving design needs and goals of the user: it plays a significant role in the iterative and dynamic process of building design, adjusting its responses and code based on user input and feedback.

Figure 4. Final script and BIM models
• Step 5. Implementation and Fine-Tuning
Each Python script generated by ChatGPT was implemented in Dynamo through a Python Script node. Successful execution required users to connect the appropriate input and output nodes, aligning with the script’s requirements. Inputs such as wallType, windowType, dimensions, and level were specified, with data types and conditions adjusted accordingly (Fig. 4).
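For illustration, the sketch below approximates the kind of Python Script node used at this stage: it creates a wall from the supplied type, level, and dimensions, then hosts a window at the wall's midpoint. The input ordering, unit conversion, and placement logic are simplifying assumptions rather than the verbatim code generated by ChatGPT, which varied across iterations.

```python
# Illustrative Dynamo Python Script node (not ChatGPT's exact output).
# Assumed input ports: IN[0] = wall type, IN[1] = window family type,
# IN[2] = level, IN[3] = wall length (mm), IN[4] = wall height (mm).
import clr
clr.AddReference('RevitAPI')
clr.AddReference('RevitServices')
from Autodesk.Revit.DB import Line, XYZ, Wall, UnitUtils, UnitTypeId
from Autodesk.Revit.DB.Structure import StructuralType
from RevitServices.Persistence import DocumentManager
from RevitServices.Transactions import TransactionManager

doc = DocumentManager.Instance.CurrentDBDocument
wall_type, window_type, level = (UnwrapElement(IN[0]), UnwrapElement(IN[1]),
                                 UnwrapElement(IN[2]))
length = UnitUtils.ConvertToInternalUnits(IN[3], UnitTypeId.Millimeters)
height = UnitUtils.ConvertToInternalUnits(IN[4], UnitTypeId.Millimeters)

TransactionManager.Instance.EnsureInTransaction(doc)

# Create a straight wall along the X axis on the given level
baseline = Line.CreateBound(XYZ(0, 0, 0), XYZ(length, 0, 0))
wall = Wall.Create(doc, baseline, wall_type.Id, level.Id, height, 0, False, False)

# Host a window at the midpoint of the wall
if not window_type.IsActive:
    window_type.Activate()
midpoint = XYZ(length / 2.0, 0, height / 2.0)
window = doc.Create.NewFamilyInstance(midpoint, window_type, wall, level,
                                      StructuralType.NonStructural)

TransactionManager.Instance.TransactionTaskDone()
OUT = [wall, window]
```

The node's outputs can then be inspected or carried into subsequent ChatGPT-generated nodes, as in Steps 3 and 4; a production script would also require input validation and error handling, which the sketch omits.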
The finalized script produced the desired BIM models, illustrating ChatGPT’s potential to facilitate architectural design. The interaction showcases how AI can assist users in exploring and realizing their design ideas efficiently and effectively (right bottom, Fig. 4).
4.2. Integrating Veras for design ideation and visualization
Following Step 5 of the previous section, a simple box-shaped model was produced, featuring four walls (each with a window), a flat roof, and a single door (see the three rendered images at the top of Table 1). In this section, we explore how Veras, an AI-powered visualization add-in, integrates with the ChatGPT + BIM workflow to further refine and visualize this basic model.
Table 1. Based on the BIM views (left column) and the user’s prompts, Veras generated visualizations of different designs (middle and right columns)

Starting with a basic box-shaped model, Veras allows users to specify materials and building purposes through simple prompts. For example, Rendering A (“Wooden retail store”) introduced unique wood patterns and wall variations, while Rendering B (“Concrete office building”) featured dark glazing and setback openings (Table 1).
Interior spaces also benefit from Veras’s feedback. Renderings C and D responded to prompts like “Interior contemporary living room with white paint and wood as major materials,” suggesting changes such as filling sunken spaces or modifying skylights based on ceiling materials (Table 1).
Veras facilitates both exterior and interior design ideation by providing actionable visual feedback. By leveraging its AI-driven visualizations, users can explore a wide range of design possibilities while refining their initial concepts in a more intuitive and efficient manner.
4.3. AI-driven parameters (ChatGPT API + Grasshopper)
This section explores the integration of the ChatGPT API with Rhino Grasshopper for generating 3D architectural models via natural language input. The focus is on automating workflows and refining prompt-based design processes.
• Step 1. Importing the ChatGPT API into Rhino Grasshopper
A trigger component and data recorder automated iterative design by sending API requests at intervals (e.g., every 2 seconds). The script generated new coordinates in each cycle, creating geometry dynamically.
• Step 2. API as Components
Geometries were generated directly from the API, avoiding manual intervention. Initial prompts produced text-heavy outputs, which were refined with instructions like “No text, only equations.”
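As a concrete illustration of Steps 1 and 2, the following sketch shows a Grasshopper Python component that posts a descriptive prompt to the chat-completions endpoint and returns only the equation text. The model name, the prompt wording, and the assumption of a CPython script component with the requests library available are illustrative choices, not the verbatim implementation; older IronPython (GhPython) components would use urllib2 or System.Net instead.

```python
# Illustrative Grasshopper Python component: ask the ChatGPT API for a curve
# equation. A timer/trigger recomputes this component at intervals (e.g., every
# 2 s) and a downstream data recorder stores each result. API key is a placeholder.
import requests

API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = "<YOUR_API_KEY>"

def ask_for_equation(description):
    payload = {
        "model": "gpt-4",
        "messages": [
            # The "No text, only equations" refinement described in Step 2
            {"role": "system",
             "content": "No text, only equations. Return a single y = f(x) "
                        "expression using sine and cosine functions."},
            {"role": "user", "content": description},
        ],
    }
    response = requests.post(API_URL,
                             headers={"Authorization": "Bearer " + API_KEY},
                             json=payload, timeout=60)
    return response.json()["choices"][0]["message"]["content"].strip()

# `x` is the component input holding the descriptive prompt,
# e.g., "placid, calm waves" or "drastic, fluctuating waves".
a = ask_for_equation(x)  # output: raw equation string for downstream components
```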

Figure 5. Descriptive language implementation in ChatGPT API
Constraints, such as using sine and cosine functions, ensured consistent polynomial curves. Testing descriptive prompts (e.g., “placid, calm waves” vs. “drastic, fluctuating waves”) produced visually distinct curves (Fig. 5), showing ChatGPT's responsiveness to input. This outcome demonstrates that ChatGPT can interpret human language input and generate designs that cater to specific requirements or emotions. It highlights the importance of prompt formulation for unlocking the full potential of ChatGPT-generated architectural design, ultimately expanding the possibilities of design methods.
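To indicate how such an equation string becomes geometry, the sketch below samples a returned expression over a parameter range and interpolates the resulting points into a Rhino curve. The sampling range, the number of samples, and the use of a restricted eval are simplifying assumptions for illustration only.

```python
# Illustrative sketch: convert an equation string returned by the API,
# e.g. "y = 2*sin(x) + 0.5*cos(3*x)", into an interpolated Rhino curve.
import math
import Rhino.Geometry as rg

def equation_to_curve(equation, x_min=0.0, x_max=30.0, samples=120, degree=3):
    # Keep only the right-hand side of "y = ..."
    expression = equation.split("=")[-1].strip()
    allowed = {"sin": math.sin, "cos": math.cos, "pi": math.pi, "abs": abs}
    points = []
    for i in range(samples + 1):
        x = x_min + (x_max - x_min) * i / samples
        y = eval(expression, {"__builtins__": None}, dict(allowed, x=x))
        points.append(rg.Point3d(x, y, 0.0))
    # Degree-3 interpolation gives the smooth "wave" curves shown in Fig. 5
    return rg.Curve.CreateInterpolatedCurve(points, degree)

curve = equation_to_curve(a)  # `a` is the equation string from the previous step
```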
Table 2. Prompt with topological condition

• Step 3. ChatGPT as Generative Design Tool
To create lofted building forms, prompts specified constraints like convexity and non-intersecting lines. Adjustments ensured stable structures, incorporating mandatory elements like a circular framing radius of 6 units and controlled variability within a defined radius of 3 (Table 2). The prompts were refined to include constraints like “ensuring convex” and “avoiding intersecting lines” (Table 2, row b), enabling ChatGPT to generate dynamic, non-intersecting convex or concave shapes effectively.
Additional parameters for column generation required curves to include a circle with a radius of 6 units for structural stability, while the center point varied within a radius of 3 (Table 2, row c). This introduced controlled randomness, ensuring designs adhered to practical architectural requirements while maintaining structural integrity.
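The geometric logic described above can be summarized in a short sketch: circular section curves of radius 6 are stacked at successive levels, each center varied within a radius of 3, and the sections are lofted into a building mass. The number of levels, the floor height, and the jitter model are illustrative assumptions; in the actual workflow these decisions emerged from ChatGPT's responses to the prompts in Table 2.

```python
# Illustrative sketch of the lofted-form logic in Step 3: circular sections
# (radius 6) whose centers vary within a radius of 3, lofted into a mass.
import math
import random
import Rhino.Geometry as rg

FRAME_RADIUS = 6.0    # mandatory circular framing radius (Table 2, row c)
CENTER_JITTER = 3.0   # allowed variation of each section's center
FLOORS = 8            # assumed number of stacked sections
FLOOR_HEIGHT = 4.0    # assumed vertical spacing between sections

sections = []
for i in range(FLOORS):
    # Random but bounded center keeps the sections convex and non-intersecting
    angle = random.uniform(0, 2 * math.pi)
    offset = random.uniform(0, CENTER_JITTER)
    center = rg.Point3d(offset * math.cos(angle),
                        offset * math.sin(angle),
                        i * FLOOR_HEIGHT)
    circle = rg.Circle(rg.Plane(center, rg.Vector3d.ZAxis), FRAME_RADIUS)
    sections.append(circle.ToNurbsCurve())

# Loft the stacked sections into a single building envelope
a = rg.Brep.CreateFromLoft(sections, rg.Point3d.Unset, rg.Point3d.Unset,
                           rg.LoftType.Normal, False)  # Grasshopper output: Brep list
```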
• Step 4. Veras Visualization
Integrating Veras enabled rapid exploration of architectural style, materiality, context, and scale. Initially, a building was rendered without user prompts, resulting in a structure with an aluminum skin on flat terrain at sunset, with Veras autonomously adding a structural column to support the lifted mass (rendering A, Table 3). With the prompt “Black metallic building,” Veras generated a darker, heavier aesthetic, featuring a thick, abstract skin and a defined entry point at ground level, enhancing structural integrity (rendering B). Inspired by architects like Zaha Hadid and Frank Gehry, prompts tailored to their styles resulted in designs emulating their iconic works (renderings C and D). Locality-specific prompts, such as “Building in downtown Los Angeles,” added an LA backdrop to a concrete and metal-skinned structure, while “Zaha Hadid building in Downtown NY” rendered a Manhattan setting with cool tones (renderings E and F). Scale keywords were also tested. The term “pavilion” reduced the scale, focusing on exterior skin and structural details like stair-like openings and support elements (rendering G). Regional and material contexts further demonstrated Veras’s adaptability. For example, a bamboo pavilion rendered with “Africa” featured wide spacing for ventilation, while the same prompt in the “Arctic” produced a denser structure with snow accumulation (renderings H and I).
These results highlight Veras’s ability to generate contextually responsive designs, enabling diverse and iterative exploration of architectural possibilities.
Table 3. Veras renderings for design exploration

• Step 5. Veras Design Feedback
The ability of Veras to provide immediate visual feedback proved highly effective in the iterative design process, particularly for refining complex geometries generated by the ChatGPT API (Table 4). For instance, when rendering a facade created using the prompt “Generate a polynomial curve that has surge, drastic, and crazy fluctuation waves,” Veras translated the design into a shell-shaped roof (middle column, Table 4).
While intriguing, the authors aimed for a more angular, protruding form, prompting a revision to the ChatGPT API prompt to include “interpolated curve degree is 0” (right column, Table 4). This adjustment produced a markedly different output, featuring sharp angles and a fragmented, jagged geometry, illustrating the utility of AI-based visualization tools for rapid feedback and iterative design.
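A small sketch clarifies the effect of this adjustment. Rhino's interpolated curves require a degree of at least 1, so the “degree is 0” wording is read here as the lowest valid degree, which yields a polyline-like, angular interpolation in contrast to the smooth default; the sample points are invented for illustration.

```python
# Illustrative comparison of the interpolation-degree adjustment in Step 5.
import math
import Rhino.Geometry as rg

# Sample a fluctuating wave, as in the "surge, drastic, crazy fluctuation" prompt
points = [rg.Point3d(x, 3 * math.sin(x) + math.cos(5 * x), 0)
          for x in [0.5 * i for i in range(60)]]

smooth_curve = rg.Curve.CreateInterpolatedCurve(points, 3)   # flowing, shell-like form
angular_curve = rg.Curve.CreateInterpolatedCurve(points, 1)  # polyline-like, jagged form
```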
Table 4. Comparative visualization of design iterations: basic prompt (left), addition 1 (middle), and addition 2 (right)

Traditionally, architects model building elements and create renderings to visualize and refine designs, repeating this labor-intensive process until achieving satisfactory results. This approach often leads to increased complexity, extended timelines, and potential errors. The integration of ChatGPT, its API, BIM, parametric modeling, and Veras proposed in this study streamlines this process. By generating complex geometric shapes through descriptive prompts and providing immediate visual feedback, Veras reduces the time and effort required for iterative design. This allows designers to focus more on creativity and less on technical scripting.
The study highlights significant advancements in design efficiency and exploration while revealing critical challenges for future research. Generative AI tools currently face limitations in addressing complex, bespoke, or customized designs, which are crucial in architectural practice. Additionally, the physical feasibility of AI-generated designs must be considered, as some outputs, such as Penrose-like constructs (DiPaola et al., Reference DiPaola, Gabora and McCaig2018), may be visually impressive but structurally impossible. Addressing these challenges is essential for fully realizing AI's potential in architectural design.
5. Conclusions
This paper introduced a conversational AI framework for architectural parametric modeling, demonstrating how LLMs can reduce coding barriers in both user-driven (Revit+Dynamo) and AI-driven (Grasshopper) scenarios. By integrating ChatGPT directly into these workflows—and coupling them with on-demand visualization tools—designers can explore fluid, early-phase experimentation without being bound by rigid parametric schemas. The resulting synergy of natural-language prompts and parametric scripts expands architects’ capacity to iterate on form, material aesthetics, and functional constraints in a single, interactive environment.
Several challenges remain. First, while AI-generated geometry can be compelling, it must incorporate regulatory codes and topological constraints to ensure actual buildability—an area not deeply addressed in the present exploration. Second, LLMs sometimes produce unintended or overly abstract shapes, falling short of precise micro-scale detailing. Third, the study’s conceptual approach—though indicative of potential—has not yet been tested in a comprehensive pilot implementation, thus requiring further empirical validation in real-world contexts. Fourth, concerns about data privacy persist, as many generative AI services run on external platforms. Alternative local-model solutions (e.g., Ollama (Marcondes et al., Reference Marcondes, Gala, Magalhães, Perez de Britto, Durães and Novais2025)) may mitigate these risks by preventing sensitive design data from leaving the architect’s environment. Nonetheless, the foundational premise remains: conversational AI can be a powerful partner in early-stage design, facilitating both broad creative exploration and iterative refinement, provided architects maintain critical oversight of structural, regulatory, and contextual factors.