
Enhancing computer-aided design with deep learning frameworks: a literature review

Published online by Cambridge University Press:  27 August 2025

Sarah Steininger*
Affiliation:
Technical University of Munich, Germany BMW Group, Munich, Germany
Jasmin Zhao
Affiliation:
Technical University of Munich, Germany
Johannes Fottner
Affiliation:
Technical University of Munich, Germany

Abstract:

Generative artificial intelligence (GenAI) has the potential to further revolutionize Computer-Aided Design (CAD) by recognizing patterns, making predictions, and generating automated design suggestions. This paper presents a systematic literature review that examines the current state of research on the use of GenAI in CAD-based product development. With a focus on 3D modelling, it provides an overview of current approaches, the most frequently used datasets, and commonly used AI models. Four application areas in which GenAI can enhance CAD were derived: design generation, design reconstruction, design retrieval, and design modification. In total, 47 papers were selected, analysed and categorised.

Information

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Copyright
© The Author(s) 2025

1. Introduction

Geometric design plays a fundamental role in the development of hardware products. Starting with rough concept sketches, the design evolves and is refined over time into a detailed production form. During this process, factors such as manufacturability, assembly feasibility and costs are crucial considerations. The complexity of these requirements makes 3D design a highly iterative process that consumes considerable time and energy (Vajna, 2022). Since around 1960, computer-aided design (CAD) tools have been used to support this design process, enabling the visualisation and development of 3D shapes. To make the 3D design process as efficient as possible, there is a strong emphasis on speeding up and automating the creation and modification of 3D models. Over the past 60 years, CAD tools have evolved considerably, and various approaches have been developed to automate CAD processes. These automation techniques (e.g. parametric design or topology optimisation) have greatly improved the efficiency and quality of the design process, enabling engineers to create more complex products in less time (Vajna et al., 2018). A relatively new approach to CAD automation is generative design, which uses algorithms to generate design options based on defined parameters and constraints. In this area, the rise of artificial intelligence (AI) algorithms opens up new opportunities. AI technologies can identify patterns in existing designs, predict performance, and generate automated design suggestions based on large datasets. The implementation of AI in CAD applications therefore has the potential to revolutionize workflows and make them more efficient and cost-effective by saving time. However, these new opportunities also raise new questions (Steininger et al., 2024): How can AI algorithms be integrated into the CAD process? How can historical knowledge be effectively incorporated? This paper analyses the current state of research and approaches to the use of AI algorithms in CAD-based product development through a systematic literature review. It provides an overview of promising approaches, the most frequently used datasets, and commonly used AI models, with a focus on 3D design.

2. State of the art

2.1. Geometric modelling in CAD

The computer-aided description of the shape of geometric objects plays a central role in the engineering and manufacturing of hardware products. There are several different modelling concepts, and the choice of method depends on the specific problem and the objects to be modelled. Furthermore, the different formats influence further processing with AI algorithms. Based on the mathematical description, a first distinction is made between curve, surface and solid modelling (Abramowski & Stephan, 1991).

Surface Modelling

Surface modelling focuses on the definition of surfaces without describing the volume of the interior of the bodies. Both analytically describable surfaces, such as translational and ruled surfaces, and analytically non-describable surfaces, such as B-spline and NURBS surfaces, are used (Bonitz, 2009). Similar to the mathematical description of curves, surfaces can be represented explicitly, implicitly or parametrically. The most important surface types for geometric modelling include surfaces of revolution (cylinders, cones, spheres, tori), extrusion surfaces, Bézier surfaces (rational, non-rational) and B-spline surfaces (rational, non-rational) (Agoston, 2005).
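As a brief illustration of parametric surface modelling (not taken from any reviewed paper), a tensor-product Bézier patch can be evaluated from its control points via Bernstein polynomials; the flat 2x2 example patch below is invented for demonstration:

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{i,n}(t)."""
    return comb(n, i) * t**i * (1 - t)**(n - i)

def bezier_surface(control, u, v):
    """Evaluate a tensor-product Bezier patch at parameters (u, v).

    control: (n+1, m+1, 3) array of 3D control points.
    """
    n, m = control.shape[0] - 1, control.shape[1] - 1
    point = np.zeros(3)
    for i in range(n + 1):
        for j in range(m + 1):
            point += bernstein(n, i, u) * bernstein(m, j, v) * control[i, j]
    return point

# A flat 2x2 patch: here the surface reduces to bilinear
# interpolation of its four corner control points.
corners = np.array([[[0, 0, 0], [0, 1, 0]],
                    [[1, 0, 0], [1, 1, 0]]], dtype=float)
center = bezier_surface(corners, 0.5, 0.5)   # midpoint of the unit square
```

Rational variants (and NURBS) extend this scheme with per-control-point weights; the non-rational case above is the simplest instance.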

Volume Modelling

In volume modelling, the geometry is described as a solid volume. The solids to be described have a closed, oriented surface and share the property that, for any point, it can be stated unambiguously whether the point lies inside, outside or on the surface of the solid. This representation is often used in mechanical design, where precise physical properties such as mass and volume are important.

Solids can be modelled in a variety of ways. Various representations of solids (representation schemes) have been developed for this purpose, differing in their memory requirements, numerical accuracy, complexity and ability to be transformed into other representation schemes. A distinction is made between direct representation schemes (e.g. Constructive Solid Geometry (CSG), the standard cell scheme or octrees), where the volume itself is described, and indirect schemes (e.g. the Boundary Representation scheme (B-rep) or the facet model), where the description is made via edges and faces (Abramowski & Stephan, 1991; Agoston, 2005).
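As an illustrative sketch of a direct representation scheme, CSG can be realised over implicit primitives: each primitive is a signed function that is negative inside the solid, and Boolean operations combine these functions, so the inside/outside/on-surface test mentioned above becomes a sign check. The primitives and the cube-with-hole example below are invented for illustration:

```python
import numpy as np

# Implicit primitives: f(p) < 0 inside, > 0 outside, = 0 on the surface.
def sphere(p, center, r):
    return np.linalg.norm(np.asarray(p, dtype=float) - np.asarray(center)) - r

def box(p, half_extent):
    q = np.abs(np.asarray(p, dtype=float)) - np.asarray(half_extent, dtype=float)
    return np.linalg.norm(np.maximum(q, 0.0)) + min(q.max(), 0.0)

# CSG Boolean operations expressed on the signed values.
def union(d1, d2):        return min(d1, d2)
def intersection(d1, d2): return max(d1, d2)
def difference(d1, d2):   return max(d1, -d2)

# A 2x2x2 cube with a spherical hole drilled out of its centre.
def solid(p):
    return difference(box(p, [1, 1, 1]), sphere(p, [0, 0, 0], 0.5))

inside = solid([0.8, 0.8, 0.8]) < 0   # in the cube, outside the hole
hole   = solid([0.0, 0.0, 0.0]) < 0   # inside the drilled-out sphere
```

Indirect schemes such as B-rep instead store the bounding faces and edges explicitly, so point membership must be derived rather than read off a sign.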

2.2. Fundamentals of Generative Artificial Intelligence models

Generative AI (GenAI) is a subset of deep learning that generates content such as text, images or code based on an input. Kretzschmar and Damman (2024) give an up-to-date overview and deduce that there are currently five key GenAI architectures. Further investigation into the types of GenAI used shows that text-based applications predominate (Kanbach et al., 2024). With regard to the literature research on AI in CAD, the following architectures are relevant and are detailed below: (1) Variational Autoencoders (VAE), (2) Transformers, and (3) Generative Adversarial Networks (GAN).

Variational Autoencoders (VAE)

VAEs are one of the most frequently used autoencoder variants. Compared to standard autoencoders, VAEs are probabilistic in nature: the deterministic latent vector is replaced by a mean vector and a standard deviation vector that together describe the probability distribution of the latent variables. Based on the input, the encoder generates a distribution over the latent variables, and the decoder draws a sample from this latent distribution to generate a distribution over possible outputs (Vasudevan et al., 2021).
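The replacement of the deterministic latent vector by distribution parameters can be sketched numerically. The toy linear "encoder" below is a hypothetical stand-in for a deep network; only the sampling step (the reparameterization trick) and the KL regulariser of the VAE objective are shown:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    """Toy linear 'encoder': maps an input to the parameters of a
    Gaussian over the latent variables (mean and log-variance)."""
    return x @ W_mu, x @ W_logvar

def sample_latent(mu, log_var, rng):
    """Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    which keeps the sampling step differentiable w.r.t. mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    """KL(q(z|x) || N(0, I)) -- the regulariser in the VAE loss."""
    return -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))

x = rng.standard_normal(4)                  # a toy input vector
W_mu = rng.standard_normal((4, 2))          # hypothetical encoder weights
W_logvar = rng.standard_normal((4, 2))
mu, log_var = encode(x, W_mu, W_logvar)
z = sample_latent(mu, log_var, rng)         # one latent sample, shape (2,)
```

A decoder network would then map `z` back to the data space; repeated sampling from the same input yields different, but related, outputs.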

Transformers

Transformer architectures are a fundamental concept in deep learning, especially in natural language processing, and are widely used in dominant AI models such as OpenAI’s GPT-3, AI21, and Google’s BERT. Transformers are based on an encoder-decoder architecture: the encoder produces a representation of an input sequence, which the decoder combines with the output sequence it generated in the previous step to produce the final output. In contrast to convolutional neural networks, the transformer does not use convolutional layers to generate latent vectors but relies on self-attention mechanisms (Hirschle, 2022). Transformers first gained attention with the now-famous paper by Vaswani et al. (2017).
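The self-attention mechanism at the core of the transformer can be sketched in a few lines; the random projection matrices below stand in for learned weights, and a full transformer would add multiple heads, feed-forward layers and positional encodings:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, W_q, W_k, W_v):
    """Scaled dot-product self-attention over a sequence X (seq_len, d_model).

    Every output position is a weighted mix of *all* value vectors, so
    information propagates across the entire input context in one step.
    """
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    d_k = K.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))   # (seq_len, seq_len)
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 5, 8, 4
X = rng.standard_normal((seq_len, d_model))           # toy input sequence
W_q, W_k, W_v = (rng.standard_normal((d_model, d_k)) for _ in range(3))
out, weights = self_attention(X, W_q, W_k, W_v)
```

Each row of `weights` sums to one: it is the distribution over input positions that the corresponding output position attends to.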

Generative Adversarial Networks (GAN)

GANs consist of an interplay between a generator and a discriminator, both built with deep neural networks (Chen & Ahmed, 2020). While the generator is trained to generate images, the discriminator has to decide whether those images are real or fake. Both parts are trained together so that the generator gets better at producing realistic images. Random noise serves as the generator's input for producing fake samples. By comparing these samples to the training samples, the discriminator evaluates whether a sample is real or fake. At first, the discriminator can easily distinguish real samples from fake ones, but the generated samples improve as the generator network works towards reducing the difference. Both networks compete with each other until the discriminator can no longer distinguish real samples from fake ones (Bishop & Bishop, 2024).
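The adversarial objective can be sketched numerically. The linear generator and discriminator below are hypothetical stand-ins for deep networks, and the snippet only evaluates the two competing losses for one batch rather than running the alternating gradient-descent loop:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy linear stand-ins for the two deep networks.
def generator(z, W_g):
    return z @ W_g                       # latent noise -> fake sample

def discriminator(x, w_d):
    return sigmoid(x @ w_d)              # sample -> probability "real"

def gan_losses(real, noise, W_g, w_d):
    """One evaluation of the adversarial objectives.

    The discriminator wants D(real) -> 1 and D(fake) -> 0, while the
    generator wants D(fake) -> 1; training alternates gradient steps
    that lower d_loss and g_loss respectively.
    """
    fake = generator(noise, W_g)
    d_real = discriminator(real, w_d)
    d_fake = discriminator(fake, w_d)
    d_loss = -np.mean(np.log(d_real) + np.log(1.0 - d_fake))
    g_loss = -np.mean(np.log(d_fake))
    return d_loss, g_loss

real = rng.standard_normal((16, 2)) + 3.0    # toy "real" data cluster
noise = rng.standard_normal((16, 4))         # latent noise input
W_g, w_d = rng.standard_normal((4, 2)), rng.standard_normal(2)
d_loss, g_loss = gan_losses(real, noise, W_g, w_d)
```

At the equilibrium described in the text, the discriminator outputs roughly 0.5 for both real and fake samples, and neither loss can be driven further down.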

2.3. Application areas of GenAI in CAD

Four areas in which GenAI enhances CAD can be derived from different works (Camba et al., 2016; Krahe et al., 2022; Xingang Li et al., 2023; C. Zhang & Zhou, 2019; S. Zhang et al., 2023). Design generation involves the creation of novel designs and the exploration of the design space, while design reconstruction, design retrieval and design modification are based on existing design concepts. In design reconstruction, designs are rebuilt using a different representation method, while design retrieval refers to the process of finding already existing, similar designs. Design modification refers to the ability to adapt existing CAD models for new applications. The focus of this literature review is on the first three areas (see Figure 1).

Figure 1. Application categories of GenAI in CAD

3. Systematic literature review

The methodology of the systematic review is based on two research questions (RQs) that guide the literature search process. To further structure the search, the content scope is defined around three primary criteria: artificial intelligence, computer-aided design and product development. Each criterion consists of several keywords and synonyms drawn from other foundational works. The literature search was carried out using the Scopus citation database, covering the period from January 2014 to April 2024. Finally, the results were filtered based on their alignment with the research questions and content scope, and a further categorisation was performed, focusing on design generation, reconstruction and retrieval.

3.1. Research questions

The literature review is based on two RQs that provide an overview of the topic and keywords that further describe specific areas.

  1. RQ1: What artificial intelligence methods can be applied in the product development of CAD models in the following three geometrical design types?

    • a. Design generation

    • b. Design reconstruction

    • c. Design retrieval

  2. RQ2: What are the characteristics of the research methods in terms of the network architectures, the datasets, and the representation methods?

3.2. Content scope and keywords

We define the content scope using the following three criteria to search for literature relevant to AI in geometrical design: (1) artificial intelligence, (2) computer-aided design, and (3) product development. Each criterion consists of several keywords and synonyms, as shown in Table 1. The choice of these keywords originates from Shabestari et al. (2019), who provide a brief overview of the current state of research regarding the incorporation of machine learning (ML) in the early phases of product development. In addition to our own keywords, the authors' keywords of Seff et al. (2021), Xu et al. (2022), Bickel et al. (2023) and Camba et al. (2016) were inspected and added if considered suitable with regard to the two research questions. Our literature research focuses on CAD models and methods that can be applied to CAD and leaves out idea generation and simulations in product development. We also focus on the geometric design part of CAD.

Table 1. Content scope criteria with their respective keywords and synonyms

3.3. Literature search process

As shown in Figure 2, 47 articles that meet the scope of the review were finally selected. Searches were conducted solely on the Scopus citation database within the time range of January 2014 until April 2024, since the most relevant articles, such as the initial seed papers (Bickel et al., 2023; Camba et al., 2016; Seff et al., 2021; Shabestari et al., 2019; Xu et al., 2022), were published after 2016. In total, we used four search strings consisting of the keywords in Table 1. Each time, we used a different combination of keywords and synonyms from each category, grouping them differently to cover several research areas. We connected the three criteria with “AND”, while we connected the respective synonyms and keywords with “OR”. The initial search with these four strings yielded 1270 seed articles, including duplicates. The exact keywords and their respective grouping for each search string are the following:

  • String 1: TITLE-ABS-KEY((“Artificial Intelligen*” OR “Machine Learning” OR “Neural Network” OR “Deep Learning”) AND (“Computer-Aided Design” OR “CAD Modeling” OR “Modeling” OR “Constr*”) AND (“Product Development” OR “Engineering” OR “Product Design”))

  • String 2: TITLE-ABS-KEY((“Specifications” OR “Topology” OR “Machine Learning” OR “Design” OR “Product Development” OR “System Analysis”) AND (“Design Sketch” OR “Generative Model” OR “Product Engineering” OR “Design intent” OR “Early Design Phasis” OR “Auto-regressive”) AND (“Computer-Aided Design” OR “Object Design” OR “Product Design”))

  • String 3: TITLE-ABS-KEY((“Machine Learning” OR “Artificial Intelligen*” OR “Deep Learning” OR “Generative Model” OR “Auto-regressive”) AND (“Computer-Aided Design” OR “Product Design” OR “Object Design” OR “Design Sketch” OR “Design Intent”) AND (“Product Development” OR “Product Engineering” OR “Early Design Phasis”))

  • String 4: ALL(“Deepcad” OR “deep cad”)

Figure 2. Literature search process

3.4. Reduction methodology

The titles and abstracts of all articles were reviewed, and their relevance was evaluated based on the defined content scope and research questions. Ultimately, 47 articles were identified as the most relevant; these were then reviewed in full, and the results of the selected literature were compiled for this review. The bibliography can be viewed at the following link: Literature List. From the initial 47 articles, 27 papers were selected for the final table analysis in the discussion. Papers providing overviews of other studies in the field, or focusing solely on 2D idea generation without extending the framework to 3D shapes, were excluded to enable a comparison of the generative modeling frameworks based on characteristics such as representation methods, inputs, outputs, and design methods.

4. Literature results

A classification of all reviewed papers into their respective design areas can be found in Table 2.

Table 2. Classification of the reviewed papers to their respective design area

Design Generation

This area aims to bridge the gap between human creativity and the efficiency of computers in creating novel designs. A distinction can be made between random and targeted generation of CAD models. The former focuses on the automatic generation of novel 3D shapes for CAD software that have not yet appeared in the training dataset. In this category, Wu et al. (2021) published “DeepCAD”, one of the leading foundational architectures for random design generation. Their dataset (178,238 models) as well as their architecture serve as the basis for many other generative design models, such as “SkexGen” (Xu et al., 2022). In their work, they apply a transformer network architecture, commonly used in natural language processing, to the domain of CAD operation sequences. In this way, the generative model produces sequences of CAD operations similar to the way a CAD program works. The second area, targeted model generation, deals with the time-consuming process of brainstorming ideas and creating initial design concepts (Xingang Li et al., 2023). Traditionally, designers rely on their knowledge and experience to sketch and describe new ideas. Design creation therefore aims to automate this process with generative models that analyse existing designs to suggest entirely new concepts, enabling the exploration of the design space. Krahe et al. (2020) take a first step in this direction by introducing l-GAN, which takes not only the latent vector representation of a 3D object as input but user-specified features as well.

Design Reconstruction

Design reconstruction helps the designer to convert a 2D or 3D input into another representation. Considering the multiple ways to display an object (point clouds, voxels, meshes, etc.), it becomes clear that this is a significant research topic for saving time and optimising the design process. Analysing the collected literature shows that there are several approaches to tackling this challenge. For example, Hu et al. (2023) convert 2D line drawings from three orthographic views into 3D CAD models with the help of a transformer-based model. The authors argue that 2D engineering drawings are commonly used by designers to realise, update, and share their ideas, especially during the initial design stages. Furthermore, S. Zhang et al. (2024) developed a transformer-based network that transforms a B-rep model into a sequence of editable, parametrised feature-based modeling operations.

Design Retrieval

Design retrieval methods support designers in finding already existing designs that are similar to newly created ones. Often, products are developed in generations: parts of the product are updated, while other parts that have proven to be good solutions are left largely unaltered. Thus, only fractions of a new product are newly developed parts, and one can build upon existing solutions and knowledge. To avoid the excess work of constructing each part from scratch in a CAD program, designers can reuse and alter existing 3D CAD objects to fit the updated conditions, make better use of existing knowledge, and save time and, ultimately, costs during the product development phase (Krahe et al., 2022). Most CAD retrieval methods consist of two steps. First, each 3D CAD model is represented by a descriptor, which can be based on geometric shapes (model-based) or object information (semantics-based). Second, the input query descriptor is compared to the database model descriptors to extract the objects that best match the input query (C. Zhang & Zhou, 2019). One example is the work of Krahe et al. (2022), which transforms a CAD model into a point cloud from which an autoencoder extracts the object's features. In the next step, the extracted latent representation, capturing the essential features, is compared to the latent vectors of other objects. Compared to traditional attribute-based retrieval, retrieval through latent vectors provides results that consider more detailed features of the object.
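The two-step retrieval scheme can be sketched as follows. The model names and the three-dimensional descriptors below are invented for illustration; a real system would use latent vectors produced by a trained encoder:

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two descriptor vectors, ignoring magnitude."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def retrieve(query_vec, database, top_k=2):
    """Step 2: rank database models by descriptor similarity to the query."""
    scores = {name: cosine_similarity(query_vec, vec)
              for name, vec in database.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Step 1 (assumed done): each CAD model is encoded into a descriptor.
# These hypothetical latent vectors stand in for autoencoder outputs.
database = {
    "bracket_v1": np.array([0.9, 0.1, 0.0]),
    "bracket_v2": np.array([0.8, 0.2, 0.1]),
    "gear_a":     np.array([0.0, 0.1, 0.9]),
}
query = np.array([0.85, 0.15, 0.05])   # descriptor of the new design
matches = retrieve(query, database)    # the two bracket variants rank first
```

With higher-dimensional learned descriptors, the same comparison captures far more geometric detail than matching on a handful of manually maintained attributes.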

5. Analysis of the results

5.1. Network architecture

For 3D shape synthesis, frameworks often rely on encoder-decoder architectures, GANs, autoencoders (AEs) and transformers.

GANs

It was observed that GANs are often used in design generation methods because of their ability to generate novel objects as latent vector representations (e.g. Wu et al. (2021)). The outputs of GANs are random (e.g. in Wu et al. (2021), Wu and Zheng (2022), and S. Zhang et al. (2024)), which is why conditional GANs, introduced in their framework by Krahe et al. (2020), have the advantage of exploring the design space while also controlling the output through feature specifications. However, Chen and Ahmed (2020) state that learning in GANs can be difficult, since they are rather unstable to train and often suffer from mode collapse, in which the generator starts producing samples from only a few modes and misses other categories. Therefore, networks like PaDGAN (Chen & Ahmed, 2020) specifically address the mode collapse problem of GANs to generate more diverse solutions.

Autoencoder

Besides GANs, autoencoders also prove to be a good alternative and are used for all three areas: design generation, retrieval, and reconstruction. Their usage is versatile, because autoencoder networks can convert different kinds of 3D input data into compact latent representations for vector visualisation and clustering in design retrieval, as discussed by Krahe et al. (2022); generate new data through interpolation of vectors in the latent space for design generation, as done by Saha et al. (2020) and Xueyang Li et al. (2023); or reconstruct the original CAD model by decoding the quantised code, as implemented by Xu et al. (2023). VAEs, as a subcategory of AEs, are used mainly for design generation purposes because of the probabilistic distribution of their latent space and the ability to interpolate new latent vectors.
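The latent-space interpolation used for design generation can be sketched in a few lines; the latent codes below are invented toy vectors, and in practice each interpolated vector would be passed through the trained decoder to obtain a blended 3D shape:

```python
import numpy as np

def interpolate(z_a, z_b, steps=5):
    """Linear interpolation between two latent vectors; decoding each
    intermediate vector yields a blend between the two source designs."""
    ts = np.linspace(0.0, 1.0, steps)
    return [(1 - t) * z_a + t * z_b for t in ts]

# Hypothetical latent codes of two encoded CAD models.
z_design_a = np.array([1.0, 0.0, 2.0])
z_design_b = np.array([3.0, 4.0, 0.0])
path = interpolate(z_design_a, z_design_b, steps=5)
midpoint = path[2]          # halfway between the two designs in latent space
```

Whether such interpolated codes decode to plausible shapes depends on how smoothly the autoencoder has organised its latent space, which is one reason VAEs, with their regularised latent distribution, are favoured for generation.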

CNNs and Transformers

CNNs are mostly used for design retrieval methods because their architecture achieves high accuracy for image recognition with relatively low effort. They are therefore suitable for view-based 3D CAD model retrieval frameworks like that of C. Zhang and Zhou (2019). Many of the surveyed frameworks are also based on transformers (Vaswani et al., 2017) because of their self-attention mechanisms, which allow transformers to propagate information about the current state to a next-step prediction module (Seff et al., 2021) and to weigh the importance of different input parts dynamically. This mechanism helps to capture dependencies within the data more effectively by considering the entire input context. Transformer-based models were initially developed for natural language processing, but because of their ability to handle sequential data, they can also be used for sequence generation tasks (e.g. Xu et al. (2022), Wu et al. (2021), Para et al. (2021)).

5.2. Datasets

Out of the 27 reviewed papers, 17 did not specifically mention which dataset they used for training. Analysing the 10 papers that did, the most commonly used datasets for training and testing the generative models are ShapeNet (Chang et al., 2015), DeepCAD (Wu et al., 2021), Fusion360 (Willis et al., 2021), and SketchGraphs (Seff et al., 2020). Figure 3 depicts how often each dataset was used, taking into consideration that some papers used more than one dataset. ShapeNet is a frequently used 3D object dataset developed by researchers from Stanford University, Princeton University, and the Toyota Technological Institute at Chicago. It consists of over 3 million models in 3,135 classes, comprising 3D shapes, 3D CAD models and meshes. Because of its size, the dataset has several subsets, such as ShapeNetCore, ShapeNetSem and ShapeNet Parts. In our analysis, ShapeNet is used especially for training and validating design generation frameworks. In contrast to ShapeNet, the other three datasets are categorised as CAD-based datasets. The DeepCAD dataset was newly created by Wu et al. (2021) for training their framework. It is composed of 178,238 CAD models and their CAD construction sequences and is based on the ABC dataset (Koch et al., 2019) and Onshape's CAD repository. Fusion360 is a similar dataset but, compared to DeepCAD, much smaller, with around 8000 CAD designs. The SketchGraphs and Fusion360 datasets also include constraints defining the relationships between geometric elements, but only SketchGraphs introduces a framework for generative modeling, providing a structured approach to creating new CAD models from scratch (Para et al., 2021). DeepCAD and SketchGraphs are mostly used for design generation and design retrieval frameworks.

Figure 3. Frequency of dataset utilization in the analyzed papers (10 Papers)

5.3. Representation methods

The training data as well as the input and output data are presented in different formats. We examined how geometries are represented so that they can be further processed in geometric deep learning. In our analysis, the networks often use more than one representation method, and the representation methods of the input and output often differ. Figure 4 shows the distribution of the representation methods for each design method, as well as the overall distribution combining the three design method categories. Object representation as a 3D CAD model occurred most often, followed by point clouds, sketches, meshes, graphs, and voxels. Meshes, point clouds, and voxel grids in particular are popular representation methods in generative modeling because, compared to CAD models, they have a lower fidelity, with fewer geometric details and less structural information due to their coarse resolution. Point clouds are popular because they can be derived from various 3D formats and are a universal representation of 3D objects. They are flexible, memory-efficient, and compatible with frameworks like AEs, VAEs, and GANs (Saha et al., 2020). Nevertheless, the points of a point cloud are not sorted in any specific order, so the convolutions of CNNs cannot be applied to them directly. Additionally, point clouds only capture the overall shape and cannot portray the geometric details and topology of an object, making it challenging to convert point clouds to meshes.

Figure 4. Distribution of representation methods in the design categories

Voxels approximate objects through cube-shaped grids, as done by C. Zhang and Zhou (2019). Voxel structures are naturally adapted to CNN models with three-dimensional convolutional kernels (Krahe et al., 2020). Compared to point clouds, voxels can be converted to mesh shapes for better visualisation, but they may have a poor resolution (Xingang Li et al., 2023). Meshes are favoured when the topology of an object matters, due to their high visual quality; they can also be edited and are widely accepted in computer graphics. Graphs capture the geometric relationships within an object and are therefore also used for design retrieval by comparing different graphs to one another, as done by Bickel et al. (2023). In conclusion, choosing the proper representation method is challenging and depends on the chosen network, the available datasets for training, and the desired output quality and adaptability. Many papers did not mention a specific CAD software for their frameworks. Among the remaining 11 papers, the frequently mentioned CAD software packages are commonly used ones such as SolidWorks (2x), AutoCAD (2x), Catia (2x) and Creo (2x). NX (1x) and the cloud-based packages Onshape (1x) and Salome (1x) were also mentioned, serving as platforms for dataset creation.
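As a minimal sketch of one such conversion between representations (not taken from any reviewed paper), a point set can be voxelised into a cube-shaped occupancy grid, i.e. the dense 3D input that convolutional kernels can slide over directly; the toy diagonal point cloud is invented for demonstration:

```python
import numpy as np

def voxelize(points, resolution=8):
    """Approximate a point set by a cube-shaped occupancy grid.

    Returns a (res, res, res) boolean array: True where at least one
    point falls into the corresponding cell. Detail finer than one
    cell is lost, illustrating the coarse resolution of voxel grids.
    """
    points = np.asarray(points, dtype=float)
    lo, hi = points.min(axis=0), points.max(axis=0)
    scale = (hi - lo).max() or 1.0            # avoid division by zero
    idx = ((points - lo) / scale * (resolution - 1)).round().astype(int)
    grid = np.zeros((resolution,) * 3, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

# A toy point cloud sampled along a diagonal edge.
points = [[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 3]]
grid = voxelize(points, resolution=4)   # occupies the grid's main diagonal
```

Raising `resolution` recovers more detail at a cubically growing memory cost, which is the trade-off behind the "poor resolution" remark above.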

6. Conclusion and outlook

In this work, a selection of papers on generative modeling of 3D shapes, with a focus on 3D CAD models, is reviewed and evaluated on several characteristics. Out of the initial 47 articles, 27 papers were chosen for the final table analysis in the discussion. Papers providing overviews of other studies in this field, or papers solely focusing on 2D idea generation without extending the framework to 3D shapes, were filtered out so that a variety of generative modeling frameworks could be compared on characteristics like representation methods, inputs, outputs, and design methods. For future research, it would be worth conducting a similar literature search for the category of ‘design modification’.

The majority of the papers analysed focus on design generation, followed by design retrieval and design reconstruction, as can be seen in Table 2. Exploring novel design concepts and creating 3D shapes seem to be the focal point of the current state of the art in CAD product development. Most research in this area leverages GANs (in combination with AEs) and interpolation of latent vectors for creating new designs. In design reconstruction, encoder-decoder structures are clearly favoured, but frameworks using model trees (Camba et al., 2022; Plumed et al., 2022) have also been developed. Design retrieval models are mostly built with CNN architectures, where graphs or vectors are compared to each other to evaluate their similarity.

A frequently encountered limitation of the frameworks is the quality and sparsity of suitable datasets. The success of AI applications relies heavily on the training quality, the provision of comprehensive and correct data, and the size of the dataset, since most AI models are very data-hungry and require millions of training examples; models that do not depend on datasets (e.g. Wu and Zheng (2022)) are relatively rare. The data therefore has to meet the development targets (Kreis et al., 2020). The challenge is that training datasets often have to be of a specific object category, such as chairs, cars, or mechanical parts. Collecting large sets of the same class is not easy, since 3D shapes, CAD models, or scans require manual effort, which is time-consuming and expensive. Depending on the dataset, the learned information will be limited to the scope of the dataset and might not be applicable to other object classes or application domains (Wu & Zheng, 2022). Thus, diversity is important to avoid bias in the results. Even though datasets cover more and more design domains, data representations like graphs are still underrepresented, and even where such datasets exist, they are not necessarily publicly available and free to use (Regenwetter et al., 2022). To summarise, research in this area is just picking up speed. To make progress, it is absolutely necessary to process or generate more data for training.

References

Abramowski, S., & Müller, H. (1991). Geometrisches Modellieren.
Agoston, M. K. (2005). Computer Graphics and Geometric Modelling: Implementation & Algorithms. https://doi.org/10.1007/b138805
Bickel, S., Goetz, S., & Wartzack, S. (2023). From sketches to graphs: A deep learning based method for detection and contextualisation of principle sketches in the early phase of product development. Proceedings of the Design Society, 3, 1975–1984. https://doi.org/10.1017/pds.2023.198
Bishop, C. M., & Bishop, H. (2024). Deep Learning: Foundations and Concepts. Springer.
Bonitz, P. (2009). Freiformflächen in der rechnerunterstützten Karosseriekonstruktion und im Industriedesign: Grundlagen und Anwendungen. https://doi.org/10.1007/978-3-540-79440-0
Camba, J. D., Company, P., & Naya, F. (2022). Sketch-based modeling in mechanical engineering design: Current status and opportunities. Computer-Aided Design, 150, 103283. https://doi.org/10.1016/j.cad.2022.103283
Camba, J. D., Contero, M., & Company, P. (2016). Parametric CAD modeling: An analysis of strategies for design reusability. Computer-Aided Design, 74, 18–31. https://doi.org/10.1016/j.cad.2016.01.003
Chang, A., Funkhouser, T., Guibas, L., Hanrahan, P., Huang, Q., Li, Z., Savarese, S., Savva, M., Song, S., Su, H., Xiao, J., Yi, L., & Yu, F. (2015). ShapeNet: An information-rich 3D model repository. arXiv:1512.03012.
Chen, W., & Ahmed, F. (2020). PaDGAN: A generative adversarial network for performance augmented diverse designs. http://arxiv.org/pdf/2002.11304
Herzog, V., & Suwelack, S. (2023). Bridging the gap between geometry and user intent: Retrieval of CAD models via regions of interest. Computer-Aided Design, 163, 103573. https://doi.org/10.1016/j.cad.2023.103573
Hirschle, J. (2022). Deep Natural Language Processing: Einstieg in Word Embedding, Sequence-to-Sequence-Modelle und Transformer mit Python. Hanser. https://doi.org/10.3139/9783446473904
Hu, W., Zheng, J., Zhang, Z., Yuan, X., Yin, J., & Zhou, Z. (2023). PlankAssembly: Robust 3D reconstruction from three orthographic views with learnt shape programs. http://arxiv.org/pdf/2308.05744
Kanbach, D. K., Heiduk, L., Blueher, G., Schreiter, M., & Lahmann, A. (2024). The GenAI is out of the bottle: Generative artificial intelligence from a business model innovation perspective. Review of Managerial Science, 18(4), 1189–1220. https://doi.org/10.1007/s11846-023-00696-z
Koch, S., Matveev, A., Jiang, Z., Williams, F., Artemov, A., Burnaev, E., Alexa, M., Zorin, D., & Panozzo, D. (2019). ABC: A big CAD model dataset for geometric deep learning. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Krahe, C., Bräunche, A., Jacob, A., Stricker, N., & Lanza, G. (2020). Deep learning for automated product design. Procedia CIRP, 91, 3–8. https://doi.org/10.1016/j.procir.2020.01.135
Krahe, C., Kalaidov, M., Doellken, M., Gwosch, T., Kuhnle, A., Lanza, G., & Matthiesen, S. (2021). AI-based knowledge extraction for automatic design proposals using design-related patterns. Procedia CIRP, 100, 397–402. https://doi.org/10.1016/j.procir.2021.05.093
Krahe, C., Marinov, M., Schmutz, T., Hermann, Y., Bonny, M., May, M., & Lanza, G. (2022). AI based geometric similarity search supporting component reuse in engineering design. Procedia CIRP, 109, 275–280. https://doi.org/10.1016/j.procir.2022.05.249
Kreis, A., Hirz, M., & Rossbacher, P. (2020). CAD-automation in automotive development: Potentials, limits and challenges. Computer-Aided Design and Applications, 18(4), 849–863. https://doi.org/10.14733/cadaps.2021.849-863
Kretzschmar, & Damman. (2024). Evaluating the current role of generative AI in engineering development and design: A systematic review. https://doi.org/10.35199/norddesign2024.3
Li, X. [Xingang], Wang, Y., & Sha, Z. (2023). Deep learning methods of cross-modal tasks for conceptual design of product shapes: A review. Journal of Mechanical Design, 145(4), 041401. https://doi.org/10.1115/1.4056436
Li, X. [Xueyang], Xu, M., & Zhou, X. (2023). Twins-Mix: Self mixing in latent space for reasonable data augmentation of 3D computer-aided design generative modeling. In 2023 IEEE International Conference on Multimedia and Expo (ICME).
Nobari, A. H., Chen, W., & Ahmed, F. (2021). Range-GAN: Range-constrained generative adversarial network for conditioned design synthesis. http://arxiv.org/pdf/2103.06230
Para, W. R., Bhat, S. F., Guerrero, P., Kelly, T., Guibas, L., & Wonka, P. (2021). SketchGen: Generating constrained CAD sketches. Advances in Neural Information Processing Systems, 34.
Plumed, R., Varley, P. A., Company, P., & Martin, R. (2022). Extracting datums to reconstruct CSG models from 2D engineering sketches of polyhedral shapes. Computers & Graphics, 102, 349–359. https://doi.org/10.1016/j.cag.2021.10.013
Regenwetter, L., Nobari, A. H., & Ahmed, F. (2022). Deep generative models in engineering design: A review. Journal of Mechanical Design, 144(7), 071704. https://doi.org/10.1115/1.4053859
Saha, S., Menzel, S., Minku, L. L., Yao, X., Sendhoff, B., & Wollstadt, P. (2020). Quantifying the generative capabilities of variational autoencoders for 3D car point clouds. In 2020 IEEE Symposium Series on Computational Intelligence (SSCI).
Seff, A., Ovadia, Y., Zhou, W., & Adams, R. P. (2020). SketchGraphs: A large-scale dataset for modeling relational geometry in computer-aided design. http://arxiv.org/pdf/2007.08506
Seff, A., Zhou, W., Richardson, N., & Adams, R. P. (2021). Vitruvion: A generative model of parametric CAD sketches. ICLR. http://arxiv.org/pdf/2109.14124
Shabayek, A., Aouada, D., Cherenkova, K., & Gusev, G. (2020). Towards automatic CAD modeling from 3D scan sketch based representation. In Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (pp. 392–398). SCITEPRESS. https://doi.org/10.5220/0009174903920398
Shabestari, S. S., Herzog, M., & Bender, B. (2019). A survey on the applications of machine learning in the early phases of product development. Proceedings of the Design Society: International Conference on Engineering Design, 1(1), 2437–2446. https://doi.org/10.1017/dsi.2019.250
Skarka, W., & Kadzielawa, D. (2017). Automation of designing car safety belts. In C.-H. Chen, A. J. C. Trappey, M. Peruzzini, J. Stjepandic, & N. Wognum (Eds.), Transdisciplinary Engineering: A Paradigm Shift - Proceedings of the 24th ISPE Inc. International Conference on Transdisciplinary Engineering (pp. 1041–1048). IOS Press. https://doi.org/10.3233/978-1-61499-779-5-1041
Starly, B., Angrish, A., & Bharadwaj, A. (2020). MVCNN++: CAD model shape classification and retrieval using multi-view convolutional neural networks. Journal of Computing and Information Science in Engineering, 21, 1–27. https://doi.org/10.1115/1.4047486
Steininger, S., Camci, H., & Fottner, J. (2024). Current state, potentials and challenges for the use of artificial intelligence in the early phase of product development: A survey. In 2024 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM) (pp. 551–555). IEEE. https://doi.org/10.1109/IEEM62345.2024.10857061
Vajna, S. (2022). Integrated Design Engineering: Interdisziplinäre und ganzheitliche Produktentwicklung. Springer.
Vajna, S., Weber, C., Zeman, K., Hehenberger, P., Gerhard, D., & Wartzack, S. (2018). CAx für Ingenieure: Eine praxisbezogene Einführung (3rd ed.). Springer Vieweg. https://doi.org/10.1007/978-3-662-54624-6
Vasudevan, S., Pulari, S. R., & Vasudevan, S. (2021). Autoencoders. In Deep Learning (pp. 185–207). https://doi.org/10.1201/9781003185635-8
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017). Attention is all you need. http://arxiv.org/pdf/1706.03762
Willis, K., Pu, Y., Luo, J., Du, T., Lambourne, J., Solar-Lezama, A., & Matusik, W. (2021). Fusion 360 Gallery: A dataset and environment for programmatic CAD construction from human design sequences. ACM Transactions on Graphics, 40(4).
Wu, R., Xiao, C., & Zheng, C. (2021). DeepCAD: A deep generative network for computer-aided design models. http://arxiv.org/pdf/2105.09492
Wu, R., & Zheng, C. (2022). Learning to generate 3D shapes from a single example. ACM Transactions on Graphics, 41(6), 1–19. https://doi.org/10.1145/3550454.3555480
Xu, X., Jayaraman, P. K., Lambourne, J. G., Willis, K. D. D., & Furukawa, Y. (2023). Hierarchical neural coding for controllable CAD model generation. http://arxiv.org/pdf/2307.00149
Xu, X., Willis, K. D. D., Lambourne, J. G., Cheng, C.-Y., Jayaraman, P. K., & Furukawa, Y. (2022). SkexGen: Autoregressive generation of CAD construction sequences with disentangled codebooks. http://arxiv.org/pdf/2207.04632
Yang, Y., & Pan, H. (2022). Discovering design concepts for CAD sketches. http://arxiv.org/pdf/2210.14451
Zhang, C., & Zhou, G. (2019). A view-based 3D CAD model reuse framework enabling product lifecycle reuse. Advances in Engineering Software, 127, 82–89. https://doi.org/10.1016/j.advengsoft.2018.09.001
Zhang, S., Guan, Z., Jiang, H., Ning, T., Wang, X., & Tan, P. (2024). Brep2Seq: A dataset and hierarchical deep learning network for reconstruction and generation of computer-aided design models. Journal of Computational Design and Engineering, 11(1), 110–134. https://doi.org/10.1093/jcde/qwae005
Figure 1. Application categories of GenAI in CAD

Table 1. Classification of the reviewed papers to their respective design methods

Figure 2. Literature search process

Table 2. Classification of the reviewed papers to their respective design area

Figure 3. Frequency of dataset utilization in the analyzed papers (10 Papers)

Figure 4. Distribution of representation methods in the design categories