1. Introduction
This paper presents a new way to ensure ontological harmony amongst information used in design. The paper begins with a summary of the ongoing developments in the state of the art that make this necessary and possible; proceeds to outline a unique way to approach the problem; and concludes by laying out the associated benefits and challenges.
The research is at the point where preliminary results are being generated and the technical approach is being refined. The novel approach has arisen out of the team’s collective 100+ years working on the systems engineering information integration challenges intrinsic to complex systems development, and it is this field of knowledge and practice that we aim to benefit through the research.
1.1. Designing modern systems
Contemporary design teams operate under conditions of unprecedented technological, economic, societal and environmental challenge (ElMaraghy et al., 2012). To succeed, they need ways and means of working together that are matched to the complexity of the design problems they face, or else face debilitating difficulty.
The systems that today’s design teams produce span an ever-increasing range of operational settings, constituent elements and disciplines of design. They are increasingly software-intensive, with key functions that require a continuous flow of digital information into and around them. Cyber threat and software-defined function add to the design challenge, with opportunities and risks in equal abundance. The design teams involved are increasingly distributed, both disciplinarily and geographically, and need to combine niche skills and deep expertise to succeed.
Under such circumstances, efficacious knowledge representations are more indispensable than ever if design teams are to assure that such systems operate safely, effectively and efficiently (Chandrasegaran et al., 2013). Managing information of such breadth and depth for maximum advantage is central to effective design teaming and presents significant technical difficulty in its own right.
1.2. The role of ontology
An ontology is usefully defined as “an explicit specification of a conceptualization” (Gruber, 1993). Conceptual thinking is integral to design; designers objectify aspects of the design in order to develop solutions that satisfy requirements. Multidisciplinary design involves many conceptual frames of the product (Ekaputra et al., 2017), and complex systems have many interconnected parts that need to be represented in more than one of these viewpoints.
Design failures occur when the resulting conceptual descriptions disagree with one another in some consequential way or are proven invalid on contact with phenomenological reality. Such failures can have unforeseen and damaging repercussions during system realisation or operation, which can be particularly severe if safety-critical properties or high-cost decisions are involved. Some such failures occur because the causal factual disagreements are beyond the state of the art of present modelling practice to establish ahead of time (Lancaster, 2005), but many others could be avoided and are missed due to uncaptured, unpropagated or unclear meaning.
Mitigating such meaning loss requires ontological comprehensibility and compatibility across design information. It follows therefore that assuring global ontological fitness and harmony is germane to a design team’s success. In the past, when explicit knowledge representations and computational models were few and generally isolated to single disciplines or concerns, global ontology assurance was done by a process of socially mediated design trading (Fallan, 2010). However, trends in systems and technologies make the case for dedicated and technically delivered ontology management hard to ignore.
2. Relevant trends in human systems
Several trends substantiate the relevance of dedicated ontology management:
- AI Design Tools: Continuing developments - notably in Large Language Models (LLMs) and generative algorithms - are assisting with requirements interpretation, concept generation, design optimisation and auto-code generation (Khanolkar et al., 2023). However, these tools can also produce contextually inappropriate outputs and make unjustified or invalid leaps in logic and reasoning (Ayyamperumal & Ge, 2024). Making the (often implicit) ontologies visible and examining how they marry with others in the design knowledge space is an essential step to exploiting the significant accelerator of AI in design.
- Computing: Continued growth and availability of computing resources - including quantum computing becoming more widely accessible and applicable - allow design teams to engage in extensive computational simulation and analysis. However, conceptually rich data that traverses many disciplines of concern and modes of computation is difficult to integrate and interpret in automated ways, in part due to ontological gaps and disconnects (Ekaputra et al., 2017). A robust and agile method for achieving ontological interoperability would be a significant enabler to further deepening and broadening the application of computing in design.
- Digital Engineering: Digital tooling and methodologies, such as DevSecOps and Model-Based Systems Engineering (MBSE), add value in the design process by integrating cross-disciplinary perspectives and streamlining the flow of design representation across different modalities and environments (Mordecai & Dori, 2016). Explicit control of the ontologies involved helps to ensure that meaning or intent is not lost or corrupted as data flows along these digital conduits.
2.1. AI safety and reliability
Any unmanaged term-space aggregated from descriptions of different origins, without effort to ontologically harmonise and ground, will contain some degree of over-matching on terms, presenting as context slippage (Ayyamperumal & Ge, 2024). Adequate context intersection describes how LLMs function to add value, and while the problem of inadequate contextual ‘Venn-overlap’ recedes with scale, it has not been engineered out altogether (Denning & Arquilla, 2022). If design teams are to be able to confirm model validity as AI moves further into super-human intelligence, a method and toolkit for global context transparency and assurance is necessary, and perhaps a primary way to approach the general problem of AI safety.
As design agents (human and synthetic) become increasingly specialised and utilise sophisticated representations of their spheres of concern, the risk of ‘missing the whole for the parts’ is persistent, presenting as unconstructive knowledge partitioning and a lack of critical cross-contextual awareness (Figure 1).

Figure 1. Is context assurance the key to ‘eating the elephant’ of AI safety? (© Illustration by Christophe Vorlet with permission of Estate)
3. Ontology management state of the art
The field of ontology management has become largely synonymous with semantically rigorous methodologies to construct machine-reasonable knowledge representations (Khan et al., 2016). A standard protocol for doing so is the extension of the Resource Description Framework (RDF) language into the Web Ontology Language (OWL/OWL2) (Tudorache, 2020).
The Semantic Web has been around since 2004, and applications in science and engineering have increased over the past 10-15 years, including for example: the invention of OpenCAESAR at NASA JPL (Elaasar et al., 2023); applications in Industry 4.0 (Kumar et al., 2019); and the Advancing Clinico-Genomic Trials on Cancer ontology (Smith & Brochhausen, 2008).
This emerging field of ontologies makes a useful and important contribution to managing term-spaces and making context explicit. Such ontologies allow design teams to map and prescribe both the terms and the relations that are allowable in a given description space, generating firmly enforced graphs that can be inferenced over reliably using protocols such as the Shapes Constraint Language (SHACL) and the SPARQL Protocol and RDF Query Language (SPARQL), as in OpenCAESAR. Such ontological methods furnish design teams with a larger volume of assured knowledge than the content of the atomic stated facts.
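As a minimal illustration of that last point - a plain-Python sketch rather than actual RDF tooling, with term names invented for the example - the transitive closure below shows how three atomic stated facts, plus one transitivity rule of the kind a SPARQL 1.1 property path (e.g. `hasPart+`) or an OWL transitive property axiom would encode, yield strictly more assured knowledge than was explicitly stated:

```python
# Hypothetical stated facts: a small part-containment hierarchy.
facts = {
    ("Aircraft", "hasPart", "Wing"),
    ("Wing", "hasPart", "Aileron"),
    ("Aileron", "hasPart", "Actuator"),
}

def infer_transitive(triples, predicate):
    """Close a predicate transitively, as a SPARQL 1.1 property path
    or an OWL transitive-property axiom would, until a fixed point."""
    closure = set(triples)
    changed = True
    while changed:
        changed = False
        derived = {
            (a, predicate, d)
            for (a, p1, b) in closure if p1 == predicate
            for (c, p2, d) in closure if p2 == predicate and b == c
        }
        new = derived - closure
        if new:
            closure |= new
            changed = True
    return closure

closed = infer_transitive(facts, "hasPart")
# Three stated facts become six assured ones, e.g. Aircraft hasPart Actuator
print(len(closed))  # 6
```

The gain grows combinatorially with graph depth, which is precisely why inference over enforced graphs is valuable - and why the ontological substrate it rests on must be sound.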
However, there are burdens, hurdles and potential drawbacks to implementing such methods at scale that merit consideration by design teams setting out to apply them:
- If a design team requires conceptual diversity and mutability (affording designers expressiveness quickly), then these methods applied in isolation can create obstacles;
- They can be costly in terms of time and money, as well as inserting new specialist skilled work and technologies on the critical path for progressing the integrated system design;
- Design software development companies have little incentive to grant design teams powers of ontological independence, which creates a burden in both establishing and then maintaining interoperability with the Commercial-off-the-Shelf (COTS) tools in use by the design team; and
- Once implemented, a formal and technologically instantiated ontology may constrain future knowledge extension, as it institutes foundational archetypal entities and relational constellations that undergird what is possible to state via the graph.
This critique is not intended to deprecate or discount the power and sophistication of formal ontological methods, which are unmatched in providing robust and specific encapsulation of knowledge domains. Additionally, methods like OpenCAESAR provide design teams with uniquely powerful technological bases on which to build truly open ‘hub-and-spoke’ integrated digital engineering toolchains (Elaasar et al., 2023). The state of the art for creating exhaustively semantically unified ontologies is powerful and should be applied where the benefits surpass the costs and the design thinking style is amenable to highly formal and decidable methods. However, ontology management should strike a balance that best enables the totality of thinking styles involved in design. Additionally, the ubiquitous challenge of doing ‘exactly enough’ rather than ‘everything possible’ to assure and inject reliability into the collaborative design of complex systems is a prime motivator for the approach and research.
An additional complicator, and opportunity, in progressing ontology management is the ever-advancing capability of AI. The continued unreasonable effectiveness of scaling neural networks to resolve capability gaps once thought insurmountable does not guarantee more ease in knowing the extent and limitations of the inseparable ontological sub-structure; indeed, it may make this harder. Additionally, as AI failures stemming from inadequate context interlock become rarer and more obscure in how they present, while the ubiquity and utility of AI tooling continue to proliferate, the risks to the integrity of engineering actually increase, as design teams tend to become complacent and dependent (Shukla et al., 2025). It is a false dichotomy to conclude we must either:
- a) Exhaustively and rigorously ontologically specify the global description landscape (whether by human hands at tools, or by the inevitable introduction of AI-driven automation to formal ontology approaches); or
- b) Divorce ourselves from having any responsibility or sufficiently penetrative means to inspect for ontological robustness, by outsourcing all unwieldy communication gaps to inferential black-box LLMs to bridge.
3.1. The unique contribution to ontology management
Ensuring that the ontologies implicated in design work coexist harmoniously is different to ensuring comprehensive ontological correspondence. The central proposition behind the research is that there is a need to be able to assure global ontological coherency incrementally, led by value cases as they condense out of design activity. The current state of the art is unable to provide this capability, which we posit is key for modern digital engineering environments to enable the natural dynamics within design teaming and cognition.
The significant and powerful array of ontologically exacting methodologies and technologies is not best suited to this task precisely because of its strengths. If we approach the nature and role of description within complex collaborative human endeavours from a different perspective, a new approach to ontology management becomes possible, giving design teams unique capacities in how they compose their digital information environs.
4. A new approach to ontology management
The approach presented here is built upon and co-evolved with a corresponding philosophical position towards description that is less commonly found in science and engineering disciplines.
4.1. The information philosophy
If all description is recognised as partial, imperfect and constantly morphing, the overarching goal in the management of the information estate is best defined as enabling sufficient descriptive acuity to attain project goals. Seen this way, the state of ontological harmony between constituent descriptive statements takes on a subtly different character to that of the unification of all description with concrete ontological rigour.
Description formalisation and integration inject incontrovertibility; however, as the result is still informational, it remains a limited, reductive grasp of reality. In light of this inescapability, description may always demand extension, refinement or refactoring in order for a design team to reach their latest goals. Furthermore, it is not a given that future description needs will be most prudently met from where the previous endeavour to formalise and integrate stopped, both in terms of the ontological structure and its instantiation in data and tooling.
The prevailing aim of an ontology management team is to equip the design team with powers of description that enable them to represent all matters critical to achieving design outcomes sufficiently, precisely and consistently.
4.2. Value case for ontology investment
Each area of knowledge sought can be substantiated in terms of its value for pursuing associated project goals, creating a list of description needs. This helps the description project avoid the two extremes of over- and under-treating the problem.
Specifically, it helps ensure the overall description effort is not seduced into the classic engineering waste of over-processing design representations (sometimes colloquially called ‘gilding the lily’). Going the other way, it also helps teams quantify the value and justify the cost of the description project. Tool integration is very commonly an underinvested area in projects; the value case can be obscure to those apportioning budget as it lies in the interactions between classic design discipline budgetary areas. NASA JPL’s OpenCAESAR project is proof of the value of adding the specific discipline of ontology management to the systems engineering integration effort (Gregory & Salado, 2024).
Description pilotage is a continuous function of all sufficiently complex systems development projects, and one that is best organised to provide the degree of formalisation and descriptive power that is no more or less than sufficient to pursue the known design goals at that time.
4.3. A technical basis for the new approach to delivering ontological harmony
The approach has been implemented through an n-dimensional untyped graph data environment (Harrison et al., 2022), which is in its closed beta. This gives those overseeing the status of design description a space in which to parse all forms of representation in use and make sense of how this collectively exhibits harmony and disharmony:
- It provides the capacity to make changes at very low cost, to experiment with combination and enrichment to meet description needs without (necessarily) altering the source data;
- It affords the capacity to overlay multiple sense-making frames (such as different inhomogeneous object models) over the same raw net of terms, allowing the design team to pivot automatically to whatever conceptual lens is most relevant to the matter at hand; and
- It provides the ability to overlay archetypal patterns to act as scaffolds for semantic structure, to evaluate the completeness of the present description and to indicate the scope and structure of description that the design team may want to consider adopting.
In the approach, objects are always post-coordinated - i.e. emergent from patterns in the description, not structurally intrinsic (the raw terms are the only structure) - and are therefore always mutable (including ‘forgettable’). They are never unavoidably enforced in the opposite direction on the information author; however, an object frame can be used as the vehicle for eliciting clarification, for example when authors have been unclear or incomplete with their description in important ways. If appropriate - for example if the additional information or context is a one-off question and answer, or the value case for formalising the need is unclear without a sample - this enrichment can be done directly into the graph or via a lightweight Comma-Separated Values (CSV) ported pro forma. This avoids the need to immediately change COTS or proprietary information systems and/or core project data sets or models, while still capturing the progression of the collective description space in a controlled and shareable way.
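The principle of post-coordination can be sketched in a few lines of plain Python - a hedged illustration under wholly hypothetical term names, not the actual beta data environment. A term ‘is’ an instance of an object frame only because the pattern around it matches; the frame can be revised or forgotten without touching the raw graph:

```python
# The raw, untyped term graph: just terms and links, no intrinsic types.
edges = {
    ("pump-01", "has-attribute", "flow-rate"),
    ("pump-01", "has-attribute", "inlet-pressure"),
    ("valve-07", "has-attribute", "flow-rate"),
    ("report-12", "mentions", "pump-01"),
}

def frame_members(graph, required_attrs):
    """Post-coordinate an object frame: membership is emergent from a
    matching pattern in the description, never stored on the term."""
    members = {}
    for subject in {s for (s, _, _) in graph}:
        attrs = {o for (s, p, o) in graph
                 if s == subject and p == "has-attribute"}
        if required_attrs <= attrs:
            members[subject] = attrs
    return members

# A hypothetical 'FlowComponent' lens over the same raw terms; retiring
# this frame later changes nothing in the underlying graph.
flow_components = frame_members(edges, {"flow-rate"})
print(sorted(flow_components))  # ['pump-01', 'valve-07']
```

Note that `report-12` is parsed from the same graph but simply falls outside this lens, rather than being excluded by any enforced typology.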
These capabilities and techniques allow description harmonisation and advancement to be incremental and value-led. Design teams can grow, adapt or trim the graph to whatever degree satisfies the description goal. Equally, old frames can be retired or superseded as soon as that becomes prudent, while being kept for posterity if helpful. Achieving this dynamic and transparent balance of description progression and meaning consolidation is what we term “ontological harmonisation”, allowing teams to use lightweight conceptual structures when exploring low-maturity ideas and more formal frameworks when stability and assurance are paramount.
4.4. How the approach is executed
The approach blends the strengths of both flexible and formal methods, involving all the following activities that become possible with this unique mix of techniques:
1. Untyped n-dimensional term graph: Ontological harmonisation of the global description landscape begins with the initiation of the raw term-graph, expressing ideas and interrelationships without enforcing strict typologies. This ‘conceptual sandbox’ supports rapid brainstorming from the outset and goes on to enable fluid knowledge capture.
2. Contextual frames and ontology overlays: Teams can capture and/or establish specialised frames or object models as clarity or precision is needed. Different frames can coexist, each acting as a lens onto the underlying graph. For example, a safety-critical subsystem may be expressed with a formal ontology (this would not be hard-enforced on the graph, but achieved via structured input and attested by graph analytics), but exploratory subsystem concepts remain loosely defined, all while drawing from the exact same cluster of terms.
3. Selective formalisation: Expand formal ontology elements incrementally. As project maturity increases and (both design and associated description) requirements stabilise, the design team can invest in more rigorous semantic models. The cost of formalising concepts is justified wherever long-term semantic consistency and automated reasoning underpin essential project and/or product performances.
4. Iterative enrichment and pruning: Continuously refine the graph and its internalised ontological structure based on how it supports description needs and how design goals evolve. Retire outdated frames (such as object models) and enrich valued areas to maintain highest conceptual relevance and reduce information overload.
5. Stakeholder engagement and tool support: Engage design domain experts to ensure that conceptual frames are in synchrony with practical realities. Tooling automations can prompt users to clarify ambiguities and highlight inconsistencies. This feedback loop is part of maintaining ontological coherency as description needs evolve with project execution.
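Activities 2 and 3 above can be sketched as follows - again a hedged illustration under assumed names, not the production tooling. Two frames coexist as lenses over one raw term set; only the formalised, safety-critical frame carries a check, attested by analytics over the graph rather than hard-enforced on it:

```python
# Hypothetical raw terms drawn on by both frames.
terms = {
    ("fcc-A", "redundancy-level", "3"),
    ("fcc-A", "partOf", "flight-control"),
    ("concept-X", "partOf", "payload-bay"),
}

frames = {
    # A formalised frame: members must attest a redundancy constraint.
    "flight-control": {
        "select": lambda s, g: (s, "partOf", "flight-control") in g,
        "check": lambda s, g: any(
            p == "redundancy-level" and int(o) >= 2
            for (s2, p, o) in g if s2 == s),
    },
    # An exploratory frame: a lens only, no constraints enforced.
    "payload-concepts": {
        "select": lambda s, g: (s, "partOf", "payload-bay") in g,
        "check": lambda s, g: True,
    },
}

subjects = {s for (s, _, _) in terms}
results = {}
for name, frame in frames.items():
    members = sorted(s for s in subjects if frame["select"](s, terms))
    violations = [s for s in members if not frame["check"](s, terms)]
    results[name] = (members, violations)

print(results["flight-control"])  # (['fcc-A'], []) - constraint attested
```

Selective formalisation (activity 3) then amounts to adding or strengthening `check` functions only where the value case justifies it, leaving exploratory frames untouched.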
These novel and value-adding design team activities are made possible and complemented by other workflows and automations, such as data ingestion and normalisation pipelines. The overall approach is illustrated in a simplified form in Figure 2 below.

Figure 2. A simplified representation of the novel features of the approach
As indicated at the outset, the full technological and procedural detail of the approach is being refined through testing in the field and will be published in the future. The aim of this presentation of the research is to convey a complete picture of the approach for critical engagement - beyond technical ‘how to’ into motivations, justifications and limitations.
The paper will now turn to considering what advantages implementing the new approach may yield and known challenges with doing so, before going on to describe the plan to take it through to demonstrated effects and full technical repeatability.
5. Approach benefits and challenges
The new approach was composed to address a gap in knowledge and practice related to ontology management that was identified by assessing the state of the art across academia, technology and industry. To make this more relevant to researchers and design teams, the associated benefits and challenges are presented below.
5.1. Benefits
There are three main areas of complex systems design and engineering where it is anticipated the approach will bring significant benefits:
- c) High-integrity systems: Conceptual design can remain flexible and explore architecture options more thoroughly against whatever cogent definition can be extracted from the early descriptive models and conventions available, for example content from a User Requirement Document (URD). As certain subsystem concepts stabilise (e.g. a fault-tolerant flight control module), these portions of the knowledge base can be formalised to enable advanced reasoning, compliance checks and design assurance.
- d) Supply chain teaming: For design teams who are particularly dependent on supplier-provided items or sub-systems, the flexible term-graph allows rapid assimilation (and backwards compatibility) with other design teams’ ontologies. Equally, it is possible to enforce core ontologies to ensure consistent gathering and interpretation of critical parameters such as material specifications or maintenance requirements, or even to aid delivery of real-time operational analytics.
- e) Design tool integration: With the advantage of a flexible basis of integration (through ontological harmonisation rather than unification or domination), it becomes a more realistic proposition to bring together the often large number and diversity of tools used by design teams. The effort spent on description integration can be focused on treating only the specific statements and system aspects that are of value to translate, and this can be incrementally grown as and when need arises.
Whether in these areas of engineering design activity or elsewhere, the approach is expected to return benefits mainly by a handful of general mechanisms. These are extrapolated from how the approach is expected to function and are in various states of evaluation:
- Clarity and assurance: The approach provides conceptual consistency where it matters most, reducing errors, rework and misunderstandings.
∘ State of evaluation: Corollary concept expression and refinement conducted as proof of principle (supported by the term-graph) with several groups of 1-3 design team contributors.
- Scalability and adaptability: By layering and iterating conceptual structures, teams can adapt ontologies to rapidly changing requirements, technology insertions and mission shifts.
∘ State of evaluation: Demonstrated technical and methodological viability with representative datasets, validated as having utility with industrial project stakeholders.
- Creative freedom with rigour on demand: Designers gain a ‘best of both worlds’ scenario, retaining the freedom to explore innovative solutions while knowing they can formalise critical aspects as necessary.
∘ State of evaluation: Experimental engagements so far lack the longitude (duration through time) to evaluate conclusively.
- Cost-efficiency: Instead of uniformly enforcing rigid ontologies from the start, teams invest effort incrementally. This focused approach can reduce the total cost of the information estate, speed up early-phase design work and streamline verification activities later.
∘ State of evaluation: Early indications are that this functions as expected, though again the longitudinal studies necessary to ensure the savings are not just postponed costs are missing.
- AI Integration: A conceptually coherent knowledge space enables more reliable AI-driven automation. This can lead to faster decision-making cycles, improved model validation and more confidence in design outcomes.
∘ State of evaluation: This is yet to be proven in full-scale implementation with design teams, but small demonstrators by the researchers have proven some utility, and adjacent applications of graph technologies in the field of AI suggest our implementation is likely to produce similar acceleration.
5.2. Challenges
Several challenges associated with implementing the approach need addressing if design teams are to successfully realise and pursue ontological harmony in the way outlined herein. These challenges cluster around three main themes:
1. Architectural integration / sustainment: The task of designing, building and maintaining an integrated digital engineering environment in a modern complex systems development project already presents design teams with significant difficulty (Ciocoiu et al., 2001). One cause of this difficulty is creating and sustaining an architecture that ensures data management and tooling are mutually complementary and lend maximum advantage to the design team. We have developed multiple architectural integration options based on industrial digital environments encountered, but these have not been implemented at scale, and it is likely doing so will present continuous challenge.
2. Technology maturity / novelty: Our approach critically depends on a unique class of untyped multidimensional graph database. Our implementation (Inflexsion and Termscape, Yorkmetrics Ltd.) is in its beta development phase, and we are unaware of an alternative solution with commensurate capability. However, even the established ontology management technologies often use more specialised software, software languages and newer conventions of data representation that can present challenges for design teams to use and blend with their wider technology stack.
3. Ontology skills / familiarity: The field of ontology management is still relatively new and niche, which presents associated challenges with practical implementation, e.g. resourcing the technical delivery teams and training the wider team as necessary. Furthermore, few teams, even at the cutting edge of the practical implementation of ontologies, will welcome the new environment and style of treating ontologies that our approach necessitates, particularly when the value case for the semantically rigorous methodology was often hard-won in the first place and given that the benefits of doing so are not proven at the time of writing. Adoption of new tools and techniques has historically been relatively slow in engineering and design practice, something that the ongoing pace of digital disruption continues to problematise.
6. Conclusion and next steps
In an age of intensifying complexity and precarity amongst the myriad systems that humanity concerns itself with, the success of design teams is paramount if we are to deliver the effects we set out to achieve. The relevance of highly formalised, semantically rigorous and exhaustively ontologically unified knowledge graphs for complex safety-critical systems is high. However, as this increasingly describes a wide array of systems - often in ‘brownfield’ software-laden environments, with data-driven elements that frequently and chaotically change - it is important to remain proportionate and value-led in our methods, particularly as so many digital engineering transformation efforts fall short of their stated goals.
Further to this, the exploding gamut of LLM-driven ways to integrate and exploit information is seen by many to herald a nearby time when all such problems will dissolve, following many others that were once thought intractable but have since fallen to the unreasonable effectiveness of scale in AI. Our assertion is that choosing between these two paths is a false dichotomy, likely to waste some of the precious and always limited opportunity design teams get to strategically invest in our digitally-enabled knowledge systems.
As the performance, and perhaps even the entire operational modalities, that we ask of in-service systems continue to adapt to ongoing technology disruption, we will need to fittingly support the spectrum of cognitive capacities that our design teams deploy throughout the lifecycle.
Our incremental ontological harmonisation approach enables design teams to balance flexibility with formal rigour, aligning conceptual frameworks with project maturity and stakeholder needs. This approach helps teams navigate complexity, enhance digital engineering workflows and leverage AI tools more effectively.
It is proposed to complement rather than replace semantically rigorous methodologies, such as OpenCAESAR, as different styles of knowledge system best suit the different problems and opportunities associated with ontology management. However, we would suggest that without a single technologically-enabled and philosophically tolerant place to map and assess the collective harmony of the global description estate, design teams will find it harder to make balanced decisions and will likely extend whatever their current ontology management paradigm is to address new description needs.
6.1. Next up for the research
As described earlier, the research is ongoing with many untested propositions, unproven projections and (very likely) undiscovered hurdles on the path to full approach realisation. The aim of sharing now is to expose the research to critique as early as practical.
Research work planned for the immediate future focuses on:
- Empirical exploration: Conduct case studies to refine the approach in real-world settings, building towards experimental validation that quantifies the effects that incremental ontology harmonisation actually delivers in practice;
- Advanced tooling: Develop intuitive user interfaces and automated recommendations that guide designers in refining their conceptual frames, to make the approach more repeatable and to lower the entry cost for any who want to embrace it in the future; and
- Interoperability metrics: Establish metrics to measure how well different conceptual frames harmonise, exploring ways to quantify the assurance of shared meaning and, therefore, how readily new data sources could integrate with existent and evolving ontologies.
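One candidate interoperability metric - offered as an assumption to be tested in this research, not an established measure, with illustrative term sets invented for the example - is the Jaccard overlap between the vocabularies two conceptual frames draw upon, as a crude first signal of how readily they might harmonise:

```python
def term_overlap(frame_a_terms, frame_b_terms):
    """Jaccard similarity of two frames' term sets, in [0, 1].
    0 means disjoint vocabularies; 1 means identical ones."""
    a, b = set(frame_a_terms), set(frame_b_terms)
    if not (a | b):
        return 0.0  # two empty frames: define overlap as zero
    return len(a & b) / len(a | b)

# Hypothetical vocabularies for a mechanical and a software frame.
mechanical = {"mass", "stiffness", "material", "interface"}
software = {"interface", "latency", "throughput", "mass"}
print(round(term_overlap(mechanical, software), 2))  # 0.33
```

A fuller metric would also need to weigh whether shared terms carry shared meaning (the over-matching problem noted in Section 2.1), so lexical overlap alone can only be a starting point.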
The goal of the research is to grant design teams the ability to find the most economically viable path to fulfilling the description needs at hand, through insight into the ontological realities of the information landscape as it stands. It is the people and their thinking that the approach aspires to most fully and naturally facilitate and empower. Under seemingly endless and accelerating digital disruption, AI innovation and ballooning system complexity (including software and data centricity), it is proposed that flexible ontological harmonisation is an ‘idea whose time has come’. If that proves to be the case, design teams will require approaches like the one described herein to prevail.