Model order reduction methods for linear and non-linear dynamic systems in general can be classified into two categories [6]:
Singular-value-decomposition (SVD) based approaches
Krylov-subspace-based approaches.
Krylov-subspace-based methods were reviewed in Chapter 2. In this chapter, we focus on the SVD-based reduction methods. Singular value decomposition yields low-rank approximations that are optimal in the 2-norm sense. The quantities that determine how well a given system can be approximated by a lower-rank one are the singular values, which are the square roots of the eigenvalues of the product of the system matrix and its adjoint. The major advantage of SVD-based approaches over Krylov subspace methods lies in their ability to ensure that the reduction error satisfies an a priori upper bound. SVD-based methods also typically yield optimal or near-optimal reduction results, since the error is controlled globally. However, SVD-based methods suffer from poor scalability: the SVD is computationally intensive and in general cannot handle very large dynamic systems. In contrast, Krylov-subspace-based methods can scale to very large systems, thanks to efficient methods for computing moment vectors and their orthogonalized forms.
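For the balanced-truncation method taken up below, this a priori bound is explicit. If σ1 ≥ σ2 ≥ … ≥ σn are the Hankel singular values and the model is truncated to order r, the standard result is

```latex
\|H - H_r\|_{\mathcal{H}_\infty} \;\le\; 2\sum_{k=r+1}^{n} \sigma_k ,
```

with repeated singular values counted once, so the reduction order can be chosen in advance to meet a given error budget.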
SVD-based approaches comprise several reduction methods [6]. In this chapter, we mainly focus on the truncated-balanced-realization (TBR) approach and its variants, which were first introduced by Moore [81].
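As a concrete illustration, the following is a minimal square-root TBR sketch in Python. It assumes a generic stable, minimal state-space system (A, B, C), not a specific circuit formulation, and omits the safeguards a production implementation would need:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def tbr_reduce(A, B, C, r):
    """Square-root TBR sketch. Assumes A is stable and the system is
    minimal, so both Gramians are symmetric positive definite."""
    # Gramians: A P + P A^T + B B^T = 0 and A^T Q + Q A + C^T C = 0
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    Lc = cholesky(P, lower=True)           # P = Lc Lc^T
    Lo = cholesky(Q, lower=True)           # Q = Lo Lo^T
    U, s, Vt = svd(Lo.T @ Lc)              # s = Hankel singular values
    S_inv_sqrt = np.diag(s[:r] ** -0.5)
    T = Lc @ Vt[:r].T @ S_inv_sqrt         # right projector (n x r)
    W = S_inv_sqrt @ U[:, :r].T @ Lo.T     # left projector  (r x n)
    return W @ A @ T, W @ B, C @ T, s
```

The returned singular values s let one pick the smallest r for which twice their truncated tail meets the error budget from the bound above.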
Complexity reduction and compact modeling of interconnect networks have been an intensive research area in the past decade, owing to increasingly severe signal integrity effects and growing electric and magnetic couplings modeled by parasitic capacitors and inductors. Most previous work focuses mainly on reducing the internal circuitry by various reduction techniques. The most popular approach is based on subspace projection [32, 37, 85, 91, 113]. The projection-based method was pioneered by the asymptotic waveform evaluation (AWE) algorithm [91], where explicit moment matching was used to compute dominant poles at low frequency. Later, more numerically stable techniques based on implicit moment matching and congruence transformations were proposed [32, 37, 85, 113].
However, nearly all existing model order reduction techniques are restricted to suppressing the internal nodes of a circuit. Terminal reduction, by contrast, has received much less attention in compact modeling of interconnect circuits. Terminal reduction reduces the number of terminals of a given circuit under the assumption that some terminals behave similarly with respect to performance metrics such as timing or delay. Such reduction incurs some accuracy loss, but it leads to more compact models once traditional model order reduction is applied to the terminal-reduced circuit, as shown in Figure 6.1.
For instance, if we use subspace projection methods like PRIMA [85] for model order reduction, a smaller terminal count leads to a smaller reduced model, given the same block-moment order requirement for both circuits.
As VLSI technology advances with decreasing feature sizes and increasing operating frequencies, inductive effects of on-chip interconnects become increasingly significant in terms of delay variation, degradation of signal integrity, and aggravation of signal crosstalk [72, 116]. Since inductance is defined with respect to a closed current loop, loop-inductance extraction must simultaneously specify both the signal-net current and its return current. To avoid the difficulty of determining the return-current path, the partial element equivalent circuit (PEEC) model [101] can be used, in which each conductor forms a virtual loop with infinity and its partial inductance is extracted.
To model inductive interconnects accurately in the high-frequency region, RLCM (M here stands for mutual inductance) networks under the PEEC formulation are generated from discretized conductors by volume decomposition according to the skin depth and longitudinal segmentation according to the wavelength at the maximum operating frequency. Extraction based on this approach [59, 83, 84] has high accuracy but typically results in a huge RLCM network with a dense partial inductance matrix L. A densely coupled inductive network sacrifices the sparsity of the circuit matrix and slows down circuit simulation or makes it infeasible. Because the primary complexity stems from the dense inductive coupling, efficient yet accurate inductance sparsification becomes a necessity for extraction and simulation of inductive interconnects in high-speed circuit design.
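To make the volume-decomposition criterion concrete, the skin depth follows the standard formula δ = √(2/(ωμσ)). A quick check in Python, with copper at 10 GHz used purely as an illustrative operating point:

```python
import math

def skin_depth(freq_hz, sigma, mu_r=1.0):
    """Skin depth in meters: delta = sqrt(2 / (omega * mu * sigma))."""
    mu = mu_r * 4e-7 * math.pi          # permeability in H/m
    omega = 2.0 * math.pi * freq_hz
    return math.sqrt(2.0 / (omega * mu * sigma))

# Copper (sigma ~ 5.8e7 S/m) at 10 GHz gives delta ~ 0.66 um, so
# conductor cross-sections are split into filaments of comparable size.
print(skin_depth(10e9, 5.8e7))
```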
As VLSI technology advances into the sub-100nm regime with increased operating frequencies and decreased feature sizes, the nature of VLSI design has changed significantly. One fundamental paradigm change is that parasitic interconnect effects dominate both the chip's performance and the design's complexity growth. As feature sizes shrink, electromagnetic couplings between interconnects become more pronounced, and their adverse impact on circuit performance and power becomes more significant. Signal integrity, crosstalk, skin effects, substrate loss, and digital-analog substrate couplings are now adding severe complications to design methodologies already stressed by increasing device counts. It has been observed that today's high-performance digital design has essentially become analog circuit design [24], as a finer level of detail must be taken into account.
In addition to dominant deep submicron effects, the exponential increase in device counts pushes in the opposite direction: design abstraction levels must be raised to cope with the growth in design capacity. It is widely believed that behavioral and compact modeling for the synthesis, optimization, and verification of complicated systems-on-a-chip is a viable way to address these challenging design problems [66].
In this book, we focus on the compact modeling of on-chip interconnects and general linear time-invariant (LTI) systems, because interconnect parasitics, which are modeled as linear RLCM circuits, are the dominant factors in complexity growth.
In this chapter, we study model order reduction of interconnect circuits with many terminals or ports. We show that projection-based model order reduction techniques are not very efficient for such circuits. We then present an efficient reduction method which combines projection-based MOR with a frequency-domain fitting method to produce reduced models for interconnect circuits with many terminals.
Introduction
Krylov subspace projection methods have been widely used for model order reduction, owing to their efficiency and simplicity for implementation [32, 37, 85, 91, 113]. Chapter 2 has a detailed review of those methods.
One problem with existing projection-based model order reduction techniques is that they are inefficient at reducing circuits with many ports. This is reflected in several aspects of existing Krylov subspace algorithms like PRIMA [85]. First, the time complexity of PRIMA is proportional to the number of ports of the circuit, as moments excited by every port must be computed and matrix-valued transfer functions are generated. Second, the number of poles in the reduced model grows linearly with the number of ports, which makes the reduced models much larger than necessary. The fundamental reason is that all the Krylov-based projection methods work directly on the moments, which mix the information of both the poles and the residues of the corresponding transfer function.
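The port dependence shows up directly in the size of the projection basis: matching q block moments with p ports yields a basis of roughly q·p columns. A minimal sketch, using plain repeated QR rather than the numerically careful block Arnoldi inside PRIMA, and with A and B as generic stand-ins for the circuit's state and port input matrices:

```python
import numpy as np

def block_krylov_basis(A, B, q):
    """Orthonormal basis for span{B, A B, ..., A^(q-1) B} via one QR.
    A sketch only; PRIMA uses a numerically careful block Arnoldi."""
    blocks = [B]
    for _ in range(q - 1):
        blocks.append(A @ blocks[-1])
    K = np.hstack(blocks)                # n x (q * p)
    Q, _ = np.linalg.qr(K)
    return Q                             # ~ q * p columns

# With p = 64 ports and q = 5 matched block moments, the projected
# model already has about 320 states: the port dependence noted above.
```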
The integrated circuit industry has continuously enjoyed enormous success owing to its ever-increasing scale of integration. System-on-a-chip (SOC) technology [30, 133] requires heterogeneous integration to support different modules, such as logic, memory, analog, RF/microwave, FPGA, and MEMS sensors, within a single silicon chip. Such heterogeneous integration leads to highly non-uniform current distribution across one layer or between any pair of layers. As a result, it is beneficial to design a structured multi-layer and multi-scale power and ground (P/G) grid [11] that is globally irregular and locally regular [115] according to the current density. This results in a heterogeneously structured P/G circuit model in which each subblock can have a different time constant. In addition, typical extracted P/G grid circuits have millions of nodes and large numbers of ports. To ensure power integrity, specialized simulators for structured P/G grids are required to analyze the voltage bounce or drop efficiently and accurately using macro-models.
In [139], internal sources are first eliminated to obtain a macro-model with only external ports. The entire P/G grid is partitioned at, and connected by, those external ports. Because the elimination results in a dense macro-model, [139] applies an additional sparsification procedure that is error-prone and inefficient. In addition, [18, 95] proposed localized simulation and design methods based on the locality of the current distribution in most P/G grids with C4 bumps.
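The elimination step is, at heart, a Schur complement on the conductance matrix. A small sketch follows; the block names Gpp, Gpi, Gii, Gip are our own labels for the port/internal partition, not notation from [139]:

```python
import numpy as np

def port_macromodel(G, ports):
    """Eliminate internal nodes of a conductance matrix G via the Schur
    complement, leaving a macro-model on the port nodes only."""
    n = G.shape[0]
    port_set = set(ports)
    internal = [i for i in range(n) if i not in port_set]
    Gpp = G[np.ix_(ports, ports)]
    Gpi = G[np.ix_(ports, internal)]
    Gii = G[np.ix_(internal, internal)]
    Gip = G[np.ix_(internal, ports)]
    # Gpp - Gpi Gii^{-1} Gip is generally dense, which is why [139]
    # needs its follow-up sparsification step.
    return Gpp - Gpi @ np.linalg.solve(Gii, Gip)
```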
The systematic use of differential forms in electromagnetic theory started with the truly remarkable paper of Hargreaves [Har08], in which the space-time covariant form of Maxwell's equations was deduced. Despite the efforts of great engineers such as Gabriel Kron (see [BLG70] for a bibliography), the use of differential forms in electrical engineering is, unfortunately, still quite rare. The reader is referred to the paper by Deschamps [Des81] for an introductory view of the subject. The purpose of this appendix is to summarize the properties of differential forms which are necessary for the development of cohomology theory in the context of manifolds, without getting into aspects which depend on metric notions. We also develop the aspects of the theory that both depend on the metric and are required for Chapter 7. Reference [Tei01] presents most of the topics in this chapter from the point of view of the numerical analyst interested in network models for Maxwell's equations.
There are several books which the authors found particularly valuable. These are [War71, Chapters 4 and 6] for a proof of Stokes’ theorem and the Hodge decomposition for a manifold without boundary, [Spi79, Chapters 8 and 11] for integration theory and cohomology theory in terms of differential forms, [BT82] for a quick route into cohomology, and [Yan70] for results concerning manifolds with boundary. Finally, the papers by Duff, Spencer, Conner, and Friedrichs (see bibliography) provide basic intuitions about orthogonal decompositions on manifolds with boundary.
The next topic from homology theory which sheds light on the topological aspects of boundary value problems is that of duality theorems. Duality theorems serve three functions, namely to show
(1) a duality between certain sets of lumped electromagnetic parameters which are conjugate in the sense of the Legendre transformation;
(2) the relationship between the generators of the pth homology group of an n-dimensional manifold and the (n−p)-dimensional barriers which must be inserted into the manifold in order to make the pth homology group of the base manifold trivial;
(3) a global duality between compatibility conditions on the sources in a boundary value problem and the gauge transformation or nonuniqueness of a potential.
In order to simplify ideas, the discussion is restricted to manifolds and to homology calculated with coefficients in the field ℝ. The duality theorems of interest to us are formulated for orientable n-dimensional manifolds M and have the form

Hp(Ω1) ≅ H^(n−p)(Ω2),
where Ω1 and Ω2 are manifolds having some geometric relation. In general, the geometric relationship of interest to us is that of a manifold and its boundary, or a manifold and its complement. A complete development of duality theorems requires the calculus of differential forms, but we will merely state the relevant duality theorems without proof and give examples to illustrate their application. These duality theorems result from a nondegenerate bilinear pairing given by integration on cohomology classes. The Mathematical Appendix covers details of the exterior product which leads to the necessary bilinear pairing; for now we note that the exterior product of a p-form and an (n − p)-form gives an n-form.
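Concretely, for a compact orientable manifold M without boundary, the pairing in question is the standard one:

```latex
H^{p}(M)\times H^{n-p}(M)\;\longrightarrow\;\mathbb{R},
\qquad
([\omega],[\eta])\;\mapsto\;\int_{M}\omega\wedge\eta ,
```

whose nondegeneracy is Poincaré duality; the boundary and complement versions stated below refine this pairing to the relative setting.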
In this chapter we consider a formulation for computing eddy currents on thin conducting sheets. The problem is unique in that it can be formulated entirely in terms of scalar functions (a magnetic scalar potential in the nonconducting region and a stream function describing the eddy currents in the conducting sheets) once cuts for the magnetic scalar potential have been made in the nonconducting region. The goal of the present formulation is a finite element approach to discretizing the equations which arise from the construction of the scalar potentials. Although a clear understanding of cuts for stream functions on orientable surfaces has been with us for over a century [Kle63], there are several open questions of interest to numerical analysts:
(1) Can one make cuts for stream functions on nonorientable surfaces?
(2) Can one systematically relate the discontinuities in the magnetic scalar potential to discontinuities in stream functions by a suitable choice of cuts?
(3) Given a set of cuts for the stream function, can one find a set of cuts for the magnetic scalar potential whose boundaries are the given cuts?
In preceding chapters we have alluded to the existence of cuts, though we have not yet dealt with the details of an algorithm for computing cuts. The algorithm for cuts will wait for Chapter 6, but it is possible to answer the questions above. Section 5B gives affirmative answers to the first two questions by using the existence of cuts for the magnetic scalar potential to show that cuts for stream functions can be chosen to be the boundaries of the cuts for magnetic scalar potentials.
The title of this book makes clear that we are after connections between electromagnetics, computation and topology. However, connections between these three fields can mean different things to different people. For a modern engineer, computational electromagnetics is a well-defined term and topology seems to be a novel aspect. To this modern engineer, discretization methods for Maxwell's equations, finite element methods, numerical linear algebra and data structures are all part of the modern toolkit for effective design and topology seems to have taken a back seat. On the other hand, to an engineer from a half-century ago, the connection between electromagnetic theory and topology would be considered “obvious” by considering Kirchhoff's laws and circuit theory in the light of Maxwell's electromagnetic theory. To this older electrical engineer, topology would be considered part of the engineer's art with little connection to computation beyond what Maxwell and Kirchhoff would have regarded as computation. A mathematician could snicker at the two engineers and proclaim that all is trivial once one gets to the bottom of algebraic topology. Indeed the present book can be regarded as a logical consequence for computational electromagnetism of Eilenberg and Steenrod's Foundations of Algebraic Topology [ES52], Whitney's Geometric Integration Theory [Whi57] and some differential topology. Of course, this would not daunt the older engineer who accomplished his task before mathematicians and philosophers came in to lay the foundations.
The three points of view described above expose connections between pairs of each of the three fields, so it is natural to ask why it is important to put all three together in one book.
Homology theory reduces topological problems that arise in the use of the classical integral theorems of vector analysis to more easily resolved algebraic problems. Stokes’ theorem on manifolds, which may be considered the fundamental theorem of multivariable calculus, is the generalization of these classical integral theorems. To appreciate how these topological problems arise, the process of integration must be reinterpreted algebraically.
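The unifying statement, in the chain-and-form language introduced next, is Stokes' theorem (recorded here for orientation; chains and the boundary operator ∂ are defined below):

```latex
\int_{c} d\omega \;=\; \int_{\partial c} \omega ,
```

where c is a p-chain and ω a (p − 1)-form; the classical gradient, Stokes, and divergence theorems of vector analysis are its low-dimensional instances.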
Given an n-dimensional region Ω, we will consider the set Cp(Ω) of all possible p-dimensional objects over which a p-fold integration can be performed. Here it is understood that 0 ≤ p ≤ n and that a 0-fold integration is the sum of values of a function evaluated on a finite set of points. The elements of Cp(Ω), called p-chains, start out conceptually as p-dimensional surfaces, but in order to serve their intended function they must be more than that, for in evaluating integrals it is essential to associate an orientation to a chain. Likewise the idea of an orientation is essential for defining the oriented boundary of a chain (Figure 1.1).
At the very least, then, we wish to ensure that our set of chains Cp(Ω) is closed under orientation reversal: for each c ∈ Cp(Ω) there is also −c ∈ Cp(Ω).
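One standard way to realize this, consistent with the coefficients-in-ℝ convention used elsewhere in the book, is to take p-chains as formal linear combinations of oriented p-cells:

```latex
c \;=\; \sum_{i} a_i\,\sigma_i , \qquad a_i \in \mathbb{R},
\qquad
\int_{-c} \omega \;=\; -\int_{c} \omega ,
```

so that −c is simply the chain with all coefficients negated, and integration is linear in the chain.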