It is intended here to offer a possible characterization of the concept of complete ignorance. Like other formulations, the problem is taken to be that of choice of an action from a given set when the consequences of any action are functions of an unknown state of nature. However, the properties regarded as defining an optimal choice are designed to reflect completely the idea that there is no a priori information available which gives any state of nature a distinguished position. Most importantly, the optimality criterion differs from those in the now more standard subjective probability framework by not presupposing a fixed list of states of nature. As we note shortly, the arguments and conclusions are much closer to Shackle's than to those of Ramsey, de Finetti, and Savage.
The axiom systems of these last authors imply the existence of subjective probabilities as weights to be assigned to the different possible states of nature. These authors thus provide a foundation for the centuries-old use of probability as a guide to action. The concept of complete ignorance can be expressed in this subjective probability framework only by the assignment of equal probabilities to all the states of nature, which is the principle of indifference or insufficient reason implicit in the earliest combinatorial probability calculations of Pascal and Fermat and explicit in Jacob Bernoulli, Bayes, and Laplace.
This paper deals with the application of certain computational methods to evaluate constrained extrema, maxima, or minima. To introduce the subject, we will first discuss nonlinear games. Under certain conditions, the finding of the minimax of a certain expression is closely related to, in fact identical with, the finding of a constrained minimum or maximum. Let us consider then a game (in a generalized sense) where player 1 has the choice of a certain set of numbers x1, …, xm that are constrained to be nonnegative for present purposes, and player 2 selects numbers y1, …, yn, also constrained to be nonnegative but otherwise unrestricted. The payoff of the game, the payment made by player 2 to player 1, will be a function of the decisions made by the two players, the x's and the y's. This payoff will be designated by φ(x1, …, xm; y1, …, yn). To play the game in an ideal way is to find the minimax solution; we know this solution exists under certain conditions. That is, we arrive at a choice of strategies by the two players where player 1 is maximizing his payoff given the strategy of player 2, and player 2 is minimizing the payoff, given the strategy of player 1.
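As an illustration of this connection, consider the following minimal numerical sketch (the particular functions, step size, and iteration count are assumptions of mine, chosen only for illustration, not taken from the paper): to maximize f(x) = 4x - x^2 subject to x ≤ 1 and x ≥ 0, set φ(x, y) = 4x - x^2 + y(1 - x), and let player 1 choose x ≥ 0 so as to maximize φ while player 2 chooses y ≥ 0 so as to minimize it. The saddle point of φ then lies at the constrained maximum x = 1, with y = 2 playing the role of the Lagrange multiplier, and a crude projected gradient iteration, each player repeatedly taking a small step in his own favorable direction, locates it:

    # Sketch: locating the saddle point (minimax) of
    #   phi(x, y) = 4x - x^2 + y*(1 - x)
    # with x >= 0 maximizing and y >= 0 minimizing.  The saddle point,
    # x = 1 and y = 2, also solves: maximize 4x - x^2 subject to x <= 1, x >= 0.
    # Functions, step size, and iteration count are illustrative choices.

    def solve_saddle(step=0.05, iters=5000):
        x, y = 0.0, 0.0
        for _ in range(iters):
            # player 1 moves uphill in x; player 2 moves downhill in y;
            # both are projected back onto the nonnegative orthant
            x = max(0.0, x + step * (4.0 - 2.0 * x - y))  # d(phi)/dx
            y = max(0.0, y - step * (1.0 - x))            # -d(phi)/dy
        return x, y

    if __name__ == "__main__":
        x_star, y_star = solve_saddle()
        print(f"x = {x_star:.3f}, y = {y_star:.3f}")  # approximately 1.000 and 2.000

The iteration is deliberately crude, with a fixed step size and projection onto the nonnegative orthant; it is meant only to make concrete the correspondence between the minimax solution of the game and the constrained maximum.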
The role of price adjustment equations in economic theory
In this essay, it is argued that there exists a logical gap in the usual formulations of the theory of the perfectly competitive economy, namely, that there is no place for a rational decision with respect to prices as there is with respect to quantities. A suggestion is made for filling this gap. The proposal implies that perfect competition can really prevail only at equilibrium. It is hoped that the line of development proposed will lead to a better understanding of the behavior of the economy in disequilibrium conditions.
In the traditional development of economic theory, the usual starting point is the construction for each individual (firm or household) of a pattern of reactions to events outside it (examples of elements of a reaction pattern: supply and demand curves, propensity to consume, liquidity preference, interindustrial movements of capital and labor in response to differential profit and wage rates). This point of view is explicit in the neoclassicists (Cournot, Jevons, Menger, and successors) and strongly implicit in the classicists (from Smith through Cairnes) in their discussion of the motivations of capitalists, workers, and landlords which lead to the establishment of equilibrium price levels for commodities, labor, and the use of land. The basic logic of Marx's system brings it, I believe, into the same category, although some writers have referred to his theories as being “class” economics rather than “individual” economics.
This book draws together a long series of papers by the two senior authors, alone and in collaboration with each other and with other friends and colleagues, to whose thinking and stimulation we are grateful. Both of us have had a strong primary concern with the workings of the economic system as a mechanism for achieving the optimal allocation of resources. The theme is of course an old one in economic thought; its importance was especially reinforced to us through our teachers, Harold Hotelling and Oskar Lange, and our colleague, Jacob Marschak. The very concept of optimization with resource constraints links the theory with the classical mathematical theory of constrained optimization and the more modern versions of so-called mathematical programming, where emphasis has been placed on inequality constraints.
The important property of the market as a resource allocation mechanism is its decentralization. It has always been assumed in the mainstream of economic theory, though frequently only implicitly, that the transmission of detailed information about tastes or technology is costly and that there is a virtue to systems in which decisions are made at the point where the information already exists. This desire for decentralization has, however, to be reconciled with the need for balance in the economy as a whole, most especially the need to respect limitations on the overall availability of resources. The mathematical characterization of constrained optima, at least in the Lagrangian formulation and its generalizations, suggests the possibility of decentralization.
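To make the point about decentralization concrete, here is a schematic illustration (the notation is mine, not the authors'): suppose activity levels x1, …, xn are to be chosen for n units so as to maximize the sum of unit payoffs f1(x1) + … + fn(xn) subject to an aggregate resource constraint g1(x1) + … + gn(xn) ≤ b. For a given value of the multiplier p, the Lagrangian f1(x1) + … + fn(xn) + p[b - g1(x1) - … - gn(xn)] separates into n independent terms of the form fi(xi) - p·gi(xi). Each unit can therefore choose its own xi knowing only the ‘price’ p and its own payoff and resource-use functions, while a coordinating agency need only adjust p in the direction that brings aggregate resource use into line with the availability b.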
A great deal of work has been done recently on what one may call the static aspects of competitive equilibrium, its existence, uniqueness, and optimality. This work is characterized, in the main, by being based on models whose assumptions are formulated in terms of certain properties of the individual economic units, although in the last analysis it is the nature of the aggregate excess demand functions that determines the properties of equilibria.
With regard to dynamics, especially the stability of equilibrium, much remains to be done. The concept of stability, already used in its modern sense by the nineteenth-century economists, did not receive systematic treatment in the context of economic dynamics until Samuelson's paper of 1941. Samuelson, however, did not fully explore the implications of the assumptions underlying the perfectly competitive model. He (as well as Lange, Metzler, and Morishima) focused attention on the relationship between “true dynamic stability” and the concept of “stability” as defined by Hicks in Value and Capital, rather than on whether under a given set of assumptions stability (in either sense) would prevail or not. Even though the Hicksian concept does not, in general, coincide with that of “true dynamic stability,” it is of considerable interest to us for two reasons: first, because, as shown by the writers just cited, there are situations where the two concepts are equivalent; second, because the equilibrium whose “stability” Hicks studied is indeed competitive equilibrium.
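To state the two concepts in formal terms (a standard formalization in my notation, not a quotation from any of the writers cited): if the adjustment process is linearized as dp/dt = Ap, where p is the vector of deviations of prices from their equilibrium values and A is the matrix of partial derivatives of the excess demand functions, then “true dynamic stability” requires that every characteristic root of A have a negative real part, whereas Hicksian stability requires that every principal minor of A of order k have the sign of (-1)^k. Neither condition implies the other in general; they are known to coincide in special cases, for example when A is symmetric or when all commodities are gross substitutes.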
The conditions under which a Walrasian system of multiple markets will be stable have been investigated by a number of authors under the implicit assumption of static expectations. It has often been assumed that expectations based upon an extrapolation of current rates of change (rather than upon the assumption that the future would be like the present) would prevent the system from converging to its equilibrium position at all. Since interesting results have been scarce and difficult to achieve even in the case of static expectations, it is not surprising that little has been done with the relationship between extrapolative expectations and dynamic stability. In this paper, we shall introduce, under rather restrictive assumptions, a type of extrapolative expectations, and we shall test its effects on the stability of a dynamic system.
Excess demands in a multiple market system are usually taken to be functions of the current prices of all goods. Ideally, it would be desirable also to include expected prices for all future time periods and for all individuals and all assets as arguments of the excess demand functions. A theory of such formal generality, however, would be necessarily devoid of much content. Abstractions and simplifying assumptions are necessary. There are many possible expectations functions by which people might relate current and expected prices, and there is a variety of ways to represent plausibly the type of extrapolative expectations which we wish to describe. Our choice was made largely on the grounds of mathematical simplicity.
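One simple specification of the kind intended (stated here only for orientation; it is not necessarily the exact form adopted in the paper) makes the expected price of each good a linear extrapolation of its current price and current rate of change, pi^e = pi + ηi·(dpi/dt), where ηi ≥ 0 is a coefficient of expectation. Static expectations correspond to ηi = 0, and larger values of ηi place greater weight on the current trend; excess demands are then taken to depend on these expected prices rather than on current prices alone.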
The increasing span of government control over economic life in the last fifty years has directed the attention of economic theorists to the relative merits of centralization and decentralization in economic decision-making.
If the aim of the economic system is something like the maximization of national income, it may be asked whether it is better to make the economic decisions in a central agency where information relating to the entire system can be used or in the many independent units which characterize a capitalist economy such as ours.
In the course of this great debate, the role of the price system in coordinating many individual decisions has been given stronger and stronger recognition, although the idea itself already appears in Adam Smith's famous “invisible hand.”
There is also growing concern in the field of industrial management with the administration of large business corporations. Again there arises the issue of centralization vs. decentralization. To what extent is it necessary for the efficiency of a corporation that its decisions be made at a high level where a wide degree of information is, or can be made, available? How much, on the other hand, is gained by leaving a great deal of latitude to individual departments which are closer to the situations with which they deal, even though there may be some loss due to imperfect coordination?
A representative collection of management viewpoints on this issue is found in the proceedings of a conference held in The Netherlands some years ago.
Traditionally, economic analysis treats the economic system as one of the givens. The term “design” in the title is meant to stress that the structure of the economic system is to be regarded as an unknown. An unknown in what problem? Typically, that of finding a system that would be, in a sense to be specified, superior to the existing one. The idea of searching for a better system is at least as ancient as Plato's Republic, but it is only recently that tools have become available for a systematic, analytical approach to such search procedures. This new approach refuses to accept the institutional status quo of a particular time and place as the only legitimate object of interest and yet recognizes constraints that disqualify naive Utopias.
A wealth of ideas, originating in disciplines as diverse as computer theory, public administration, games, and control sciences, has, in my view, opened up an exciting new frontier of economic analysis. It is the purpose of this paper to survey some of the accomplishments and to consider outstanding unsolved problems and desirable directions for future efforts.
It is not by accident that the terms “analytical” and “institutional” were only a few words apart in the preceding statement of scientific goals of our inquiry. In the past, especially in the nineteenth century, a cleavage developed between analysts who tended to focus on the competitive and monopolistic market models and institutionalists who, either as historians or as reformers, felt the need for a broader framework but found the existing analytical tools inadequate for their purposes.
The frequent and loud complaints of a shortage of engineers and scientists heard over the past eight years or so might be taken as indicating a failure of the price mechanism and indeed have frequently been joined with (rather vaguely stated) proposals for interference with market determination of numbers and allocation. It is our contention that these views stem from a misunderstanding of economic theory as well as from an exaggeration of the empirical evidence. On the contrary, a proper view of the workings of the market mechanism, recognizing, in particular, the dynamics of market adjustment to changed conditions, would show that the phenomenon of observed shortage in some degree is exactly what would be predicted by classical theory in the face of rapidly rising demands.
In this paper we present a model which explains the dynamics of the market adjustment process and apply the conclusions drawn from this analysis to the scientist-engineer “shortage.”
Equality of supply and demand is a central tenet of ordinary economic theory, but only as the end result of a process, not as a state holding at every instant of time. On the contrary, inequalities between supply and demand are usually regarded as an integral part of the process by which the price on a market reaches its equilibrium position. Price is assumed to rise when demand exceeds supply and to fall in the contrary case.
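The rule just described can be written dp/dt = k[D(p) - S(p)], with k > 0 a speed of adjustment. The following is a minimal numerical sketch of a single market (the linear demand and supply schedules, the adjustment speed, and the starting price are assumptions of mine, chosen only for illustration):

    # Sketch of the price-adjustment rule dp/dt = k * (D(p) - S(p)):
    # price rises when demand exceeds supply and falls in the contrary case.
    # With D(p) = 10 - p and S(p) = 2 + p the equilibrium price is p = 4.

    def demand(p):
        return 10.0 - p

    def supply(p):
        return 2.0 + p

    def adjust_price(p0=1.0, k=0.5, dt=0.1, steps=200):
        p = p0
        for _ in range(steps):
            excess = demand(p) - supply(p)  # excess demand at the current price
            p += k * excess * dt            # Euler step of dp/dt = k * excess
        return p

    if __name__ == "__main__":
        print(f"final price: {adjust_price():.3f} (equilibrium is 4.000)")

With these schedules excess demand is positive below the equilibrium price and negative above it, so the simulated price moves monotonically toward equilibrium from either side.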
The model set forth in the preceding chapter pertains to an oligopolistic industry in which the price leader is a member of that one industry alone. As such, it can be used to analyze pricing behavior in a significant portion of the oligopolistic sector. Still, if it is not to be unduly limited in its applicability, the model needs to be modified to encompass other types of industries and other types of firms. This is the task of the present chapter.
The most important of those other types of firms is the conglomerate megacorp – a megacorp that is a member of more than one industry. Such firms are becoming increasingly common. Indeed, it has been suggested that they represent a higher stage in the evolution of the corporation in the United States. Insofar as one of these conglomerate megacorps is a price leader in one or more industries, its behavior may not be entirely explainable in terms of the model set forth above. At the same time, many firms that might otherwise meet the criteria for being classified as a megacorp are members of regulated industries. Such firms, too, are important – if only because they dominate certain critical sectors of the economy. The question is whether the oligopolistic pricing model set forth above can be applied to these regulated enterprises as well.
For an understanding of how prices are determined under oligopoly it is necessary to examine not the conditions affecting the individual firm in the short run, but rather the conditions affecting the industry as a whole over the long run. This extension of the analysis to multiple periods not only introduces time as a factor; it also means that the pricing decision cannot be divorced from the industry's investment planning.
To speak of the industry in this connection is, of course, to speak of the megacorp-price leader. Its practice of acting as the surrogate for its fellow oligopolists arises out of a real necessity – the need of an industry to avoid price competition that will be destructive to all its members. Still, the question remains of how one firm can decide upon a price that will be acceptable to other firms within the same industry despite the inevitable divergence of interests.
The megacorp-price leader's task in this regard is greatly facilitated by two conditions inherent in the very situation in which it finds itself. The first is the fact, already noted in chapter 2 (pp. 47–8), that when it acts on behalf of the entire industry the megacorp-price leader's own cost and revenue curves can be treated as the marginal portions of the industry supply and demand curves respectively.
The previous chapter, in pointing out the macrodynamic properties of an economy like that of the United States, has suggested that it is possible for a society to exercise some choice, through its political system, as to the rate at which the economy will expand. But is that choice unbounded? This is the question with which this chapter begins. The concept of a potential growth rate is introduced, and the factors which may determine that potential growth rate – the availability of manpower and the rate of technological change – are then analyzed. The conclusion reached is that while the potential growth rate may well exist as an asymptotic limit which the economy can only approach, this is not the reason why the rate of economic expansion is usually held in check by the political authorities. Their reluctance to use the control they have to achieve a higher secular growth rate is due instead to the difficulty of getting the economy off the dead center established by the existing secular growth rate. Indeed, any change in the aggregate growth rate – if it is to lead to a new secular rate of expansion and not just represent another cyclical movement – will require a carefully orchestrated series of adjustments, not only on the part of government but in the other sectors of the economy as well.