To most non-economists, and to some economists, the word “strategic” in the title will have either military connotations (securing our supplies of essential materials, denying such materials or sensitive technology to our adversaries) or industrial policy connotations (identification and promotion of sectors that are of special importance to our economy). The role of trade policy in such situations is indeed an interesting question; for recent analyses see Cooper (1987, pp. 305-15) and Krugman (1987), respectively.
However, it has become customary in international trade theory to use the word “strategic” in a different sense - namely, that of game theory. In the working of trade and trade policy, there is the usual structural interaction among the firms and governments: The outcome for each depends on the actions of all. The added element of strategic interaction arises when each decision maker is aware that he faces an environment that is not passive, but composed of other rational decision makers, who in turn are similarly aware, and so forth.
One expects such strategic interactions to arise when there is a small number of large buyers, sellers, or policy makers. During the last four decades, such conditions have arisen due to the growth of large and multinational firms, of public enterprises in major industries, and of large countries and blocs with considerable economic power. Until recently, trade theory neglected these developments. The standard model assumed perfect competition among firms, and allowed only one government to be active in policy making.
Abstract: Three topics are discussed. The first is a research program to establish whether the familiar trading rules, such as sealed-bid and oral double auctions, are incentive efficient over a wide class of economic environments. The second is a review of recent studies of dynamic trading processes, and particularly the effects of impatience and private information on the timing and terms of trade; the main emphasis is on models of bilateral bargaining. The third considers prospects for embedding bargaining and auction models in larger environments so as to endogenize traders' impatience as a consequence of competitive pressures; models of dispersed matching and bargaining and a model of oral bid-ask markets are mentioned.
Introduction
My aim in this chapter is to describe some developments in the theory of exchange. The topics I describe share a common focus, namely the determination of the terms of trade. They also share a common methodology: the application of game theory to finely detailed models of trading processes. The aim of this work is to establish substantially complete analyses of markets taking account of agents' strategic behavior. Typically the results enable two key comparisons. One is the effect of altering the trading rules, and the other is the effect of alterations in the environment, such as changes in the number, endowments, preferences or information of the participants.
Abstract: This is a partial survey of results on the complexity of the linear programming problem since the ellipsoid method. The main topics are polynomial and strongly polynomial algorithms, probabilistic analysis of simplex algorithms, and recent interior point methods.
Introduction
Our purpose here is to survey theoretical developments in linear programming, starting from the ellipsoid method, mainly from the viewpoint of computational complexity. The survey does not attempt to be complete and naturally reflects the author's perspective, which may differ from the viewpoints of others.
Linear programming is perhaps the most successful discipline of Operations Research. The standard form of the linear programming problem is to maximize a linear function cᵀx (c, x ∈ ℝⁿ) over all vectors x such that Ax = b and x ≥ 0. We denote such a problem by (A, b, c). Currently, the main tool for solving the linear programming problem in practice is the class of simplex algorithms proposed and developed by Dantzig [43]. However, applications of nonlinear programming methods, inspired by Karmarkar's work [79], may also become practical tools for certain classes of linear programming problems. Complexity-based questions about linear programming and related parameters of polyhedra (see, e.g., [66]) have been raised since the 1950s, before the field of computational complexity started to develop. The practical performance of the simplex algorithms has always seemed surprisingly good. In particular, the number of iterations seemed polynomial and even linear in the dimensions of problems being solved. Exponential examples were constructed only in the early 1970s, starting with the work of Klee and Minty [85].
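To make the standard form concrete, the following sketch solves a small problem of the form (A, b, c) with an off-the-shelf solver; the numerical data are invented purely for illustration, and scipy's linprog is a minimizer, so the objective vector is negated to obtain a maximization. It is, of course, not one of the simplex or interior point implementations discussed in this survey.

import numpy as np
from scipy.optimize import linprog

# Illustrative problem data (A, b, c): maximize c^T x subject to Ax = b, x >= 0.
A = np.array([[1.0, 1.0, 1.0],
              [2.0, 1.0, 0.0]])
b = np.array([4.0, 5.0])
c = np.array([3.0, 2.0, 4.0])

# linprog minimizes, so pass -c; bounds=(0, None) encodes x >= 0 componentwise.
res = linprog(-c, A_eq=A, b_eq=b, bounds=(0, None), method="highs")

print("optimal x:", res.x)          # expected: [2.5, 0.0, 1.5]
print("optimal value:", c @ res.x)  # expected: 13.5

The same (A, b, c) data could equally be handed to any simplex or interior point code; the point is only to fix the notation used throughout.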
The past decade has witnessed a growing interest in contract theories of various kinds. This development is partly a reaction to our rather thorough understanding of the standard theory of perfect competition under complete markets, but more importantly to the resulting realization that this paradigm is insufficient to accommodate a number of important economic phenomena. Studying in more detail the process of contracting - particularly its hazards and imperfections - is a natural way to enrich and amend the idealized competitive model in an attempt to fit the evidence better. At present it is the major alternative to models of imperfect competition; we will comment on its comparative advantage below.
In one sense, contracts provide the foundation for a large part of economic analysis. Any trade - as a quid pro quo - must be mediated by some form of contract, whether it be explicit or implicit. In the case of spot trades, however, where the two sides of the transaction occur almost simultaneously, the contractual element is usually downplayed, presumably because it is regarded as trivial (although this need not be the case; see Section 3). In recent years, economists have become much more interested in long-term relationships where a considerable amount of time may elapse between the quid and the quo. In these circumstances, a contract becomes an essential part of the trading relationship.
Abstract: Increasing returns are as fundamental a cause of international trade as comparative advantage, but their role has until recently been neglected because of the problem of modeling market structure. Recently, substantial theoretical progress has been made using three different approaches. These are the Marshallian approach, where economies are assumed external to firms; the Chamberlinian approach, where imperfect competition takes the relatively tractable form of monopolistic competition; and the Cournot approach of noncooperative quantity-setting firms. This chapter surveys the basic concepts and results of each approach. It shows that some basic insights are not too sensitive to the particular model of market structure. Although much remains to be done, we have made more progress toward a general analysis of increasing returns and trade than anyone would have thought possible even a few years ago.
Since the beginnings of analytical economics, the concept of comparative advantage has been the starting point for virtually all theoretical discussion of international trade. Comparative advantage is a marvelous insight: simple yet profound, indisputable yet still (more than ever?) misunderstood by most people, lending itself both to theoretical elaboration and practical policy analysis. What international economist, finding himself in yet another confused debate about U.S. “competitiveness,” has not wondered whether anything useful has been said since Ricardo?
The last decade has witnessed a renewed interest in econometric methodology and stimulated the articulation of many distinctive viewpoints about empirical modelling and the credibility of econometric evidence. To a non-specialist, the plethora of debates and the rapid evolution of new concepts might suggest that econometricians were in total disarray within their Ivory Tower of Babel. Certainly, an excessive interest in methodology can substitute for more constructive activities; but equally, an inadequate methodological basis can induce gross inefficiency in research. I believe our own debates arose in large measure out of the turbulence of the 1970s, but were sustained because of the extraordinary innovations in computing technology which seem bound to influence most aspects of modern societies. Specifically, the vast reduction in the price per unit of computer power allowed far greater exploration of selection methods and the robustness of results than could have been dreamt about 20 years ago. Thus, the focus shifted from the sheer difficulty of estimating the unknown parameters of theory-based models, where the arcana of one decade became the routine trivia of the next, to evaluating models and thoroughly investigating potential mismatches of theory and evidence. Naturally, different views have evolved both on the causes of model failures in the 1970s and on the credibility of the results of search procedures.
These developments occurred against a background in which economic theory itself evolved rapidly, and the technology of econometric analysis advanced by leaps and bounds.
The purpose of this chapter is to formalise the methodology sketched in Chapter 1 using the concepts and procedures developed and discussed in the intervening chapters. The task of writing this chapter has become considerably easier since the publication of Caldwell (1982), which provides a lucid introduction to the philosophy of science for economists and establishes the required terminology. Indeed, the chapter can be seen as a response to Caldwell's challenge in his discussion of possible alternative approaches to economic methodology:
…One approach which to my knowledge has been completely ignored is the integration of economic methodology and philosophy with econometrics. Methodologists have generally skirted the issue of methodological foundations of econometric theory, and the few econometricians who have addressed philosophical issues have seldom gone beyond gratuitous references to such figures as Feigl or Carnap.…
(See ibid., p. 216.)
In order to avoid long digressions into the philosophy of science, the discussion which follows assumes that the reader has some basic knowledge of the philosophy of science at the level covered in the first five chapters of Caldwell (1982) or Chalmers (1982).
Let us begin the discussion by considering the textbook econometric methodology criticised in Chapter 1 from a philosophy of science perspective. Any attempt to justify the procedure given in Fig. 1.1 reveals a deeply rooted influence from the logical positivist tradition of the late 1920s and early 1930s.
The purpose of this chapter is to consider various methods for constructing ‘good’ estimators for the unknown parameters θ. The methods to be discussed are the least-squares method, the method of moments and the maximum likelihood method. These three methods played an important role in the development of statistical inference from the early nineteenth century to the present day. The historical background is central to the discussion of these methods because they were developed in response to the particular demands of the day and in the context of different statistical frameworks. If we consider these methods in the context of the present-day framework of a statistical model as developed above, we lose most of the early pioneers' insight, and the resulting anachronism can lead to misunderstanding. The method developed in relation to the contemporary statistical model framework is the maximum likelihood method, attributed to Fisher (1922). The other two methods will be considered briefly in relation to their historical context in an attempt to delineate their role in contemporary statistical inference and, in particular, their relation to the method of maximum likelihood.
The method of maximum likelihood will play a very important role in the discussion and analysis of the statistical models considered in Part IV; a sound understanding of this method will be of paramount importance. After discussing the concepts of the likelihood function, the maximum likelihood estimator (MLE) and the score function, we go on to discuss the properties of MLEs.
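As a preview, and assuming purely for illustration an i.i.d. exponential sample with density f(x; θ) = θe^(−θx), x > 0, θ > 0 (any simple parametric family would serve equally well), the likelihood function, log-likelihood, score function and MLE take the form

\[
L(\theta; x) = \prod_{i=1}^{n} \theta e^{-\theta x_i} = \theta^{n} e^{-\theta \sum_{i=1}^{n} x_i},
\qquad
\log L(\theta; x) = n \log \theta - \theta \sum_{i=1}^{n} x_i,
\]
\[
s(\theta; x) \equiv \frac{d \log L(\theta; x)}{d\theta} = \frac{n}{\theta} - \sum_{i=1}^{n} x_i = 0
\quad \Longrightarrow \quad
\hat{\theta}_{ML} = \frac{n}{\sum_{i=1}^{n} x_i} = \frac{1}{\bar{x}_n},
\]

where x̄ₙ is the sample mean; the second derivative −n/θ² < 0 confirms that this is indeed a maximum. The formal definitions and properties are developed in what follows.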
The current framework of hypothesis testing is largely due to the work of Neyman and Pearson in the late 1920s and early 1930s, complementing Fisher's work on estimation. As in estimation, we begin by postulating a statistical model, but instead of seeking an estimator of θ in Θ we consider the question of whether θ ∈ Θ₀ ⊂ Θ or θ ∈ Θ₁ = Θ − Θ₀ is better supported by the observed data. The discussion which follows will proceed in a similar way, though less systematically and formally, to the discussion of estimation. This is due to the complexity of the topic, which arises mainly because one is asked to assimilate too many concepts too quickly just to be able to define the problem properly. This difficulty, however, is inherent in testing if any proper understanding of the topic is to be attempted, and it is thus unavoidable. Every effort is made to ensure that the formal definitions are supplemented with intuitive explanations and examples. In Sections 14.1 and 14.2 the concepts needed to define a test and some criteria for ‘good’ tests are discussed using a simple example. In Section 14.3 the question of constructing ‘good’ tests is considered. Section 14.4 relates hypothesis testing to confidence estimation, bringing out the duality between the two areas. In Section 14.5 the related topic of prediction is considered.
Testing, definitions and concepts
Let X be a random variable (r.v.) defined on the probability space (S, ℱ, P(·)) and consider the statistical model associated with X:
(i) Φ = {f(x; θ), θ ∈ Θ};
(ii) X = (X₁, X₂, …, Xₙ)′ is a random sample from f(x; θ).
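Purely by way of illustration of the kind of simple example used in Sections 14.1 and 14.2 (the choice of a normal family with known variance is an assumption made here for concreteness, not part of the formal development), take X₁, …, Xₙ i.i.d. N(θ, σ²) with σ² known and consider H₀: θ = θ₀ against H₁: θ ≠ θ₀. A natural test is based on

\[
\tau(\mathbf{X}) = \frac{\sqrt{n}\,(\bar{X}_n - \theta_0)}{\sigma},
\qquad
C_1 = \left\{ \mathbf{x} : \lvert \tau(\mathbf{x}) \rvert > c_{\alpha} \right\},
\]

where the rejection region C₁ is determined by choosing c_α so that P(|τ(X)| > c_α; θ = θ₀) = α, the prescribed significance level. Under H₀, τ(X) ~ N(0, 1), so c_α is the upper α/2 quantile of the standard normal distribution.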
By descriptive study of data we refer to the summarisation and exposition (tabulation, grouping, graphical representation) of observed data as well as the derivation of numerical characteristics such as measures of location, dispersion and shape.
Although the descriptive study of data is an important facet of modelling with real data in itself, in the present study it is mainly used to motivate the need for probability theory and statistical inference proper.
In order to make the discussion more specific let us consider the after-tax personal income data of 23,000 households for 1979–80 in the UK. These data in raw form constitute 23,000 numbers between £1000 and £50,000. This presents us with a formidable task in attempting to understand how income is distributed among the 23,000 households represented in the data. The purpose of descriptive statistics is to help us make some sense of such data. A natural way to proceed is to summarise the data by allocating the numbers into classes (intervals). The number of intervals is chosen a priori and it depends on the degree of summarisation needed. In the present case the income data are allocated into 15 intervals, as shown in Table 2.1 below (see National Income and Expenditure (1983)). The first column of the table shows the income intervals, the second column shows the number of incomes falling into each interval and the third column the relative frequency for each interval.
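The grouping exercise just described can be sketched in a few lines. Since Table 2.1 and the underlying survey data are not reproduced here, the income figures below are simulated purely for illustration, and the 15 equal-width intervals between £1000 and £50,000 are an assumption of the sketch rather than the grouping actually used in the table.

import numpy as np

# Illustrative only: simulated after-tax incomes standing in for the
# 23,000 household observations discussed in the text.
rng = np.random.default_rng(0)
incomes = np.clip(rng.lognormal(mean=8.8, sigma=0.5, size=23_000), 1_000, 50_000)

# Allocate the observations into 15 intervals chosen a priori, then derive
# the frequency and relative frequency for each interval (columns 2 and 3).
counts, edges = np.histogram(incomes, bins=15, range=(1_000, 50_000))
rel_freq = counts / counts.sum()

for lower, upper, n_i, f_i in zip(edges[:-1], edges[1:], counts, rel_freq):
    print(f"£{lower:8.0f} - £{upper:8.0f}  {n_i:6d}  {f_i:6.3f}")

A histogram or frequency polygon of the kind discussed later can then be drawn directly from the interval counts and edges.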