Empirical evidence indicates that respondents misperceive their own household after-tax income (see Kapteyn et al., 1988); in particular, they appear to underestimate it. As will be explained below, this underestimation turns out to bias the subjective poverty line downward in empirical implementation. Walker (1987) also pointed out that the concept of income the respondent has in mind may not always be the same as the researcher's. Kapteyn et al. (1988) present a method to remedy this bias. One can adjust the responses to subjective questions if these questions are preceded by a question which measures the respondent's perception of his or her household after-tax income. The misperception of income can be calculated by comparing the respondent's perception of income with income measured as the sum of a lengthy list of components. The responses to the subjective questions can then be corrected. An alternative, of course, is to avoid the misperception altogether by prefacing the subjective questions with the detailed questions about household income components. Here, the focus is on the former case.
Kapteyn et al. (1988) assume that the answers to the subjective questions are biased in the same proportion as income is underestimated by the respondent. In this chapter, this assumption is tested within the context of the so-called Subjective Poverty Line (SPL) (see Goedhart et al., 1977). The second section concisely introduces the SPL concept.
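The proportional-bias correction can be sketched in a few lines of code. This is a hypothetical illustration, not the authors' implementation; the function names and the numerical figures are invented for the example.

```python
def correction_factor(perceived_income, income_components):
    """Misperception factor: the respondent's own income estimate
    relative to income measured as the sum of a detailed list of
    components, as in Kapteyn et al. (1988)."""
    measured = sum(income_components)
    return perceived_income / measured


def correct_subjective_answer(answer, perceived_income, income_components):
    """Under the proportional-bias assumption, a subjective answer is
    understated by the same factor as income, so divide it out."""
    return answer / correction_factor(perceived_income, income_components)


# A respondent who underestimates household income by 10% is assumed
# to understate the subjective minimum-income answer by 10% as well.
components = [1200.0, 300.0, 100.0]   # wages, benefits, other (monthly)
perceived = 1440.0                    # respondent's estimate: 90% of 1600
raw_answer = 900.0                    # stated minimum income
corrected = correct_subjective_answer(raw_answer, perceived, components)
```

Because the factor here is 0.9, the corrected answer is 900 / 0.9 = 1000, which is the direction of adjustment the chapter describes: uncorrected answers pull the subjective poverty line downward.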
One of the key concepts in economics is utility or welfare. The first thorough introductions of the concept were those by Gossen (1854), Jevons (1871) and Edgeworth (1881). They assumed that a commodity bundle x in the commodity space R^n_+ carried an intrinsic utility value U(x). The consumer problem could then be described as finding the bundle with the highest utility value that could be bought at prices p and income y.
Such a model was able to describe and to predict purchase behaviour. This was the behavioural aspect. But the model could also be used for normative purposes, where we compare utility differences between bundles x1, x2, x3 for a specific individual. This is called intra-personal comparison. The utility of income levels y1, y2, y3 may be calculated by means of the indirect utility function V(y,p), which is defined as the maximum utility to be derived from income y at given prices p.
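The definition of V(y,p) as a constrained maximum can be made concrete with a small numerical sketch. A Cobb-Douglas utility function is assumed purely for concreteness (it is not part of the text), and the maximum over the budget line is found by brute-force search so that it can be checked against the known closed form.

```python
def cobb_douglas(x1, x2, a=0.5):
    """U(x) = x1^a * x2^(1-a): an illustrative utility function."""
    return (x1 ** a) * (x2 ** (1 - a))


def indirect_utility(y, p1, p2, a=0.5, n=20000):
    """V(y, p): the maximum utility attainable on the budget line
    p1*x1 + p2*x2 = y, found by searching over spending fractions."""
    best = 0.0
    for i in range(1, n):
        x1 = (y / p1) * i / n          # spend fraction i/n on good 1
        x2 = (y - p1 * x1) / p2        # the rest goes to good 2
        best = max(best, cobb_douglas(x1, x2, a))
    return best


# For Cobb-Douglas, V(y, p) = (a/p1)^a * ((1-a)/p2)^(1-a) * y, so the
# search should land very close to that value.
v = indirect_utility(100.0, 2.0, 4.0)
```

With y = 100, p = (2, 4) and a = 0.5, the closed form gives (0.25)^0.5 (0.125)^0.5 · 100 ≈ 17.68, and the search recovers it.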
This led to progressive income taxation rules, suggested by Cohen Stuart (1889) among others. This latter use in fact implies that utility differences are also comparable between individuals. This is called inter-personal comparability.
It would then also be possible to define social welfare functions W(U1,…, Un) where social welfare is a function of individual utilities. The most obvious application of that concept is to compare distributions of social wealth and to devise policies which will lead to a better distribution.
Parenthood has costs as well as benefits. Children must be fed, clothed, housed, and educated, and the resulting expenditures leave parents with less to spend on themselves. In addition, because some governments attempt to compensate families for the costs of children, reasonable estimates of the costs of children are a prerequisite for sensible policies.
But how should the costs of children be measured? We might choose, for example, to assume that the costs of a first child are the same for all families, regardless of income, or we might choose to include or disregard the psychic benefits to parents. In addition, because parents cannot typically imagine the childless alternative, interpersonal comparisons of levels of well-being between individuals in different households are necessary for the construction of sensible indexes.
This chapter presents a theoretical framework for indexes of the costs of children using a methodology that is closely related to household equivalence scale techniques (see, as an example, Blackorby and Donaldson, 1988, 1991). Households' demand behaviour is assumed to be rationalised by standard preferences, individuals in each household are assumed to be equally well off, and comparisons of levels of utility between individuals in different households are assumed to be possible (see p. 52 below).
In this framework, we introduce general classes of cost-of-children indexes (p. 53). The first class – relative cost-of-children indexes – regards the cost as equal in percentage terms for all households, while the second class – absolute cost-of-children indexes – regards the cost as equal in absolute terms.
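The distinction between the two classes can be written compactly. The notation below is introduced here for illustration only: C(u, p, d) denotes a household expenditure (cost) function giving the minimum outlay needed by a household with demographic profile d to reach utility u at prices p, and d_0 denotes the childless reference household.

```latex
% Relative index: the cost of children as a proportion of expenditure,
% the same in percentage terms for all households in the class.
R(u, p, d) = \frac{C(u, p, d)}{C(u, p, d_0)}
% Absolute index: the cost of children as a difference in expenditure,
% the same in money terms for all households in the class.
A(u, p, d) = C(u, p, d) - C(u, p, d_0)
```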
Within both intact and nonintact families in which children are present, the allocation of resources to children is best viewed as the outcome of a complicated process in which love, altruism, investment motives, fairness, and self-interested behaviour on the part of parents all play a role. Except in extreme circumstances, such as child-neglect cases, official agents of the society like courts, social service agencies, and policing institutions interfere little in the intra-household resource allocation process.
In contrast, agents of societal institutions do intervene, at least indirectly, in the interhousehold consumption allocation decisions made by divorced parents. The principal instruments of intervention are the terms and enforcement of legally stipulated divorce agreements as they relate to custody arrangements and wealth and income transfers between the ex-spouses. In our view, though parents and their legal representatives are able to shape the specifics of a divorce agreement, the environment in which such agreements are concluded is restrictive enough that we treat it as the main determinant of custody arrangements and child support orders (this approach was originally articulated by Mnookin and Kornhauser, 1979). Societal institutions affect resource allocations in nonintact families only indirectly because their interventions principally affect only the income distribution across households and, in the view taken below, the preferences of the parents. Presumably because of difficult monitoring problems and issues connected with rights to privacy, societal agents rarely prescribe interpersonal resource allocations directly to nonintact or intact families.
The use of observed income and expenditure variables as measures of welfare has a long tradition in studies of taxation and inequality. An objection to the practice is that the variables omit the contribution of nonmarket time. With the increasing availability of household datasets attention has focused on the estimation of labour supply models for the purpose of making welfare comparisons based on a utility function defined on consumption and leisure, with leisure measured as nonmarket time. The approach has been used extensively for analysing reforms to the taxation of married couples. Examples include Arrufat and Zabalza (1986), Blundell et al. (1986, 1988), Zabalza and Arrufat (1988) and Symons and Walker (1990). An obvious criticism of the approach is that the nonmarket activities of married women do not easily fit the conventional notion of ‘leisure’. Much of their time is spent on housework, producing goods and services for which there are substitutes in the market place, and so it may be more appropriate to treat domestic activity in the same way as market activity, as suggested by Becker (1965).
A long-standing and central concern in the taxation of families is that of horizontal equity in the treatment of those with different time allocations to household production and market work. Conflicting interpretations of what the criterion implies for policy typically reflect different weightings on household production in the calculation of the welfare indicator used to assess the distributional merits of a particular reform.
Equivalence scales – index numbers that attempt to measure the cost to a household of a change in its composition – are of considerable importance in the study of poverty and distribution and in the formulation of government policy. Yet there appears to be no consensus on what model of equivalence scales is the most appropriate, or indeed on whether comparisons of household welfare can be based on household expenditure data at all. The reader is referred to Coulter, Cowell and Jenkins (1991) and Browning (1991) for recent surveys of the literature.
The model of equivalence scales usually attributed to Engel (1857) is perhaps the simplest and the easiest to compute. It relies on the use of the share of food, sometimes broadly interpreted to include other necessities, as an indicator of household welfare. The equivalence scale is simply the ratio of expenditures that imply equal levels of the budget share of food for households of different demographic compositions. In general, it is impossible to test the assumption used to identify household welfare from household behaviour; it is only possible to test the implications for household demand of an identifying assumption. This makes rejections conclusive, but not acceptances.
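As an illustration only, suppose the food share follows a Working-Leser form w(y) = alpha - beta * log(y), with the intercept alpha shifted by demographic composition; this functional form and the coefficients below are hypothetical, chosen so that the Engel scale has a simple closed form.

```python
import math


def engel_scale(alpha_ref, alpha_child, beta):
    """Engel equivalence scale under a Working-Leser food share
    w(y) = alpha - beta * log(y): the ratio of expenditures y1/y0
    that equates the food shares of the two household types.

    Setting alpha_child - beta*log(y1) = alpha_ref - beta*log(y0)
    gives y1/y0 = exp((alpha_child - alpha_ref) / beta)."""
    return math.exp((alpha_child - alpha_ref) / beta)


# Hypothetical coefficients: a child raises the food-share intercept
# by 0.05 with beta = 0.10, so the household with a child needs
# exp(0.5), roughly 1.65 times the childless household's expenditure.
scale = engel_scale(alpha_ref=0.80, alpha_child=0.85, beta=0.10)
```

The sketch also makes the identification point of the paragraph visible: the scale is pinned down entirely by the assumption that equal food shares mean equal welfare, which demand data alone cannot confirm.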
The restrictions on household demand implied by the Engel model have been explicitly spelled out by Deaton (1981) and, more recently, by Browning (1988) and Blackorby and Donaldson (1988). In the absence of information on price variation, it is not possible in this chapter to test these restrictions explicitly.
The Southern-Cone stabilisation programmes of the late 1970s brought new challenges to economic theory. The underlying idea behind these programmes was that by pegging the exchange rate to the dollar, the inflation rate would rapidly come down to international levels. However - and to the surprise of policymakers - the inflation rate failed to converge quickly to the preannounced rate of devaluation, which resulted in substantial real appreciation of the domestic currency. Real economic activity expanded in spite of the real appreciation. Later in the programmes - and even before the preannounced exchange rate system was abandoned - a recession set in. The eventual slump in economic activity that took place in the Southern-Cone programmes gave rise to the notion of 'recession now versus recession later', in comparing stabilisations based on controlling the money supply with stabilisations based on fixing the (rate of change of the) exchange rate (hereafter referred to as money-based and exchange-rate-based stabilisations, respectively). The idea was that, under money-based stabilisation, the costs (in terms of output losses) would be paid up-front, whereas, under exchange-rate-based stabilisation, the costs would be postponed until a later date. Thus, choosing between the two nominal anchors was viewed as choosing not if but when the costs of bringing down inflation should be borne.
Almost half a decade later, the ‘heterodox’ programmes of Argentina, Israel, and Brazil brought to life once again some of the same - and still mostly unresolved - issues. Real appreciation was very much part of the picture in spite of the use of wage and price controls. More puzzling, however, was the re-emergence of the pattern of an initial boom and a later recession. The Israeli recession was viewed as particularly hard to rationalise because it occurred in a fiscally sound and largely successful programme. It then became clear that economic theory had to come to grips with the issue of an eventual recession in an exchange-rate-based stabilisation programme.
This chapter reviews dynamic structural econometric models with both continuous and discrete controls, and those with market interactions. Its goal is to highlight techniques which enable researchers to obtain estimates of the parameters of models with these characteristics, and then use the estimates in subsequent descriptive and policy analysis. In an attempt to increase the accessibility of structural modelling, emphasis has been laid on estimation techniques which, though consistent with the underlying structural model, are computationally simple. The extent to which this is possible depends on the characteristics of the applied problem of interest, so the chapter ends up covering more than one topic. To help the reader who has more focussed interests, we now provide an outline of what can be found in the various subsections of the chapter.
Section 2 introduces the examples used to illustrate the points made in the chapter. We begin with single-agent problems involving continuous, as well as discrete, controls, and later place the agent explicitly into a market setting. The availability of continuous controls raises the possibility of using stochastic Euler equations to estimate some of the parameters of the model, and section 3 begins the substantive discussion of the chapter by considering this possibility.
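To fix ideas, a stochastic Euler equation for a consumer with CRRA preferences facing a single asset return delivers the moment condition E[beta * (c_{t+1}/c_t)^(-gamma) * R_{t+1} - 1] = 0, which can be taken to data by GMM. The sketch below is a minimal illustration of that idea under invented parameter values and simulated data constructed to satisfy the Euler equation exactly; it is not drawn from the chapter itself.

```python
import math
import random


def euler_errors(beta, gamma, c_growth, returns):
    """e_t = beta * g_t^(-gamma) * R_t - 1, with g_t gross consumption
    growth; the model implies E[e_t | information at t-1] = 0."""
    return [beta * g ** (-gamma) * r - 1.0
            for g, r in zip(c_growth, returns)]


def gmm_objective(beta, gamma, c_growth, returns, instruments):
    """Sum of squared sample moments for two instruments: a constant
    and lagged consumption growth."""
    e = euler_errors(beta, gamma, c_growth, returns)
    m1 = sum(e) / len(e)
    m2 = sum(ei * z for ei, z in zip(e, instruments)) / len(e)
    return m1 ** 2 + m2 ** 2


# Simulated data built so the Euler equation holds exactly at
# beta = 0.96, gamma = 2 (illustration only, not a real asset market).
random.seed(0)
growth = [math.exp(random.gauss(0.02, 0.05)) for _ in range(500)]
rets = [(1 / 0.96) * g ** 2 for g in growth]
z = growth[:-1]  # lagged growth as instrument

# A coarse grid search over (beta, gamma) locates the minimum of the
# GMM objective at (or next to) the true parameter values.
best = min(((gmm_objective(b / 100, g / 10, growth[1:], rets[1:], z),
             b / 100, g / 10)
            for b in range(90, 100) for g in range(10, 40)),
           key=lambda t: t[0])
beta_hat, gamma_hat = best[1], best[2]
```

In applied work the grid search would be replaced by a proper optimiser and a weighting matrix, but the structure - pricing errors, instruments, a quadratic form in sample moments - is the same.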
In recent years there has been an increased interest in analysing the effects of political incentive constraints on macro-economic policy. More and more economists are now using elements of public choice and game theory in an effort to better understand why some countries, at some specific moments in time, choose specific macro-economic policies. This new research programme on endogenous economic policy addresses questions such as: why do some countries rely more heavily on the inflation tax than others; why are fiscal deficits so different across countries; why do different countries choose different exchange rate policies, and so on. The answers emphasise the role of the government's strategic behaviour, and of the institutions that determine policy-making.
In spite of this mounting interest in the political economy of macro-economic policy, until now there has been relatively little empirical work on the subject. The purpose of this chapter is to present the results of a comparative cross-country empirical analysis of the political determinants of the inflation tax. Our analysis differs from previous work in three respects: first, we use a new data set on cross-country political events and political institutions. An advantage of using these new data is that they are free from some of the more serious limitations encountered in other data sets which have been previously used by political scientists and economists (including ourselves). Second, in this chapter we use alternative definitions of the inflation tax and of seignorage in an effort to check for the robustness of the results. And third, we try to discriminate empirically between two alternative families of models that emphasise political explanations of inflation: models based on political instability and government 'myopia', and models of decentralised policy making that focus on the relative weakness (or strength) of the government in office.
SOME CHARACTERISTICS OF STRUCTURAL LABOUR SUPPLY MODELS
The choice between 'structural' models and data-descriptive, reduced-form representations of labour supply may appear straightforward. Structural models impose restrictions which may be invalid but, in exchange, provide economic interpretation. Where these structural restrictions are testable a standard approach to model selection is available. However, since labour supply is fundamentally dynamic, structural models usually require the separate specification and identification of expectation processes. Moreover, since many of the endogenous choice variables are censored or discrete, the ability to derive explicit reduced forms corresponding exactly to a given structural model is severely limited. In general, however, it would seem unwise to adopt some particular structural model without recourse to the usual battery of misspecification tests that reduced forms can provide; equally it would be sad not to recover structural parameters. The strategy developed in this chapter is to provide a sequential approach to estimation and testing, avoiding where possible unnecessarily strong structural assumptions. The theme is to assume only what is necessary to identify the structural parameters of interest, at the same time allowing the data the chance to reject the structural assumptions in question.
As economic models of labour supply have become increasingly sophisticated their econometric counterparts have become increasingly dependent on the imposition of structural theoretical restrictions in order to identify the parameters necessary to conduct policy and welfare analysis. This has been effectively illustrated by the results on econometric coherency and theory consistency in the analysis of taxation and labour supply. A well-defined econometric model of discrete or censored labour supply decisions requires a unique solution for labour supply for any wage/income combination.
The systematic study of intertemporal labour supply began only two decades ago. In a remarkably short time the life-cycle model of individual hours choice has moved to the forefront of both micro- and macro-econometric research. This chapter begins with a look at the original questions that first led to the interest in the life-cycle approach. I then present a selective review of the evidence on various dimensions of intertemporal labour supply. I limit my discussion to micro-econometric studies of male labour supply, making no attempt at an exhaustive survey of even this branch of the literature. Rather, my goal is to offer an assessment of the success and/or failure of the life-cycle model in providing a useful framework for understanding the main features of individual labour supply.
I conclude that the life-cycle labour supply literature sheds relatively little light on the questions that first generated interest in a life-cycle approach: What determines the shape of the life-cycle hours profile? How does labour supply respond to aggregate wage changes? What is the source of idiosyncratic changes in year-to-year labour supply? Part of the reason for this stems from a tendency in the literature to concentrate on one aspect of intertemporal hours variation - the response to wage growth along a known life-cycle trajectory - and to ignore another, namely, the response to unanticipated wage innovations. In addition, much of the literature has taken the position that average hourly earnings during the year are a 'sufficient statistic' for hours choices within the year. There is considerable evidence against this narrow reading of the life-cycle model.
This chapter attempts to augment and critique recent econometric work on the relations between aggregate consumption and equilibrium asset prices. The increased interest in the econometric implications of intertemporal asset pricing relations during the past decade can be traced to several important developments in economic theory and econometric method. In the latter half of the 1970s, Rubinstein (1974, 1976), Cox, Ingersoll and Ross (1985), Lucas (1978) and Breeden (1979) deduced general equilibrium relations between consumption decisions and asset prices under the assumptions that agents had common information sets, access to a complete set of contingent claims markets, identical preferences and equal access to all production technologies. These models served to significantly expand our understanding of the characteristics of asset prices and the nature of hedging demands in dynamic, stochastic models.
However, the implied asset pricing relations were typically highly nonlinear and not easily analysed with existing econometric techniques. Not surprisingly, then, early empirical studies explored the properties of several very special cases of these models. Hall (1978), Sargent (1978) and Flavin (1981), as well as many others subsequently, investigated versions of the permanent income-life-cycle model of consumption. Preferences were assumed to be quadratic and constraints were assumed to be linear. These assumptions imply that interest rates on discount bonds are constants and that consumption follows a random walk. Empirical studies of the permanent income model have typically not supported the random walk implication, and the implication of constant real interest rates is also counterfactual.
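The logic behind the random-walk implication can be sketched in two lines. With quadratic utility, marginal utility is linear in consumption, so the Euler equation under a constant interest rate r and a discount factor satisfying beta(1+r) = 1 collapses to a martingale restriction on consumption:

```latex
% Euler equation with quadratic utility u(c) = -\tfrac{1}{2}(\bar{c} - c)^2,
% so that u'(c) = \bar{c} - c is linear:
u'(c_t) = \beta (1+r)\, E_t\!\left[ u'(c_{t+1}) \right]
% With \beta(1+r) = 1, linearity of u' gives
E_t[c_{t+1}] = c_t
\quad\Longrightarrow\quad
c_{t+1} = c_t + \varepsilon_{t+1},
\qquad E_t[\varepsilon_{t+1}] = 0
```

Any predictable movement in consumption violates the last equality, and it is precisely this restriction that the empirical studies cited above test and typically reject.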
The use of dynamic stochastic models in economics has grown very quickly during the last fifteen years. The importance of this type of model became evident in macro-economics after Lucas (1972), who argued that the basic relations that were taken as given in traditional macro-economic models (whether Keynesian or Monetarist), such as the money demand function, the consumption function and the investment function, were not invariant to the very type of policy intervention that those models were designed to analyse. Furthermore, these relations were often mutually inconsistent. The way around this problem was to study models where objects like preferences of consumers, production technology, information dissemination, etc. were fixed, and where a well-specified concept of equilibrium determined the outcome of the model. The research programme, then, was to analyse the equilibrium of the model under different environments (for example, under different policy rules) in order to study the effect of changes in the economic environment or policy interventions, taking the consumption function, money demand, etc. as endogenous. Nowadays, dynamic stochastic models of equilibrium are being used in virtually all fields of economics.
One crucial element of how dynamic models behave is the assumption about how agents form their expectations. Nowadays, the standard assumption is that agents behave as if they had rational expectations. This avoids ad hoc assumptions about expectations that would not be likely to stay constant under policy changes if agents were rational. Furthermore, many recent papers argue that, in many models, the rational expectations equilibrium can be justified as the limit of a learning process.