When agents, due to incomplete preferences, fail to have well-defined marginal valuations for goods, a great many government policies will maximize social welfare or achieve efficiency. Welfare economics then becomes useless as a practical guide to decision-making. For example, the values agents assign to increases in a public good will be discretely smaller than the values they assign to decreases. For society as a whole, a large valuation gap will form and a wide range of quantities of the public good will therefore qualify as optimal. Applied welfare economics and cost–benefit analysis bypass this obstacle by paying attention only to agents’ smallest valuations, thus slanting policymaking against public goods. The multiplicity of preferences that agents view as reasonable also neuters Pareto efficiency as a policy guide: virtually any policy change is likely to harm some of the preferences agents deem reasonable.
The danger to democratic norms aside, this chapter demonstrates that state government is also a needless source of additional regulation, additional taxation, and inefficient duplication of functions – in short, a waste of taxpayer money and a pointless burden on the citizenry. Yet many of the specific functions currently performed by state governments are essential. The abolition of state government would therefore require the redistribution of those necessary functions between the national government and the local governments. The chapter shows that such a redistribution would be administratively workable: it formulates general criteria for deciding which functions should go where and offers illustrations of how those criteria might be applied to specific functions in practice.
Chapter 3 discusses the fundamentals of backscatter radio communications, analyzes the RFID backscatter channel, its major limitations and mitigation approaches, and presents recent advances including novel RFID quadrature backscatter modulation techniques.
Randomized response (RR) is a well-known method for measuring sensitive behavior. Yet the method is not often applied, for three reasons: (i) its lower efficiency, and the resulting need for larger sample sizes, makes applications of RR costly; (ii) despite its privacy-protection mechanism, the RR design may not be followed by every respondent; and (iii) the incorrect belief persists that RR yields only estimates of aggregate-level behavior that cannot be linked to individual-level covariates. This paper addresses the efficiency problem by applying item randomized-response (IRR) models to the analysis of multivariate RR data. In these models, a person parameter is estimated from multiple measures of the sensitive behavior under study, which allows for more powerful analyses of individual differences than are available from univariate RR data. Response behavior that does not follow the RR design is handled by introducing mixture components in the IRR models, with one component consisting of respondents who answer truthfully and another consisting of respondents who do not. An analysis of data from two large-scale Dutch surveys conducted among recipients of invalidity insurance benefits shows that a respondent's willingness to answer truthfully is related to educational level and to the perceived clarity of the instructions. A person is more willing to comply when the expected benefits of noncompliance are minor and social control is strong.
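To make the randomization mechanism concrete, the following minimal sketch (in Python, with illustrative numbers not taken from the paper) simulates Warner's classic randomized-response design and recovers the prevalence of the sensitive trait together with its standard error; the item randomized-response mixture models discussed above build on this idea but are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_warner(pi_true, p, n, rng):
    """Simulate Warner's randomized-response design.

    With probability p the respondent answers the sensitive question
    directly; with probability 1 - p they answer its negation.
    """
    has_trait = rng.random(n) < pi_true
    asked_direct = rng.random(n) < p
    answers_yes = np.where(asked_direct, has_trait, ~has_trait)
    return answers_yes

def estimate_warner(answers_yes, p):
    """Moment estimator of the sensitive-trait prevalence and its SE."""
    lam = answers_yes.mean()                   # observed 'yes' rate
    n = answers_yes.size
    pi_hat = (lam - (1 - p)) / (2 * p - 1)     # requires p != 0.5
    se = np.sqrt(lam * (1 - lam) / n) / abs(2 * p - 1)
    return pi_hat, se

answers = simulate_warner(pi_true=0.15, p=0.8, n=2000, rng=rng)
pi_hat, se = estimate_warner(answers, p=0.8)
print(f"estimated prevalence: {pi_hat:.3f} (SE {se:.3f})")
```

The inflated standard error relative to a direct question of the same size illustrates the efficiency loss that motivates the multivariate IRR approach.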
Data in social and behavioral sciences typically possess heavy tails. Structural equation modeling is commonly used in analyzing interrelations among variables of such data. Classical methods for structural equation modeling fit a proposed model to the sample covariance matrix, which can lead to very inefficient parameter estimates. By fitting a structural model to a robust covariance matrix for data with heavy tails, one generally gets more efficient parameter estimates. Because many robust procedures are available, we propose using the empirical efficiency of a set of invariant parameter estimates in identifying an optimal robust procedure. Within the class of elliptical distributions, analytical results show that the robust procedure leading to the most efficient parameter estimates also yields a most powerful test statistic. Examples illustrate the merit of the proposed procedure. The relevance of this procedure to data analysis in a broader context is noted.
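As an illustration of the general idea (the paper's specific robust procedures are not reproduced here), the sketch below contrasts the classical sample covariance with one well-known robust estimator, the minimum covariance determinant, on simulated heavy-tailed data; the robust matrix would then be passed to an SEM fitting routine, which is not shown.

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(1)

# Heavy-tailed data: a multivariate t-like sample built from a normal
# draw divided by a chi-square scale with 3 degrees of freedom.
n, d, df = 500, 4, 3
z = rng.standard_normal((n, d))
w = rng.chisquare(df, size=(n, 1)) / df
x = z / np.sqrt(w)

sample_cov = np.cov(x, rowvar=False)                       # classical estimate
robust_cov = MinCovDet(random_state=0).fit(x).covariance_  # MCD estimate

print("classical:\n", np.round(sample_cov, 2))
print("robust (MCD):\n", np.round(robust_cov, 2))
```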
Asymptotic distribution theory for Brogden's form of the biserial correlation coefficient is derived, and large-sample estimates of its standard error are obtained. Its efficiency relative to the biserial correlation coefficient is examined. Other modifications of the statistic are evaluated, and on the basis of these results, recommendations for the choice of an estimator of biserial correlation are presented.
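For orientation, the following sketch computes the classical biserial correlation coefficient (not Brogden's form, and not the asymptotic standard errors derived in the paper) on simulated data in which the binary variable arises by thresholding a latent normal variable.

```python
import numpy as np
from scipy.stats import norm

def biserial(y, x_binary):
    """Classical biserial correlation between a continuous y and a binary x
    assumed to arise from thresholding a latent normal variable."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x_binary).astype(bool)
    p = x.mean()
    q = 1.0 - p
    ordinate = norm.pdf(norm.ppf(p))       # normal density at the threshold
    return (y[x].mean() - y[~x].mean()) * p * q / (y.std(ddof=0) * ordinate)

rng = np.random.default_rng(6)
latent = rng.standard_normal(2000)
y = 0.6 * latent + rng.standard_normal(2000) * 0.8
x = latent > norm.ppf(0.7)                 # dichotomize the latent variable
print(f"biserial r = {biserial(y, x):.3f}")
```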
Factor scores are naturally predicted by means of their conditional expectation given the indicators y. Under normality this expectation is linear in y, but in general it is an unknown function of y. We discuss how, under nonnormality, factor scores can be predicted more precisely by a quadratic function of y.
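A small simulation can illustrate the point. The sketch below is an oracle benchmark on simulated data, not the paper's model-based predictor: it regresses a strongly skewed factor on linear versus quadratic functions of the indicators to compare the attainable prediction error (cross-product terms are omitted for brevity).

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 5000, 4

# One-factor model with a strongly skewed (nonnormal) factor.
eta = rng.chisquare(1, n) - 1.0               # centered chi-square(1) factor
lam = np.array([0.8, 0.7, 0.6, 0.5])          # illustrative loadings
y = np.outer(eta, lam) + rng.standard_normal((n, p))

def prediction_mse(features):
    """Least-squares prediction of the simulated factor from the features."""
    Z = np.column_stack([np.ones(n), features])
    coef, *_ = np.linalg.lstsq(Z, eta, rcond=None)
    return np.mean((eta - Z @ coef) ** 2)

mse_linear = prediction_mse(y)                                 # linear in y
mse_quadratic = prediction_mse(np.column_stack([y, y ** 2]))   # adds squares
print(f"MSE linear: {mse_linear:.3f}   MSE quadratic: {mse_quadratic:.3f}")
```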
Methods for the analysis of one-factor randomized groups designs with ordered treatments are well established, but they do not apply in the case of more complex experiments. This article describes ordered treatment methods based on maximum-likelihood and robust estimation that apply to designs with clustered data, including those with a vector of covariates. The contrast coefficients proposed for the ordered treatment estimates yield higher power than those advocated by Abelson and Tukey; the proposed robust estimation method is shown (using theory and simulation) to yield both high power and robustness to outliers. Extensions for nonmonotonic alternatives are easily obtained.
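For contrast with the proposed coefficients, the sketch below illustrates a simple equally spaced linear-trend contrast test for ordered group means; the contrast coefficients, robust estimators, and clustered-data extensions developed in the article are not implemented here, and the data are purely illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Four ordered treatment groups (e.g., increasing dose); means rise monotonically.
groups = [rng.normal(mu, 1.0, 20) for mu in (0.0, 0.2, 0.5, 0.6)]
means = np.array([g.mean() for g in groups])
ns = np.array([len(g) for g in groups])

# Equally spaced linear-trend contrast coefficients.
c = np.array([-3.0, -1.0, 1.0, 3.0])
pooled_var = sum((len(g) - 1) * g.var(ddof=1) for g in groups) / (ns.sum() - len(groups))
est = c @ means
se = np.sqrt(pooled_var * np.sum(c ** 2 / ns))
t = est / se
df = ns.sum() - len(groups)
p_one_sided = 1 - stats.t.cdf(t, df)
print(f"trend contrast t({df}) = {t:.2f}, one-sided p = {p_one_sided:.4f}")
```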
In optimal design research, designs are optimized with respect to some statistical criterion under a certain model for the data. The ideas from optimal design research have spread into various fields of research, and recently have been adopted in test theory and applied to item response theory (IRT) models. In this paper a generalized variance criterion is used for sequential sampling in the two-parameter IRT model. Some general principles are offered to enable a researcher to select the best sampling design for the efficient estimation of item parameters.
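The sketch below illustrates the generalized variance (D-optimality) idea in its simplest form: given provisional item parameters, it sequentially selects the examinee ability that most increases the determinant of the accumulated information matrix for the two item parameters. The provisional parameter values and the candidate ability grid are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def item_info(theta, a, b):
    """Fisher information matrix for (a, b) of a 2PL item
    from a single response at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    w = p * (1.0 - p)
    return w * np.array([[(theta - b) ** 2, -a * (theta - b)],
                         [-a * (theta - b), a ** 2]])

def next_examinee(acc_info, candidates, a, b):
    """Pick the candidate ability that maximizes the determinant of the
    accumulated information matrix (the generalized-variance criterion)."""
    dets = [np.linalg.det(acc_info + item_info(t, a, b)) for t in candidates]
    return candidates[int(np.argmax(dets))]

# Provisional item parameters and a grid of available examinee abilities.
a, b = 1.2, 0.3
candidates = np.linspace(-3, 3, 61)
acc_info = 1e-3 * np.eye(2)   # small ridge so the first determinant is defined

for _ in range(10):
    theta = next_examinee(acc_info, candidates, a, b)
    acc_info = acc_info + item_info(theta, a, b)
    print(f"selected ability {theta:+.2f}, det(info) = {np.linalg.det(acc_info):.4f}")
```

In practice the item parameters would be re-estimated as responses accumulate; the fixed values here only serve to show the selection mechanics.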
Chang and Stout (1993) presented a derivation of the asymptotic posterior normality of the latent trait given examinee responses, under nonrestrictive nonparametric assumptions, for dichotomous IRT models. This paper presents a fairly straightforward extension of their results to polytomous IRT models. In addition, a global information function is defined, and the relationship between the global information function and the currently used information functions is discussed. An information index that combines both global and local information is proposed for adaptive testing applications.
In planned missingness (PM) designs, certain data are set a priori to be missing. PM designs can increase validity and reduce cost; however, little is known about the loss of efficiency that accompanies these designs. The present paper compares PM designs to reduced sample (RN) designs that have the same total number of data points concentrated in fewer participants. In 4 studies, we consider models for both observed and latent variables, designs that do or do not include an “X set” of variables with complete data, and a full range of between- and within-set correlation values. All results are obtained using asymptotic relative efficiency formulas, and thus no data are generated; this novel approach allows us to examine whether PM designs have theoretical advantages over RN designs once the impact of sampling error is removed. Our primary findings are that (a) in manifest variable regression models, estimates of regression coefficients have much lower relative efficiency in PM designs than in RN designs, (b) relative efficiency of factor correlation or latent regression coefficient estimates is maximized when the indicators of each latent variable come from different sets, and (c) the addition of an X set improves efficiency in manifest variable regression models only for the parameters that directly involve the X-set variables, but it substantially improves the efficiency of most parameters in latent variable models. We conclude that PM designs can be beneficial when the model of interest is a latent variable model; recommendations are made for how to optimize such a design.
Blocked designs in functional magnetic resonance imaging (fMRI) are useful for localizing functional brain areas. A blocked design consists of different blocks of trials of the same stimulus type and is characterized by three factors: the length of the blocks (i.e., the number of trials per block), the ordering of task and rest blocks, and the time between trials within a block. Optimal design theory was applied to find the optimal combination of these three design factors. Furthermore, different error structures were used within a general linear model for the analysis of fMRI data, and the maximin criterion was applied to find designs that are robust against misspecification of model parameters.
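The following sketch illustrates the kind of calculation involved, assuming independent errors, a repetition time of 2 s, and a simple double-gamma haemodynamic response (all illustrative assumptions): it compares the estimation efficiency of task/rest blocked designs with different block lengths but the same total number of scans. The chapter's alternative error structures and maximin robustness analysis are not reproduced.

```python
import numpy as np
from scipy.stats import gamma

TR = 2.0                      # assumed repetition time in seconds

def hrf(t):
    """Simple double-gamma haemodynamic response function (sketch)."""
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)

def design_efficiency(block_len, n_blocks, contrast):
    """Efficiency of a task/rest blocked design under an i.i.d. error GLM."""
    # Alternate task and rest blocks of equal length (in scans).
    boxcar = np.tile(np.r_[np.ones(block_len), np.zeros(block_len)], n_blocks)
    t = np.arange(0, 32, TR)
    regressor = np.convolve(boxcar, hrf(t))[: boxcar.size]
    X = np.column_stack([regressor, np.ones_like(regressor)])  # task + intercept
    c = np.asarray(contrast, dtype=float)
    return 1.0 / (c @ np.linalg.inv(X.T @ X) @ c)

# Compare block lengths while holding the total number of scans fixed at 128.
for block_len in (4, 8, 16, 32):
    eff = design_efficiency(block_len, n_blocks=64 // block_len, contrast=[1, 0])
    print(f"block length {block_len:2d} scans: efficiency = {eff:.2f}")
```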
Moderation analysis is widely used in social and behavioral research. The most commonly used model for moderation analysis is moderated multiple regression (MMR) in which the explanatory variables of the regression model include product terms, and the model is typically estimated by least squares (LS). This paper argues for a two-level regression model in which the regression coefficients of a criterion variable on predictors are further regressed on moderator variables. An algorithm for estimating the parameters of the two-level model by normal-distribution-based maximum likelihood (NML) is developed. Formulas for the standard errors (SEs) of the parameter estimates are provided and studied. Results indicate that, when heteroscedasticity exists, NML with the two-level model gives more efficient and more accurate parameter estimates than the LS analysis of the MMR model. When error variances are homoscedastic, NML with the two-level model leads to essentially the same results as LS with the MMR model. Most importantly, the two-level regression model permits estimating the percentage of variance of each regression coefficient that is due to moderator variables. When applied to data from General Social Surveys 1991, NML with the two-level model identified a significant moderation effect of race on the regression of job prestige on years of education while LS with the MMR model did not. An R package is also developed and documented to facilitate the application of the two-level model.
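For reference, the baseline analysis that the paper compares against is ordinary least squares applied to the MMR model with a product term; the sketch below fits it to simulated data in which the slope genuinely varies with the moderator and the errors are heteroscedastic. The two-level NML estimator itself is not implemented, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000

# Simulated data: the slope of y on x depends on a moderator z,
# which also induces heteroscedastic errors.
x = rng.standard_normal(n)
z = rng.standard_normal(n)
slope = 0.5 + 0.3 * z                       # coefficient varies with z
y = 1.0 + slope * x + 0.2 * z + rng.standard_normal(n) * (0.5 + 0.3 * np.abs(z))

# Classical MMR: least squares with a product (interaction) term.
X = np.column_stack([np.ones(n), x, z, x * z])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))

for name, b, s in zip(["intercept", "x", "z", "x*z"], beta, se):
    print(f"{name:9s} {b:+.3f} (SE {s:.3f})")
```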
In this chapter, we describe the most important policy evaluation criteria that can be used to choose the appropriate mix of energy and climate policy instruments. We discuss economic efficiency, effectiveness, macroeconomic effects, equity, acceptability, enforceability, and administrative practicability. In the second part of the chapter, we present a simple overview of the most important economic models that can be used to estimate the impact of introducing energy and climate policy measures, such as applied general equilibrium models and integrated assessment models. Further, we provide a short introduction to policy evaluation methods such as randomised controlled trials, difference-in-differences, and regression discontinuity designs that can be used to evaluate policy effectiveness.
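As a small illustration of one of these evaluation methods, the sketch below computes a difference-in-differences estimate from simulated two-period data with a treated and a control group; the variable names and effect sizes are purely illustrative and are not taken from the chapter.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Simulated two-period data: a treated and a control group, with a policy
# (e.g., an energy tax) applied to the treated group in the second period.
n = 500
treated = rng.integers(0, 2, n)
post = rng.integers(0, 2, n)
outcome = (2.0 + 0.5 * treated + 0.3 * post
           - 0.4 * treated * post               # true policy effect
           + rng.standard_normal(n) * 0.5)
df = pd.DataFrame({"treated": treated, "post": post, "y": outcome})

# Difference-in-differences estimate from the four group means.
means = df.groupby(["treated", "post"])["y"].mean()
did = (means.loc[(1, 1)] - means.loc[(1, 0)]) - (means.loc[(0, 1)] - means.loc[(0, 0)])
print(f"difference-in-differences estimate: {did:+.3f}")
```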
One of the biggest challenges for a neurosurgical trainee is to master the handover. This requires developing the organisational efficiency to concisely relay relevant patient information to a suitably qualified person so that a given task can be executed. A trainee can work extremely hard during an on-call shift, making suitable decisions, implementing previous plans to perfection and covering slack in a team. But if the presentation of this work is unclear, it undoes a lot of that hard work and creates the impression of a disorganised trainee. Success in a handover requires an understanding of whom you are talking to, what you are saying, how you are saying it and whether the way you are communicating gains and maintains interest. Above all, a handover should ensure the smooth continuity of a patient's care.
The goal of this chapter is to introduce the reader to the wide range of methods used for performance analysis in healthcare. Specifically, it starts with a brief outline of what the authors refer to as the basic analytics of healthcare performance analysis, with an emphasis on hospitals. Although relatively simple, such basic performance analytics are popular in practice: they are useful for exploring and presenting the data before proceeding with more sophisticated methods, and they provide a bridge for communication between practitioners and academics. To facilitate the discussion, the authors provide brief empirical illustrations using real data sets and supply the relevant R code for the examples. They then briefly describe other major approaches to performance analysis in healthcare as a primer for the following chapters.
This chapter addresses the weaknesses of tax penalties in current law as deterrents of high-end tax noncompliance and describes how Congress could introduce tax penalties that vary depending upon taxpayers’ means. The chapter begins with a discussion of the possible motivations for individual tax compliance, including potential adverse consequences of noncompliance and, specifically, civil tax penalties. It then considers why current civil tax penalties often fail to deter high-end tax noncompliance. Finally, the chapter presents means-adjusted tax penalties as a new approach to the design of civil tax penalties, illustrates this approach with several examples, and addresses additional concerns.
The healthcare sector not only plays a key role in a country’s economy but is also one of the fastest growing sectors for most countries, resulting in rising expenditures. In turn, efficiency and productivity analyses of the healthcare industry have attracted attention from a wide variety of interested parties, including academics, hospital administrators, and policy makers. As a result, a large number of studies on efficiency and productivity in the healthcare industry have appeared over the past four decades in a variety of outlets. This chapter presents a performance analysis and science mapping of these studies with the aid of modern machine learning methods for bibliometric analysis. This approach revealed patterns and clusters in the data from 1,059 efficiency and productivity articles associated with the healthcare industry produced by nearly 2,300 authors and published in a multitude of Scopus-indexed academic journals from 1983 to 2021. Leveraging such biblioanalytics, combined with our own understanding of the field, the authors highlight the trends and possible future of studies on efficiency and productivity in healthcare.
The hospital industry in many countries is characterized by right-skewed distributions of hospitals’ sizes and varied ownership types, raising numerous questions about the performance of hospitals of different sizes and ownership types. In an era of aging populations and increasing healthcare costs, evaluating and understanding the consumption of resources to produce healthcare outcomes is increasingly important for policy discussions. This chapter discusses recent developments in the statistical and econometric literature on DEA and FDH estimators that can be used to examine hospitals’ technical efficiency and productivity. Use of these new results and methods is illustrated by revisiting the Burgess and Wilson hospital studies of the 1990s to estimate and draw inferences about the technical efficiency of US hospitals, to assess returns to scale and other model features, and to test for differences among US hospitals across ownership types and size groups, all within a rigorous statistical paradigm that was unavailable to researchers until recently.
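As a point of reference, the sketch below implements the textbook input-oriented, constant-returns-to-scale DEA estimator as a linear program on toy hospital data; the bias corrections, FDH variants, and inference procedures discussed in the chapter are not shown, and all numbers are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_efficiency(X, Y, o):
    """Input-oriented, constant-returns-to-scale DEA efficiency of unit o.

    X: (n_units, n_inputs) input matrix, Y: (n_units, n_outputs) outputs.
    Decision variables are [theta, lambda_1, ..., lambda_n].
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                      # minimize theta
    # Inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.c_[-X[o].reshape(m, 1), X.T]
    b_in = np.zeros(m)
    # Outputs: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    b_out = -Y[o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# Toy data: 5 hospitals, 2 inputs (beds, staff), 1 output (treated cases).
X = np.array([[100, 300], [120, 280], [80, 200], [150, 400], [90, 260]], dtype=float)
Y = np.array([[500], [520], [450], [600], [430]], dtype=float)
for o in range(len(X)):
    print(f"hospital {o}: efficiency = {dea_input_efficiency(X, Y, o):.3f}")
```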
Conditional frontier models, covering both full frontiers and partial (robust) frontiers, have evolved into an indispensable tool for exploring the impact of exogenous factors on the performance of decision-making units in a fully nonparametric setup. Nonparametric conditional frontier models enable heterogeneity to be handled in a formal way, allowing explanation of the differences in the efficiency levels achieved by units operating under different external or environmental conditions. A thorough analysis of both full and robust time-dependent conditional efficiency measures, and of their corresponding estimators, allows unravelling the compounded impact that exogenous factors may have on the production process. The nonparametric framework makes no assumptions about error distributions or production function forms and avoids misspecification problems when the data-generating process is unknown, as is common in applied studies. This chapter offers a comprehensive review of, and journey through, the conditional nonparametric frontier models developed so far in the efficiency literature. The authors show how this nonparametric dynamic framework is important for evaluating efficiency in the healthcare sector. They provide numerical illustrations on datasets from the Italian healthcare system, including summaries of practical implementation details.
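To convey the basic idea, the sketch below computes an input-oriented conditional FDH efficiency score in which the reference set is restricted to units operating under a similar value of an environmental variable, using a crude uniform window; the smooth kernels, data-driven bandwidths, partial (order-m and order-alpha) frontiers, and time-dependent measures reviewed in the chapter are not implemented, and the data are invented for illustration.

```python
import numpy as np

def conditional_fdh(X, Y, Z, o, h):
    """Input-oriented conditional FDH efficiency of unit o.

    The reference set is restricted to units producing at least as much
    output AND operating under a similar environment (|z_j - z_o| <= h,
    i.e., a uniform kernel with bandwidth h).
    """
    dominates = np.all(Y >= Y[o], axis=1) & (np.abs(Z - Z[o]) <= h)
    ratios = np.max(X[dominates] / X[o], axis=1)   # input-contraction factors
    return ratios.min()

# Toy data: 6 units, 2 inputs, 1 output, 1 environmental variable z.
X = np.array([[10, 20], [12, 18], [8, 15], [15, 25], [9, 22], [11, 16]], dtype=float)
Y = np.array([[50], [52], [48], [60], [47], [55]], dtype=float)
Z = np.array([0.1, 0.2, 0.15, 0.8, 0.12, 0.9])
for o in range(len(X)):
    print(f"unit {o}: conditional efficiency = {conditional_fdh(X, Y, Z, o, h=0.2):.3f}")
```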