Social choices, by which we mean group or collective decisions, are made in two ways: through political voting, which is typically used to make “political” decisions, and through the market mechanism, which is typically used to make economic decisions. In this chapter, we concentrate exclusively on the former. Collective action (or social [group] choice) should be based on the preferences of the individuals (or agents) in society. Therefore, an important aspect of social choice theory is the description and analysis of the way preferences of individual members of a group are aggregated into a decision of the group as a whole (see Arrow 1950; and Taylor 2005). We are interested in aggregating the preferences of the agents into a social decision, which essentially means that we consider social choice functions that, for each possible individual preference profile, pick an alternative for society from the set of alternatives. This approach differs from that of a social welfare function (discussed in Chapters 2–6), which for each possible individual preference profile picks an ordering for society (not just a single alternative, as a social choice function does).
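To make the distinction concrete, here is a minimal base-R sketch (hypothetical function names; Borda scoring is used purely as an illustrative aggregation method, not as the chapter’s) contrasting a social choice function, which returns a single alternative, with a social welfare function, which returns a full social ordering:

```r
# A preference profile: one column per agent, each column listing the
# alternatives from most to least preferred (purely illustrative data).
profile <- matrix(c("a", "b", "c",    # agent 1: a > b > c
                    "b", "a", "c",    # agent 2: b > a > c
                    "a", "c", "b"),   # agent 3: a > c > b
                  nrow = 3)

# Borda scores: with m alternatives, a top-ranked alternative earns m - 1
# points, the next m - 2, and so on down to 0.
borda_scores <- function(profile) {
  m <- nrow(profile)
  scores <- setNames(numeric(m), sort(profile[, 1]))
  for (j in seq_len(ncol(profile)))
    scores[profile[, j]] <- scores[profile[, j]] + (m - 1):0
  scores
}

# Social choice function: picks ONE alternative for the profile.
scf <- function(profile) names(which.max(borda_scores(profile)))

# Social welfare function: returns a full social ORDERING.
swf <- function(profile) names(sort(borda_scores(profile), decreasing = TRUE))

scf(profile)  # "a"
swf(profile)  # "a" "b" "c"
```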
Social choice methods can be subject to strategic manipulation, in the sense that an agent, by misrepresenting his preference, may secure an outcome he prefers to the outcome that would have been obtained had he revealed his preference sincerely. This issue was first addressed by Gibbard (1973) and Satterthwaite (1975). They consider a situation in which a group of individuals must make a collective decision regarding the selection of an outcome. The choice of this outcome depends on the preferences that each agent has over the various feasible outcomes. However, these preferences are known only to the agents themselves; that is, each agent knows only his own preference. The Gibbard–Satterthwaite theorem states that, under very mild assumptions, the only procedure which provides incentives for each individual to report his private information truthfully is one where the responsibility of choosing the outcome is left solely to a single individual (the dictator).
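As a concrete illustration of strategic manipulation (a hypothetical sketch using plurality voting, not a rule taken from the chapter), the following shows two voters securing a preferred outcome by misreporting their first choice:

```r
# Plurality rule: the alternative with the most first-place votes wins
# (ties broken alphabetically; purely illustrative).
plurality <- function(top_choices)
  names(sort(table(top_choices), decreasing = TRUE))[1]

# Nine voters' sincere first choices: 4 favor a, 3 favor b, 2 favor c.
# Suppose the two c-voters rank the alternatives c > b > a.
sincere <- c(rep("a", 4), rep("b", 3), rep("c", 2))
plurality(sincere)      # "a": the c-voters' LEAST preferred outcome

# If the two c-voters misreport b as their first choice, b wins with 5
# votes, an outcome they prefer to a; sincere voting was against their
# own interest.
manipulated <- c(rep("a", 4), rep("b", 3), rep("b", 2))
plurality(manipulated)  # "b"
```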
The chapter is organized as follows. In Section 10.2, we present the framework and state the Gibbard–Satterthwaite theorem.
Kenneth J. Arrow (see Arrow 1950) provided a striking answer to a basic problem of democracy: how can the preferences of a set of individuals be aggregated into a social ordering? The answer, known as Arrow’s impossibility theorem, was that every conceivable aggregation method has some flaw. That is, a handful of reasonable-looking axioms, which one would think an aggregation procedure should satisfy, lead to an impossibility. This impossibility theorem created a large literature and a major field, social choice theory, which is one of the main subject matters of this book. Specifically, Arrow’s impossibility theorem (also known as the general possibility theorem or Arrow’s paradox) states that when agents face three or more distinct alternatives (options), every social welfare function that converts the preference orderings of all the individuals into a social ordering while also meeting unrestricted domain, weak Pareto, and independence of irrelevant alternatives must be dictatorial. Unrestricted domain (also called universality) is the property that all possible preferences of all individuals are allowed in the domain. Weak Pareto requires that the unanimous preferences of individuals be respected: if every agent strictly prefers one alternative over another, then society must also strictly prefer the same alternative over the other. Social preference orderings satisfy the independence-of-irrelevant-alternatives criterion if the relative societal ranking of any two alternatives depends only on all the individuals’ relative rankings of those two alternatives and not on how the individuals rank other alternatives. A dictatorial social welfare function is one in which there is a single agent whose strict preferences are always adopted by the society. In this chapter, we present a detailed analysis of Arrow’s impossibility theorem and then provide two proofs of the theorem.
THE FRAMEWORK
Consider a society with n agents, denoted N = {1, …, n}. Each agent i ∊ N has an ordering ≿i defined on the set of alternatives A, where we assume |A| ≥ 3. The assumption that ≿i is an ordering on A has implications for the associated strict preference relation ≻i and indifference relation ∼i. These implications are summarized in the next proposition.
Social aggregation theory is concerned with investigating methods of combining the values that individuals in a society attach to different social or economic states into values for the society as a whole. Loosely speaking, a social state, or state of affairs, is a description of the amounts of commodities possessed by different individuals, the quantities of productive resources invested in different productive activities, and the different types of collective activities (Arrow 1950). The values that individuals attach to different social states are reflections of their respective preferences. Consequently, the problem of social aggregation is to combine individual preferences into a social preference in an unambiguous way. In this monograph, we will use the terms “social aggregation” and “social choice” interchangeably.
Modern social aggregation theory started with the publication of Kenneth J. Arrow’s pioneering contribution Social Choice and Individual Values, his PhD dissertation, in 1951. It can be regarded as having laid the groundwork of social aggregation theory in view of its innovative character and revolutionary influence. The idea of aggregating individual preferences into a collective choice rule predates Arrow (1950) by more than 150 years. In 1785, the French mathematician and philosopher Marie-Jean de Condorcet considered the problem of collective decision-making through majority voting. Under majority voting, in a choice between two alternatives x and y, x is declared the winner if it receives more votes than y. Condorcet established that the method of pairwise majority voting may give rise to cyclicality in social preference. This paradoxical result, popularly known as the Condorcet voting paradox, appears to draw inspiration, to a certain extent, from an earlier contribution by the French mathematician Jean-Charles de Borda (de Borda 1781), who proposed an alternative voting system, known as the Borda count method, in which voters rank candidates in order of preference.
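The Condorcet paradox is easy to reproduce numerically; the following base-R sketch (a hypothetical example using the classic three-voter profile) tallies pairwise majority comparisons and exhibits the cycle:

```r
# Three voters with the classic cyclic profile:
# voter 1: a > b > c, voter 2: b > c > a, voter 3: c > a > b.
profile <- list(c("a", "b", "c"), c("b", "c", "a"), c("c", "a", "b"))

# Number of voters who rank x above y (smaller position = more preferred).
prefers <- function(x, y)
  sum(sapply(profile, function(r) match(x, r) < match(y, r)))

prefers("a", "b")  # 2 of 3: a beats b by majority
prefers("b", "c")  # 2 of 3: b beats c
prefers("c", "a")  # 2 of 3: c beats a -- the social preference cycles
```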
One of the major goals of this monograph is the analysis of the Arrovian approach to the theory of collective aggregation and later developments on it. We, therefore, focus now on Arrow’s impossibility theorem, which is generally acknowledged as the formative basis of modern social aggregation rules. Arrow’s seminal work examines the possibility of the aggregation of individual preferences into a social preference in order to obtain a social ranking of alternative states of affairs.
In this chapter, we restrict our attention to decisive voting rules with only two contesting candidates (or alternatives). A decisive voting rule maps every possible profile of votes (or preferences) of the set of agents over the two contesting candidates (say, x and y) either to a winner or to a tie between the two non-losers. May’s theorem, the main subject matter of this chapter, establishes the importance of the majority voting rule by showing that it is the unique rule satisfying four important democratic principles. These four key democratic principles are decisiveness of the voting rule, anonymity, neutrality, and positive responsiveness. Decisiveness requires that the voting rule specify a unique decision, even if that decision is indifference, for any set of individual preferences over the two contesting candidates. Anonymity (or symmetry across agents) requires that a voting rule treat all voters alike, in the sense that if any two voters traded ballots, the outcome of the election would remain the same. Neutrality (or symmetry across alternatives) requires that a voting rule treat all candidates alike, rather than favor one over the other. Finally, positive responsiveness (a type of monotonicity property) requires that if the group decision is indifference or favorable to some alternative x, and individual preferences remain the same except that a single individual changes his or her vote in favor of x, then the group decision should be x. Formally, May’s theorem states that among the class of all decisive voting rules, the majority voting rule is the only one that satisfies anonymity, neutrality, and positive responsiveness.
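The majority rule in May’s two-alternative setting is simple enough to spot-check in a few lines of base R (a hypothetical encoding: a vote of +1 means the voter prefers x, -1 means y, and 0 means indifference, with the group decision given by the sign of the vote total):

```r
# Majority rule: returns +1 (x wins), -1 (y wins), or 0 (tie / indifference).
majority <- function(votes) sign(sum(votes))

votes <- c(1, 1, -1, 0, -1)
majority(votes)          # 0: group indifference

# Anonymity: permuting the voters leaves the outcome unchanged.
majority(sample(votes))  # still 0

# Neutrality: swapping the roles of x and y flips every vote, and the
# group decision flips with it.
majority(-votes)         # 0 here (and -1 whenever the original gives +1)

# Positive responsiveness: from group indifference, a single voter moving
# toward x (here voter 4 changes from 0 to +1) makes x the winner.
votes[4] <- 1
majority(votes)          # +1: x wins
```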
The chapter is organized as follows: Section 3.2 provides the framework. In Section 3.3 we state and prove May’s theorem. In Section 3.4 we check the robustness of the axioms used in May’s theorem.
THE FRAMEWORK
We consider preferences of a finite set of agents N = {1, …, n} of a society voting over two alternatives x and y.
Arrow’s theorem is based on both non-measurability (utility of an individual cannot be measured) and non-comparability (the welfare of two different individuals cannot be compared). Non-measurability is implicit in Arrow’s use of orderings to represent individual preferences, and non-comparability is implied by the controversial independence-of-irrelevant-alternatives axiom. If either of these assumptions is relaxed, then Arrow’s theorem does not hold.
Sen (1970) was the first to propose a framework for exploring the consequences of relaxing non-measurability and non-comparability. Instead of taking the set of all preference orderings as the primitive (as in Arrow), he took the set of all possible utility functions. A profile is a list of utility functions, one for each agent. A social welfare functional associates a social ordering with each profile.
In this framework, Arrow’s theorem holds if the utility functions are only ordinally measurable and interpersonally non-comparable. Arrow’s impossibility theorem is robust in the sense that weakening the requirement that social preferences are orderings, while preserving non-measurability and non-comparability, leaves little room for sensible social binary relations. Sen (1974, 1977a, 1986a) has provided a taxonomy of different measurability and comparability assumptions. In this chapter, we study Arrow’s theorem under these measurability and comparability assumptions.
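A small numerical sketch (hypothetical utilities, base R) shows why such invariance assumptions matter: the utilitarian ranking of two alternatives survives a common positive affine transformation of all utilities (cardinal full comparability) but can reverse when each agent’s utilities are rescaled independently (cardinal non-comparability):

```r
# Rows are agents, columns are the two alternatives x and y (hypothetical data).
U <- rbind(agent1 = c(x = 10, y = 0),
           agent2 = c(x = 0,  y = 6))

colSums(U)           # x = 10, y = 6: utilitarian rule ranks x above y

# Common positive affine transformation: the ranking survives.
colSums(2 * U + 5)   # x = 30, y = 22: x still above y

# Independent rescalings (agent 2's utilities tripled): the ranking
# reverses, so the utilitarian sum is not meaningful without
# interpersonal comparability.
colSums(rbind(U[1, ], 3 * U[2, ]))  # x = 10, y = 18: now y above x
```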
Why should we care about measurability and comparability assumptions? d’Aspremont and Gevers (2002) argue that there is a distinction between the problem of determining the “relationship that collective decisions or preferences ought to have with individual preferences” and the separate problem where “an ordinary citizen attempts to take the standpoint of an ethical observer in order to formulate social evaluation judgements.” The first problem is appropriate when looking at the design of constitutions, for instance. When designing a constitution, we will be concerned with the abilities of voters to manipulate the voting process because voting decisions typically involve groups. We also want to make minimal assumptions on measurability and comparability because preferences cannot be measured.
However, when an ordinary citizen takes the viewpoint of an ethical observer, she can make assumptions about measurability and comparability in order to determine her own voting strategy, or to recommend a voting process for society. d’Aspremont and Gevers (2002) also observe that much of the discussion of social welfare prior to Arrow was concerned with the second problem.
The literature on inequality measurement often regards income as the only yardstick of well-being. One plausible reason for this assumption is that an increase in a person’s income is likely to raise his standard of living. In recent years, however, the literature has shifted emphasis from a single-dimensional structure to a multidimensional framework for the purpose of inequality evaluation. (We analyze this issue in greater detail in the next chapter.) As a starting point, in this chapter we treat income as the only dimension of well-being. This chapter forms the basis of the multidimensional approaches to inequality evaluation analyzed in Chapter 9.
After presenting the basics and preliminaries in the next section, in Section 8.3 we analyze common features of inequality evaluators that rely, respectively, on the direct and the inclusive measure of well-being approaches. While in the former outlook a social welfare function is defined directly on the set of income distributions, in the latter individual utilities are aggregated to generate a welfare metric for society. The next two consecutive sections deal explicitly with inequality assessment from these two perspectives. Section 8.6 shows how some standard inequality indices can be interpreted within the Harsanyi (1953, 1955) framework (Dahlby 1987).
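For concreteness, here is a minimal base-R sketch of two standard inequality indices from this literature (the income vector is hypothetical): the Gini index, computed as half the mean absolute difference between all pairs divided by the mean, and the Atkinson index with inequality-aversion parameter e:

```r
# Gini index: mean absolute difference over all ordered pairs, / (2 * mean).
gini <- function(x) mean(abs(outer(x, x, "-"))) / (2 * mean(x))

# Atkinson index (e > 0, e != 1): one minus the ratio of the "equally
# distributed equivalent" income to the mean income.
atkinson <- function(x, e) 1 - mean(x^(1 - e))^(1 / (1 - e)) / mean(x)

incomes <- c(10, 20, 30, 40, 100)  # hypothetical income distribution
gini(incomes)           # 0.40
atkinson(incomes, 0.5)  # about 0.13
```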
The current literature on the measurement of inequality at times considers both achievement and shortfall inequalities in some dimension of human well-being and establishes a relation between them. Section 8.7 presents a condensed survey of the related literature. A person may often desire access to one or more opportunities available in society from the perspectives of happiness and success. The subject of Section 8.8 is a brief discussion of opportunity equality. In the next two sections we deal with inequality for an ordinal dimension of human well-being and an ordinal approach to inequality evaluation. In the context of network engineering, a fairness indicator determines the extent to which users are receiving fair shares of resources. The concern of Section 8.11 is a brief review of the axiomatic literature developed along this line. As we will argue, the use of conventional equality metrics may not be suitable for the evaluation of fairness.
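One widely used fairness indicator from the network-engineering literature (offered purely as an illustration, not necessarily the index treated in Section 8.11) is Jain’s fairness index, which equals 1 under perfectly equal allocations and 1/n when a single user receives everything:

```r
# Jain's fairness index: (sum x)^2 / (n * sum x^2), which lies in [1/n, 1].
jain <- function(x) sum(x)^2 / (length(x) * sum(x^2))

jain(c(5, 5, 5, 5))   # 1: perfectly equal shares
jain(c(20, 0, 0, 0))  # 0.25 = 1/n: maximally unfair
jain(c(8, 6, 4, 2))   # about 0.83: intermediate
```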
Survival analysis studies the time-to-event for various subjects. In the biological and medical sciences, interest can focus on patient time to death due to various (competing) causes. In engineering reliability, one may study the time to component failure due to analogous factors or stimuli. Cure rate models are of particular interest because, with advancements in associated disciplines, subjects can be viewed as “cured,” meaning that they do not show any recurrence of a disease (in biomedical studies) or subsequent manufacturing error (in engineering) following a treatment. This chapter generalizes two classical cure rate models via the development of a COM–Poisson cure rate model. The chapter first describes the COM–Poisson cure rate model framework and general notation, and then details the model framework assuming right and interval censoring, respectively. The chapter then describes the broader destructive COM–Poisson cure rate model, which allows for the number of competing risks to diminish via damage or eradication. Finally, the chapter details the various lifetime distributions considered in the literature to date for COM–Poisson-based cure rate modeling.
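A minimal numerical sketch under standard competing-risks cure-rate assumptions (all parameter values and the exponential baseline are hypothetical): if the number of latent competing risks M is COM–Poisson with normalizing constant Z(λ, ν) and the latent event times are i.i.d. with survival function S(t), then the population survival function is Sp(t) = E[S(t)^M] = Z(λ S(t), ν) / Z(λ, ν), with cure fraction 1/Z(λ, ν):

```r
# Truncated COM-Poisson normalizing constant: Z(lambda, nu) = sum_j lambda^j / (j!)^nu.
Z <- function(lambda, nu, jmax = 100)
  sum(lambda^(0:jmax) / factorial(0:jmax)^nu)

# Population survival of the COM-Poisson cure rate model:
# Sp(t) = E[S(t)^M] = Z(lambda * S(t), nu) / Z(lambda, nu),
# where S(t) is the common survival function of the latent event times.
Sp <- function(t, lambda, nu, S) Z(lambda * S(t), nu) / Z(lambda, nu)

S_exp <- function(t) exp(-0.5 * t)        # hypothetical exponential baseline
Sp(2, lambda = 1.5, nu = 0.8, S = S_exp)  # population survival at t = 2
1 / Z(1.5, 0.8)                           # cure fraction: Sp(t) -> 1/Z as t -> infinity
```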
This chapter defines the COM–Poisson distribution in greater detail, discussing its associated attributes and the computing tools available for analysis. It first details how the COM–Poisson distribution was derived, then describes the probability distribution and introduces computing functions available in R that can be used to determine various probabilistic quantities of interest, including the normalizing constant, probability and cumulative distribution functions, random number generation, mean, and variance. The chapter then outlines the distributional and statistical properties associated with this model, and discusses parameter estimation and statistical inference for the COM–Poisson model. Various processes for generating random data are then discussed, along with the associated R computing tools. The discussion continues with reparametrizations of the density function that serve as alternative forms for statistical analyses and model development, considers the COM–Poisson as a weighted Poisson distribution, and details the various ways to approximate the COM–Poisson normalizing function.
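The distribution itself is straightforward to compute from scratch; the sketch below (base R, truncating the infinite normalizing series, with hypothetical parameter values) evaluates the pmf P(X = x) = λ^x / ((x!)^ν Z(λ, ν)) and obtains the mean and variance numerically:

```r
# COM-Poisson pmf with the normalizing constant Z(lambda, nu) approximated
# by truncating the infinite series (adequate for moderate lambda and nu > 0).
dcompois <- function(x, lambda, nu, jmax = 100) {
  Z <- sum(lambda^(0:jmax) / factorial(0:jmax)^nu)
  lambda^x / (factorial(x)^nu * Z)
}

x  <- 0:30
px <- dcompois(x, lambda = 3, nu = 0.5)  # nu < 1: over-dispersed vs. Poisson
sum(px)                                  # ~1: sanity check on the truncation
mu <- sum(x * px); mu                    # numerical mean
sum((x - mu)^2 * px)                     # numerical variance (> mean here)
```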
This chapter is an overview summarizing relevant established and well-studied distributions for count data that motivate consideration of the Conway–Maxwell–Poisson distribution. Each of the discussed models provides improved flexibility and computational ability for analyzing count data, yet their associated restrictions help readers to appreciate the need for and usefulness of the Conway–Maxwell–Poisson distribution, a need that has resulted in an explosion of research relating to this model. For completeness, each of these sections discusses the relevant R packages and their functionality, serving as a starting point for discussions throughout subsequent chapters. Along with the R discussion, illustrative examples aid readers in understanding distribution qualities and related statistical computational output. This background provides insights regarding the real implications of apparent data dispersion in count data models and the need to properly address it.
A multivariate Poisson distribution is a natural choice for modeling count data stemming from correlated random variables; however, it is limited by the underlying univariate model assumption that the data are equi-dispersed. Alternative models include a multivariate negative binomial and a multivariate generalized Poisson distribution, which themselves suffer from analogous limitations as described in Chapter 1. While the aforementioned distributions motivate the need to instead consider a multivariate analog of the univariate COM–Poisson, such model development varies in order to take into account (or results in) certain distributional qualities. This chapter summarizes such efforts where, for each approach, readers will first learn about any bivariate COM–Poisson distribution formulations, followed by any multivariate analogs. Accordingly, because these models are multidimensional generalizations of the univariate COM–Poisson, they each contain their analogous forms of the Poisson, Bernoulli, and geometric distributions as special cases. The methods discussed in this chapter are the trivariate reduction, compounding, Sarmanov family of distributions, and copulas.
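To illustrate the first construction, here is a hedged base-R sketch of trivariate reduction with COM–Poisson components: draw three independent counts and add a shared one to each of the other two, which induces positive correlation. Unlike the Poisson case, a sum of independent COM–Poisson variables is not COM–Poisson in general, so this illustrates only the generic construction, not the specific bivariate distribution developed in the literature; all parameter values are hypothetical.

```r
# Sample from a COM-Poisson pmf by sampling from the truncated distribution.
rcompois <- function(n, lambda, nu, jmax = 100) {
  p <- lambda^(0:jmax) / factorial(0:jmax)^nu
  sample(0:jmax, n, replace = TRUE, prob = p / sum(p))
}

set.seed(1)
n  <- 5000
y0 <- rcompois(n, 1.0, 1.2)  # shared component
y1 <- rcompois(n, 2.0, 0.9)
y2 <- rcompois(n, 1.5, 1.1)

# Trivariate reduction: X1 and X2 share the common count Y0.
x1 <- y1 + y0
x2 <- y2 + y0
cor(x1, x2)  # positive, driven entirely by the shared component
```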
While the Poisson model motivated much of the classical control chart theory for count data, several works note the constraining equi-dispersion assumption. Dispersion must be addressed because over-dispersed data can produce false out-of-control detections when using Poisson limits, while under-dispersed data will produce Poisson limits that are too broad, resulting in potential false negatives and out-of-control states requiring a longer study period for detection. Section 6.1 introduces the Shewhart COM–Poisson control chart, demonstrating its flexibility in assessing in- or out-of-control status, along with advancements made to this chart. These initial works lead to a wellspring of flexible control chart development motivated by the COM–Poisson distribution. Section 6.2 describes a generalized exponentially weighted moving average control chart, and Section 6.3 describes the cumulative sum charts for monitoring COM–Poisson processes. Meanwhile, Section 6.4 introduces generally weighted moving average charts based on the COM–Poisson, and Section 6.5 presents the Conway–Maxwell–Poisson chart via the progressive mean statistic. Finally, the chapter concludes with discussion.
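As a minimal sketch of the idea behind such a Shewhart chart (not the published chart’s exact construction; the parameters are hypothetical and would in practice be estimated from in-control data), one can compute the COM–Poisson mean and standard deviation numerically and set three-sigma limits:

```r
# Numerical mean and variance of a COM-Poisson(lambda, nu) via the truncated pmf.
cmp_moments <- function(lambda, nu, jmax = 100) {
  x  <- 0:jmax
  p  <- lambda^x / factorial(x)^nu
  p  <- p / sum(p)
  mu <- sum(x * p)
  c(mean = mu, var = sum((x - mu)^2 * p))
}

m   <- cmp_moments(lambda = 4, nu = 0.7)       # hypothetical in-control model
ucl <- m["mean"] + 3 * sqrt(m["var"])          # upper control limit
lcl <- max(0, m["mean"] - 3 * sqrt(m["var"]))  # lower limit, floored at zero

counts <- c(6, 9, 5, 8, 23, 7)                 # hypothetical monitored counts
which(counts > ucl | counts < lcl)             # flags observation 5
```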
This chapter introduces the Conway–Maxwell–Poisson regression model, along with adaptations of the model to account for zero-inflation, censoring, and data clustering. Section 5.1 motivates the consideration and development of the various COM–Poisson regressions. Section 5.2 introduces the regression model and discusses related issues including parameter estimation, hypothesis testing, and statistical computing in R. Section 5.3 advances that work to address excess zeroes, while Section 5.4 describes COM–Poisson models that incorporate repeated measures and longitudinal studies. Section 5.5 focuses attention on the R statistical packages and functionality associated with regression analysis that accommodates excess zeros and/or clustered data as described in the two previous sections. Section 5.6 considers a general additive model based on the COM–Poisson. Finally, Section 5.7 informs readers of other statistical computing software that is also available for conducting COM–Poisson regression, discussing its associated functionality. The chapter concludes with discussion.
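As a hedged sketch of the core idea (a direct maximum-likelihood fit with a log link on λ and a common ν, following the general form of such regressions; the simulated data are hypothetical, and the packages discussed in the chapter implement refinements of this):

```r
# Negative log-likelihood of a COM-Poisson regression:
# log lambda_i = x_i' beta, with a common nu across observations.
cmp_negloglik <- function(par, y, X, jmax = 100) {
  beta   <- par[-length(par)]
  nu     <- exp(par[length(par)])    # log link keeps nu positive
  lambda <- exp(X %*% beta)
  logZ   <- sapply(lambda,
                   function(l) log(sum(l^(0:jmax) / factorial(0:jmax)^nu)))
  -sum(y * log(lambda) - nu * lfactorial(y) - logZ)
}

set.seed(2)
n <- 200
X <- cbind(1, runif(n))              # intercept plus one covariate
y <- rpois(n, exp(X %*% c(0.5, 1)))  # Poisson data, so the true nu is 1

fit <- optim(c(0, 0, 0), cmp_negloglik, y = y, X = X)
fit$par  # estimates of beta0, beta1, log(nu); log(nu) should be near 0
```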