
Mapping the Political Contours of the Regulatory State: Dynamic Estimates of Agency Ideal Points

Published online by Cambridge University Press:  17 December 2025

ALEX ACS*
Affiliation:
The Ohio State University, United States
*
Alex Acs, Associate Professor, Department of Political Science, The Ohio State University, United States, acs.1@osu.edu.

Abstract

This article introduces a novel empirical method for estimating the ideological orientations of U.S. regulatory agencies across different presidential administrations. Employing a measurement model based on item response theory and analyzing data on planned regulations from the Unified Agenda and the president’s discretionary review of those regulations, as implemented by the Office of Information and Regulatory Affairs, the study provides dynamic estimates of agency ideal points from the Clinton through the Trump administrations. The model uses NOMINATE ideal points of presidents to link the estimated agency ideal points to legislative ideal points. The resulting estimates correlate positively with existing measures of agency ideology, highlight controversial regulators, and demonstrate that agency ideologies shift over time due to emerging issues that divide the parties. The study also finds that agencies located ideologically closer to the president are more productive, as evidenced by their regulatory output.

Information

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of American Political Science Association

INTRODUCTION

The centrality of regulation to American partisan politics is undeniable. Although there have been periods when partisan conflict over regulation was less visible, such as during the early 1970s when new regulatory statutes often passed with broad bipartisan support, this détente was short-lived. Opposition to the expansion of federal regulatory power grew steadily throughout the 1970s. By the eve of the Reagan administration, the New York Times declared, “The Heyday Is Over” for prominent regulators like the Occupational Safety and Health Administration (OSHA) and the Environmental Protection Agency (EPA).Footnote 1 Reagan’s confrontational stance marked a new era of conflict over the scope of federal regulatory power. He quickly established the regulatory review function at the Office of Information and Regulatory Affairs (OIRA), proposed cuts to regulatory budgets, and appointed officials who were skeptical of past practices. Subsequent conservative movements echoed this approach. Central tenets of Newt Gingrich’s Contract with America involved Congress clamping down on regulatory activity and imposing significant reforms to limit it, including the passage of the Congressional Review Act in 1996. More recently, the so-called War on Coal by the Obama administration and the subsequent counter-offensive by the first Trump administration exemplify yet another skirmish in the regulatory wars. Generally, the Republican Party targets what it perceives as costly regulatory overreach, especially when these costs fall on businesses, and when regulatory regimes intrude into state-level policymaking. Conversely, the Democratic Party is more eager to use the regulatory state to address perceived harms of the free market system.

If this brief narrative is largely correct and regulation is indeed a deeply divisive force in American politics, regulatory agencies themselves are likely imbued with ideological proclivities. Years ago, the Nixon administration held this view of its administrative bête noire in domestic policymaking, the Department of Health, Education, and Welfare (HEW), an amalgamation of liberal Great Society and New Deal era programs. HEW defended these programs and, in doing so, attracted a workforce largely sympathetic to the department’s goals (Aberbach and Rockman Reference Aberbach and Rockman1976; Nathan Reference Nathan1975). As with HEW, it would not be surprising if many other agencies, particularly those with significant regulatory power, also had ideological biases and, consequently, were held to different standards by presidents depending on the president’s party. Compared to Nixon and Ford, the Carter administration, under the leadership of Joseph Califano, a former aide to President Johnson, was largely supportive of HEW’s Great Society mission (Califano Reference Califano1981).

This article introduces a new empirical approach for estimating the ideological orientations of regulatory agencies, identifying whether they are liberal, moderate, conservative, and so on. The result is a novel measurement model based on item response theory (IRT), along with a set of ideal point estimates for regulatory agencies that vary across presidential administrations. The motivation behind this model is similar to other measurement models: to provide a parsimonious description of policymakers by sacrificing nuanced details in favor of singular estimates. Several studies, which will be discussed in detail shortly, have applied various methods to estimate ideal points for federal agencies, following in the tradition of Poole and Rosenthal (Reference Poole and Rosenthal2000), who first estimated legislator ideal points. The agency estimates in the present article can address questions about the relative ideological orientations of regulators over time and across institutions. For example, was the EPA more liberal under the Clinton or Obama administration? How did its liberalism compare to the median Democrat in the Senate?

The measurement model leverages data generated by the OIRA review process. OIRA, an office within the Executive Office of the President, is empowered by an executive order to review planned regulations, or rules, before they are promulgated. During OIRA review, regulations are scrutinized by OIRA career staff, who are primarily economists with expertise in cost–benefit analysis and regulatory impact analysis. However, when political questions arise, the review process is influenced by a complex interplay of pressures from actors within the White House, interest group allies, members of Congress, and other stakeholders. As a result, controversial rules can be delayed, modified, or effectively vetoed. The OIRA review process is a tool of the presidency, so any evidence that OIRA is responsive to the president, or to groups within the president’s political coalition, as Haeder and Yackee (Reference Haeder and Yackee2015) discover, is by design. The executive order that initiated the modern OIRA review process, EO 12866, states that federal agencies “are responsible for developing regulations and assuring that the regulations are consistent with … the President’s priorities.”

A key reason why the OIRA review process is revealing about the ideological proclivities, or biases, of federal agencies is that not all regulations are reviewed. Since Clinton issued EO 12866 in 1993, only “significant” regulatory actions are subject to review. However, the definition of what constitutes a significant regulation is largely subjective. The executive order specifies criteria for significance, such as overly costly regulations or those presenting coordination challenges with other agencies. In practice, significance often boils down to whether the regulation raises “novel legal or policy issues” relevant to “the President’s priorities.” That’s bureaucratese for a process where “OIRA reviews pretty much anything it wants to review,” as one former EPA official put it (Heinzerling Reference Heinzerling2014, 349). Consequently, the intensity of the review process can vary significantly between agencies, often depending on the president’s party affiliation.

An agency’s regulatory agenda is listed in the Unified Agenda of Federal Regulatory Activity (henceforth, the Unified Agenda), where agencies detail their planned regulatory actions prior to OIRA scrutiny and indicate which regulations are significant. OIRA examines the Unified Agenda for rules to consider, but it is not required to accept an agency’s claim that a regulation is significant without question. EO 12866 anticipates possible agency errors and grants OIRA discretion to review the rules it wants: “The Administrator of OIRA may waive review of any planned regulatory action designated by the agency as significant.” Conversely, a regulatory action that an agency lists as not significant can still be reviewed by OIRA. This discretion in the review process leads to obvious strategic behavior. Agencies inclined to disagree with “the President’s priorities” might label their regulations as nonsignificant. However, OIRA and the president are often aware of which agencies are likely to do this, so they have an incentive to scrutinize nonsignificant regulations more closely than they might with agencies aligned with the president’s priorities.

The core of the measurement model is the OIRA review decision—whether to review a specific regulation—conditional on whether the regulation was listed as significant in the Unified Agenda, along with several other control variables. Technically, the measurement model is a variant of the IRT model used to measure ideal points in legislative bodies, modified to fit the regulatory process. Each observation is a regulation in the Unified Agenda. The outcome variable is a binary indicator, with zeros and ones denoting whether a regulation has been reviewed by OIRA. The right side of the equation includes parameters for the ideal points of the presidents and agencies, with the distance between these two actors enclosed in a quadratic function. The model also includes (i) an intercept for each agency that captures the degree to which its regulations are reviewed by all presidents; (ii) a control for whether the regulation was designated as significant by the agency; and (iii) the same significance control included as an interactive effect with the spatial distance. The latter allows for the possibility that significance affects the review decision differently depending on the spatial distance between the president and the agency. For example, given OIRA’s finite resources, OIRA might avoid reviewing some significant regulations from allied agencies, whereas it would hesitate to do the same with an adversarial agency.
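The specification just described can be sketched in code. The following is an illustrative parameterization, not the article’s actual estimation code: it assumes a logistic link, and all coefficient names (b_sig, b_dist, b_interact) are labels invented here for exposition.

```python
import math

def review_probability(agency_intercept, president_x, agency_x,
                       significant, b_sig, b_dist, b_interact):
    """Illustrative linear predictor for Pr(OIRA reviews a rule).

    The president-agency distance enters as a quadratic term, both
    on its own and interacted with the significance designation,
    mirroring components (i)-(iii) described in the text.
    """
    distance_sq = (president_x - agency_x) ** 2
    eta = (agency_intercept                      # (i) agency intercept
           + b_sig * significant                 # (ii) significance control
           + (b_dist + b_interact * significant) # (iii) interaction
           * distance_sq)
    return 1.0 / (1.0 + math.exp(-eta))  # logistic link (an assumption)

# A nonsignificant rule from an ideologically distant agency vs.
# the same rule from an agency near the president's ideal point:
p_far = review_probability(-1.0, 0.8, -0.6, 0,
                           b_sig=2.0, b_dist=1.5, b_interact=-0.5)
p_near = review_probability(-1.0, 0.8, 0.7, 0,
                            b_sig=2.0, b_dist=1.5, b_interact=-0.5)
```

With a positive distance coefficient, the distant agency’s rule draws a higher review probability, which is the intuition the model exploits to recover ideal points.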

To identify the model, the ideal points for the presidents are specified in advance using NOMINATE (Lewis et al. Reference Lewis, Poole, Rosenthal, Boche, Rudkin and Sonnet2021). NOMINATE is a legislative ideal point model that uses nonunanimous roll call votes, or public votes, to estimate the locations of senators and House members. Although presidents do not vote, they are included in the model when they take a public position on pending legislation. These “votes” are used to treat presidents as pseudo lawmakers. Here, presidential ideal points not only identify the agency ideal points but also place those ideal points on the same ideological space as presidents and members of Congress. As a result, agencies like the EPA can, for example, be compared to the median Democrat in Congress.

The model is estimated over four recent presidential administrations, from Clinton through Trump, which yields two Democratic and two Republican presidents. It effectively compares how a Democratic president reviews an agency’s regulatory agenda versus how a Republican president reviews it. This comparison controls for the fact that some agencies’ agendas are reviewed more frequently by all administrations, and also accounts for the proportion of significant regulations in each agency’s agenda. The model is particularly sensitive to instances when OIRA reviews a regulation that is not listed by the agency as significant, or conversely, when OIRA declines to review a regulation listed as significant. These anomalies—one potentially based on suspicion of the agency’s regulatory goals and the other on acceptance of them—can have a substantial impact on the estimated agency ideal points.

So far, this describes a static version of the model, where agency ideal points are assumed to be fixed over time. Although useful as a benchmark model, the assumption of static agency ideal points is quite strong, as there are several reasons why the biases and proclivities of agencies might change over time. Agency missions can evolve, sometimes gradually and other times abruptly, such as following new legislation or a reorganization. This recently happened to the Minerals Management Service (MMS) after its perceived laxity in the wake of the BP oil spill. The agency was reorganized and rebranded as the Bureau of Ocean Energy Management (BOEM) early in the Obama administration. Allowing agency ideal points to shift from one administration to the next, as the model does, reveals how MMS (now BOEM) has grown more moderate, or less conservative, since the reorganization.

Agency ideology can also be influenced by changes in personnel. Presidents who aggressively politicize agencies by appointing individuals who share their ideological views may feel comfortable loosening control—that is, reducing OIRA’s role in reviewing an agency’s regulatory agenda. In the model, if OIRA reduced its scrutiny in this way, the agency’s ideal point estimate would shift closer to the president’s. This appears to have been the case for the Trump EPA under Administrator Scott Pruitt, as the EPA shifted strongly in the conservative direction—toward Trump’s ideal point—during Pruitt’s tenure. Pruitt, a former Oklahoma attorney general, was famously loyal to Trump’s environmental agenda, particularly in rolling back federal air pollution and climate change standards, policies that Pruitt had been spearheading conservative opposition to for years, prior to joining the EPA.

The remainder of the article is organized as follows. First, the article reviews the relevant literature and describes the data used in the analysis. This is followed by further detail on the estimation strategy used to derive the agency ideal point estimates. Next comes the presentation of the results and a validation of those results, demonstrating their positive correlation with several existing measures of agency ideology. The article then interprets the results, discussing both why some agencies have significantly changed in their estimated ideology over time and the substantial increase in regulatory productivity that occurs when agencies shift ideologically closer to the president.

CONCEPTUALIZING AND MEASURING AGENCY IDEOLOGY

This article contributes to the growing body of literature that seeks to uncover the latent preferences of bureaucratic agencies within the U.S. executive branch. A central motivation behind these studies is the idea that measuring agency ideology is essential to understanding the scope of the principal-agent problem in bureaucratic politics. If agencies lack stable ideological proclivities of their own, their policymaking may simply mirror that of their political principals, thus dissolving the core tension at the heart of the principal-agent problem.

Previously, researchers have utilized a variety of methods to gauge agency ideology. For instance, survey responses from bureaucrats have been a popular source, as seen in studies by Clinton et al. (Reference Clinton, Bertelli, Grose, Lewis and Nixon2012) and Richardson, Clinton, and Lewis (Reference Richardson, Clinton and Lewis2018), who ask federal executives how they perceive the ideology of other federal agencies. Other researchers have aggregated individual-level data on bureaucrats to infer their agency’s ideological leanings, such as campaign contributions (Bonica, Chen, and Johnson Reference Bonica, Chen and Johnson2015; Chen and Johnson Reference Chen and Johnson2015), voter registration files (Spenkuch, Teso, and Xu Reference Spenkuch, Teso and Xu2023), and public statements during congressional hearings (Bertelli and Grose Reference Bertelli and Grose2011). Clinton and Lewis (Reference Clinton and Lewis2008) utilized expert surveys to deduce an agency’s ideological stance, offering a direct but subjective measure of agency ideology. Nixon (Reference Nixon2004) took a novel approach by analyzing cases where an agency head also served in Congress to determine the agency’s ideology. Acs (Reference Acs2020) used a method similar to the current study, albeit with ideal point estimates that are static and limited to two administrations, failing to capture variations across different administrative contexts.

Compared to most existing studies of agency ideology, the approach in this article has several notable distinctions. First, it concentrates on regulation, an important yet arguably understudied domain of federal policymaking. Second, the estimates are grounded in the actual policymaking activities of bureaucrats, not their survey responses, voter registration, campaign contributions, or other peripheral activities. This approach is more similar to the canonical legislative ideal point model, as first developed by Poole and Rosenthal (Reference Poole and Rosenthal2000), which infers the spatial or ideological positions of legislators from their voting records. As such, the method in this article not only provides a more direct measure of agency ideal points, based on policymaking decisions, but also allows for dynamic updating as new administrative data from OIRA become available for future administrations. Finally, this approach estimates ideal points for a comparatively large number of agencies over an extended period of time, reaching back to the Clinton administration.

Despite the growing literature on measuring agency ideal points, important questions remain about what these estimates actually capture. Unlike legislators, agencies are complex organizations whose policymaking is typically shaped by the interaction of career civil servants, political appointees, and the statutes and legal constraints that structure their authority. In any given case, which of these elements contributes most to the observed ideology of the agency? This is a difficult knot to untangle. Nonetheless, the simple premise of this article is that the OIRA review process—specifically, the decision to review a given regulation—reflects a composite of these influences, with their relative weights likely shifting over time and across administrations.

For example, the influence of political appointees may grow stronger in one administration relative to the influence of civil servants or the agency’s foundational statutes. If appointees were to exert near-complete control—such that the agency became a perfect conduit of presidential preferences and policymaking output was fully aligned with those preferences—then the OIRA review process would largely become redundant. In such a case, one would expect patterns of under-reviewing, which would shift the estimated ideal point of the agency closer to the president. In most instances, however, political appointees are unlikely to be perfect agents of the president, and even less likely to fully control career staff or fundamentally redirect the agency’s core policy agenda. The OIRA review process reflects these institutional barriers, as illustrated by the observed patterns of over- and under-review that are explored in this article.Footnote 2

DATA

The agency ideal points are estimated using two publicly available datasets. One is the Unified Agenda, a semi-annual catalog of pending and completed regulatory activities for federal agencies. Agencies are required by the executive order to post their regulatory plans to the Unified Agenda, as articulated in EO 12866: “Each agency shall prepare an agenda of all regulations under development or review, at a time and in a manner specified by the Administrator of OIRA.” OIRA reviews the Unified Agenda to determine which regulations to select for further scrutiny. To aid in this selection, the executive order requires agencies to include specific information for each regulation in the Unified Agenda: “at a minimum, a regulation identifier number, a brief summary of the action, the legal authority for the action, any legal deadline, and the name and telephone number of a knowledgeable agency official.”

The Unified Agenda aims to provide transparency in the regulatory process, allowing OIRA, political overseers in the White House and Congress, and other stakeholders to track forthcoming regulations. Prior to review, however, the details of each regulation remain unknown. It is only when OIRA selects a regulation for review that the agency must submit a draft regulation to OIRA, along with any requested analysis, such as a cost–benefit analysis or a statement on regulatory flexibility for small businesses.

Once the review is underway, OIRA can share these materials with interested parties within the president’s circle, typically beginning with a “heads-up” memo to alert White House stakeholders. This step is crucial because OIRA review is not merely an assessment by neutral bureaucrats; rather, it is a political review conducted within the president’s Office of Management and Budget (OMB), where OIRA resides, as part of the larger Executive Office of the President and the White House’s political staff.

The second dataset is the OIRA docket, which lists the regulations reviewed. For each regulation, the docket indicates when the review started, when it ended, whether it was for a proposed or final rule, and the outcome of the review (e.g., “accepted without change” or “returned”). As mentioned, Clinton’s EO 12866 dramatically altered the OIRA docket. Prior to EO 12866, OIRA reviewed virtually every regulation that agencies promulgated, or at least every regulation was entered into OIRA’s docket. Facing resource constraints, the OIRA offices of the Reagan and George H.W. Bush administrations focused their attention on politically significant regulations, often ignoring rules with little political consequence. To improve the efficiency of the OIRA review process, EO 12866 introduced the practice of selective review, reducing the docket to only those regulatory actions deemed significant by OIRA. As noted, however, few clear criteria exist to define what makes a regulatory action significant.

The final dataset was created by merging the Unified Agenda and the OIRA docket using the eight-digit identifier for each regulation, known as the regulation identifier number (the RIN).

With the merged dataset, a binary review variable was created for each RIN to indicate whether the associated rule was reviewed at least once—that is, whether it appears in the OIRA docket. The analysis focuses on rules with a planned Notice of Proposed Rulemaking (NPRM), the stage in the rulemaking process when agencies publish a proposed regulation in the Federal Register and solicit public comments. Whether an agency planned to publish an NPRM is determined by entries in the Unified Agenda.
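The merge and the construction of the binary review variable can be sketched as follows. This is an illustrative reconstruction, not the article’s actual code; the column names and toy RIN values are assumptions, and real RINs begin with a four-digit agency identifier as described later in the text.

```python
import pandas as pd

# Toy stand-ins for the two public datasets; schemas are illustrative.
unified_agenda = pd.DataFrame({
    "rin": ["2060-AA01", "2060-AA02", "0581-AB10"],
    "agency_code": ["2060", "2060", "0581"],
})
oira_docket = pd.DataFrame({
    "rin": ["2060-AA01"],
    "review_start": ["1999-03-02"],
})

# A rule may appear in the docket more than once; the indicator only
# records whether it was reviewed at least once.
docket_rins = oira_docket.drop_duplicates("rin")

# Left-merge on the RIN and flag rules that appear in the docket.
merged = unified_agenda.merge(docket_rins, on="rin",
                              how="left", indicator=True)
merged["reviewed"] = (merged["_merge"] == "both").astype(int)
```

Here only the first rule was reviewed, so `reviewed` is 1 for it and 0 for the other two agenda entries.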

When a rule is reviewed, the vast majority of first reviews (over 90%) occur before the NPRM is published. It is primarily during this first review that OIRA’s decision depends most heavily on the agency’s reputation, a point revisited in the next section because it is central to the theoretical claim that the review decision sheds light on the ideological distance between the president and the relevant agency. OIRA also reviews final rules, but in most cases, the rule has already been reviewed prior to the NPRM.Footnote 3 In addition, some agencies issue prerule documents that OIRA may review—such as an ANPRM, or Advance Notice of Proposed Rulemaking—before the NPRM, typically to solicit input on specific elements of the proposal. In a small number of cases, OIRA’s first review is of an ANPRM.

Several types of entries in the Unified Agenda were excluded from the dataset to prevent overcounting the pool of potentially reviewable rules. For each rule, the administration during which the relevant agency first planned to publish its NPRM was identified. Planned NPRM dates often appear in the Unified Agenda six months to a year before the NPRM is published, as EO 12866 requires an entry for “all regulations under development.” Planned NPRMs were excluded if they were withdrawn or abandoned by the agency prior to OIRA review or publication of the NPRM. Rules were also excluded if they were reviewed in one administration but initially planned by a prior administration. Such inherited rules are interesting, but they are likely to be viewed with heightened suspicion by the new administration and would, all else equal, be reviewed at a higher rate.

The dataset includes 69 administrative units—that is, regulatory agencies—that were active during the Clinton, George W. Bush, Obama, and Trump administrations. Each unit is identified by the numeric identifier found in the first four digits of each RIN. In some cases, the unit is an office within a larger agency, such as the EPA’s Office of Air or its Office of Water. In other cases, the agency is an office within a department, such as the Agricultural Marketing Service within the Department of Agriculture. Sometimes, the unit refers to an entire department, as with the Department of Veterans Affairs.Footnote 4 For better estimation of the measurement model, regulatory agencies that did not issue a regulation in each administration or issued fewer than 40 regulations during the entire period (averaging less than 10 per administration) were excluded. Supplementary Appendix A lists all 69 regulatory agencies used in the measurement model.Footnote 5

For each rule, a binary indicator variable was created to show whether the agency listed the rule as significant in the Unified Agenda. This indicator collapses the five-category “priority” variable for each rule into two categories. The original variable categories are (i) economically significant, (ii) significant, (iii) substantive but not significant, (iv) routine, and (v) informational. Following the EO 12866 requirement that OIRA should review “significant” regulatory actions, the first two categories were coded as significant (significant = 1), while the latter three categories were coded as nonsignificant (significant = 0). Because there is little objective difference between the second and third categories—a “significant” regulatory action versus a “substantive but not significant” one—OIRA still reviews “substantive” rules, particularly when it is suspicious of the agency.
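The recode of the five-category priority field into the binary indicator is mechanical and can be sketched as below. The exact string labels in the published Unified Agenda files may differ from the article’s shorthand used here; treat them as illustrative.

```python
# The two categories coded as significant, per the text; labels are
# the article's shorthand, not necessarily the raw file values.
SIGNIFICANT = {"economically significant", "significant"}

def code_significant(priority: str) -> int:
    """Return 1 for the two significant categories, 0 otherwise."""
    return int(priority.strip().lower() in SIGNIFICANT)

coded = [code_significant(p) for p in (
    "Economically Significant",
    "Significant",
    "Substantive but not significant",
    "Routine",
    "Informational",
)]
```

Note that exact set membership matters: “substantive but not significant” contains the word “significant” but is coded 0.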

Using the review indicator variable described above, a review rate can be calculated for each agency, yielding the share of an agency’s regulatory agenda that OIRA reviewed. Supplementary Appendix A displays the review rates for each agency and administration. The table also includes columns for the proportion of rules listed as significant and the total number of proposed rules listed in the Unified Agenda. If OIRA reviewed all the rules listed as significant, the review rate and the proportion of rules listed as significant, or significance rate, would be identical. When the review rate exceeds the significance rate, it suggests, all else equal, that the agency is being over-reviewed. Conversely, when the review rate falls below the significance rate, it suggests that the agency is being under-reviewed.
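The over- and under-review comparison reduces to two group means per agency. The following sketch, with invented toy data and column names, shows the calculation; a positive gap corresponds to over-review and a negative gap to under-review in the sense described above.

```python
import pandas as pd

# Illustrative rule-level data for two hypothetical agencies;
# "reviewed" and "significant" are the binary indicators from the text.
rules = pd.DataFrame({
    "agency": ["EPA-Air"] * 4 + ["AMS"] * 4,
    "reviewed":    [1, 1, 1, 0, 0, 0, 1, 0],
    "significant": [1, 1, 0, 0, 1, 0, 1, 0],
})

rates = rules.groupby("agency")[["reviewed", "significant"]].mean()
rates.columns = ["review_rate", "significance_rate"]
# gap > 0: over-reviewed; gap < 0: under-reviewed (all else equal).
rates["gap"] = rates["review_rate"] - rates["significance_rate"]
```

In this toy data the hypothetical EPA-Air office is over-reviewed (review rate 0.75 against a significance rate of 0.50) while AMS is under-reviewed; in the real data the same calculation is done within each administration.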

The Unified Agenda is not the only source of rulemaking activity, as all proposed and final rules are published in the Federal Register. However, for the purposes of the measurement model, the Unified Agenda is a better source of rulemaking activity, despite criticisms of its incompleteness.Footnote 6 For one, the Federal Register does not include planned NPRMs that were ultimately not published, so it lacks information about the many rules that OIRA reviews at the proposed rule stage and that are subsequently withdrawn or abandoned before publication as proposed rules. Moreover, even if the Unified Agenda does not provide a complete accounting of all rules, it arguably captures the rules that OIRA is aware of, as OIRA oversees the development of the agenda (Copeland Reference Copeland2015). In the data collected for this article, all rules reviewed by OIRA were also present in the Unified Agenda, a fact that was easily confirmed, as the rules are identified in both datasets by their RINs. This correspondence between reviewed rules and the Unified Agenda is expected, as the Unified Agenda serves as a written record that OIRA uses to identify rules for review before their publication as NPRMs. Thus, if a regulation were not published in the Unified Agenda, it would potentially escape OIRA’s attention. Although such an evasive maneuver is intriguing and worthy of further study, it is not an issue for the measurement model; the focus here is on OIRA’s review decisions based on the planned regulations that OIRA is aware of, not on those that might be concealed.

THEORETICAL FRAMEWORK AND STATISTICAL MODEL

The measurement model is based on a simple theory of the OIRA review process, presented here informally to facilitate interpretation. The theory revolves around a two-player game involving the President and an Agency. Here, the Agency represents the regulator, including its appointees, and the President represents the centralized regulatory oversight process led by OIRA. The President is aware when the Agency is developing a regulation but knows only the Agency’s general reputation as a liberal or conservative regulator, represented by the Agency’s ideal point, rather than the specific content of the regulation. (As mentioned, OIRA may, in practice, have some limited knowledge of the regulation’s content from the summary statement included in each Unified Agenda entry.)

The President can choose to incur a cost to review the regulation and thereby learn its content. If the President reviews the regulation, there are two ways to model the President’s influence over it: (i) the President may revise the regulation to a preferred policy, presumably aligned with the President’s ideal point, or (ii) the President is limited to accepting or rejecting the regulation. Each assumption reflects a different level of presidential power. The second assumption suggests a more constrained presidency, acknowledging that presidents often lack the expertise to develop regulations independently. Consequently, the OIRA review process cannot simply transform a liberal regulation into a conservative one, or vice versa; instead, OIRA serves primarily as a tool to block unwanted regulations. The choice between these two assumptions makes little difference to the theoretical results, so the analysis that follows focuses on the simpler case of OIRA as a veto player.

The players’ strategies consist of the following choices. The Agency decides whether to regulate and, if so, whether to make the regulation acceptable to the President—ensuring that, if reviewed, the President would approve it—or to align the regulation with the Agency’s own preferred policy or ideal point. This decision is private information and not available to the President prior to review. The President, upon observing a planned regulation, decides whether to review it. If the President reviews the regulation, the President must then decide whether to accept or reject it. If the regulation is rejected, a status quo policy prevails, and the President imposes a penalty on the Agency. In practice, this penalty could take the form of personnel or budgetary changes or a reduction in discretion or trust in future regulatory actions. Regardless of its specific form, the penalty creates an incentive for the Agency to ensure the policy is acceptable from the outset.

When the players have different ideological preferences, they find themselves in a zero-sum game that mirrors the strategic dilemma in game theory known as “matching pennies” (Gibbons 1997). If the Agency believes the President will review the regulation, it has an incentive to propose a regulation that aligns with the President’s preferred policy to avoid the penalty. Conversely, if the Agency believes the President will not review the regulation, it will propose its preferred policy based on its ideal point. The President’s decision to review depends on the Agency’s actions. For example, if the President believes the Agency’s proposal is acceptable, there is no incentive to incur the cost of reviewing it. However, if the President decides not to review, the Agency is incentivized to propose its preferred policy. This cycle, in turn, prompts the President to reconsider and review the regulation. This circular reasoning embodies the essence of the matching pennies game. There is no equilibrium in pure strategies; the cycling between strategies would continue ad infinitum, without reaching a stable conclusion.

To reach an equilibrium, the actors adopt a different approach and choose their strategies probabilistically, incorporating an element of randomness. The President optimizes by reviewing the Agency’s regulation with a probability that makes the Agency indifferent between proposing its own preferred policy or a policy acceptable to the President—that is, a policy the President views more favorably than the status quo. Similarly, the Agency proposes this policy, as opposed to its most preferred policy, with a probability that makes the President indifferent to reviewing the proposal. The greater the difference in policy preferences between the President and the Agency, the higher the President’s review rate must be to make the Agency indifferent. This is because an Agency with divergent preferences has a stronger incentive to propose its own policy rather than one acceptable to the President. Conversely, if the President and the Agency have similar preferences, a lower review probability will suffice to make the Agency indifferent about which policy to pursue.
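The indifference logic can be made concrete with a stylized numerical sketch. The quadratic-loss payoffs, status quo location, and penalty below are illustrative assumptions, not quantities from the article; the sketch simply shows that the equilibrium review probability grows with the squared distance between the two ideal points, echoing the comparative statics described above.

```python
def review_prob(x_a, x_p, status_quo=0.0, penalty=1.0):
    """Mixed-strategy review probability that leaves the Agency
    indifferent between its own ideal policy and one acceptable to
    the President, under assumed quadratic-loss payoffs.

    Acceptable proposal (here, the President's ideal): Agency payoff
    is -(x_a - x_p)**2 whether or not it is reviewed.
    Own proposal: payoff 0 if unreviewed; if reviewed, it is rejected,
    the status quo prevails, and a penalty is imposed, giving
    -(x_a - status_quo)**2 - penalty. Equating expected payoffs and
    solving for the review probability r yields the ratio below.
    """
    r = (x_a - x_p) ** 2 / ((x_a - status_quo) ** 2 + penalty)
    return min(r, 1.0)  # cap at one for extreme distances

near = review_prob(x_a=0.2, x_p=0.0)  # ideologically close agency
far = review_prob(x_a=0.8, x_p=0.0)   # ideologically distant agency
```

As the sketch shows, a more distant agency must be reviewed at a higher rate to remain indifferent, which is the mechanism the measurement model exploits.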

This game-theoretic reasoning implies that the observed OIRA review rate of an agency reflects the divergence in preferences between OIRA—working at the behest of the president—and the agency. As this divergence grows more pronounced, the agency’s incentive to propose its preferred policies, rather than those of the president, increases. To counteract this, OIRA has an incentive to review the agency’s proposals at a higher rate. The implication is a positive correlation between review rates and ideological distance.Footnote 7 This model resembles “police patrol” oversight, as described by McCubbins and Schwartz (1984) in the congressional context and by Acs (2018) in the context of OIRA review. That is, given the president’s finite resources, OIRA reviews are targeted where they are most needed, specifically at agencies most inclined to avoid review, similar to how strategic police patrols focus on areas where the incentives for criminality are strongest.Footnote 8

Statistical Model

OIRA review rates can be analyzed statistically in a model that estimates unobserved agency ideal points. Based on the theoretical discussion above, the probability that OIRA reviews proposal $ i $ depends on the spatial distance between the president and the agency, $ {X}_p-{X}_a $ , where $ {X}_p $ is the ideal point of the president and $ {X}_a $ is the ideal point of the agency. The OIRA review decision also depends on several additional factors that can be modeled:

  1. A binary indicator, $ {S}_i\in \left\{0,1\right\} $ , for whether the agency listed proposal $ i $ as significant in the Unified Agenda;

  2. An intercept for each agency, $ {\alpha}_a, $ which captures unobserved, nonspatial factors that result in different average review rates for agency $ a $ . This parameter could be interpreted as a measure of agency “quality” or professionalization, unrelated to the dimension of ideological conflict; an agency trusted more by all presidents to develop sound regulatory plans would have a lower value of $ {\alpha}_a $ , all else equal, reflecting a lower average review rate for the agency.

Each factor is included additively in a model of OIRA’s binary review decision, $ {Y}_i $ ,

(1) $$ \mathit{\Pr}\left({Y}_i=1\right)=\Phi \left({\alpha}_a+{\beta}_{S_i}\right), $$

where the review probability is modeled as a Bernoulli distribution with the probability given by the cumulative distribution function, $ \Phi \left(\cdot \right) $ , of the standard normal distribution. The parameter $ {\alpha}_a $ captures unobserved, nonspatial factors that influence the OIRA review decision, and $ {\beta}_{S_i} $ represents the effect of rule significance and spatial distance as follows:

(2) $$ {\beta}_{S_i}={\gamma}_{S_i}+{\eta}_{S_i}{\left({X}_p-{X}_{a,t}\right)}^2. $$

Spatial distance is determined by $ {X}_p $ , the ideal point of each president, and $ {X}_{a,t} $ , the agency ideal point in administration $ t $ . The effect of rule significance is reflected in the parameters $ {\gamma}_{S_i} $ and $ {\eta}_{S_i} $ , each of length two, where $ {\gamma}_{S_i} $ is the independent effect of rule significance on the review decision and $ {\eta}_{S_i} $ is the effect when interacted with the distance measure. Since significance is a binary variable, the IRT model estimates $ {\gamma}_0 $ , $ {\gamma}_1 $ , $ {\eta}_0 $ , and $ {\eta}_1 $ . Significance is a strong predictor of OIRA review, so it is not surprising that estimates of $ {\gamma}_1 $ and $ {\eta}_1 $ , the effects of rule significance on the review decision, are both positive and substantially larger than $ {\gamma}_0 $ and $ {\eta}_0 $ , their respective counterparts for nonsignificant rules.
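Equations 1 and 2 can be sketched directly in code. The parameter values below are invented for illustration (the article estimates them in Stan); the sketch shows how significance and squared spatial distance jointly determine the review probability.

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def review_probability(alpha_a, gamma, eta, significant, x_p, x_at):
    """Pr(Y_i = 1) from Equations 1 and 2: an agency intercept plus a
    significance effect that interacts with squared spatial distance.
    gamma and eta are length-2 tuples (index 0: nonsignificant rules,
    index 1: significant rules)."""
    s = 1 if significant else 0
    beta = gamma[s] + eta[s] * (x_p - x_at) ** 2
    return Phi(alpha_a + beta)

# Illustrative (assumed) parameter values: significance and ideological
# distance both raise the probability of OIRA review.
p_sig_far = review_probability(-1.0, (0.1, 1.2), (0.05, 0.6), True, 0.4, -0.6)
p_non_near = review_probability(-1.0, (0.1, 1.2), (0.05, 0.6), False, 0.4, 0.3)
```

A significant rule from a distant agency is reviewed with much higher probability than a nonsignificant rule from a nearby agency, which is the pattern the estimation inverts to recover $ {X}_{a,t} $ .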

The last term in Equation 2, the interaction effect between $ \eta $ and the president-agency spatial distance, requires further elaboration. As mentioned, the model will estimate $ {\eta}_1>{\eta}_0 $ , with $ {\eta}_0 $ being positive but small (i.e., close to zero). This is intuitive, given that rule significance is a strong predictor of review and that the vast majority of nonsignificant rules are not reviewed. The estimates of $ \eta $ are, by definition, average effects and are not specific to any particular agency.

Now consider a hypothetical agency where all significant rules are reviewed, and a sizeable number of nonsignificant rules are also reviewed. This is an agency that is being over-reviewed. To account for this pattern, the model will adjust the estimate of $ {X}_a $ further away from $ {X}_p $ , so that the larger distance term $ {\eta}_0{\left({X}_p-{X}_{a,t}\right)}^2 $ raises the predicted review rate for nonsignificant rules.

Conversely, consider the opposite case: a hypothetical agency where nonsignificant rules are never reviewed, and even some significant rules are not reviewed. This agency is being under-reviewed. To account for this, the model will adjust the estimate of $ {X}_a $ closer to $ {X}_p $ , so that the smaller distance term $ {\eta}_1{\left({X}_p-{X}_{a,t}\right)}^2 $ lowers the predicted review rate for significant rules.

The model incorporates the dynamic nature of regulatory politics by allowing each agency’s ideal point, $ {X}_{a,t} $ , to evolve over time across presidential administrations. These ideal points follow a random walk process where the ideal point in administration $ t $ has the mean of the ideal point from the previous period and a variance of $ {\sigma}^2 $ . Formally, the random walk is specified as follows:

$$ {\displaystyle \begin{array}{l}{X}_{a,1}\sim \mathcal{N}\left(0,1\right),\\ {}{X}_{a,t}\sim \mathcal{N}\left({X}_{a,t-1},{\sigma}^2\right)\quad \mathrm{for}\;t=2,\dots, T.\end{array}} $$

Allowing agency ideal points to vary across administrations reflects the inherently dynamic nature of the regulatory process, shaped by a shifting political landscape as administrations and appointees cycle in and out of office. The random walk can be viewed as a middle ground between estimating ideal points independently during discrete time periods, such as for pairs of administrations (e.g., Clinton–Bush or Obama–Trump), and using a static model that constrains agency positions to remain fixed across each administration.
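The random-walk prior can be simulated in a few lines. The value of $ \sigma $ below is an assumption for illustration; in the article it is an estimated parameter.

```python
import random

def simulate_ideal_points(T=4, sigma=0.3, seed=0):
    """One draw from the random-walk prior over an agency's ideal
    points: X_{a,1} ~ N(0, 1), then X_{a,t} ~ N(X_{a,t-1}, sigma^2).
    sigma is assumed here; the model estimates it from the data."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0)]  # initial ideal point, standard normal
    for _ in range(T - 1):
        xs.append(rng.gauss(xs[-1], sigma))  # drift from previous period
    return xs

# Four administrations (e.g., Clinton through Trump)
trajectory = simulate_ideal_points(T=4)
```

Each simulated trajectory drifts smoothly across administrations, which is exactly the middle ground described above: positions are neither re-estimated from scratch each period nor frozen in place.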

To identify the model, the presidential ideal points need to be anchored. The presidential ideal points used in this analysis are drawn from NOMINATE, which locates each president by treating the administration’s publicly announced positions on congressional roll-call votes as pseudo-votes.Footnote 9 While this approach has limitations—particularly because it relies on announced positions rather than actual voting behavior—it provides a consistent method of inferring the location of presidents over time. Notably, it also results in presidential ideal points that are close to the corresponding party medians in the House of Representatives during the relevant Congresses: Clinton: $ -0.438 $ (House Democrats: $ -0.38 $ ); Bush: $ 0.69 $ (House Republicans: $ 0.38 $ ); Obama: $ -0.36 $ (House Democrats: $ -0.39 $ ); Trump: $ 0.40 $ (House Republicans: $ 0.50 $ ).Footnote 10

In addition to the 69 agencies, a hypothetical benchmark agency was added to the sample. This benchmark exhibits no under- or over-reviewing. Artificial data were generated for the benchmark agency such that it issued 100 proposed rules in total, 25 in each administration—a productivity close to the median agency in the sample. Half of these “rules” were randomly selected to be significant, all of which were indicated as reviewed. As will become clear, this benchmark represents the prototypical moderate agency.
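The benchmark agency’s artificial data can be sketched as follows. The function name and seed are illustrative, not from the article; the construction matches the description above: 100 rules, 25 per administration, half randomly marked significant, with review tracking significance exactly.

```python
import random

def make_benchmark_agency(seed=1):
    """Artificial data for the benchmark agency: 100 proposed rules,
    25 per administration across four administrations. Half are
    randomly marked significant, and review tracks significance
    exactly, so the agency is neither over- nor under-reviewed."""
    rng = random.Random(seed)
    rules = [{"admin": t, "significant": 0, "reviewed": 0}
             for t in range(1, 5) for _ in range(25)]
    for rule in rng.sample(rules, 50):  # half the rules, at random
        rule["significant"] = 1
        rule["reviewed"] = 1            # every significant rule reviewed
    return rules

benchmark = make_benchmark_agency()
```

Because review coincides perfectly with significance, the distance terms in Equation 2 contribute nothing for this agency, which is why it anchors the moderate center of the scale.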

The model was estimated using Hamiltonian Monte Carlo methods in the Stan programming language (Betancourt and Girolami 2015; Stan Development Team 2024). Supplementary Appendix B includes the Stan code used for estimation and details on the distributional assumptions and priors of all model parameters. In total, when including the benchmark agency, the model estimated 355 parameters of interest, including 280 agency ideal points, 70 agency intercepts (each $ {\alpha}_a $ ), 2 independent significance effects ( $ {\gamma}_1 $ and $ {\gamma}_0 $ ), 2 additional interactive significance effects ( $ {\eta}_1 $ and $ {\eta}_0 $ ), and the standard deviation of the random walk ( $ \sigma $ ). The Hamiltonian Monte Carlo sampler was run for 7,000 iterations, of which the first 1,000 were used for warmup and adaptation.Footnote 11 To assess convergence, four chains were run and $ \hat{R} $ statistics were calculated for each parameter. The $ \hat{R} $ statistic is the ratio of the variance between the multiple chains to the average variance within the chains; values of one are consistent with convergence, and values below roughly 1.1 are conventionally treated as acceptable. The maximum $ \hat{R} $ value across all parameters after estimation was 1.00, suggesting that all parameters converged.
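The $ \hat{R} $ diagnostic can be sketched in a few lines. This is the basic (non-split) Gelman–Rubin version, shown for illustration rather than the exact variant Stan computes.

```python
import random
from statistics import mean, variance

def rhat(chains):
    """Basic (non-split) Gelman-Rubin R-hat: a pooled variance
    estimate (within- plus between-chain components) relative to the
    average within-chain variance. Values near one indicate mixing."""
    n = len(chains[0])
    B = n * variance([mean(c) for c in chains])  # between-chain component
    W = mean(variance(c) for c in chains)        # within-chain variance
    return (((n - 1) / n) * W + B / n) ** 0.5 / W ** 0.5

# Four well-mixed chains sampled from the same distribution
rng = random.Random(7)
chains = [[rng.gauss(0.0, 1.0) for _ in range(500)] for _ in range(4)]
```

Well-mixed chains yield values near one; chains stuck in different regions inflate the between-chain component and push $ \hat{R} $ well above one.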

EMPIRICAL ESTIMATES OF AGENCY IDEOLOGY

The point estimates for each agency ideal point are presented in Figure 1, grouped by the Clinton, Bush, Obama, and Trump administrations. Each estimate represents the mean of the posterior distribution for that parameter. The uncertainty associated with each estimate is depicted by 90% Bayesian credible intervals, shown as horizontal line segments extending from each point estimate. These intervals correspond to the 5th and 95th percentiles of the posterior distribution.Footnote 12 For reference, the figure also includes the benchmark agency, indicated by a horizontal dashed line, showing the position of a hypothetical agency that exhibited no over- or under-reviewing.

Figure 1. Dynamic Estimates of Agency Ideal Points with Credible Intervals

Note: Please see Table A1 in the Supplementary Appendix A for full names of all agencies mentioned in this figure.

In each plot, a vertical dashed line at zero marks the ideological center, where many “moderate” agencies are located, such as NASA, the National Park Service, the Patent and Trademark Office, and the Bureau of Economic Analysis. This raises an important question about how to interpret moderate agencies within this framework. Technically, these agencies are reviewed by OIRA at similar rates regardless of the president’s ideal point—possibly due to their relatively noncontroversial regulatory agendas or the limited partisan interest they attract from the White House. Notably, the benchmark agency is located almost exactly at zero, serving as a reference point for ideologically neutral or moderate agencies.

This empirical pattern differs from how moderation is understood in other ideal point models. In the legislative context, ideological moderation typically results from patterns of cross-partisan voting. Moderate lawmakers may include independently minded mavericks as well as conciliatory centrists inclined toward bipartisan compromise. Ideological moderation in agencies, by contrast, requires a different interpretation. Many of these agencies engage in activities that are rarely the subject of partisan debate—such as collecting data, making maps, or approving patents—and are thus reviewed at similar rates by OIRA, including under presidents of varying ideological commitments (Richardson 2024).Footnote 13

More controversial regulators are located at the ideological extremes of the distribution. Well-known examples include the EPA’s Office of Air, the central hub for federal climate change policy, and OSHA, an agency so embroiled in partisan politics that the director of legislation for the National Federation of Independent Business (NFIB) has described OSHA as “the worst four-letter swear word in small business’s vocabulary.”Footnote 14 Not surprisingly, the model estimates OSHA to be more liberal than the most liberal Senate Democrat (see Figure 2). Other agencies on the left side of the ideological spectrum are associated with key liberal interests, such as entitlement programs (e.g., Social Security Administration), racial and ethnic minority issues (e.g., Bureau of Indian Affairs), government healthcare (e.g., Centers for Medicare and Medicaid Services and the Department of Veterans Affairs), foreign aid (e.g., USAID), and immigration court proceedings (e.g., Executive Office for Immigration Review).

Figure 2. Agency Ideal Points Distributed within Departments

Note: Agency ideal points are derived from averages across administrations using the static model. Each department is ordered from top (liberal) to bottom (conservative) based on a weighted average of the agencies’ ideal points in each department, with weights determined by the number of proposals made by each agency. The congressional ideal points were estimated using an IRT voting model on the 112th Congress.

Compared to the liberal agencies, there are fewer conservative agencies—that is, those skewed to the far right of the distribution—and no agency is positioned as far to the right as some liberal agencies are to the left. These general trends reflect the liberal orientation of the federal bureaucracy as found in other studies, such as those reported by Richardson, Clinton, and Lewis (2018). The most conservative agency, averaging across presidential administrations, is the Small Business Administration, which works, in part, to reduce regulatory barriers for small firms, followed by the Food and Drug Administration (FDA), a public health agency with close ties to the industry it regulates, according to surveys of FDA staffers.Footnote 15

The 276 separate agency ideal points can be overwhelming, so simplifying the data through higher levels of aggregation is useful for exposition. One approach is to average the estimates across administrations, reducing the 276 points to 69 agency ideal points. To compute an average ideal point, the model was modified to fix the agency ideal points during the study period, resulting in a single static ideal point for each of the 69 agencies, hereafter referred to as static ideal points.Footnote 16 Further aggregation is achieved by averaging these static estimates within their parent departments. This level of aggregation is depicted in Figure 2, which plots the distribution of agency ideal points across 16 separate departments. This excludes independent agencies not part of any department structure, except for the EPA.Footnote 17 In the figure, departments are sorted from the most liberal at the top to the most conservative at the bottom based on a weighted average of their constituent agencies. Each weight is proportional to the number of regulatory proposals issued, so more active regulators contribute more to the department’s ranking. Two health-related departments, Veterans Affairs (VA) and Health and Human Services (HHS), top the list as the most liberal departments, while the Department of the Interior appears at the bottom as the most conservative. The figure also highlights ideological variation among the bureaus within several departments. For example, although the Interior Department is conservative overall, its Bureau of Indian Affairs is one of the most liberal agencies in the sample. In the figure, several departments are represented by a single vertical dash. For the Department of Energy, this is because it has only one constituent agency in the sample—the Office of Energy Efficiency and Renewable Energy, a left-of-center agency. For the VA and State Department, this is because they have no constituent agencies or offices in the Unified Agenda data. Finally, the figure compares the agency estimates to legislative ideal points from the 112th Senate, which corresponds roughly to the midpoint of the study period. Interestingly, no agency is estimated as more conservative than the chamber’s most conservative member, but several agencies are more liberal than its most liberal member, reflecting the liberal orientation of the federal bureaucracy.Footnote 18
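The proposal-weighted department aggregation can be sketched as follows. The numbers are hypothetical, for illustration only; they are not the article’s estimates.

```python
def department_score(agencies):
    """Proposal-weighted average of agency static ideal points.
    `agencies` is a list of (ideal_point, n_proposals) pairs; weights
    are proportional to each agency's number of regulatory proposals,
    so more active regulators contribute more to the department's
    placement on the liberal-conservative scale."""
    total = sum(n for _, n in agencies)
    return sum(x * n for x, n in agencies) / total

# A hypothetical department with one active liberal bureau and one
# small conservative one lands on the liberal side of the scale.
score = department_score([(-0.8, 90), (0.5, 10)])
```

Weighting by regulatory output keeps a department’s ranking from being driven by small bureaus that rarely regulate.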

By aggregating the ideal points at the department level, they can be compared to the results from several other studies, which typically estimate static ideal points at this aggregate level. Interestingly, the department ranking depicted in Figure 2 largely aligns with the results from other studies, despite each employing different methodologies for estimating ideal points. Figure 3 illustrates the extent to which these studies concur on whether a department ranks in the top half or bottom half of the “most liberal” categorization.Footnote 19 The rankings show several notable similarities. All five studies place the EPA and the departments of Labor and Health in the top liberal half and the departments of Defense and Homeland Security in the bottom conservative half. Complete rankings from each study are presented in Supplementary Appendix D, which also includes a correlation plot showing a positive correlation among all the measures.

Figure 3. Department Ranking Consensus Across Studies (Ranking Liberal to Conservative)

Notes: Each study estimated an ideal point for each of the 16 departments. To construct the figure, the estimates were ranked from liberal to conservative, and the rankings were then compared to evaluate the degree of consensus about which departments ranked in the top, liberal half (top eight), and which ranked in the bottom, conservative half (bottom eight).

For a more granular comparison, without aggregating ideal points at the department level, the static estimates can be directly compared to those in Richardson, Clinton, and Lewis (2018) (hereafter referred to as RCL), an impressive study that estimates agency-level ideal points at the subdepartment level. Of the 69 agencies included in the present study, 61 can be matched to the RCL estimates. Although the correlation between these sets is positive, the strength of the correlation is limited by four outlier agencies: Veterans Affairs (VA), Fish and Wildlife Service (FWS), the Defense Department’s Office of Health Affairs (DOD-OASHA), and the Drug Enforcement Administration (DEA). Excluding these outliers, the correlation is 0.42, whereas including them drops the correlation to 0.11.Footnote 20
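How a handful of outliers can depress an otherwise solid correlation is easy to illustrate with a small sketch. The data below are invented for illustration; they are not the matched RCL estimates.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical agency scores from two measures that broadly agree...
base_x = [-0.6, -0.4, -0.2, 0.0, 0.2, 0.4, 0.6]
base_y = [-0.5, -0.4, -0.1, 0.1, 0.2, 0.3, 0.5]
# ...plus two discordant "outlier" agencies on which they disagree sharply
out_x = base_x + [0.9, -0.8]
out_y = base_y + [-0.9, 0.8]
```

Even two sharply discordant cases out of nine are enough to collapse the correlation, which is why the outlier agencies merit the substantive discussion that follows.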

The presence of such outlier agencies underscores an important distinction about the methodology used in this study. Most other studies fundamentally rely on information about agency employees, as measured by survey responses, voter registration, or campaign contributions. While this approach is generally reasonable, discrepancies can arise between the ideology of agency employees and the actual policy goals of the agency. For instance, conservative employees may advance a liberal policy mission—a paradox evident in several of these outlier agencies. The VA, for example, is generally considered conservative or right-of-center based on its employees’ views.Footnote 21 Even the RCL estimates—which are based on survey respondents’ perceptions of an agency’s policy views—rank the VA as the 130th most conservative agency out of 165, potentially reflecting the fact that the agency has close ties to the military, an institution generally more conservative than the broader federal bureaucracy. However, the VA’s actual policy mission—providing government healthcare to veterans—arguably pushes its policies in a liberal direction. This may explain why, during Republican administrations, OIRA tends to review VA regulations more aggressively than during Democratic administrations. A similar case is found with the Defense Department’s Office of Health Affairs (DOD-OASHA). Although engaged in a liberal policy mission—government-run healthcare—the agency’s placement within the DOD may attract more conservative employees and create a perception of conservatism. RCL ranks the Defense Health Agency, which reports to OASHA, as the 134th most conservative agency out of 165, but the estimates here show OASHA to be consistently liberal, likely due to the nature of its actual policy mission, as perceived by OIRA.

Another distinction between this study and others is its reliance on rulemaking activity. While this approach has its advantages, it also introduces several caveats. For instance, although it is rare in practice, some agencies may choose to bypass the rulemaking process, particularly when advancing their more controversial policies, which could make their ideal points appear more moderate than expected. The Department of Education (ED) serves as a notable example. Historically, ED has favored aggressive federal interventions to advance liberal causes (Califano 1981; Melnick 2018). Other studies consistently find ED’s employees to be liberal, ranking it as the third most liberal department in two studies and fifth in another (see Supplementary Appendix D). However, the model here estimates ED as relatively moderate (ninth out of 16 departments), likely due, in part, to its history of issuing informal guidance documents to schools and colleges instead of promulgating its controversial policies as formal rules—a tactic particularly evident in the Office of Civil Rights’ regulation of school sexual harassment policies, which were effectively binding in practice (Melnick 2018). More recently, increased political pressure has led ED to move its policy changes through the formal rulemaking process, which could impact ED’s estimated ideal point in future administrations, potentially shifting it to the left.Footnote 22 In addition to the deceptive practice of using guidance documents to avoid scrutiny, agencies may also attempt to advance regulatory policy changes by classifying rules as “direct final” or “interim final”—a strategy that may succeed if interest groups do not subsequently file comments that would trigger the normal notice-and-comment rulemaking process (Nou 2012).

Temporal Dynamics of Agency Ideal Points

The study period witnessed significant shifts in the ideal points of many agencies. The first part of this section examines the nature of these shifts, predominantly in the conservative direction, and how they align with contemporaneous factors, such as agency politicization, evolving agency missions, and shifts in the partisan alignment around specific policies. The second part explores an important implication of these shifts, demonstrating that agencies become more productive regulators as they move ideologically toward the president and, conversely, less productive as they shift away.

A prominent trend in the data is the shift toward conservatism. Notably, 40% of agencies were more conservative during the Trump administration than during any previous period, making it the most conservative administration in terms of the average ideal point across all agencies. The primary disruptor of this secular conservative trend is the influence of the president’s party on agency ideology; agencies under Republican administrations tend to be more conservative, on average, than those under Democratic administrations.Footnote 23 This partisan pattern can largely be explained by politicization, where the president’s appointees incrementally shift agency policies closer to those of the president. The broader conservative trend in agency ideology over time remains an intriguing puzzle.

To explore what might explain this conservative trend, it is helpful to focus on the agencies that shifted the most. Among the top 10 agencies that experienced the largest shifts, seven moved in a conservative direction, as detailed in Table 1.Footnote 24 To better understand these conservative shifts, Figure 4 plots the trajectory of each of the seven agencies over the study period, revealing two core patterns: a gradual shift in the conservative direction from the Clinton to the Trump administrations, evident among both liberal and conservative agencies (the first and third panels), and an intensified conservative shift during the Trump administration (the middle panel).

Table 1. Agencies with Largest Shifts in the Conservative Direction

Figure 4. Conservative Trends in Agency Ideal Points

What explains these patterns? In the cases of the Office of Personnel Management (OPM) and the EPA’s Office of Air, which both shifted significantly toward conservatism during the Trump years, politicization is a likely explanation. Under the Trump administration, EPA Administrator Scott Pruitt, known for his resistance to the agency’s established policy agenda, aggressively blocked new regulatory efforts and rolled back Obama-era climate policies. Pruitt’s reputation for conservative, pro-energy policies during his tenure as Oklahoma Attorney General, coupled with his outspoken skepticism of federal climate change policies, likely diminished OIRA’s scrutiny during the Trump administration, as he was viewed as a reliable ally. This decrease in OIRA scrutiny, as reflected in lower review rates, is precisely what drives the model to estimate the EPA as a more conservative agency, more aligned with the Trump administration’s positions. Interestingly, this pattern contrasts with the EPA during the Bush administration, which also experienced tension with the Office of Air. The estimates suggest that Bush relied more on OIRA reviews to manage the EPA, rather than direct politicization. While Bush appointees were conservative, including former Utah governor Mike Leavitt and Christine Todd Whitman, who identified herself as a Rockefeller Republican, arguably none matched the deregulatory zeal of Scott Pruitt.Footnote 25 A similar phenomenon occurred at OPM, a traditionally liberal agency that was heavily politicized during the Trump administration, to the extent that several high-level appointees advocated for its dissolution. The shifting ideal point estimates indicate OPM moved distinctly in the conservative direction during the Trump years, suggesting that Trump’s OIRA had greater trust in the agency than previous Republican administrations.Footnote 26

Another pattern observed among agencies with significant ideological shifts is a gradual turn toward conservatism. This trend occurred both in agencies initially liberal during the Clinton administration, such as the VA and DOD’s Health Affairs Office, and in moderate agencies, including the Treasury’s Financial Crimes Enforcement Network (FINCEN), and DOD’s Defense Acquisition Regulation Council (DARC). One explanation for these shifts is the changing politics surrounding key policy debates. For instance, FINCEN evolved from a moderate position during the Clinton era to a more conservative agency under subsequent presidents, all of whom served after the attacks of 9/11. During this period, FINCEN’s authority expanded under the Patriot Act of 2001 to investigate financial transactions for terrorism links, a development that alarmed civil liberties groups within the Democratic coalition, such as the ACLU. Procurement regulation also became more politically charged during this period, especially following the wars in Iraq and Afghanistan, which may account for DARC shifting in a conservative direction—that is, generating more scrutiny from OIRA under Democratic administrations than previously. A similar pattern occurred for both the VA and DOD’s Health Affairs Office, both of which were initially quite liberal under Clinton but faced more scrutiny under Obama’s OIRA, shifting their estimated ideal points to the right. The VA’s conservative shift during the Trump administration also coincides with the landmark VA MISSION Act of 2018, a law supported by Republicans and segments of the medical establishment—including the American Medical Association—for expanding veterans’ access to private-sector health care.

However, several agencies bucked the trend of becoming more conservative. Notably, the Minerals Management Service (MMS), discussed earlier, was at its most conservative during the Clinton administration and became more liberal during the Obama and Trump administrations. This reverse trend likely reflects major regulatory reforms in offshore energy oversight following the Deepwater Horizon oil spill. In response to the disaster, the Obama administration implemented a suite of changes aimed at reducing industry influence and increasing environmental safeguards, culminating in the creation of the Bureau of Ocean Energy Management (BOEM).Footnote 27 Before and during the fallout from the spill, MMS was frequently accused of being heavily influenced by the conservative-leaning oil and gas industry. This helps explain the agency’s conservative orientation during the Clinton era and the subsequent push by the Obama administration in 2011 to implement stronger oversight and insulate the agency from industry capture.

One way to validate these over-time trends in agency ideology is to examine whether public comments submitted by stakeholders align with the estimated directional shifts. Stakeholders are often explicit in their feedback on proposed rules, clearly indicating their support or opposition. As such, the content and tone of public comments offer a useful source of external validation. For instance, one would expect reliably liberal stakeholders, such as public sector unions like the American Federation of Government Employees (AFGE), to express greater support for OPM rulemakings during the Obama administration than during the Trump administration, when the agency shifted in a more conservative direction, as shown in Figure 4. This approach serves as a form of convergent validity assessment—validating the estimates against independent, behaviorally observable indicators—and complements the earlier validation strategy based on comparison to existing ideal point estimates (McMann et al. 2022).

Three of the agencies mentioned above were selected for closer examination based on the presence of active stakeholder groups with reliably identifiable ideological orientations.Footnote 28 For the EPA Office of Air, comments were analyzed from the U.S. Chamber of Commerce, a consistently pro-business organization that generally favors deregulatory efforts. The Chamber reliably opposed Office of Air regulations during the Obama administration and supported them during the Trump administration. For the Office of Personnel Management (OPM), comments were examined from the American Federation of Government Employees (AFGE) and the Alliance for Federal Employees (AFFE), both of which advocate for strong protections for federal workers. These unions supported Obama-era rules but consistently opposed OPM’s rules during the Trump administration, aligning with the rightward shift in the agency’s ideal point. For the Department of Veterans Affairs (VA), comments were reviewed from the American Medical Association (AMA), a prominent physician organization that has historically supported efforts to privatize core VA services. The AMA was more engaged with, and more supportive of, Trump rulemakings than those under Obama. Across all three agencies, the directional shift in stakeholder support is broadly consistent with the conservative movement estimated in the ideal points, offering external validation of the measurement model.

Regulatory Productivity

Shifts in agency ideology have important policymaking implications: regulatory productivity varies with how closely an agency’s position aligns with the president’s. For example, during the Trump administration, the Office of Personnel Management gained more political leeway to advance its agenda as its ideological position moved closer to the president’s—greater leeway than if it had remained more liberal. More generally, when an agency aligns with the president, regulatory activity tends to increase; when their preferences diverge, productivity diminishes. These findings validate the estimates by showing that the ideal points are correlated with substantively meaningful variation in regulatory activity. The next section considers how these ideal point estimates can be applied more broadly in empirical research.

To empirically test the hypothesis that ideological distance is inversely related to regulatory productivity, the variable President-Agency Distance was constructed. It measures the absolute difference between an agency’s estimated ideal point in administration $ t $ and the president’s ideal point during the same period. This variable then enters as a predictor in the following negative binomial model with mean $ \mu $ and overdispersion parameter $ \theta $ :

(3) $$ {\displaystyle \begin{array}{c}{\mathrm{Regulatory}\ \mathrm{Activity}}_{at}\sim \mathrm{NegBin}\left({\mu}_{at},\theta \right)\\ {}\ln \left({\mu}_{at}\right)={\pi}_a+\beta \cdot {\mathrm{PresidentAgencyDistance}}_{at}\end{array}} $$

where Regulatory Activity quantifies the regulatory output promulgated by agency $ a $ in administration $ t $ and $ {\pi}_a $ represents a fixed effect for each agency. Including a fixed effect for each agency allows the model to isolate changes in an agency’s position relative to the president, rather than comparing across different agencies. The data encompass 276 unique president-agency distance measures, derived from 69 agency estimates across 4 administrations ( $ 69\times 4=276 $ ).

There are several ways to measure regulatory activity, the dependent variable, using information about the page length of final rules published in the Federal Register. Footnote 29 Generally, a greater number of pages indicates greater regulatory activity. Although this measure is not without flaws, it arguably provides a more precise estimate of regulatory activity than simply counting the number of final rules, as individual rules can vary significantly in their scope and societal impact. In this analysis, Regulatory Activity for each administration is quantified as the average page length per year of all final rules published during the years of that administration.Footnote 30
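The construction just described can be sketched in a few lines of pandas (the column names and toy records are hypothetical; note that the denominator here counts years with observed activity, whereas the article divides by the years of the administration):

```python
import pandas as pd

# Hypothetical final-rule records: one row per published final rule
rules = pd.DataFrame({
    "agency": ["EPA", "EPA", "EPA", "OPM", "OPM"],
    "admin":  ["Obama"] * 5,
    "year":   [2009, 2009, 2010, 2009, 2010],
    "pages":  [12, 30, 8, 5, 7],
})

# Total pages per agency-administration, divided by the number of years,
# yielding average page output per year
total_pages = rules.groupby(["agency", "admin"])["pages"].sum()
n_years = rules.groupby(["agency", "admin"])["year"].nunique()
regulatory_activity = total_pages / n_years
```

In this toy example, the EPA published 50 pages of final rules over two years, so its Regulatory Activity for the administration is 25 pages per year.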

It is important to acknowledge that the empirical relationship examined here does not necessarily imply causality. The aim, rather, is to highlight patterns between the distance measure and regulatory activity, in part as a validation exercise to show that the ideal points are correlated in intuitive ways with substantively important patterns of policymaking. Several knotty econometric issues nonetheless persist, and the analysis here cannot thoroughly resolve them. For one, the distance measure is predominantly based on OIRA’s decision to review proposed rules. When the distance is large, it generally indicates that the agency has been over-reviewed by OIRA, all else equal. The dependent variable, on the other hand, is the volume of regulatory activity, measured by the page length of final rules, a fraction of which are reviewed by OIRA. Although the final rule length occurs downstream of the review decision, both variables are derived from the same regulatory process. This overlap could be problematic if OIRA’s review of proposed rules systematically reduces the scope of the rule. For example, if OIRA—staffed largely by economists—has a bias toward reducing economic costs, then a greater distance measure could correlate with less regulatory activity, as measured by shorter or less stringent regulations. What appears to be an effect of ideological distance limiting regulatory activity could instead reflect the consequence of the OIRA review process itself. One alternative is to measure the dependent variable in a way that excludes rules reviewed by OIRA. Although not perfect, this alternative approach helps to ensure that the distance measure is not simply capturing the direct effects of OIRA review. Both options are considered in the following analysis.

Five different regression models were estimated, each measuring the dependent variable in a distinct manner. The first three measures are more inclusive. Model 1 (“Formal”) includes final rules provided that they went through the notice-and-comment process and were not temporary. Model 2 (“Formal and Significant”) builds on Model 1 by further requiring the rule to be listed as significant. Model 3 (“Formal, Significant, and Regulatory”) builds on Model 2 by additionally excluding deregulatory rules.Footnote 31 The final two models exclude rules reviewed by OIRA during the proposed rule stage. Model 4 (“Formal”) includes rules that underwent the notice-and-comment process and were not temporary. Model 5 (“Formal and Regulatory”) further excludes rules identified as deregulatory. (For these last two models, it does not make sense to additionally exclude nonsignificant rules, since such rules constitute the majority of the sample.)Footnote 32
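The five dependent-variable definitions amount to nested boolean filters over the rule-level data. A schematic version (the flag names and toy records are hypothetical):

```python
import pandas as pd

# Hypothetical rule-level flags
rules = pd.DataFrame({
    "notice_comment": [True, True, True, False, True],
    "temporary":      [False, False, False, False, True],
    "significant":    [True, False, True, False, False],
    "deregulatory":   [False, False, True, False, False],
    "oira_reviewed":  [True, False, False, False, False],
})

m1 = rules.notice_comment & ~rules.temporary   # Model 1: Formal
m2 = m1 & rules.significant                    # Model 2: Formal and Significant
m3 = m2 & ~rules.deregulatory                  # Model 3: ... and Regulatory
m4 = m1 & ~rules.oira_reviewed                 # Model 4: Formal, no OIRA review
m5 = m4 & ~rules.deregulatory                  # Model 5: ... and Regulatory
```

Each mask selects the rules whose page counts feed the corresponding model's dependent variable.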

The results from estimating Equation 3 are presented in Table 2. Across all models, a negative association is observed between President-Agency Distance and Regulatory Activity. Specifically, a one-unit increase in the distance between a president and an agency is associated with roughly a 5%–20% decrease in the agency’s regulatory output, depending on the model.

Table 2. Ideological Distance and Regulatory Productivity (Negative Binomial)

Note: *p $ < $ 0.05. Models 4 and 5 exclude rules reviewed by OIRA at the proposed rule stage. Models 3 and 5 also exclude deregulatory rules. Weights equal to the inverse variance of each agency ideal point are included.

What a one-unit shift in the distance measure means, substantively speaking, warrants some explanation. A unit shift corresponds almost exactly to the average change observed across the four administrations—that is, the typical shift an agency exhibits. This average is calculated by taking the mean of the absolute values of the maximum and minimum distance measures for each agency.

Applying Agency Ideal Points in Empirical Research

The above analysis represents just one of many potential empirical applications for the agency ideal points. These estimates could be used to further study the OIRA review process, including the factors that determine the outcome of reviews. Are ideologically distant agencies more likely to have their regulatory proposals rejected or delayed by OIRA, conditional on undergoing review? (Note that the ideal points are based on the decision to review a rule, which occurs upstream of the final outcome of the rule.) The estimates could also inform research on congressional oversight of the regulatory process. Why are some agencies more likely to face scrutiny from Congress—such as through oversight hearings, budget cuts, or use of the Congressional Review Act—depending on which party controls the House or Senate? Although the factors shaping such interventions are complex, ideological distance is likely an important driver. Additionally, researchers might examine judicial decisions on regulatory cases, particularly those involving agencies like the EPA, where significant and controversial regulations often end up in court (Revesz 1997). Since judges’ ideological leanings can be inferred from the partisanship of the appointing president, one can ask whether judges are more likely to side with agencies whose ideological positions align with their own.

As in the previous section, care must be taken when testing theories related to the regulatory process. As discussed, the ideal points are derived from a facet of regulatory policymaking—OIRA’s decision to review a rule—which can introduce issues of endogeneity in some empirical applications, particularly when the outcome variable of interest is directly affected by the review process itself. This concern parallels debates in legislative studies about whether legislator ideal points, which are derived from lawmaking behavior, can validly test theories about the legislative process. Ensuring valid inference requires addressing potential endogeneity, as discussed by McCarty, Poole, and Rosenthal (2001) and Hirsch (2011), among others.

CONCLUSION

This study analyzes the ideological orientation of U.S. regulatory agencies across recent presidential administrations, applying a dynamic measurement model based on item response theory to estimate a unique ideal point for each agency in each administration. By leveraging data from the Unified Agenda and OIRA’s discretionary review process, the model offers new insights into ideological shifts within the regulatory state. Although the estimates align with established measures of agency ideology, the model advances previous work by capturing the dynamic nature of regulatory politics, which is shaped by factors such as politicization, evolving agency missions, and emerging policy debates.

Beyond identifying these ideological shifts and their implications for regulatory productivity, this study presents a flexible framework that can be applied to administrative data in future presidential administrations. In principle, the model could also be adapted to other institutional settings where a centralized, discretionary review process is in place, even beyond the regulatory domain. Although oversight bodies in some European countries partially resemble OIRA—such as Germany’s National Regulatory Control Council and the UK’s Regulatory Policy Committee—these institutions are typically embedded within ministerial or advisory structures and are not subject to direct executive control. For the methodology developed here to be applied in another context, it is also critical that the review decision is discretionary and thus has the potential to be shaped by political considerations, rather than being purely procedural.

Finally, because this study focuses on regulatory policymaking, the estimates presented here should be viewed as a complement to, rather than a replacement for, existing approaches to measuring agency ideology. Although this analysis is based on revealed preferences, not all agency preferences are necessarily expressed through the OIRA review process. As discussed, some agencies may have latent ideological leanings that remain unobserved because their regulatory functions do not attract political scrutiny. For such agencies—where regulatory activity rarely intersects with politics—the model tends to estimate moderate ideal points, for better or worse. These estimates should be interpreted within the context of regulatory policymaking and with appropriate caution, particularly when rulemaking is not central to an agency’s institutional mission.

SUPPLEMENTARY MATERIAL

To view supplementary material for this article, please visit http://doi.org/10.1017/S0003055425101391.

DATA AVAILABILITY STATEMENT

Research documentation and data that support the findings of this study are openly available at the American Political Science Review Dataverse: https://doi.org/10.7910/DVN/C7S2SI.

ACKNOWLEDGMENTS

For their helpful comments and suggestions, I am grateful to Roy Gava and Mark Richardson; the seminar participants at the American Politics Research Group at the University of North Carolina, the 2024 Conference on Regulatory Governance in a Changing World at the University of Pennsylvania Law School, and at MPSA 2025; and the editor of the American Political Science Review together with several anonymous referees. Any errors or omissions remain my own.

CONFLICT OF INTEREST

The author declares no ethical issues or conflicts of interest in this research.

ETHICAL STANDARDS

The author affirms this research did not involve human participants.

Footnotes

Handling editor: Daniel Pemstein.

1 Edwin McDowell, “OSHA, E.P.A.: The Heyday Is Over,” New York Times, January 4, 1981.

2 The conceptual focus here is on agencies changing over time. Presidents change as well—across administrations and, at times, within an administration. The maintained assumption in this paper, as discussed below, is that these presidential changes are reflected, by and large, in the president’s ideal point.

3 In some instances, OIRA does not review a proposed rule but subsequently reviews the final rule. The binary review variable can be constructed either by counting these rules as reviewed ( $ \mathrm{review}=1 $ ) or not ( $ \mathrm{review}=0 $ ). The correlation between the two measures is 0.93, and both yield similar estimates of agency ideal points.

4 If several agencies are involved in a single rulemaking under the same RIN, the analysis focuses on the agency identified by the RIN in the OIRA docket.

5 Several agencies underwent reorganizations and name changes during the study period—for example, the Minerals Management Service was reorganized as the Bureau of Ocean Energy Management, and many agencies were affected by the creation of the Department of Homeland Security. An agency was retained in the analysis if its rulemaking activity maintained the same four-digit identifier. In these cases, the name shown in Supplementary Appendix A is not necessarily the name used throughout the study period. The resulting ideal points should be interpreted cautiously.

6 An analysis by the Congressional Research Service found that among the agencies subject to OIRA review—a list that includes virtually all agencies except the independent regulatory commissions—an estimated 7% of the final rules listed in the Federal Register could not be found as entries in the Unified Agenda. This suggests that some agencies neglected or avoided listing their regulations in the Unified Agenda.

7 A more elaborate model could allow the Agency to send messages to the President, indicating whether a proposal is liberal, conservative, moderate, and so on. Such messaging would fall into the category of signaling games known as “cheap talk,” where informative communication is only possible when the players have similar preferences. When actors have divergent preferences, however, messages typically do not influence the receiver’s strategy, suggesting that strategic messaging on the part of the agency would not fundamentally alter OIRA’s incentive to review proposals from distant agencies at a higher rate.

8 The role of interest groups in shaping the review decision is an area for future study. Although plausible—especially for high-profile rules—group pressure in the review decision is not typically observable. Nonetheless, since interest groups within the president’s coalition likely cast a shadow over many decisions in the regulatory process, it seems likely that OIRA, in serving to advance the president’s priorities, is at least indirectly advancing the interests of favored groups, whether acting directly at their behest or not.

9 These estimates are technically DW-NOMINATE ideal points obtained from voteview.com (Lewis et al. 2021).

10 In a critique, Treier (2010) finds that presidential ideal points can appear artificially extreme in certain Congresses, diverging from expectations based on their record of signing bills. However, using all the pseudo-votes over each president’s tenure helps mitigate this problem, resulting in estimates that tend to place presidents near their party median.

11 To improve convergence, initial values for the model parameters were estimated using variational inference in Stan. In addition, the model employs a non-centered parameterization of the random walk, as described in the Stan User’s Guide (“Non-Centered Parameterization,” https://mc-stan.org/docs/2_18/stan-users-guide/reparameterization-section.html). This reparameterization introduces hierarchical hyperparameters for the ideal points and agency intercepts.

12 Uncertainty in the estimates, reflected in the width of the credible intervals, is shaped by several factors. First, the number of proposed rules used to estimate each agency’s ideal point varies, and there is a negative relationship between the credible interval width and the number of proposed rules an agency issues. Supplementary Appendix C illustrates this association with a scatterplot. Second, uncertainty increases when patterns of over- and under-reviewing occur randomly—that is, not systematically related to changes in the president’s ideal point.

13 In a survey of federal executives, Richardson (2024) fielded the question: “How strongly do Republicans and Democrats in Washington disagree over what [your agency] should do?” He finds that many agencies are not the subject of partisan disagreement, particularly if their missions steer clear of controversial regulatory mandates or redistributive programs. The fact that many agencies are positioned close to the ideological center also echoes an observation by Cameron (2000, 36) about federal policymaking: “there is a little secret about Congress that is never discussed in the legions of textbooks on American government: the vast bulk of legislation produced by that august body is stunningly banal.” For those trained to follow the scent of politics, much of the output from the regulatory process may be understood similarly.

14 Edwin McDowell, “OSHA, E.P.A.: The Heyday Is Over,” New York Times, January 4, 1981.

15 For example, a 2011 survey of over 900 FDA staffers by the Union of Concerned Scientists (“Voices of Scientists at the FDA: Measuring Progress on Scientific Integrity,” published March 8, 2012) found that 40% of respondents believed business influence on regulatory decisions was too high. The survey also revealed that 25% of respondents had previously worked in the industry they now regulated at the FDA.

16 To estimate static ideal points, Equation 2 is modified so that $ {X}_a $ is no longer indexed by $ t $ : $ {\beta}_{S_i}={\gamma}_{S_i}+{\eta}_{S_i}{\left({X}_p-{X}_a\right)}^2 $ .

17 The EPA, although not technically a department, is included because it has a department-like structure in the data; estimates are available for several of its constituent offices, including Air and Radiation; Water; Solid Waste and Emergency Response; and Prevention, Pesticides, and Toxic Substances.

18 An IRT model was used to estimate the Senate ideal points, as in Clinton, Jackman, and Rivers (2004). This approach relies on the same identifying assumptions as the agency ideal points—namely, that the ideal points have a standard normal distribution—which makes the scale more comparable than if NOMINATE were used.

19 Including the EPA, there are 16 “departments,” with the top half defined as the “liberal” top eight and the bottom half as the “conservative” bottom eight. Ideal points from Chen and Johnson (2015) are averaged across administrations. Ideal points from Clinton et al. (2012) are derived from survey responses from career bureaucrats.

20 Supplementary Appendix D includes a scatterplot of the two sets of ideal points with each agency labeled.

21 Clinton et al. (2012) find the VA to be the 17th most conservative out of 26 agencies based on surveys of career staff. Chen and Johnson (2015) rank the VA as the 42nd most conservative agency out of 79 based on employee campaign contributions.

22 See, for example, Melnick, R. Shep. “Analyzing the Department of Education’s Final Title IX Rules on Sexual Misconduct.” Brookings, June 11, 2020.

23 The average agency ideal point during the Clinton administration was $ -0.18 $ , Obama $ -0.13 $ , Bush $ -0.11 $ , and Trump $ -0.05 $ .

24 The other three agencies in the top ten shifted in a liberal direction: the Small Business Administration, the General Services Administration, and the Fish and Wildlife Service. An analysis of these shifts is a topic for future studies.

25 Bush appeared to rely more on politicization in other instances. For example, appointees from the Fish and Wildlife Service (FWS) during the Bush administration were noted for their conservative stances, and all FWS regulations required preclearance by an assistant secretary within the Interior Department, thus diminishing the need for OIRA’s scrutiny. This helps explain why the Bush era was a peak of conservatism for FWS. For more detail, see Laura Peterson, “FWS Must Restore ‘Lost Credibility,’ New Director Says,” New York Times, August 26, 2011.

26 See, for example, Daniel Lippman, “OPM Chief Dale Cabaniss Abruptly Resigns,” Politico, March 17, 2020.

27 In the Unified Agenda, MMS and BOEM maintained the same 4-digit identifier despite the renaming, facilitating a comparison of the agency before and after the regulatory overhaul.

28 These agencies were chosen because they met the following criteria: metadata from Regulations.gov was of consistently usable quality; an interest group with a known ideological proclivity frequently commented on the agency’s proposed rules; and the agency changed its estimated ideological position across the last two administrations, a period in which Regulations.gov data are reliable. Details on the case selection, methodology, and results are available in Supplementary Appendix E.

29 Data on the page length of final rules were collected using the Federal Register’s application programming interface (API). See https://www.federalregister.gov/developers/documentation/api/v1 for details on this data source. Data were collected from the API using a modified version of the search program found at https://github.com/rOpenGov/federalregister, as described in the Dataverse replication materials (Acs 2025).
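As a sketch of this collection step: documents returned by the API report the Federal Register pages on which each rule is printed, from which a page length can be computed. The query parameters below are assumptions for illustration, not a verified request from the replication code.

```python
from urllib.parse import urlencode

BASE = "https://www.federalregister.gov/api/v1/documents.json"

def search_url(year: int, page: int = 1) -> str:
    # Hypothetical query: final rules published in a given year,
    # requesting the fields needed to compute page length
    # (parameter and field names are assumptions).
    params = {
        "conditions[type][]": "RULE",
        "conditions[publication_date][year]": year,
        "fields[]": ["document_number", "start_page", "end_page"],
        "per_page": 100,
        "page": page,
    }
    return BASE + "?" + urlencode(params, doseq=True)

def page_length(doc: dict) -> int:
    # A rule printed on Federal Register pages 100-104 spans 5 pages.
    return doc["end_page"] - doc["start_page"] + 1
```

Summing `page_length` over an agency's final rules, and dividing by years, yields the Regulatory Activity measure described in the text.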

30 The dataset necessary to estimate Equation 3 was created by merging the distance measure with the Federal Register data, utilizing the first four digits of each entry’s RIN. Executive Order 12866 mandates that RIN numbers be assigned to rules deemed consequential enough to be included in the Unified Agenda and potentially subject to the OIRA review process. See Section 4 of EO 12866, which states: “The description of each regulatory action shall contain, at a minimum, a regulation identifier number…” Rules of minor consequence, which are less likely to have been assigned a RIN, are thus excluded from the analysis during the data merge.

31 To identify deregulatory rules, rule summaries in the Federal Register were searched for keywords indicative of deregulation, including “deregulate,” “remove,” “eliminate,” “repeal,” “reduce,” “simplify,” “streamline,” “ease,” “exempt,” “flexibility,” “burden reduction,” “relief,” “sunset,” “expiration,” “reform,” “modernize,” “cost savings,” “efficiency,” “remove barriers,” “rescind,” “decreased oversight,” “less supervision,” and “voluntary compliance.” Rules containing any of these keywords were flagged as deregulatory, and their page counts were excluded in Models 3 and 5.
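A minimal version of this keyword flag is sketched below; the replication code may differ in matching details (e.g., word boundaries or stemming), so treat this as illustrative.

```python
import re

# Keywords from the footnote above, matched case-insensitively
DEREG_KEYWORDS = [
    "deregulate", "remove", "eliminate", "repeal", "reduce", "simplify",
    "streamline", "ease", "exempt", "flexibility", "burden reduction",
    "relief", "sunset", "expiration", "reform", "modernize",
    "cost savings", "efficiency", "remove barriers", "rescind",
    "decreased oversight", "less supervision", "voluntary compliance",
]
_PATTERN = re.compile("|".join(re.escape(k) for k in DEREG_KEYWORDS),
                      re.IGNORECASE)

def is_deregulatory(summary: str) -> bool:
    # Flag a rule as deregulatory if its summary mentions any keyword.
    return bool(_PATTERN.search(summary))
```

For example, a summary proposing to rescind prior standards would be flagged, while one imposing new monitoring requirements would not.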

32 The results are robust to removing final rules that were promulgated during the first year of each administration, since these are more likely to have been initiated in the prior administration.

REFERENCES

Aberbach, Joel D., and Rockman, Bert A. 1976. “Clashing Beliefs Within the Executive Branch: The Nixon Administration Bureaucracy.” American Political Science Review 70 (2): 456–468.
Acs, Alex. 2018. “Policing the Administrative State.” Journal of Politics 80 (4): 1225–1238.
Acs, Alex. 2020. “Ideal Point Estimation in Political Hierarchies: A Framework and an Application to the US Executive Branch.” Journal of Law, Economics, and Organization 36 (1): 207–230.
Acs, Alex. 2025. “Replication Data for: Mapping the Political Contours of the Regulatory State: Dynamics Estimates of Agency Ideal Points.” Harvard Dataverse. Dataset. https://doi.org/10.7910/DVN/C7S2SI.
Bertelli, Anthony M., and Grose, Christian R. 2011. “The Lengthened Shadow of Another Institution? Ideal Point Estimates for the Executive Branch and Congress.” American Journal of Political Science 55 (4): 767–781.
Betancourt, Michael, and Girolami, Mark. 2015. “Hamiltonian Monte Carlo for Hierarchical Models.” In Current Trends in Bayesian Methodology with Applications, eds. Satyanshu K. Upadhyay, Umesh Singh, Dipak K. Dey, and Appaia Loganathan, 79–102. New York: Routledge.
Bonica, Adam, Chen, Jowei, and Johnson, Tim. 2015. “Senate Gate-Keeping, Presidential Staffing of ‘Inferior Offices,’ and the Ideological Composition of Appointments to the Public Bureaucracy.” Quarterly Journal of Political Science 10 (1): 5–40.
Califano, Joseph A. Jr. 1981. Governing America: An Insider’s Report from the White House and the Cabinet. New York: Simon & Schuster.
Cameron, Charles M. 2000. Veto Bargaining: Presidents and the Politics of Negative Power. Cambridge: Cambridge University Press.
Chen, Jowei, and Johnson, Tim. 2015. “Federal Employee Unionization and Presidential Control of the Bureaucracy: Estimating and Explaining Ideological Change in Executive Agencies.” Journal of Theoretical Politics 27 (1): 151–174.
Clinton, Joshua, Jackman, Simon, and Rivers, Douglas. 2004. “The Statistical Analysis of Roll Call Data.” American Political Science Review 98 (2): 355–370.
Clinton, Joshua D., Bertelli, Anthony, Grose, Christian R., Lewis, David E., and Nixon, David C. 2012. “Separated Powers in the United States: The Ideology of Agencies, Presidents, and Congress.” American Journal of Political Science 56 (2): 341–354.
Clinton, Joshua D., and Lewis, David E. 2008. “Expert Opinion, Agency Characteristics, and Agency Preferences.” Political Analysis 16 (1): 3–20.
Copeland, Curtis W. 2015. “The Unified Agenda: Proposals for Reform.” Technical Report.
Gibbons, Robert. 1997. “An Introduction to Applicable Game Theory.” Journal of Economic Perspectives 11 (1): 127–149.
Haeder, Simon F., and Yackee, Susan Webb. 2015. “Influence and the Administrative Process: Lobbying the U.S. President’s Office of Management and Budget.” American Political Science Review 109 (3): 507–522.
Heinzerling, Lisa. 2014. “Inside EPA: A Former Insider’s Reflections on the Relationship between the Obama EPA and the Obama White House.” Pace Environmental Law Review 31 (1): 325–369.
Hirsch, Alexander V. 2011. “Theory Driven Bias in Ideal Point Estimates—A Monte Carlo Study.” Political Analysis 19 (1): 87–102.
Lewis, Jeffrey B., Poole, Keith, Rosenthal, Howard, Boche, Adam, Rudkin, Aaron, and Sonnet, Luke. 2021. “Voteview: Congressional Roll-Call Votes Database.” https://voteview.com.
McCarty, Nolan, Poole, Keith T., and Rosenthal, Howard. 2001. “The Hunt for Party Discipline in Congress.” American Political Science Review 95 (3): 673–687.
McCubbins, Mathew D., and Schwartz, Thomas. 1984. “Congressional Oversight Overlooked: Police Patrols versus Fire Alarms.” American Journal of Political Science 28 (1): 165–179.
McMann, Kelly, Pemstein, Daniel, Seim, Brigitte, Teorell, Jan, and Lindberg, Staffan. 2022. “Assessing Data Quality: An Approach and an Application.” Political Analysis 30 (3): 426–449.
Melnick, R. Shep. 2018. The Transformation of Title IX: Regulating Gender Equality in Education. Washington, DC: Brookings Institution Press.
Nathan, Richard P. 1975. The Plot That Failed: Nixon and the Administrative Presidency. New York: John Wiley & Sons.
Nixon, David C. 2004. “Separation of Powers and Appointee Ideology.” Journal of Law, Economics, and Organization 20 (2): 438–457.
Nou, Jennifer. 2012. “Agency Self-Insulation under Presidential Review.” Harvard Law Review 126 (7): 1755–1837.
Poole, Keith T., and Rosenthal, Howard. 2000. Congress: A Political-Economic History of Roll Call Voting. New York: Oxford University Press.
Revesz, Richard L. 1997. “Environmental Regulation, Ideology, and the D.C. Circuit.” Virginia Law Review 83 (8): 1717–1772.
Richardson, Mark D. 2024. “Characterizing Agencies’ Political Environments: Partisan Agreement and Disagreement in the US Executive Branch.” Journal of Politics 86 (3): 1110–1114.
Richardson, Mark D., Clinton, Joshua D., and Lewis, David E. 2018. “Elite Perceptions of Agency Ideology and Workforce Skill.” Journal of Politics 80 (1): 303–308.
Spenkuch, Jorg L., Teso, Edoardo, and Xu, Guo. 2023. “Ideology and Performance in Public Organizations.” Econometrica 91 (4): 1171–1203.
Stan Development Team. 2024. RStan: The R Interface to Stan. https://mc-stan.org/.
Treier, Shawn. 2010. “Where Does the President Stand? Measuring Presidential Ideology.” Political Analysis 18 (1): 124–136.

Figure 1. Dynamic Estimates of Agency Ideal Points with Credible Intervals. Note: Please see Table A1 in Supplementary Appendix A for full names of all agencies mentioned in this figure.


Figure 2. Agency Ideal Points Distributed within Departments. Note: Agency ideal points are derived from averages across administrations using the static model. Each department is ordered from top (liberal) to bottom (conservative) based on a weighted average of the agencies’ ideal points in each department, with weights determined by the number of proposals made by each agency. The congressional ideal points were estimated using an IRT voting model on the 112th Congress.


Figure 3. Department Ranking Consensus Across Studies (Ranking Liberal to Conservative). Notes: Each study estimated an ideal point for each of the 16 departments. To construct the figure, the estimates were ranked from liberal to conservative, and the rankings were then compared to evaluate the degree of consensus about which departments ranked in the top, liberal half (top eight), and which ranked in the bottom, conservative half (bottom eight).


Table 1. Agencies with Largest Shifts in the Conservative Direction


Figure 4. Conservative Trends in Agency Ideal Points

