
Estimating the Local Average Treatment Effect Without the Exclusion Restriction

Published online by Cambridge University Press:  21 July 2025

Zachary Markovich*
Affiliation: Scientist, Uber Technologies, Seattle, WA, USA

Abstract

Existing approaches to conducting inference about the Local Average Treatment Effect (LATE) require assumptions that are considered tenuous in many applied settings. In particular, instrumental variable techniques require monotonicity and the exclusion restriction, while principal score methods rest on some form of the principal ignorability assumption. This paper provides new results showing that an estimator within the class of principal score methods allows conservative inference about the LATE without invoking such assumptions. I term this estimator the Compliance Probability Weighting estimator and show that, under very mild assumptions, it is asymptotically conservative for the LATE. I apply this estimator to a recent survey experiment and find evidence of a stronger effect among compliers than the original authors had uncovered.
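To fix ideas, the following is a minimal sketch of the general compliance-probability-weighting approach the abstract refers to, not the paper's exact estimator or its conservative inference procedure. It assumes one-sided noncompliance (units assigned to control cannot take the treatment) and, for point identification, the principal ignorability assumption that the paper itself seeks to relax; the function name and interface are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def cpw_late(y, z, d, x):
    """Compliance-probability-weighted LATE sketch.

    y : (n,) outcomes; z : (n,) 0/1 random assignment;
    d : (n,) 0/1 treatment actually taken; x : (n, k) covariates.
    Assumes one-sided noncompliance, so the assigned-to-treatment
    units with d == 1 are exactly the compliers.
    """
    # Complier mean outcome under treatment is observed directly.
    mu1 = y[(z == 1) & (d == 1)].mean()

    # Fit the compliance score e(x) = P(complier | x) in the
    # assigned-to-treatment arm, where complier status is observed.
    score = LogisticRegression().fit(x[z == 1], d[z == 1])
    e = score.predict_proba(x[z == 0])[:, 1]

    # Reweight the control arm toward the complier covariate
    # distribution to estimate the complier mean under control.
    mu0 = np.average(y[z == 0], weights=e)
    return mu1 - mu0
```

Under the exclusion restriction an IV/Wald estimate (intent-to-treat effect divided by the first stage) targets the same quantity; the weighting route instead leans on the model of who complies.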

Information

Type
Article
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of The Society for Political Methodology


Footnotes

Edited by: Jeff Gill

Supplementary material

Markovich supplementary material (File, 892.4 KB)

Markovich Dataset (Link, https://doi.org/10.7910/DVN/Y6JJ0A)