Essays in Econometrics and Public Finance


Bibliographic Details
Main Author: Sun, Liyang
Other Authors: Mikusheva, Anna, Andrews, Isaiah, Abadie, Alberto, Massachusetts Institute of Technology. Department of Economics
Format: Thesis
Language: unknown
Published: Massachusetts Institute of Technology, 2021
Subjects:
DML
Online Access:https://hdl.handle.net/1721.1/139033
Description
Summary: In recent years, natural experiments and randomized controlled trials (RCTs) have become increasingly common. Econometric evaluation of these data allows economic researchers and policymakers to assess the treatment effect of an intervention. Traditional approaches were often developed under the assumption of homogeneous treatment effects. In this thesis, I investigate whether these approaches remain reliable for estimating the average treatment effect in the more realistic setting of heterogeneity, and where they are not, I propose more accommodating estimation methods.

In the first chapter, I focus on the important problem of efficiently allocating an intervention when limited resources make it infeasible to reach everyone. An overlooked aspect of the existing approach is that the cost of the intervention can also be heterogeneous and must be estimated. I find that a direct extension of the existing approach does not account for the uncertainty in the estimated cost and can lead to infeasible allocations. I provide policymakers with new approaches to allocation that account for imperfect information about feasibility.

Treatment effect heterogeneity can also affect individuals' decisions to comply with an intervention. Because take-up decisions are unobserved, economic researchers would like to estimate which subpopulations are most likely to comply with the intervention. Traditional approaches focus on settings with low-dimensional observed confounders. In the second chapter, Rahul Singh and I develop methods to characterize compliers while adjusting for high-dimensional observed confounders. In our approach, the adjustment is itself performed by machine learning, using a variant called automatic de-biased machine learning (Auto-DML), which avoids the ad hoc trimming or censoring of a learned propensity score.
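For context, the flavor of estimator the second chapter builds on can be sketched with a generic cross-fitted, doubly robust (AIPW) estimate of the average treatment effect. This is a minimal illustration in the spirit of debiased machine learning, not the Auto-DML procedure developed in the thesis; the data-generating process and the linear nuisance fits are illustrative stand-ins, and the explicit propensity-score clipping shown here is precisely the kind of ad hoc trimming that Auto-DML is designed to avoid.

```python
import numpy as np

# Simulated data: all names and the DGP here are illustrative.
rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))                    # observed confounders
propensity = 1 / (1 + np.exp(-X[:, 0]))        # true treatment probability
D = rng.binomial(1, propensity)                # treatment indicator
Y = 2.0 * D + X @ np.array([1.0, -0.5, 0.25]) + rng.normal(size=n)

def fit_ols(Xtr, ytr):
    # Least-squares nuisance fit; a stand-in for a flexible ML learner.
    Ztr = np.column_stack([np.ones(len(Xtr)), Xtr])
    beta, *_ = np.linalg.lstsq(Ztr, ytr, rcond=None)
    return lambda Xnew: np.column_stack([np.ones(len(Xnew)), Xnew]) @ beta

# Two-fold cross-fitting: fit nuisances on one fold, evaluate moments on the other.
folds = np.array_split(rng.permutation(n), 2)
psi = np.empty(n)
for k in (0, 1):
    tr, te = folds[1 - k], folds[k]
    mu1 = fit_ols(X[tr][D[tr] == 1], Y[tr][D[tr] == 1])   # E[Y | X, D=1]
    mu0 = fit_ols(X[tr][D[tr] == 0], Y[tr][D[tr] == 0])   # E[Y | X, D=0]
    e = fit_ols(X[tr], D[tr].astype(float))               # propensity stand-in
    eh = np.clip(e(X[te]), 0.05, 0.95)  # ad hoc trimming -- the step Auto-DML avoids
    psi[te] = (mu1(X[te]) - mu0(X[te])
               + D[te] * (Y[te] - mu1(X[te])) / eh
               - (1 - D[te]) * (Y[te] - mu0(X[te])) / (1 - eh))

ate = psi.mean()                      # doubly robust ATE estimate (true value: 2.0)
se = psi.std(ddof=1) / np.sqrt(n)     # plug-in standard error
print(f"ATE = {ate:.2f}, se = {se:.2f}")
```

The doubly robust moment keeps the estimate consistent here even though the propensity is fit with a misspecified linear model, because the outcome regressions are correctly specified.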
In the third chapter, which is joint work with Sarah Abraham, we examine two-way fixed effects regressions that include leads and lags of the treatment, a popular approach to estimating the effect of dynamic shocks and ...
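The regression examined in the third chapter, before any correction, is the conventional event-study specification: unit and time fixed effects plus indicators for leads and lags of the treatment. A minimal simulation-based sketch follows; the data-generating process, variable names, and choice of omitted event times are all illustrative, and this is the standard OLS specification, not the estimator the chapter proposes.

```python
import numpy as np

# Simulated staggered-adoption panel; the DGP and names are illustrative.
rng = np.random.default_rng(1)
N, T = 60, 10
unit = np.repeat(np.arange(N), T)
time = np.tile(np.arange(T), N)
adopt = rng.integers(3, 8, size=N)        # treatment start period per unit (3..7)
rel = time - adopt[unit]                  # event time: periods since treatment

# Homogeneous dynamic effect: +1.0 from event time 0 onward.
y = (rng.normal(size=N)[unit] + 0.2 * time
     + (rel >= 0).astype(float) + 0.1 * rng.normal(size=N * T))

# Relative-time dummies; omit event time -1 (the usual baseline) and the
# earliest lead (a second omission is needed without never-treated units).
ev_times = sorted(set(rel.tolist()) - {-1, int(rel.min())})
rel_dummies = [(rel == e).astype(float) for e in ev_times]
unit_d = (unit[:, None] == np.arange(1, N)).astype(float)   # unit FE (drop one)
time_d = (time[:, None] == np.arange(1, T)).astype(float)   # time FE (drop one)

Xmat = np.column_stack([np.ones(N * T)] + rel_dummies + [unit_d, time_d])
beta, *_ = np.linalg.lstsq(Xmat, y, rcond=None)
coefs = dict(zip(ev_times, beta[1:1 + len(ev_times)]))
# Under this homogeneous-effects DGP, leads should be near 0 and lags near 1.
print({e: round(float(b), 2) for e, b in coefs.items()})
```

With homogeneous effects, as simulated here, the lead and lag coefficients recover the true dynamic path; the chapter's concern is what these same coefficients estimate once treatment effects differ across adoption cohorts.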