Policy evaluation beyond the average
​
As public expectations for government performance rise with rapid technological progress, governments face the challenge of enacting evidence-based policies that allocate resources efficiently and deliver customised services. Yet current evaluation tools are often inadequate because they overlook that individuals differ in their responses even when they receive the same treatment. Instead, policy decisions routinely rely on average treatment effects, disregarding who is harmed and who benefits.
My research aims to shape the future of policy evaluation and development by applying and extending cutting-edge methods to unveil the entire distribution of treatment effects. Exposing this distribution allows us to determine winners and losers of policy interventions while quantifying their impact on economic inequality. Establishing frontier methods will strengthen our capacity to evaluate economic policy interventions, leading to better targeted government policies that ultimately benefit public well-being and reduce unnecessary public spending.
My work builds on a newly developed method for identifying quantiles of the distribution of treatment effects under modest assumptions. This method can be used to answer a range of policy-relevant questions such as: Does a welfare program increase the average income despite hurting most people and benefitting only a few? Does an educational intervention that increases average test scores lead to higher inequality? What are the unintended consequences of an active labour market policy? How do alternative health insurance plans affect the way medical spending is distributed? What is the effect of a reminder from the tax office on the total amount of tax collected?
While my current research agenda focuses on applications in economics, it applies innovative methods that have immediate relevance for other research domains, such as political science, sociology, and psychology, as well as biomedical science and neuroscience. For instance, the methodological advancements are particularly suitable for biomedical data as they can reveal the proportions of patients who do and do not benefit from a specific health intervention.
​
Quantiles of the distribution of treatment effects
​
My newly developed method allows researchers to understand, in unprecedented detail, the distribution of treatment effects and parameters that depend on it, including the fraction of beneficiaries (winners) and those harmed (losers), and the gains and losses resulting from policy interventions.
Estimating quantiles of the distribution of treatment effects (QDTE) under plausible assumptions is ground-breaking because it reveals whether an intervention affects people equally or whether some benefit while others are harmed. This matters, for example, when a welfare reform raises workers' earnings in some segments of the distribution while reducing them in others. Relying solely on average treatment effects (ATE) can obscure this heterogeneity.
The study of heterogeneous treatment effects has traditionally relied on average treatment effects within subgroups that share the same observed characteristics. Recently, more flexible approaches for estimating subgroup-specific average treatment effects have emerged, often leveraging machine-learning techniques that rely on large-scale datasets. While subgroup analysis can yield valuable insights, it may overlook critical aspects of heterogeneity when essential information is unobserved. Most notably, it cannot be used to determine the distribution of treatment effects and the parameters derived from it.
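This limitation can be illustrated with a small simulation in which effect heterogeneity is driven entirely by an unobserved variable: subgroup averages on observables then look homogeneous even though many units are harmed. The data-generating process, variable names, and numbers below are purely illustrative assumptions, not drawn from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical experiment: binary treatment, one observed covariate
# defining subgroups, and an unobserved driver of effect heterogeneity.
n = 10_000
group = rng.integers(0, 2, n)        # observed characteristic (e.g. sector)
hidden = rng.normal(size=n)          # unobserved heterogeneity
treated = rng.integers(0, 2, n)
effect = 1.0 + 2.0 * hidden          # individual effect varies with 'hidden'
y = rng.normal(size=n) + treated * effect

# Subgroup ATEs: mean difference in outcomes within each observed subgroup.
for g in (0, 1):
    m = group == g
    ate_g = y[m & (treated == 1)].mean() - y[m & (treated == 0)].mean()
    print(f"subgroup {g}: ATE approx. {ate_g:.2f}")

# Both subgroup ATEs are close to 1, yet roughly 30% of units are harmed
# (effect < 0 whenever hidden < -0.5) -- heterogeneity that averages over
# observable subgroups cannot reveal.
print(f"share with negative effect: {(effect < 0).mean():.2f}")
```

Both subgroups show a similar positive average effect, while a substantial share of individual effects is negative, which is exactly the information that the distribution of treatment effects would expose.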
​
Researchers have also made use of quantile treatment effects (QTE) to study heterogeneous treatment responses. QTE are defined as differences between quantiles of the separate marginal distributions of treatment and control outcomes. While experiments are informative about ATE without further assumptions, QTE rely on a rank invariance assumption. Rank invariance is a strong assumption because it implies that observation units maintain their relative position in the potential outcomes distribution regardless of whether they are assigned to the treatment or the control group. QTE differ from QDTE whenever the rank invariance assumption is violated. (More precisely, QDTE coincide with QTE only if the rank invariance assumption holds and the QTE are monotonically increasing along the distribution.)
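The distinction between QTE and QDTE can be made concrete in a simulation where both potential outcomes are visible and treatment reshuffles ranks, so rank invariance fails. All data below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical potential outcomes for the same units (observable only in a
# simulation): the unit-level effect varies, so treatment reshuffles ranks
# and rank invariance is violated.
n = 100_000
y0 = rng.normal(size=n)                              # control potential outcome
y1 = y0 + rng.normal(loc=1.0, scale=1.0, size=n)     # heterogeneous effect

q = np.linspace(0.1, 0.9, 9)

# QTE: differences between quantiles of the two marginal distributions.
qte = np.quantile(y1, q) - np.quantile(y0, q)

# QDTE: quantiles of the unit-level effect distribution Y1 - Y0.
qdte = np.quantile(y1 - y0, q)

print(np.round(qte, 2))
print(np.round(qdte, 2))
```

Both curves agree at the median (around 1), but in the upper tail the QDTE are markedly larger than the QTE: the marginal distributions understate how dispersed the individual effects are once ranks are reshuffled.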
​
My newly developed method can be used to identify QDTE under plausible assumptions that can either be tested or subjected to sensitivity checks to assess their realism. Specifically, the method uses rank correlation coefficients between actual and predicted control state outcomes under the testable assumption that highly predictive covariates are available. In the case of random assignment, these rank correlation coefficients are identical, regardless of whether they are calculated using in-sample predictions or out-of-sample predictions. Assuming that all permutations of observation units satisfying this property are equally likely, it becomes possible to identify the rank correlation coefficient between potential treatment and control outcomes, along with the corresponding QDTE.
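One ingredient of this construction, the equality of in-sample and out-of-sample rank correlations between actual outcomes and predictions, can be illustrated with simulated data. The linear predictor, the two-fold cross-fitting, and all names below are illustrative assumptions for the sketch, not the author's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def rank_corr(a, b):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    ra = a.argsort().argsort()
    rb = b.argsort().argsort()
    return np.corrcoef(ra, rb)[0, 1]

# Hypothetical control-group data: the outcome depends on highly predictive
# observed covariates, as the method's testable assumption requires.
n = 20_000
x = rng.normal(size=(n, 3))
y = x @ np.array([1.0, 0.5, -0.8]) + rng.normal(size=n)

# In-sample predictions: a linear fit on the full sample.
coef, *_ = np.linalg.lstsq(x, y, rcond=None)
pred_in = x @ coef

# Out-of-sample predictions via two-fold cross-fitting.
half = n // 2
pred_out = np.empty(n)
for train, hold in [(slice(0, half), slice(half, n)),
                    (slice(half, n), slice(0, half))]:
    c, *_ = np.linalg.lstsq(x[train], y[train], rcond=None)
    pred_out[hold] = x[hold] @ c

print(f"in-sample rank correlation:     {rank_corr(y, pred_in):.3f}")
print(f"out-of-sample rank correlation: {rank_corr(y, pred_out):.3f}")
```

In large samples the two rank correlation coefficients are nearly identical, which is the kind of testable implication the method exploits under random assignment.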
Reweighting techniques can be applied to extend the method to situations in which treatment assignment is as good as random after controlling for a set of covariates. Monte Carlo simulations demonstrate that the estimators are unbiased, consistent, and asymptotically normal. Rearranging QDTE yields generalised quantile treatment effects (GQTE), which do not require a rank invariance assumption but align with conventional QTE under rank invariance.
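The rearrangement step can be illustrated with the standard monotone-rearrangement operation: sorting the values of an estimated quantile curve yields a proper, non-decreasing quantile function. This sketch shows only the sorting idea; the exact GQTE construction is the author's, and the numbers below are hypothetical.

```python
import numpy as np

# A possibly non-monotone estimated QDTE curve (hypothetical values).
qdte_hat = np.array([0.2, 0.5, 0.4, 0.9, 0.8])

# Monotone rearrangement: sorting restores a valid quantile function.
gqte_hat = np.sort(qdte_hat)
print(gqte_hat)   # [0.2 0.4 0.5 0.8 0.9]
```

Because sorting preserves the set of values, the rearranged curve describes the same effect distribution while guaranteeing monotonicity, and under rank invariance it lines up with the conventional QTE.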