
In this article, I introduce opl, a Stata package for optimal policy learning that facilitates ex ante policy impact evaluation. Despite theoretical progress, practical implementations of policy-learning algorithms remain scarce in popular statistical software. To address this limitation, opl implements three popular policy-learning algorithms in Stata (threshold-based, linear combination, and fixed-depth decision tree) and provides practical demonstrations of them using a real dataset. I also propose a menu strategy for developing policy scenarios, which is particularly useful when the selection variables exhibit welfare monotonicity. Overall, this article contributes to bridging the gap between theoretical advances and practical applications in the field of policy learning.
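To give a concrete sense of the threshold-based approach, the sketch below is a minimal, hypothetical Stata illustration and is not the opl syntax. It assumes the data contain an estimated unit-level treatment effect tau and a selection variable x (both names are assumptions), and it grid-searches a single cutoff so that treating only units with x above the cutoff maximizes average estimated welfare.

* Minimal, illustrative sketch (not the opl syntax): grid search for a
* one-variable threshold rule that maximizes average estimated welfare.
* Assumes variables tau (estimated unit-level treatment effect) and
* x (selection variable); both names are hypothetical.
quietly summarize x
local xmin = r(min)
local xmax = r(max)
local best_w = .
local best_c = .
forvalues i = 0/20 {
    * Candidate cutoff on an evenly spaced grid over the range of x
    local c = `xmin' + (`xmax' - `xmin')*`i'/20
    tempvar w
    * Welfare contribution: tau if treated under the rule x >= c, else 0
    quietly generate `w' = cond(x >= `c', tau, 0)
    quietly summarize `w'
    * Keep the cutoff with the highest average estimated welfare
    if `best_w' == . | r(mean) > `best_w' {
        local best_w = r(mean)
        local best_c = `c'
    }
    drop `w'
}
display "Chosen threshold: " `best_c' "   Estimated welfare: " `best_w'

A finer grid, or a search over two selection variables, follows the same logic; the point of the sketch is only that a threshold-based policy rule reduces to maximizing estimated welfare over a small class of cutoff rules.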
Optimal policy learning, Machine learning, Optimal treatment assignment, Ex-ante policy evaluation
