Econometrics, Finance and Big Data Analysis Workshop: Data-driven Policy Learning for a Continuous Treatment

Published: 2024-04-12 12:00

Speaker: 解海天 (Haitian Xie), Guanghua School of Management, Peking University

Chair: 王熙 (School of Economics, Peking University)

Participants: 王一鸣, 王法, 刘蕴霆 (School of Economics, PKU);

黄卓, 张俊妮, 孙振庭 (National School of Development, PKU);

胡博 (Institute of New Structural Economics, PKU)

Time: 10:00-11:30, Friday, April 12, 2024

Venue (in person): Room 606, School of Economics, Peking University

Abstract:

This paper studies policy learning under the condition of unconfoundedness with a continuous treatment variable. Our research begins by employing kernel-based inverse propensity-weighted (IPW) methods to estimate policy welfare. We aim to approximate the optimal policy within a global policy class characterized by infinite Vapnik-Chervonenkis (VC) dimension. This is achieved through a sequence of sieve policy classes, each with finite VC dimension. Preliminary analysis reveals that welfare regret comprises three components: global welfare deficiency, variance, and bias. This leads to the necessity of simultaneously selecting the optimal bandwidth for estimation and the optimal policy class for welfare approximation. To tackle this challenge, we introduce a semi-data-driven strategy that employs penalization techniques. This approach yields oracle inequalities that adeptly balance the three components of welfare regret without prior knowledge of the welfare deficiency. By utilizing precise maximal and concentration inequalities, we derive sharper regret bounds than those currently available in the literature. In instances where the propensity score is unknown, we adopt the doubly robust (DR) moment condition tailored to the continuous treatment setting. In alignment with the binary-treatment case, the DR welfare regret closely parallels the IPW welfare regret, given the fast convergence of nuisance estimators.
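To fix ideas, the kernel-based IPW welfare estimate described above can be sketched numerically. The following is a minimal illustration, not the paper's implementation: the Gaussian kernel, a known conditional treatment density, and the simulated design are all assumptions made for the example. For a policy π and bandwidth h, it computes the sample analogue of E[K_h(A − π(X)) Y / f(A | X)].

```python
import numpy as np

def gaussian_kernel(u):
    """Standard Gaussian kernel K(u)."""
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def ipw_welfare(Y, A, X, policy, propensity_density, h):
    """Kernel-based IPW estimate of the welfare of a continuous-treatment policy.

    Y, A, X : arrays of outcomes, continuous treatments, and covariates (length n)
    policy : function mapping a covariate value x to a treatment level pi(x)
    propensity_density : function (a, x) -> conditional density f(a | x)
    h : kernel bandwidth (smaller h means less bias, more variance)
    """
    a_pi = np.array([policy(x) for x in X])          # prescribed treatment pi(X_i)
    f_vals = np.array([propensity_density(a, x) for a, x in zip(A, X)])
    # Weight observations whose realized treatment is close to pi(X_i),
    # reweighted by the treatment density (the IPW step).
    weights = gaussian_kernel((A - a_pi) / h) / (h * f_vals)
    return np.mean(weights * Y)

# Simulated example (hypothetical design): A ~ N(0,1) independent of X,
# so f(a | x) is the standard normal density, and Y = -(A - X)^2,
# making pi(x) = x the optimal policy with true welfare 0.
rng = np.random.default_rng(0)
n = 20000
X = rng.uniform(-1.0, 1.0, n)
A = rng.normal(0.0, 1.0, n)
Y = -(A - X) ** 2
f = lambda a, x: np.exp(-0.5 * a**2) / np.sqrt(2.0 * np.pi)

v_opt = ipw_welfare(Y, A, X, lambda x: x, f, h=0.2)        # near 0 (minus O(h^2) bias)
v_bad = ipw_welfare(Y, A, X, lambda x: x + 1.0, f, h=0.2)  # near -1
```

This only evaluates welfare for a fixed policy and bandwidth; the paper's contribution lies in jointly selecting h and the sieve policy class via penalization, which this sketch does not attempt.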

About the Speaker:

Haitian Xie received his Ph.D. from the University of California, San Diego in 2023. His research focuses on the theory of causal inference, including nonparametric/semiparametric identification and estimation for methods such as instrumental variables and regression discontinuity, as well as causal-model-based policy evaluation, policy learning, and statistical decision making. His work has appeared in international journals including the Journal of Business and Economic Statistics and the Oxford Bulletin of Economics and Statistics.
