Abstract

A dynamic treatment regime (DTR) is a sequence of decision rules that adapt to the time-varying states of an individual. Black-box learning methods have shown great potential in predicting optimal treatments; however, the resulting DTRs lack interpretability, which is of paramount importance for medical experts who must understand and implement them. We present a stochastic tree-based reinforcement learning (ST-RL) method for estimating optimal DTRs in a multistage, multitreatment setting with data from either randomized trials or observational studies. At each stage, ST-RL constructs a decision tree by first modeling the mean of the counterfactual outcomes via nonparametric regression models, and then stochastically searching for the optimal tree-structured decision rule using a Markov chain Monte Carlo algorithm. We implement the proposed method in a backward inductive fashion through multiple decision stages. The proposed ST-RL delivers optimal DTRs with better interpretability and contributes to the existing literature through its non-greedy policy search. Additionally, ST-RL demonstrates stable and strong performance even with a large number of covariates, which is especially appealing when data come from large observational studies. We illustrate the performance of ST-RL through simulation studies and a real-data application using esophageal cancer data collected from 1170 patients at MD Anderson Cancer Center from 1998 to 2012. Supplementary materials for this article are available online.
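To make the single-stage building block concrete, the following is a minimal toy sketch, not the authors' implementation: the nonparametric outcome model is stood in for by a simple per-treatment k-nearest-neighbour average, and the stochastic tree search is reduced to a Metropolis walk over depth-1 rules (split variable, cutoff, and a treatment for each side). All function names, the proposal scheme, and the temperature are illustrative assumptions; the paper's actual method uses richer regression models, deeper trees, and backward induction across stages.

```python
import numpy as np

def estimate_q(X, A, Y, n_treat, k=20):
    # Toy stand-in for the nonparametric outcome model: Q_hat(x_i, a) is the
    # average outcome of the k nearest subjects who actually received arm a.
    n = X.shape[0]
    Q = np.zeros((n, n_treat))
    for a in range(n_treat):
        Xa, Ya = X[A == a], Y[A == a]
        for i in range(n):
            d = np.linalg.norm(Xa - X[i], axis=1)
            idx = np.argsort(d)[: min(k, len(Ya))]
            Q[i, a] = Ya[idx].mean()
    return Q

def rule_value(rule, X, Q):
    # Mean estimated counterfactual outcome if everyone follows the
    # depth-1 tree rule (j, c, a_left, a_right): treat a_left if x_j <= c.
    j, c, a_left, a_right = rule
    left = X[:, j] <= c
    return (Q[left, a_left].sum() + Q[~left, a_right].sum()) / X.shape[0]

def mcmc_tree_search(X, Q, n_treat, n_iter=3000, temp=0.05, seed=1):
    # Metropolis search over depth-1 rules: perturb one component of the
    # rule at a time and accept with probability exp(min(0, delta / temp)),
    # tracking the best rule visited (illustrative proposal scheme).
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    rule = [0, float(np.median(X[:, 0])), 0, 0]
    val = rule_value(rule, X, Q)
    best_rule, best_val = list(rule), val
    for _ in range(n_iter):
        prop = list(rule)
        move = rng.integers(4)
        if move == 0:
            prop[0] = int(rng.integers(p))          # new split variable
        elif move == 1:
            prop[1] = float(rng.choice(X[:, prop[0]]))  # new cutoff
        elif move == 2:
            prop[2] = int(rng.integers(n_treat))    # new left treatment
        else:
            prop[3] = int(rng.integers(n_treat))    # new right treatment
        pv = rule_value(prop, X, Q)
        if rng.random() < np.exp(min(0.0, (pv - val) / temp)):
            rule, val = prop, pv
            if val > best_val:
                best_rule, best_val = list(rule), val
    return best_rule, best_val
```

On simulated data where treatment 1 helps exactly when the first covariate is positive, this search recovers a rule of roughly that form; the full ST-RL procedure applies such a search at each stage, replacing observed outcomes with stage-specific pseudo-outcomes in the backward pass.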
