Robust estimation of optimal dynamic treatment regimes for sequential treatment decisions.
A dynamic treatment regime is a list of sequential decision rules for assigning treatment based on a patient's history. Q- and A-learning are two main approaches for estimating the optimal regime, i.e., the regime yielding the most beneficial outcome in the patient population, using data from a clinical trial or observational study. Q-learning requires postulated regression models for the outcome, while A-learning involves models for the part of the outcome regression representing treatment contrasts and a model for treatment assignment. We propose an alternative to Q- and A-learning that maximizes a doubly robust augmented inverse probability weighted estimator for the population mean outcome over a restricted class of regimes. Simulations demonstrate the method's performance and its robustness to model misspecification, which is a key concern.
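For intuition, the sketch below shows a single-decision-point version of a doubly robust augmented inverse probability weighted value estimator of the kind the abstract describes; the notation (a regime class indexed by a parameter η, a propensity model π, an outcome-regression model m) is illustrative and is not the paper's exact multi-stage formulation.

```latex
% Illustrative single-decision sketch; notation is assumed, not taken from the paper.
% For a regime g_\eta in a restricted class indexed by \eta, the value E\{Y^*(g_\eta)\}
% may be estimated by
\[
\widehat{V}(\eta)
  = n^{-1} \sum_{i=1}^{n}
    \left[
      \frac{C_{\eta,i}\, Y_i}{\pi_{\eta,i}(\widehat{\gamma})}
      \;-\;
      \frac{C_{\eta,i} - \pi_{\eta,i}(\widehat{\gamma})}{\pi_{\eta,i}(\widehat{\gamma})}\,
      m\{X_i, g_\eta(X_i); \widehat{\beta}\}
    \right],
\]
% where C_{\eta,i} = I\{A_i = g_\eta(X_i)\} indicates that the treatment received agrees
% with the one the regime dictates, \pi_{\eta,i}(\widehat{\gamma}) is the fitted probability
% of receiving that regime-consistent treatment, and m(\cdot;\widehat{\beta}) is a fitted
% outcome-regression model. The augmentation term yields double robustness: \widehat{V}(\eta)
% is consistent if either the propensity model or the outcome model is correctly specified.
% The estimated optimal regime is g_{\widehat{\eta}} with
% \widehat{\eta} = \arg\max_{\eta} \widehat{V}(\eta).
```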
Related Subject Headings
- Statistics & Probability
- 4905 Statistics
- 3802 Econometrics
- 1403 Econometrics
- 0104 Statistics
- 0103 Numerical and Computational Mathematics