A Penalized Shared-Parameter Algorithm for Estimating Optimal Dynamic Treatment Regimens
A dynamic treatment regimen (DTR) is a set of decision rules that personalizes treatment for an individual using their medical history. The Q-learning-based Q-shared algorithm has been used to develop DTRs whose decision rules are shared across multiple stages of intervention. We show that the existing Q-shared algorithm can suffer from non-convergence due to the use of linear models in the Q-learning setup, and we identify the condition under which Q-shared fails. We develop a penalized Q-shared algorithm that not only converges in settings that violate this condition, but can also outperform the original Q-shared algorithm even when the condition is satisfied. We demonstrate the proposed method in a real-world application and in several synthetic simulation studies.
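To make the shared-parameter idea concrete, the sketch below implements a minimal two-stage linear Q-learning iteration in which a single parameter vector is shared across both stages, with an optional ridge penalty. This is an illustrative reconstruction under stated assumptions, not the paper's actual estimator: the synthetic data, the feature map `phi`, the assumption of no intermediate reward, and the specific ridge form of the penalty are all choices made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic two-stage data (illustrative only, not from the paper).
n = 200
S1 = rng.normal(size=n)                  # stage-1 state
A1 = rng.choice([-1.0, 1.0], size=n)     # stage-1 treatment
S2 = 0.5 * S1 + rng.normal(size=n)       # stage-2 state
A2 = rng.choice([-1.0, 1.0], size=n)     # stage-2 treatment
Y = S2 + A2 * 0.7 * S2 + rng.normal(scale=0.5, size=n)  # final outcome

def phi(s, a):
    """Shared linear feature map phi(s, a), used at both stages."""
    return np.column_stack([np.ones_like(s), s, a, s * a])

def fit_q_shared(lam, max_iter=500, tol=1e-6):
    """Iterative shared-parameter Q-learning with ridge penalty lam.

    lam = 0 corresponds to an unpenalized Q-shared-style iteration,
    which may fail to converge; lam > 0 is a penalized variant.
    Assumes no intermediate reward between stages."""
    X = np.vstack([phi(S1, A1), phi(S2, A2)])  # stacked stage features
    p = X.shape[1]
    theta = np.zeros(p)
    for _ in range(max_iter):
        # Stage-1 pseudo-outcome: max over stage-2 actions of the
        # Q-function evaluated at the current shared theta.
        q2_max = np.maximum(phi(S2, np.ones(n)) @ theta,
                            phi(S2, -np.ones(n)) @ theta)
        targets = np.concatenate([q2_max, Y])
        # Penalized least squares on the stacked data.
        new_theta = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ targets)
        if np.linalg.norm(new_theta - theta) < tol:
            return new_theta, True
        theta = new_theta
    return theta, False

theta, converged = fit_q_shared(lam=5.0)
```

Because the stage-1 targets depend on the current estimate of the shared parameters, the fit must be iterated to a fixed point; the penalty term shrinks the update map, which is the intuition for why penalization can restore convergence when the unpenalized iteration diverges.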
Related Subject Headings
- 4905 Statistics
- 3102 Bioinformatics and computational biology