On incomplete learning and certainty-equivalence control
We consider a dynamic learning problem in which a decision maker sequentially selects a control and observes a response variable that depends on the chosen control and an unknown sensitivity parameter. After every observation, the decision maker updates his or her estimate of the unknown parameter and uses a certainty-equivalence decision rule to determine subsequent controls based on this estimate. We show that under this certainty-equivalence learning policy the parameter estimates converge with positive probability to an uninformative fixed point that can differ from the true value of the unknown parameter, a phenomenon referred to as incomplete learning. In stark contrast, we show that this certainty-equivalence policy may avoid incomplete learning if the parameter value of interest "drifts away" from the uninformative fixed point at a critical rate. Finally, we prove that one can adaptively limit the learning memory to improve the accuracy of the certainty-equivalence policy in both static (estimation) and slowly varying (tracking) environments, without relying on forced exploration.
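The feedback loop described above can be sketched in a minimal simulation. The sketch below assumes a simple linear response model `y_t = theta * u_t + eps_t` with a scalar target, which is an illustrative assumption and not necessarily the paper's exact setting; the function name `simulate_ce` and all parameter names are hypothetical.

```python
import random

def simulate_ce(theta_true, target, T=500, noise=0.1, seed=0):
    """Illustrative certainty-equivalence control loop.

    Assumed response model (not the paper's exact setting):
        y_t = theta_true * u_t + eps_t,  eps_t ~ N(0, noise^2).
    The controller aims to drive y_t to `target` using the current
    least-squares estimate of the unknown sensitivity parameter.
    """
    rng = random.Random(seed)
    sum_uy = 0.0        # running sum of u_t * y_t for least squares
    sum_uu = 0.0        # running sum of u_t^2
    theta_hat = 1.0     # initial guess for the unknown parameter
    for _ in range(T):
        u = target / theta_hat                  # certainty-equivalence control
        y = theta_true * u + rng.gauss(0.0, noise)  # observed response
        sum_uy += u * y
        sum_uu += u * u
        theta_hat = sum_uy / sum_uu             # updated parameter estimate
    return theta_hat
```

In this one-parameter toy model the controls stay informative and the estimate converges; the incomplete-learning phenomenon the abstract describes arises in richer settings where the certainty-equivalence controls can settle at a value that yields no further information about the unknown parameter.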
Related Subject Headings
- Operations Research
- 3507 Strategy, management and organisational behaviour
- 1503 Business and Management
- 0802 Computation Theory and Mathematics
- 0102 Applied Mathematics