Too many attributes: A test of the validity of combining discrete-choice and best-worst scaling data
Background: Best-practice guidelines for stated-preference methods suggest there is a limit to the number of attributes respondents can reliably evaluate. This study explores combining elicitation formats within a single study as a cost-effective way to obtain more preference information from a given sample while limiting respondents' cognitive burden.

Methods: A stated-preference survey administered to Alzheimer's disease caregivers combined discrete-choice experiment (DCE) and best-worst scaling (BWS) elicitation formats. DCE questions elicited attribute-level preferences for one subset of attributes, and object-case BWS questions elicited overall relative attribute importance for another subset, with two attributes appearing in both designs. Two alternative joint models combined preferences from the BWS and DCE data: one controlled for confounding between response-error variance and preference parameters in the DCE model, and the other did not.

Results: About 400 caregivers completed the survey. We estimated attribute-level preference parameters for 17 attributes, 9 of which were directly estimated from the DCE data and 8 of which were extrapolated from the overall relative importance estimated with the object-case BWS data. Results from both the joint models and the individual models indicate that relative preferences from the two question formats were the same up to a scale factor.

Conclusion: Our results suggest that combining DCE and object-case BWS is a cost-effective solution to the need for more information when study resources are limited. Moreover, for these data at least, researchers' concerns about serious confounding between DCE model estimates and response-error variance appear unwarranted.