
Too many attributes: A test of the validity of combining discrete-choice and best-worst scaling data

Journal Article
Zhang, J; Reed Johnson, F; Mohamed, AF; Hauber, AB
Published in: Journal of Choice Modelling
June 1, 2015

Background: Best-practice guidelines for stated-preference methods suggest there is a limit to the number of attributes respondents can reliably evaluate. This study explores a cost-effective solution, combining elicitation formats within a single study, to obtain more preference information from a given sample while limiting respondents' cognitive burden.

Methods: A stated-preference survey combining discrete-choice experiment (DCE) and best-worst scaling (BWS) elicitation formats was administered to Alzheimer's disease caregivers. DCE questions elicited attribute-level preferences for one subset of attributes, and object-case BWS questions elicited overall relative attribute importance for another subset, with two attributes appearing in both designs. Two alternative joint models combined preferences from the BWS and DCE data: one controlled for confounding between response-error variance and preference parameters in the DCE model, and the other did not.

Results: About 400 caregivers completed the survey. We estimated attribute-level preference parameters for 17 attributes, 9 of which were estimated directly from the DCE data and 8 of which were extrapolated from the overall relative importance estimated with the object-case BWS data. Results from both joint models and the individual models indicate that relative preferences from the two question formats were the same up to a scale factor.

Conclusion: Our results suggest that combining DCE and object-case BWS is a cost-effective way to obtain more preference information when study resources are limited. Moreover, for these data at least, researchers' concerns about serious confounding between DCE model estimates and response-error variance appear unwarranted.
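As a rough illustration of the pooling logic described in the abstract (a minimal sketch under standard multinomial-logit assumptions; the notation is ours, not the authors'), the two formats can share a common preference vector $\beta$ while differing in a format-specific scale parameter:

$$U_{ij}^{(k)} = \mu_k \, x_{ij}'\beta + \varepsilon_{ij}, \qquad k \in \{\mathrm{DCE}, \mathrm{BWS}\}, \qquad \mu_{\mathrm{DCE}} \equiv 1,$$

where the scale $\mu_k$ is inversely related to the response-error variance of format $k$. The finding that preferences from the two formats were "the same up to a scale factor" then corresponds to failing to reject $H_0\colon \beta_{\mathrm{BWS}} = \mu \, \beta_{\mathrm{DCE}}$, which is conventionally assessed by a likelihood-ratio comparison of pooled versus format-specific models with the scale parameter profiled over a grid.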


Published In

Journal of Choice Modelling

DOI

10.1016/j.jocm.2014.12.001
ISSN

1755-5345

Publication Date

June 1, 2015

Volume

15

Start / End Page

1 / 13

Citation

APA
Zhang, J., Reed Johnson, F., Mohamed, A. F., & Hauber, A. B. (2015). Too many attributes: A test of the validity of combining discrete-choice and best-worst scaling data. Journal of Choice Modelling, 15, 1–13. https://doi.org/10.1016/j.jocm.2014.12.001

Chicago
Zhang, J., F. Reed Johnson, A. F. Mohamed, and A. B. Hauber. “Too many attributes: A test of the validity of combining discrete-choice and best-worst scaling data.” Journal of Choice Modelling 15 (June 1, 2015): 1–13. https://doi.org/10.1016/j.jocm.2014.12.001.

ICMJE
Zhang J, Reed Johnson F, Mohamed AF, Hauber AB. Too many attributes: A test of the validity of combining discrete-choice and best-worst scaling data. Journal of Choice Modelling. 2015 Jun 1;15:1–13.

MLA
Zhang, J., et al. “Too many attributes: A test of the validity of combining discrete-choice and best-worst scaling data.” Journal of Choice Modelling, vol. 15, June 2015, pp. 1–13. Scopus, doi:10.1016/j.jocm.2014.12.001.

NLM
Zhang J, Reed Johnson F, Mohamed AF, Hauber AB. Too many attributes: A test of the validity of combining discrete-choice and best-worst scaling data. Journal of Choice Modelling. 2015 Jun 1;15:1–13.