Attribute weighting methods and decision quality in the presence of response error: A simulation study
This paper uses a simulation approach to investigate how different attribute weighting techniques affect the quality of decisions based on multiattribute value models. The weighting methods considered include equal weighting of all attributes, two methods for using judgments about the rank ordering of weights, and a method for using judgments about the ratios of weights. The question addressed is: How well does each method perform when based on judgments of attribute weights that are unbiased but subject to random error? The simulation results indicate that ratio weights were either better than rank order weights (when error in the ratio weights was small or moderate) or tied with them (when error was large). Both ratio weights and rank order weights were substantially superior to the equal weights method in all cases studied. Our findings suggest that it will usually be worth the extra time and effort required to assess ratio weights. In cases where the extra time or effort is too great, rank order weights will usually give a good approximation to the true weights. Comparisons of the two rank-order weighting methods favored the rank-order-centroid method over the rank-sum method. © 1998 John Wiley & Sons, Ltd.
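For reference, the two rank-order weighting methods compared in the abstract have standard closed forms: the rank-order-centroid weight for rank i among n attributes is (1/n) Σ_{k=i}^{n} 1/k, and the rank-sum weight is 2(n+1−i)/(n(n+1)). A minimal sketch (function names are illustrative, not from the paper):

```python
def roc_weights(n):
    # Rank-order-centroid: w_i = (1/n) * sum_{k=i}^{n} 1/k,
    # for attribute ranks i = 1 (most important) .. n (least important).
    return [sum(1.0 / k for k in range(i, n + 1)) / n for i in range(1, n + 1)]

def rank_sum_weights(n):
    # Rank-sum: w_i = 2 * (n + 1 - i) / (n * (n + 1)).
    return [2.0 * (n + 1 - i) / (n * (n + 1)) for i in range(1, n + 1)]

# Example with three attributes: both weight vectors sum to 1,
# but ROC concentrates more weight on the top-ranked attribute.
print(roc_weights(3))       # [11/18, 5/18, 2/18]
print(rank_sum_weights(3))  # [1/2, 1/3, 1/6]
```

Both sets of weights are surrogates for unknown true weights when a decision maker can only rank the attributes by importance; the abstract's finding is that the centroid surrogate approximates true weights better on average.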
Jia, J; Fischer, GW; Dyer, JS