Attribute weighting methods and decision quality in the presence of response error: A simulation study

Published

Journal Article

This paper uses a simulation approach to investigate how different attribute weighting techniques affect the quality of decisions based on multiattribute value models. The weighting methods considered include equal weighting of all attributes, two methods that use judgments about the rank ordering of the weights, and a method that uses judgments about the ratios of the weights. The question addressed is: how well does each method perform when it is based on judgments of attribute weights that are unbiased but subject to random error? The simulation results indicate that ratio weights were either better than rank-order weights (when error in the ratio weights was small or moderate) or tied with them (when error was large). Both ratio weights and rank-order weights were substantially superior to equal weights in all cases studied. Our findings suggest that it will usually be worth the extra time and effort required to assess ratio weights; where that time or effort is too great, rank-order weights will usually give a good approximation to the true weights. Comparisons of the two rank-order weighting methods favored the rank-order-centroid method over the rank-sum method. © 1998 John Wiley & Sons, Ltd.
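
As a point of reference, the following is a minimal sketch of the standard formulas for the weighting schemes named above: equal weights, rank-sum weights, rank-order-centroid (ROC) weights, and normalized ratio weights. The formulas are the ones commonly used in the multiattribute weighting literature; the code and the example numbers are illustrative assumptions, not material taken from the paper itself.

```python
import numpy as np


def equal_weights(n):
    """Equal weighting: each of the n attributes gets weight 1/n."""
    return np.full(n, 1.0 / n)


def rank_sum_weights(n):
    """Rank-sum weights: the weight for rank i is proportional to (n - i + 1)."""
    ranks = np.arange(1, n + 1)
    raw = n - ranks + 1
    return raw / raw.sum()


def rank_order_centroid_weights(n):
    """Rank-order-centroid (ROC) weights: w_i = (1/n) * sum_{j=i}^{n} 1/j."""
    return np.array([np.sum(1.0 / np.arange(i, n + 1)) / n
                     for i in range(1, n + 1)])


def ratio_weights(judged_ratios):
    """Ratio weights: normalize directly judged importance ratios to sum to 1."""
    r = np.asarray(judged_ratios, dtype=float)
    return r / r.sum()


# Illustration with four attributes, ranked from most to least important.
n = 4
print(equal_weights(n))                  # [0.25 0.25 0.25 0.25]
print(rank_sum_weights(n))               # [0.4  0.3  0.2  0.1]
print(rank_order_centroid_weights(n))    # ~[0.521 0.271 0.146 0.062]
print(ratio_weights([100, 60, 30, 10]))  # [0.5  0.3  0.15 0.05]
```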

Cited Authors

  • Jia, J; Fischer, GW; Dyer, JS

Published Date

  • January 1, 1998

Published In

  • Journal of Behavioral Decision Making

Volume / Issue

  • 11 / 2

Start / End Page

  • 85 - 105

International Standard Serial Number (ISSN)

  • 0894-3257

Digital Object Identifier (DOI)

  • 10.1002/(SICI)1099-0771(199806)11:2<85::AID-BDM282>3.0.CO;2-K

Citation Source

  • Scopus