Evaluation criteria for human-automation performance metrics

Published

Conference Paper

Previous research has identified broad metric classes for human-automation performance to facilitate metric selection, as well as the understanding and comparison of research results. However, there is still a lack of an objective method for selecting the most efficient set of metrics. This research identifies and presents a list of evaluation criteria that can help determine the quality of a metric in terms of experimental constraints, comprehensive understanding, construct validity, statistical efficiency, and measurement technique efficiency. Future research will build on these evaluation criteria and existing generic metric classes to develop a cost-benefit analysis approach that can be used for metric selection. © 2010 ACM.

Cited Authors

  • Donmez, B; Pina, PE; Cummings, ML

Published Date

  • December 1, 2008

Published In

  • Performance Metrics for Intelligent Systems (PerMIS) Workshop

Start / End Page

  • 77 - 82

International Standard Book Number 13 (ISBN-13)

  • 9781605582931

Digital Object Identifier (DOI)

  • 10.1145/1774674.1774687

Citation Source

  • Scopus