Diagnostic verification of probability forecasts
Verification of probability forecasts traditionally consists largely of the computation of a few overall performance measures. This paper outlines a diagnostic approach to the evaluation of probability forecasts. The basic elements of this approach are the joint distribution of forecasts and observations and the conditional and marginal distributions associated with factorizations of the joint distribution. These distributions and their summary measures, together with selected performance measures and their decompositions, provide potentially insightful and useful information concerning the fundamental characteristics of the forecasts of interest, the corresponding observations, and their relationship. This approach and the associated methodology are illustrated by presenting some results of an analysis of U.S. National Weather Service probability of precipitation (PoP) forecasts. The diagnostic analysis of PoP forecasts consists of graphical displays and quantitative measures describing various aspects (or attributes) of forecast quality, including calibration (or reliability), refinement, resolution, discrimination, accuracy, bias, and skill. In general, the samples of PoP forecasts examined here are relatively well-calibrated, unbiased, and skillful, but lacking to some degree in accuracy, refinement, resolution, and discrimination. Some differences in these characteristics as a function of forecast type (model-based/subjective), season (cool/warm), and lead time are noted. Diagnostic verification of probability forecasts has obvious benefits to modelers and forecasters in terms of providing detailed feedback and suggesting ways in which forecasts might be improved. © 1992.
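One standard example of the "performance measures and their decompositions" described above is the decomposition of the Brier score into reliability, resolution, and uncertainty terms, each computed from the conditional and marginal distributions obtained by factoring the joint distribution of forecasts and observations. The sketch below is illustrative only, not the paper's own code; the function name and data layout are our assumptions.

```python
from collections import defaultdict

def brier_decomposition(forecasts, outcomes):
    """Decompose the Brier score of probability forecasts for a binary
    event into reliability, resolution, and uncertainty components.

    forecasts: sequence of forecast probabilities in [0, 1]
    outcomes:  sequence of observed events coded 0/1
    (Illustrative sketch; not the paper's implementation.)
    """
    n = len(forecasts)
    base_rate = sum(outcomes) / n                  # marginal (climatological) frequency
    uncertainty = base_rate * (1.0 - base_rate)

    # Group observations by distinct forecast value: these groups estimate
    # the conditional distributions of observations given each forecast,
    # i.e. the calibration-refinement factorization of the joint distribution.
    groups = defaultdict(list)
    for f, x in zip(forecasts, outcomes):
        groups[f].append(x)

    reliability = resolution = 0.0
    for f, xs in groups.items():
        obs_freq = sum(xs) / len(xs)               # conditional relative frequency
        weight = len(xs) / n
        reliability += weight * (f - obs_freq) ** 2
        resolution += weight * (obs_freq - base_rate) ** 2

    brier = sum((f - x) ** 2 for f, x in zip(forecasts, outcomes)) / n
    # Identity: brier == reliability - resolution + uncertainty
    return brier, reliability, resolution, uncertainty
```

A perfectly calibrated set of forecasts drives the reliability term to zero, while sharper (more refined) forecasts that still verify well increase the resolution term; plotting the conditional relative frequencies against the forecast values yields the reliability diagram used in such diagnostic displays.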