What can we learn from a decade of database audits? The Duke Clinical Research Institute experience, 1997-2006.

Published

Journal Article

BACKGROUND: Despite a pressing and well-documented need for better sharing of information on clinical trial data quality assurance methods, many research organizations remain reluctant to publish descriptions of, and results from, their internal auditing and quality assessment methods.

PURPOSE: We present findings from a review of a decade of internal data quality audits performed at the Duke Clinical Research Institute (DCRI), a large academic research organization that conducts data management for a diverse array of clinical studies, both academic and industry-sponsored. In doing so, we hope to stimulate discussions that could benefit the wider clinical research enterprise by providing insight into methods of optimizing data collection and cleaning, ultimately helping patients and furthering essential research.

METHODS: We present our audit methodologies, including sampling methods, audit logistics, sample sizes, counting rules used for error rate calculations, and characteristics of audited trials. We also present database error rates computed according to two analytical methods, which we address in detail, and discuss the advantages and drawbacks of the two auditing methods used during this 10-year period.

RESULTS: Our review of the DCRI audit program indicates that higher data quality may be achieved through a series of small audits conducted throughout a trial rather than through a single large database audit at database lock. We found that error rates trended upward from year to year during the period characterized by traditional audits performed at database lock (1997-2000), but trended consistently downward after periodic statistical process control (SPC)-type audits were instituted (2001-2006). These gains in data quality were also associated with auditing cost savings estimated at 1000 hours per year, or the effort of one-half of a full-time equivalent (FTE).

LIMITATIONS: Our findings are drawn from retrospective analyses rather than controlled experiments and may therefore be subject to unanticipated confounding. In addition, the scope and type of audits we examine here are specific to our institution, and our results may not be broadly generalizable.

CONCLUSIONS: Use of statistical process control methodologies may afford advantages over more traditional auditing methods, and further research will be necessary to confirm the reliability and usability of such techniques. We believe that open and candid discussion of data quality assurance issues among academic and clinical research organizations will ultimately benefit the entire research community in the coming era of increased data sharing and re-use.
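The SPC-type periodic auditing that the abstract contrasts with lock-time audits can be illustrated with a small sketch. This is a generic p-chart (attribute control chart) example, not the DCRI's actual method; the error counts and sample sizes below are made up for illustration.

```python
# Hypothetical SPC-style audit check using a p-chart (illustration only,
# not the DCRI's actual procedure). Each periodic audit samples a number
# of database fields and counts errors; a sample whose error rate falls
# outside the 3-sigma control limits signals a data quality problem
# worth investigating before database lock.

from math import sqrt

def p_chart(errors, fields_audited):
    """Return (error_rate, lower, upper, in_control) for each audit sample."""
    p_bar = sum(errors) / sum(fields_audited)  # center line: pooled error rate
    results = []
    for e, n in zip(errors, fields_audited):
        p = e / n
        sigma = sqrt(p_bar * (1 - p_bar) / n)
        lower = max(0.0, p_bar - 3 * sigma)
        upper = p_bar + 3 * sigma
        results.append((p, lower, upper, lower <= p <= upper))
    return results

# Five periodic audits of 1000 fields each; the fourth shows a spike.
audits = p_chart(errors=[4, 6, 5, 21, 3], fields_audited=[1000] * 5)
for i, (p, lo, hi, ok) in enumerate(audits, 1):
    print(f"audit {i}: rate={p:.4f} limits=({lo:.4f}, {hi:.4f}) "
          f"{'in control' if ok else 'OUT OF CONTROL'}")
```

Because each small audit is checked against control limits as the trial runs, a deteriorating data-entry process can be caught and corrected early, which is consistent with the downward error-rate trend the authors report after 2001.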

Cited Authors

  • Rostami, R; Nahm, M; Pieper, CF

Published Date

  • April 2009

Volume / Issue

  • 6 / 2

Start / End Page

  • 141 - 150

PubMed ID

  • 19342467

International Standard Serial Number (ISSN)

  • 1740-7745

Digital Object Identifier (DOI)

  • 10.1177/1740774509102590

Language

  • eng

Place of Publication

  • England