A Comprehensive Review of Effect Size Reporting and Interpreting Practices in Academic Journals in Education and Psychology

Journal Article

Null hypothesis significance testing has dominated quantitative research in education and psychology. However, the statistical significance of a test, as indicated by a p-value, does not speak to the practical significance of the study. Reporting an effect size to supplement the p-value is therefore strongly recommended by scholars, journal editors, and academic associations. As a measure of practical significance, an effect size quantifies the magnitude of mean differences or the strength of associations and directly answers the research questions. Furthermore, comparing effect sizes across studies facilitates meta-analytic assessment of an effect and the accumulation of knowledge. In the current comprehensive review, we investigated the most recent effect size reporting and interpreting practices in 1,243 articles published in 14 academic journals from 2005 to 2007. Overall, 49% of the articles reported effect size, and 57% of those also interpreted it. To promote good research methodology in education and psychology, we provide an illustrative example of reporting and interpreting effect size in a published study, and we summarize a 7-step guideline for quantitative researchers along with recommended resources on how to understand and interpret effect size. © 2010 American Psychological Association.
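To illustrate the kind of effect size the review discusses, here is a minimal sketch (not taken from the paper) of Cohen's d, a common standardized-mean-difference effect size, computed with a pooled standard deviation. The group data are invented for demonstration only.

```python
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Cohen's d: standardized mean difference using the pooled SD."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)  # sample SDs (n - 1 denominator)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(group1) - mean(group2)) / pooled_sd

# Hypothetical scores for two groups
treatment = [5.1, 4.8, 6.0, 5.5, 4.9]
control = [4.2, 4.0, 4.8, 4.4, 4.1]
print(f"Cohen's d = {cohens_d(treatment, control):.2f}")
```

Unlike a p-value, d is unaffected by sample size: it directly expresses how far apart the group means are in standard-deviation units, which is the "practical significance" the abstract refers to.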

Cited Authors

  • Sun, S; Pan, W; Wang, LL

Published Date

  • November 1, 2010

Volume / Issue

  • 102 / 4

Start / End Page

  • 989 - 1004

International Standard Serial Number (ISSN)

  • 0022-0663

Digital Object Identifier (DOI)

  • 10.1037/a0019507

Citation Source

  • Scopus