Abstract

Recent metascience suggests that motor behaviour research may be underpowered, on average. Researchers can perform a priori power analyses to ensure adequately powered and informative studies. Although conducting a power analysis can be straightforward, there are common pitfalls that can result in underestimating the required sample size for a given design and effect size of interest. Critical evaluation of power analyses requires successful analysis reproduction, which is conditional on the reporting of sufficient information. Here we attempted to reproduce every power analysis reported (k = 85) in three motor behaviour journals between January 2019 and June 2021. Two researchers independently attempted to reproduce the power estimates with the reported information provided, as well as by making plausible assumptions for missing parameters. We reproduced 8% (n = 7) of power analyses using the information provided. This increased to 35% (n = 30) when we assumed plausible values for missing parameters. Among studies that reported sufficient details to evaluate, we found that 64% used the same statistical analysis in the power analysis as in the study itself. Similarly, the design used in the power analysis matched at least one of the identified hypotheses in 78% of studies we could evaluate. Overall, we observed that power analyses were not commonly reported with sufficient information to reproduce the results, and a non-trivial number of power analyses were affected by common pitfalls. There is substantial opportunity to address the issue of underpowered research in motor behaviour by increasing adoption of power analyses and ensuring reproducible reporting practices.
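To illustrate the kind of a priori calculation the abstract refers to, the sketch below computes a required per-group sample size for a two-sided, two-sample comparison using the normal approximation. This is a generic, hypothetical example for illustration only; it is not the procedure used in any study reviewed here, and exact noncentral-t methods (as in tools like G*Power) give slightly larger estimates.

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided, two-sample t-test
    via the normal approximation (illustrative helper only)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = z.inv_cdf(power)           # quantile for the desired power
    # n = 2 * ((z_alpha + z_beta) / d)^2, rounded up to a whole participant
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# A medium standardized effect (Cohen's d = 0.5) at alpha = .05, 80% power
print(n_per_group(0.5))  # -> 63 per group (exact t-based methods give ~64)
```

Note how sensitive the estimate is to the assumed effect size: halving d to 0.25 roughly quadruples the required n, which is one reason optimistic effect-size assumptions are a common source of underpowered designs.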
Acknowledgments: NSERC; McMaster University