Statistical review is almost always needed for peer-reviewed journals to ensure proper evaluation of a study's sample size, sampling, randomization, blinding, interpretation of findings, and handling of uncertainty through confidence intervals. Reviewers should check that authors who report no effect provide confidence intervals that exclude clinically relevant differences, and that authors who report a difference interpret the range of values contained in the interval. This helps improve the rigor and reliability of clinical research.
So You Want To Be a Reviewer?
1. Aleksandra Turkiewicz, PhD, CStat
Associate editor for statistics, Osteoarthritis and Cartilage
Clinical epidemiology unit, Lund University, Lund, Sweden
So You Want To Be a Reviewer?
Tips for Writing an Effective Review
for Peer-Reviewed Journals
3. When do we need statistical review?
(Almost) always
Deputy editor:
Prof. Jonas Ranstam
Lund University
Associate editor:
Prof. Simon Skene
University of Surrey
7. Embrace uncertainty!
1. If the authors claim no difference: demand a confidence
interval that excludes a
biologically/clinically relevant difference (see the sketch below)
2. If the authors claim there was a difference: demand a
confidence interval and an interpretation
of the values included in that interval
3. If the authors only report "significant": they probably have
little to offer
[Slide image, credit: Bartolomeo Cesi]
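A minimal sketch of point 1 above, not from the slides: compute a 95% confidence interval for a mean difference and compare it with an assumed clinically relevant threshold, rather than reading off a p-value. The simulated data, the threshold of 2.0, and the SciPy-based approach (confidence_interval requires SciPy 1.10 or later) are all illustrative assumptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
treatment = rng.normal(loc=0.5, scale=5.0, size=50)  # hypothetical outcome data
control = rng.normal(loc=0.0, scale=5.0, size=50)

res = stats.ttest_ind(treatment, control)
ci = res.confidence_interval(confidence_level=0.95)  # 95% CI for the mean difference
relevant = 2.0  # assumed smallest clinically relevant difference

print(f"95% CI for the difference: ({ci.low:.2f}, {ci.high:.2f})")
if -relevant < ci.low and ci.high < relevant:
    print("CI excludes a clinically relevant difference: 'no relevant effect' is defensible.")
else:
    print("CI includes clinically relevant values: do not claim 'no difference'.")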
Editor's Notes
#5: How was the sample size arrived at? (A sample-size sketch follows below.)
How were the participants/samples selected?
What was randomized (cell wells, joints, animals, humans), and how?
Blinding: in the conduct of the experiment, how was it done, or why was it not? In the assessment of the outcome it is a must-have in experimental research!
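A minimal sketch, not from the slides, of how a reviewer might sanity-check a reported sample size under an assumed design: a two-sample t-test with a standardized effect size of 0.5, 80% power, and a two-sided alpha of 0.05 (all hypothetical planning values). Uses statsmodels.

from statsmodels.stats.power import TTestIndPower

# Hypothetical planning assumptions: Cohen's d = 0.5, 80% power, alpha = 0.05.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, power=0.80, alpha=0.05)
print(f"Required sample size per group: {n_per_group:.1f}")  # roughly 64 per group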
#6: Criticism of the use of p-values is almost as old as p-values themselves, but it has intensified in recent years, and for good reason. In effect, the concept of statistical significance has recently died.
#7: Statistical significance has recently died. Why are p-values, and especially the classification of results into statistically significant and non-significant, so bad? There are many reasons, but some of the main ones are:
- a large p-value does not mean that there is no difference
- a small p-value does not mean that there is a difference
- whether there is "a difference" or "no difference" is not the important question at all; such a distinction is artificial. What matters is how big the difference is and what biological and clinical consequences and meaning a difference of this size has, and p-values do not answer this crucial question. Further, practically any data can be analysed in a way that leads to a statistically significant p-value through data-driven decisions, both conscious and unconscious. I think we should all be happy that it is gone and dance on its grave. So what to do instead?
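A minimal sketch, not from the slides, illustrating the point above with simulated data: a trivially small difference typically reaches a tiny p-value when the sample is huge, while a large difference may well yield a non-significant p-value when the sample is small, so the p-value alone says nothing about how big or meaningful a difference is. All numbers are illustrative assumptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Trivial difference (0.03 SD), enormous sample: the p-value is typically tiny.
a = rng.normal(0.00, 1.0, size=100_000)
b = rng.normal(0.03, 1.0, size=100_000)
print("trivial effect, n = 100,000 per group:", stats.ttest_ind(a, b).pvalue)

# Large difference (0.8 SD), small sample: the p-value may well exceed 0.05.
c = rng.normal(0.0, 1.0, size=10)
d = rng.normal(0.8, 1.0, size=10)
print("large effect, n = 10 per group:", stats.ttest_ind(c, d).pvalue)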