ManuscriptMind Team · 2 min read

5 Common Methodology Issues That Get Manuscripts Rejected

Learn the most frequent methodology problems peer reviewers identify and how to avoid them in your research manuscripts.

Tags: methodology, peer review, academic writing, tips

After analyzing thousands of manuscripts, we've identified the most common methodology issues that lead to rejection or major revision requests. Here's what to watch out for.

1. Insufficient Sample Size Justification

One of the most frequent critical issues we flag is the lack of power analysis or sample size justification.

The Problem: Stating "we recruited 50 participants" without explaining why 50 is sufficient.

The Fix: Include a power analysis in your methods section:

A priori power analysis (G*Power 3.1) indicated that
a sample of 48 participants would provide 80% power
to detect a medium effect size (d = 0.5) at α = .05.
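If you want to sanity-check a number like this before (or instead of) opening G*Power, the calculation is simple enough to sketch yourself. The function below uses the standard normal approximation for a two-sample t-test; exact t-based tools such as G*Power typically return a value about one participant per group higher, and the "48 participants" figure above depends on the specific design entered into G*Power, which this sketch does not try to reproduce.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sample t-test
    (normal approximation; exact t-based software runs slightly higher)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-tailed critical value
    z_power = z.inv_cdf(power)          # quantile for the desired power
    return ceil(2 * ((z_alpha + z_power) / d) ** 2)

print(n_per_group(0.5))  # 63 per group under the normal approximation
```

Reporting the inputs (effect size, alpha, power, test family) matters more to reviewers than which tool produced the number.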

2. Missing Control Groups

Studies that make causal claims without appropriate control conditions face immediate scrutiny.

The Problem: "Treatment X improved outcomes" without a comparison group.

The Fix: Design studies with:

  • Placebo or active control groups
  • Pre-post measurements
  • Randomization procedures
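The randomization bullet deserves more than a sentence in your methods section, because reviewers will ask *how* you randomized. A common choice is blocked randomization, which keeps group sizes balanced throughout recruitment; here is a minimal sketch (the function and condition names are illustrative, not from any particular trial toolkit):

```python
import random

def block_randomize(n_participants, conditions=("treatment", "control"),
                    block_size=4, seed=None):
    """Blocked randomization: shuffle assignments within fixed-size blocks
    so group sizes never drift far apart during recruitment."""
    assert block_size % len(conditions) == 0, "block must divide evenly"
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_participants:
        block = list(conditions) * (block_size // len(conditions))
        rng.shuffle(block)  # randomize order within this block only
        allocation.extend(block)
    return allocation[:n_participants]

print(block_randomize(8, seed=1))
```

Whatever scheme you use, state it explicitly ("randomized in blocks of four using computer-generated sequences") rather than just "participants were randomized."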

3. Inadequate Blinding Procedures

Bias can creep in when researchers or participants know the experimental conditions.

The Problem: Not specifying who was blinded and how.

The Fix: Clearly describe:

  • Single, double, or triple blinding
  • How allocation was concealed
  • Who was blinded (participants, researchers, assessors)

4. Unclear Inclusion/Exclusion Criteria

Vague participant selection criteria make replication impossible.

The Problem: "Healthy adults were recruited from the university."

The Fix: Be specific:

Inclusion criteria: Adults aged 18-65 with no history of neurological disorders, normal or corrected-to-normal vision, and right-hand dominance.

Exclusion criteria: Current use of psychoactive medications, pregnancy, or participation in similar studies within the past 6 months.
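A useful test of whether your criteria are specific enough: could they be written as a screening checklist? If yes, they are probably replicable. The sketch below encodes the example criteria above as a function; the field names are our own invention, chosen only to mirror the prose:

```python
def eligible(p):
    """Screen a participant record against the example criteria above.
    Field names are illustrative, not from any specific dataset."""
    meets_inclusion = (
        18 <= p["age"] <= 65
        and not p["neurological_history"]
        and p["vision"] in ("normal", "corrected-to-normal")
        and p["handedness"] == "right"
    )
    meets_exclusion = (
        p["psychoactive_meds"]
        or p["pregnant"]
        or p["similar_study_past_6mo"]
    )
    return meets_inclusion and not meets_exclusion
```

If a criterion resists being written this precisely ("generally healthy"), that vagueness is exactly what a reviewer will flag.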

5. Inappropriate Statistical Tests

Using the wrong analysis method is a major red flag for reviewers.

Common Mistakes:

  • Using parametric tests on non-normal data
  • Multiple t-tests instead of ANOVA
  • Ignoring nested data structures
  • Not correcting for multiple comparisons

The Fix: Justify your statistical choices:

Data were analyzed using a mixed-effects model to account
for the repeated measures design and non-independence of
observations within participants (Baayen et al., 2008).
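Of the mistakes listed above, failing to correct for multiple comparisons is the easiest to fix, and you do not need specialized software to do it. A minimal sketch of the Holm-Bonferroni step-down procedure, which is uniformly more powerful than plain Bonferroni:

```python
def holm_correction(pvalues, alpha=0.05):
    """Holm-Bonferroni step-down: test p-values from smallest to largest
    against successively looser thresholds alpha/m, alpha/(m-1), ...
    Returns a reject/retain decision for each hypothesis, in input order."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    rejected = [False] * m
    for rank, i in enumerate(order):
        if pvalues[i] <= alpha / (m - rank):
            rejected[i] = True
        else:
            break  # step-down: once one test fails, all larger p-values fail
    return rejected

print(holm_correction([0.01, 0.04, 0.03, 0.005]))
# [True, False, False, True]
```

In your manuscript, a single sentence ("p-values were Holm-corrected for the four planned comparisons") is usually enough to preempt the objection.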

How ManuscriptMind Helps

Our AI analysis specifically checks for these methodology issues, providing:

  • Issue identification with severity ratings
  • Specific locations in your manuscript
  • Actionable suggestions for improvement

Upload your manuscript today to catch these issues before submission.


Have questions about methodology? Contact our support team or explore our other writing tips.
