How to Turn Your “Study Limitations” From a Weak Apology Into a Mark of Rigor (And How AI Can Help)

Most limitations sections are written like a confession booth OR a courtroom defense.

Neither helps the reader.

For years, mine didn’t either.

I’d get to the end of a paper, tired, ready to submit, and I’d treat “Limitations” like a box to check.

Then I read Lorelei Lingard’s short piece, The art of limitations, and it put words to something I felt but couldn’t articulate.

She describes 3 common ways people write limitations (Lingard, 2015):

  1. Confessional,
  2. Dismissal, and
  3. Reflection

For most of my research life, I used the first 2.

→ They’re the most common.

→ They’re also the least useful.

Not because they’re “wrong.”

Because they fail at the real job of a limitations section.

And the job is not to protect you.

The job is to help the reader calibrate uncertainty.

Here’s how to do the limitations section right:

The real reason we write bad limitations

These 3 styles are not random.

They’re responses to peer review.

  • Confessional is the “if I admit it first, reviewers won’t punish me” instinct.
  • Dismissal is the “if I don’t downplay this, the paper will get rejected” instinct.
  • Reflection is the only one that actually serves science: it lays out where uncertainty comes from and how that uncertainty should shape interpretation. But it feels risky (won't it undermine my results?), which is why it is rarely the natural instinct.

Once I understood that, the limitations section stopped being emotional.

No more apology.

No more defensiveness.

Just clean thinking.

And the best journals and reporting guidelines want essentially the same thing.

Not a list of “flaws.”

Not a “trust us.”

They want you to talk about bias, imprecision, and (this is the part most people skip) the likely direction the limitation could push your results (overestimate? underestimate? limited generalizability?). See the STROBE reporting guidelines.

1️⃣ The Confessional ❌

This is the please-forgive-me limitations section.

  • Small sample size
  • Single-center study
  • Missing data
  • Convenience sampling
  • “We acknowledge…” repeated five times

It’s honest.

But it reads like you’re apologizing for doing research in the real world.

And it doesn’t tell the reader the one thing they actually need:

How does this limitation change what I can conclude?

A list of flaws is not an argument.

It’s a guilt dump.

Upgrade move (tiny but powerful): don’t just name the limitation, name the mechanism and the consequence.

Instead of:

Single-center study.

Write:

We recruited from a tertiary academic center, which may enrich for higher disease severity and referral complexity. This creates uncertainty about generalizability to community settings, where phenotype mix and treatment patterns may differ.

That’s still honest.

But now it’s useful.

2️⃣ The Dismissal ❌

This one sounds more confident.

But it’s still weak.

  • “Yes, X is a limitation. However, it likely did not influence our findings.”
  • “Although Y was not measured, our results remain valid because we did Z.”

This is the admit → dismiss pattern.

It’s defensive.

And it subtly tells the reader: don’t worry about this, just trust us.

That’s not how serious science earns trust.

Because the reader’s next thought is predictable.

“If it truly didn’t matter, why bring it up at all?”

The nuance here: sensitivity analyses and robustness checks are great—but they’re not a magic eraser.

They should reduce uncertainty, not pretend to eliminate it.

So instead of:

Missingness was unlikely to affect our findings because results were similar after imputation.

Try:

Missingness could bias estimates if data were not missing at random. Results were directionally consistent across complete-case and imputed analyses, which reduces concern about large bias, but residual uncertainty remains if missingness depended on unmeasured severity or access factors.

Same science.

Completely different tone.

One is defense.

The other is calibration.

Unfortunately, this was my go-to way of writing limitations for a very long time, until I decided to upgrade it. And I suspect it is far too common.

3️⃣ The Reflection ✔️

The only one that works

This is the approach I wish I had learned earlier.

The limitations section is not an apology.

It’s not a shield against reviewers.

Think of it as an argument about uncertainty.

An argument that answers:

  • Where uncertainty comes from
  • Why it exists because of specific design choices
  • What that uncertainty means for interpretation
  • How others should (and should not) use the findings
  • What kind of future work would reduce this uncertainty

Or even simpler:

What did we choose → what uncertainty did that create → how should others take up this knowledge?

That’s the job.

The 4-line template I use now

When you write each limitation, force yourself into this structure:

  • Design choice: What we did
  • Uncertainty: What that choice leaves unclear
  • Implication: What it means for generalizability, causality, or magnitude
  • Future work: What would reduce this uncertainty

This works for technical limitations (missingness, measurement error, single site).

It also works for conceptual limitations (theoretical frame, variable selection, what you chose to prioritize).

Example: Limitations written as a reflection

However, several limitations should be considered when interpreting our results. First, our decision to recruit solely from academic centers with PsA expertise creates uncertainty regarding the full spectrum of disease phenotypes captured in this study. This selection bias likely explains why we did not identify a “mild PsA with severe psoriasis” cluster, as these patients may predominantly receive care in dermatology clinics rather than the rheumatology-focused sites in our consortium. Consequently, our clusters may not be fully generalizable to community practices where the distribution of disease severity differs.

Next, sampling patients at various disease stages rather than using an inception cohort introduces ambiguity regarding the observed stability of these clusters. It remains unclear whether this stability reflects intrinsic biological phenotypes or the stabilizing effects of prior long-term management. Similarly, the significant improvement observed in Cluster 3 (Severe PsA/Severe PsO) must be interpreted with caution. While this difference may suggest distinct treatment responsiveness, we cannot rule out the influence of regression to the mean, a statistical artifact common in longitudinal studies of high disease activity.

Finally, our specific choices regarding variable selection and data capture shaped the resulting taxonomy. By utilizing total joint counts to reduce redundancy, we may have obscured anatomical subtypes that do not correlate strictly with severity. Additionally, relying on physician discretion for axial imaging rather than systematic screening likely led to an underestimation of asymptomatic axial involvement, suggesting the absence of an “axial” cluster may be a result of measurement bias rather than biological reality.

Notice what’s happening.

No apologizing.

No hand-waving.

No pretending the issues don’t matter.

Each limitation explains how the results should be interpreted.

And where the edges of certainty are.

That’s what reviewers respect.

That’s what readers trust.

And that’s what a limitations section is actually for.

Learn it using your own paper

The example above may or may not map to your field.

The fastest way to learn this is to do it on your own manuscript.

Here’s a prompt you can use:

(If you’re using this for an unpublished draft, remember: this is an AI’s first draft, not your first draft. You still need to own the logic and the claims. Make sure that it is still true to your study.)

PROMPT

Rewrite the limitations section of the paper using the reflection approach.

Lay out the aspects of the research design that create uncertainty about the knowledge contribution, paying attention to the nature of the uncertainty and its implications.

Reflection sounds like:

  • Here’s the research design choice we made
  • Here’s the uncertainty it creates
  • Here’s the likely direction of that uncertainty
  • Here’s how you should interpret and apply our findings because of it

EXAMPLE:

[paste the example limitations section above here]

MY PAPER:

[paste or attach your paper. Ideally the full paper, not just the limitations section]

Quick gut check for your next paper

💬 When you read your limitations section, does it sound like an apology, a defense, or a reflection?

PROMPT OF THE WEEK

The 10-Sec Resume Test

Prompt: Act as a senior recruiter and hiring manager reviewing the first page of my resume for the role of {role} (Job Description attached).

You have 10 seconds to scan the resume, as you would in a high-volume hiring process with strong competition.

Please evaluate:

  • Immediate attention drivers: What sections, bullets, and keywords stand out first—and why?
  • Role alignment: How clearly does my experience map to the job requirements and success criteria in the JD?
  • Keyword & signal strength: Which required or high-value keywords are present, missing, weak, or misaligned (including ATS-relevant terms)?
  • Differentiation: What makes me competitive—or forgettable—compared to other strong candidates?
  • Red flags or friction points: Anything that causes hesitation, confusion, or skepticism.
  • Interview likelihood: Estimate the probability (0–100%) that you would advance me to an interview, and explain the rationale.

Be ruthlessly honest and assume the bar is high.
Conclude with specific, prioritized changes that would most improve my interview callback rate.

Source: Superhuman

P.S. Research Boost can help you craft a strong Discussion section, including structuring your limitations section the way I described here with nuance.

Start writing your next manuscript FREE here: https://researchboost.com/

