The 3 Gaps Behind Every Great Research Study (And how to spot them in your own clinical work)

Finding the right research question is what most of my mentees struggle with.

Not the statistics.

Not the study design.

Not even the writing.

It’s this:

Where do I begin?

And here’s the answer I keep coming back to:

Almost every meaningful, high-impact study begins the same way…

By identifying a gap.

But not just any gap.

A real, consequential one that reflects what the field doesn’t yet know, hasn’t yet seen, or hasn’t yet acted on.

Fundamentally, these gaps fall into 3 categories:

1️⃣ Gap in knowledge (we don’t know about it)

2️⃣ Gap in thinking (we’ve been thinking about it all wrong)

3️⃣ Gap in practice (we’ve not been applying it well)

Let’s break each one down—

1️⃣ Gap in Knowledge

“We don’t know enough—yet.”

This is the most traditional type of research gap.

It’s about absence: no data, outdated data, or conflicting data.

But spotting a true knowledge gap takes more than curiosity.

It takes clinical judgment and pattern recognition.

Here's what it can look like:

A. Knowledge Gap

We simply don’t have the answer yet.

→ Example: What are the long-term cardiovascular outcomes of GLP-1 receptor agonists in non-diabetic obese patients?

They’re approved for weight loss. They improve blood pressure and lipids. But do they reduce MI or stroke risk in those without diabetes?

That’s a wide-open question—and a meaningful one.

B. Contradictory Findings

The evidence exists—but it doesn’t agree.

→ Example: Does intensive blood pressure control reduce the risk of cognitive decline?

A recent trial suggested a possible benefit. But observational studies show mixed signals. And the definitions of “cognitive decline” vary.

That’s a setup for a robust trial or meta-analysis with more nuanced endpoints.

C. Evidence Gap

The theory exists. The real-world data doesn’t.

→ Example: Do wearable devices improve glycemic control in patients with type 2 diabetes?

We think they should. They promote self-monitoring. But RCT-level evidence is scarce—and the few trials that do exist have short follow-up and selective populations.

This gap is waiting to be filled with pragmatic trials.

D. Temporal Gap

Old data, new questions.

→ Example: The last NHANES-based study on hypertension control disparities by race and ethnicity was in 2016.

Since then:

→ New guidelines.

→ COVID-19 disruptions.

→ Widening inequities.

Do those findings still hold?

This is the kind of question that needs revisiting—regularly.

Tip: Read recent guidelines. Look for the footnotes that say “limited evidence” or “low certainty.” That’s where the gap often lives.

2️⃣ Gap in Thinking

“We’ve been approaching this the wrong way.”

This is the most underrated gap—and often the most powerful.

It’s not about missing data.

It’s about questioning assumptions.

A. Theoretical Gap

We don’t have a model that explains what we’re seeing.

→ Example: Why do some patients with similar A1c levels experience vastly different cardiovascular outcomes?

We suspect it’s about glycemic variability. Or inflammatory pathways. But our current risk models don’t account for that.

That’s a theoretical gap—an opening for new frameworks or predictive tools.

B. Methodological Gap

The tools we’re using don’t reflect real life.

→ Example: Randomized trials for hypertension management often exclude patients with multiple comorbidities.

But those are exactly the patients we see in primary care.

Could EHR-based observational cohorts or adaptive trials better reflect the real-world complexity?

→ Bonus: Using natural language processing to extract nuanced social risk factors from notes—smoking patterns, medication adherence, diet—might outperform structured fields.
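
To make that concrete: here's a minimal Python sketch of the rule-based version of that idea, flagging smoking-status mentions in free-text notes. The patterns and the sample note are illustrative assumptions, not a validated clinical NLP pipeline (real projects typically reach for tools like medspaCy or cTAKES and much richer lexicons).

```python
import re

# Illustrative patterns only -- a validated clinical NLP pipeline would use
# dedicated tooling (e.g., medspaCy, cTAKES) and a far richer lexicon.
SMOKING_PATTERNS = {
    "current_smoker": re.compile(r"\b(current(ly)?\s+smok\w+|smokes?\s+\d+)\b", re.I),
    "former_smoker": re.compile(r"\b(former\s+smoker|quit\s+smoking|ex-smoker)\b", re.I),
    "never_smoker": re.compile(r"\b(never\s+smoked|denies\s+(any\s+)?tobacco)\b", re.I),
}

def extract_smoking_status(note_text: str) -> str:
    """Return the first smoking-status label whose pattern matches the note."""
    for label, pattern in SMOKING_PATTERNS.items():
        if pattern.search(note_text):
            return label
    return "unknown"

# Hypothetical, de-identified note fragment for demonstration.
note = "Social history: former smoker, quit smoking in 2019. Denies alcohol use."
print(extract_smoking_status(note))  # -> former_smoker
```

Even a crude pass like this can tell you whether the signal in the notes is worth a full study.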

C. Contextual Gap

Findings don’t generalize across populations.

→ Example: We know SGLT2 inhibitors reduce heart failure hospitalizations in type 2 diabetes. But how do they perform in patients over 75? Or in populations with food insecurity?

These patients were underrepresented in the pivotal trials.

That’s a contextual gap—and the research opportunity is huge.

Tip: When reading a “landmark trial,” ask: “Who’s missing here?” AND “Who do the results not apply to?” That’s often the next study.

3️⃣ Gap in Practice

“We know what works. We’re just not doing it.”

This is the domain of implementation science.

You’re not discovering new treatments—you’re figuring out how to deliver what we already know works.

It’s about doing better, not knowing more.

A. Implementation Gap

Evidence-based strategies aren’t consistently used.

→ Example: Despite strong evidence, why is the use of statins in high-risk diabetic patients still suboptimal?

Your study could examine clinician-level inertia, patient mistrust, or system-level barriers like prior authorization.

It's not a new drug or a molecular pathway. But it could change more lives than either.

B. Awareness Gap

Guidelines exist—but no one seems to know them.

→ Example: Are emergency physicians aware of ACC/AHA guidelines on blood pressure rechecks after acute care visits?

Many hypertensive patients leave the ED without any clear follow-up plan—even when systolic readings are dangerously high.

This isn’t about innovation. It’s about translation.

C. Access Gap

Proven interventions don’t reach everyone.

→ Example: Continuous glucose monitors improve outcomes in type 1 diabetes.

So why are they so underused in Medicaid populations?

Is it policy? Provider training? Patient digital literacy?

Research here isn’t just publishable. It’s policy-shaping.

D. Training or Competency Gap

Clinicians know what to do—but don’t feel equipped.

→ Example: Many clinicians recognize the benefits of motivational interviewing for lifestyle change, but don’t feel confident using it.

What kind of training models could bridge that?

A short online course? Embedded clinical prompts?

That’s a researchable question.

E. Systems Gap

Workflow or technology breaks the chain.

→ Example: Why don’t more EHRs prompt physicians to initiate aspirin in high-risk diabetics with known ASCVD?

Sometimes the gap isn’t human. It’s a missing checkbox in a clinical decision support tool.

Implementation research can fix that too.
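
To show how small that missing piece can be: below is a hedged Python sketch of the kind of rule a decision support tool might run. The patient fields (has_ascvd, has_diabetes, active_meds) are hypothetical stand-ins for illustration; a real EHR would expose this data through FHIR resources or vendor APIs, not plain attributes.

```python
from dataclasses import dataclass, field

@dataclass
class Patient:
    # Hypothetical fields for illustration only.
    has_ascvd: bool
    has_diabetes: bool
    active_meds: list[str] = field(default_factory=list)
    aspirin_contraindicated: bool = False

def aspirin_prompt_needed(pt: Patient) -> bool:
    """Prompt for aspirin in diabetic patients with known ASCVD who
    aren't already on it and have no recorded contraindication."""
    on_aspirin = any("aspirin" in med.lower() for med in pt.active_meds)
    return (pt.has_ascvd and pt.has_diabetes
            and not on_aspirin
            and not pt.aspirin_contraindicated)

pt = Patient(has_ascvd=True, has_diabetes=True, active_meds=["metformin"])
if aspirin_prompt_needed(pt):
    print("CDS alert: consider initiating aspirin (known ASCVD + diabetes).")
```

The researchable part isn't the rule itself; it's whether adding it actually changes prescribing.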

Tip: Ask your colleagues what frustrates them most in practice. The answer is probably a systems gap in disguise.

How to Use This Framework in Your Own Work

Great research starts with the right question.

And the right question usually begins with a gap.

Start listening for them—

In morning rounds.

In journal clubs.

In patient complaints.

In your own clinical doubts.

↳ Is this a Knowledge Gap?

↳ Or a Thinking Gap?

↳ Or a Practice Gap?

Start labeling what you notice.

Soon, you’ll be spotting gaps everywhere.

And when you do?

You’re no longer just observing medicine.

You’re advancing it.

Over to you:

What kind of research gap are you most drawn to right now—in your clinic, your data, or your curiosity?

  • Drop it in a notebook.
  • Share it with a colleague.
  • Start sketching a proposal.

Because every great study starts with seeing what others don’t.

And then choosing to act on it.

P.S. We are building an AI tool called Research Boost trained on all my frameworks. You can use it to find and refine your research ideas. Sign up for the waitlist HERE: https://researchboost.com/

