Finding the right research question is what most of my mentees struggle with.
Not the statistics.
Not the study design.
Not even the writing.
It's this:
Where do I begin?
And here's the answer I keep coming back to:
Almost every meaningful, high-impact study begins the same way…
By identifying a gap.
But not just any gap.
A real, consequential one that reflects what the field doesn't yet know, hasn't yet seen, or hasn't yet acted on.
Fundamentally, these gaps fall into 3 categories:
1️⃣ Gap in knowledge (we don't know about it)
2️⃣ Gap in thinking (we've been thinking about it all wrong)
3️⃣ Gap in practice (we've not been applying it well)
Let's break each one down:
1️⃣ Gap in Knowledge
"We don't know enough… yet."
This is the most traditional type of research gap.
It's about absence: no data, outdated data, or conflicting data.
But spotting a true knowledge gap takes more than curiosity.
It takes clinical judgment and pattern recognition.
Here's what that can look like:
A. Knowledge Gap
→ We simply don't have the answer yet.
→ Example: What are the long-term cardiovascular outcomes of GLP-1 receptor agonists in non-diabetic obese patients?
Theyâre approved for weight loss. They improve blood pressure and lipids. But do they reduce MI or stroke risk in those without diabetes?
That's a wide-open question, and a meaningful one.
B. Contradictory Findings
→ The evidence exists, but it doesn't agree.
→ Example: Does intensive blood pressure control reduce the risk of cognitive decline?
A recent trial suggested a possible benefit. But observational studies show mixed signals. And the definitions of "cognitive decline" vary.
That's a setup for a robust trial or meta-analysis with more nuanced endpoints.
C. Evidence Gap
→ The theory exists. The real-world data doesn't.
→ Example: Do wearable devices improve glycemic control in patients with type 2 diabetes?
We think they should. They promote self-monitoring. But RCT-level evidence is scarce, and the few trials that exist have short follow-up and selective populations.
This gap is waiting to be filled with pragmatic trials.
D. Temporal Gap
→ Old data, new questions.
→ Example: The last NHANES-based study on hypertension control disparities by race and ethnicity was in 2016.
Since then:
→ New guidelines.
→ COVID-19 disruptions.
→ Widening inequities.
Do those findings still hold?
This is the kind of question that needs revisiting, regularly.
Tip: Read recent guidelines. Look for the footnotes that say "limited evidence" or "low certainty." That's where the gap often lives.
2️⃣ Gap in Thinking
"We've been approaching this the wrong way."
This is the most underrated gap, and often the most powerful.
It's not about missing data.
It's about questioning assumptions.
A. Theoretical Gap
→ We don't have a model that explains what we're seeing.
→ Example: Why do some patients with similar A1c levels experience vastly different cardiovascular outcomes?
We suspect it's about glycemic variability. Or inflammatory pathways. But our current risk models don't account for that.
That's a theoretical gap: an opening for new frameworks or predictive tools.
B. Methodological Gap
→ The tools we're using don't reflect real life.
→ Example: Randomized trials for hypertension management often exclude patients with multiple comorbidities.
But those are exactly the patients we see in primary care.
Could EHR-based observational cohorts or adaptive trials better reflect the real-world complexity?
→ Bonus: Using natural language processing to extract nuanced social risk factors from notes (smoking patterns, medication adherence, diet) might outperform structured fields.
C. Contextual Gap
→ Findings don't generalize across populations.
→ Example: We know SGLT2 inhibitors reduce heart failure hospitalizations in type 2 diabetes. But how do they perform in patients over 60? Or in populations with food insecurity?
These patients were underrepresented in the pivotal trials.
That's a contextual gap, and the research opportunity is huge.
Tip: When reading a "landmark trial," ask: "Who's missing here?" AND "Who do the results not apply to?" That's often the next study.
3️⃣ Gap in Practice
"We know what works. We're just not doing it."
This is the domain of implementation science.
You're not discovering new treatments; you're figuring out how to deliver what we already know works.
It's about doing better, not knowing more.
A. Implementation Gap
→ Evidence-based strategies aren't consistently used.
→ Example: Despite strong evidence, why is the use of statins in high-risk diabetic patients still suboptimal?
Your study could examine clinician-level inertia, patient mistrust, or system-level barriers like prior authorization.
It's not a new drug or a molecular pathway. But it could change more lives than either.
B. Awareness Gap
→ Guidelines exist, but no one seems to know them.
→ Example: Are emergency physicians aware of ACC/AHA guidelines on blood pressure rechecks after acute care visits?
Many hypertensive patients leave the ED without any clear follow-up plan, even when systolic readings are dangerously high.
This isn't about innovation. It's about translation.
C. Access Gap
→ Proven interventions don't reach everyone.
→ Example: Continuous glucose monitors improve outcomes in type 1 diabetes.
So why are they so underused in Medicaid populations?
Is it policy? Provider training? Patient digital literacy?
Research here isn't just publishable. It's policy-shaping.
D. Training or Competency Gap
→ Clinicians know what to do, but don't feel equipped.
→ Example: Many clinicians recognize the benefits of motivational interviewing for lifestyle change, but don't feel confident using it.
What kind of training models could bridge that?
A short online course? Embedded clinical prompts?
That's a researchable question.
E. Systems Gap
→ Workflow or technology breaks the chain.
→ Example: Why don't more EHRs prompt physicians to initiate aspirin in high-risk diabetics with known ASCVD?
Sometimes the gap isn't human. It's a missing checkbox in a clinical decision support tool.
Implementation research can fix that too.
Tip: Ask your colleagues what frustrates them most in practice. The answer is probably a systems gap in disguise.
How to Use This Framework in Your Own Work
Great research starts with the right question.
And the right question usually begins with a gap.
Start listening for them:
In morning rounds.
In journal clubs.
In patient complaints.
In your own clinical doubts.
↳ Is this a Knowledge Gap?
↳ Or a Thinking Gap?
↳ Or a Practice Gap?
Start labeling what you notice.
Soon, you'll be spotting gaps everywhere.
And when you do?
You're no longer just observing medicine.
You're advancing it.
Over to you:
What kind of research gap are you most drawn to right now, in your clinic, your data, or your curiosity?
- Drop it in a notebook.
- Share it with a colleague.
- Start sketching a proposal.
Because every great study starts with seeing what others donât.
And then choosing to act on it.
P.S. We are building an AI tool called Research Boost trained on all my frameworks. You can use it to find and refine your research ideas. Sign up for the waitlist HERE: https://researchboost.com/
