
Systematic Reviews Take Years—AI Can Do It Faster (But There’s a Catch)

A good systematic review takes months. A great one takes years.

Why? Because systematic reviews demand patience.

They are slow. Tedious. Methodical.

There is no skimming. No shortcuts. Every paper must be searched systematically, screened rigorously, extracted meticulously, and synthesized thoughtfully.

But here’s the real question: Can AI help?

Yes.

Can AI do a systematic review on its own?

No. And it probably won’t anytime soon.

A systematic review requires precision and reproducibility. AI, by its nature, is unpredictable—it can give different answers every time it runs. That’s a problem when the goal is to be consistent and verifiable.

But AI can make the process faster. More efficient. Less grueling.

It won’t replace systematic reviews. But it will change how they are done.

Let’s break it down.


The Systematic Review Process (And Where AI Fits In)

A systematic review is not just a literature review. It is a rigorous, multi-step process designed to minimize bias and produce credible evidence. AI won’t replace these steps—but it can speed them up, reduce errors, and lighten the workload.

Let’s go step by step.


1. Defining the Research Question (AI’s Role: A Helpful Guide, Not a Decision-Maker)

Everything starts with a question. A clear, specific, well-structured research question.

Most systematic reviews follow the PICO framework:

Population, Intervention, Comparison, Outcome.
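To make the framework concrete, a PICO question can be treated as a simple structure with four slots. A minimal Python sketch (the class and example question are purely illustrative):

```python
from dataclasses import dataclass

@dataclass
class PICOQuestion:
    population: str
    intervention: str
    comparison: str
    outcome: str

    def as_sentence(self) -> str:
        """Render the four PICO elements as a single answerable question."""
        return (f"In {self.population}, does {self.intervention} "
                f"compared with {self.comparison} affect {self.outcome}?")

q = PICOQuestion(
    population="adults with type 2 diabetes",
    intervention="SGLT2 inhibitors",
    comparison="placebo",
    outcome="major cardiovascular events",
)
print(q.as_sentence())
```

Forcing a question into this shape early makes every later step (search, screening, extraction) easier to specify.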

How AI Helps:

  • AI can scan thousands of previous systematic reviews to identify gaps in the literature.
  • AI can create knowledge maps showing connections between research topics, helping refine the focus.

🔍 Example:

A researcher is studying “SGLT2 inhibitors for cardiovascular risk reduction in type 2 diabetes.” AI scans the literature and highlights an unanswered question: How do these drugs work in patients with chronic kidney disease?

🚦 Human Oversight Needed:

AI suggests. Humans decide. AI doesn’t understand clinical relevance the way a trained researcher does.

🔗 Pro Tip: Use my Research Idea GPT to explore and refine research questions.


2. Developing a Search Strategy (AI’s Role: A Powerful Assistant, Not a Replacement)

A systematic review is only as good as its search strategy.

Miss key studies, and your review is flawed. Broaden the search too much, and you’re buried in irrelevant papers.

While AI-driven semantic search is useful for quick insights, it is neither systematic nor complete.

To ensure completeness, you must rely on keyword-based search strategies—which can be tedious and time-consuming.

But this is where ChatGPT can help.

Use the prompt below, refine it as needed, and get a solid starting point for your search strategy. (adapted from Wang et al. 2023)

👇 Copy, paste, and adapt.


  • You are an academic search expert who collaborates with and helps researchers and physician-scientists in an academic institution.
  • Take a deep breath and let’s do this step-by-step.

Step 1: Produce a list of 50 “relevant” terms.

Follow my instructions precisely to develop a highly effective Boolean query for a medical systematic review literature search. Do not explain or elaborate. Only respond with exactly what I request. First, given the following research question or topic, please identify 50 terms or phrases that are relevant. The terms you identify should be used to retrieve more relevant studies, so be careful that the terms you choose are not too broad. You are not allowed to have duplicates in your list. Statement: [insert research question]

Step 2: Classify terms into three categories using PICOT

For each item in the list you created in step 1, classify it into one of three categories: terms relating to health conditions (A), terms relating to a treatment (B), terms relating to types of study design (C). When an item does not fit one of these categories, mark it as (N/A). Each item needs to be categorised into (A), (B), (C), or (N/A).

Step 3: Create a Boolean Query in PubMed Syntax.

Using the categorised list you created in step 2, create a Boolean query that can be submitted to PubMed which groups together items from each category. For example: ((itemA1[Title/Abstract] OR itemA2[Title/Abstract] OR itemA3[Title/Abstract]) AND (itemB1[Title/Abstract] OR itemB2[Title/Abstract] OR itemB3[Title/Abstract]) AND (itemC1[Title/Abstract] OR itemC2[Title/Abstract] OR itemC3[Title/Abstract]))

Step 4: Refine search strategy and add relevant MeSH.

Use your expert knowledge to refine the query, making it retrieve as many relevant documents as possible while minimising the total number of documents retrieved. Also add relevant MeSH terms into the query where necessary, e.g., MeSHTerm[MeSH]. Retain the general structure of the query, however, with each main clause of the query corresponding to a PICO element. The final query still needs to be executable on PubMed, so it should be a valid query.
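Once the terms are categorised, assembling the Boolean query in Step 3 is mechanical string-building. A minimal Python sketch of that grouping logic (the group names and terms are illustrative, not a validated search):

```python
def build_pubmed_query(groups):
    """OR terms within each category, AND the categories together,
    tagging every term with PubMed's [Title/Abstract] field."""
    clauses = []
    for terms in groups.values():
        ored = " OR ".join(f"{t}[Title/Abstract]" for t in terms)
        clauses.append(f"({ored})")
    return "(" + " AND ".join(clauses) + ")"

query = build_pubmed_query({
    "condition": ["type 2 diabetes", "T2DM"],
    "treatment": ["intermittent fasting", "time-restricted eating"],
    "design": ["randomized controlled trial", "RCT"],
})
print(query)
```

The output is a valid PubMed query you can paste into the search box directly, then refine by hand with MeSH terms as Step 4 describes.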


🔍 Example:

A researcher studying “Intermittent fasting and its impact on blood sugar control in type 2 diabetes” can use ChatGPT to generate a PubMed search strategy in minutes rather than hours.

🚦 Human Oversight Needed:

AI generates a draft. Then, as a researcher, you must refine and validate the search for accuracy and completeness.


3. Screening and Selecting Studies (AI’s Role: A Massive Time-Saver, But Not Perfect)

You now have thousands of abstracts. Some relevant. Many not.

How do you filter them?

How AI Helps:

  • AI tools like Covidence, Rayyan, and ASReview can pre-screen studies.
  • AI can classify abstracts as “relevant”, “not relevant”, or “indeterminate” based on learned patterns.
  • AI can detect duplicate studies, preventing errors in data collection.

🔍 Example:

A researcher analyzing “Omega-3 supplements for reducing heart attack risk” has 10,000+ abstracts to review. AI eliminates 80% of irrelevant papers in minutes, allowing human reviewers to focus on borderline cases.
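The duplicate-detection part of this step needs nothing fancier than title normalization. A minimal Python illustration (the records are hypothetical; real tools also compare DOIs, authors, and years):

```python
import re

def normalize(title):
    """Lowercase, strip punctuation, and collapse whitespace
    so trivially different titles compare as equal."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", title.lower())).strip()

def deduplicate(records):
    """Keep the first record for each normalized title."""
    seen, unique = set(), []
    for rec in records:
        key = normalize(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"title": "Omega-3 Supplements and Heart Attack Risk."},
    {"title": "omega-3 supplements and heart attack risk"},
    {"title": "Statins for Primary Prevention"},
]
print(len(deduplicate(records)))  # the two omega-3 variants collapse to one
```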

🚦 Human Oversight Needed:

AI works fast but not flawlessly. It may discard important studies due to phrasing quirks. A hybrid approach (AI-assisted, human-reviewed) is best.


4. Data Extraction (AI’s Role: Automating the Tedious Parts, But Not Perfect Yet)

Extracting data from studies is slow, tedious, and error-prone.

How AI Helps:

  • AI can auto-extract structured data (year, study type, sample size, treatment groups).
  • Platforms such as SciSpace and Elicit.org extract data from multiple papers and generate structured tables.
  • AI tools like RobotReviewer can pull intervention details, effect sizes, and key findings.

🔍 Example:

A researcher studying “Low-carb diets for weight loss in obesity” extracts effect sizes, weight changes, and adherence rates across 100 randomized trials. AI fills in most of the data automatically.
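For a taste of how structured extraction works under the hood, here is a minimal, regex-based Python sketch for pulling sample sizes out of abstract text. Real extraction tools use far more robust NLP; the patterns below are illustrative only:

```python
import re

def extract_sample_size(abstract):
    """Try common 'n = 123' and '123 patients/participants' patterns;
    return the number as an int, or None if nothing matches."""
    patterns = [
        r"[nN]\s*=\s*(\d[\d,]*)",
        r"(\d[\d,]*)\s+(?:participants|patients|subjects)",
    ]
    for pat in patterns:
        m = re.search(pat, abstract)
        if m:
            return int(m.group(1).replace(",", ""))
    return None

text = "We randomized 1,248 patients with obesity to a low-carb or control diet."
print(extract_sample_size(text))  # 1248
```

This is exactly the kind of brittle pattern-matching AI improves on, and exactly why the extracted values still need human verification.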

🚦 Human Oversight Needed:

AI can miss nuances in reported outcomes. Humans must verify the extracted data.


5. Assessing Study Quality & Risk of Bias (AI’s Role: A Smart Assistant, Not a Judge)

Not all studies are created equal. Some are flawed, biased, or misleading.

How AI Helps:

  • AI can flag small sample sizes, conflicts of interest, and missing data.
  • AI can pre-score studies against frameworks like RoB 2 and the Newcastle-Ottawa Scale.

🔍 Example:

A meta-analysis on “GLP-1 receptor agonists for cardiovascular risk reduction” uses AI to flag industry-funded studies, prompting deeper scrutiny.
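Rule-based flagging of this sort can be sketched in a few lines. A minimal Python illustration (the thresholds and field names are hypothetical, not validated risk-of-bias criteria):

```python
def flag_risks(study):
    """Return a list of heuristic red flags for one study record."""
    flags = []
    if study.get("sample_size", 0) < 100:
        flags.append("small sample size")
    if study.get("funding", "").lower() in {"industry", "manufacturer"}:
        flags.append("industry funding")
    if study.get("dropout_rate", 0) > 0.20:
        flags.append("high attrition")
    return flags

study = {"sample_size": 60, "funding": "Industry", "dropout_rate": 0.25}
print(flag_risks(study))
```

Flags like these only mark studies for closer reading; they are not a judgment, which is precisely the point of the oversight note below.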

🚦 Human Oversight Needed:

Bias isn’t always obvious. A trained researcher must make the final judgment.


6. Data Synthesis & Meta-Analysis (AI’s Role: A Strong Assistant, But Needs Supervision)

Once data is extracted, it’s time for the meta-analysis.

How AI Helps:

  • AI can generate and troubleshoot R and Python scripts for statistical analysis.
  • AI can detect heterogeneity across studies and suggest subgroup analyses.

🔍 Example:

You can use ChatGPT to generate R code for forest plots for your study, saving hours of statistical coding.
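Whether the script comes back in R or Python, the core pooling arithmetic is simple enough to verify by hand. A minimal Python sketch of inverse-variance fixed-effect pooling (the effect sizes and standard errors below are made up for illustration):

```python
import math

def fixed_effect_pool(effects, ses):
    """Inverse-variance weighted fixed-effect meta-analysis.
    Returns the pooled effect, its standard error, and a 95% CI."""
    weights = [1 / se**2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, pooled_se, ci

# Hypothetical log risk ratios and standard errors from three trials
pooled, se, ci = fixed_effect_pool([-0.20, -0.10, -0.30], [0.10, 0.15, 0.20])
print(round(pooled, 3), round(se, 3))
```

Running a hand-check like this against AI-generated analysis code is a cheap way to catch silent errors before they reach a forest plot.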

🚦 Human Oversight Needed:

AI produces results. Researchers must interpret them.


7. Writing the Manuscript (AI’s Role: A Productivity Booster, But Not a Replacement)

The final step: Writing the paper.

How AI Helps:

  • AI can generate a structured draft based on PRISMA guidelines.
  • AI can proofread, format, and optimize journal submissions.

🔍 Example:

A researcher submits a systematic review on “Statin therapy for primary prevention of cardiovascular disease”. AI formats it for JAMA Internal Medicine in minutes.

🚦 Human Oversight Needed:

AI writes. But humans must revise, interpret, and own the narrative.

AI and Transparency: How to Disclose AI Use in Your Systematic Review

AI is quickly becoming a standard part of the research workflow. Just as we no longer disclose the use of spell checkers or Grammarly, AI tools may soon be an expected—rather than exceptional—part of the process.

But for now, transparency matters.

If you’re using AI-assisted tools in your systematic review, it’s best to acknowledge them appropriately—without overcomplicating the disclosure. A short, clear, and professional statement in the Methods section should be sufficient.

🔍 Example of an AI Disclosure Statement (for the Methods section):

“ChatGPT assisted in refining the search strategy, Covidence/Rayyan facilitated abstract screening, and AI-assisted extraction tools prepopulated structured data, all of which were manually reviewed. AI-generated R code aided statistical analysis, but all results were verified by human researchers to ensure accuracy and reproducibility.”

This disclosure achieves 3 key things:

  1. Transparency – Clearly states which AI tools were used.
  2. Accountability – Reinforces that AI was an assistant, not the decision-maker.
  3. Scientific Integrity – Ensures that human researchers verified all critical steps.

In the near future, such statements may become as routine as listing software for statistical analysis. But for now, this level of disclosure helps maintain credibility while embracing AI’s role in modern research.

Final Thoughts: AI is Here. Are You Ready?

AI won’t replace systematic reviews. But it will change who succeeds in research.

The real question isn’t:

Can AI conduct systematic reviews?

The real question is:

How are YOU preparing to work alongside it?

P.S. Join like-minded researchers in this high-touch 12-week program, “AI-Powered Clinical Research OS” (March 10 to June 11, 2025) with live lectures and 90 min weekly group mentoring HERE: https://risingresearcheracademy.com/joincr/

Doors close March 8.
