How to Disclose AI Use in Academia: And Why Transparency Matters More Than Ever

Last month, a trainee asked me: “Dr. K… do I have to say I used AI?”

Not “Should I double-check the output?”

Not “How do I use it well?”

Just: Do I have to tell anyone?

That moment stuck with me, because it captures where academia is right now:

We’re using AI everywhere.

And we’re pretending we aren’t.

The adoption curve is real

Wiley surveyed 2,400+ researchers worldwide and the numbers are striking:

  • 84% are using AI tools in some part of their work (up from 57% the year prior)
  • 62% are using AI for research and/or publication tasks (up from 45%)

AI isn’t “coming” in academia. It’s already here.

But disclosure is still rare

Across 82,829 submissions to 13 JAMA Network journals, only 2.7% included an AI-use disclosure. (Peer Review Congress)

That gap isn’t about “ethics.”

It’s about fear + uncertainty.

  • People don’t know what counts as AI use.
  • They don’t know where to disclose it.
  • And a lot of them are worried disclosure will trigger extra scrutiny, rejection, or judgment.

So they stay quiet.

And silence becomes the norm.

Current AI use disclosure policies for manuscripts (journals)

If you strip away the stigma and just read what major publishers and standards bodies actually say, the pattern is consistent. And for manuscripts, most policies boil down to the same 3 principles:

  1. AI can’t be an author,
  2. Meaningful AI use should be disclosed, and
  3. Researchers remain accountable for every claim, citation, and interpretation.

Where the disclosure should be placed varies by journal, but it usually lands in a dedicated statement, the acknowledgements (for writing help), and/or methods (for research tasks).

| Source | What they require | Where to disclose | Disclosure extent |
| --- | --- | --- | --- |
| Cambridge University Press (Cambridge University Press & Assessment) | AI use must be declared and clearly explained. AI cannot be an author. Authors remain accountable. | No single fixed location stated. Follow the journal’s instructions, and document it where methods/tools are typically reported. | Broad |
| Elsevier (www.elsevier.com) | Disclose AI-tool use for manuscript preparation in a dedicated declaration. Grammar/spellcheck does not need disclosure. AI use in the research process goes in Methods if relevant. | End of manuscript, above the references, in a titled “Declaration…” statement. Research-process use in Methods. | Broad, with an editing exception |
| Springer Nature / Nature Portfolio (Nature) | LLMs are not authors. LLM use should be documented. “AI-assisted copy editing” (as they define it) does not need declaration. | Methods (or a suitable alternative section). Copy-editing exception may not require disclosure. | Broad, with an editing exception |
| Wiley (Wiley Author Services) | If AI helped develop any portion of the manuscript, it must be described transparently. Spelling/grammar/general editing tools are excluded. | Methods, Acknowledgements, or a disclosure statement, depending on the journal. | Broad, with an editing exception |
| ICMJE (medical journal standard) (ICMJE) | Journals should require disclosure of AI-assisted technology at submission. Describe use in the cover letter and manuscript. Writing help goes in Acknowledgements; data analysis or figure generation goes in Methods. | Cover letter + manuscript. Writing help: Acknowledgements. Research use: Methods. | Broad |

Note: None of these journals currently allows AI-generated images or charts. This is mostly because AI-generated figures simply were not up to academic standards: text rendering was terrible, and the models wildly hallucinated details. I expect this to change with the arrival of Nano Banana Pro (especially considering how good it is). As of today, you can only use AI-generated images for ideas and inspiration in academia.

AI use disclosure policies for grants

For grants, the center of gravity is originality + integrity.

Funders want to know you didn’t outsource your ideas, and they want you to remain fully responsible for what you submit.

Some are explicit about disclosure expectations (and even where to place it), while others focus more on the principle: AI use can’t compromise originality, confidentiality, or accuracy.

| Funder | What they require (applicant side) | Where/how to disclose | Policy scope |
| --- | --- | --- | --- |
| NIH (incl. AHRQ in notice) (Grants.gov) | NIH says it will not consider applications (or sections) substantially developed by AI as “original ideas.” Also highlights risks like plagiarism and fabricated citations. | No universal “AI disclosure statement” format in the notice. Focus is on originality and compliance. | Very broad (originality + integrity) |
| NSF (NSF – U.S. National Science Foundation) | NSF guidance indicates GenAI use should be disclosed and emphasizes that proposers remain responsible for content (and integrity). | Disclose within the proposal narrative (commonly the Project Description, per NSF guidance). | Broad (proposal integrity + transparency) |
| DOE Office of Science (example NOFO language) | Requires applicants to disclose use of any AI tools in applications, unless used solely to edit an original draft. Also warns that “machine-generated” narratives may constitute misconduct. | Disclosure in the application (NOFO language is tied to the narrative). | Broad, with an editing-only exception |
| NASA (Science Mission Directorate) (NASA Science) | NASA SMD says GenAI use isn’t prohibited, but it should be acknowledged, and proposers remain accountable for content/accuracy. | Provide an acknowledgement/citation (NASA SMD describes including tool name/version/date and how it was used). | Broad (transparency + accountability) |
| NEH (NEH AI policy) | Allows AI in proposal preparation, but requires applicants to specifically acknowledge when they’ve inserted AI-generated text (footnotes or marginal notations). Noncompliance can make an application ineligible. | Footnotes or marginal notations where AI-generated text is inserted. | Very explicit disclosure requirement |

So, the direction of travel is more transparency, not less.

The hard line: don’t use AI for confidential peer review

This is where I see people accidentally step into real risk.

Peer review materials often contain confidential, proprietary, or unpublished information.

  • NIH explicitly prohibits the use of generative AI tools (e.g., ChatGPT) to draft, edit, or otherwise prepare NIH peer review critiques. Confidentiality/security breaches in peer review may be referred to HHS OIG/DOJ and can include pursuing criminal and civil penalties as allowed by law. (NIH AI guidance)
  • NSF explicitly prohibits reviewers from uploading proposal content into non-approved GenAI tools. (NSF – U.S. National Science Foundation)
  • Elsevier says reviewers should not upload manuscripts into generative AI tools due to confidentiality and proprietary rights concerns. (www.elsevier.com)
  • NEH prohibits peer reviewers from uploading NEH applications into third-party cloud databases due to confidentiality. (NEH AI policy)

However, internal peer review of your own work is different. Using AI to criticize your own study (point out flaws, stress-test arguments, and identify opportunities for improvement) can be a powerful use case, if done right.

But I would never do it on a free chatbot subscription (free means you’re paying with your data). If you do this, use a paid or institutional subscription, and make sure you turn off “improve model for everyone.” (Here’s my guide if you missed it.)

The safest option is to run an open model such as DeepSeek or Llama on your own device.

How to disclose:

If you’re using AI-assisted tools, your disclosure should answer 3 questions:

  1. What tool? (name + version if possible)
  2. What did it do? (specific task)
  3. What did you do? (human review, verification, accountability)

And put it where it belongs:

  • Writing help → Acknowledgements or a dedicated AI disclosure statement
  • Methods/analysis/figures → Methods section
  • Always follow the target journal’s instructions; they override everything and are evolving fast.
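If your target journal uses LaTeX and follows an Elsevier-style placement (a titled declaration just above the references), the statement can be sketched like this. The section title, wording, and bracketed placeholders are illustrative, not a publisher-mandated template; always copy the exact heading your journal specifies:

```latex
% Illustrative placement only: check the journal's author guidelines
% for the exact section title and wording it requires.
\section*{Declaration of generative AI use}
During the preparation of this work, the authors used
[tool name, version, date] to [specific task, e.g., improve the
clarity of the Discussion]. The authors reviewed and edited all
AI-assisted content and take full responsibility for the content
of the publication.

% The declaration sits at the end of the manuscript, above the references.
\bibliographystyle{plain}
\bibliography{references}
```

Note how the placeholder structure mirrors the three questions above: the tool, the task, and the human verification step.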

Examples of AI Disclosure Statements:

Systematic Review:

“ChatGPT 5.2 assisted in refining the search strategy, Covidence/Rayyan facilitated abstract screening, and AI-assisted extraction tools prepopulated structured data, all of which were manually reviewed and verified by PK and DP. AI-generated R code aided statistical analysis, but all results were verified by RP to ensure accuracy and reproducibility.”

Manuscript Writing:

“During the preparation of this manuscript, the authors used Gemini 3 Pro to assist with refining the structure, improving clarity, and enhancing the coherence of the argument. Additionally, AI-assisted tools were used to summarize literature, which were then critically reviewed, revised, and expanded by the authors. All factual claims and references were independently verified to ensure accuracy, and the final manuscript reflects the intellectual contribution and judgment of the authors.”

Notice how these do 3 important things:

Transparency → Names the tool and what it was used for.

Accountability → Reinforces that AI was an assistant, not the decision-maker.

Scientific Integrity → Ensures human oversight at all critical steps.

That’s the bar.

The bigger picture

Right now, the culture is lagging behind reality.

Most researchers are already using AI.

But disclosure is still treated like a confession.

And the only way that changes is the same way it always has:

Say it.

Clearly.

Calmly.

Without shame.

Experiment with AI.

Teach others.

Share tools you genuinely find helpful.

And talk about them openly.

Because these conversations will shape what “acceptable” AI use looks like in academia.

How are you disclosing AI use in your manuscripts or grant writing right now?

PROMPT OF THE WEEK

High-quality rewrite for accessory research paperwork

Prompt (turn on "thinking mode" for best results): 
Edit the provided text for clarity, tone, and coherence while keeping its original meaning and key details. 

Perform strong rephrasing to avoid plagiarism and remove awkward language, redundancy, or filler. 

Improve sentence structure and transitions for a natural, confident flow. 

Keep the original voice and formality, without adding content or opinions. 

Constraint: Limit the final text to [N] words or fewer. 

Output: Include only the revised text in markdown format, with no commentary or extra formatting.

---
[insert text here]
...

P.S. I built Research Boost AI for academic writing using these same principles of keeping the researcher first, always. Try it here FREE: http://researchboost.com/

(And if you do, please make sure you disclose its use.)
