This document summarizes the guidelines of major scholarly publishers on the use of AI assistance in academic writing.
Nature Portfolio (Springer Nature)
- Large Language Models cannot be listed as authors because they cannot be held accountable for the work (Nature Portfolio Editorial Policies).
- AI-assisted copy editing (grammar, spelling, punctuation) does not require disclosure, but any substantive AI use must be documented in the Methods section or suitable alternative location.
- Generative AI images are prohibited for publication, with limited exceptions for legally obtained agency images, AI-specific research, or scientifically verifiable tools with proper attribution.
Elsevier
- Generative AI tools may assist authors when used with human oversight and control, but authors remain fully responsible and accountable for content accuracy (Elsevier AI policies).
- Authors must disclose AI use in a declaration statement upon submission, specifying the tool name, purpose, and extent of oversight. Basic grammar and spelling checks require no declaration.
- AI tools cannot be listed as authors or co-authors because authorship requires human accountability, approval of final work, and agreement to submission.
IEEE
- AI-generated content must be disclosed in the acknowledgments section, identifying the AI system used, specific sections with AI content, and level of AI involvement (IEEE Guidelines).
- Using AI for editing and grammar enhancement is common practice; disclosure of such use is recommended but not required.
- Peer reviewers must not process manuscripts through public AI platforms, as doing so breaches confidentiality; AI systems generally learn from the data they are given as input.
ACM
- Generative AI tools cannot be listed as authors of ACM publications because they are not identifiable human beings and cannot take responsibility for content (ACM Authorship Policy).
- Use of generative AI tools must be fully disclosed in the acknowledgements section, including the tool name and how it was used to create text, tables, graphs, code, or data.
- Basic word processing systems that perform spelling or grammar checks are exceptions to disclosure requirements and generally need not be disclosed.
Wiley
- Authors may use AI tools as companions to the writing process, not replacements, and must take full responsibility for accuracy and verify all AI-generated content (Wiley AI Guidelines).
- Authors must disclose AI use to the publisher when submitting manuscripts, documenting the tool used, its purpose, and the human review process.
- Wiley currently does not permit use of generative AI in creating or manipulating images, figures, or original research data for publications.
SAGE
- Authors must disclose any AI-generated content (text, images, translations) when submitting to SAGE or Corwin, though AI-assisted content from embedded tools does not require disclosure (SAGE AI Guidelines).
- Large Language Models cannot be recognized as co-authors, and authors remain entirely responsible for their submissions.
- Editors and reviewers must not share manuscripts or use generative AI tools to generate review reports, as this breaches confidentiality of the peer review process.
Taylor & Francis
- Generative AI tools must not be listed as authors because they cannot assume responsibility, manage copyright, or provide contractual assurances (Taylor & Francis AI Policy).
- Authors must acknowledge all generative AI use in the Methods or Acknowledgments section, including tool name with version number, how it was used, and reason for use.
- Taylor & Francis currently does not permit generative AI for creating or manipulating images, figures, or original research data. Editors and peer reviewers must not upload unpublished manuscripts into generative AI tools.
Oxford University Press
- Authors must protect content when using generative AI tools by carefully reviewing terms of service and avoiding tools that retain or train on submitted content (OUP AI Guidelines).
- Authors should avoid uploading full manuscripts or unpublished work into generative AI tools that do not permit opting out of data reuse or training.
- Enterprise or professional versions of generative AI tools are recommended where available, as they typically provide stronger privacy protections.