Academic publishing faces a definitional crisis. Every major journal now requires authors to declare AI use and ensure “substantial human intellectual contribution” to their work. But nobody can define what “substantial” actually means. The result? A gray zone where identical work might be acceptable at Nature but rejected by Science, approved by one reviewer but flagged by another.
This post examines the core problem: What constitutes substantial human intellectual contribution when AI assists in research writing?
I propose the author-director model: the author provides vision, direction, and validation, taking full responsibility for the final work. AI handles execution. This mirrors how creative industries have long operated, where directors, composers, and architects create works without personally executing every element.
Substantial intellectual contribution lies in what you decide, verify and take responsibility for, not necessarily what you type.
The Qualitative Standard
Most guidelines focus on the qualitative aspects of AI assistance: “Have I used generative AI for grammar checking? For drafting? For something else?”
Here’s an overview of major publishers’ AI policies on what’s allowed:
- Grammar and language improvement: AI tools may assist with spelling, grammar, punctuation, and readability improvements to author-generated text. This includes basic language polishing and ensuring clarity of expression, particularly for non-native language writers (Nature Portfolio, Elsevier, Sage, Taylor & Francis, Wiley).
- Idea generation and exploration: AI tools may help authors brainstorm topics, explore different approaches to explaining concepts, and organize research ideas (Taylor & Francis, Wiley).
- Literature synthesis and classification: AI tools may assist in summarizing research papers, identifying themes, and organizing literature sources, provided authors verify accuracy and validate findings (Elsevier, Taylor & Francis, Wiley).
- Content organization: AI tools may be used for structuring manuscripts, improving transitions between sections, and developing initial drafts, with the requirement that all content undergoes substantial human review and revision (Elsevier, Sage, Wiley).
- Coding assistance: AI tools may help with code generation, debugging, and documentation in research software (Taylor & Francis, ACM).
- Translation assistance: AI systems may support translation of content between languages, subject to author verification of accuracy (Sage, Wiley).
- Explanatory image creation: AI-generated diagrams, conceptual illustrations, and visualizations are permitted when verified for accuracy and when they do not represent research data or clinical findings (Wiley).
This qualitative approach is useful but sidesteps the fundamental question. None of these guidelines dares to address what happens when AI use is taken to the extreme. They catalog safe, incremental uses (grammar fixes, brainstorming, translation) but avoid the hard question: can one publish a paper if (1) it has been entirely AI-generated, (2) the human author drives the generation, and (3) the human author takes full responsibility?
Thought Experiment
A mechanistic definition would be appealing: “AI may write at most X% of your paper.” But no such threshold exists, because a percentage is meaningless without context. Still, percentages capture something real: a paper could theoretically range from 1% to 99% AI-generated text. So where is the line? At 20%? 50%? 80%?
One could theoretically:
- Have AI write 90% of the text (grammar, structure, articulation)
- Contribute 10% (the core ideas, analysis, interpretations)
Or conversely:
- Humans write 90% of the words
- Have AI generate the remaining 10%: the core ideas, analysis, and interpretations
In which case is there substantial intellectual contribution? Clearly, a quantitative definition of substantial human contribution is neither satisfactory nor enforceable.
Direction
I argue that providing:
- The research question or the strategic direction
- Oversight and validation
- Accountability
…constitutes “substantial human contribution,” even if AI generated most of the text, say 99% of it.
I argue that AI policies should be updated to clarify this inevitable case.
I argue that we’re moving from a traditional definition of “author-writer” to that of “author-director”.
Steven Spielberg doesn’t operate the cameras, edit the footage, or write all the dialogue. He might not even touch the equipment. But he provides the vision, makes the creative decisions, and takes responsibility for the final product. Nobody questions whether Spielberg made a “substantial contribution” to his films just because the cinematographer shot 90% of the footage.
Similarly, a researcher who defines the question, validates the results, and takes accountability for the work has made a substantial intellectual contribution, even if AI did the heavy lifting on prose.
Conclusion
The academic community needs clarity on AI assistance in scholarly writing. I propose a three-part framework for determining substantial human intellectual contribution:
- Research Question and Direction: The human author formulates the research question, determines the approach, and provides strategic direction for the investigation.
- Oversight and Validation: The human author critically evaluates AI outputs, verifies accuracy, validates findings against source material, and ensures coherence with the research goals.
- Accountability: The human author takes full responsibility for all content, including AI-generated text, and stands behind the work’s claims and conclusions.
This framework shifts focus from text percentage to intellectual ownership and responsibility.
Publishers and institutions should:
- Replace vague “substantial contribution” requirements with explicit criteria
- Develop disclosure formats that allow for extreme cases (e.g., “AI wrote the paper; human author provided direction and validated all claims”)
- Train reviewers to evaluate the intellectual contribution rather than hunt for AI-generated prose
We need grounded guidelines that reflect how research is actually conducted in the age of extreme AI assistance.
Appendix for detractors
Some will argue this opens the door to minimal human involvement. However, the framework’s requirements are stringent: formulating research questions requires deep domain expertise; validating AI outputs demands critical thinking and domain knowledge; accepting accountability creates legal and ethical stakes that deter casual delegation.
Others worry about the erosion of writing skills. Yet academic writing has always involved delegation, to research assistants, ghostwriters, copyeditors, and translators, without diminishing the principal investigator’s contribution. AI is simply an extremely capable assistant.