Substantial human intellectual contribution in the age of extreme AI assistance

by Martin Monperrus

Academic publishing faces a definitional crisis. Every major journal now requires authors to declare AI use and ensure “substantial human intellectual contribution” to their work. But nobody can define what “substantial” actually means. The result? A gray zone where identical work might be acceptable at Nature but rejected by Science, approved by one reviewer but flagged by another.

This post examines the core problem: What constitutes substantial human intellectual contribution when AI assists in research writing?

I propose the author-director model: the author provides vision, direction, and validation, taking full responsibility for the final work. AI handles execution. This mirrors how creative industries have long operated, where directors, composers, and architects create works without personally executing every element.

Substantial intellectual contribution lies in what you decide, verify and take responsibility for, not necessarily what you type.

The Qualitative Standard

Most guidelines focus on the qualitative aspects of AI assistance: “Have I used generative AI for grammar checking? For drafting? Etc.”

Major publishers have each issued AI policies cataloging which uses are allowed.

This qualitative approach is meaningful but avoids the fundamental question entirely. None of these guidelines dares to address what happens when AI is used to the extreme. They catalog safe, incremental uses (grammar fixes, brainstorming, translation) but completely avoid the hard question: can one publish a paper if 1) it has been entirely AI-generated, 2) the human author drives the generation, and 3) the human author takes full responsibility?

Thought Experiment

A mechanistic definition would be appealing: “AI may write at most X% of your paper.” Percentages do capture something real: a paper could theoretically range from 1% to 99% AI-generated text. So where’s the line? At 20%? 50%? 80%? No such threshold exists, because the percentage itself is meaningless without context.

One could theoretically have AI generate 99% of the text while directing every claim, verifying every fact, and shaping the entire argument. Or conversely, one could type 100% of the words personally while contributing little original thought.

In which case is there substantial intellectual contribution? Clearly, a quantitative definition of substantial human contribution is neither satisfactory nor enforceable.

Direction

I argue that providing the research question and direction, critical oversight and validation of AI outputs, and full accountability for the work constitutes “substantial human contribution”, even if AI generated most of the text, say 99%.

I argue that AI policies should be updated to clarify this inevitable case.

I argue that we’re moving from a traditional definition of “author-writer” to that of “author-director”.

Steven Spielberg doesn’t operate the cameras, edit the footage, or write all the dialogue. He might not even touch the equipment. But he provides the vision, makes the creative decisions, and takes responsibility for the final product. Nobody questions whether Spielberg made a “substantial contribution” to his films just because the cinematographer shot 90% of the footage.

Similarly, a researcher who defines the question, validates the results, and takes accountability for the work has made a substantial intellectual contribution, even if AI did the heavy lifting on prose.

Conclusion

The academic community needs clarity on AI assistance in scholarly writing. I propose a three-part framework for determining substantial human intellectual contribution:

  1. Research Question and Direction: The human author formulates the research question, determines the approach, and provides strategic direction for the investigation.
  2. Oversight and Validation: The human author critically evaluates AI outputs, verifies accuracy, validates findings against source material, and ensures coherence with the research goals.
  3. Accountability: The human author takes full responsibility for all content, including AI-generated text, and stands behind the work’s claims and conclusions.

This framework shifts focus from text percentage to intellectual ownership and responsibility.

Publishers and institutions should update their AI policies to explicitly address this case, shifting their criteria from text provenance to intellectual ownership and accountability.

We need grounded guidelines that reflect how research is actually conducted in the age of extreme AI assistance.

Appendix for detractors

Some will argue this opens the door to minimal human involvement. However, the framework’s requirements are stringent: formulating research questions requires deep domain expertise; validating AI outputs demands critical thinking and domain knowledge; accepting accountability creates legal and ethical stakes that deter casual delegation.

Others worry about the erosion of writing skills. Yet academic writing has always involved delegation, to research assistants, ghostwriters, copyeditors, and translators, without diminishing the principal investigator’s contribution. AI is simply an extremely capable assistant.