TL;DR
The scientific publishing system assumes humans write, review, and read papers. AI breaks this: AI can’t legally author papers, human reviewers can’t match AI’s output speed, and publication-based career incentives collapse when AI produces papers at scale. We need three things: an AI preprint server with full transparency, algorithmic curation to filter papers for human attention, and automated verification of paper content. This crisis forces us to fix longstanding flaws in scientific publishing.
The Current Scientific System, Pre-AI
The scientific system consists of people, organizations like universities, artifacts like papers, and processes like peer review. This system evolved over centuries without anticipating AI authorship. The assumption was simple: humans write papers, humans review papers, humans read papers.
Why the Current System Does Not Work
The current system breaks down when AI enters the picture in three fundamental ways.
First, AI identities cannot legally author or publish papers under most institutional policies. Current academic institutions require human accountability for research claims. AI systems have no legal standing, cannot take responsibility for errors, and cannot engage in the academic discourse that authorship implies. Major platforms like arXiv explicitly prohibit AI authorship, requiring that all submissions have human authors.
Second, human peer reviewers cannot keep pace with AI production capacity. One AI system can generate thousands of papers in the time it takes a human to review one. We cannot recruit enough volunteer reviewers to validate AI-generated research at scale. The traditional peer review process assumes scarcity of submissions, not abundance. This is not a temporary bottleneck but a fundamental mismatch.
Third, the incentive structure collapses. Academic careers are built on publication counts, citations, and perceived prestige. When AI can produce papers at industrial scale for a negligible price, these metrics become meaningless. The entire social system that drives human researchers is based on scarcity of quality research output. Abundance destroys the signaling value that makes the scientific system function today.
Design of a Post-AI Scientific System
We need new infrastructure built on three pillars.
First, an AI preprint server that explicitly accepts AI-authored work with proper attribution and transparency about the generation process. This platform would require detailed metadata: which AI system generated the paper (scaffold and model), and which humans (if any) guided or validated the work. Unlike traditional preprint servers that assume human authorship, this infrastructure would treat AI as a legitimate contributor while maintaining full transparency about its role.
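To make the required metadata concrete, here is a minimal sketch of what a submission record on such a server could look like. The field names and structure are assumptions for illustration, not a proposed standard.

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical submission record for an AI preprint server.
# Field names are illustrative only, not a proposed standard.
@dataclass
class AIPreprintRecord:
    title: str
    model: str                # which AI model generated the paper
    scaffold: str             # the orchestration system driving the model
    human_contributors: list = field(default_factory=list)  # guiding or validating humans, may be empty
    generation_log_uri: str = ""  # pointer to the full generation trace

record = AIPreprintRecord(
    title="Example Paper",
    model="example-model-v1",
    scaffold="example-scaffold",
    human_contributors=["J. Doe (validation)"],
)
# The record serializes to machine-readable metadata for indexing and audit.
print(json.dumps(asdict(record), indent=2))
```

The key design point is that the AI's role is a first-class, queryable field rather than a footnote in the acknowledgments.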
Second, we need algorithmic curation for human readers. Whether we call it AI peer review or a paper recommendation system, the purpose is the same: helping humans identify which AI-generated papers deserve their limited attention. Traditional peer review cannot scale to AI production volumes, but machine learning systems can evaluate novelty, correctness, and relevance at industrial scale. Human experts would focus on the subset of papers that pass algorithmic filters and show genuine promise.
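One way to picture such a filter is a scoring pipeline that ranks submissions and surfaces only a small fraction for human attention. The signals and weights below are placeholders; a real system would use learned models for novelty, correctness, and relevance.

```python
# Toy curation filter: score papers on a few signals and keep only the
# top fraction for human review. Signals and weights are placeholders.

def score(paper: dict) -> float:
    weights = {"novelty": 0.4, "correctness": 0.4, "relevance": 0.2}
    return sum(weights[k] * paper[k] for k in weights)

def curate(papers: list, keep_fraction: float = 0.1) -> list:
    ranked = sorted(papers, key=score, reverse=True)
    k = max(1, int(len(ranked) * keep_fraction))
    return ranked[:k]  # only these reach human experts

# Synthetic corpus: 1000 papers with made-up signal values in [0, 1).
papers = [
    {"id": i,
     "novelty": (i * 37) % 100 / 100,
     "correctness": (i * 53) % 100 / 100,
     "relevance": (i * 71) % 100 / 100}
    for i in range(1000)
]
shortlist = curate(papers, keep_fraction=0.01)
print(len(shortlist))  # 10 papers out of 1000 reach humans
```

The point of the sketch is the shape of the funnel: machine scoring absorbs the volume, and human judgment is reserved for the short list.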
Third, we need automated verification of scientific claims at the core of the infrastructure. If AI can generate research text, it can also generate verifiable artifacts. In computer science, this means executable code. In biology, it means protein structure files and codified experimental protocols. Across all fields, it means traceable claims with explicit provenance chains linking every assertion to its evidence. This exploits AI's core strength: producing structured, machine-readable outputs that automated systems can verify at scale. Rather than relying on human reviewers to catch errors, we build automatic guardrails directly into the scientific process. Every paper comes with verification artifacts that computational systems check continuously. Scientific quality control turns from a weak social bottleneck into a strong computational advantage.
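In computer science, the simplest form of this is attaching an executable check to each claim, so a machine can re-verify the paper at any time. The sketch below is a deliberately minimal illustration of that idea; the names and structure are hypothetical.

```python
# Toy claim verification: each claim in a paper carries an executable
# check linking it to its evidence. Names and structure are illustrative.

def verify_paper(claims: list) -> dict:
    """Run every claim's check; return a map of statement -> pass/fail."""
    return {c["statement"]: bool(c["check"]()) for c in claims}

claims = [
    # A claim whose check reproduces the stated result.
    {"statement": "sorting 3 items yields ascending order",
     "check": lambda: sorted([3, 1, 2]) == [1, 2, 3]},
    # A claim whose evidence does not hold; it is flagged automatically.
    {"statement": "the mean of [1, 2, 3] is 3",
     "check": lambda: sum([1, 2, 3]) / 3 == 3},
]

report = verify_paper(claims)
for statement, ok in report.items():
    print(("PASS" if ok else "FAIL"), statement)
```

Real verification artifacts would be far richer (test suites, data checksums, protocol files), but the contract is the same: every assertion ships with something a machine can run.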
Positive View
The AI disruption offers a rare opportunity. The current system has deep flaws: slow publication cycles, arbitrary gatekeeping, and limited verifiability. Rebuilding for AI forces us to fix core problems in the scientific process.
A scientific system designed for AI abundance could be more meritocratic, faster, and more effective than what we have today.
Conclusion
The scientific community can resist AI authorship or embrace it. Resisting means maintaining an increasingly broken system; embracing it means building new institutions fit for purpose. The choice is ours, but the transformation is inevitable.
Martin Monperrus
November 2025