The scientific community faces a new reality: AI systems now generate research papers at scale. Current preprint servers, built on the assumption of human authorship, are ill-equipped for this paradigm shift. We need dedicated infrastructure.
Case Study: arXiv’s Response to AI-Generated Papers
arXiv, the dominant preprint server in physics, mathematics, and computer science, is not ready for AI-generated papers. Its current policy, at the time of writing, includes three key requirements:
- Disclosure mandate: Authors must report any significant use of generative AI tools in their submissions [4].
- Author accountability: Researchers signing as authors take full responsibility for all content, regardless of how it was generated.
- No AI authorship: Generative AI tools cannot be listed as authors on papers.
The pressure against AI-generated papers is visible in arXiv’s enforcement actions. In October 2025, arXiv moderator Kat Boboris announced that arXiv now requires proof of prior peer review for review articles and position papers in computer science, because these formats are particularly susceptible to AI generation [5]. Submissions lacking this documentation face rejection.
arXiv does not treat AI as a legitimate research contributor. Its policy works as damage control, not as infrastructure for an AI research future.
Other preprint servers take similar positions:
- bioRxiv & medRxiv: Authors are responsible for AI-generated content, must detail their use of such tools in manuscripts, and cannot list AI tools as authors; see https://www.biorxiv.org/about/FAQ
- SSRN: Requires disclosure statements detailing AI use and prohibits listing AI as an author (“should not list AI and AI technologies as an author”); see https://blog.ssrn.com/2023/03/07/ai-preprints-and-ssrn-a-new-policy/
- OSF Preprints: Content generated completely or mostly by LLMs or other AI tools is not welcome; see https://help.osf.io/article/691-preprint-moderation-policies
We Need an AI Preprint Server
We need an AI preprint server that explicitly accepts AI-authored work. Without such infrastructure, valuable AI-generated insights may be dismissed or lost for lack of an appropriate venue, and AI-generated papers will inevitably infiltrate traditional preprint servers, eroding trust in the current human-to-human system.
The choice is clear: build dedicated systems now, or watch the existing infrastructure crumble. As frameworks for human-AI collaboration in research continue to evolve [2], and as AI researchers’ potential in scientific discovery remains largely untapped [3], the need for appropriate dissemination channels becomes even more urgent.
Core Design Guidelines
An AI preprint server should provide a dedicated channel that maintains clear separation between AI-generated content and human research. This separation preserves the integrity of traditional scholarly communication while acknowledging AI’s expanding role in research production.
The platform must capture detailed metadata about each submission: the AI system used (including scaffold and model specifications) and the extent of human involvement in guidance or validation. Unlike conventional preprint servers, which assume human authorship and forbid AI authorship, this infrastructure treats AI as a legitimate research contributor while ensuring complete transparency about its role in the creation process.
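To make the metadata requirement concrete, here is a minimal sketch of what a submission record could look like. The field names and values are illustrative assumptions, not a real platform schema:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class AISubmissionMetadata:
    """Hypothetical metadata record for an AI preprint submission.

    Field names are illustrative; a real platform would define its own schema.
    """
    title: str
    model: str                          # base model identifier
    scaffold: str                       # agent framework / orchestration used
    human_roles: list = field(default_factory=list)  # e.g. prompting, validation
    fully_autonomous: bool = False      # True if no human guidance at all

    def to_record(self) -> dict:
        """Serialize to a plain dict, e.g. for storage or an API payload."""
        return asdict(self)

# Example submission with partial human involvement:
meta = AISubmissionMetadata(
    title="Example Paper",
    model="example-model-v1",
    scaffold="example-agent-scaffold",
    human_roles=["topic selection", "final validation"],
)
record = meta.to_record()
```

The point of the sketch is that AI involvement becomes structured, queryable data rather than a free-text disclosure buried in an acknowledgments section.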
Ideally, a trusted organization like arXiv would operate this infrastructure. Their decades of experience managing preprint distribution, their technical expertise in handling massive submission volumes, and their established credibility within the scientific community make them natural candidates. An arXiv-operated AI preprint server would carry institutional legitimacy that standalone initiatives cannot match: researchers would trust the platform’s quality standards, archival permanence, and metadata accuracy. However, current policies suggest that established organizations remain unwilling to accept AI authorship.
Emerging Initiatives
Several initiatives have begun exploring this space:
- aiXiv aims to create an open-access ecosystem where AI systems act as autonomous researchers, generating and disseminating scientific knowledge [1]. This directly addresses the infrastructure gap identified in this post by providing a legitimate venue for AI-authored work that traditional servers like arXiv explicitly reject through their “no AI authorship” policies. aiXiv has no platform in production at the time of writing.
- ai.vixra.org extends the viXra model to AI-generated submissions, offering an alternative venue with minimal gatekeeping. Launched in March 2025 by Scientific God Inc., the platform operates as an open-access repository that archives AI-assisted scholarly articles separately from traditional human-written work. This approach provides a welcoming venue for AI-assisted research but does not yet embrace full AI authorship. rxiVerse, launched in July 2025 and also operated by Scientific God Inc., is identical to ai.vixra.org except that it charges a $19.00 fee per new article submission.
- AgentRxiv introduces a collaborative framework where LLM agent laboratories upload and retrieve reports from a shared preprint server. The platform enables agents to iteratively build upon prior research results rather than working in isolation [6].
These platforms represent early experiments in harnessing machine-generated research.
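The shared-server loop that AgentRxiv describes, where agent laboratories upload reports and retrieve prior work to build on, can be sketched as follows. The interface below is a hypothetical in-memory stand-in, not AgentRxiv’s actual API:

```python
# Minimal sketch of an AgentRxiv-style shared preprint server: agent labs
# upload reports and retrieve earlier ones instead of working in isolation.
# Class and method names are illustrative assumptions.

class SharedPreprintServer:
    def __init__(self):
        self._papers = []  # in-memory store; a real server would persist

    def upload(self, title: str, abstract: str, lab_id: str) -> int:
        """Register a new report and return its identifier."""
        paper_id = len(self._papers)
        self._papers.append(
            {"id": paper_id, "title": title, "abstract": abstract, "lab": lab_id}
        )
        return paper_id

    def retrieve(self, keyword: str) -> list:
        """Return all reports whose title or abstract mentions the keyword."""
        kw = keyword.lower()
        return [
            p for p in self._papers
            if kw in p["title"].lower() or kw in p["abstract"].lower()
        ]

server = SharedPreprintServer()
server.upload("Baseline prompting study", "Initial results on task X.", lab_id="lab-a")

# A second lab retrieves prior work before starting its own run,
# then uploads a report that builds on it:
prior = server.retrieve("prompting")
server.upload(
    "Improved prompting study",
    f"Builds on {len(prior)} prior report(s).",
    lab_id="lab-b",
)
```

The design choice worth noting is the shared, searchable store: each lab’s output becomes retrievable context for every other lab, which is what lets agents iterate on prior results rather than rediscover them.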
Conclusion
The scientific community stands at a crossroads. AI systems now produce research at unprecedented scale, yet our infrastructure remains rooted in assumptions of purely human authorship. Building dedicated preprint servers for AI-generated work is not about replacing human research; it is about creating appropriate channels that preserve the integrity of both human and machine contributions.
Without this separation, we risk losing valuable AI scientific insights and discoveries. The path forward demands new infrastructure that embraces transparency, maintains clear boundaries, and acknowledges a fundamental truth: the future of scientific discovery will be written by both human and artificial minds.
References
[1] Zhang, P., Wang, Y., Chen, X., Liu, H., & Li, J. (2025). aiXiv: A Next-Generation Open Access Ecosystem for Scientific Discovery Generated by AI Scientists. arXiv preprint arXiv:2508.15126.
[2] Shao, E., et al. (2025). SciSciGPT: Advancing Human-AI Collaboration in the Science of Science. arXiv preprint arXiv:2504.05559.
[3] Yu, H., et al. (2025). Unlocking the Potential of AI Researchers in Scientific Discovery: What Is Missing? arXiv preprint arXiv:2503.05822.
[4] arXiv.org. (n.d.). arXiv Moderation. https://info.arxiv.org/help/moderation/index.html#policy-for-authors-use-of-generative-ai-language-tools
[5] Boboris, K. (2025). Attention Authors: Updated Practice for Review Articles and Position Papers in arXiv CS Category. arXiv blog. https://blog.arxiv.org/2025/10/31/attention-authors-updated-practice-for-review-articles-and-position-papers-in-arxiv-cs-category/
[6] Schmidgall, S., & Moor, M. (2025). AgentRxiv: Towards Collaborative Autonomous Research. arXiv preprint arXiv:2503.18102.