AI Curriculum for Research

by Martin Monperrus

Artificial intelligence tools are transforming research workflows across disciplines. Doctoral students today must acquire competencies that did not exist a decade ago. This post outlines four core areas where AI integration has become essential for modern research: coding, reading, writing, and idea generation.

When used effectively, these AI tools offer extraordinary power to accelerate research, reduce tedious tasks, and expand what a single researcher can accomplish. However, AI integration comes with significant pitfalls: each section below discusses both the capabilities and the risks that researchers must navigate.

Using AI for Coding

Large language models (LLMs) such as GitHub Copilot and Claude can generate, debug, and explain code.
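A core skill is never to trust generated code blindly: exercise it on known inputs, including edge cases the model may have missed. A minimal sketch, using a hypothetical LLM-suggested function as the example:

```python
# Suppose an LLM suggested this function for computing a researcher's
# h-index from a list of citation counts (hypothetical example).
def h_index(citations):
    """Return the largest h such that h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Verify the suggestion against cases with known answers,
# including empty and all-zero inputs.
assert h_index([10, 8, 5, 4, 3]) == 4
assert h_index([0, 0]) == 0
assert h_index([]) == 0
```

The same discipline applies at scale: keep a test suite alongside any AI-generated analysis script, so regressions surface when you regenerate or refactor the code.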

Skills to master:

Common pitfalls:

Using AI for Reading

The volume of scientific literature exceeds human reading capacity. AI tools can summarize papers, extract key findings, and identify relevant citations.
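Summaries are more useful when the prompt imposes a fixed structure, so that outputs are comparable across papers. A minimal sketch; the template wording is an assumption, not a prescribed recipe:

```python
# Hypothetical prompt template for structured paper summarization.
# Send the filled-in prompt to whichever LLM you use.
SUMMARY_PROMPT = """\
Summarize the following paper using this structure:
- Problem addressed
- Core contribution
- Method
- Main results (with numbers if stated)
- Limitations acknowledged by the authors

Paper text:
{paper_text}
"""

def build_summary_prompt(paper_text: str) -> str:
    """Fill the template with the paper's text."""
    return SUMMARY_PROMPT.format(paper_text=paper_text)
```

Fixing the structure also makes it easier to spot when a summary silently omits a section, a common failure mode worth checking for.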

Practical applications:

Common pitfalls:

See the companion post: https://www.monperrus.net/martin/reading-with-ai

Using AI for Writing

AI assists with drafting, editing, and improving scientific prose. Common use cases include:

Skills to master:


Many journals and conferences now require disclosure of AI tool usage in manuscript preparation. The researcher remains fully responsible and accountable for accuracy and originality.

Common pitfalls:

Using AI for Feedback

Used well, AI excels at providing critical feedback before formal peer review.

Applications:

Example prompts:

  - Give constructive feedback on how to improve this paper.
  - What are the main weaknesses of this paper?
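Prompts like these can be scripted so that every draft gets the same battery of critiques. A minimal sketch, where `ask_llm` is a hypothetical placeholder for any function mapping a prompt string to the model's reply:

```python
# Run a fixed set of critique prompts over a draft.
# `ask_llm` is hypothetical: plug in whichever API or local model you use.
FEEDBACK_PROMPTS = [
    "Give constructive feedback on how to improve this paper.",
    "What are the main weaknesses of this paper?",
]

def collect_feedback(paper_text, ask_llm):
    """Return a dict mapping each critique prompt to the model's reply."""
    return {p: ask_llm(f"{p}\n\n{paper_text}") for p in FEEDBACK_PROMPTS}
```

Keeping the prompt set in one place makes the feedback reproducible and lets you refine the prompts over time.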

One should then critically evaluate each AI suggestion, adopting or discarding items on their merits.

Common pitfalls:

Using AI for Idea Generation

AI can serve as a brainstorming partner. Researchers use LLMs to:

The value of AI-generated ideas depends on the researcher’s ability to evaluate feasibility, significance and novelty.

Common pitfalls:

Ethics

AI use in research requires transparency and adherence to evolving ethical standards. PhD students must understand disclosure requirements: when submitting papers, grant applications, or theses, they should clearly state which AI tools were used and for what purpose. Most major publishers now mandate such disclosures. For example, Nature requires authors to declare AI use in the Methods or Acknowledgements section (see Nature’s AI policy). IEEE similarly requires disclosure and prohibits listing AI tools as authors (see IEEE Author Guidelines). ACM policies follow comparable principles (see ACM Policy on Authorship). Example disclosure statements include: “GitHub Copilot was used to assist with code generation for data analysis scripts” or “ChatGPT was used to improve grammar and clarity in early drafts; all content was verified and revised by the authors.” Disclosure should specify the tool, version when relevant, and the scope of its use.

Common pitfalls:

AI Slop

AI can generate both high-quality content and what is called “AI slop”: text that is generic, inaccurate, poorly structured, and generally of low quality.

PhD students must learn to distinguish between these outcomes by critically reviewing AI outputs and iteratively refining results.

They must also master the techniques for producing high-quality content, characterized by accuracy, clarity, and relevance.

Conclusion

AI literacy is now a core research competency. Effective researchers combine AI capabilities with domain expertise, critical evaluation, and ethical awareness. The tools will continue to evolve; the principles of rigorous inquiry remain constant.

Martin Monperrus
January 2026