Artificial intelligence tools are transforming research workflows across disciplines. Doctoral students today must acquire competencies that did not exist a decade ago. This post outlines five core areas where AI integration has become essential for modern research: coding, reading, writing, feedback, and idea generation.
When used effectively, these AI tools offer extraordinary power to accelerate research, reduce tedious tasks, and expand what a single researcher can accomplish. However, AI integration comes with significant pitfalls: each section below discusses both the capabilities and the risks that researchers must navigate.
Using AI for Coding
AI coding assistants such as GitHub Copilot and Claude, powered by large language models (LLMs), can generate, debug, and explain code.
Skills to master:
- Using AI coding assistants directly in IDEs (VS Code, JetBrains, etc.) for inline suggestions and chat interfaces
- Deploying AI agents for multi-step coding tasks such as refactoring, test generation, and documentation
- Generating data visualizations with libraries like Matplotlib by describing the desired plot in natural language
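To make the visualization bullet concrete, here is the kind of script an assistant might emit for a prompt like "bar chart of papers read per year, with labeled axes". The data and labels are invented for illustration; this is a sketch, not output from any real assistant session.

```python
# Illustrative AI-generated plotting script; data below is synthetic.
import matplotlib
matplotlib.use("Agg")  # headless backend: render without a display
import matplotlib.pyplot as plt

years = [2021, 2022, 2023, 2024]
papers_read = [12, 18, 25, 31]

fig, ax = plt.subplots()
ax.bar(years, papers_read, color="steelblue")
ax.set_xlabel("Year")
ax.set_ylabel("Papers read")
ax.set_title("Reading volume per year")
ax.set_xticks(years)  # one tick per year, avoids fractional ticks
fig.savefig("papers_per_year.png")
```

Even for a simple plot like this, open the resulting image and check it: assistants regularly mislabel axes or silently drop data points.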
Common pitfalls:
- Accepting generated code without testing or understanding it: generated code can be logically wrong even when it runs
- Accumulating technical debt from inconsistent or poorly structured AI-generated code: an open-science replication package should be of high quality
- Over-relying on AI for tasks that require deep algorithmic understanding; regularly practicing coding without AI keeps core skills sharp
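The first pitfall above can be mitigated by pinning down the behavior of any generated helper with tests before building on it. A minimal sketch, where the moving_average function and its tests are hypothetical examples rather than real assistant output:

```python
# Hypothetical AI-generated helper; names and logic are illustrative.
def moving_average(values, window):
    """Simple moving average over a fixed-size sliding window."""
    if window <= 0 or window > len(values):
        raise ValueError("window must be between 1 and len(values)")
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

# Never accept generated code blindly: pin down its behavior with tests.
assert moving_average([1, 2, 3, 4], 2) == [1.5, 2.5, 3.5]
assert moving_average([10, 20, 30], 3) == [20.0]
```

Writing the assertions yourself forces you to state what the function should do, which is exactly the understanding the pitfall warns against skipping.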
Using AI for Reading
The volume of scientific literature exceeds human reading capacity. AI tools can summarize papers, extract key findings, and identify relevant citations.
Practical applications:
- Summarizing abstracts and full papers to triage reading lists
- Asking questions about specific papers to extract methods, results, or limitations
- Comparing findings across multiple papers
- Translating papers from unfamiliar languages
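The triage use case above can be templated. Below is a minimal sketch of a prompt builder; the prompt wording, the example topic, and the function name triage_prompt are all assumptions, and the actual call to an LLM API is deliberately left out so you can plug in the tool of your choice.

```python
# Minimal triage-prompt builder; names and wording are illustrative.
def triage_prompt(title, abstract, topic):
    return (
        f"Paper: {title}\n"
        f"Abstract: {abstract}\n\n"
        f"In two sentences, state the main contribution. Then answer "
        f"'relevant' or 'not relevant' with respect to: {topic}."
    )

prompt = triage_prompt(
    "A Study of Bug-Fixing Patterns",
    "We analyze bug-fixing commits in open-source repositories...",
    "automated program repair",
)
```

Forcing a binary relevant/not-relevant verdict keeps the output easy to scan when triaging dozens of papers.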
Common pitfalls:
- Propagating fabricated citations (hallucinations) by failing to verify them against the original sources
- Missing key information that AI summaries omit: skimming and reading the most relevant literature yourself is still essential
- Developing shallow understanding of foundational literature
See the companion post: https://www.monperrus.net/martin/reading-with-ai
Using AI for Writing
AI assists with drafting, editing, and improving scientific prose.
Skills to master:
- Using AI via chat prompting (easy)
- Integrating AI into writing environments such as VS Code (harder)
Common use cases:
- Generating first drafts from outlines or notes
- Improving clarity and grammar
- Checking consistency in terminology and notation
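The consistency bullet can be partly automated even without an LLM. A minimal sketch, where the variant table and the function name find_inconsistencies are illustrative assumptions:

```python
# Flag deprecated spellings against a preferred form.
# The variant table below is an illustrative assumption; extend it
# with the terminology conventions of your own paper.
import re

VARIANTS = {
    "data set": "dataset",
    "e-mail": "email",
}

def find_inconsistencies(text):
    """Return (variant, preferred, count) for each variant found."""
    issues = []
    for variant, preferred in VARIANTS.items():
        count = len(re.findall(re.escape(variant), text, re.IGNORECASE))
        if count:
            issues.append((variant, preferred, count))
    return issues

draft = "We release the data set as a public dataset; e-mail us for access."
```

Running find_inconsistencies(draft) flags both "data set" and "e-mail"; an LLM can then be asked to rewrite the flagged sentences, with the deterministic check catching anything it misses.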
Many journals and conferences now require disclosure of AI tool usage in manuscript preparation. The researcher remains fully responsible and accountable for accuracy and originality.
Common pitfalls:
- Losing the deep thinking that occurs during manual writing
- Introducing factual errors or unsupported claims into drafts
- Producing generic prose that lacks precise domain terminology
- Letting AI flatten your voice instead of developing your own writing style
Using AI for Feedback
Used in the right way, AI excels at providing critical feedback before formal peer review.
Applications:
- Reviewing draft papers for hidden assumptions, logical gaps, unclear arguments, or missing citations
- Critiquing experimental designs for potential confounds or missing controls
- Identifying weaknesses in statistical analyses
Example prompts:
- Give constructive feedback on how to improve this paper.
- What are the main weaknesses of this paper?
Then, critically evaluate each AI suggestion and decide whether to act on it or discard it.
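The example prompts above can be turned into a reusable template that deliberately asks for weaknesses only, since AI tends to flatter when invited to list strengths. The function name and the section list below are assumptions to adapt to your field:

```python
# Feedback prompt that asks for weaknesses only, to counteract the
# tendency toward flattering answers. Section list is illustrative.
def review_prompt(draft_text):
    return (
        "Act as a critical reviewer. Do NOT list strengths.\n"
        "For the draft below, report:\n"
        "1. hidden assumptions\n"
        "2. logical gaps or unclear arguments\n"
        "3. missing experiments, controls, or citations\n\n"
        f"Draft:\n{draft_text}"
    )

prompt = review_prompt("We claim our tool outperforms all baselines.")
```

A fixed structure also makes it easier to compare feedback across several runs or several models on the same draft.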
Common pitfalls:
- Over-weighting AI critiques (may reflect training biases rather than genuine flaws)
- Asking for strengths (AI tends toward sycophantic responses)
- Relying on AI for novelty assessment (by construction, AI training data lags behind the most recent literature)
Using AI for Idea Generation
AI can serve as a brainstorming partner. Researchers use LLMs to:
- Generate research questions based on literature gaps
- Propose alternative hypotheses or experimental designs
- Identify connections between disparate fields
The value of AI-generated ideas depends on the researcher’s ability to evaluate their feasibility, significance, and novelty.
Common pitfalls:
- Spending time on research directions already explored in literature that the AI is unaware of
- Over-relying on AI for idea generation, disregarding the classical means (deep thinking, discussion with peers)
Ethics
AI use in research requires transparency and adherence to evolving ethical standards. PhD students must understand disclosure requirements: when submitting papers, grant applications, or theses, they should clearly state which AI tools were used and for what purpose. Most major publishers now mandate such disclosures. For example, Nature requires authors to declare AI use in the Methods or Acknowledgements section (see Nature’s AI policy). IEEE similarly requires disclosure and prohibits listing AI tools as authors (see IEEE Author Guidelines). ACM policies follow comparable principles (see ACM Policy on Authorship).

Example disclosure statements include: “GitHub Copilot was used to assist with code generation for data analysis scripts” or “ChatGPT was used to improve grammar and clarity in early drafts; all content was verified and revised by the authors.” Disclosure should specify the tool, version when relevant, and the scope of its use.
Common pitfalls:
- Not disclosing AI use when required by journals or institutions
- Using AI in ways that violate specific publisher policies (e.g., generating figures)
AI Slop
AI can generate both high-quality content and what is called “AI slop”: text that is generic, inaccurate, poorly structured, and generally of low quality.
PhD students must learn to distinguish between these outcomes by critically reviewing AI outputs and iteratively refining them. They must master the techniques for generating high-quality content: accurate, clear, and relevant.
Conclusion
AI literacy is now a core research competency. Effective researchers combine AI capabilities with domain expertise, critical evaluation, and ethical awareness. The tools will continue to evolve; the principles of rigorous inquiry remain constant.
Martin Monperrus
January 2026