Sounds exciting, and dangerously misleading.
What generative AI can realistically do in scientific research is find relevant papers when connected to real databases, accelerate structured summaries and cross-paper comparisons, and polish rough drafts into precise, evidence-based language. What it cannot do is guarantee that a single citation it generates actually exists.
At the 2nd Bicyclos HEurope International School and Workshop in Sevilla, our very own Milo Malanga PhD delivered a hands-on session on the responsible use of generative AI in literature search, analysis, and proposal writing, putting five major AI platforms to the test with a deliberately fabricated citation request. Some tools generated complete references, with authors, journals, and DOIs, for a paper that was never written, while others searched real databases and returned only verified sources.
The difference between a hallucinated DOI in your MSCA proposal and a verified one is not a minor detail; it is a credibility risk that can undermine months of work.
AI is a powerful accelerator for research workflows, but verification is not optional, IP protection is not an afterthought, and critical thinking is not something we can afford to outsource.
Speed matters, but accuracy matters more.
Have you ever caught an AI-generated citation that turned out to be completely fabricated? We’d love to hear how you’re navigating AI in your research workflow.