
By Hwayoung Cho, PhD, RN
Have you ever drafted a paragraph for a class assignment, presentation, literature review, patient education handout or research manuscript and then asked an AI tool, “Can you find citations to support this?” That workflow is becoming increasingly common. In the past, many of us built our rationale and explanations through intensive literature review. We read multiple scientific papers, compared findings, evaluated the strength of the evidence and then developed our position with specific supporting evidence. Today, with generative AI woven into everyday work, a different pattern is emerging. We often write first and then ask AI to find citations that may fit what we have already written (even if we do not know in advance whether such evidence exists).
That shift is not necessarily a bad thing. AI can help organize ideas, generate keywords, structure a first draft and save time. For busy students, researchers, educators and clinicians, that efficiency is appealing. The risk emerges when users rely on AI-generated citations without confirming that the sources are real and support the claims being made.
Why This Matters Now
Generative AI has changed how many people approach writing. Instead of starting with a database search and building an argument from the literature upward, it is now tempting to start with the message we want to communicate and then ask AI to supply the supporting references. In many situations, that feels practical. It can speed up brainstorming, reduce the intimidation of a blank page and help people move from an idea to a draft more quickly.
But convenience can create a false sense of security. When a citation appears polished and complete, it is easy to assume it is trustworthy. In reality, some AI-generated references point to papers that do not exist at all, while others cite real articles that do not actually support the statement being made. Studies evaluating AI-generated references have shown that large language models can produce fabricated or inaccurate citations even when those references look credible. For example, Walters and Wilder (2023) found that 55% of GPT-3.5 citations and 18% of GPT-4 citations were fabricated, while Chelli et al. (2024) reported hallucination rates of 39.6% for GPT-3.5 and 28.6% for GPT-4 in a systematic-review context.
Even more concerning, this problem is no longer confined to student drafts or informal chatbot use. Editors have reported seeing more AI-generated fake citations in submissions (Whitford, 2026), and commentators have warned that such citations can propagate through the scholarly literature (Sharifi, 2025).
Why Fake Citations Are So Dangerous
Fake AI-generated citations are often convincing because they resemble real scholarship. They may include familiar journal titles, plausible author names, a publication year, volume and issue numbers and even a DOI (digital object identifier), the unique code used to locate a specific article online. At first glance, everything looks legitimate, which is exactly why these citations can mislead busy readers, writers and reviewers.
But when you try to verify the reference, the problems begin. The title cannot be found. The DOI does not work. The journal issue exists, but the cited article does not. Or the paper is real, but it says something very different from what the writer claimed. That is what makes this problem so dangerous: the citation does not look suspicious. It looks authoritative.
Generative AI can imitate the form of scholarship without guaranteeing the substance. It can produce something that resembles evidence without actually providing evidence. In nursing and health care, where evidence carries professional and sometimes clinical consequences, that distinction matters.
Why This Is a Nursing Issue
This is not only a problem for researchers or academic writers. It is a nursing issue. Nursing students at all levels use evidence in assignments and presentations, and graduate students rely on it heavily in literature reviews, capstone projects and dissertations. Faculty use evidence in teaching, manuscripts and grant proposals. Practicing nurses use evidence in patient education, quality improvement, staff development, clinical guidance and policy discussions.
In other words, people across nursing and care settings rely on evidence to inform education, scholarship, policy and patient care. That is why citations are not decorative. They are not there simply to make writing look scholarly. They show the foundation of our claims, recommendations, decisions and teaching. If that foundation is false, the work’s credibility is weakened from the start.
In a student assignment, fake citations can distort learning. In research, they can undermine scholarly integrity. In educational materials, they can spread misinformation. In practice, they can make weak or inaccurate claims appear evidence based. This is not just a formatting problem. It is a trust problem.
Writing First Is Not the Real Problem
It is important to be precise here. The problem is not simply that people now write first and search for supporting citations later. In many real-world settings, that workflow is understandable. We often begin with a concept we already know, draft key points and then look for the most relevant literature to support or refine them. AI can be helpful at that early stage.
The real problem is using AI-generated citations without verification. An AI-generated reference can be a starting point, but it should never be treated as final evidence until it has been checked. The issue is not AI itself. The issue is the temptation to confuse speed with accuracy and appearance with truth.
What We Should Do
Any citation should be verified before use. Before adding an AI-suggested reference, we should confirm that the article actually exists. We should search for it in trusted databases such as PubMed, open the original source and check the author names, title, journal, publication year and DOI. For example, Springer Nature’s guidance states that authors should ensure citations are accurate, that citations support the statements they are used to justify and that authors should not cite sources they have not read (Springer Nature, n.d.).
And we should actually read the article. Even a real article becomes a mis-citation if it does not support the claim attached to it. Accuracy is not only about whether a source exists. It is also about whether it is being used honestly and appropriately.
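For readers comfortable with a little scripting, the first part of that check, confirming that a cited DOI actually resolves to the article it claims to be, can be partially automated. The sketch below is an illustration rather than official guidance: it assumes Python with the requests library, queries the public Crossref API (one of several places a DOI can be checked; PubMed and the publisher's site work just as well) and uses a placeholder DOI and title rather than real references.

```python
# Illustrative sketch only: check whether a DOI is registered and whether
# its recorded title matches the citation you were given. The DOI and title
# below are placeholders, not real references.
import requests

def check_doi(doi: str, expected_title: str) -> None:
    """Look up a DOI in the public Crossref registry and compare titles."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code == 404:
        print(f"DOI {doi} is not registered -- treat this citation as suspect.")
        return
    resp.raise_for_status()
    record = resp.json()["message"]
    registered_title = (record.get("title") or [""])[0]
    print("Registered title:", registered_title)
    if expected_title.lower() not in registered_title.lower():
        print("Title does not match the citation -- check the original source.")

# Placeholder values for illustration
check_doi("10.1000/example-doi", "Effects of a nurse-led intervention")
```

Even when a lookup like this succeeds, it only confirms that a record exists and that the title matches. It cannot tell you whether the article supports the claim attached to it, which is why reading the source remains the essential step.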
We should also begin treating citation verification as a core digital literacy skill. In the era of generative AI, information literacy and source verification are no longer optional. They are part of professional competence. Transparency matters too. If AI tools are used during writing or editing, their use should be disclosed in accordance with the policies of the journal, course, institution or workplace. Even when AI assists with drafting, the responsibility for accuracy still belongs to the human author.
This Is a Professional Responsibility Issue
Fake citations generated by AI are not merely a technical flaw. They are a professional practice issue. Nursing stands on trustworthy evidence, careful judgment and ethical responsibility. Whether a paragraph took hours to write or was revised in seconds with AI, that standard does not change.
Generative AI will continue to spread across education, research and health care practice. For that reason, what we need is not blind enthusiasm for technology, but a habit of verification. A citation should not be trusted because it looks scholarly. A trustworthy citation is one that leads to a real source, has been read directly, evaluated critically and used honestly.
Evidence-based nursing practice depends on evidence. Not evidence-shaped text. Real evidence.
At FloGatorAI, we invite you to pause, verify and help build a nursing culture and science where innovation moves forward without leaving evidence behind.