The Artificial Intelligence Dilemma in Academic Writing: Balancing Efficiency and Integrity
*Corresponding author: Himel Mondal, Department of Physiology, All India Institute of Medical Sciences, Deoghar, Jharkhand, India. himelmkcg@gmail.com
How to cite this article: Mondal H, Mondal S, Jana S. The Artificial Intelligence Dilemma in Academic Writing: Balancing Efficiency and Integrity. Indian J Cardiovasc Dis Women. 2025;10:225-30. doi: 10.25259/IJCDW_86_2024
Abstract
The rapid rise of artificial intelligence (AI) tools in academic writing has created a cycle involving AI text generators, AI detection tools, and AI-powered paraphrasing or “humanizing” tools. AI can support authors in drafting essays and research papers, especially those who face challenges with language. However, increasing reliance on AI has sparked concerns about originality and intellectual contribution. With the introduction of large language model chatbots such as ChatGPT, AI detectors have emerged to identify AI-generated content. In response, “humanizer” tools have been developed to alter AI-generated text so that it bypasses detection. The interaction among these three types of tools complicates the relationship between AI use and academic integrity, raising the fundamental question: “To use or not to use AI?” The way forward lies in fostering awareness and following the ethical guidelines outlined by the International Committee of Medical Journal Editors and the World Association of Medical Editors. This article offers a concise overview of these tools, their functions, and the current standards for the responsible use of AI in academic writing.
Keywords
Artificial intelligence
ChatGPT
Large language model
Scientific writing
INTRODUCTION
The landscape of academic writing is undergoing a profound transformation with the advent of artificial intelligence (AI) tools, particularly large language models (LLMs) such as ChatGPT.[1] These technologies have made it significantly easier for researchers, students, and professionals to draft and refine scholarly content.[2] Especially for non-native English speakers or those facing difficulties with academic expression, AI offers assistance in producing grammatically correct and coherent texts.[3] However, this convenience comes with growing concerns about the authenticity, originality, and intellectual ownership of AI-assisted writing.
In response to the increasing prevalence of AI-generated content, a new category of software, “AI detectors,” has emerged. These tools aim to identify text that may have been produced by AI.[4] The presence of such detection tools has, in turn, prompted the rise of yet another innovation: AI “humanizers,” or paraphrasing tools designed to modify AI-generated content so that it escapes detection.[5] This loop of generation, detection, and obfuscation has created a complex ecosystem that challenges traditional notions of academic integrity.
Although fully relying on AI-generated content can diminish an author’s credibility,[6] we believe that its judicious use can substantially aid authors in research and writing.[7] In this context, we share our perspectives on how the current growth of the AI industry is influencing authors and academicians, and how to find a way forward.
HOW AI HELPS IN WRITING
AI can help in three stages of academic writing, as shown in Figure 1: it starts with an AI text generator, followed by an AI detector, and then an AI paraphraser or so-called “humanizer.”[8] LLMs like ChatGPT can help generate text, tools like Turnitin can help detect AI content in the text, and tools like Stealth Writer can twist language to bypass AI detection. With the easy availability of these three types of tools, academicians, authors, and researchers are increasingly confused and face a dilemma: “to use or not to use AI?” In the following sections, we provide a brief overview of the three tools and then comment on the current guidelines for using AI for academic purposes.

- Steps of the writing process where artificial intelligence can be used, along with two examples of such tools. (AI: Artificial intelligence)
THREE AI TOOLS
Generation
AI models, such as LLMs, are being increasingly used to draft essays and research papers. These tools can produce text based on prompts,[9] saving time and effort for students, researchers, and professionals.[10] What once required hours of research and writing can now be completed in minutes. This can be of great help to those who struggle to write scientific content in English due to limited language proficiency.[11] They can prepare their first draft and have it edited for language and grammar to improve the fluency of the text. Mishra et al. conducted a study with participants from the Global Clinical Scholars’ Research Training certificate program of Harvard Medical School and reported that researchers globally are increasingly recognizing and integrating LLMs for literature review, manuscript drafting, and editing, while also expressing strong ethical concerns and calling for clearer guidelines to govern their use.[12]
Detection
AI detection tools have been developed to identify content created by AI text generators like ChatGPT, Claude, or Perplexity. AI detectors analyze text for patterns and features that suggest the involvement of AI in the writing process.[13] Some teachers and book editors have started using them to screen submitted text.[14] The problem with such detectors is that they will flag text as “AI-generated” even if only the language and grammar were edited by an LLM. For example, a part of a manuscript was copied from a previously published article.[15] It was detected as 100% human-generated text. It was then edited for grammar and language by an LLM chatbot, after which an AI detector rated it 100% AI-generated [Figure 2]. The scientific content of the two texts was identical, yet the report labeled one as human-generated and the other as AI-generated. The latter was not AI-generated; it was merely edited by AI.

- An example showing how human written texts can pass through various artificial intelligence tools for detecting human-generated content, editing text, detecting artificial intelligence (AI)-generated text, “humanize” text, and bypassing AI detection, from top to bottom.
In addition, these tools can also flag texts as “AI-generated and AI-refined” or “Human-written and AI-refined.” In all cases, however, the scientific content remains the same; only the language differs.[16] The same content in a different writing style is flagged as AI-generated. Hence, these tools merely add another layer of unnecessary screening for scientific papers. We need to read the science, not investigate which tool was used to write it.
Humanization
There are professional writers who prepare manuscripts afresh from the data or initial draft provided by the authors. This is a big industry, and no stakeholders are against agency-written manuscripts.[15,17] However, these agencies charge substantial fees for writing or editing. They can also help in the removal of AI content.[14] To reduce human effort, yet another segment of AI tools has emerged: the text “humanizer.” These tools take machine-generated text and reword it to make it appear more original, natural, or human-like, so that it can evade AI detection systems. For example, we took the text that had been detected as 100% AI-generated, “humanized” it, and rechecked the AI content: it now showed 100% human-generated! Hence, it is nothing more than a game of playing with language.[5]
PROBLEM
If journals, editors, and reviewers start screening manuscripts for AI content and rejecting them on that basis, authors may develop an “AI phobia” and avoid using AI tools altogether. A recent survey found divergent views among researchers on AI’s role in academic writing. Some argued that AI, like the calculator, has become so common that disclosure should be unnecessary, reflecting a pragmatic normalization of its use. Others opined that using AI in writing or reviewing should be considered “cheating.”[18] By rejecting AI, the latter group may struggle to stay competitive, as AI can assist with data analysis, literature review, and drafting, all of which can enhance research productivity.[19]
On the other hand, another group of authors may edit or generate content and “humanize” the text, which may reduce the fluency and clarity of scientific communication. Over-editing can result in convoluted phrasing, loss of coherence, and a lack of smooth transitions.[20,21]
Hence, some authors are in a dilemma, while others are using two tools (an AI text generator and a humanizer). A study by Ng et al.[22] found that researchers feel they need institutional support, formal training, and clear policies on AI tool usage. The findings underscore a clear need for improved guidance, transparency, and educational initiatives around AI in research.[22]
ROADMAP FOR POTENTIAL SOLUTION
LLM chatbots can be used in various stages of research and manuscript preparation, as shown in Table 1.[11,23-25]
| Stage | How AI can help | Important considerations |
|---|---|---|
| Idea generation | Suggest research topics, identify gaps from prior literature, formulate research questions | Human must refine and verify relevance; AI should not replace domain expertise |
| Literature review | Summarize background knowledge, suggest keywords for search | Always read and cite original sources. AI summaries may omit nuance or include inaccuracies |
| Study design | Help brainstorm study methodology, suggest statistical approaches or designs | Should be used for support only; consult domain experts and statisticians for validation |
| Data analysis (support) | Explain statistical concepts, guide on test selection, write R code drafts | AI must not perform unsupervised analysis; results must be interpreted by a human |
| Drafting | Assist with language refinement, provide structure of introduction, methods, or discussion | Must be edited thoroughly for accuracy and scientific tone |
| Abstract and title | Generate drafts of titles and abstracts based on the main findings | Should be revised to align with the journal scope and actual data |
| Reference formatting | Help format references in desired style (e.g., Vancouver, APA) | AI-generated references must be cross-checked for correctness |
| Proofreading | Identify grammar, syntax, and stylistic errors | AI proofreading is helpful but cannot ensure domain accuracy or content logic |
| Journal submission preparation | Suggest cover letter drafts, prepare for formatting requirements of specific journals | Final version must reflect journal-specific instructions and author voice |
AI: Artificial intelligence, APA: American Psychological Association
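As an illustration of the “Drafting” and “Proofreading” rows above, a language-editing request can be phrased so that the chatbot is explicitly constrained to leave the scientific content untouched. The sketch below is a hypothetical helper of our own (not tied to any specific chatbot or API); it only shows one way to structure such a prompt:

```python
def build_editing_prompt(draft: str) -> str:
    """Construct a prompt asking an LLM to edit language only,
    leaving scientific claims, data, and citations unchanged."""
    instructions = (
        "Edit the following manuscript excerpt for grammar, clarity, "
        "and academic tone. Do not add, remove, or alter any scientific "
        "claims, data, or citations. Return only the edited text."
    )
    # Delimit the draft so the model treats it as material to edit,
    # not as further instructions.
    return instructions + "\n\n---\n" + draft + "\n---"

# Example usage with a deliberately flawed sentence:
prompt = build_editing_prompt("The datas was analysed with SPSS version 26.")
```

Constraining the prompt in this way reflects the precaution, stated throughout the table, that AI should refine language while the human author remains responsible for the scientific content.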
When using chatbots like ChatGPT in research and academic writing, several precautions are essential to ensure ethical integrity, accuracy, and responsible use.[26] Researchers must recognize that chatbots are tools, not substitutes for human expertise.[27] While these AI systems can assist in drafting text, suggesting ideas, or summarizing content, their outputs should always be critically evaluated.[28] Chatbots may generate plausible-sounding but factually incorrect or misleading information.[29] Hence, researchers must cross-verify all chatbot-generated content with reliable primary sources, especially when citing literature.[30]
Transparency is another crucial precaution. Any substantial use of chatbots in manuscript preparation, such as drafting, editing, or language enhancement, should be clearly disclosed in the acknowledgments or methods section.[31] This fosters transparency and allows peer reviewers and readers to understand the extent of AI involvement in the work. For more detailed current guidelines, authors may visit the International Committee of Medical Journal Editors (ICMJE)[32] criteria or World Association of Medical Editors (WAME) guidelines.[33]
While using chatbots, confidentiality, and data privacy must be preserved. Researchers should avoid inputting sensitive, unpublished, or personally identifiable information into chatbot interfaces, as the terms of service of some AI platforms may allow usage data to be stored or used for training.[34] This is especially critical in medical and clinical research, where patient confidentiality is a legal and ethical mandate.[35]
Finally, the use of chatbots should not replace genuine scholarly engagement or critical thinking. While AI can help streamline some aspects of writing, the responsibility for the intellectual content, argumentation, and conclusions lies solely with the human authors.[36] Both ICMJE and WAME emphasize that authors must take full responsibility for the accuracy and integrity of their work after thoroughly checking the content. A summary of the guidelines is shown in Figure 3.

- Major guidelines for using artificial intelligence in academic writing.
In addition, biomedical journals are not against using AI in manuscript preparation.[37] However, researchers should ensure they are not violating the editorial policies of the particular journal, some of which may have specific rules about AI-generated content. For example, journals published by Scientific Scholars have a field where authors need to declare the use of AI in addition to the acknowledgment.
In this era of AI-assisted writing and publication, AI detectors and AI humanizers should become obsolete in the near future if authors start following guidelines for the optimum use of chatbots.[38] Journal and book editors and reviewers should evaluate the scientific merit of the paper rather than check AI reports, except in cases of extreme suspicion of unscientific argument.
PROTECTING WOMEN’S HEALTH DATA
Sharing women’s health data with chatbots raises particular concerns about privacy, confidentiality, and the ethical handling of sensitive information. Women’s health data often includes details on reproductive health, mental health, and other personal health factors that require a high level of confidentiality.[39]
In the context of using chatbots for research writing on women’s health, researchers need to be cautious. They should verify each chatbot’s data handling policies to ensure that sensitive information is not stored or reused. When the chatbot stores or uses the input data for training models, it is suggested that the researchers use anonymized data in a prompt. This can provide an additional layer of protection.[40]
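The anonymization step suggested above can be approximated with simple pattern masking applied before any text reaches a chatbot. The sketch below is a crude illustration only; the patterns and placeholder labels are our own, and real de-identification of health data should rely on validated de-identification tools and human review:

```python
import re

def anonymize(text: str) -> str:
    """Mask common identifiers (e-mail addresses, dates, long digit
    runs such as phone or record numbers) before sharing text with
    a chatbot. A crude sketch, not a validated de-identification tool."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)          # e-mail addresses
    text = re.sub(r"\b\d{2}[/-]\d{2}[/-]\d{2,4}\b", "[DATE]", text)     # dd/mm/yyyy-style dates
    text = re.sub(r"\b\d{6,}\b", "[ID]", text)                          # long numeric identifiers
    return text

# Example usage:
safe = anonymize("Contact a@b.com on 01/02/2020, record 1234567.")
```

Even with such masking, researchers should share only the minimum necessary text and verify the chatbot’s data-handling policy, since free-text clinical narratives can contain identifiers no pattern will catch.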
CONCLUSION
Human effort is still the best for scientific writing; however, AI can augment it. The integration of AI into academic writing presents both opportunities and challenges, marked by the dynamic interplay between AI-generated content, detection tools, and humanizing technologies. The escalating use of AI necessitates a clear understanding of its capabilities and limitations, alongside a commitment to transparency and academic integrity. By adhering to the established guidelines of the ICMJE and WAME, authors can embrace AI ethically and transparently. The academic community must focus on the judicious use of AI to augment productivity, keeping a vigilant eye on the scientific rigor of the scholarly work.
Ethical approval:
Institutional Review Board Approval is not required as it is a retrospective study.
Declaration of patient consent:
Patient’s consent is not required as there are no patients in this study.
Conflict of interest:
There are no conflicts of interest.
Use of artificial intelligence (AI)-assisted technology for manuscript preparation:
The authors confirm that they have used artificial intelligence (AI)-assisted technology for assisting in the writing or editing of the manuscript or image creation.
Financial support and sponsorship: Nil.
References
- Large Language Models and the Future of Academic Writing. J Postgrad Med. 2024;70:67-8.
- The Role of ChatGPT in Scientific Communication: Writing Better Scientific Review Articles. Am J Cancer Res. 2023;13:1148-54.
- Is ChatGPT a “Fire of Prometheus” for Non-Native English-Speaking Researchers in Academic Writing? Korean J Radiol. 2023;24:952-9.
- Performance of Artificial Intelligence Content Detectors Using Human and Artificial Intelligence-Generated Scientific Writing. Ann Surg Oncol. 2024;31:6387-93.
- Artificial Intelligence in Manuscript Preparation: Are We Becoming Dependent on Machines? Indian J Nucl Med. 2024;39:415-6.
- How Sensitive Are the Free AI-Detector Tools in Detecting AI-Generated Texts? A Comparison of Popular AI-Detector Tools. Indian J Psychol Med. 2025;47:275-8.
- ChatGPT in Academic Writing: Maximizing its Benefits and Minimizing the Risks. Indian J Ophthalmol. 2023;71:3600-6.
- Detecting Generative Artificial Intelligence in Scientific Articles: Evasion Techniques and Implications for Scientific Integrity. Orthopaedics Traumatol Surg Res. 2023;109:103706.
- Response Generated by Large Language Models Depends on the Structure of the Prompt. Indian J Radiol Imaging. 2024;34:574-5.
- AI in Research and Publication: Prioritizing Content Over Language. Eur Arch Otorhinolaryngol. 2025;282:1119-20.
- The Use of Artificial Intelligence to Improve the Scientific Writing of Non-Native English Speakers. Rev Assoc Med Bras (1992). 2023;69:e20230560.
- Use of Large Language Models as Artificial Intelligence Tools in Academic Research and Publishing among Global Clinical Researchers. Sci Rep. 2024;14:31672.
- Why Technical Solutions for Detecting AI-Generated Content in Research and Education are Insufficient. Patterns (N Y). 2023;4:100796.
- Do I Write Like Artificial Intelligence? Ann Surg Oncol. 2025;32:2423-4.
- Development of Medical Writing in India: Past, Present and Future. Perspect Clin Res. 2017;8:45-50.
- Using ChatGPT and other AI-Assisted Tools to Improve Manuscripts Readability and Language. Int J Imaging Syst Technol. 2023;33:773-5.
- The Paradigm Shift in Scientific Publications. Prev Med Res Rev. 2024;1:64.
- Is it OK for AI to Write Science Papers? Nature Survey Shows Researchers are Split. Available from: https://www.nature.com/articles/d41586-025-01463-8 [Last accessed on 2025 Jun 08]
- The Impact of Artificial Intelligence on Research Efficiency. Results Eng. 2025;26:104743.
- “Tortured Phrases” in Preprints. Curr Med Res Opin. 2023;39:785-7.
- The Use of “Tortured Phrases” in Science Communication. Indian J Med Ethics. 2025. Epub ahead of print.
- Attitudes and Perceptions of Medical Researchers Towards the Use of Artificial Intelligence Chatbots in the Scientific Process: An International Cross-Sectional Survey. Lancet Digit Health. 2025;7:e94-102.
- The Use of Artificial Intelligence in Writing Scientific Review Articles. Curr Osteoporos Rep. 2024;22:115-21.
- Use of Artificial Intelligence in Scientific Writing. Mymensingh Med J. 2025;34:592-7.
- Using Artificial Intelligence in Academic Writing and Research: An Essential Productivity Tool. Comput Methods Programs Biomed Update. 2024;5:100145.
- Artificial Intelligence Tools for Scientific Writing: The Good, The Bad and The Ugly. Top Ital Sci J. 2025;2.
- Chatbots in Education and Research: A Critical Examination of Ethical Implications and Solutions. Sustainability. 2023;15:5614.
- Artificial Intelligence-Generated Content Needs a Human Oversight. Indian J Dermatol. 2024;69:284.
- A Case of Artificial Intelligence Chatbot Hallucination. JAMA Otolaryngol Head Neck Surg. 2024;150:457-8.
- Accuracy of Chatbots in Citing Journal Articles. JAMA Netw Open. 2023;6:e2327647.
- Academic Publisher Guidelines on AI Usage: A ChatGPT Supported Thematic Analysis. F1000Res. 2024;12:1398.
- Responsible Use of Generative Artificial Intelligence for Research and Writing: Summarizing ICMJE Guideline. Indian J Orthop. 2024;58:1504-5.
- Generative AI, and Scholarly Manuscripts: WAME Recommendations on Chatbots and Generative Artificial Intelligence in Relation to Scholarly Publications. Open Access Maced J Med Sci. 2023;11:263-5.
- The Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review on Large Language Models (LLMs). NPJ Digit Med. 2024;7:183.
- Challenges and Recommendations for Enhancing Digital Data Protection in Indian Medical Research and Healthcare Sector. NPJ Digit Med. 2025;8:48.
- Can an Artificial Intelligence Chatbot be the Author of a Scholarly Article? J Educ Eval Health Prof. 2023;20:6.
- Artificial Intelligence in Academic Writing: Insights from Journal Publishers' Guidelines. Perspect Clin Res. 2025;16:56-7.
- Ethical Use of Artificial Intelligence for Scientific Writing: Current Trends. J Hum Lact. 2024;40:211-5.
- Privacy, Data Sharing, and Data Security Policies of Women's mHealth Apps: Scoping Review and Content Analysis. JMIR Mhealth Uhealth. 2022;10:e33735.
- Health Care Privacy Risks of AI Chatbots. JAMA. 2023;330:311-2.

