Abstract
Objective: To establish formal guidelines for the responsible use of generative artificial intelligence (AI) tools in authorship, peer review, and editorial decision-making in academic publishing. The recommendations aim to ensure scientific integrity, editorial reliability, and the protection of confidentiality, grounded in internationally recognized principles endorsed by organizations such as COPE and WAME.
Method: The guidelines were developed based on an analysis of normative documents published in scholarly journal editorials, editorial policies of major international publishers (Springer, Wiley, Elsevier, and MDPI), and specialized scientific literature. The content incorporates evidence regarding risks associated with the use of Large Language Models, including factual inaccuracies, fabricated references (hallucinations), reproduction of biases embedded in training data, and potential breaches of confidentiality. The framework is structured around three guiding principles: Responsibility, Transparency, and Confidentiality (RTC), applicable to authors, reviewers, and editors.
Results: The guidelines establish that AI systems cannot be recognized as authors and that any use of such tools must be explicitly disclosed. For authors, responsibilities include rigorous fact-checking, verification of references, prevention of plagiarism, and a detailed description of how AI was used. For reviewers, the recommendations emphasize preserving manuscript confidentiality, prohibiting the entry of protected content into tools whose terms of use permit data reuse, and the obligation to inform editors and authors when AI assistance is employed. For editors, the guidelines highlight accountability for editorial decisions, critical evaluation of peer-review reports generated with AI support, and caution when using AI-detection tools, given the risks of information leakage and false positives.
Conclusion: The ethical use of generative AI requires full human oversight, transparent declarations, and strict protection of editorial confidentiality. Continuous updates to these guidelines are essential to keep pace with rapid technological and regulatory developments, ensuring ongoing scientific integrity and alignment with international best practices in scholarly communication.
References
Committee on Publication Ethics (COPE). (2023, February 13). Authorship and AI tools. https://publicationethics.org/cope-position-statements/ai-author
COPE Council. (2006). COPE flowcharts and infographics: Plagiarism in a published article (English). Committee on Publication Ethics. https://doi.org/10.24318/cope.2019.2.2
Elsevier. (2025, November 4). Elsevier’s global survey of 3,000 researchers reveals less than half have time to do research but see AI as transformative if given the right tools. Elsevier. https://www.elsevier.com/insights/confidence-in-research/researcher-of-the-future
Galindo-Cuesta, J. A. (2025). Glossary of generative artificial intelligence for education: A conceptual and pedagogical framework. Review of Artificial Intelligence in Education, 6(i), e047. https://doi.org/10.37497/rev.artif.intell.educ.v6ii.47
Gerlich, M. (2025). Correction: Gerlich, M. AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies 2025, 15, 6. Societies, 15(9), 252. https://doi.org/10.3390/soc15090252
Jackson, J., Landis, G., Baskin, P. K., Hadsell, K. A., English, M., & CSE Editorial Policy Committee. (2023, May 1). CSE guidance on machine learning and artificial intelligence tools. Science Editor. https://www.csescienceeditor.org/article/cse-guidance-on-machine-learning-and-artificial-intelligence-tools/
Leung, T. I., De Azevedo Cardoso, T., Mavragani, A., & Eysenbach, G. (2023). Best practices for using AI tools as an author, peer reviewer, or editor. Journal of Medical Internet Research, 25, e51584. https://doi.org/10.2196/51584
Limongi, R. (2024). The use of artificial intelligence in scientific research with integrity and ethics. Review of Artificial Intelligence in Education, 5(00), e22. https://doi.org/10.37497/rev.artif.intell.educ.v5i00.22
Rodrigues, L. C. (2025). Inovação tecnológica no sistema jurídico brasileiro [Technological innovation in the Brazilian legal system]. Revista do CEJUR/TJSC: Prestação Jurisdicional, 13, e0468. https://doi.org/10.37497/revistacejur.v13i-TJSC-.468
Silva, A. de O., Janes, D. dos S., & Santos, R. (2024). GPT Alumni AI Pesquisa: A practical tutorial for the adoption and ethical use of AI in scientific research. Review of Artificial Intelligence in Education, 5(00), e033. https://doi.org/10.37497/rev.artif.intell.educ.v5i00.33
Silva, A. de O., & Janes, D. dos S. (2023). Challenges and opportunities of artificial intelligence in education in a global context. Review of Artificial Intelligence in Education, 4(00), e1. https://doi.org/10.37497/rev.artif.intell.education.v4i00.1
Silva, A. de O., Rodrigues, L. C., Martins, C. B., & Sellos-Knoerr, V. C. de. (2022). Publicações técnicas para a área do Direito [Technical publications for the field of law]. Revista Opinião Jurídica (Fortaleza), 20(34), 228–246. https://doi.org/10.12662/2447-6641oj.v20i34.p228-246.2022
World Association of Medical Editors (WAME). (2023, May 31). Chatbots, generative AI, and scholarly manuscripts. https://wame.org/page3.php?id=106

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
