Guidelines on the use of Artificial Intelligence in Wimb lu

General considerations on the use of Artificial Intelligence (AI) resources in the Wimb lu journal

Generative artificial intelligence (AI) is developing rapidly and represents both an opportunity and a significant challenge for scientific research and dissemination. It is therefore essential to establish guidelines that allow us to harness its potential benefits and mitigate its potential harms, and to update them continuously to ensure the efficient and ethical use of the resources it provides. Transparency about its use is also crucial, whether AI assists with or generates content. In the latter case, human oversight will always be required, and AI should never be held legally or morally responsible for the generated content (COPE 2026).

I. Considerations Regarding the Use of AI in Academic Writing

a. The opacity of AI algorithms, the lack of ethical guidelines and human rights principles in AI systems, and the difficulty in determining responsibility for decisions made by AI systems can lead to human rights violations, without it being clear in all cases who should assume responsibility (UNESCO 2023, COPE 2026).

b. AIs are built on artificial neural networks (ANNs) whose internal workings are not open to inspection. It is therefore not possible to fully understand or explain how they obtain their results (UNESCO 2024).

c. AIs inherit and disseminate biases present in the information used for their training, which can be difficult to detect and correct (UNESCO 2024).

d. AI can violate data privacy and intellectual property rights because it is built from large amounts of data, often obtained from the internet without the necessary international permissions (UNESCO 2024, 2025, COPE 2026).

e. AI can generate inaccurate or incorrect information and present it convincingly, so that it seems realistic and relevant. Therefore, without mediation from a specialized and ethical perspective by those who base their arguments on AI, academic dissemination can reinforce inaccurate or incorrect information and reproduce biases and stereotypes that harm not only the advancement of knowledge but also specific populations (UNESCO 2023, 2024).

f. AI, especially with increased use, can threaten the ability to write and perform other creative activities, progressively generating a dependence on its use that limits the development of intellectual skills and human action in general. Writing is associated with the structuring and development of thought. Therefore, writing with AI support carries the risk of “writing without thinking” (UNESCO 2024, 2025). The risk is that the voice of authorship and creativity in writing will disappear (COPE 2026). 

g. AI can deepen exclusion and structural inequalities, as well as generate new forms of discrimination, especially due to developer bias. For example, AI can contribute to disseminating and reinforcing stereotypes against certain population groups based on their sex, gender, religion, race, and other factors. Furthermore, the obsolescence or irrelevance of the information used by AI could lead to inadequate, incorrect, or biased results in certain geographical, cultural, and historical contexts (Penabad-Camacho et al. 2024; UNESCO 2025). 

h. AI is not informed by real-world observations or other key aspects of the scientific method. Instead, AIs tend to produce responses based only on the values of the information used for their training, which often represents dominant viewpoints. The risk is therefore a narrowing of the pluralism of ideas (UNESCO 2024).

i. AI can intensify climate change (UNESCO 2025).

II. Some guiding principles

1. AI is an expanding tool that can support and strengthen scientific and academic work. Therefore, instead of blocking or prohibiting its use, the approach is to make its use transparent and regulate it (Penabad-Camacho et al. 2024, COPE 2026).

2. The use of AI should be guided and regulated by academic institutions in order to teach and promote best practices, including respect for human rights and professional ethics (Penabad-Camacho et al. 2024; UNESCO 2023, COPE 2026). 

III. Guidelines for the ethical and responsible use of AI

Editorial Team  

The editorial team will notify authors of the use of AI in the evaluation of their articles and during the editorial process (Penabad-Camacho et al. 2024, COPE 2026).

The editorial team will inform readers of the use of AI in published articles (Penabad-Camacho et al. 2024, COPE 2026).

Reviewers

Reviewers must declare the use of AI for article evaluation (Penabad-Camacho et al. 2024, COPE 2026).

AI does not replace the responsibility for reviewing texts or for accountability regarding the criteria and recommendations issued (Penabad-Camacho et al. 2024, COPE 2026).

Authors 

Authors must make a substantial contribution to the published content; bear moral and legal responsibility, and accountability, for the accuracy and integrity of their writing; be legally and ethically accountable for the published content; approve the final versions of the content to be published; and declare any conflicts of interest. None of these responsibilities can be delegated to AI (Penabad-Camacho et al. 2024).

Authors must have sufficient knowledge of the subject matter they are writing about when using AI support. This knowledge must be sufficient to allow them to identify biases, inaccurate information, missing citations, and other issues (UNESCO 2024).

Authors must explain, in a reasoned and transparent manner, how they have used AI and how, during its use, they have avoided creating or propagating biases in the information (Penabad-Camacho et al. 2024).

References

Committee on Publication Ethics (COPE). (2026). Should there be limits to the editorial use of assistive AI tools? Accessed February 18, 2026. https://publicationethics.org/guidance/case/should-there-be-limits-editorial-use-assistive-ai-tools

Penabad-Camacho, L., Morera-Castro, M., & Penabad-Camacho, M. A. (2024). Guía para uso y reporte de inteligencia artificial en revistas científico-académicas. Revista Electrónica Educare, 28(S), 1–41. https://doi.org/10.15359/ree.28-S.19830

UNESCO. (2023). Kit de herramientas global sobre IA y el estado de derecho para el poder judicial. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000387331_spa

UNESCO. (2024). Guía para el uso de IA generativa en educación e investigación. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000389227

UNESCO. (2025). Marco de competencias para docentes en materia de IA. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000393813