Artificial Intelligence (AI) and Generative AI Policy
TARBIYA: Journal of Education in Muslim Society
Adopted: 2026 | Reviewed annually
Quick Guide
- AI may assist, but it must not replace human scholarship, judgment, or accountability.
- Authors must disclose meaningful AI use in a separate AI Declaration statement.
- AI tools must not be listed as authors or co-authors.
- Reviewers and editors must not upload confidential journal materials to generative AI tools.
- AI must not be used by reviewers or editors to evaluate manuscripts or make decisions.
- Generative AI may not be used to create or alter figures, images, or artwork in submitted manuscripts, except where such use is part of the research method and is fully described in the Methods section.
1. Introduction
These policies have been developed in response to the increasing use of generative artificial intelligence (AI) and AI-assisted technologies in scholarly work and have been refined to reflect evolving good practice. They are intended to provide greater transparency and guidance to authors, reviewers, editors, readers, and contributors.
TARBIYA: Journal of Education in Muslim Society will continue to monitor developments in this area and may revise this policy as necessary. This policy follows, and substantively draws on, the framework set out in Elsevier's generative AI policies for journals.
3. For Reviewers
3.1 The Use of Generative AI and AI-Assisted Technologies in the Peer Review Process
When a reviewer is invited to assess a manuscript, the manuscript must be treated as a confidential document. Reviewers must not upload a submitted manuscript, any part of it, or the peer review report into a generative AI tool, even if the purpose is only to improve language or readability. Such use may violate author confidentiality, proprietary rights, and, where applicable, data privacy rights.
Peer review is a human intellectual responsibility at the core of the scientific ecosystem. Generative AI or AI-assisted technologies must not be used by reviewers to assist in the scientific review of a manuscript, because the critical thinking and original assessment required for peer review fall outside the proper scope of these tools, and because they may generate incorrect, incomplete, or biased conclusions.
The reviewer remains fully responsible and accountable for the content of the review report. Reviewers may, however, consult the AI disclosure statement provided by authors in the manuscript.
4. For Editors
4.1 The Use of Generative AI and AI-Assisted Technologies in the Editorial Process
A submitted manuscript must be treated as a confidential document. Editors must not upload a submitted manuscript or any part of it into a generative AI tool, as such action may violate confidentiality, proprietary rights, and data privacy obligations.
This confidentiality requirement also extends to editorial communication about the manuscript, including notification letters and decision letters. Editors therefore must not upload such correspondence into an AI tool, even for the sole purpose of improving language or readability.
Managing the editorial evaluation of a scientific manuscript requires human judgment and responsibility. Generative AI or AI-assisted technologies must not be used by editors to evaluate a manuscript or to support editorial decision-making, because the critical thinking and original assessment required for this role cannot be delegated to AI.
Editors remain fully responsible and accountable for the editorial process, the final decision, and communication with authors. Editors may refer to the AI disclosure statement provided by authors. If an editor suspects that an author or reviewer has violated this policy, the matter should be escalated through the journal's publication ethics procedures or referred to the publisher.
5. Publication Process
5.1 The Use of AI and AI-Assisted Technologies in the Publication Process
As part of its commitment to improving the publishing experience for authors, reviewers, and editors, TARBIYA may explore carefully controlled uses of AI and AI-assisted technologies to support publication workflows. The goal is to use technology in ways that help maintain publication quality and preserve trust in published content, while ensuring that human oversight remains central to all decision-making.
AI-supported tools may be used, under appropriate safeguards, for purposes such as:
- identifying relevant reviewers;
- matching submissions with journal scope;
- detecting duplicate submissions;
- conducting technical checks, including adherence to submission requirements and completeness;
- performing research integrity checks; and
- supporting post-acceptance processes such as proof preparation, copyediting, and the identification of possible inconsistencies or inaccuracies in the final paper.
Human oversight remains at the core of decision-making throughout the publication process.
6. Ethical Alignment
This policy is consistent with broader principles of publication ethics and responsible AI use, including transparency, accountability, confidentiality, human oversight, fairness, and the protection of rights and scholarly integrity. In this regard, TARBIYA acknowledges the relevance of guidance issued by COPE, Nature Editorial, UNESCO, Elsevier, and STM Association.
7. Violations and Consequences
Any misuse of AI or violation of this policy may be treated as unethical conduct and handled under the journal’s publication ethics procedures. Depending on the seriousness of the case, actions may include manuscript rejection, post-publication correction or retraction, notification of relevant institutions or funders, and temporary or permanent restrictions on future submissions.
8. Policy Review and Updates
This policy will be reviewed periodically and revised when necessary to reflect developments in AI technologies, evolving international standards, and emerging best practices in scholarly publishing. Updated versions will be published on the official TARBIYA website.
References
- COPE (Committee on Publication Ethics). 2023. Guidance on AI and Ethics in Publishing. https://publicationethics.org
- Nature Editorial. 2023. Tools such as ChatGPT Threaten Transparent Science; Here Are Our Ground Rules for Their Use. https://www.nature.com/articles/d41586-023-00191-1
- UNESCO. 2021. Recommendation on the Ethics of Artificial Intelligence. https://unesdoc.unesco.org/ark:/48223/pf0000381137
- Elsevier. 2023. AI Policy for Authors and Reviewers. https://www.elsevier.com/about/policies/artificial-intelligence
- STM Association. 2023. Recommendations for a Classification of AI Use in Academic Manuscript Preparation. https://www.stm-assoc.org
