AI Policy

Artificial Intelligence (AI) and Generative AI Policy

TARBIYA: Journal of Education in Muslim Society

Adopted: 2026 | Reviewed annually

Quick Guide

  1. AI may assist, but it must not replace human scholarship, judgment, or accountability.
  2. Authors must disclose meaningful AI use in a separate AI Declaration statement.
  3. AI tools must not be listed as authors or co-authors.
  4. Reviewers and editors must not upload confidential journal materials to generative AI tools.
  5. AI must not be used by reviewers or editors to evaluate manuscripts or make decisions.
  6. Generative AI may not be used to create or alter figures, images, or artwork in submitted manuscripts, except where such use is part of the research method and is fully described in the Methods section.

1. Introduction

These policies have been developed in response to the increasing use of generative artificial intelligence (AI) and AI-assisted technologies in scholarly work and have been refined to reflect evolving good practice. They are intended to provide greater transparency and guidance to authors, reviewers, editors, readers, and contributors.

TARBIYA: Journal of Education in Muslim Society will continue to monitor developments in this area and may revise this policy as necessary. This policy follows, and substantively draws on, the policy framework used in Elsevier’s generative AI policies for journals.

2. For Authors

2.1 The Use of Generative AI and AI-Assisted Technologies in Manuscript Preparation

TARBIYA recognizes the potential of generative AI and AI-assisted technologies (“AI Tools”), when used responsibly, to help researchers work more efficiently, gain insights more quickly, and improve aspects of manuscript preparation. These tools may support activities such as synthesizing literature, identifying research gaps, generating ideas, organizing content, and improving language and readability.

Authors preparing a manuscript for TARBIYA may use AI Tools for such support. However, these tools must never be used as a substitute for human critical thinking, scholarly expertise, and evaluative judgment. AI Tools must always be used with human oversight and control. Authors remain fully responsible and accountable for the content of their work.

2.2 Author Responsibilities

Authors are responsible for:

  • carefully reviewing and verifying the accuracy, comprehensiveness, and impartiality of all AI-generated output, including verifying all cited sources, since AI-generated references may be incorrect or fabricated;
  • editing and adapting all AI-assisted material thoroughly so that the manuscript reflects the authors’ authentic and original scholarly contribution, including their own analysis, interpretation, insights, and ideas;
  • ensuring that the use of AI Tools is made clear and transparent to readers through an appropriate disclosure statement at submission; and
  • ensuring that the manuscript is prepared in a way that protects data privacy, intellectual property, confidentiality, and other rights by reviewing the terms and conditions of any AI Tool used.

2.3 Responsible Use of AI Tools

Authors must review the terms and conditions of any AI Tool they use to ensure that the privacy and confidentiality of their data and inputs, including unpublished manuscripts, are protected. Particular care must be taken when personally identifiable data are involved. Authors must also check for factual errors, hallucinated references, and possible bias.

Authors must not generate images that duplicate or reference copyrighted images, real people, identifiable products or brands, or an individual’s voice. Authors must also ensure that the AI Tool does not obtain rights over input materials beyond what is strictly necessary to provide the service, including any right to use the submitted material for training, and that the AI Tool does not impose restrictions that would prevent subsequent journal publication.

2.4 Disclosure

Authors must disclose the use of AI Tools for manuscript preparation in a separate AI Declaration statement in the manuscript at the time of submission, and that statement may appear in the published article. The declaration should identify the name of the AI Tool used, the purpose of the use, and the extent of the authors’ oversight.

Disclosure supports transparency and trust among authors, readers, reviewers, editors, and contributors and helps ensure compliance with the terms of use of the relevant AI Tool. Basic checks of grammar, spelling, and punctuation do not require declaration. If AI is used in the research process itself, that use must be declared and described in appropriate detail in the Methods section.

Suggested AI Declaration

The authors used [tool/model name] for [specific purpose]. All outputs were critically reviewed, verified, and substantially revised by the authors. The authors take full responsibility for the accuracy, integrity, and originality of the manuscript.

2.5 Authorship

AI Tools must not be listed as authors or co-authors, and they must not be cited as authors. Authorship entails responsibilities and tasks that can only be attributed to human beings, including responsibility for the accuracy and integrity of the work, approval of the final version, agreement to submission, and the capacity to respond to questions concerning the manuscript.

Authors are also responsible for ensuring that the work is original and not previously published, that all listed contributors qualify for authorship, and that the manuscript does not infringe third-party rights.

2.6 The Use of Generative AI and AI-Assisted Tools in Figures, Images, and Artwork

TARBIYA does not permit the use of generative AI or AI-assisted tools to create or alter images in submitted manuscripts. This includes enhancing, obscuring, moving, removing, or introducing a specific feature within an image or figure. Adjustments to brightness, contrast, or color balance may be acceptable only insofar as they do not obscure or eliminate information present in the original image.

The journal may apply image forensics tools or specialized software to identify suspected image irregularities.

The only exception is when the use of AI or AI-assisted imaging tools forms part of the research design or research methods themselves, such as AI-assisted imaging approaches used to generate or interpret underlying research data. In such cases, the use must be described in a reproducible manner in the Methods section, including the name of the model or tool, its version or extension number, and its manufacturer, together with an explanation of how it was used.

Authors may also be asked to provide pre-AI-adjusted images or composite raw images for editorial assessment. The use of generative AI or AI-assisted tools for artwork such as graphical abstracts is not permitted. Generative AI for cover art may be allowed only with prior permission from the editor and publisher, where applicable, with all necessary rights cleared and correct attribution ensured.

3. For Reviewers

3.1 The Use of Generative AI and AI-Assisted Technologies in the Peer Review Process

When a reviewer is invited to assess a manuscript, the manuscript must be treated as a confidential document. Reviewers must not upload a submitted manuscript, any part of it, or the peer review report into a generative AI tool, even if the purpose is only to improve language or readability. Such use may violate author confidentiality, proprietary rights, and, where applicable, data privacy rights.

Peer review is a human intellectual responsibility at the core of the scientific ecosystem. Generative AI or AI-assisted technologies must not be used by reviewers to assist in the scientific review of a manuscript, because the critical thinking and original assessment required for peer review fall outside the proper scope of these tools, and because they may generate incorrect, incomplete, or biased conclusions.

The reviewer remains fully responsible and accountable for the content of the review report. Reviewers may, however, consult the AI disclosure statement provided by authors in the manuscript.

4. For Editors

4.1 The Use of Generative AI and AI-Assisted Technologies in the Editorial Process

A submitted manuscript must be treated as a confidential document. Editors must not upload a submitted manuscript or any part of it into a generative AI tool, as such action may violate confidentiality, proprietary rights, and data privacy obligations.

This confidentiality requirement also extends to editorial communication about the manuscript, including notification letters and decision letters. Editors therefore must not upload such correspondence into an AI tool, even for the sole purpose of improving language or readability.

Managing the editorial evaluation of a scientific manuscript requires human judgment and responsibility. Generative AI or AI-assisted technologies must not be used by editors to evaluate a manuscript or to support editorial decision-making, because the critical thinking and original assessment required for this role cannot be delegated to AI.

Editors remain fully responsible and accountable for the editorial process, the final decision, and communication with authors. Editors may refer to the AI disclosure statement provided by authors. If an editor suspects that an author or reviewer has violated this policy, the matter should be escalated through the journal’s ethics or publisher procedures.

5. Publication Process

5.1 The Use of AI and AI-Assisted Technologies in the Publication Process

As part of its commitment to improving the publishing experience for authors, reviewers, and editors, TARBIYA may explore carefully controlled uses of AI and AI-assisted technologies to support publication workflows. The goal is to use technology in ways that help maintain publication quality and preserve trust in published content, while ensuring that human oversight remains central to all decision-making.

AI-supported tools may be used, under appropriate safeguards, for purposes such as:

  • identifying relevant reviewers;
  • matching submissions with journal scope;
  • detecting duplicate submissions;
  • conducting technical checks, including adherence to submission requirements and completeness;
  • performing research integrity checks; and
  • supporting post-acceptance processes such as proof preparation, copyediting, and the identification of possible inconsistencies or inaccuracies in the final paper.

Human oversight remains at the core of decision-making throughout the publication process.

6. Ethical Alignment

This policy is consistent with broader principles of publication ethics and responsible AI use, including transparency, accountability, confidentiality, human oversight, fairness, and the protection of rights and scholarly integrity. In this regard, TARBIYA acknowledges the relevance of guidance issued by COPE, Nature Editorial, UNESCO, Elsevier, and STM Association.

7. Violations and Consequences

Any misuse of AI or violation of this policy may be treated as unethical conduct and handled under the journal’s publication ethics procedures. Depending on the seriousness of the case, actions may include manuscript rejection, post-publication correction or retraction, notification of relevant institutions or funders, and temporary or permanent restrictions on future submissions.

8. Policy Review and Updates

This policy will be reviewed periodically and revised when necessary to reflect developments in AI technologies, evolving international standards, and emerging best practices in scholarly publishing. Updated versions will be published on the official TARBIYA website.

References

  1. COPE (Committee on Publication Ethics). 2023. Guidance on AI and Ethics in Publishing. https://publicationethics.org
  2. Nature Editorial. 2023. Tools such as ChatGPT Threaten Transparent Science; Here Are Our Ground Rules for Their Use. https://www.nature.com/articles/d41586-023-00191-1
  3. UNESCO. 2021. Recommendation on the Ethics of Artificial Intelligence. https://unesdoc.unesco.org/ark:/48223/pf0000381137
  4. Elsevier. 2023. AI Policy for Authors and Reviewers. https://www.elsevier.com/about/policies/artificial-intelligence
  5. STM Association. 2023. Recommendations for a Classification of AI Use in Academic Manuscript Preparation. https://www.stm-assoc.org