Al-Ihkam has developed its policy on the use of generative artificial intelligence in scholarly publishing by referencing and adapting Elsevier’s official guidelines, as outlined in
Elsevier’s Generative AI Policies for Journals. This policy aims to promote the ethical, transparent, and responsible use of generative AI tools and AI-assisted technologies throughout Al-Ihkam’s editorial and publication processes. It applies to authors, reviewers, and editors.
For Authors
Authors must disclose any use of generative AI tools during the manuscript preparation process when submitting their papers. Al-Ihkam permits the responsible use of generative artificial intelligence (AI) tools and AI-assisted technologies, provided their application remains under human supervision. Authors are expected to ensure the accuracy, completeness, and impartiality of AI-generated content, including automatically generated references. All use of such tools must be transparently reported in a Declaration of Generative AI and AI-assisted Technologies, which should specify the name of the tool, its purpose, and the degree of human editing or oversight. This declaration must appear in the manuscript before the reference list.
Please note that generative AI tools cannot substitute for authors’ critical thinking, analysis, interpretation, or original intellectual contributions. Responsibility for the manuscript’s content lies entirely with the human author(s), and AI tools must not be credited as authors or co-authors. The use of AI to create or modify images or illustrations is permitted only if it does not alter the underlying scientific information. If AI technologies are used as part of the research methodology, their application must be clearly described in the Methods section.
The format of the Declaration of Generative AI and AI-assisted Technologies statement is as follows:
During the preparation of this work, the author(s) used [NAME OF TOOL/SERVICE] in order to [SPECIFY PURPOSE, e.g., check grammar, improve readability, assist in translation, or structure ideas]. After using this tool/service, the author(s) reviewed, verified, and edited the content as necessary and take(s) full responsibility for the content of the published article.
Example:
During the preparation of this work, the author(s) used Grammarly in order to check grammar and enhance language clarity. After using this tool/service, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the publication.
Ultimately, authors are responsible and accountable for the contents of their work. This includes accountability for:
- Carefully reviewing and verifying the accuracy, comprehensiveness, and impartiality of all AI-generated output (including checking the sources, as AI-generated references can be incorrect or fabricated).
- Editing and adapting all material thoroughly to ensure the manuscript represents the author’s authentic and original contribution and reflects their own analysis, interpretation, insights, and ideas.
- Ensuring the use of any tools or sources, AI-based or otherwise, is made clear and transparent to readers. If AI tools have been used, we require a disclosure statement upon submission.
- Ensuring the manuscript is developed in a way that safeguards data privacy, intellectual property, and other rights by checking the terms and conditions of any AI tool that is used.
The declaration does not apply to the use of basic tools, such as those used to check grammar, spelling, and references. If you have nothing to disclose, you do not need to add a statement. Please read Elsevier’s author policy on the use of generative AI and AI-assisted technologies, as outlined in Elsevier’s Generative AI Policies for Journals.
For Reviewers
Reviewers are required to uphold strict confidentiality for all manuscripts assigned to them for peer review and must not upload any portion of the manuscript to external generative AI tools or third-party AI systems. Doing so may breach confidentiality and infringe upon authors’ copyrights. Reviewers are also prohibited from using AI tools to draft or edit their review reports if this necessitates uploading confidential manuscript content. The use of AI for assessing scientific merit, methodology, or the overall contribution of the work is strictly forbidden. Reviewers retain full responsibility for the content, integrity, and quality of their evaluations.
For Editors
Editors are responsible for safeguarding the confidentiality of all submitted manuscripts and must not upload manuscripts, decision letters, or any sensitive content to external generative AI platforms. All editorial decisions and scientific evaluations must be conducted exclusively by human editors and editorial board members, without reliance on AI systems. AI may be employed as a supplementary tool for technical or administrative tasks—such as format checking, plagiarism detection, data integrity screening, or reviewer matching—provided these activities remain under strict human supervision. The responsibility for final editorial decisions and publication rests entirely with the human editors.