Generative AI Policies

Declaration of Generative AI and AI-assisted Technologies

This policy is adopted and adapted from Elsevier’s official guidance on the use of generative artificial intelligence in scholarly publishing, available in Elsevier’s Generative AI Policies for Journals. Its purpose is to ensure the ethical, transparent, and responsible use of generative AI tools and AI-assisted technologies in the editorial and publication processes of KARSA. It applies to all parties involved in the publication process: authors, reviewers, and editors.

For Authors

KARSA permits the use of generative artificial intelligence (AI) tools and AI-assisted technologies in manuscript preparation, provided they are used responsibly and under human supervision. Authors must declare any such use upon submission of the paper and are required to verify the accuracy, completeness, and potential bias of AI-generated output, including automatically generated references. The use of such tools must be disclosed transparently in a Declaration of Generative AI and AI-assisted Technologies that specifies the tool’s name, the purpose for which it was used, and the extent of human editing and oversight; this declaration should appear in the manuscript immediately before the reference list. Generative AI tools must not replace the authors’ critical thinking, analysis, interpretation, or original intellectual contribution. Full responsibility for the content remains with the human author(s), and AI tools must not be listed as authors or co-authors. The use of AI to create or modify images or illustrations is permitted only when it does not alter the underlying scientific information. If AI use is part of the research methodology, it must be clearly described in the Methods section.

The declaration should follow this template:

During the preparation of this work, the author(s) used [NAME OF TOOL/SERVICE] in order to [SPECIFY PURPOSE, e.g., check grammar, improve readability, assist in translation, or structure ideas]. After using this tool/service, the author(s) reviewed, verified, and edited the content as necessary and take(s) full responsibility for the content of the published article.

Example:
During the preparation of this work, the author(s) used ChatGPT (OpenAI) in order to check grammar and enhance language clarity. After using this tool/service, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the publication.

In line with Elsevier’s guidance, KARSA recognizes the potential of generative AI and AI-assisted technologies (“AI Tools”), when used responsibly, to help researchers work efficiently, gain critical insights quickly, and achieve better outcomes. Increasingly, these tools, including AI agents and deep-research tools, help researchers synthesize complex literature, provide an overview of a field or research question, identify research gaps, generate ideas, and offer tailored support for tasks such as organizing content and improving language and readability. Authors preparing a manuscript for KARSA may use AI Tools to support their work. However, these tools must never be used as a substitute for human critical thinking, expertise, and evaluation; AI technology should always be applied with human oversight and control.

Ultimately, authors are responsible and accountable for the contents of their work. This includes accountability for:

  • Carefully reviewing and verifying the accuracy, comprehensiveness, and impartiality of all AI-generated output (including checking the sources, as AI-generated references can be incorrect or fabricated).
  • Editing and adapting all material thoroughly to ensure the manuscript represents the author’s authentic and original contribution and reflects their own analysis, interpretation, insights and ideas.
  • Ensuring the use of any tools or sources, AI-based or otherwise, is made clear and transparent to readers. If AI Tools have been used, we require a disclosure statement upon submission; please see example below.
  • Ensuring the manuscript is developed in a way that safeguards data privacy, intellectual property and other rights, by checking the terms and conditions of any AI tool that is used.

Finally, authors must not list or cite AI Tools as an author or co-author on the manuscript, since authorship implies responsibilities and tasks that can only be attributed to, and performed by, humans. The use of AI Tools in the manuscript preparation process must be declared by adding a statement at the end of the manuscript when the paper is first submitted. The statement will appear in the published work and should be placed in a new section before the reference list.

An example:

  • Title of new section: Declaration of generative AI and AI-assisted technologies in the manuscript preparation process.
  • Statement: During the preparation of this work the author(s) used [NAME OF TOOL / SERVICE] in order to [REASON]. After using this tool/service, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the published article.

The declaration does not apply to basic, non-generative tools, such as conventional grammar and spelling checkers or reference managers. If you have nothing to disclose, you do not need to add a statement. For further details, please read Elsevier’s author policy on the use of generative AI and AI-assisted technologies in its Generative AI Policies for Journals.

Please note: to protect authors’ rights and the confidentiality of their research, this journal does not currently allow reviewers or editors to use generative AI or AI-assisted technologies, such as ChatGPT or similar services, in the peer review and manuscript evaluation process. We are actively evaluating compliant AI Tools and may revise this policy in the future.

For Reviewers

Reviewers must maintain strict confidentiality regarding manuscripts assigned for peer review and are prohibited from uploading any part of the manuscript to external generative AI tools or third-party AI systems. Such actions may compromise confidentiality and violate authors’ copyrights. Reviewers are also not permitted to use AI tools to draft or edit their review reports if doing so requires uploading confidential manuscript content. The use of AI to evaluate scientific merit, methodology, or the contribution of the work is not allowed. Reviewers remain fully responsible for the content, integrity, and quality of their evaluations.

For Editors

Editors are responsible for maintaining the confidentiality of all submitted manuscripts and must not upload manuscripts, decision letters, or any confidential content to external generative AI systems. Editorial decisions and scientific evaluations must be made entirely by human editors and editorial board members, not by AI systems. However, AI may be used as a supporting tool for technical or administrative purposes—such as format checking, plagiarism detection, data integrity screening, or reviewer matching—provided that these activities remain under human oversight. The final editorial decision and publication responsibility rest solely with the human editors.