AI Policy
Policy on the Use of Artificial Intelligence and AI-Supported Technologies
The policy of the proceedings “Design, Production and Operation of Agricultural Machinery” on the use of artificial intelligence (AI) tools is based on the statements of COPE, WAME, and the JAMA Network; the recommendations of the ICMJE; the requirements of the European Artificial Intelligence Act; the Concept for the Development of Artificial Intelligence in Ukraine; and the Law of Ukraine “On Academic Integrity.”
Given the rapid growth in the popularity of generative artificial intelligence and related technologies, which are increasingly used by authors of scientific publications, the Editorial Board has developed this policy governing their use. Its purpose is to enhance transparency in the preparation of materials and to ensure an appropriate level of publication quality for authors, reviewers, editors, and readers.
AI Use Policy for Authors
If generative artificial intelligence and related technologies are used in the preparation of an article, their use should be limited to improving readability and correcting grammatical errors. AI use must occur under proper human supervision and control, and authors are required to carefully review and edit the results obtained, as such technologies may generate plausible but inaccurate, incomplete, or biased statements. Authors bear full responsibility for the content of the publication.
Authors must disclose the use of artificial intelligence and AI-supported technologies in their manuscripts. Such use must be indicated in the published work. Disclosure supports transparency and trust among authors, readers, reviewers, and editors, and ensures compliance with the terms of use of the relevant tool or technology.
Authors must not list generative artificial intelligence or related technologies as an author or co-author, nor cite them as authors. Authorship entails responsibilities and accountability that can only be assigned to humans. Each (co-)author is responsible for properly verifying and resolving issues related to the accuracy and integrity of any part of the work and must be able to approve the final version of the manuscript and agree to its submission. In addition, authors are responsible for the originality of the work and for respecting third-party rights. All authors must familiarize themselves with the publication ethics policy before submission.
The Editorial Board prohibits the use of generative artificial intelligence or other AI tools to create or modify figures, images, or illustrations in submitted manuscripts. This includes enhancing, obscuring, moving, removing, or adding specific elements within an image. Only adjustments to parameters such as brightness, contrast, or color balance are permitted, provided that such changes do not result in the loss, distortion, or concealment of information present in the original image.
The only exception applies when the use of artificial intelligence or related tools is an integral part of the research design or methodology (for example, AI-based visualization approaches for creating or interpreting primary research data, particularly in biomedical imaging). In such cases, the use must be properly disclosed and described in the manuscript. The description should explain how AI or AI-assisted tools were used to create or modify the image and should state the name of the model or tool, its version and extension numbers, and the manufacturer.
Authors must comply with the usage rules of the specific AI-based software and ensure proper attribution of content. Where applicable, the Editorial Board may require authors to submit, for editorial evaluation, the AI-processed preliminary versions of images and/or the compiled raw images used to create the final submitted versions.
The use of generative artificial intelligence or AI-supported tools in the creation of artistic works, such as graphical abstracts, is prohibited. The use of generative AI in the creation of cover images may be permitted in certain cases, provided that the author obtains prior approval from the editor and publisher, can demonstrate that all necessary rights to the material have been obtained, and ensures proper attribution.
AI Use Policy for Reviewers
A manuscript received for review is a confidential document and the intellectual property of its authors; it must not be shared or discussed with third parties. Reviewers must not upload the submitted manuscript or any part of it to generative AI tools, as doing so may violate confidentiality and the authors’ intellectual property rights and, if the article contains personal data, may infringe data privacy rights.
This confidentiality requirement also applies to the reviewer’s report, as it may contain confidential information about the manuscript and/or the authors. For this reason, reviewers must not upload their review to AI tools, even for the purpose of correcting grammar or improving readability.
Peer review is fundamental to the scientific ecosystem, and the Editorial Board adheres to the highest standards of integrity in this process. Reviewing a scientific manuscript involves responsibilities that can only be assigned to humans. Reviewers are prohibited from using generative AI or other AI technologies when preparing their reviews. The peer review process requires critical thinking and independent expert evaluation beyond the capabilities of such technologies. Moreover, their use may lead to inaccurate, incomplete, or biased conclusions regarding the manuscript. The reviewer is responsible for the content of the review.
The Editorial Board’s policy states that authors are permitted to use generative AI and AI-assisted technologies in the process of writing a manuscript prior to submission, but only to improve readability and correct grammatical errors, with appropriate disclosure.
AI Use Policy for Editors
A submitted manuscript must be treated as a confidential document. Editors must not upload the manuscript or any part of it to generative AI tools, as this may violate confidentiality and the authors’ property rights, and, if the article contains personal data, may infringe data privacy rights.
This confidentiality requirement extends to all communications regarding the manuscript, including any decision letters or correspondence, as they may contain confidential information about the manuscript and/or the authors. For this reason, editors must not upload such communications to AI tools, even for the purpose of improving language or readability.
Managing the editorial evaluation of a scientific manuscript involves responsibilities that can only be assigned to humans. Editors are not permitted to use generative AI or related technologies to support the evaluation process or decision-making regarding manuscripts. Such activities require critical thinking and independent expert judgment beyond the capabilities of these technologies. Furthermore, their use may lead to incorrect, incomplete, or biased conclusions about submitted materials. The editor is responsible for the editorial process, the final decision, and communication of that decision to the authors.
The Editorial Board policy states that authors are allowed to use generative artificial intelligence and AI-supported technologies in the manuscript writing process prior to submission, but only to improve readability and correct grammatical errors, with appropriate disclosure.
If there is any suspicion of a violation of the AI use policy by an author or reviewer, the editor must report it to the Editorial Board.