Discussion Boards



Tagged: 

  • Peer Review Week 2023 Q4: AI-Generated Peer Review Reports

    Posted by Sarah Black on September 27, 2023 at 1:23 am

    One area where generative AI tools have been used, or banned, is peer review and the preparation of peer review reports. What do you think? How would you feel as an author receiving an AI-generated peer review report?

    Share your thoughts here to be included in a future blog post or article!

  • Xiaohui Li

    Member
    October 13, 2023 at 3:31 am

    For the manuscripts I contributed to recently, we have not received any easy-to-spot AI-generated review reports. However, when reviewing manuscripts and reading the reports from other reviewers, I have encountered AI-generated reports several times. One was so obvious that it even had the remark “generated by ChatGPT” embedded in the text, likely because ChatGPT was used through a third-party application. In all these cases, the comments were very general and vague, with no clear actions suggested. For example, if I ask ChatGPT to write a review report for a “plant biology manuscript”, it will produce something like “Conclusions: The conclusions drawn are largely supported by the data, but the authors should avoid overgeneralizing their findings, especially considering the limitations mentioned in the discussion.” Yet this is exactly what I have seen in real review reports recently.

    If reviewers are simply using AI to finish this “task” without producing meaningful input tailored to the specific manuscript, it will be a big problem for several reasons:

    1. It hinders the generation of diverse opinions. Everything becomes the AI’s “opinion”.

    2. The process of peer review becomes meaningless. The authors, or anyone, could use AI to get the same comments.

    3. There are usually no clear action items in AI-generated reports, which does not help the authors improve their manuscript.

    4. It creates the feeling that the authors are now working for AI, and usually the same AI, instead of working with the editor and reviewers to improve the quality of the science.

    5. People often feed only the abstract to the AI, due to character limits on the input. This again leads to vague review reports.

    I feel there is a need for a community-wide discussion about the use of AI in peer review: what is acceptable, and what should the editor/journal do when a reviewer’s report relies heavily on AI?

