AI-Generated Content in Scientific Journals Raises Concerns
Brief news summary
The academic publishing industry is grappling with AI-generated content in scientific journals. While the technology has benefits, it has raised concerns about the reliability of peer review: AI-generated anomalies and exaggerated images hinder the evaluation process. Respected journals are actively addressing these problems, but easy access to AI is accelerating the output of "paper mills" that produce substandard, plagiarized, and fake papers, currently estimated at about 2% of published papers and growing rapidly. The situation exposes an academic culture that prioritizes quantity over quality, raising concerns about errors, fabrications, and unintentional plagiarism facilitated by AI tools like ChatGPT. In response, Wiley, a prominent academic publisher, has retracted over 11,300 papers and implemented an AI-enabled "paper mill detection service." The misuse of AI and the systemic issues in academic publishing demand significant attention and action.

Recent instances of AI-generated text and images infiltrating scientific journals have highlighted the challenges that the rise of artificial intelligence poses for the academic publishing industry. Experts acknowledge that AI programs like ChatGPT can be useful for writing and translating papers when used responsibly, but there have been cases where AI-generated content bypassed proper review processes and made it into publications. Examples include an AI-generated graphic of a rat with exaggerated genitalia and another depicting legs with unusual multi-jointed bones resembling hands. ChatGPT, a chatbot launched in November 2022, is believed to have significantly changed how researchers present their findings. Although such embarrassing examples are rare and unlikely to have slipped through the rigorous review processes of reputable journals, the use of AI in scientific publishing remains a concern.
Andrew Gray, a librarian at University College London, found that more than 60,000 papers published in 2023 likely involved the use of AI, and that number is expected to grow in 2024.
Additionally, AI has further facilitated the proliferation of "paper mills" that produce poor-quality, plagiarized, or fake papers, making such fraudulent practices harder to detect. The pressure on academics to publish more fosters a culture in which quantity is prioritized over quality. While AI can benefit researchers in areas such as translation, there are concerns that errors, fabrications, and inadvertent plagiarism introduced by AI may undermine public trust in science. In one recent example, a researcher discovered an AI-generated, rephrased version of their own study published in an academic journal. Because the repercussions of AI misuse are far-reaching, academic publishers are beginning to deploy AI-powered tools to detect and address such issues.