
Academic researchers are increasingly polarised over the use of artificial intelligence in scholarly peer review, with adoption rising alongside persistent concerns about research integrity.

IOP Publishing’s global survey reveals that 41 per cent of physical science reviewers now anticipate a positive impact of AI on peer review, up 12 percentage points from the 2024 survey. Conversely, 37 per cent expect negative consequences, whilst the proportion taking a neutral position fell from 36 per cent to 22 per cent.

The research indicates that 32 per cent of academics currently use AI tools during the review process, despite IOP Publishing prohibiting the use of generative AI in peer review on ethical, legal, and scholarly grounds.

Significant resistance emerges when researchers consider AI being applied to their own work: 57 per cent say they would be dissatisfied if AI systems wrote peer reviews of their manuscripts, and 42 per cent object even to reviews augmented by AI.

Current AI use varies substantially among reviewers: 21 per cent employ tools for grammar editing and text improvement, while 13 per cent use AI to summarise articles, raising concerns about confidentiality and data privacy. Two per cent admit to uploading complete manuscripts to chatbots to generate reviews automatically.

Demographic divisions characterise adoption patterns, with women expressing less enthusiasm than men, and junior researchers showing greater optimism than their more sceptical senior colleagues.

Laura Feetham-Walker, Reviewer Engagement Manager at IOP Publishing, emphasised the necessity for enhanced community standards and transparency frameworks surrounding AI implementation in scholarly publishing.

“These findings highlight the need for clearer community standards and transparency around the use of generative AI in scholarly publishing,” Feetham-Walker stated.

The publisher proposes building AI tools directly into its peer review systems to support reviewers whilst maintaining security and research integrity, as an alternative to the current practice of uploading manuscripts to third-party platforms.

The survey demonstrates growing technological engagement within academic communities alongside persistent concerns about maintaining scholarly rigour and research confidentiality in an increasingly AI-integrated publishing environment.
