Oct. 19, 2023, 4:23 a.m.

Brief news summary

Nearly a year after the release of OpenAI's chatbot ChatGPT, companies are racing to develop increasingly powerful generative artificial intelligence (AI) systems. These systems can produce text, images, videos, and computer programs in response to human prompts, making information more accessible and accelerating technology development. However, they also pose risks, such as flooding the internet with misinformation and deepfakes that can be indistinguishable from real people. Such content can undermine trust in individuals, politicians, the media, and institutions, including the integrity of scientific research.

Banning generative AI technology seems unrealistic, so governments are beginning to regulate it. The draft European Union AI Act, currently in the final stages of negotiation, emphasizes transparency, such as disclosing AI-generated content and summarizing the copyrighted data used to train AI systems. The US government, under President Joe Biden, favors self-regulation and has obtained voluntary commitments from leading tech companies to manage AI risks and protect Americans' rights and safety. China's Cyberspace Administration has announced AI regulations aimed at preventing the spread of misinformation and of content that challenges Chinese values. The UK government is organizing a summit in November to establish intergovernmental agreements on limiting AI risks. However, legal restrictions and self-regulation may not be fully effective, because AI technology advances so rapidly; continuous monitoring, backed by the right balance of expertise and independence, is crucial for keeping AI developments in check. Scientists should therefore play a central role in safeguarding against the impacts of generative AI.

They should take the lead in testing these systems, improving their safety and security, and evaluating their outputs. Ideally, independent specialized institutes should be established to support this work, as most scientists lack the resources to develop or evaluate generative AI tools on their own.

To ensure responsible use of generative AI, experts have developed "living guidelines" that prioritize accountability, transparency, and independent oversight. Human involvement and verification remain essential for judging the quality of AI-generated content, and researchers should disclose their use of generative AI in scientific publications. Developers and companies should make details of their training data and algorithms available to independent scientific organizations, and mechanisms should exist to report and address biased or inaccurate responses. Scientific journals, in turn, should acknowledge any use of generative AI in peer review.

The guidelines also propose establishing an independent scientific auditing body, working with scientists across disciplines, to evaluate generative AI tools and ensure their accuracy, safety, and ethical use. This body would develop quality standards, certification processes, and benchmarks for AI tools, addressing issues such as bias, hate speech, and truthfulness. Making this system effective, however, will require financial investment and careful management of the body's independence. To stay relevant in a rapidly evolving field, the guidelines must be kept up to date and regularly reviewed by a diverse committee of scientific, policy, and technical experts. Collaboration between the auditing body and a second committee would be crucial for mapping, measuring, and managing the risks associated with generative AI.
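To make the benchmarking idea concrete, here is a minimal, purely illustrative sketch of how an auditing body might score a generative AI tool against a small truthfulness benchmark. All names (`BenchmarkItem`, `evaluate`) and the exact-match scoring rule are assumptions for illustration, not any real audit protocol; a real evaluation would rely on expert review and more robust semantic comparison.

```python
# Hypothetical sketch of scoring a model against a truthfulness benchmark.
# Names and scoring rule are illustrative, not a real auditing API.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class BenchmarkItem:
    prompt: str      # question put to the AI system
    reference: str   # vetted correct answer


def evaluate(model: Callable[[str], str], items: List[BenchmarkItem]) -> float:
    """Return the fraction of prompts the model answers correctly."""
    correct = 0
    for item in items:
        answer = model(item.prompt)
        # Naive exact-match scoring; real audits would need expert or
        # semantic comparison rather than string equality.
        if answer.strip().lower() == item.reference.strip().lower():
            correct += 1
    return correct / len(items)


# Toy stand-in "model" that always answers "paris".
items = [
    BenchmarkItem("Capital of France?", "Paris"),
    BenchmarkItem("Capital of Spain?", "Madrid"),
]
score = evaluate(lambda prompt: "paris", items)
print(score)  # 0.5
```

Even this toy harness shows why independence matters: whoever curates the benchmark items and the scoring rule effectively defines what "accurate" means for the certified tool.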
Funding for the auditing body, estimated at a minimum of $1 billion, could come from public funding, research institutes, and tech companies. Although the body would initially operate in an advisory capacity, the hope is that the living guidelines will inspire better legislation on generative AI. The guidelines also address scientific fraud, copyright, AI literacy, and the trade-off between accessibility and intellectual property. It is urgent for the scientific community to take a central role in shaping responsible generative AI by establishing the necessary bodies and securing adequate funding.

Reference: Quach, K. 'Thanks to generative AI, catching fraud science is going to be this much harder'. The Register (11 March 2023).

