Oct. 19, 2023, 4:23 a.m.

Brief news summary


Nearly a year after the release of OpenAI's chatbot ChatGPT, companies are racing to develop increasingly powerful generative artificial intelligence (AI) systems. These systems can produce text, images, videos, and computer programs in response to human prompts, making information more accessible and accelerating technology development. However, they also pose risks, such as flooding the internet with misinformation and with deepfakes that can be indistinguishable from real people. These risks can undermine trust in individuals, politicians, the media, and institutions, including the integrity of scientific research.

Banning generative AI technology seems unrealistic, so governments are beginning to regulate it. The draft European Union AI Act, currently in the final stages of negotiation, emphasizes transparency, such as disclosing AI-generated content and summarizing the copyrighted data used to train AI systems. The US government, under President Joe Biden, favors self-regulation and has obtained voluntary commitments from leading tech companies to manage AI risks and protect Americans' rights and safety. China's Cyberspace Administration has announced AI regulations to prevent the spread of misinformation and of content that challenges Chinese values. The UK government is organizing a summit in November to establish intergovernmental agreements on limiting AI risks.

However, legal restrictions and self-regulation may not be fully effective, because AI technology advances rapidly. Continuous monitoring, and a careful balance of expertise and independence, are crucial for keeping AI developments in check.
Therefore, scientists should play a central role in overseeing the impact of generative AI.

They should take the lead in testing these systems, improving their safety and security, and evaluating them. Ideally, independent specialized institutes should be established to support this work, as most scientists lack the resources to develop or evaluate generative AI tools on their own.

To ensure responsible use of generative AI, experts have developed "living guidelines" for its use. These guidelines prioritize accountability, transparency, and independent oversight. Human involvement and verification are essential for evaluating the quality of AI-generated content, and researchers should disclose their use of generative AI in scientific publications. Developers and companies should make details of training data and algorithms available to independent scientific organizations, and there should be mechanisms for reporting and addressing biased or inaccurate responses. Scientific journals should acknowledge any use of generative AI in peer review.

The guidelines also propose establishing an independent scientific auditing body, working with scientists from across disciplines, to evaluate generative AI tools and ensure their accuracy, safety, and ethical use. This body would develop quality standards, certification processes, and benchmarks for AI tools, addressing issues such as bias, hate speech, and truthfulness. However, financial investment and careful management of the body's independence are needed to make this system effective.

To stay relevant in the rapidly evolving field of generative AI, the guidelines must be kept up to date and regularly reviewed by a diverse committee of scientific, policy, and technical experts. Collaboration between the auditing body and a second committee would be crucial for mapping, measuring, and managing the risks associated with generative AI.
Funding for the auditing body, estimated at a minimum of $1 billion, could come from public funding, research institutes, and tech companies. Although the body would initially operate in an advisory capacity, the hope is that the living guidelines will inspire better legislation on generative AI. The guidelines also address issues such as scientific fraud, copyright, AI literacy, and the trade-off between accessibility and intellectual property. It is urgent for the scientific community to take a central role in shaping responsible generative AI by establishing the necessary bodies and securing adequate funding.

Reference: Quach, K. "Thanks to generative AI, catching fraud science is going to be this much harder." The Register (11 March 2023).

