Brief news summary
Nearly a year after the release of OpenAI's chatbot ChatGPT, companies are racing to develop increasingly powerful generative artificial intelligence (AI) systems. These systems can produce text, images, videos, and computer programs in response to human prompts, making information more accessible and accelerating technology development. However, they also pose risks, such as flooding the internet with misinformation and deepfakes that can be indistinguishable from real people. These risks can undermine trust in individuals, politicians, the media, and institutions, including the integrity of scientific research.

Banning generative AI technology seems unrealistic, so governments are beginning to regulate it. The draft European Union AI Act, currently in the final stages of negotiation, emphasizes transparency, such as disclosing AI-generated content and summarizing copyrighted data used to train AI systems. The US government, under President Joe Biden, aims for self-regulation and has obtained voluntary commitments from leading tech companies to manage AI risks and protect Americans' rights and safety. China's Cyberspace Administration has also announced AI regulations to prevent the spread of misinformation and of content that challenges Chinese values. The UK government is organizing a summit in November to establish intergovernmental agreements on limiting AI risks. However, legal restrictions and self-regulation may not be fully effective, as AI technology advances rapidly. Continuous monitoring, along with a careful balance of expertise and independence, is crucial to keeping AI developments in check.
Therefore, scientists should play a central role in overseeing the impact of generative AI.
They should take the lead in testing these systems, improving their safety and security, and evaluating them. Ideally, independent specialized institutes should be established to support this work, as most scientists lack the resources to develop or evaluate generative AI tools on their own.

To ensure responsible use of generative AI, experts have developed "living guidelines" for its use. These guidelines prioritize accountability, transparency, and independent oversight. Human involvement and verification are essential for evaluating the quality of AI-generated content, and researchers should disclose their use of generative AI in scientific publications. Generative AI developers and companies should make details of training data and algorithms available to independent scientific organizations, and there should be mechanisms for reporting and addressing biased or inaccurate responses. Scientific journals should acknowledge any use of generative AI in peer review.

The guidelines also propose establishing an independent scientific auditing body, working with scientists from interdisciplinary fields, to evaluate generative AI tools and ensure their accuracy, safety, and ethical use. This body would develop quality standards, certification processes, and benchmarks for AI tools, addressing issues such as bias, hate speech, and truthfulness. However, financial investment and careful management of the body's independence are necessary to make this system effective.

To remain relevant in the rapidly evolving field of generative AI, the guidelines must be kept up to date and regularly reviewed by a diverse committee of scientific, policy, and technical experts. Collaboration between the auditing body and a second committee would be crucial for mapping, measuring, and managing the risks associated with generative AI.
Funding for the auditing body, estimated at a minimum of $1 billion, will be necessary and could come from public funding, research institutes, and tech companies. Although the auditing body would initially operate in an advisory capacity, the hope is that the living guidelines will inspire better legislation on generative AI. The guidelines also address issues such as scientific fraud, copyright, AI literacy, and the trade-off between accessibility and intellectual property. It is urgent for the scientific community to take a central role in shaping responsible generative AI by establishing the necessary bodies and securing adequate funding.

Reference: Quach, K. 'Thanks to generative AI, catching fraud science is going to be this much harder'. The Register (11 March 2023).