
Digital experts are raising concerns about Google's development of an artificial intelligence tool meant to generate news articles, fearing that such tools could inadvertently spread propaganda or put the safety of sources at risk.

According to The New York Times, Google is testing a product internally known as Genesis, which uses artificial intelligence (AI) to produce news content. Drawing on information about current events, Genesis can generate news articles. Google has already pitched the product to news organizations including The Washington Post, News Corp and The New York Times.

The launch of the AI chatbot ChatGPT has sparked debate about the role of AI in the news industry. AI tools can help reporters by speeding up data analysis and the extraction of information from PDF files, a process known as scraping, and they can assist with fact-checking sources. But digital experts worry that the potential spread of propaganda or the loss of human nuance in reporting outweighs these advantages. Those concerns extend beyond Google's Genesis tool to the broader use of AI in news gathering.

If AI-generated articles are not properly vetted, they could unwittingly include disinformation or misinformation, said John Scott-Railton, a disinformation researcher at Citizen Lab in Toronto. Scott-Railton noted that non-paywalled content, which AI systems commonly scrape, is especially susceptible to disinformation and propaganda, and that removing humans from the loop does not make disinformation any easier to identify.
Paul M. Barrett, deputy director of New York University's Stern Center for Business and Human Rights, agrees that artificial intelligence can amplify the spread of falsehoods, saying the supply of misleading content is likely to grow.

Google told VOA by email that it is exploring ideas for AI-enabled tools to help journalists, particularly smaller publishers, but that the work is still at an early stage. The company emphasized that these tools are not intended to replace the essential role journalists play in reporting, creating and fact-checking articles.

The implications for news outlets' credibility are another important consideration. News outlets are already grappling with a credibility crisis: half of Americans believe national news organizations intentionally mislead or misinform their audiences, according to a February report from Gallup and the Knight Foundation. Scott-Railton, a former Google Ideas fellow, questions how introducing a less credible tool with a weaker command of the facts could solve that problem. Reports show that AI chatbots frequently produce incorrect or fabricated answers, which AI researchers call "hallucinations."

Digital experts are also concerned about the security risks of using AI tools to generate news articles. Anonymous sources, for example, could face retaliation if their identities are exposed. Barrett advises caution before disclosing the identities of confidential sources, or any other information journalists want to keep private, to AI systems.

Scott-Railton believes AI likely has a future in many industries, but he stresses the importance of not rushing the process, especially in the news business. He fears that hard-won reputations and factual accuracy could be compromised while lessons are learned from this case.
