Digital experts are raising concerns about Google's development of an artificial intelligence tool meant to generate news articles, fearing that such tools could inadvertently spread propaganda or compromise the safety of sources.

According to The New York Times, Google is testing a product, internally known as Genesis, that uses artificial intelligence (AI) to produce news content. Drawing on information about current events, Genesis can generate news articles. Google has already pitched the product to news organizations including The Washington Post, News Corp and The New York Times.

The launch of the AI chatbot ChatGPT has sparked debate about the role of AI in the news industry. AI tools can assist reporters by speeding up data analysis and the extraction of information from PDF files, a process known as scraping. AI can also aid in fact-checking sources. However, concerns about the potential spread of propaganda or the loss of human nuance in reporting outweigh these advantages, and the concerns extend beyond Google's Genesis tool to the broader use of AI in newsgathering.

If AI-generated articles are not properly verified, they could unwittingly include disinformation or misinformation, notes John Scott-Railton, a disinformation researcher at Citizen Lab in Toronto. Scott-Railton points out that non-paywalled content, which AI systems favor for scraping, is especially susceptible to disinformation and propaganda. Removing humans from the loop, he says, does not make it easier to identify disinformation.
Paul M. Barrett, deputy director of New York University's Stern Center for Business and Human Rights, agrees that artificial intelligence can amplify the spread of falsehoods. He believes the supply of misleading content will increase.

In an emailed response to VOA, Google said it is exploring ideas for AI-enabled tools that could help journalists, particularly smaller publishers, but that the work is still in its early stages. The company emphasized that these tools are not intended to replace journalists' essential role in reporting, creating and fact-checking articles.

Credibility is another crucial consideration for news outlets weighing AI. Outlets are already grappling with a credibility crisis: half of Americans believe national news organizations intentionally mislead or misinform their audiences, according to a February report from Gallup and the Knight Foundation.

Scott-Railton, a former Google Ideas fellow, is puzzled by the notion that introducing a less credible tool, with a weaker command of facts, can solve that problem. Reports show that AI chatbots frequently produce incorrect or fabricated responses, a phenomenon AI researchers call "hallucination."

Digital experts are also concerned about the security risks of using AI tools to generate news articles. Anonymous sources, for example, could face retaliation if their identities were exposed. Barrett advises caution before disclosing confidential source identities, or any other information journalists want kept private, to AI systems.

Scott-Railton believes AI likely has a future in many industries but stresses the importance of not rushing the process, especially in the news industry. He fears that valuable reputations and factual accuracy could be compromised as lessons are learned from this case.