July 30, 2023, 8:10 a.m.

When OpenAI released ChatGPT, it initially appeared to me as an all-knowing entity. Trained on vast datasets representing the collective sum of human knowledge and interests available online, this statistical prediction machine seemed to hold the potential to serve as a singular source of truth. Walter Cronkite, with his nightly sign-off of "That's the way it is," had once filled that role for the American public and was widely trusted. In our current era of polarization, misinformation, and dwindling trust in society, a reliable source of truth would be invaluable.

However, those hopes were quickly dashed as the technology's flaws became evident. One major drawback was its tendency to generate answers that were outright fabrications. It soon became apparent that, while the output appeared impressive, it was based solely on patterns in the training data rather than on any objective truth.

That wasn't the only problem. Soon after ChatGPT's release, a flood of other chatbots emerged from tech giants and startups including Microsoft, Google, Tencent, Baidu, Snap, SK Telecom, Alibaba, Databricks, Anthropic, Stability AI, and Meta. One notable example was Microsoft's Bing chatbot, whose "Sydney" persona made headlines. What's more, these various chatbots yielded drastically different results when presented with the same query. The discrepancies stem from differences in models, training data, and the guiding principles supplied to the models. Those principles are meant to keep the systems from perpetuating biases or generating disinformation, hate speech, and other toxic content. Yet it became clear after ChatGPT's launch that not everyone agreed with the guiding principles set by OpenAI. Conservatives, for instance, criticized the bot's answers for displaying a distinct liberal bias.
In response, Elon Musk proclaimed his intention to build a chatbot less constrained and less politically correct than ChatGPT. With his recent announcement of xAI, it is highly likely that he will follow through.

Anthropic took a different approach, implementing a "constitution" for its Claude chatbots (and now Claude 2). As reported by VentureBeat, this constitution spells out a set of values and principles that Claude must adhere to when interacting with users, emphasizing helpfulness, harmlessness, and honesty. According to a company blog post, Claude's constitution draws inspiration from the U.N. Declaration of Human Rights while also incorporating non-Western perspectives. Most people would likely find these principles agreeable.

Meta recently introduced its LLaMA 2 large language model (LLM), which stands out not only for its capabilities but also for being open source, meaning anyone can freely download and use it for their own purposes. Other open-source generative AI models with minimal restrictions are also available, and with such models the idea of guiding principles and constitutions becomes somewhat quaint. Moreover, troubling research reported by The New York Times has revealed a prompting technique that effectively undermines the guardrails of all models, open-source or closed. Fortune reported a near 100% success rate for this method against Vicuna, an open-source chatbot built on Meta's original LLaMA. Consequently, anyone seeking detailed instructions for creating bioweapons or committing consumer fraud could obtain them from various LLMs. While developers may be able to patch some of these vulnerabilities, researchers maintain that preventing all such attacks is currently impossible.
Beyond the safety concerns this research raises, the various models increasingly produce disparate outcomes, even when presented with the same query.

Much like our fragmented social media and news landscape, this fragmented AI ecosystem threatens truth and undermines trust. We are heading toward a future inundated with chatbots that will only add to the noise and chaos.

The fragmentation of truth extends beyond text-based information into the rapidly evolving realm of digital human representations. Today, LLM-powered chatbots convey information via text. As these models become multimodal, able to generate images, video, and audio, their applications and influence will undoubtedly expand. One potential application of multimodal AI can be seen in "digital humans," synthetic creations that convincingly replicate human appearance. A recent article in Harvard Business Review outlined the technologies that have enabled digital humans, combining computer graphics with artificial intelligence. These digital beings feature high-end realism that closely mimics real people. Kuk Jiang, co-founder of the Series D startup ZEGOCLOUD, describes digital humans as highly detailed, realistic models that surpass the limits of traditional realism and sophistication, and highlights their ability to interact with real humans in natural, intuitive ways, efficiently assisting in virtual customer service, healthcare, and remote education settings.

Another emerging use case is the adoption of digital humans as newscasters. Early deployments have already begun: Kuwait News has introduced a digital newscaster named "Fedha," which has quickly gained popularity. Fedha opens its interactions by asking viewers for their preferred news topics, raising the possibility of personalized news feeds. China's People's Daily is also exploring AI-powered newscasters.
Meanwhile, the startup Channel 1 plans to leverage generative AI to establish a new type of video news channel, aptly described by The Hollywood Reporter as an AI-generated CNN. According to reports, Channel 1 will debut this year with a 30-minute weekly show scripted using LLMs, with the ultimate aim of producing news broadcasts tailored to individual users. The article mentions both liberal and conservative hosts, allowing news to be delivered through specific ideological lenses.

Scott Zabielski, co-founder of Channel 1, acknowledges that today's digital human newscasters do not yet appear indistinguishable from real humans. However, he predicts the technology will advance to the point where differentiating between AI and human news presenters becomes increasingly difficult, perhaps impossible. This raises concerns highlighted in a study published in Scientific American, which found that synthetic faces are not only highly realistic but are judged more trustworthy than real faces. Hany Farid, co-author of the study and a professor at the University of California, Berkeley, worries that such faces could be highly effective when wielded for malicious purposes.

While there is no indication that Channel 1 intends to misuse the persuasiveness of personalized news videos and synthetic faces, technological advances may enable unscrupulous actors to exploit these tools. As a society, we already grapple with the fear that what we read could be disinformation, that phone calls could employ cloned voices, and that the images we encounter may be manipulated. In the near future, even video content, including purportedly credible news broadcasts, may contain subtly crafted messaging designed to manipulate opinions rather than inform or educate. The assault on truth and trust has been ongoing, and this development suggests the trend will persist. We are a far cry from the days of Walter Cronkite delivering the evening news.
Gary Grossman is the SVP of the technology practice at Edelman and the global lead of the Edelman AI Center of Excellence.

